Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread Joshua Harlow

SURO wrote:

Josh,
Please find my reply inline.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 6:37 PM, Joshua Harlow wrote:



SURO wrote:

Hi all,
Please review and provide feedback on the following design proposal for
implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time Magnum-conductor (Mcon) receives a
container-operation request from Magnum-API (Mapi), it will do the
initial validation and housekeeping, and then pick a thread from the
executor_threadpool to execute the rest of the operation. Thus Mcon
will return from the RPC request context much faster without blocking
Mapi. If the executor_threadpool has no free threads, Mcon will execute
the operation as it does today, i.e. synchronously - this will be the
rate-limiting mechanism, relaying the feedback of exhaustion.
[Phase0]
How often we hit this scenario may indicate to the operator that more
Mcon workers should be created.
3. Blocking class of operations - There will be a class of operations,
which can not be made async, as they are supposed to return
result/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for the NonBlocking class of operations -
there is a possible race condition around a create followed by a
start/delete of a container, as things would happen in parallel. To
solve this, we will maintain a map of container to executing thread
for the current execution. If we find a request for an operation on a
container-in-execution, we will block until the thread completes its
execution. [Phase0]
This mechanism can be further refined to achieve more asynchronous
behavior. [Phase2]
The approach above puts a prerequisite that operations for a given
container on a given Bay would go to the same Magnum-conductor instance.
[Phase0]
5. The hand-off between Mcon and a thread from executor_threadpool can
be reflected through new states on the 'container' object. These states
can be helpful to recover/audit, in case of Mcon restart. [Phase1]
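
A rough sketch of points 2 and 4 above (illustrative only, not the
proposed Magnum implementation; all names are made up): a bounded pool
with a synchronous fallback when no worker is free, plus a per-container
lock map that serializes operations on the same container while letting
different containers proceed in parallel.

```python
import threading
from collections import defaultdict

class ConductorExecutor:
    """Sketch of the executor_threadpool + container-in-execution map."""

    def __init__(self, pool_size=4):
        self._slots = threading.Semaphore(pool_size)   # free worker slots
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)      # container -> lock

    def submit(self, container_id, operation, *args):
        with self._guard:
            lock = self._locks[container_id]
        if self._slots.acquire(blocking=False):
            # A free thread is available: run the operation asynchronously.
            def _run():
                try:
                    # Serialize ops on the same container (point 4 guard).
                    with lock:
                        operation(*args)
                finally:
                    self._slots.release()
            worker = threading.Thread(target=_run)
            worker.start()
            return worker
        # Pool exhausted: fall back to synchronous execution, which is the
        # rate-limiting feedback described in point 2.
        with lock:
            operation(*args)
        return None
```

A caller would submit a create followed by a start for the same
container and both would execute, in some order, without overlapping.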

Other considerations -
1. Using eventlet.greenthread instead of real threads => This approach
would require further refactoring of the execution code to embed yield
logic; otherwise a single greenthread would block others from making
progress. Given that we will extend the mechanism to multiple COEs, and
to keep the approach straightforward to begin with, we will use
'threading.Thread' instead of 'eventlet.greenthread'.



Also unsure about the above; I don't quite see how greenthread
usage requires more yield logic (I'm assuming you mean the yield
statement here)? Btw, if magnum is running with everything monkey
patched (which it seems like
https://github.com/openstack/magnum/blob/master/magnum/common/rpc_service.py#L33
does) then magnum's usage of 'threading.Thread' is an
'eventlet.greenthread' underneath the covers, just fyi.


SURO> Let's consider this -

    function A() {
        block B;  // validation
        block C;  // blocking op
    }

Now, if we run C in a greenthread as it is, would it not block the entire
thread that runs through all the greenthreads? I assumed it would, and
that's why we would have to incorporate finer-grained yields into C to
leverage greenthreads. If the answer is no, then we can use greenthreads.
I will validate which version of threading.Thread is getting used.


Unsure how to answer this one.

If all things are monkey patched, then any time a blocking operation
(i/o, lock acquisition...) is triggered, the internals of eventlet go
through a bunch of jumping around to switch to another green thread
(http://eventlet.net/doc/hubs.html). Once you start partially using
greenthreads and mixing in real threads, you have to start reasoning
about yielding in certain places (and at that point you might as well
go to py3.4+, since it has syntax made just for this kind of thinking).
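
That py3.4+ direction can be sketched with the stdlib (modern asyncio
syntax shown here, not Magnum code): the yield points are explicit
`await`s in the source rather than implicit switches hidden inside
monkey-patched blocking calls.

```python
import asyncio

async def container_op(name, delay):
    # Each `await` is a visible point where this coroutine lets others
    # run, unlike eventlet where the switch happens implicitly inside
    # monkey-patched blocking calls.
    await asyncio.sleep(delay)
    return name

async def main():
    # Both ops run concurrently on a single thread; total wall time is
    # roughly 0.05s, not 0.1s.
    return await asyncio.gather(container_op("create", 0.05),
                                container_op("start", 0.05))

results = asyncio.run(main())
print(results)  # ['create', 'start']
```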


Pointer for the thread monkey patching btw:

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L346

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L212

Easy way to see this:

>>> import eventlet
>>> eventlet.monkey_patch()
>>> import thread
>>> thread.start_new_thread.__module__
'eventlet.green.thread'
>>> thread.allocate_lock.__module__
'eventlet.green.thread'



In that case, keeping the code using threading.Thread is portable, as it
would work as desired even if we remove monkey_patching, right?


Yes, use `threading.Thread` (if you can) so that maybe magnum could switch 
off monkey patching someday. Although typically, unless you are already 
testing with it turned off in unit/functional tests, it wouldn't be an 
easy flip that would 'just work' (especially since afaik magnum is using 
some oslo libraries which only work under greenthreads/eventlet).




Refs:-
[1] -
https://blueprints.launchpad.net/magnum/+spec/async-container-operations




Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-17 Thread Andrea Frittoli
Thanks for the upgrade, and for all the effort you folks put into keeping
our gerrit close to upstream!

One thing that I find inconvenient in the new UI is the size of the middle
(test results) column: it's too narrow, no matter how large my browser
window is, or what browser I use - which causes the name of some of the
jobs to wrap to a second line, making it really hard to read test results.
At least that's my experience on tempest reviews, where we have a lot of
jobs, and rather long names - see [0] for instance.

Is there any chance via configuration to make that column slightly wider?

thank you!

andrea
[0] https://review.openstack.org/#/c/254274/



On Thu, Dec 17, 2015 at 2:52 AM David Pursehouse 
wrote:

> The "Copy to clipboard" icons are not working on Chrome/Ubuntu.
>
> This was already fixed upstream.  I've cherry-picked it here:
>
> https://review.openstack.org/#/c/258753/
>
>
> On Thu, Dec 17, 2015 at 6:22 AM Zaro  wrote:
>
>> On Tue, Dec 1, 2015 at 6:38 PM, Spencer Krum 
>> wrote:
>> > Hi All,
>> >
>> > The infra team will be taking gerrit offline for an upgrade on December
>> > 16th. We
>> > will start the operation at 17:00 UTC and will continue until about
>> > 21:00 UTC.
>> >
>> > This outage is to upgrade Gerrit to version 2.11. The IP address of
>> > Gerrit will not be changing.
>> >
>> > There is a thread beginning here:
>> >
>> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
>> > which covers what to expect from the new software.
>> >
>> > If you have questions about the Gerrit outage you are welcome to post a
>> > reply to this thread or find the infra team in the #openstack-infra irc
>> > channel on freenode. If you have questions about the version of Gerrit
>> > we are upgrading to please post a reply to the email thread linked
>> > above, or again you are welcome to ask in the #openstack-infra channel.
>> >
>>
>>
>> Thanks to everyone for their patience while we upgraded to Gerrit
>> 2.11.  I'm happy to announce that we were able to successfully
>> complete this task at around 21:00 UTC.  You may hack away once more.
>>
>> If you encounter any problems, please let us know here or in
>> #openstack-infra on Freenode.
>>
>> Enjoy,
>> -Khai
>>
>> ___
>> OpenStack-Infra mailing list
>> openstack-in...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] FW: [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread Eugene Nikanorov
I'm sorry to say that, but the new front page design is horrible and
totally confusing.

I hope it'll change soon in the new release.

E.


On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) <
ifat.a...@alcatel-lucent.com> wrote:

> Hi,
>
> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
>
> Ifat.
>
>
> -Original Message-
> From: Spencer Krum [mailto:n...@spencerkrum.com]
> Sent: Monday, December 14, 2015 9:53 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
>
> This is a gentle reminder that the downtime will be this Wednesday
> starting at 17:00 UTC.
>
> Thank you for your patience,
> Spencer
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
>


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Evgeniy L
Hi Andrew,

It doesn't look fair at all to say that we use Postgres-specific features
for no reason, or, as you said, "just because we want".
For example, we used Arrays, which fit our roles usage pretty well and
improved readability and performance.
Or try to fit something like [1] into a relational schema; I don't think
we would get a good result.

P.S. Sending a link to a holy-war topic (schema vs schemaless) won't help
solve our specific problem of downgrading Postgres vs keeping the old
(new) version.

[1]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
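
As an illustration of the trade-off above (a sketch using the stdlib's
sqlite3, not Nailgun code): a list of roles can live in a JSON document
inside a single column and still be queried in SQL. Postgres offers the
same idea natively via json/jsonb columns and operators, and MySQL 5.7+
via its JSON type.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, roles TEXT)")
conn.execute("INSERT INTO nodes (roles) VALUES (?)",
             (json.dumps(["controller", "ceph-osd"]),))

# Pull the first role out of the JSON document inside SQL, using
# SQLite's JSON1 json_extract() function.
row = conn.execute("SELECT json_extract(roles, '$[0]') FROM nodes").fetchone()
print(row[0])
```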


On Tue, Dec 15, 2015 at 10:53 PM, Andrew Maksimov 
wrote:

> +1 to Igor's suggestion to downgrade Postgres to 9.2. Our users don't work
> directly with Postgres, so there is no deprecation of Fuel features.
> Maintaining our own custom Postgres package just because we want a "JSON
> column" is not a rational decision. Come on, Fuel is not a billing system
> with thousands of tables and special requirements for the database. At the
> very least, we should try to keep it simple and avoid unnecessary
> complication.
>
> PS
>  BTW, some people suggest avoiding json columns entirely; read [1]
> PostgreSQL anti-patterns: unnecessary json columns.
>
> [1] -
> http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
>
> Regards,
> Andrey Maximov
> Fuel Project Manager
>
>
> On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin 
> wrote:
>
>> Folks
>>
>> Let me add my 2c here.
>>
>> I am for using Postgres 9.3. Here is an additional argument to the ones
>> provided by Artem, Aleksandra and others.
>>
>> Fuel is sometimes highly customized by our users for their specific
>> needs. It has been on Postgres 9.3 for a while, and they may well have
>> gotten used to it and assumed by default that this would not change. Some
>> of the features they are developing for their own purposes may therefore
>> depend on Postgres 9.3, and we will never be able to tell the fraction of
>> such use cases. Moreover, downgrading the DBMS version of Fuel should
>> inevitably be considered a 'deprecation' of some features our software
>> suite provides to our users. This means that we MUST provide our users
>> with a warning and a deprecation period to allow them to adjust to these
>> changes. Obviously, an accidental change of Postgres version does not
>> follow such a policy in any way. So I see no way other than getting back
>> to Postgres 9.3.
>>
>>
>> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hey Mike,
>>>
>>> Thanks for your input.
>>>
>>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>>
>>> It still requires fixing the code, i.e. changing ARRAY-specific queries
>>> to JSON ones around the code. ;)
>>>
>>> > there's already a mostly finished PR for SQLAlchemy support in the
>>> queue.
>>>
>>> Does it mean SQLAlchemy will have one unified interface to make JSON
>>> queries? So we can use different backends if necessary?
>>>
>>> Thanks,
>>> - Igor
>>>
>>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
>>> >
>>> >
>>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>>> >> Hey Julien,
>>> >>
>>> >>>
>>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>>> >>
>>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>>> >> Galera now), while here we're talking about DB backend for Fuel
>>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>>> >> PostgreSQL now.
>>> >>
>>> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
>>> >>
>>> >> Unfortunately we are already tied to PostgreSQL. For instance, we use
>>> >> PostgreSQL's ARRAY column type. Introducing a JSON column is one more
>>> >> way to tighten the knots harder.
>>> >
>>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>> > MySQL has JSON as well now:
>>> > https://dev.mysql.com/doc/refman/5.7/en/json.html
>>> >
>>> > there's already a mostly finished PR for SQLAlchemy support in the
>>> queue.
>>> >
>>> >
>>> >
>>> >>
>>> >> - Igor
>>> >>
>>> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
>>> wrote:
>>> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>> >>>
>>>  The things I want to notice are:
>>> 
>>>  * Currently we aren't tied up to PostgreSQL 9.3.
>>>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using
>>> a
>>>  set of JSON operations.
>>> >>>
>>> >>> I'm curious and have just a small side question: does that mean Fuel
>>> is
>>> >>> only going to be able to run with PostgreSQL?
>>> >>>
>>> >>> I also see
>>> >>>
>>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>>> ,
>>> >>> maybe it's related?
>>> >>>
>>> >>> Thanks!
>>> >>>
>>> >>> --
>>> >>> Julien Danjou
>>> >>> // Free Software hacker
>>> >>> // 

Re: [openstack-dev] FW: [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread Vikram Choudhary
+1 for getting the old view!
On Dec 17, 2015 2:13 PM, "Eugene Nikanorov"  wrote:

> I'm sorry to say that, but the new front page design is horrible and
> totally confusing.
>
> I hope it'll change soon in the new release.
>
> E.
>
>
> On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) <
> ifat.a...@alcatel-lucent.com> wrote:
>
>> Hi,
>>
>> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
>>
>> Ifat.
>>
>>
>> -Original Message-
>> From: Spencer Krum [mailto:n...@spencerkrum.com]
>> Sent: Monday, December 14, 2015 9:53 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
>>
>> This is a gentle reminder that the downtime will be this Wednesday
>> starting at 17:00 UTC.
>>
>> Thank you for your patience,
>> Spencer
>>
>> --
>>   Spencer Krum
>>   n...@spencerkrum.com
>>
>>


Re: [openstack-dev] FW: [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread Mohan Kumar
Eugene:  +1 , Old Gerrit page was better than new one . Please fix

On Thu, Dec 17, 2015 at 2:11 PM, Eugene Nikanorov 
wrote:

> I'm sorry to say that, but the new front page design is horrible and
> totally confusing.
>
> I hope it'll change soon in the new release.
>
> E.
>
>
> On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) <
> ifat.a...@alcatel-lucent.com> wrote:
>
>> Hi,
>>
>> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
>>
>> Ifat.
>>
>>
>> -Original Message-
>> From: Spencer Krum [mailto:n...@spencerkrum.com]
>> Sent: Monday, December 14, 2015 9:53 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
>>
>> This is a gentle reminder that the downtime will be this Wednesday
>> starting at 17:00 UTC.
>>
>> Thank you for your patience,
>> Spencer
>>
>> --
>>   Spencer Krum
>>   n...@spencerkrum.com
>>
>>


Re: [openstack-dev] [nova] Reminder: Low Priority Blueprint Review Day is Thursday 17th December 2015

2015-12-17 Thread John Garbutt
On 15 December 2015 at 15:31, John Garbutt  wrote:
> To help with the review push on Thursday, I have created a list of the
> blueprint reviews, that are approved, have Jenkins passing, and are
> more than 50 days old.
>
> The list is a new section at the top of the regular etherpad we use to
> track the most important reviews:
> https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
>
> A full list of approved mitaka blueprints can be seen here:
> https://blueprints.launchpad.net/nova/mitaka
> http://5885fef486164bb8596d-41634d3e64ee11f37e8658ed1b4d12ec.r44.cf3.rackcdn.com/release_status.html

Happy review day (for those in a similar timezone to the UK)!

Thanks,
johnthetubaguy



Re: [openstack-dev] [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread ZhiQiang Fan
thanks for the effort, the new features are welcome

but the new UI is really worse than the old one. It is not only a matter of
habit - it is just plain ugly; at first view I thought it had a CSS bug.

And it is not easy to find the current review status; the old UI had a
clear and nice table for the review list

On Thu, Dec 17, 2015 at 6:24 PM, Marc Koderer  wrote:

> +1 I am also very confused by the new UI.. but maybe it takes time to
> get used to it.
>
> Regards
> Marc
>
> Am 17.12.2015 um 10:55 schrieb Mohan Kumar :
>
> > Eugene:  +1 , Old Gerrit page was better than new one . Please fix
> >
> > On Thu, Dec 17, 2015 at 2:11 PM, Eugene Nikanorov <
> enikano...@mirantis.com> wrote:
> > I'm sorry to say that, but the new front page design is horrible and
> totally confusing.
> >
> > I hope it'll change soon in the new release.
> >
> > E.
> >
> >
> > On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) <
> ifat.a...@alcatel-lucent.com> wrote:
> > Hi,
> >
> > Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
> >
> > Ifat.
> >
> >
> > -Original Message-
> > From: Spencer Krum [mailto:n...@spencerkrum.com]
> > Sent: Monday, December 14, 2015 9:53 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
> >
> > This is a gentle reminder that the downtime will be this Wednesday
> starting at 17:00 UTC.
> >
> > Thank you for your patience,
> > Spencer
> >
> > --
> >   Spencer Krum
> >   n...@spencerkrum.com
> >
> >
> >


Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-17 Thread Evgeniy L
Hi Oleg,

With the same degree of confidence we could say that anything we have at
the beginning of the release cycle is not urgent enough. We pushed
branching earlier specifically for such big changes as the Docker removal,
changing the repo structures, and merging invasive patches for new release
features.

Vladimir Kuklin,

I'm not sure what you mean by "fixing 2 different environments"? An
environment without containers will simplify the debugging process.

Thanks,

On Wed, Dec 16, 2015 at 10:12 PM, Oleg Gelbukh 
wrote:

> Hi
>
> Although I agree that it should be done, the removal of Docker doesn't
> seem an urgent feature to me. It is not blocking anything besides moving to
> full package-based deployment of Fuel, as far as I understand. So it could
> be easily delayed for one milestone, especially if it is already almost
> done and submitted for review, so it could be merged fast before any other
> significant changes land in 'master' after it is open.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Vladimir,
>>
>> I have other activities planned for the time immediately after SCF
>> (separating UI from fuel-web, which may be even more invasive :-)), and
>> it is not a big deal to postpone this feature or another. But I am
>> against the very approach of postponing something because it is too
>> invasive. Once we create the stable branch, master becomes open. Opening
>> master earlier rather than later was our primary intention when we
>> decided to move stable branch creation.
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Vladimir
>>>
>>> I am pretty much for removing Docker, but I do not think that we should
>>> startle our developers/QA folks with the additional effort of fixing 2
>>> different environments. Let's think from the point of view of
>>> development velocity here and delay such changes until at least after
>>> New Year. If we do it immediately after SCF, there will be a whole bunch
>>> of holidays (Russian holidays are Jan 1st-10th) and you (the SME for
>>> Docker removal) will be offline. Do you really want to fix things
>>> instead of enjoying the holidays?
>>>
>>> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:
>>>
 +1 to Vladimir Kozhukalov,

 Entire point of moving branches creation to SCF was to perform such
 changes as
 early as possible in the release, I see no reasons to wait for HCF.

 Thanks,

 On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> -1
>
> We already discussed this and we have made a decision to move stable
> branch creation from HCF to SCF. There were reasons for this. We agreed
> that once stable branch is created, master becomes open for new features.
> Let's avoid discussing this again.
>
> Vladimir Kozhukalov
>
> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
> bgaiful...@mirantis.com> wrote:
>
>> +1
>>
>> Regards,
>> Bulat Gaifullin
>> Mirantis Inc.
>>
>>
>>
>> On 15 Dec 2015, at 22:19, Andrew Maksimov 
>> wrote:
>>
>> +1
>>
>> Regards,
>> Andrey Maximov
>> Fuel Project Manager
>>
>> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin <
>> vkuk...@mirantis.com> wrote:
>>
>>> Folks
>>>
>>> This email is a proposal to push Docker containers removal from the
>>> master node to the date beyond 8.0 HCF.
>>>
>>> Here is why I propose to do so.
>>>
>>> Removal of Docker is a rather invasive change and may introduce a lot
>>> of regressions. It may well affect how bugs are fixed: we might have 2
>>> ways of fixing them, and during SCF of 8.0 this may affect the velocity
>>> of bug fixing, as you need to fix bugs in master prior to fixing them
>>> in stable branches. This may actually significantly slow our bug-fixing
>>> pace and put the 8.0 GA release at risk.
>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 35bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>>

[openstack-dev] [Ironic][Neutron] Integration status

2015-12-17 Thread Vasyl Saienko
Hello Ironic/Neutron community,

The Ironic patches had gone stale and were in merge conflict for a while.
Yesterday I rebased those patches and put them into a single chain. I have
already replied to/resolved some comments and will do the rest in the near
future.

I'm happy to announce that it is now possible to test the Ironic/Neutron
integration on devstack.
Devstack should be patched with [0]. A local.conf can be found here [1].

I kindly ask everyone to start actively reviewing the patches [2]. It
would be cool to have this feature in Mitaka.


[0] https://review.openstack.org/256364
[1]
https://review.openstack.org/#/c/258596/3/devstack/doc/source/guides/ironic-neutron-integration.rst
[2]
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration
[3] https://etherpad.openstack.org/p/ironic-neutron-mid-cycle

Sincerely,
Vasyl Saienko


[openstack-dev] [Fuel][Bareon] Fuel & Bareon integration (fuel modularisation)

2015-12-17 Thread Evgeniy L
Hi,

Some time ago, we started a discussion [0] about the Fuel modularisation
activity. Due to unexpected circumstances the POC was delayed.

Regarding the partitioning/provisioning system, we have a POC with a demo
[1] (thanks to Sylwester), which shows how the integration of Fuel and
Bareon [2] can be done.

To summarise the implementation:
* we have a simple implementation of Bareon-API [3], which stores
  partitioning-related data and allows editing it
* a new extension has been implemented for Nailgun [4], which uses
  Bareon-API to store partitioning information, so we will be able to
  easily switch between the classic volume_manager implementation and the
  Bareon-API extension
* when provisioning starts, the extension retrieves the data from
  Bareon-API

Next steps:
* create the Bareon-API repository and start a production-ready
  implementation
* create a spec for the Fuel project
* create a spec for the Bareon project

If you have any questions, don't hesitate to ask them in this thread; you
can also find us on the #openstack-bareon channel.

Thanks!

[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077025.html
[1] https://www.youtube.com/watch?v=GTJM8i7DL0w
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
[3] https://github.com/Mirantis/bareon-api
[4] https://review.openstack.org/#/c/250864/


Re: [openstack-dev] [Ironic][Neutron] Integration status

2015-12-17 Thread Vasyl Saienko
On Thu, Dec 17, 2015 at 12:23 PM, Vasyl Saienko 
wrote:

> Hello Ironic/Neutron community,
>
> The Ironic patches had gone stale and were in merge conflict for a while.
> Yesterday I rebased those patches and put them into a single chain. I have
> already replied to/resolved some comments and will do the rest in the near
> future.
>
> I'm happy to announce that it is possible to test Ironic/Neutron
> integration on devstack.
> Devstack should be patched with [0]. local.conf can be found here [1].
>
> I'm kindly asking to start actively reviewing patches [2]. It would be
> cool to have this feature in Mitaka.
>
>
> [0] https://review.openstack.org/256364
>
The correct patch to devstack is https://review.openstack.org/#/c/256294/

>
> [1]
> https://review.openstack.org/#/c/258596/3/devstack/doc/source/guides/ironic-neutron-integration.rst
> [2]
> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration
> [3] https://etherpad.openstack.org/p/ironic-neutron-mid-cycle
>
> Sincerely,
> Vasyl Saienko
>


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Evgeniy L
Hi,

Since older Postgres doesn't introduce bugs and won't harm new features,
I would vote for downgrading to 9.2.

The reasons are:
1. not having to support our own package for CentOS (as far as I know, 9.3
   for Ubuntu is already there)
2. should Fuel some day be a part of upstream CentOS? If yes, or if there
   is even a small probability of that, we should be as compatible as
   possible with the upstream repo. If we don't consider such a
   possibility, it doesn't really matter, because the user will have to
   connect an external repo anyway.

Since we already use Postgres-specific features, we should spawn a
separate thread on whether we should continue doing that, and on whether
there is a real need to support MySQL, for example.

Thanks,

On Wed, Dec 16, 2015 at 3:58 PM, Igor Kalnitsky 
wrote:

> > From what I understand, we are using 9.2 since the CentOS 7 switch. Can
> > anyone point me to a bug caused by that?
>
> AFAIK, there are no such bugs. Some folks just have *concerns*. Anyway,
> it's up to the packaging team to decide whether to package it or not.
>
> From the Nailgun POV, I'd like to see classical RDBMS schemas as much as
> possible, and not rely on the database backend and its version.
>
> On Wed, Dec 16, 2015 at 11:30 AM, Bartłomiej Piotrowski
>  wrote:
> > On 2015-12-16 10:14, Bartłomiej Piotrowski wrote:
> >> On 2015-12-16 08:23, Mike Scherbakov wrote:
> >>> We could consider downgrading in Fuel 9.0, but I'd very carefully
> >>> consider that. As Vladimir Kuklin said, there may be other users who
> >>> already rely on 9.3 for some of their enhancements.
> >>
> >> That will be way too late for that, as it will make upgrade procedure
> >> more complicated. Given no clear upgrade path from 7.0 to 8.0, it sounds
> >> like perfect opportunity to use what is provided by base distribution.
> >> Are there actual users facilitating 9.3 features or is it some kind of
> >> Invisible Pink Unicorn?
> >>
> >> Bartłomiej
> >>
> >
> > I also want to remind that we are striving for possibility to let users
> > do 'yum install fuel' (or apt) to make the magic happen. There is not
> > much magic in requiring potential users to install specific PostgreSQL
> > version because someone said so. It's either supporting the lowest
> > version available (CentOS 7 – 9.2, Ubuntu 14.04 – 9.3, Debian Jessie –
> > 9.4, openSUSE Leap – 9.4) or "ohai add this repo with our manually
> > imported and rebuilt EPEL package".
> >
> > From what I understand, we are using 9.2 since the CentOS 7 switch. Can
> > anyone point me to a bug caused by that?
> >
> > BP
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ceilometer][gnocchi] 'bad' resource_id

2015-12-17 Thread Julien Danjou
On Wed, Dec 16 2015, gord chung wrote:

> but when you query, you do use the original resource_id.  the translation
> happens on both writes and reads. while in reality the db will store a
> different id, users shouldn't really be aware of this.

We could add another field storing the actual non-translated ID as a
string I guess.

> that said, because of pecan, translations don't help when our ids have '/' in
> them... we should definitely fix that.

Yeah, nothing is gonna help here.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */
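The translation described above, applied identically on writes and reads so users keep querying with their original id, can be sketched like this. The namespace constant and function name are illustrative, not Gnocchi's actual code:

```python
import uuid

# Illustrative namespace: any fixed UUID works, as long as it never changes,
# so the mapping stays deterministic across writes and reads.
RESOURCE_ID_NAMESPACE = uuid.UUID("0a7a15ff-aa13-4ac2-897c-9bdf30ce175b")

def translate_resource_id(resource_id):
    """Return resource_id as a UUID, hashing 'bad' (non-UUID) ids."""
    try:
        # Already a valid UUID: keep it as-is.
        return uuid.UUID(resource_id)
    except ValueError:
        # 'Bad' id such as "compute1.eth0": derive a stable UUID from it.
        return uuid.uuid5(RESOURCE_ID_NAMESPACE, resource_id)

# The same input always maps to the same UUID, so queries still resolve:
print(translate_resource_id("compute1.eth0"))
```

A separate field keeping the non-translated id as a string, as suggested above, would then let users look the resource up by exactly what they sent.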




Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Oleg Gelbukh
In fact, it seems that 9.2 has been in the mix since the introduction of
CentOS 7. Thus, all tests that have been made since then were made against
9.2. So upgrading it to 9.3 is actually a change that has to be blocked by
FF/SCF.

Just my 2c.

--
Best regards,
Oleg Gelbukh

On Thu, Dec 17, 2015 at 12:13 PM, Evgeniy L  wrote:

> Hi Andrew,
>
> It doesn't look fair at all to say that we use Postgres-specific features
> for no reason or, as you said, "just because we want".
> For example, we used Arrays, which fit our roles usage pretty well and
> improved readability and performance.
> Or try to fit something like [1] into a relational schema - I don't think
> we would get a good result.
>
> P.S. Sending a link to a holy-war topic (schema vs schemaless) won't help
> solve our specific problem of downgrading Postgres vs keeping the old (new)
> version.
>
> [1]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
>
>
> On Tue, Dec 15, 2015 at 10:53 PM, Andrew Maksimov 
> wrote:
>
>> +1 to Igor's suggestion to downgrade Postgres to 9.2. Our users don't work
>> directly with Postgres, so there is no deprecation of Fuel features.
>> Maintaining our own custom Postgres package just because we want "JSON
>> columns" is not a rational decision. Come on, Fuel is not a billing system
>> with thousands of tables and special requirements for the database. At
>> least, we should try to keep it simple and avoid unnecessary complication.
>>
>> PS
>>  BTW, some people suggest avoiding JSON columns; read [1],
>> "PostgreSQL anti-patterns: unnecessary json columns".
>>
>> [1] -
>> http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
>>
>> Regards,
>> Andrey Maximov
>> Fuel Project Manager
>>
>>
>> On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Folks
>>>
>>> Let me add my 2c here.
>>>
>>> I am for using Postgres 9.3. Here is an additional argument to the ones
>>> provided by Artem, Aleksandra and others.
>>>
>>> Fuel is sometimes highly customized by our users for their
>>> specific needs. It has been Postgres 9.3 for a while, and they might well
>>> have gotten used to it and assumed by default that this would not change.
>>> So some of the respective features they are developing for their own sake
>>> may depend on Postgres 9.3, and we will never be able to tell the fraction
>>> of such use cases. Moreover, downgrading the DBMS version of Fuel should
>>> inevitably be considered a 'deprecation' of some features our software
>>> suite is providing to our users. This actually means that we MUST provide
>>> our users with a warning and a deprecation period to allow them to adjust
>>> to these changes. Obviously, an accidental change of the Postgres version
>>> does not follow such a policy in any way. So I see no other way except
>>> getting back to Postgres 9.3.
>>>
>>>
>>> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky >> > wrote:
>>>
 Hey Mike,

 Thanks for your input.

 > actually not.  if you replace your ARRAY columns with JSON entirely,

 We still need to fix the code, i.e. change ARRAY-specific queries
 to JSON ones throughout the code. ;)

 > there's already a mostly finished PR for SQLAlchemy support in the
 queue.

 Does it mean SQLAlchemy will have one unified interface to make JSON
 queries? So we can use different backends if necessary?

 Thanks,
 - Igor

 On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
 >
 >
 > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
 >> Hey Julien,
 >>
 >>>
 https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
 >>
 >> I believe this blueprint is about DB for OpenStack cloud (we use
 >> Galera now), while here we're talking about DB backend for Fuel
 >> itself. Fuel has a separate node (so called Fuel Master) and we use
 >> PostgreSQL now.
 >>
 >>> does that mean Fuel is only going to be able to run with PostgreSQL?
 >>
 >> Unfortunately, we are already tied to PostgreSQL. For instance, we use
 >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
 >> way to tighten knots harder.
 >
 > actually not.  if you replace your ARRAY columns with JSON entirely,
 > MySQL has JSON as well now:
 > https://dev.mysql.com/doc/refman/5.7/en/json.html
 >
 > there's already a mostly finished PR for SQLAlchemy support in the
 queue.
 >
 >
 >
 >>
 >> - Igor
 >>
 >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
 wrote:
 >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
 >>>
  The things I want to notice are:
 
  * Currently we aren't tied up to PostgreSQL 9.3.
  * There's a patch [2] that 
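The unified interface asked about above exists as SQLAlchemy's generic JSON type (added in SQLAlchemy 1.1): the same column definition compiles to the backend's native JSON type on PostgreSQL and MySQL 5.7+. A minimal sketch, using SQLite only as a stand-in backend and an invented `nodes` table:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

# One column definition for every backend: the dialect picks the native
# type (JSON/JSONB-backed on PostgreSQL, JSON on MySQL 5.7+).
nodes = sa.Table(
    "nodes", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("roles", sa.JSON),  # would replace a PostgreSQL-only ARRAY
)

engine = sa.create_engine("sqlite://")  # stand-in backend for the demo
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(nodes.insert(), [{"id": 1, "roles": ["controller", "ceph-osd"]}])
    row = conn.execute(nodes.select()).fetchone()

print(row.roles)  # round-trips as a Python list
```

Queries against the JSON structure itself (e.g. filtering on a key) still vary somewhat per backend, which is the part of the code that would need reworking when moving off ARRAY.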

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-17 Thread Oleg Gelbukh
Evgeniy,

True, and I fully support merging this particular change as soon as
possible, i.e. the moment the 'master' is open for 9.0 development.

-Oleg

On Thu, Dec 17, 2015 at 12:28 PM, Evgeniy L  wrote:

> Hi Oleg,
>
> With the same degree of confidence we can say that anything we have at the
> beginning of
> the release cycle is not urgent enough. We pushed early branching
> specifically for
> such big changes as Docker removal/changing repo structures and merging
> invasive patches
> for new release features.
>
> Vladimir Kuklin,
>
> I'm not sure what you mean by "fixing 2 different environments". An
> environment without
> containers will simplify the debugging process.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 10:12 PM, Oleg Gelbukh 
> wrote:
>
>> Hi
>>
>> Although I agree that it should be done, the removal of Docker doesn't
>> seem an urgent feature to me. It is not blocking anything besides moving to
>> full package-based deployment of Fuel, as far as I understand. So it could
>> be easily delayed for one milestone, especially if it is already almost
>> done and submitted for review, so it could be merged fast before any other
>> significant changes land in 'master' after it is open.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Vladimir,
>>>
>>> I have other activities planned for the time immediately after SCF
>>> (separating UI from fuel-web, maybe it is even more invasive :-)), and it
>>> is not a big deal to postpone this feature or another. I am against the
>>> very approach of postponing something because it is too invasive. If we
>>> create the stable branch, master becomes open. Opening master earlier
>>> rather than later was our primary intention when we decided to move
>>> stable branch creation.
>>>
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Vladimir

 I am pretty much for removing Docker, but I do not think that we should
 saddle our developers/QA folks with additional effort on fixing 2
 different environments. Let's think from the point of view of development
 velocity here and delay such changes until at least after NY, because if
 we do it immediately after SCF there will be a whole bunch of holidays
 (Russian holidays are Jan 1st-10th) and you (who is the SME for Docker
 removal) will be offline. Do you really want to fix things instead of
 enjoying the holidays?

 On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:

> +1 to Vladimir Kozhukalov,
>
> Entire point of moving branches creation to SCF was to perform such
> changes as
> early as possible in the release, I see no reasons to wait for HCF.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> -1
>>
>> We already discussed this and we have made a decision to move stable
>> branch creation from HCF to SCF. There were reasons for this. We agreed
>> that once stable branch is created, master becomes open for new features.
>> Let's avoid discussing this again.
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
>> bgaiful...@mirantis.com> wrote:
>>
>>> +1
>>>
>>> Regards,
>>> Bulat Gaifullin
>>> Mirantis Inc.
>>>
>>>
>>>
>>> On 15 Dec 2015, at 22:19, Andrew Maksimov 
>>> wrote:
>>>
>>> +1
>>>
>>> Regards,
>>> Andrey Maximov
>>> Fuel Project Manager
>>>
>>> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin <
>>> vkuk...@mirantis.com> wrote:
>>>
 Folks

 This email is a proposal to push Docker containers removal from the
 master node to the date beyond 8.0 HCF.

 Here is why I propose to do so.

 Removal of Docker is a rather invasive change and may introduce a
 lot of regressions. It may well affect how bugs are fixed - we might
 have 2 ways of fixing them - and during SCF of 8.0 this may affect the
 velocity of bug fixing, as you need to fix bugs in master prior to
 fixing them in stable branches. This may actually slow our bugfixing
 pace significantly and put the 8.0 GA release at risk.



 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com 
 www.mirantis.ru
 vkuk...@mirantis.com


 

Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-17 Thread Vikram Choudhary
Hi All,

The old interface was better. Is it possible to restore it?

Thanks
Vikram

On Tue, Dec 15, 2015 at 1:23 AM, Spencer Krum  wrote:

> This is a gentle reminder that the downtime will be this Wednesday
> starting at 17:00 UTC.
>
> Thank you for your patience,
> Spencer
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
> On Tue, Dec 1, 2015, at 10:19 PM, Stefano Maffulli wrote:
> > On 12/01/2015 06:38 PM, Spencer Krum wrote:
> > > There is a thread beginning here:
> > >
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> > > which covers what to expect from the new software.
> >
> > Nice! This is awesome: the new review panel lets you edit files on the
> > web interface. No more `git review -d` and subsequent commit to fix a
> > typo. I think this is huge for documentation and all sort of nitpicking
> > :)
> >
> > And while I'm at it, I respectfully bow to the infra team: keeping pace
> > with frequent software upgrades at this size is no small feat and a rare
> > accomplishment. Good job.
> >
> > /stef
> >
> >


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Artem Silenkov
Hello!
We merged 9.3 a week ago. From the packaging team's side, the downgrade is
not an option and was made by mistake.
Regards
Artem Silenkov
---
MOS-Packaging

On Thu, Dec 17, 2015, 12:32 Oleg Gelbukh  wrote:

> In fact, it seems that 9.2 is in the mix since the introduction of
> centos7. Thus, all tests that have been made since then are made against
> 9.2. So, upgrading it to 9.3 actually is a change that has to be blocked by
> FF/SCF.
>
> Just my 2c.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Dec 17, 2015 at 12:13 PM, Evgeniy L  wrote:
>
>> Hi Andrew,
>>
>> It doesn't look fair at all to say that we use Postgres specific feature
>> for no reasons
>> or as you said "just because we want".
>> For example we used Arrays which fits pretty well for our roles usage,
>> which improved
>> readability and performance.
>> Or try to fit into relational system something like that [1], I don't
>> think that we will get
>> a good result.
>>
>> P.S. sending a link to a holywar topic (schema vs schemaless), won't help
>> to solve our
>> specific problem with Postgres downgrading vs keeping old (new) version.
>>
>> [1]
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
>>
>>
>> On Tue, Dec 15, 2015 at 10:53 PM, Andrew Maksimov > > wrote:
>>
>>> +1 to Igor suggestion to downgrade Postgres to 9.2. Our users don't work
>>> directly with Postgres, so there is no any deprecation of Fuel features.
>>> Maintaining our own custom Postgres package just because we want "JSON
>>> column" is not a rational decision. Come on, fuel is not a billing system
>>> with thousands tables and special requirements to database. At least, we
>>> should try to keep it simple and avoid unnecessary complication.
>>>
>>> PS
>>>  BTW, some people suggest to avoid using  json columns, read [1]
>>> PostgreSQL anti-patterns: unnecessary json columns.
>>>
>>> [1] -
>>> http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
>>>
>>> Regards,
>>> Andrey Maximov
>>> Fuel Project Manager
>>>
>>>
>>> On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Folks

 Let me add my 2c here.

 I am for using Postgres 9.3. Here is an additional argument to the ones
 provided by Artem, Aleksandra and others.

 Fuel is being sometimes highly customized by our users for their
 specific needs. It has been Postgres 9.3 for a while and they might have as
 well gotten used to it and assumed by default that this would not change.
 So some of their respective features they are developing for their own sake
 may depend on Postgres 9.3 and we will never be able to tell the fraction
 of such use cases. Moreover, downgrading DBMS version of Fuel should be
 inevitably considered as a 'deprecation' of some features our software
 suite is providing to our users. This actually means that we MUST provide
 our users with a warning and deprecation period to allow them to adjust to
 these changes. Obviously, accidental change of Postgres version does not
 follow such a policy in any way. So I see no other ways except for getting
 back to Postgres 9.3.


 On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky <
 ikalnit...@mirantis.com> wrote:

> Hey Mike,
>
> Thanks for your input.
>
> > actually not.  if you replace your ARRAY columns with JSON entirely,
>
> It still needs to fix the code, i.e. change ARRAY-specific queries
> with JSON ones around the code. ;)
>
> > there's already a mostly finished PR for SQLAlchemy support in the
> queue.
>
> Does it mean SQLAlchemy will have one unified interface to make JSON
> queries? So we can use different backends if necessary?
>
> Thanks,
> - Igor
>
> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
> >
> >
> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >> Hey Julien,
> >>
> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >>
> >> I believe this blueprint is about DB for OpenStack cloud (we use
> >> Galera now), while here we're talking about DB backend for Fuel
> >> itself. Fuel has a separate node (so called Fuel Master) and we use
> >> PostgreSQL now.
> >>
> >>> does that mean Fuel is only going to be able to run with
> PostgreSQL?
> >>
> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
> >> way to tighten knots harder.
> >
> > actually not.  if you replace your ARRAY columns with JSON entirely,
> > MySQL has JSON as well now:
> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >
> > there's already a mostly finished PR for SQLAlchemy support 

Re: [openstack-dev] [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread Marc Koderer
+1, I am also very confused by the new UI... but maybe it takes time to get
used to it.

Regards
Marc

On 17.12.2015 at 10:55, Mohan Kumar  wrote:

> Eugene: +1, the old Gerrit page was better than the new one. Please fix. 
> 
> On Thu, Dec 17, 2015 at 2:11 PM, Eugene Nikanorov  
> wrote:
> I'm sorry to say that, but the new front page design is horrible and totally 
> confusing.
> 
> I hope it'll change soon in the new release.
> 
> E.
> 
> 
> On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) 
>  wrote:
> Hi,
> 
> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
> 
> Ifat.
> 
> 
> -Original Message-
> From: Spencer Krum [mailto:n...@spencerkrum.com]
> Sent: Monday, December 14, 2015 9:53 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
> 
> This is a gentle reminder that the downtime will be this Wednesday starting 
> at 17:00 UTC.
> 
> Thank you for your patience,
> Spencer
> 
> --
>   Spencer Krum
>   n...@spencerkrum.com
> 
> 




Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-17 Thread Victor Stinner

On 16/12/2015 20:33, Davanum Srinivas wrote:

Brant,

I am ok either way, guess the alternative was to add keystone-core
directly to the oslo.policy core group (can't check right now).

The name is very possibly going to create confusion

-- Dims


I heard some people consider that "OpenStack" and "Oslo" are two 
projects with different goals and two separate developer groups.


It's sad that the purpose of Oslo is misunderstood :-/ The Oslo project is 
working for OpenStack. Oslo developers collaborate closely with all 
OpenStack subprojects, they take care of backward compatibility, they 
work to reduce the technical debt, and they regularly help to fix Oslo 
issues in other projects. So Oslo doesn't just produce code and break 
code for free; Oslo developers are also consumers of Oslo code and make 
sure that everything works fine.


Sorry, this email is not really related to the Keystone discussion, it's 
just to share my feelings on Oslo.


Victor



Re: [openstack-dev] [stable] meeting time proposal

2015-12-17 Thread Thierry Carrez
Matt Riedemann wrote:
> The proposed meeting schedule was approved so we have this:
> 
> http://eavesdrop.openstack.org/#Stable_Team_Meeting

I can attend the Tuesday ones.

> Now I guess the question is if anyone wants to meet next week (Tuesday
> 12/22 at 1500 UTC)?
> 
> I'm working Monday and Tuesday next week and then I'm out for the rest
> of the year, so I'm fine with skipping an initial meeting before a long
> break and I'm assuming most other people are too. If anyone feels
> otherwise and really wants to meet next Tuesday, let me know and I'll
> get an agenda put together. Otherwise I'll plan for the first meeting in
> early January.

I can attend next week, but it's probably better to maximize the
attendance for the first team meeting(s) so starting on January 5th
could be a better bet.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ceilometer][gnocchi][vitrage] 'bad' resource_id

2015-12-17 Thread gord chung



On 17/12/15 08:17 AM, AFEK, Ifat (Ifat) wrote:

What do you mean by "create new types of resources"?
Suppose we want to add a new resource type with a new id format, what
should we do for that? Change Gnocchi code / add a plug-in / add a
new type definition in the database / ...?


i believe this comment was because we currently don't have a way to 
support dynamic creation of new resource types. if you want to just add 
a new resource sans metadata, you probably don't need a new type, but if 
you want to capture specific metadata attributes, you will probably need 
to create a new resource type in the db.


--
gord




Re: [openstack-dev] [ceilometer][gnocchi][vitrage] 'bad' resource_id

2015-12-17 Thread AFEK, Ifat (Ifat)
Hi,

In Vitrage[1] project we handle different kinds of resources, even ones 
that are not managed by OpenStack. For example, we can get from nagios
information about an unreachable switch. I don't know in advance what 
the different IDs would look like, since we have an open pluggable 
architecture.

Our current use cases are about alarms and are related to Aodh and not
to Gnocchi; yet I think we might want to use non-UUID ids on metrics 
in the future.

Please see my comment below.

[1] https://wiki.openstack.org/wiki/Vitrage

Thanks,
Ifat.


> -Original Message-
> From: Lu, Lianhao [mailto:lianhao...@intel.com]
> Sent: Wednesday, December 16, 2015 10:02 PM
> 
> Hi stackers,
> 
> In ceilometer, some metrics(e.g. network.incoming.bytes for VM net
> interface, hardware.network.incoming.bytes for host net interface,
> compute.node.cpu.percentage for nova compute node host cpu utilization,
> etc.) don't have their resource_id in UUID format(which is required by
> gnocchi). Instead, they have something like . as
> their resource_id, in some cases even the  part won't be in uuid
> format.  Gnocchi will treat these kind of resource_id as bad id, and
> build a new UUID format resource_id for them. Since users are mostly
> using resource_id to identify their resources, changing user passed in
> resource_id would require the users extra effort to identify the
> resources in gnocchi and link them with the resources they original
> passed in.
> 
> It seems there're several options to handle this kind of 'bad'
> resource_id problem. I'm writing this email to ask for your opinios.
> 
> 1. Create new types of resource in gnocchi, and put the original
> resource_id information in new resource attributes for that specific
> type. This might require adding different new code in gnocchi for
> different types of metrics with 'bad' resource_ids, but it might give
> users fine-grained control and make them aware they're dealing with
> special types of resources with 'bad' resource_ids.

What do you mean by "create new types of resources"? 
Suppose we want to add a new resource type with a new id format, what
should we do for that? Change Gnocchi code / add a plug-in / add a 
new type definition in the database / ...?

> 
> 2. Add a new resource attribute original_resource_id to the generic
> resource type, which will hence be inherited by all resource types in
> gnocchi. This won't require adding new code for resources with 'bad'
> ids, but might require adding a new db index on original_resource_id for
> resource search purposes.
> 
> Any comments or suggestions?
> 
> Best Regards,
> -Lianhao Lu
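Option 2 above can be sketched as a generic resource record that always carries a UUID primary key plus an indexed original_resource_id. Everything here (the names, the namespace constant, the in-memory store standing in for the resource table) is illustrative, not Gnocchi's actual schema:

```python
import uuid

# Illustrative fixed namespace for hashing non-UUID ids deterministically.
NAMESPACE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

class ResourceStore:
    """Toy in-memory stand-in for the generic resource table."""

    def __init__(self):
        self._by_uuid = {}      # primary key: always a UUID
        self._by_original = {}  # the extra index option 2 would add

    def add(self, resource_id):
        try:
            rid = uuid.UUID(resource_id)
        except ValueError:
            rid = uuid.uuid5(NAMESPACE, resource_id)
        record = {"id": rid, "original_resource_id": resource_id}
        self._by_uuid[rid] = record
        self._by_original[resource_id] = record
        return record

    def get(self, resource_id):
        # Users keep searching with the id they originally passed in.
        return self._by_original.get(resource_id)

store = ResourceStore()
store.add("compute1.eth0")
print(store.get("compute1.eth0")["id"])
```

The point of the second index is exactly what the email notes: lookups by the user's original string stay cheap without changing the UUID-only primary key.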






Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-17 Thread Arkady_Kanevsky
Hear, hear.
Even a trivial thing like the review button to submit is hard to find. The new
UI is much less intuitive than the old one.

From: Vikram Choudhary [mailto:viks...@gmail.com]
Sent: Thursday, December 17, 2015 3:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
; n...@spencerkrum.com
Subject: Re: [openstack-dev] Gerrit Upgrade 12/16

Hi All,

Old interface was better. Is it possible to restore the same!

Thanks
Vikram

On Tue, Dec 15, 2015 at 1:23 AM, Spencer Krum 
> wrote:
This is a gentle reminder that the downtime will be this Wednesday
starting at 17:00 UTC.

Thank you for your patience,
Spencer

--
  Spencer Krum
  n...@spencerkrum.com

On Tue, Dec 1, 2015, at 10:19 PM, Stefano Maffulli wrote:
> On 12/01/2015 06:38 PM, Spencer Krum wrote:
> > There is a thread beginning here:
> > http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> > which covers what to expect from the new software.
>
> Nice! This is awesome: the new review panel lets you edit files on the
> web interface. No more `git review -d` and subsequent commit to fix a
> typo. I think this is huge for documentation and all sort of nitpicking
> :)
>
> And while I'm at it, I respectfully bow to the infra team: keeping pace
> with frequent software upgrades at this size is no small feat and a rare
> accomplishment. Good job.
>
> /stef
>


Re: [openstack-dev] [Fuel][Bareon] Fuel & Bareon integration (fuel modularisation)

2015-12-17 Thread Igor Kalnitsky
> create Bareon-API repository, and start production ready implementation

For what reason do we need a separate repo? I thought the API would be a
part of the bareon repo. Or is bareon just a provisioning agent, which
will be driven by bareon-api?

On Thu, Dec 17, 2015 at 12:29 PM, Evgeniy L  wrote:
> Hi,
>
> Some time ago, we’ve started a discussion [0] about Fuel modularisation
> activity.
> Due to unexpected circumstances POC has been delayed.
>
> Regarding the partitioning/provisioning system, we have a POC with a demo [1]
> (thanks to Sylwester), which shows how the integration of Fuel and Bareon
> [2] can
> be done.
>
> To summarise the implementation:
> * we have a simple implementation of Bareon-API [3], which stores
> partitioning
>   related data and allows to edit it
> * for Nailgun new extension has been implemented [4], which uses Bareon-API
>   to store partitioning information, so we will be able to easily switch
> between
>   classic volume_manager implementation and Bareon-API extension
> * when provisioning gets started, the extension retrieves the data from
> Bareon-API
>
> Next steps:
> * create Bareon-API repository, and start production ready implementation
> * create a spec for Fuel project
> * create a spec for Bareon project
>
> If you have any questions don’t hesitate to ask them in this thread, also
> you can
> find us on #openstack-bareon channel.
>
> Thanks!
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077025.html
> [1] https://www.youtube.com/watch?v=GTJM8i7DL0w
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
> [3] https://github.com/Mirantis/bareon-api
> [4] https://review.openstack.org/#/c/250864/
>
>


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-17 Thread Andrea Rosa

>> The communication with cinder is async, Nova doesn't wait or check if
>> the detach on cinder side has been executed correctly.
> 
> Yeah, I guess nova gets the 202 back:
> 
> http://logs.openstack.org/18/258118/2/check/gate-tempest-dsvm-full-ceph/7a5290d/logs/screen-n-cpu.txt.gz#_2015-12-16_03_30_43_990
> 
> 
> Should nova be waiting for detach to complete before it tries deleting
> the volume (in the case that delete_on_termination=True in the bdm)?
> 
> Should nova be waiting (regardless of volume delete) for the volume
> detach to complete - or timeout and fail the instance delete if it doesn't?

I'll revisit this change next year trying to look at the problem in a
different way.
Thank you all for your time and all the suggestions.
--
Andrea Rosa
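If Nova were to wait as the questions above suggest, the loop would look roughly like the sketch below. The status names and the get_volume_status callable are illustrative stand-ins (in practice this would be a cinderclient call), not the actual Nova code path:

```python
import time

class VolumeDetachTimeout(Exception):
    """Raised when the volume never reaches 'available' in time."""

def wait_for_detach(get_volume_status, volume_id, timeout=60.0, interval=2.0):
    """Poll the volume until the async detach (202 Accepted) completes.

    Returns the final status, or raises on error/timeout so the caller
    can fail the instance delete instead of deleting a still-attached
    volume.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_volume_status(volume_id)
        if status == "available":
            return status
        if status == "error_detaching":
            raise RuntimeError("detach failed for volume %s" % volume_id)
        time.sleep(interval)
    raise VolumeDetachTimeout(volume_id)

# Simulated backend: the volume stays 'detaching' for two polls.
statuses = iter(["detaching", "detaching", "available"])
print(wait_for_detach(lambda vid: next(statuses), "vol-1", interval=0.01))
```

Only after this returns would the code proceed to the volume delete in the delete_on_termination=True case; the timeout branch is what would let the instance delete fail instead of hanging.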



Re: [openstack-dev] [Fuel][Bareon] Fuel & Bareon integration (fuel modularisation)

2015-12-17 Thread Evgeniy L
Hi Igor,

Bareon by itself doesn't have any REST interface; Bareon is basically
fuel_agent, which is a framework + a CLI wrapper to use it as an agent.
In order to store and edit the required entities in the database we need
some wrapper which adds this functionality. This simple wrapper will be
implemented in Bareon-API.
Users should be able to use Bareon without any additional API/database if
they want to do some basic stuff without the need to store the
configuration, which is not the Fuel use case.
If the question was specifically about Bareon-API being in a separate
repo: there is no reason to store it in a single repo, since we may have
separate teams working on those sub-projects, and they solve somewhat
different problems, user-facing API vs low-level tools.

Thanks,

On Thu, Dec 17, 2015 at 5:33 PM, Igor Kalnitsky 
wrote:

> > create Bareon-API repository, and start production ready implementation
>
> For what reason do we need a separate repo? I thought API will be a
> part of bareon repo. Or bareon is just a provisioning agent, which
> will be driven by bareon-api?
>
> On Thu, Dec 17, 2015 at 12:29 PM, Evgeniy L  wrote:
> > Hi,
> >
> > Some time ago, we’ve started a discussion [0] about Fuel modularisation
> > activity.
> > Due to unexpected circumstances POC has been delayed.
> >
> > Regarding the partitioning/provisioning system, we have a POC with a
> > demo [1] (thanks to Sylwester), which shows how the integration of
> > Fuel and Bareon [2] can be done.
> >
> > To summarise the implementation:
> > * we have a simple implementation of Bareon-API [3], which stores
> >   partitioning-related data and allows editing it
> > * a new extension has been implemented for Nailgun [4], which uses
> >   Bareon-API to store partitioning information, so we will be able to
> >   easily switch between the classic volume_manager implementation and
> >   the Bareon-API extension
> > * when provisioning starts, the extension retrieves the data from
> >   Bareon-API
> >
> > Next steps:
> > * create Bareon-API repository, and start production ready implementation
> > * create a spec for Fuel project
> > * create a spec for Bareon project
> >
> > If you have any questions don’t hesitate to ask them in this thread, also
> > you can
> > find us on #openstack-bareon channel.
> >
> > Thanks!
> >
> > [0]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077025.html
> > [1] https://www.youtube.com/watch?v=GTJM8i7DL0w
> > [2]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
> > [3] https://github.com/Mirantis/bareon-api
> > [4] https://review.openstack.org/#/c/250864/
> >
> >
> >
>


Re: [openstack-dev] [oslo] stable/liberty branch needed for oslo-incubator

2015-12-17 Thread Thierry Carrez
Matt Riedemann wrote:
> On 12/13/2015 10:33 PM, Robert Collins wrote:
>> On 14 December 2015 at 15:28, Matt Riedemann
>>  wrote:
>>> I don't have a pressing need to backport something right now, but as
>>> long as
>>> there was code in oslo-incubator that *could* be synced to other
>>> projects
>>> which wasn't in libraries, then that code could have bugs and code
>>> require
>>> backports to stable/liberty oslo-incubator for syncing to projects
>>> that use
>>> it.
>>
>> I thought the thing to do was backport the application of the change
>> from the projects master?
> 
> Unless the rules changed, things from oslo-incubator were always
> backported to stable oslo-incubator and then sync'ed to the stable
> branches of the affected projects. This is so we wouldn't lose the fix
> in stable oslo-incubator which is shared across other projects, not just
> the target project consuming the fix from oslo-incubator.

I think I remember that the Oslo crew made a change there for the last
branch before incubator removal (something like, "backport it directly
to the local copy"). I couldn't quickly find a thread reference though.
Maybe wait for Doug or dims to be around and chime in.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-12-17 Thread Ihar Hrachyshka

Rossella Sblendido  wrote:


Hi Ihar,


wow, good job!!
Sorry for the very slow reply.
I really like your proposal...some comments inline.

On 12/03/2015 04:46 PM, Ihar Hrachyshka wrote:

Hi,

Small update on the RFE. It was approved for Mitaka, assuming we come up
with proper details upfront through the neutron-specs process.

In the meantime, we have found more use cases for flow management among
features in development: QoS DSCP, also the new OF based firewall
driver. Both authors for those new features independently realized that
agent does not currently play nice with flows set by external code due
to its graceful restart behaviour when rules with unknown cookies are
cleaned up. [The agent uses a random session uuid() to mark rules that
belong to its current run.]
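
To make the problem concrete, here is a minimal sketch of that cookie
behaviour (the class, names, and string-based flow representation are
illustrative, not the agent's actual code):

```python
import uuid

# Sketch: the agent stamps its flows with a per-run cookie; after a
# restart it treats every flow carrying a different cookie as stale and
# deletes it -- which is exactly what wipes out flows installed by
# external features such as QoS DSCP or the OF firewall driver.
class FlowCookie(object):
    def __init__(self):
        # new random 64-bit cookie for each agent run
        self.cookie = uuid.uuid4().int & 0xFFFFFFFFFFFFFFFF

    def stamp(self, flow_spec):
        return "cookie=0x%x,%s" % (self.cookie, flow_spec)

    def stale(self, flows):
        # flows from previous runs (or from other components) don't match
        prefix = "cookie=0x%x," % self.cookie
        return [f for f in flows if not f.startswith(prefix)]
```

An external feature's flow, stamped with its own cookie, always lands in
stale() for the agent's next run, hence the cleanup problem.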

Before I proceed, full disclosure: I know almost nothing about OpenFlow
capabilities, so some pieces below may make no sense. I tried to come up
with high level model first and then try to map it to available OF
features. Please don’t hesitate to comment, I like to learn new stuff! ;)


I am not an expert either so I encourage people to chime in here.


I am thinking lately on the use cases we collected so far. One common
need for all features that were seen to be interested in proper
integration with Open vSwitch agent is to be able to manage feature
specific flows on br-int and br-tun. There are other things that
projects may need, like patch ports, though I am still struggling with
the question of whether it may be postponed or avoided for phase 1.

There are several specific operation 'kinds' that we should cover for
the RFE:
- managing flows that modify frames in-place;
- managing flows that redirect frames.

There are some things that should be considered to make features
cooperate with the agent and other extensions:
- feature flows should have proper priorities based on their ‘kind’
(f.e. in-place modification probably go before redirections);
- feature flows should survive flow reset that may be triggered by the
agent;
- feature flows should survive flow reset without data plane disruption
(=they should support graceful restart:
https://review.openstack.org/#/c/182920).

With that in mind, I see the following high level design for the flow
tables:

- table 0 serves as a dispatcher for specific features;
- each feature gets one or more tables, one per flow ‘kind’ needed;
- for each feature table, a new flow entry is added to table 0 that
would redirect to feature specific table; the rule will be triggered
only if OF metadata is not updated inside the feature table (see the
next bullet); the rule will have priority that is defined for the ‘kind’
of the operation that is implemented by the table it redirects to;
- each feature table will have default actions that will 1) mark OF
metadata for the frame as processed by the feature; 2) redirect back to
table 0;
- all feature specific flow rules (except dispatcher rules) belong to
feature tables;
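
A rough sketch of how the dispatcher and feature-table rules above could
be rendered as ovs-ofctl style flow specs (the table numbers and
priorities are made up, and reg0 stands in for the OF metadata
"processed" mark; a real implementation would need one bit per feature):

```python
# Sketch only: emit ovs-ofctl style flow specs for the table-0 dispatcher
# design described above. Nothing here is the agent's real API.

def dispatcher_rule(feature_table, kind_priority):
    # table 0: frames not yet marked by this feature go to its table;
    # priority encodes the 'kind' (modification before redirection)
    return ("table=0,priority={p},reg0=0/0x1,"
            "actions=resubmit(,{t})").format(p=kind_priority,
                                             t=feature_table)

def feature_default_rule(feature_table):
    # lowest-priority rule in the feature table: mark the frame as
    # processed, then bounce it back to the dispatcher in table 0
    return ("table={t},priority=0,"
            "actions=load:1->NXM_NX_REG0[0],resubmit(,0)").format(
                t=feature_table)
```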

Now, the workflow for extensions that are interested in setting flows
would be:
- on initialize() call, extension defines feature tables it will need;


Do you mean this in a dynamic way, or will every extension have tables
assigned, basically hard-coded? I prefer the second way, so we have more
control over the tables that are currently used.


Do you suggest creating several tables even if an extension is not
interested in all of them? As for the table name, I guess we may build it
as agent_cookie + extension name so that it's clear which tables were
bootstrapped in the current session, and which can be cleaned up after we
clear flows from previous sessions.





it passes the name of the feature table and the ‘kind’ of the actions it
will execute; with that, the following is initialized by the agent: 1)


It would be nice to also pass a filter to match some packets. We probably
don't want to send all the packets to the feature table; the extension can
define that.




It probably stands for some optimization, though I am not sure how serious
it is. If we go this route, we also need to short-circuit metadata marking
on filter mismatch, or do we expect other extensions to influence filter
matching?


I am not sure what it would look like. Do we allow arbitrary matching
filters, or enforce some base types and leave more detailed filters to
extension tables?



table 0 dispatcher entry to redirect frames into feature table; the
entry has the priority according to the ‘kind’ of the table; 2) the


I think we need to define the priority better. According to what you
wrote, we assign priority based on "in-place modification probably goes
before redirections"; I am not sure that's enough. What happens if we have
two features that both require in-place modifications? How do we
prioritize them? Are we going to allow two extensions at the same time?
Let me think more about this... It would be nice to have some real-world
example.


I assumed that multiple extensions don’t mess 

Re: [openstack-dev] [gate] job failure rate at ~ 12% (check queue) <= issue?

2015-12-17 Thread Sean Dague
On 12/17/2015 05:52 AM, Markus Zoeller wrote:
> The job failure rates had an unusual rise at 06:30 UTC this morning [1].
> I couldn't figure out if this is a real issue or somewhat related to
> the gerrit update ~ 18 hours ago. The only thing I found was a time
> frame of ~ 1h where the jobs failed to update the apt repos [2]. As
> this issue is not present anymore in logstash, I expected that the job
> failure rate would drop, but that didn't happen. Long story short,
> do we have an issue? Or is this the aftermath of bug 1526675? 
> 
> [1] http://grafana.openstack.org/dashboard/db/tempest-failure-rate
> [2] logstash query: http://bit.ly/1O8qjtn
> 
> Regards, Markus Zoeller (markus_z)

That graph is a pretty narrow time slice. What's the rolling average on
that?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] How could an L2 agent extension access agent methods ?

2015-12-17 Thread Ihar Hrachyshka

Margaret  wrote:


Hello Ihar,

I have some comments and questions about your proposal.  My apologies if
any of what I say here results from misunderstandings on my part.


Thanks a lot for the reply. I will try to clear up below.



1. I believe there are two sorts of redirection at play here.  The first
involves inter-table traversal while the second allows a frame to exit the
OF pipeline either by being sent to a different port or by being dropped.
Some of what I say next makes use of this distinction.



I am not sure I understand what is inter-table traversal. My assumption was  
that extension tables defined for modification actions don’t redirect  
anywhere except back into table0 (using default flow rules we add to each  
table on its bootstrapping).



2. OpenFlow's Goto instruction directs a frame from one table to the next.
A redirection in this sense must be to a higher-numbered table, which is
to say that OF pipeline processing can only go forward (see p. 18, para. 2
of the 1.4.1 spec).  However, OvS (at least v2.0.2) implements a resubmit
action, which re-searches another table (higher-, lower-, or even
same-numbered) and executes any actions found there in addition to any
subsequent actions in the current flow entry.  It is by using resubmit
that the proposed design could work, as shown in the ovs-ofctl command
posted here.  (Maybe there are other ways, too.)  The resubmit action is a
Nicira vendor extension that at least at one point, and maybe still, was
known to be implemented only by OvS.  I mention this because I wonder if
the proposed design (and my sample command) calls for flow traversal in a
manner not explicitly supported by OpenFlow and so may not work in future
versions of OvS.



Thanks for the pointers to specific OF features we can utilize! I believe
we may assume some implementation of resubmit can safely be expected to be
present in OVS. We may refine the API we rely on later if we see it
deprecated.



3. Regarding the idea of sorting feature flows by kind: I believe that
what is meant by a 'redirection flow table' is a table that could possibly
remove the frame from OF pipeline processing (i.e., by forwarding or
dropping it).  Can you correct/confirm?



Yes, that’s the intent. I feel I need to dig OF documentation a bit so that  
I make myself more in line with terminology used there. Sorry for  
misunderstandings occurring due to vague terms used.



4. Even though the design promotes playing nice by means of feature flow
kinds, I think that features might nevertheless still step on each other's
toes due to assumptions made about field content.  I'm thinking, for
instance, of two features whose in-place frame modifications should be
done in a particular order.  Because of this, I'm not sure that the
granularity of the proposed design can guarantee feature cooperation.
Maybe it would help to prioritize feature flows as ingress-processing
(that is, the flow should be exercised as early as possible in the
pipeline) versus egress-processing (the opposite) in addition to kind; or
maybe that is just what the notion of feature flow kind calls for, at
least in part.  Tied (tangential?) to this is the distinction that
OpenFlow makes between an action list and an action set: the former is a
series of actions that is applied to the frame immediately and in the
order specified in the flow entry; the latter is a proper set of actions
that is applied to the frame only upon its exit from the OF pipeline and
in an order specified by the protocol.  (Action set content is modified as
the frame traverses the OF pipeline.)  Should action sets be disallowed?


I learned a bit more from you, again. :) Thanks!

I am not sure I completely follow the suggestion about prioritizing as
ingress-processing. Can you elaborate?


I hope that we can leave unraveling corner case ordering issues for the 2nd  
phase of the feature and see how pressing it is once we start using the  
framework.


Based on what you wrote, action sets should probably be discouraged.
That said, I am not sure we should control what is used in extension
tables. I envisioned that we pass tables to extensions and allow them to
manage them. If an extension breaks the flow, that's unfortunate, but not
strictly limiting the types of flows extensions can use seems to me a
freedom worth retaining. What do you think?




5. Is it a correct rephrasing of the third bullet of the high-level design
to say: each feature-specific flow entry in table 0 would be triggered
only if the frame's relevant OF metadata has not already been updated as a
result of the frame's previous traversal of the feature table?  I
apologize if I'm suggesting something here that you didn't mean.


That’s exactly what I meant. 

Re: [openstack-dev] [oslo] stable/liberty branch needed for oslo-incubator

2015-12-17 Thread Davanum Srinivas
Thierry,

Right, I believe it was on IRC and not an ML thread. Since there is no
urgency on this one, we can wait till Doug gets back to revisit the
decision.

Thanks,
Dims

On Thu, Dec 17, 2015 at 7:50 AM, Thierry Carrez  wrote:
> Matt Riedemann wrote:
>> On 12/13/2015 10:33 PM, Robert Collins wrote:
>>> On 14 December 2015 at 15:28, Matt Riedemann
>>>  wrote:
 I don't have a pressing need to backport something right now, but as
 long as
 there was code in oslo-incubator that *could* be synced to other
 projects
 which wasn't in libraries, then that code could have bugs and code
 require
 backports to stable/liberty oslo-incubator for syncing to projects
 that use
 it.
>>>
>>> I thought the thing to do was backport the application of the change
>>> from the projects master?
>>
>> Unless the rules changed, things from oslo-incubator were always
>> backported to stable oslo-incubator and then sync'ed to the stable
>> branches of the affected projects. This is so we wouldn't lose the fix
>> in stable oslo-incubator which is shared across other projects, not just
>> the target project consuming the fix from oslo-incubator.
>
> I think I remember that the Oslo crew made a change there for the last
> branch before incubator removal (something like, "backport it directly
> to the local copy"). I couldn't quickly find a thread reference though.
> Maybe wait for Doug or dims to be around and chime in.
>
> --
> Thierry Carrez (ttx)
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] Gerrit Upgrade to ver 2.11 completed.

2015-12-17 Thread Zaro
forgot this one as well: https://review.openstack.org/241278

On Thu, Dec 17, 2015 at 7:02 AM, Zaro  wrote:
> On Thu, Dec 17, 2015 at 12:31 AM, Andrea Frittoli
>  wrote:
>> Thanks for the upgrade, and for all the effort you folks put into keeping
>> our gerrit close to upstream!
>>
>> One thing that I find inconvenient in the new UI is the size of the middle
>> (test results) column: it's too narrow, no matter how large my browser
>> window is, or what browser I use - which causes the name of some of the jobs
>> to wrap to a second line, making it really hard to read test results. At
>> least that's my experience on tempest reviews, where we have a lot of jobs,
>> and rather long names - see [0] for instance.
>>
>> Is there any chance via configuration to make that column slightly wider?
>>
>
> There are a few proposals already:
>   https://review.openstack.org/258751
>   https://review.openstack.org/258744
>
>
>> thank you!
>>
>> andrea
>> [0] https://review.openstack.org/#/c/254274/
>>





Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-17 Thread Matt Riedemann



On 12/17/2015 8:51 AM, Andrea Rosa wrote:



The communication with cinder is async, Nova doesn't wait or check if
the detach on cinder side has been executed correctly.


Yeah, I guess nova gets the 202 back:

http://logs.openstack.org/18/258118/2/check/gate-tempest-dsvm-full-ceph/7a5290d/logs/screen-n-cpu.txt.gz#_2015-12-16_03_30_43_990


Should nova be waiting for detach to complete before it tries deleting
the volume (in the case that delete_on_termination=True in the bdm)?

Should nova be waiting (regardless of volume delete) for the volume
detach to complete - or timeout and fail the instance delete if it doesn't?


I'll revisit this change next year trying to look at the problem in a
different way.
Thank you all for your time and all the suggestions.
--
Andrea Rosa




I had a quick discussion with hemna this morning and he confirmed that 
nova should be waiting for os-detach to complete before we try to delete 
the volume, because if the volume status isn't 'available' the delete 
will fail.


Also, if nova is hitting a failure to delete the volume it's swallowing 
it by passing raise_exc=False to _cleanup_volumes here [1]. Then we go 
on our merry way and delete the bdms in the nova database [2]. But I'd 
think at that point we're orphaning volumes in cinder that think they 
are still attached.


If this is passing today it's probably just luck that we're getting the 
volume detached fast enough before we try to delete it.


[1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2425-L2426
[2] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L909
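

A minimal sketch of the wait hemna suggests: poll the volume status with a
timeout before attempting the delete (the volume-API shape here is
illustrative, not nova's actual interface):

```python
import time

# Illustrative only: wait for the detach to finish (volume 'available')
# before issuing the delete; give up after `timeout` seconds and let the
# caller decide whether to fail the instance delete.
def wait_then_delete_volume(volume_api, context, volume_id,
                            timeout=60, interval=2):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = volume_api.get(context, volume_id)['status']
        if status == 'available':
            volume_api.delete(context, volume_id)
            return True
        if status == 'error':
            break
        time.sleep(interval)
    return False
```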


--

Thanks,

Matt Riedemann




[openstack-dev] [Magnum][Kuryr] Help with using docker-python client in gate

2015-12-17 Thread Gal Sagie
Hello Everyone,

We are trying to add some gate testing for Kuryr and hopefully also
convert these to Rally plugins.

What I am facing in the gate right now is this:
I configure the docker client:

self.docker_client = docker.Client(
base_url='unix://var/run/docker.sock')

And call this:
self.docker_client.create_network(name='fakenet', driver='kuryr')

This works locally, and I also tried to run this code with a different user.

But on the gate this fails:
http://logs.openstack.org/79/258379/5/check/gate-kuryr-dsvm-fullstack-nv/f46ebdb/

{0} kuryr.tests.fullstack.test_network.NetworkTest.test_create_delete_network [0.093287s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "kuryr/tests/fullstack/test_network.py", line 27, in test_create_delete_network
    self.docker_client.create_network(name='fakenet', driver='kuryr')
  File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 35, in wrapper
    return f(self, *args, **kwargs)
  File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/api/network.py", line 28, in create_network
    res = self._post_json(url, data=data)
  File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 166, in _post_json
    return self._post(url, data=json.dumps(data2), **kwargs)
  File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 107, in _post
    return self.post(url, **self._set_request_timeout(kwargs))
  File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/requests/sessions.py", line 511, in post
    return self.request('POST', url, data=data, json=json, **kwargs)

Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread Hongbin Lu
Suro,

FYI: a while back, we tried a distributed lock implementation for bay
operations (here are the patches [1,2,3,4,5]). However, after several
discussions online and offline, we decided to drop the blocking
implementation for bay operations in favor of a non-blocking
implementation (which is not implemented yet). You can find more
discussion in [6,7].

For the async container operations, I would suggest considering a
non-blocking approach first. If that is impossible and we need a blocking
implementation, I suggest using the bay operation patches below as a
reference.
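
For the non-blocking direction, one sketch of keeping per-container
ordering without any distributed lock is to route every operation for a
given container to the same single-worker executor (all names here are
illustrative, not Magnum code):

```python
import threading
from concurrent import futures

# Sketch: serialize operations per container by dedicating a
# single-worker executor to each container id. Operations for different
# containers still run in parallel; operations for one container are
# executed strictly in submission order.
class ContainerDispatcher(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._executors = {}

    def submit(self, container_id, fn, *args, **kwargs):
        with self._lock:
            ex = self._executors.get(container_id)
            if ex is None:
                ex = futures.ThreadPoolExecutor(max_workers=1)
                self._executors[container_id] = ex
        return ex.submit(fn, *args, **kwargs)
```

This relies on the proposal's prerequisite that all operations for a given
container land on the same conductor instance.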

[1] https://review.openstack.org/#/c/171921/
[2] https://review.openstack.org/#/c/172603/
[3] https://review.openstack.org/#/c/172772/
[4] https://review.openstack.org/#/c/172773/
[5] https://review.openstack.org/#/c/172774/
[6] https://blueprints.launchpad.net/magnum/+spec/horizontal-scale
[7] https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: December-16-15 10:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: s...@yahoo-inc.com
Subject: Re: [openstack-dev] [magnum] Magnum conductor async container 
operations


> On Dec 16, 2015, at 6:24 PM, Joshua Harlow  wrote:
> 
> SURO wrote:
>> Hi all,
>> Please review and provide feedback on the following design proposal 
>> for implementing the blueprint[1] on async-container-operations -
>> 
>> 1. Magnum-conductor would have a pool of threads for executing the 
>> container operations, viz. executor_threadpool. The size of the 
>> executor_threadpool will be configurable. [Phase0] 2. Every time, 
>> Magnum-conductor(Mcon) receives a container-operation-request from 
>> Magnum-API(Mapi), it will do the initial validation, housekeeping and 
>> then pick a thread from the executor_threadpool to execute the rest 
>> of the operations. Thus Mcon will return from the RPC request context 
>> much faster without blocking the Mapi. If the executor_threadpool is 
>> empty, Mcon will execute in a manner it does today, i.e. 
>> synchronously - this will be the rate-limiting mechanism - thus 
>> relaying the feedback of exhaustion.
>> [Phase0]
>> How often we are hitting this scenario, may be indicative to the 
>> operator to create more workers for Mcon.
>> 3. Blocking class of operations - There will be a class of 
>> operations, which can not be made async, as they are supposed to 
>> return result/content inline, e.g. 'container-logs'. [Phase0] 4. 
>> Out-of-order considerations for NonBlocking class of operations - 
>> there is a possible race around condition for create followed by 
>> start/delete of a container, as things would happen in parallel. To 
>> solve this, we will maintain a map of a container and executing 
>> thread, for current execution. If we find a request for an operation 
>> for a container-in-execution, we will block till the thread completes 
>> the execution. [Phase0]
> 
> Does whatever do these operations (mcon?) run in more than one process?

Yes, there may be multiple copies of magnum-conductor running on separate hosts.

> Can it be requested to create in one process then delete in another? 
> If so is that map some distributed/cross-machine/cross-process map 
> that will be inspected to see what else is manipulating a given 
> container (so that the thread can block until that is not the case... 
> basically the map is acting like a operation-lock?)

That’s how I interpreted it as well. This is a race prevention technique so 
that we don’t attempt to act on a resource until it is ready. Another way to 
deal with this is check the state of the resource, and return a “not ready” 
error if it’s not ready yet. If this happens in a part of the system that is 
unattended by a user, we can re-queue the call to retry after a minimum delay 
so that it proceeds only when the ready state is reached in the resource, or 
terminated after a maximum number of attempts, or if the resource enters an 
error state. This would allow other work to proceed while the retry waits in 
the queue.
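
The re-queue-until-ready idea above could be sketched roughly like this
(the states, queue shape, and all names are illustrative, not Magnum
code):

```python
import collections

# Illustrative retry queue: an operation runs only when its resource is
# READY; otherwise it is re-queued (up to max_attempts) or failed.
class RetryQueue(object):
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self._queue = collections.deque()

    def submit(self, op, attempts=0):
        self._queue.append((op, attempts))

    def run_once(self, get_state, execute):
        op, attempts = self._queue.popleft()
        state = get_state(op)
        if state == 'READY':
            execute(op)
            return 'done'
        if state == 'ERROR' or attempts + 1 >= self.max_attempts:
            return 'failed'
        # in a real system this re-queue would carry a minimum delay
        self.submit(op, attempts + 1)
        return 'requeued'
```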

> If it's just local in one process, then I have a library for u that 
> can solve the problem of correctly ordering parallel operations ;)

What we are aiming for is a bit more distributed. 

Adrian

>> This mechanism can be further refined to achieve more asynchronous 
>> behavior. [Phase2] The approach above puts a prerequisite that 
>> operations for a given container on a given Bay would go to the same 
>> Magnum-conductor instance.
>> [Phase0]
>> 5. The hand-off between Mcon and a thread from executor_threadpool 
>> can be reflected through new states on the 'container' object. These 
>> states can be helpful to recover/audit, in case of Mcon restart. 
>> [Phase1]
>> 
>> Other considerations -
>> 1. Using eventlet.greenthread instead of real threads => This 
>> approach would require further refactoring the execution 

[openstack-dev] [openstack][magnum] Networking Subteam Meeting

2015-12-17 Thread Daneyon Hansen (danehans)
All,

I have a scheduling conflict and am unable to chair today's subteam meeting. 
Due to the holiday break, we will reconvene on 1/7/16.  Have a happy holidays 
and thanks for your participation in 2015.

Regards,
Daneyon Hansen


Re: [openstack-dev] [neutron][grenade] l3.filters fail to update at test server

2015-12-17 Thread Carl Baldwin
On Thu, Dec 17, 2015 at 5:12 AM, Ihar Hrachyshka  wrote:
> I believe that we should always unconditionally update filters with new
> versions when doing upgrades. Filters should not actually be considered
> configuration files in the first place since they are tightly coupled with
> the code that triggers commands.

+1.  I was actually very surprised to learn how this is handled and
had the same thought.

Carl



Re: [openstack-dev] [Magnum][Kuryr] Help with using docker-python client in gate

2015-12-17 Thread Egor Guz
Gal,

I think you need to set up your Docker environment to allow running the
CLI without sudo (https://docs.docker.com/engine/installation/ubuntulinux/).
Or use a TCP socket instead (https://docs.docker.com/v1.8/articles/basics/);
Magnum/Swarm/docker-machine use this approach all the time.
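
One quick diagnostic for the unix-socket case is to check whether the gate
user actually has read/write access to the socket (the path and helper are
just a sketch):

```python
import os

# Diagnostic sketch: docker.Client over the unix socket fails with a
# permission error when the user cannot read/write the root-owned socket.
def docker_socket_usable(path='/var/run/docker.sock'):
    return os.path.exists(path) and os.access(path, os.R_OK | os.W_OK)
```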

—
Egor

On Dec 17, 2015, at 07:53, Gal Sagie 
> wrote:

Hello Everyone,

We are trying to add some gate testing for Kuryr and hopefully convert these 
also to Rally
plugins.

What i am facing in the gate right now is this:
I configure the docker client:


self.docker_client = docker.Client(
base_url='unix://var/run/docker.sock')


And call this:
self.docker_client.create_network(name='fakenet', driver='kuryr')


This works locally, and i also tried to run this code with a different user

But on the gate this fails:
http://logs.openstack.org/79/258379/5/check/gate-kuryr-dsvm-fullstack-nv/f46ebdb/

2015-12-17 05:22:16.851 |
2015-12-17 05:22:16.852 | {0} kuryr.tests.fullstack.test_network.NetworkTest.test_create_delete_network [0.093287s] ... FAILED
2015-12-17 05:22:16.854 |
2015-12-17 05:22:16.855 | Captured traceback:
2015-12-17 05:22:16.856 | ~~~
2015-12-17 05:22:16.857 |     Traceback (most recent call last):
2015-12-17 05:22:16.859 |       File "kuryr/tests/fullstack/test_network.py", line 27, in test_create_delete_network
2015-12-17 05:22:16.860 |         self.docker_client.create_network(name='fakenet', driver='kuryr')
2015-12-17 05:22:16.861 |       File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 35, in wrapper
2015-12-17 05:22:16.862 |         return f(self, *args, **kwargs)
2015-12-17 05:22:16.864 |       File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/api/network.py", line 28, in create_network
2015-12-17 05:22:16.865 |         res = self._post_json(url, data=data)
2015-12-17 05:22:16.866 |       File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 166, in _post_json
2015-12-17 05:22:16.867 |         return self._post(url, data=json.dumps(data2), **kwargs)
2015-12-17 05:22:16.870 |       File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 107, in _post
2015-12-17 05:22:16.871 |         return self.post(url, **self._set_request_timeout(kwargs))

[openstack-dev] [oslo][osprofiler] OSprofiler spec is ready for review

2015-12-17 Thread Boris Pavlovic
Hi stackers,

OSprofiler spec is ready for review.

Please review it, if you are interested in making native profiling/tracing
OpenStack happen:
https://review.openstack.org/#/c/103825/

Thanks!


Best regards,
Boris Pavlovic


[openstack-dev] [neutron] Re: [gate] any project using oslo.db test_migrations is currently blocked

2015-12-17 Thread Ihar Hrachyshka
Adding [neutron] tag since the subthread now discusses specifically neutron  
stuff.


Ihar Hrachyshka  wrote:


Carl Baldwin  wrote:

On Thu, Dec 17, 2015 at 5:29 AM, Ihar Hrachyshka wrote:
I will update on the Neutron side of things since it seems I have some
knowledge from the field.

So first, on resources: we have Sachi and lifeless from infra side + Ihar
and pc_m looking into expanding constrained jobs in Neutron world.


This is great.  Thank you for your efforts.


Current status is:
- all devstack based jobs are already constrained thanks to infra;
- openstack/neutron pep8/unit/doc/cover jobs are already constrained in
Liberty and master;
- for *aas repos, pc_m is currently on top of it, following up on my
original patches to introduce tox targets and make them voting in master
gate; we also look into backporting targets and gate setup into Liberty.


This is good progress.


Note that the new alembic release still broke the openstack/neutron repo
(functional job). This is because even though the job uses devstack to
bootstrap the environment, it still calls tox to start the tests, which
results in a new venv being prepared and used with no constraints applied.
The same problem probably affects the fullstack and api jobs since they use
a similar setup approach.


Is there a bug for this?  If not, I'm considering filing a bug with at
least High importance to track this.  I'd be tempted to mark it as
Critical because when this stuff breaks, it generates a Critical bug.
But, since it isn't blocking anyone *at the moment* maybe High is
better.


I haven’t reported it. Please do.


So I have closing the remaining loophole on my personal agenda; but note
that the agenda is really packed pretty much all the time, hence no time
guarantees; so if someone has cycles to help make the neutron gate more
fault proof, don't hesitate to volunteer, I am extremely happy to help
with it.


I understand.  We're all getting pulled in many directions.  My own
agenda is so over-full I don't think anyone could depend on me to
follow through with this.  Is there anyone who could volunteer to tie
up some of these loose ends for Neutron?


Hopefully. :)

Ihar



Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-17 Thread Michał Jastrzębski
Hey,

Since our instances (and qemu) are in a different container than the nova
services, there is no reason (specific to Kolla) to meddle with them.
Of course that would be required if you'd like to upgrade qemu, but that's
beyond the scope of this change. For now we assume that a project upgrade
won't affect running VMs, or if that is the case, the operator will have
to migrate them himself (or herself). The problem is that with orchestrated
Ansible it will require a bit more tinkering to run the playbook on just
one node (namely the one we migrated from), but that's doable. Our default
upgrade strategy assumes that there is no downtime-causing change.

Michal



Re: [openstack-dev] [openstack-ansible] Mid Cycle Sprint

2015-12-17 Thread Jesse Pretorius
Hi everyone,

Thank you all for responding and giving your feedback.

Given that the Ops Mid Cycle [1] is in the UK (15-16 Feb) and AnsibleFest
[2] is also in the UK (18 Feb), and also that everyone seems ok with the UK
as a location for the mid cycle then we shall have it in the UK!

The Ops Mid Cycle organisers have also graciously offered for us to be a
part of the Ops Mid Cycle agenda. I would like as many of us as possible to
play a part in the Ops Mid Cycle - listening, learning and contributing
where possible. As such, I'd like us to limit the number of sessions we
have within the Ops Mid Cycle. Let's have a few open design-style sessions.

Thereafter, as discussed in today's community meeting [3], we can break
down into work sessions where we can work on specific goals. This will
happen on the Wednesday 17 Feb and we will be hosted at the Rackspace
offices in Hayes (West London).

I've setup an Etherpad [4] to gather proposals for Fishbowl Sessions (at
the Ops Mid Cycle) and Work Sessions (for 17 Feb). Please add any sessions
you'd like to facilitate/moderate. If there are sessions there, then please
feel free to add your +1 to show that you'd like to see it.

As discussed in the meeting today [3] I'm on holiday until the New Year.
Please all have a wonderful time over the year's end and I'll see you all
bright-eyed and bushy tailed next year!

Best regards,

Jesse
IRC: odyssey4me

[1] https://etherpad.openstack.org/p/MAN-ops-meetup
[2] http://www.ansible.com/ansiblefest
[3]
http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2015/openstack_ansible_meeting.2015-12-17-16.01.log.html
[4] https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle


[openstack-dev] New things in Gerrit 2.11 to enjoy

2015-12-17 Thread Sean Dague
While I realize there is some sentiment about people not liking the UI
changes in Gerrit 2.11, I thought I'd provide you with some reasons to
like new gerrit.

New Search Strings:

* patch size

delta:<=10 - show only patches with <= 10 lines of change

* scoring by group

label:Code-Review<=-1,nova-core

you can now filter scores by a group, so that if you are filtering on -1s,
you only see those from the core team, and new folks -1ing things
don't take a change off your radar.

* message:/comment: queries actually work now with more than one word.


New UI features:

There is a separate Mergeable field (is:mergeable) which lets you know
when the code in question is no longer mergeable with master.

The column 3 on the change page includes the following:

* Related Changes - all changes in the linear patch series

* Conflicts With - all open patches in the system that will conflict
with this one

This is great for discovering duplicates for the same fix.

* Same topic - anything with the same topic, again for group reviewing.


Inline Edit:

You can now inline edit the entire patch. Click the Edit button above
the list of files, and you go into edit mode, and can fix the commit
message or any files. This pops you into a separate screen where you
'Save' the change, then eventually "Publish Edit" on the main page.

There is also the "Follow Up" button where it builds you an empty follow
up patch that you can inline edit to fix something. Especially good if
you want to just follow up with a typo fix.


Yes, a lot of the UI elements have moved around from the old change screen
(even from the new one in old gerrit), so it will take some getting
used to. However, a lot of these new features are quite nice for
increasing review productivity once you get used to some of the new widgets.


-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-17 Thread Carl Baldwin
On Thu, Dec 17, 2015 at 5:29 AM, Ihar Hrachyshka  wrote:
> I will update on the Neutron side of things since it seems I have some
> knowledge from the field.
>
> So first, on resources: we have Sachi and lifeless from infra side + Ihar
> and pc_m looking into expanding constrained jobs in Neutron world.

This is great.  Thank you for your efforts.

> Current status is:
> - all devstack based jobs are already constrained thanks to infra;
> - openstack/neutron pep8/unit/doc/cover jobs are already constrained in
> Liberty and master;
> - for *aas repos, pc_m is currently on top of it, following up on my
> original patches to introduce tox targets and make them voting in master
> gate; we also look into backporting targets and gate setup into Liberty.

This is good progress.

> Note that the new alembic release still broke the openstack/neutron repo
> (functional job). This is because even though the job uses devstack to
> bootstrap the environment, it still calls tox to start the tests, which
> results in a new venv being prepared and used with no constraints applied.
> The same problem probably affects the fullstack and api jobs since they
> use a similar setup approach.

Is there a bug for this?  If not, I'm considering filing a bug with at
least High importance to track this.  I'd be tempted to mark it as
Critical because when this stuff breaks, it generates a Critical bug.
But, since it isn't blocking anyone *at the moment* maybe High is
better.

> So I have closing the remaining loophole on my personal agenda; but note
> that the agenda is really packed pretty much all the time, hence no time
> guarantees; so if someone has cycles to help make the neutron gate more
> fault proof, don't hesitate to volunteer, I am extremely happy to help with
> it.

I understand.  We're all getting pulled in many directions.  My own
agenda is so over-full I don't think anyone could depend on me to
follow through with this.  Is there anyone who could volunteer to tie
up some of these loose ends for Neutron?

Carl



Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-17 Thread Ihar Hrachyshka

Carl Baldwin  wrote:

On Thu, Dec 17, 2015 at 5:29 AM, Ihar Hrachyshka wrote:
I will update on the Neutron side of things since it seems I have some
knowledge from the field.

So first, on resources: we have Sachi and lifeless from infra side + Ihar
and pc_m looking into expanding constrained jobs in Neutron world.


This is great.  Thank you for your efforts.


Current status is:
- all devstack based jobs are already constrained thanks to infra;
- openstack/neutron pep8/unit/doc/cover jobs are already constrained in
Liberty and master;
- for *aas repos, pc_m is currently on top of it, following up on my
original patches to introduce tox targets and make them voting in master
gate; we also look into backporting targets and gate setup into Liberty.


This is good progress.


Note that the new alembic release still broke the openstack/neutron repo
(functional job). This is because even though the job uses devstack to
bootstrap the environment, it still calls tox to start the tests, which
results in a new venv being prepared and used with no constraints applied.
The same problem probably affects the fullstack and api jobs since they use
a similar setup approach.


Is there a bug for this?  If not, I'm considering filing a bug with at
least High importance to track this.  I'd be tempted to mark it as
Critical because when this stuff breaks, it generates a Critical bug.
But, since it isn't blocking anyone *at the moment* maybe High is
better.


I haven’t reported it. Please do.




So I have closing the remaining loophole on my personal agenda; but note
that the agenda is really packed pretty much all the time, hence no time
guarantees; so if someone has cycles to help make the neutron gate more
fault proof, don't hesitate to volunteer, I am extremely happy to help
with it.


I understand.  We're all getting pulled in many directions.  My own
agenda is so over-full I don't think anyone could depend on me to
follow through with this.  Is there anyone who could volunteer to tie
up some of these loose ends for Neutron?


Hopefully. :)

Ihar



[openstack-dev] [Horizon] [Cinder] [Nova] [Neutron] Gathering quota usage data in Horizon

2015-12-17 Thread Timur Sufiev
Hello, folks!

I'd like to initiate a discussion of the feature request I'm going to make
on behalf of Horizon to every core OpenStack service which supports Quota
feature, namely Cinder, Nova and Neutron.

Although all three services' APIs support special calls to get current
quota limitations (Nova and Cinder allow getting and updating both per-tenant
and default cloud-wide limitations, Neutron allows it only for
per-tenant limitations), there is no special call in any of these services
to get current per-tenant usage of quota. Because of that, Horizon needs to
get, say for the 'volumes' quota, a list of Cinder volumes in the current
tenant and then just calculate its length [1]. When there are really a lot
of entities in a tenant - instances/volumes/security groups/whatever - all
these calls add up and make rendering pages in Horizon much slower than
it could be. Is it possible to provide special API calls to alleviate this?

[1]
https://github.com/openstack/horizon/blob/9.0.0.0b1/openstack_dashboard/usage/quotas.py#L350
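To make the cost concrete, here is an illustrative sketch contrasting the two approaches. `FakeClient` and both helper functions are hypothetical stand-ins, not real Horizon or service-client code.

```python
# Illustrative only: FakeClient stands in for a real service client.

def volumes_in_use_by_listing(client):
    # Today's approach: fetch every volume record just to count them.
    return len(client.list_volumes())


def volumes_in_use_by_usage_api(client):
    # Requested approach: one small call returning per-tenant usage.
    return client.quota_usage()['volumes']['in_use']


class FakeClient(object):
    def list_volumes(self):
        # In a real tenant this payload can be thousands of records.
        return [{'id': i} for i in range(5)]

    def quota_usage(self):
        return {'volumes': {'in_use': 5, 'limit': 10}}


client = FakeClient()
print(volumes_in_use_by_listing(client))    # 5
print(volumes_in_use_by_usage_api(client))  # 5
```

Both calls return the same number, but the listing variant transfers the full record set to learn a single integer, which is the slowdown described above.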


Re: [openstack-dev] [glance][drivers] Spec freeze approaching: Review priorities

2015-12-17 Thread Flavio Percoco

On 09/12/15 18:52 -0430, Flavio Percoco wrote:

Greetings,

To all Glance drivers and people interested in following up on Glance
specs. I've added to our meeting agenda etherpad[0] the list of review
priorities for specs.

Please bear in mind that our spec freeze is approaching and we need
to provide as much feedback as possible on the proposed specs so that
spec writers have enough time to address our comments.

As a reminder, the spec freeze for Glance will start on Mon 28th and
it'll end on Jan 1st.

Thanks everyone for your efforts,
Flavio

[0] https://etherpad.openstack.org/p/glance-drivers-meeting-agenda




Just another heads up that the above deadline is getting closer and
closer!

To all drivers, please help reviewing as many specs as possible. To
spec owners, keep an eye on your specs and address comments in a
timely manner so we can have them ready and merged in time.

Cheers,
Flavio

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread SURO

Hongbin,
Very useful pointers! Thanks for bringing up the relevant contexts!

The proposal to block here for consecutive operations on the same container
is the approach to start with. We can follow up with a wait-queue
implementation - that way the approach gets refined over time. If you
feel strongly, I am okay implementing the wait queue on the first go
itself.

[ I felt the step-by-step approach brings the code in sizeable, easier-to-review chunks ]

By the way, I think the scope of the bay lock and the scope of a
per-bay-per-container operation are different too, in terms of blocking.


I have some confusion about non-blocking bay-operations for horizontal
scale [1] -
"Heat will be having concurrency support, so we can rely on heat for
the concurrency issue for now and drop the baylock implementation."
- if a user issues two consecutive updates on a Bay, and the updates go
through different magnum-conductors,
they can arrive at Heat in a different order, resulting in a different
state of the bay. How Heat's concurrency will prevent that is not very
clear to me. [ Take the example of 'magnum bay-update k8sbay replace
node_count=100' followed by 'magnum bay-update k8sbay replace
node_count=10' ]



[1] - 
https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale 
(Line 33)
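The per-container blocking described above could be sketched as a conductor-local map of container-id to lock. This is an assumption of one possible shape, not Magnum code:

```python
# Minimal sketch (not Magnum code) of the container -> executing-thread
# map: a request for a container-in-execution blocks until the running
# operation finishes, while operations on different containers proceed
# in parallel.
import threading
from collections import defaultdict


class ContainerOpSerializer(object):
    def __init__(self):
        self._guard = threading.Lock()                  # protects the map
        self._locks = defaultdict(threading.Lock)       # one lock per container

    def run(self, container_id, operation, *args, **kwargs):
        with self._guard:
            lock = self._locks[container_id]
        with lock:                                      # serialize per container
            return operation(*args, **kwargs)


serializer = ContainerOpSerializer()
print(serializer.run('container-1', lambda: 'started'))  # started
```

A later wait-queue refinement would replace the blocking `with lock:` with enqueueing the operation for the thread that currently holds the lock.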


Regards,
SURO
irc//freenode: suro-patz

On 12/17/15 8:10 AM, Hongbin Lu wrote:

Suro,

FYI. Previously, we tried a distributed lock implementation for bay operations
(here are the patches [1,2,3,4,5]). However, after several discussions online
and offline, we decided to drop the blocking implementation for bay operations
in favor of a non-blocking implementation (which is not implemented yet). You can
find more discussion here [6,7].

For the async container operations, I would suggest considering a non-blocking
approach first. If that is impossible and we need a blocking implementation, I
suggest using the bay operations patches below as a reference.

[1] https://review.openstack.org/#/c/171921/
[2] https://review.openstack.org/#/c/172603/
[3] https://review.openstack.org/#/c/172772/
[4] https://review.openstack.org/#/c/172773/
[5] https://review.openstack.org/#/c/172774/
[6] https://blueprints.launchpad.net/magnum/+spec/horizontal-scale
[7] https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: December-16-15 10:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: s...@yahoo-inc.com
Subject: Re: [openstack-dev] [magnum] Magnum conductor async container 
operations



On Dec 16, 2015, at 6:24 PM, Joshua Harlow  wrote:

SURO wrote:

Hi all,
Please review and provide feedback on the following design proposal
for implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0] 2. Every time,
Magnum-conductor(Mcon) receives a container-operation-request from
Magnum-API(Mapi), it will do the initial validation, housekeeping and
then pick a thread from the executor_threadpool to execute the rest
of the operations. Thus Mcon will return from the RPC request context
much faster without blocking the Mapi. If the executor_threadpool is
empty, Mcon will execute in a manner it does today, i.e.
synchronously - this will be the rate-limiting mechanism - thus
relaying the feedback of exhaustion.
[Phase0]
How often we are hitting this scenario, may be indicative to the
operator to create more workers for Mcon.
3. Blocking class of operations - There will be a class of
operations, which can not be made async, as they are supposed to
return result/content inline, e.g. 'container-logs'. [Phase0] 4.
Out-of-order considerations for NonBlocking class of operations -
there is a possible race around condition for create followed by
start/delete of a container, as things would happen in parallel. To
solve this, we will maintain a map of a container and executing
thread, for current execution. If we find a request for an operation
for a container-in-execution, we will block till the thread completes
the execution. [Phase0]

Does whatever do these operations (mcon?) run in more than one process?

Yes, there may be multiple copies of magnum-conductor running on separate hosts.


Can it be requested to create in one process then delete in another?
If so is that map some distributed/cross-machine/cross-process map
that will be inspected to see what else is manipulating a given
container (so that the thread can block until that is not the case...
basically the map is acting like a operation-lock?)

That’s how I interpreted it as well. This is a race prevention technique so
that we don’t attempt to act on a resource until it is ready. Another way to
deal with this is to check the state of the resource, and return a “not

[openstack-dev] [nova] (Live migration track) Libvirt storage pools initial step

2015-12-17 Thread Matthew Booth
I'm off now for the holiday period, so I'm sharing what I'm sitting on.
This represents a bit over a week of staring at code, and a couple of days
of actual keyboard banging. It doesn't run. Details are in the commit
message, including where I think I'm going with this. Read the commit
message like a slightly more detailed spec.

  https://review.openstack.org/259148

I have a reasonable idea of where I want to go with this, but finding a
consumable set of steps to get there is challenging. The current code makes
lots of assumptions about storage in lots of places. Consequently, adding
to it without regressing is becoming increasingly difficult. Making the
wholesale change to libvirt storage pools, supporting multiple layouts in
the transition period, would require updating these assumptions everywhere
they exist in the code. My initial goal is to centralise these assumptions
whilst making minimal/no functional changes. Once all the code which
assumes on-disk layout is in one place, we can be more confident in making
changes to it, and more easily validate it. We can also more easily see
what the current assumptions are. This will also make it easier for people
to continue adding storage-related features to the libvirt driver.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


[openstack-dev] [Glance] No meetings for the next 2 weeks

2015-12-17 Thread Flavio Percoco

Greetings,

We discussed this in our last meeting (Dec 17th, 2015) and agreed to
skip the next two meetings. Therefore, we won't have Glance meetings
on the following dates:

- Dec 24th, 2015
- Dec 31st, 2015

See you all in 2016 on the 7th of January to have our very first
meeting of the year.

Cheers,
Flavio

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Horizon] [Cinder] [Nova] [Neutron] Gathering quota usage data in Horizon

2015-12-17 Thread Ivan Kolodyazhny
Hi Timur,

Did you try this Cinder API [1]?  Here [2] is cinderclient output.



[1]
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L33
[2] http://paste.openstack.org/show/482225/
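For reference, a hedged sketch of consuming a usage-style quota set like the one in [2]. The field names (`limit`, `in_use`, `reserved`) are an assumption based on that paste, and `remaining_quota` is a hypothetical helper:

```python
# Assumed quota-set shape with usage included:
#   {'volumes': {'limit': 10, 'in_use': 3, 'reserved': 1}, ...}

def remaining_quota(quota_set, resource):
    """How many more of `resource` the tenant may create.

    A limit of -1 conventionally means unlimited.
    """
    entry = quota_set[resource]
    if entry['limit'] == -1:
        return float('inf')
    return entry['limit'] - entry['in_use'] - entry.get('reserved', 0)


sample = {'volumes': {'limit': 10, 'in_use': 3, 'reserved': 1}}
print(remaining_quota(sample, 'volumes'))  # 6
```

With a call like this available from each service, Horizon could render quota gauges without listing every resource in the tenant.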

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Thu, Dec 17, 2015 at 8:41 PM, Timur Sufiev  wrote:

> Hello, folks!
>
> I'd like to initiate a discussion of the feature request I'm going to make
> on behalf of Horizon to every core OpenStack service which supports Quota
> feature, namely Cinder, Nova and Neutron.
>
> Although all three services' APIs support special calls to get current
> quota limitations (Nova and Cinder allow getting and updating both per-tenant
> and default cloud-wide limitations, Neutron allows it only for
> per-tenant limitations), there is no special call in any of these services
> to get current per-tenant usage of quota. Because of that, Horizon needs to
> get, say for the 'volumes' quota, a list of Cinder volumes in the current
> tenant and then just calculate its length [1]. When there are really a lot
> of entities in a tenant - instances/volumes/security groups/whatever - all
> these calls add up and make rendering pages in Horizon much slower than
> it could be. Is it possible to provide special API calls to alleviate this?
>
> [1]
> https://github.com/openstack/horizon/blob/9.0.0.0b1/openstack_dashboard/usage/quotas.py#L350
>


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-17 Thread Sergii Golovatiuk
Hi,

On Thu, Dec 17, 2015 at 1:09 AM, Evgeniy L  wrote:

> Hi,
>
> Since older Postgres doesn't introduce bugs and won't harm new features,
> I would vote for downgrading to 9.2.
>
> The reasons are:
> 1. not to support own package for Centos (as far as I know 9.3 for Ubuntu
> is already there)
> 2. should Fuel some day become part of upstream CentOS? If yes, or if there
> is even a small probability that it will, we should be as compatible as
> possible with the upstream repo. If we don't consider such a possibility,
> it doesn't really matter, because the user will have to connect an external
> repo anyway.
>

+100


> Since we already use Postgres-specific features, we should spawn a
> separate thread on whether we should continue doing that, and whether
> there is a real need to support e.g. mysql.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 3:58 PM, Igor Kalnitsky 
> wrote:
>
>> > From what I understand, we are using 9.2 since the CentOS 7 switch. Can
>> > anyone point me to a bug caused by that?
>>
>> AFAIK, there's no such bugs. Some folks have just *concerns*. Anyway,
>> it's up to packaging team to decide whether to package or not.
>>
>> From Nailgun POV, I'd like to see classical RDBMS schemas as much as
>> possible, and do not rely on database backend and its version.
>>
>> On Wed, Dec 16, 2015 at 11:30 AM, Bartłomiej Piotrowski
>>  wrote:
>> > On 2015-12-16 10:14, Bartłomiej Piotrowski wrote:
>> >> On 2015-12-16 08:23, Mike Scherbakov wrote:
>> >>> We could consider downgrading in Fuel 9.0, but I'd very carefully
>> >>> consider that. As Vladimir Kuklin said, there are may be other users
>> who
>> >>> already rely on 9.3 for some of their enhancements.
>> >>
>> >> That will be way too late for that, as it will make upgrade procedure
>> >> more complicated. Given no clear upgrade path from 7.0 to 8.0, it
>> sounds
>> >> like perfect opportunity to use what is provided by base distribution.
>> >> Are there actual users facilitating 9.3 features or is it some kind of
>> >> Invisible Pink Unicorn?
>> >>
>> >> Bartłomiej
>> >>
>> >
>> > I also want to remind that we are striving for possibility to let users
>> > do 'yum install fuel' (or apt) to make the magic happen. There is not
>> > much magic in requiring potential users to install specific PostgreSQL
>> > version because someone said so. It's either supporting the lowest
>> > version available (CentOS 7 – 9.2, Ubuntu 14.04 – 9.3, Debian Jessie –
>> > 9.4, openSUSE Leap – 9.4) or "ohai add this repo with our manually
>> > imported and rebuilt EPEL package".
>> >
>> > From what I understand, we are using 9.2 since the CentOS 7 switch. Can
>> > anyone point me to a bug caused by that?
>> >
>> > BP
>> >
>> >


Re: [openstack-dev] [glance][tempest][defcore] Process to improve test coverage in tempest

2015-12-17 Thread Flavio Percoco

On 17/12/15 02:00 +, Egle Sigler wrote:

Thank you Flavio for bringing this up! We are using tempest tests for
DefCore testing, and we would like to work with anyone willing to increase
coverage in any of the currently covered capabilities. We would also like
to hear from teams when they plan to remove, change, or
rename tests, as that could affect what DefCore tests.

Upcoming DefCore guidelines and tests:
https://github.com/openstack/defcore/blob/master/2016.01.json


+1

It's taking me a bit longer than I expected, but I'm working on a list
of tests that would be great to have in tempest and some that would
perhaps be better to remove.

Thanks for the feedback,
Flavio



Thank you,
Egle

On 12/8/15, 1:25 PM, "Flavio Percoco"  wrote:


Greetings,

I just reviewed a patch in tempest that proposed adding new tests for
the Glance's task API. While I believe it's awesome that folks in the
tempest team keep adding missing tests for projects, I think it'd be
better for the tempest team, the project's team and defcore if we'd
discuss these tests before they are worked on. This should help people
avoid wasting time.

I believe these cases are rare but the benefits of discussing missing
tests across teams could also help prioritizing the work based on what
the teams goals are, what the defcore team needs are, etc.

So, I'd like to start improving this process by inviting folks from
the tempest team to join project's meeting whenever new tests are
going to be worked on.

I'd also like to invite PTLs (or anyone, really) from each team to go
through what's in tempest and what's missing and help this team
improve the test suite. Remember that these tests are also used by the
defcore team and they are not important just for the CI but have an
impact on other areas as well.

I'm doing the above for Glance and I can't stress enough how important
it is for projects to do the same.

Do teams have a different workflow/process to increase tests coverage
in tempest?

Cheers,
Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread SURO

Josh,

Thanks for bringing up this discussion. Modulo hashing introduces the 
possibility of a 'window of inconsistency', and consistent hashing is 
better at handling that dynamism.


BUT, for the problem at hand I think modulo hashing is good enough, as the 
number of worker instances for the conductor in OpenStack is managed 
through config - a change to which requires a restart of the 
conductor. If the conductor is restarted, then the 'window of 
inconsistency' does not occur for the situation we are discussing.



Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 11:39 PM, Joshua Harlow wrote:

SURO wrote:

Please find the reply inline.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 7:19 PM, Adrian Otto wrote:

On Dec 16, 2015, at 6:24 PM, Joshua Harlow 
wrote:

SURO wrote:

Hi all,
Please review and provide feedback on the following design 
proposal for

implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time, Magnum-conductor(Mcon) receives a
container-operation-request from Magnum-API(Mapi), it will do the
initial validation, housekeeping and then pick a thread from the
executor_threadpool to execute the rest of the operations. Thus Mcon
will return from the RPC request context much faster without blocking
the Mapi. If the executor_threadpool is empty, Mcon will execute in a
manner it does today, i.e. synchronously - this will be the
rate-limiting mechanism - thus relaying the feedback of exhaustion.
[Phase0]
How often we are hitting this scenario, may be indicative to the
operator to create more workers for Mcon.
3. Blocking class of operations - There will be a class of 
operations,

which can not be made async, as they are supposed to return
result/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for NonBlocking class of operations -
there is a possible race around condition for create followed by
start/delete of a container, as things would happen in parallel. To
solve this, we will maintain a map of a container and executing 
thread,

for current execution. If we find a request for an operation for a
container-in-execution, we will block till the thread completes the
execution. [Phase0]
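The map of point 4 effectively acts as a per-container operation lock in front of the executor_threadpool. A minimal sketch of that idea (the names and classes are illustrative, not Magnum's actual code, and the synchronous fallback of point 2 is omitted):

```python
import threading
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor


class ContainerExecutor:
    """Sketch of the executor_threadpool plus the per-container
    operation-lock map from point 4 (illustrative, not Magnum code)."""

    def __init__(self, pool_size=4):
        self._pool = ThreadPoolExecutor(max_workers=pool_size)
        # container-id -> lock; holding the lock marks the container
        # as "in execution".
        self._locks = defaultdict(threading.Lock)

    def submit(self, container_id, operation):
        def _run():
            # If another operation on this container is still executing,
            # block here until it completes.
            with self._locks[container_id]:
                return operation()
        return self._pool.submit(_run)


executor = ContainerExecutor(pool_size=2)
results = []
# create followed by start on the same container: the lock serializes
# them even though both run on pool threads.
f1 = executor.submit("c1", lambda: results.append("create"))
f2 = executor.submit("c1", lambda: results.append("start"))
f1.result()
f2.result()
executor._pool.shutdown()
```

Note that with a plain lock the two operations are serialized but their relative order is not guaranteed; a real implementation would need per-container FIFO ordering on top of this.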
Does whatever do these operations (mcon?) run in more than one 
process?

Yes, there may be multiple copies of magnum-conductor running on
separate hosts.


Can it be requested to create in one process then delete in another?
If so is that map some distributed/cross-machine/cross-process map
that will be inspected to see what else is manipulating a given
container (so that the thread can block until that is not the case...
basically the map is acting like a operation-lock?)

Suro> @Josh, just after this, I had mentioned

"The approach above puts a prerequisite that operations for a given
container on a given Bay would go to the same Magnum-conductor 
instance."


Which suggested multiple instances of magnum-conductors. Also, my idea
for implementing this was as follows - magnum-conductors have an 'id'
associated, which carries the notion of [0 - (N-1)]th instance of
magnum-conductor. Given a request for a container operation, we would
always have the bay-id and container-id. I was planning to use
'hash(bay-id, key-id) modulo N' to be the logic to ensure that the right
instance picks up the intended request. Let me know if I am missing any
nuance of AMQP here.
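The 'hash(bay-id, key-id) modulo N' dispatch described above can be sketched as follows (the function name and the md5-based hash are illustrative assumptions, not Magnum's actual code):

```python
import hashlib


def pick_conductor(bay_id, container_id, num_conductors):
    """Deterministically map a (bay-id, container-id) pair onto one of
    N magnum-conductor instances -- the 'hash(...) modulo N' idea."""
    key = ("%s:%s" % (bay_id, container_id)).encode("utf-8")
    return int(hashlib.md5(key).hexdigest(), 16) % num_conductors


# The same pair always lands on the same conductor instance, which is
# what gives the "same conductor handles a given container" guarantee.
first = pick_conductor("bay-1", "container-42", 4)
second = pick_conductor("bay-1", "container-42", 4)
```

Using a stable hash (rather than Python's builtin `hash()`, which is randomized per process) matters here, since different conductor processes must agree on the mapping.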


Unsure about nuance of AMQP (I guess that's an implementation detail 
of this); but what this sounds like is similar to the hash-rings other 
projects have built (ironic uses one[1], ceilometer is slightly 
different afaik, see 
http://www.slideshare.net/EoghanGlynn/hash-based-central-agent-workload-partitioning-37760440 
and 
https://github.com/openstack/ceilometer/blob/master/ceilometer/coordination.py#L48).


The typical issue with modulo hashing is changes in N (whether adding 
new conductors or deleting them) and what that change in N does to 
ongoing requests, how do you change N in an online manner (and so on); 
typically with modulo hashing a large amount of keys get shuffled 
around[2]. So just a thought, but a (consistent) hashing 
routine/ring... might be worthwhile to look into, and/or talk with 
those other projects to see what they have been up to.


My 2 cents,

[1] 
https://github.com/openstack/ironic/blob/master/ironic/common/hash_ring.py


[2] https://en.wikipedia.org/wiki/Consistent_hashing
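For comparison, here is a toy consistent-hash ring in the spirit of [1]/[2] (a sketch under assumed names, not ironic's actual implementation), showing that growing N only remaps a fraction of the keys:

```python
import bisect
import hashlib


def _hash(value):
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)


class HashRing:
    """Toy consistent-hash ring: each node owns several points on the
    ring and a key maps to the next point clockwise, so adding a node
    only moves roughly 1/N of the keys (unlike plain modulo hashing)."""

    def __init__(self, nodes, replicas=64):
        self._ring = sorted(
            (_hash("%s-%d" % (node, i)), node)
            for node in nodes
            for i in range(replicas)
        )
        self._points = [point for point, _ in self._ring]

    def get_node(self, key):
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]


keys = ["container-%d" % i for i in range(1000)]
three = HashRing(["mcon-0", "mcon-1", "mcon-2"])
four = HashRing(["mcon-0", "mcon-1", "mcon-2", "mcon-3"])
moved = sum(1 for k in keys if three.get_node(k) != four.get_node(k))
# With modulo hashing, going from N=3 to N=4 would remap ~75% of keys;
# here only the keys claimed by the new node move (roughly a quarter).
```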


That’s how I interpreted it as well. This is a race prevention
technique so that we don’t attempt to act on a resource until it is
ready. Another way to deal with this is check the state of the
resource, and return a “not ready” error if it’s not ready yet. If
this happens in a part of the system that is unattended by a user, we
can re-queue the call to retry after a minimum delay so that 

Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-17 Thread Davanum Srinivas
Thinking more about it, the only change we'll have is that if someone
files an oslo-specs proposal for oslo.policy, we need to tell them to
switch over to keystone-specs. We could add notes in the README etc. to
make this apparent. So I am +1 to making this move.

Brant, other keystone cores,
Can you please file the governance review request and we can make sure
oslo cores chime in there? and make it official?

Thanks,
Dims


On Thu, Dec 17, 2015 at 2:40 PM, Flavio Percoco  wrote:
> On 16/12/15 18:51 -0800, Morgan Fainberg wrote:
>>
>> For what is is worth, we originally proposed oslo.policy to graduate to
>> Keystone when we were converting to the library. I still think it belongs
>> in
>> keystone (as long as the oslo team doesn't mind that long-term keystone
>> team
>> owns something in the oslo. namespace).
>>
>> The short term adding keystone-core should get some more eyes on the
>> reviews,
>> so +1 to that.
>
>
>
> Just want to +1 all the above.
>
> It'd be great if we can finally hand the library over to the keystone
> team, where I think it belongs.
>
> Cheers,
> Flavio
>
>>
>> --Morgan
>>
>> On Wed, Dec 16, 2015 at 4:08 PM, Davanum Srinivas 
>> wrote:
>>
>>As an interim measure, added keystone-core to oslo-policy-core[1]
>>
>>Thanks,
>>Dims
>>
>>[1] https://review.openstack.org/#/admin/groups/556,members
>>
>>On Wed, Dec 16, 2015 at 10:40 PM, Dolph Mathews
>> 
>>wrote:
>>>
>>> On Wed, Dec 16, 2015 at 1:33 PM, Davanum Srinivas 
>>wrote:
>>>>
>>>> Brant,
>>>>
>>>> I am ok either way, guess the alternative was to add keystone-core
>>>> directly to the oslo.policy core group (can't check right now).
>>>
>>>
>>> That's certainly reasonable, and kind of what we did with pycadf.
>>>
>>>>
>>>>
>>>> The name is very possibly going to create confusion
>>>
>>>
>>> I assume you're not referring to unnecessarily changing the name of
>> the
>>> project itself (oslo.policy) just because there might be a shift in
>> the
>>> group of maintainers! Either way, let's definitely not do that.
>>>
>>>>
>>>> -- Dims
>>>>
>>>> On Wed, Dec 16, 2015 at 7:22 PM, Jordan Pittier
>>>>  wrote:
>>>> > Hi,
>>>> > I am sure oslo.policy would be good under Keystone's governance.
>> But I
>>>> > am
>>>> > not sure I understood what's wrong in having oslo.policy under the
>>oslo
>>>> > program ?
>>>> >
>>>> > Jordan
>>>> >
>>>> > On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson 
>> wrote:
>>>> >>
>>>> >>
>>>> >> I'd like to propose moving oslo.policy from the oslo program to
>> the
>>>> >> keystone program. Keystone developers know what's going on with
>>>> >> oslo.policy
>>>> >> and I think are more interested in what's going on with it so
>> that
>>>> >> reviews
>>>> >> will get proper vetting, and it's not like oslo doesn't have
>> enough
>>>> >> going on
>>>> >> with all the other repos. Keystone core has equivalent stringent
>>>> >> development
>>>> >> policy that we already enforce with keystoneclient and
>> keystoneauth,
>>so
>>>> >> oslo.policy isn't going to be losing any stability.
>>>> >>
>>>> >> If there aren't any objections, let's go ahead with this. I heard
>>this
>>>> >> requires a change to a governance repo, and gerrit permission
>> changes
>>>> >> to
>>>> >> make keystone-core core, and updates in oslo.policy to change
>> some
>>docs
>>>> >> or
>>>> >> links. Any oslo.policy specs that are currently proposed
>>>> >>
>>>> >> - Brant
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>
>> __
>>>> >> OpenStack Development Mailing List (not for usage questions)
>>>> >> Unsubscribe:
>>>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> >
>>
>> __
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe:
>>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Davanum Srinivas :: https://twitter.com/dims
>>>>
>>>>
>>
>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?
>>subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>

Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-17 Thread Flavio Percoco

On 16/12/15 18:51 -0800, Morgan Fainberg wrote:

For what is is worth, we originally proposed oslo.policy to graduate to
Keystone when we were converting to the library. I still think it belongs in
keystone (as long as the oslo team doesn't mind that long-term keystone team
owns something in the oslo. namespace).

The short term adding keystone-core should get some more eyes on the reviews,
so +1 to that.



Just want to +1 all the above.

It'd be great if we can finally hand the library over to the keystone
team, where I think it belongs.

Cheers,
Flavio



--Morgan

On Wed, Dec 16, 2015 at 4:08 PM, Davanum Srinivas  wrote:

   As an interim measure, added keystone-core to oslo-policy-core[1]

   Thanks,
   Dims

   [1] https://review.openstack.org/#/admin/groups/556,members

   On Wed, Dec 16, 2015 at 10:40 PM, Dolph Mathews 
   wrote:
   >
   > On Wed, Dec 16, 2015 at 1:33 PM, Davanum Srinivas 
   wrote:
   >>
   >> Brant,
   >>
   >> I am ok either way, guess the alternative was to add keystone-core
   >> directly to the oslo.policy core group (can't check right now).
   >
   >
   > That's certainly reasonable, and kind of what we did with pycadf.
   >
   >>
   >>
   >> The name is very possibly going to create confusion
   >
   >
   > I assume you're not referring to unnecessarily changing the name of the
   > project itself (oslo.policy) just because there might be a shift in the
   > group of maintainers! Either way, let's definitely not do that.
   >
   >>
   >> -- Dims
   >>
   >> On Wed, Dec 16, 2015 at 7:22 PM, Jordan Pittier
   >>  wrote:
   >> > Hi,
   >> > I am sure oslo.policy would be good under Keystone's governance. But I
   >> > am
   >> > not sure I understood what's wrong in having oslo.policy under the
   oslo
   >> > program ?
   >> >
   >> > Jordan
   >> >
   >> > On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson  wrote:
   >> >>
   >> >>
   >> >> I'd like to propose moving oslo.policy from the oslo program to the
   >> >> keystone program. Keystone developers know what's going on with
   >> >> oslo.policy
   >> >> and I think are more interested in what's going on with it so that
   >> >> reviews
   >> >> will get proper vetting, and it's not like oslo doesn't have enough
   >> >> going on
   >> >> with all the other repos. Keystone core has equivalent stringent
   >> >> development
   >> >> policy that we already enforce with keystoneclient and keystoneauth,
   so
   >> >> oslo.policy isn't going to be losing any stability.
   >> >>
   >> >> If there aren't any objections, let's go ahead with this. I heard
   this
   >> >> requires a change to a governance repo, and gerrit permission changes
   >> >> to
   >> >> make keystone-core core, and updates in oslo.policy to change some
   docs
   >> >> or
   >> >> links. Any oslo.policy specs that are currently proposed
   >> >>
   >> >> - Brant
   >> >>
   >> >>
   >> >>
   >> >>
   __
   >> >> OpenStack Development Mailing List (not for usage questions)
   >> >> Unsubscribe:
   >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   >> >>
   >> >
   >> >
   >> >
   >> >
   __
   >> > OpenStack Development Mailing List (not for usage questions)
   >> > Unsubscribe:
   >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   >> >
   >>
   >>
   >>
   >> --
   >> Davanum Srinivas :: https://twitter.com/dims
   >>
   >>
   __
   >> OpenStack Development Mailing List (not for usage questions)
   >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?
   subject:unsubscribe
   >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   >
   >
   >
   >
   __
   > OpenStack Development Mailing List (not for usage questions)
   > Unsubscribe: openstack-dev-requ...@lists.openstack.org?
   subject:unsubscribe
   > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   >



   --
   Davanum Srinivas :: https://twitter.com/dims

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [Neutron] [IPv6] [radvd] Advertise tenant prefixes from router to outside

2015-12-17 Thread Vladimir Eremin
Hi

For now, when an end user creates an IPv6-enabled tenant network and attaches it 
to the virtual router, the only way to set up the external infrastructure to 
route traffic back to the router is DHCPv6 PD[1] - which, unfortunately, is not 
working at all[2]. Other methods, like BGP, are still in development.

BTW, IPv6 Router Advertisements have an option called the Route Information 
Option (RA-RIO)[3] for advertising more specific routes from a gateway. We could 
easily append a section like the following to advertise the tenant prefix 
2001:db8:1::/64 to the public network, provided the provider network router 
outside OpenStack is configured to accept these routes:

interface qg- {
    AdvDefaultLifetime 0;
    route 2001:db8:1::/64 {
    };
};

Cisco accepts it by default AFAIK; Linux needs the sysctl 
net.ipv6.conf.*.accept_ra_rt_info_max_plen set to 64.

Moreover, enabling reception of RA-RIO prefixes in router namespaces allows 
routers to communicate among themselves.

I’ve done PoC patch for it https://gist.github.com/yottatsa/8282e670da16934960b3


[1]: 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/ipv6-prefix-delegation.html
[2]: https://bugs.launchpad.net/neutron/+bug/1505316
[3]: https://tools.ietf.org/html/rfc4191

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread SURO

Josh,
You are correct! magnum-conductor has monkey-patched code, so the 
underlying thread module is actually using greenthread.
- I would use eventlet.greenthread explicitly, as that would enhance 
readability.
- A greenthread can fail to yield by itself if no I/O or blocking call is 
made. But in the present scenario this is not much of a concern, as the 
container-operation execution is light on the client side, and mostly 
blocks for the response from the server after issuing the request.


I will update the proposal with this change.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 11:57 PM, Joshua Harlow wrote:

SURO wrote:

Josh,
Please find my reply inline.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 6:37 PM, Joshua Harlow wrote:



SURO wrote:

Hi all,
Please review and provide feedback on the following design proposal 
for

implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time, Magnum-conductor(Mcon) receives a
container-operation-request from Magnum-API(Mapi), it will do the
initial validation, housekeeping and then pick a thread from the
executor_threadpool to execute the rest of the operations. Thus Mcon
will return from the RPC request context much faster without blocking
the Mapi. If the executor_threadpool is empty, Mcon will execute in a
manner it does today, i.e. synchronously - this will be the
rate-limiting mechanism - thus relaying the feedback of exhaustion.
[Phase0]
How often we are hitting this scenario, may be indicative to the
operator to create more workers for Mcon.
3. Blocking class of operations - There will be a class of operations,
which can not be made async, as they are supposed to return
result/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for NonBlocking class of operations -
there is a possible race around condition for create followed by
start/delete of a container, as things would happen in parallel. To
solve this, we will maintain a map of a container and executing 
thread,

for current execution. If we find a request for an operation for a
container-in-execution, we will block till the thread completes the
execution. [Phase0]
This mechanism can be further refined to achieve more asynchronous
behavior. [Phase2]
The approach above puts a prerequisite that operations for a given
container on a given Bay would go to the same Magnum-conductor 
instance.

[Phase0]
5. The hand-off between Mcon and a thread from executor_threadpool can
be reflected through new states on the 'container' object. These 
states

can be helpful to recover/audit, in case of Mcon restart. [Phase1]

Other considerations -
1. Using eventlet.greenthread instead of real threads => This approach
would require further refactoring the execution code and embed yield
logic, otherwise a single greenthread would block others to progress.
Given, we will extend the mechanism for multiple COEs, and to keep the
approach straight forward to begin with, we will use 
'threading.Thread'

instead of 'eventlet.greenthread'.



Also unsure about the above, not quite sure I connect how greenthread
usage requires more yield logic (I'm assuming you mean the yield
statement here)? Btw if magnum is running with all things monkey
patched (which it seems like
https://github.com/openstack/magnum/blob/master/magnum/common/rpc_service.py#L33 


does) then magnum usage of 'threading.Thread' is a
'eventlet.greenthread' underneath the covers, just fyi.


SURO> Let's consider this -
function A() {
    block B;  // validation
    block C;  // blocking op
}
Now, if we make C a greenthread, as it is, would it not block the entire
thread that runs through all the greenthreads? I assumed, it would and
that's why we have to incorporate finer grain yield into C to leverage
greenthread. If the answer is no, then we can use greenthread.
I will validate which version of threading.Thread was getting used.


Unsure how to answer this one.

If all things are monkey patched then any time a blocking operation 
(i/o, lock acquisition...) is triggered the internals of eventlet go 
through a bunch of jumping around to then switch to another green 
thread (http://eventlet.net/doc/hubs.html). Once u start partially 
using greenthreads and mixing real threads then you have to start 
trying to reason about yielding in certain places (and at that point 
you might as well go to py3.4+ since it has syntax made just for this 
kind of thinking).


Pointer for the thread monkey patching btw:

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L346

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L212

Easy way to see this:

>>> import eventlet
>>> eventlet.monkey_patch()
>>> import thread
>>> thread.start_new_thread.__module__
'eventlet.green.thread'
>>> 

Re: [openstack-dev] [glance]Configure Glance to use multiple backends

2015-12-17 Thread Flavio Percoco

On 17/12/15 15:33 +0800, 陈迪豪 wrote:

Hi all, we have a question about configuring Glance to use multiple ceph
clusters. Currently we're trying to integrate one OpenStack cluster with two or
more ceph clusters. It's meaningful for us to support multiple backends.

Cinder can support different backends or multiple ceph clusters. Nova can boot
from volume can leverage what cinder has done. But how about Glance?



Glance currently doesn't have support for multiple backends. It's been
discussed in the past but, unfortunately, there were other priorities.


If we want to store images in a different ceph cluster, can we upload them
to cinder in advance?


This, however, is possible. In addition, there's on-going work to add a
Cinder driver to glance_store to allow users to store images directly
in volumes.

https://review.openstack.org/#/c/166414/

Cheers,
Flavio




Thanks & regards

-- tobe from UnitedStack




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit Upgrade to ver 2.11 completed.

2015-12-17 Thread Andrea Frittoli
nice, thank you!

On Thu, Dec 17, 2015 at 4:05 PM Zaro  wrote:

> forgot this one as well: https://review.openstack.org/241278
>
> On Thu, Dec 17, 2015 at 7:02 AM, Zaro  wrote:
> > On Thu, Dec 17, 2015 at 12:31 AM, Andrea Frittoli
> >  wrote:
> >> Thanks for the upgrade, and for all the effort you folks put into
> keeping
> >> our gerrit close to upstream!
> >>
> >> One thing that I find inconvenient in the new UI is the size of the
> middle
> >> (test results) column: it's too narrow, no matter how large my browser
> >> window is, or what browser I use - which causes the name of some of the
> jobs
> >> to wrap to a second line, making it really hard to read test results. At
> >> least that's my experience on tempest reviews, where we have a lot of
> jobs,
> >> and rather long names - see [0] for instance.
> >>
> >> Is there any chance via configuration to make that column slightly
> wider?
> >>
> >
> > There are a few proposals already:
> >   https://review.openstack.org/258751
> >   https://review.openstack.org/258744
> >
> >
> >> thank you!
> >>
> >> andrea
> >> [0] https://review.openstack.org/#/c/254274/
> >>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

2015-12-17 Thread Sean M. Collins
On Thu, Dec 17, 2015 at 02:08:42PM EST, Vasudevan, Swaminathan (PNB Roseville) 
wrote:
> Hi Folks,
> I would like to organize a meeting between the Nova and Neutron teams to work 
> on refining the Nova/Neutron notifications for the Live Migration.
> 
> Today we only have Notification from Neutron to Nova on any port status 
> update.
> 
> But we don't have any similar notification from Nova on any Migration state 
> change.
> Neutron L3 will be interested in knowing the state change for vm migration 
> and can take necessary action pro-actively to create the necessary L3 related 
> plumbing that is required.
> 
> Here are some of the bugs that are currently filed with respect to nova live 
> migration and neutron.
> https://bugs.launchpad.net/neutron/+bug/1456073
> https://bugs.launchpad.net/neutron/+bug/1414559
> 
> Please let me know who will be interested in participating in the discussion.
> It would be great if we get some cores attention from "Nova and Neutron".
> 
> Thanks.
> Swaminathan Vasudevan
> Systems Software Engineer (TC)


Cool. Brent and I are inter-project liaisons between Neutron and Nova,
so let us know what we can do to help raise awareness on both sides.

https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] No meetings for the next 2 weeks

2015-12-17 Thread Matthew Treinish

Hi Everyone,

As we discussed during today's QA meeting we'll not be holding the weekly QA
meeting for the next 2 weeks. Most people will be on vacation anyway, so this
likely isn't a big surprise. 

The next meeting will be on Jan. 7th at 0900 UTC.

Thanks,

Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

2015-12-17 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Sean M. Collins,
Thanks for the information.
It would be great if we can bring in the right people from both sides to 
discuss and solve this problem
Please let me know if you can pull in the right people from the nova side and I 
can get the people from the neutron side.

Thanks
Swami

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com] 
Sent: Thursday, December 17, 2015 1:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

On Thu, Dec 17, 2015 at 02:08:42PM EST, Vasudevan, Swaminathan (PNB Roseville) 
wrote:
> Hi Folks,
> I would like to organize a meeting between the Nova and Neutron teams to work 
> on refining the Nova/Neutron notifications for the Live Migration.
> 
> Today we only have Notification from Neutron to Nova on any port status 
> update.
> 
> But we don't have any similar notification from Nova on any Migration state 
> change.
> Neutron L3 will be interested in knowing the state change for vm migration 
> and can take necessary action pro-actively to create the necessary L3 related 
> plumbing that is required.
> 
> Here are some of the bugs that are currently filed with respect to nova live 
> migration and neutron.
> https://bugs.launchpad.net/neutron/+bug/1456073
> https://bugs.launchpad.net/neutron/+bug/1414559
> 
> Please let me know who will be interested in participating in the discussion.
> It would be great if we get some cores attention from "Nova and Neutron".
> 
> Thanks.
> Swaminathan Vasudevan
> Systems Software Engineer (TC)


Cool. Brent and I are inter-project liaisons between Neutron and Nova, so let 
us know what we can do to help raise awareness on both sides.

https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons

--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-17 Thread Thomas Goirand
On 12/16/2015 06:04 PM, Mike Bayer wrote:
> 
> 
> On 12/16/2015 11:53 AM, Sean Dague wrote:
>> On 12/16/2015 11:37 AM, Sean Dague wrote:
>>> On 12/16/2015 11:22 AM, Mike Bayer wrote:


 On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
>
>
> Le 16/12/2015 14:59, Sean Dague a écrit :
>> oslo.db test_migrations is using methods for alembic, which changed in
>> the 0.8.4 release. This ends up causing a unit test failure (at least in
>> the Nova case) that looks like this -
>> http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404
>>
>>
>> There is an oslo.db patch out there
>> https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
>> has been pretty quiet this morning, so no idea how fast this can get out
>> into a release.
>>
>> -Sean
>>
>
> So, it seems that the issue came when
> https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
> Fortunatelt, Mike seems to have a patch in place for Nova in order to
> fix this https://review.openstack.org/#/c/253859/
>
> I'd suggest an intensive review pass on that one to make sure it's OK.

 do you folks have a best practice suggestion on this?  My patch kind of
 stayed twisting in the wind for a week even though those who read it
 would have seen "hey, this is going to break on Alembic's next minor
 release!"I pinged the important people and all on it, but it still
 got no attention.
>>>
>>> Which people were those? I guess none of us this morning knew this was
>>> going to be an issue and were surprised that 12 hours worth of patches
>>> had all failed.
>>>
>>> -Sean
>>
>> Best practice is send an email to the openstack-dev list:
>>
>> Subject: [all] the following test jobs will be broken by Alembic 0.8.4
>> release
>>
>> The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
>> will break Nova unit tests on all branches.
>>
>> The following patch will fix master - .
>>
>> You all will need to backport it as well to all branches.
>>
>>
>> Instead of just breaking the world, and burning 10s to 100 engineer
>> hours in redo tests and investigating and addressing the break after the
>> fact.
> 
> I was hoping to get a thanks for even *testing* unreleased versions of
> my entirely non-OpenStack, upstream projects against OpenStack itself.
>  If I had put in *less* effort here, and just didn't bother the way 100% of all
> other non-OpenStack projects do, then I'd not have been scolded by you.

IMHO, it shouldn't be *tested*, but *gated*. Meaning that such a
disruptive patch should be accepted only when there's a fix in all of
OpenStack (if that is possible, of course, as I don't really know the
details, just this thread...).
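
While such a fix propagates, a common stopgap on the consuming side is an exclusion pin in the requirements file. An illustrative fragment using the version discussed in this thread (not an actual change proposed here):

```
# illustrative requirements.txt pin excluding the known-broken release
alembic>=0.8.0,!=0.8.4
```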

Or do you still consider SQLA / Alembic as just a 3rd party lib for
OpenStack? Wouldn't it be nice to have it maintained directly in
OpenStack infra? Your thoughts?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] [radvd] Advertise tenant prefixes from router to outside

2015-12-17 Thread Carl Baldwin
On Thu, Dec 17, 2015 at 1:30 PM, Vladimir Eremin  wrote:
> Hi
>
> For now, when an end user creates an IPv6-enabled tenant network and attaches
> it to the virtual router, the only way to set up external infrastructure
> to route traffic back to the router is DHCPv6 PD[1]; unfortunately, it’s
> not working at all[2]. Other methods, like BGP, are still in
> development.
>
> BTW, in IPv6 Router Advertisements we have an option called the Route Information
> Option (RA-RIO)[3] to advertise more specific routes from the gateway. We could
> easily append a section like the following to advertise the tenant prefix
> 2001:db8:1::/64 to the public network, provided the provider network router outside
> OpenStack is configured to accept these routes.
>
> interface qg- {
>     AdvDefaultLifetime 0;
>     route 2001:db8:1::/64 {
>     };
> };
>
> Cisco accepts it by default AFAIK, linux needs a sysctl 
> net.ipv6.conf.*.accept_ra_rt_info_max_plen set to 64.
>
> Moreover, enabling reception of RA-RIO prefixes in router namespaces allows
> routers to communicate among themselves.
>
> I’ve done PoC patch for it 
> https://gist.github.com/yottatsa/8282e670da16934960b3

This is an interesting idea.  I've wondered if we could do something
like this before but I didn't know all the details around RA-RIO.  The
problem is that, in general, we have no idea if the subnets behind the
routers are viable in the external network context.  So, we can't just
blindly have routers advertising whatever.

In Mitaka, we're merging a new feature called "address scopes".  We
could limit advertising to only subnets that come from the address
scope matching that of the external network.  If we do this then we'll
know that the subnet came from a pool of addresses that are valid in
the external network context and that the addresses are unique.
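
The filtering rule described here can be sketched in a few lines; everything below is illustrative pseudodata, not actual Neutron code:

```python
# Hedged sketch of the address-scope rule above: advertise only subnets
# whose address scope matches the external network's scope.
# Field names and scope ids are illustrative.

def routes_to_advertise(router_subnets, external_scope_id):
    return [s['cidr'] for s in router_subnets
            if s.get('address_scope_id') == external_scope_id]

subnets = [
    {'cidr': '2001:db8:1::/64', 'address_scope_id': 'scope-ext'},
    {'cidr': '2001:db8:2::/64', 'address_scope_id': 'scope-priv'},
]
print(routes_to_advertise(subnets, 'scope-ext'))  # ['2001:db8:1::/64']
```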

This could be relatively easy to implement on top of the current
address scopes work.  I think this is worth exploring with an RFE.
Would you mind filing an RFE according to the Neutron process [1]?

Carl

[1] 
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][grenade] l3.filters fail to update at test server

2015-12-17 Thread Zong Kai YL Li
>> Hi, all.
>> I met a problem when I submitted a neutron l3 related patch. I updated
>> l3.filters to add a rootwrap filter for the arp command, but on the test server,
>> grenade doesn't seem to update l3.filters, which causes gate-grenade-dsvm-neutron
>> to fail.
>>
>> Has anyone met this problem before? If not yet, can someone please help

>> review [1] ? And [2] is the bug report for this.
>>
>> Thanks.
>> Best regards.
>>
>> [1] https://review.openstack.org/#/c/258758/
>> [2] https://bugs.launchpad.net/grenade/+bug/1527018

> Yes, you need to update rootwrap filters in grenade if they are required

> for the new side of the cloud. But why do you update from-juno scripts for
> that? Does your patch really target Kilo?

Hi, Ihar, I misunderstood that. So it seems I need to create a from-liberty
directory, and for cherry-picking purposes, from-kilo is also needed.

>> I believe that we should always unconditionally update filters with new

>> versions when doing upgrades. Filters should not actually be considered

>> configuration files in the first place since they are tightly coupled with
>> the code that triggers commands.

>> Ihar

> +1.  I was actually very surprised to learn how this is handled and
> had the same thought.

> Carl

Hi, Carl and Ihar. A simple idea I have for now is to make the whole
rootwrap.d directory be updated, not a single filters file. I will try
to do something for this.
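
A minimal sketch of that idea, using illustrative temp paths rather than the real /etc/neutron layout:

```python
import os
import shutil

# Hedged sketch: instead of updating individual filter files, replace the
# whole rootwrap.d directory from the new code tree during upgrade.
# All paths here are illustrative stand-ins.
new_etc = '/tmp/new-neutron-etc/rootwrap.d'   # new release's etc/neutron/rootwrap.d
target_etc = '/tmp/etc-neutron/rootwrap.d'    # /etc/neutron/rootwrap.d on the node

os.makedirs(new_etc, exist_ok=True)
with open(os.path.join(new_etc, 'l3.filters'), 'w') as f:
    f.write('[Filters]\narp: CommandFilter, arp, root\n')

# Unconditionally replace the whole directory, as suggested in the thread.
if os.path.isdir(target_etc):
    shutil.rmtree(target_etc)
shutil.copytree(new_etc, target_etc)
print(sorted(os.listdir(target_etc)))  # ['l3.filters']
```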

Lizk

Thanks.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] cross project communication: Return request-id to caller

2015-12-17 Thread Kekane, Abhishek
Hi Devs,

I have submitted a cross-project specs [1] for returning request-id to the 
caller which was approved. For implementation we have submitted patches in 
python-cinderclient [2] as per design in cross-project specs. However we have 
found one issue while implementing this design in python-glanceclient and 
submitted a lite-spec [3] to address this issue.

As per the comments [4] on lite-specs, glance core are suggesting new approach 
to use hooks and pass empty list to each of the api, so that when the hook is 
executed it will append the request-id to the list.

POC code for approach suggested by glance:
--
# change in python-glanceclient/v2/images.py to delete api

import functools

def append_request_id(req_id_lst, response, *args, **kwargs):
    req_id = response.headers.get('x-openstack-request-id')
    if req_id:
        req_id_lst.append(req_id)

def setup_request_id_hook(req_id_lst):
    if req_id_lst is not None:
        return dict(response=functools.partial(append_request_id, req_id_lst))

def delete(self, image_id, request_ids=None):
    hook = setup_request_id_hook(request_ids)
    url = '/v2/images/%s' % image_id
    self.http_client.delete(url, hooks=hook)

# modify the do_image_delete function (glanceclient/v2/shell.py) to look as 
follows:

request_ids = []
gc.images.delete(args_id, request_ids=request_ids)
print request_ids

$ glance image-delete 11887d68-7faf-4821-9a42-3ec63451067c
['req-a0892f56-2626-4069-8787-f8bf56e0c1cc']

We have tested this approach and it is working as expected; the only thing
is that we need to make changes in every method to add the hook and request_id list.
From third-party tools, users need to pass this request-ids list as mentioned
in the above POC in order to get the request-id back.
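
To make the hook flow concrete, here is a self-contained sketch with fake stand-ins for the HTTP client and response (FakeResponse/FakeHTTPClient are hypothetical, not real glanceclient classes):

```python
import functools

# Minimal self-contained demonstration of the hook pattern described above.

class FakeResponse:
    def __init__(self, headers):
        self.headers = headers

def append_request_id(req_id_lst, response, *args, **kwargs):
    req_id = response.headers.get('x-openstack-request-id')
    if req_id:
        req_id_lst.append(req_id)

class FakeHTTPClient:
    def delete(self, url, hooks=None):
        # Pretend the server answered with a request-id header.
        resp = FakeResponse({'x-openstack-request-id': 'req-abc123'})
        if hooks and 'response' in hooks:
            hooks['response'](resp)

request_ids = []
hook = dict(response=functools.partial(append_request_id, request_ids))
FakeHTTPClient().delete('/v2/images/some-id', hooks=hook)
print(request_ids)  # ['req-abc123']
```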

My point here is: should we follow this new approach in all python clients to
maintain consistency across OpenStack?

Please provide your feedback for the same.

[1] 
http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
[2] https://review.openstack.org/#/c/257170/  (base patch, followed by 
dependent patches)
[3] https://bugs.launchpad.net/glance/+bug/1525259
[4] https://bugs.launchpad.net/glance/+bug/1525259/comments/1 and 
https://bugs.launchpad.net/glance/+bug/1525259/comments/5


Thank you,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] No meetings for the next 2 weeks

2015-12-17 Thread Yaroslav Lobankov
Hi Matt,

Unfortunately, I had no chance to participate in the latest meeting. It is
very helpful information. Thanks!

Regards,
Yaroslav Lobankov.

On Fri, Dec 18, 2015 at 12:06 AM, Matthew Treinish 
wrote:

>
> Hi Everyone,
>
> As we discussed during today's QA meeting we'll not be holding the weekly
> QA
> meeting for the next 2 weeks. Most people will be on vacation anyway, so
> this
> likely isn't a big surprise.
>
> The next meeting will be on Jan. 7th at 0900 UTC.
>
> Thanks,
>
> Matt Treinish
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] [radvd] Advertise tenant prefixes from router to outside

2015-12-17 Thread Vladimir Eremin
Hi Carl,

I’ll file an RFE for sure, thank you for the link to the process :)

So actually, we should announce all SUBNETS we’ve attached to the router. Otherwise
it will not work, because the external network router will have no idea where the
traffic should be routed back. That is the actual viability discriminator:
subnets that aren’t attached count as unviable in the external network
context.

BTW, could you please point me to the spec for address scopes?

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



> On Dec 17, 2015, at 1:13 PM, Carl Baldwin  wrote:
> 
> On Thu, Dec 17, 2015 at 1:30 PM, Vladimir Eremin  wrote:
>> Hi
>> 
>> For now, when an end user creates an IPv6-enabled tenant network and attaches
>> it to the virtual router, the only way to set up external
>> infrastructure to route traffic back to the router is DHCPv6 PD[1];
>> unfortunately, it’s not working at all[2]. Other methods, like BGP,
>> are still in development.
>> 
>> BTW, in IPv6 Router Advertisements we have an option called the Route
>> Information Option (RA-RIO)[3] to advertise more specific routes from
>> the gateway. We could easily append a section like the following to advertise the tenant
>> prefix 2001:db8:1::/64 to the public network, provided the provider network router
>> outside OpenStack is configured to accept these routes.
>> 
>> interface qg- {
>>AdvDefaultLifetime 0;
>>route 2001:db8:1::/64 {
>>};
>> };
>> 
>> Cisco accepts it by default AFAIK, linux needs a sysctl 
>> net.ipv6.conf.*.accept_ra_rt_info_max_plen set to 64.
>> 
>> Moreover, enabling reception of RA-RIO prefixes in router namespaces allows
>> routers to communicate among themselves.
>> 
>> I’ve done PoC patch for it 
>> https://gist.github.com/yottatsa/8282e670da16934960b3
> 
> This is an interesting idea.  I've wondered if we could do something
> like this before but I didn't know all the details around RA-RIO.  The
> problem is that, in general, we have no idea if the subnets behind the
> routers are viable in the external network context.  So, we can't just
> blindly have routers advertising whatever.
> 
> In Mitaka, we're merging a new feature called "address scopes".  We
> could limit advertising to only subnets that come from the address
> scope matching that of the external network.  If we do this then we'll
> know that the subnet came from a pool of addresses that are valid in
> the external network context and that the addresses are unique.
> 
> This could be relatively easy to implement on top of the current
> address scopes work.  I think this is worth exploring with an RFE.
> Would you mind filing an RFE according to the Neutron process [1]?
> 
> Carl
> 
> [1] 
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-17 Thread Mike Scherbakov
If there are no concrete points why we should wait, I'm +1 to go ahead with
merges.

On Thu, Dec 17, 2015 at 1:32 AM Oleg Gelbukh  wrote:

> Evgeniy,
>
> True, and I fully support merging this particular change as soon as
> possible, i.e. the moment the 'master' is open for 9.0 development.
>
> -Oleg
>
> On Thu, Dec 17, 2015 at 12:28 PM, Evgeniy L  wrote:
>
>> Hi Oleg,
>>
>> With the same degree of confidence we can say that anything we have in
>> the beginning of
>> the release cycle is not urgent enough. We pushed early branching
>> specifically for
>> such big changes as Docker removal/Changing repos structures and merging
>> invasive patches
>> for new release features.
>>
>> Vladimir Kuklin,
>>
>> I'm not sure what you mean by "fixing 2 different environments"? An
>> environment without containers will simplify the debugging process.
>>
>> Thanks,
>>
>> On Wed, Dec 16, 2015 at 10:12 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Hi
>>>
>>> Although I agree that it should be done, the removal of Docker doesn't
>>> seem an urgent feature to me. It is not blocking anything besides moving to
>>> full package-based deployment of Fuel, as far as I understand. So it could
>>> be easily delayed for one milestone, especially if it is already almost
>>> done and submitted for review, so it could be merged fast before any other
>>> significant changes land in 'master' after it is open.
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>> On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Vladimir,

 I have other activities planned for the time immediately after SCF
 (separating UI from fuel-web, maybe it is even more invasive :-)) and it is
 not a big deal to postpone this feature or another. I am against the
 approach itself of postponing something because it is too invasive. Once we
 create the stable branch, master becomes open. That was our primary intention
 when we decided to move stable branch creation: to open master earlier rather
 than later.




 Vladimir Kozhukalov

 On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin 
 wrote:

> Vladimir
>
> I am pretty much for removing docker, but I do not think that we
> should startle our developers/QA folks with additional efforts on fixing 2
> different environments. Let's just think from the point of development
> velocity here and delay such changes until at least after NY. Because if
> we do it immediately after SCF there will be a whole bunch of holidays and
> Russian holidays are Jan 1st-10th and you (who is the SME for docker
> removal) will be offline. Do you really want to fix things instead of
> enjoying holidays?
>
> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:
>
>> +1 to Vladimir Kozhukalov,
>>
>> Entire point of moving branches creation to SCF was to perform such
>> changes as
>> early as possible in the release, I see no reasons to wait for HCF.
>>
>> Thanks,
>>
>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> -1
>>>
>>> We already discussed this and we have made a decision to move stable
>>> branch creation from HCF to SCF. There were reasons for this. We agreed
>>> that once stable branch is created, master becomes open for new 
>>> features.
>>> Let's avoid discussing this again.
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
>>> bgaiful...@mirantis.com> wrote:
>>>
 +1

 Regards,
 Bulat Gaifullin
 Mirantis Inc.



 On 15 Dec 2015, at 22:19, Andrew Maksimov 
 wrote:

 +1

 Regards,
 Andrey Maximov
 Fuel Project Manager

 On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin <
 vkuk...@mirantis.com> wrote:

> Folks
>
> This email is a proposal to push Docker containers removal from
> the master node to the date beyond 8.0 HCF.
>
> Here is why I propose to do so.
>
> Removal of Docker is a rather invasive change and may introduce a
> lot of regressions. It may well affect how bugs are fixed - we might
> have 2 ways of fixing them, and during SCF of 8.0 this may affect the
> velocity of bug fixing, as you need to fix bugs in master prior to fixing
> them in stable branches. This actually may significantly affect our
> bugfixing pace and put the 8.0 GA release at risk.
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> 

Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11, completed.

2015-12-17 Thread Drew Varner

On 2015-12-16 16:24, openstack-dev-requ...@lists.openstack.org wrote:

Thanks to everyone for their patience while we upgraded to Gerrit
2.11.  I'm happy to announce that we were able to successfully
complete this task at around 21:00 UTC.  You may hack away once more.

If you encounter any problems, please let us know here or in
#openstack-infra on Freenode.



I have encountered severe user interface problems.

Arrow keys and pageup/pagedown don't scroll correctly. Instead, it's 
stuck in a hack version of caret browsing mode (even when my browser is 
not set to caret mode). The scroll bar is very difficult to use, because 
it uses a reimplemented quarter-width semitransparent scroll bar widget 
instead of the operating system's correct scroll bar widget. 
Fortunately, the mouse wheel scroll still works correctly, aside from 
leaving a lingering header at the top of the page that never scrolls away.


The "review" button used to open its own page to post a review. This has 
been replaced by a "reply" button, which only opens a modal div in the 
same page. The modal div pretends to be a popup window, without the 
ability to be moved or resized, as a popup window can be.


The old list of reviewers was better, because it was a table that showed 
each reviewer, and their status. Now, reviewers are shown for each 
status in a horizontal list. Also, given a person's name, it's more 
difficult to tell where they stand in the review (before, a reviewer's 
statuses were all in a single row of the table). Now, it is necessary to 
search each list, which is hard because each list is horizontal. I can 
see this being more useful for reviews where there are a hundred 
reviewers, and it is more important to quickly see who is -1 or -2, to 
address their comments. However, I've never seen a review that big on 
this site.


Regards,
Drew Varner
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Gertty 1.3.0

2015-12-17 Thread James E. Blair
Announcing Gertty 1.3.0
===

Gertty is a console-based interface to the Gerrit Code Review system.

Gertty is designed to support a workflow similar to reading network
news or mail.  It syncs information from Gerrit to local storage to
support disconnected operation and easy manipulation of local git
repos.  It is fast and efficient at dealing with large numbers of
changes and projects.

The full README with installation instructions may be found here:

  https://git.openstack.org/cgit/openstack/gertty/tree/README.rst

Changes since 1.2.0:


* Moved the git repo to git.openstack.org/openstack/gertty.

* Updated commit message editing to work with API versions >= 2.11.

* Added interactive search in diff view (C-s by default).

* Added a simple kill-ring (cut/paste buffer) (C-k / C-y by default).

* Added support for multiple keystroke commands.

* Added bulk edit of topics from the change list view.

* Added a refine-search command (M-o by default) which will pre-fill the
  search dialog with the current query.

* Made the permalink selectable.

* Added support for '-' as a negation operator in queries.

* Fixed a bug syncing changes with comments on a file not actually in
  the revision.

* Fixed a collision in the default key binding (r is review, R is
  reverse sort).

* Fixed identification of internal links where Gerrit is hosted on the
  same host as another service.

Thanks to the following people whose changes are included in this
release:

  Alex Schultz
  Clint Adams
  David Stanek
  James Polley
  Jeremy Stanley
  Paul Bourke
  Sean M. Collins
  Sirushti Murugesan
  Wouter van Kesteren

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-17 Thread Jay S. Bryant

Sean,

Just an FYI that I have created a work item for my team to start working 
this.  So, watch for patches from Slade, Kendall, Ryan, Jacob and I to 
get this implemented.


Thanks,
Jay


On 12/15/2015 10:31 AM, Sean McGinnis wrote:

On Tue, Dec 15, 2015 at 04:46:02PM +0100, Michał Dulko wrote:

On 12/15/2015 04:08 PM, Ryan Rossiter wrote:

Thanks for the review Michal! As for the bp/bug report, there's four options:

1. Tack the work on as part of bp cinder-objects
2. Make a new blueprint (bp cinder-object-fields)
3. Open a bug to handle all changes for enums/fields
4. Open a bug for each changed enum/field

Personally, I'm partial to #1, but #2 is better if you want to track this
work separately from the other objects work. I don't think we should go with
bug reports because #3 will be a lot of Partial-Bug and #4 will be kinda
spammy. I don't know what the spec process is in Cinder compared to Nova, but
this is nowhere near enough work to be spec-worthy.

If this is something you or others think should be discussed in a meeting, I 
can tack it on to the agenda for tomorrow.
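
For context, the change being tracked here is replacing free-form string fields with validated enum fields. A library-free sketch of the behavior such a field provides (names and valid values are illustrative, not Cinder's actual ones, and this is not oslo.versionedobjects code):

```python
# Illustrative enum-style field: values are validated on assignment,
# similar in spirit to versioned-object enum fields.

VALID_VOLUME_STATUSES = ('available', 'in-use', 'error', 'deleting')

class EnumField:
    def __init__(self, valid_values):
        self.valid_values = valid_values

    def coerce(self, value):
        # Reject anything outside the declared set of valid values.
        if value not in self.valid_values:
            raise ValueError('%r is not a valid value' % (value,))
        return value

status_field = EnumField(VALID_VOLUME_STATUSES)
print(status_field.coerce('available'))  # available
try:
    status_field.coerce('bogus')
except ValueError as e:
    print('rejected:', e)
```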

bp/cinder-object topic is a little crowded with patches and it tracks
mostly rolling-upgrades-related stuff. This is more of a refactoring
than an essential ovo change, so a simple specless bp/cinder-object-fields
is totally fine to me.

I agree. If you can file a new blueprint for this, I can approve it
right away. That will help track the effort.

Thanks for working on this!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Upgrading containers in Mesos/Marathon environment - request for feedback

2015-12-17 Thread Marek Zawadzki
Update: you can take a look at the comments about Mesos maintaining the
upgrade process, plus my response to it, which breaks the topic down into a
list of OpenStack services (starting with the minimal list necessary to
build a working cloud) and their requirements in terms of data storage.
TL;DR: I still think in some cases we need to land containers on the
same slave after upgrade; please provide your feedback.


Thanks!

-marek

-
[G.O] As I remember, the spec for Mesos assumes that self-configuring
services will be used. There is another spec for oslo.config to support
remote configuration stores like ZooKeeper, etcd, and Consul. This
approach should simplify the upgrade process, as most of the configuration
will be done automatically by the service container itself. I
think we need to discuss the ways an OpenStack service can be upgraded
and provide baseline standards (requirements) for OpenStack services, so
that OpenStack service code will support one way or another for
upgrades. The Marathon framework should support at least two ways of
upgrades (https://mesosphere.github.io/marathon/docs/deployment-design-doc.html):


1) Rolling-Upgrade (Canary)
2) Green-Blue (A/B) upgrades
As an operator, I should be able to select the specific version of the
container which I want to roll out to the existing cloud, and I have to
be able to do a rollback operation in case of upgrade failure.
If we need to use volume-based configuration storage then it should
rely on Mesos volume management
(http://mesos.apache.org/documentation/latest/persistent-volume/),
which as far as I know is
not released yet. Mesos/Marathon should be able to place the
upgraded container correctly and we should not define any constraints for
that in the request. We still might use constraints, but for providing a
more flexible/complex rolling-upgrade process, like upgrading only a
specific number of instances at once.


[M.Z.] I agree in general about Mesos maintaining upgrades, but in some
cases it's not about volumes but about underlying (host-based) data (nova,
cinder).
Let's break it down into a list of OpenStack services (starting with a 
minimal list that is necessary to build a working cloud) and their 
requirements in terms of data storage:
1. nova: we need to ensure upgraded nova container is started on the 
same slave so it can reconnect to hypervisor and see the VMs. Not a 
candidate for Mesos Volumes (MVs).
2. cinder: must be on the same host to see block devices it uses for 
storage. Not a candidate for MVs.
3. mariadb: must be on the same host to see the directory it uses for data.
In the future when we use Galera it may perhaps be a group of hosts,
not just one host. Sounds like a candidate for MVs.

4. keystone: does not care about the host
5. neutron: does not care about the host
Additional notes:
- currently for neutron we mount "/lib/modules" and do modprobe from
inside the container to make neutron work - isn't this wrong by
design? The slave host should be prepared beforehand and load the necessary modules.
- since for now config files are stored on slaves in a well-known path 
(this is provisioned by ansible) we can assume each slave is identical 
in this regard so we can easily move services that do not use data 
storage between slaves


 * - also I don't see how we can avoid sticking to the same slave after
   upgrade (whether rolling or green/blue) without MVs for mariadb and
   for nova & cinder/local disks at all.

 * - I can see the cinder container not being host-dependent as long as
   it's not using local disks for storage (but ceph, for example)
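
The host-pinning requirement above can be expressed with a Marathon placement constraint; a hypothetical app definition fragment (the app id, image, and hostname are illustrative):

```json
{
  "id": "/openstack/nova-compute",
  "container": {"type": "DOCKER", "docker": {"image": "kolla/nova-compute:latest"}},
  "constraints": [["hostname", "CLUSTER", "compute-01.example.org"]]
}
```

The `["hostname", "CLUSTER", ...]` constraint pins all instances of the app to the named slave, which is the behavior needed for nova and cinder above.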


On 16.12.2015 18:16, Marek Zawadzki wrote:

Hello all,

I described a use case and a simple tool that I'd like to implement as a
first step in this topic - would you please

review it and provide me with feedback before I start coding?
Is the use-case realistic? Is this tool going to be useful given the 
features I described? Any other comments?


https://etherpad.openstack.org/p/kolla_upgrading_containers

Thank you,

-marek zawadzki



--
Marek Zawadzki
Mirantis Kolla Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][nova] Orchestrated upgrades in kolla

2015-12-17 Thread Marek Zawadzki
Michal, hi, what's your assumption about instances? Do we want to shut
them down, migrate nova, and restart them, or would we like to preserve all
running VMs?


-marek

On 04.12.2015 21:50, Michał Jastrzębski wrote:

Hey guys,

Orchestrated upgrades are one of our highest priorities for M in
kolla, so following up on the discussion at the summit I'd like to suggest
an approach:


Instead of creating a playbook called "upgrade my openstack" we will
create "upgrade my nova" and approach each service case by
case (since all of our running services are in Docker containers, this is possible).
We will also make use of image tags as version carriers, so ansible
will deploy a new container only if the version tag differs from what we ask
it to deploy. This will help with the idempotency of the upgrade process.


So let's start with nova. Upgrade-my-nova playbook will do something 
like this:


0. We create a snapshot of our mariadb-data container. This will affect
every service, but it's always good to have a backup; rollback of the db
will be a manual action.


1. Nova bootstrap will be called and it will perform the db migration.
Since the current approach to nova code is add-only, we shouldn't need to
stop services, and old services should keep working on the newer database.
Also, for minor version upgrades there will be no action here unless
there is a migration.
2. We upgrade all conductors at the same time. This should take mere
seconds since we'll have prebuilt containers.
3. We will upgrade the rest of the controller services using "serial: 1"
in Ansible to ensure rolling upgrades.

4. We will upgrade all of the nova-compute services at their own pace.

This workflow should be pretty robust (I think it is) and it should 
also provide idempotency.
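
A hedged sketch of step 3 as an Ansible play (host group, container name, and image tag are illustrative; note that Ansible's rolling-batch keyword is `serial`):

```yaml
- hosts: nova-controller
  serial: 1                       # rolling: one controller node per batch
  tasks:
    - name: Stop and remove the old nova-api container
      command: docker rm -f nova_api
    - name: Start the new container from the version-tagged image
      command: docker run -d --name nova_api kolla/nova-api:{{ new_tag }}
```

Because the image tag carries the version, re-running the play with the same tag deploys nothing new, which gives the idempotency described above.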


Thoughts?

Regards,
Michal


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Marek Zawadzki
Mirantis Kolla Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-17 Thread MALIN, Eylon (Eylon)
Like!

Eylon

From: ZhiQiang Fan [mailto:aji.zq...@gmail.com]
Sent: Thursday, December 17, 2015 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] Gerrit Upgrade 12/16

thanks for the effort, new feature is welcomed

but the new UI is really worse than the old one. It is not only a matter of
habit; it is just ugly - at first view I thought it had a CSS bug.

And it is not easy to find out the current review status; the old UI had a clear and
nice table for the review list.

On Thu, Dec 17, 2015 at 6:24 PM, Marc Koderer 
> wrote:
+1 I am also very confused about the new UI.. but maybe it takes time to get
used to it.

Regards
Marc

Am 17.12.2015 um 10:55 schrieb Mohan Kumar 
>:

> Eugene: +1, the old Gerrit page was better than the new one. Please fix.
>
> On Thu, Dec 17, 2015 at 2:11 PM, Eugene Nikanorov 
> > wrote:
> I'm sorry to say that, but the new front page design is horrible and totally 
> confusing.
>
> I hope it'll change soon in the new release.
>
> E.
>
>
> On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) 
> > wrote:
> Hi,
>
> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
>
> Ifat.
>
>
> -Original Message-
> From: Spencer Krum [mailto:n...@spencerkrum.com]
> Sent: Monday, December 14, 2015 9:53 PM
> To: 
> openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
>
> This is a gentle reminder that the downtime will be this Wednesday starting 
> at 17:00 UTC.
>
> Thank you for your patience,
> Spencer
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][grenade] l3.filters fail to update at test server

2015-12-17 Thread Ihar Hrachyshka

Zong Kai YL Li  wrote:


Hi, all.
I met a problem when I submitted a neutron l3 related patch: I updated  
l3.filters to add a rootwrap filter for the arp command, but on the test  
server grenade doesn't seem to update l3.filters, which causes  
gate-grenade-dsvm-neutron to fail.


Has anyone met this problem before? If not, can someone please help  
review [1]? And [2] is the bug report for this.


Thanks.
Best regards.

[1] https://review.openstack.org/#/c/258758/
[2] https://bugs.launchpad.net/grenade/+bug/1527018


Yes, you need to update rootwrap filters in grenade if they are required  
for the new side of the cloud. But why do you update the from-juno scripts  
for that? Does your patch really target Kilo?


I believe that we should always unconditionally update filters to the new  
versions when doing upgrades. Filters should not actually be considered  
configuration files in the first place, since they are tightly coupled with  
the code that triggers the commands.
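Ihar's suggestion (always replace filters on upgrade, since they ship with the code) could be sketched as a small helper like the one below. This is an illustrative sketch only; the function name and directory layout are made up, not grenade's actual implementation:

```python
import os
import shutil


def sync_rootwrap_filters(src_dir, dst_dir):
    """Unconditionally overwrite deployed *.filters files with the copies
    shipped in the new code tree, treating them as code rather than config."""
    os.makedirs(dst_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        if name.endswith(".filters"):
            shutil.copy2(os.path.join(src_dir, name),
                         os.path.join(dst_dir, name))
            copied.append(name)
    return copied
```

In a grenade-style upgrade, something like this would run after the new code is checked out, so that files such as l3.filters always match the commands the new agents actually invoke.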


Ihar



Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-17 Thread Ihar Hrachyshka

Jeremy Stanley  wrote:


On 2015-12-16 13:59:55 -0700 (-0700), Carl Baldwin wrote:

Is someone from Neutron actively helping out here?  Need more?

[...]

I believe all of the jobs currently voting on changes proposed to
the master branch of the openstack/neutron repo are using centrally
constrained requirements (they all either have "constraints" or
"dsvm" in their job names).

I'll let Robert and Sachi, who have been spearheading this effort up
to this point, comment on whether additional assistance is needed
and where they see next steps leading.


I will give an update on the Neutron side of things, since I have some  
knowledge from the field.


So first, on resources: we have Sachi and lifeless from the infra side, plus  
Ihar and pc_m, looking into expanding constrained jobs in the Neutron world.


Current status is:
- all devstack based jobs are already constrained thanks to infra;
- openstack/neutron pep8/unit/doc/cover jobs are already constrained in  
Liberty and master;
- for *aas repos, pc_m is currently on top of it, following up on my  
original patches to introduce tox targets and make them voting in the master  
gate; we are also looking into backporting the targets and gate setup to Liberty.


Note that the new alembic release still broke the openstack/neutron repo  
(functional job). This is because even though the job uses devstack to  
bootstrap the environment, it still calls tox to start the tests, which  
means a new venv with no constraints applied is prepared and used. The same  
problem probably affects the fullstack and api jobs since they use a similar  
setup approach.
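One way to close that hole would be a tox environment whose install_command applies the constraints file, roughly as sketched below. This is a hypothetical tox.ini fragment; the env name and the constraints URL layout are illustrative, not the actual neutron gate configuration:

```ini
# Hypothetical tox.ini fragment: make the venv that tox builds for the
# functional job honour upper-constraints, so a new library release
# (e.g. alembic) cannot leak in uncapped.
[testenv:functional-constraints]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```

pip's real `-c`/`--constraint` option pins any package that gets installed to the version listed in the constraints file without forcing its installation.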


So closing the remaining loophole is on my personal agenda; but note  
that the agenda is really packed pretty much all the time, hence no time  
guarantees. So if someone has cycles to help make the neutron gate more  
fault-proof, don't hesitate to volunteer; I am extremely happy to help with  
it.


Ihar



Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11, completed.

2015-12-17 Thread Brian Haley

On 2015-12-16 16:24, openstack-dev-requ...@lists.openstack.org wrote:

Thanks to everyone for their patience while we upgraded to Gerrit
2.11.  I'm happy to announce that we were able to successfully
complete this task at around 21:00 UTC.  You may hack away once more.

If you encounter any problems, please let us know here or in
#openstack-infra on Freenode.


I'm still undecided on 2.11 and have to give it more time, but I have noticed one 
thing that's annoying...


Trying to copy text from a review no longer works easily.  When I highlight text 
there's a little "bubble" pop-up of {press c to comment}, which seems to 
interfere with both my three-button mouse copy buffer and Ctrl-C.  Call 
me a nitpicker, but having to highlight text, right-click, Copy, 
right-click, Paste is a pain.


Maybe someone has a simple work-around for that.

-Brian



[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 12/17/2015

2015-12-17 Thread Christopher Aedo
This morning we had a nice meeting which included an update on Glare
from Alexander Tivelkov. They're making really good progress on the
glance artifact repository work and it will be a big benefit to the
App Catalog soon.  We also discussed the dead-link checking that we
will implement in the next few days (links to related reviews are
below, plus this one that was just submitted to the project-config
repo: https://review.openstack.org/259232).
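A minimal sketch of the kind of dead-link check being discussed might look like the following. The helper names here are made up for illustration; the actual script in the reviews linked below may differ:

```python
import urllib.error
import urllib.request


def link_status(url, timeout=10):
    """Return the HTTP status code for url, or None if the host is unreachable."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code  # server answered, but with an error status
    except (urllib.error.URLError, OSError):
        return None  # DNS failure, refused connection, timeout, ...


def is_dead(status):
    """Classify a link as dead: unreachable, or a 4xx/5xx response."""
    return status is None or status >= 400
```

Separating the fetch from the classification makes the dead/alive decision easy to unit-test without touching the network.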

Hopefully before the end of the year we'll finish that work, which also
lays out the pieces I'll need in order to add automated asset
updating (like keeping the hash up to date for frequently updated
images).

This was our last meeting of the year, due to overlap with holidays.
Our next meeting will be January 7th, 2016 - hope to see you there!
(in the mean time, hope you all enjoy the holidays!)

-Christopher

=
#openstack-meeting-3: app-catalog
=
Meeting started by docaedo at 17:00:43 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-12-17-17.00.log.html
.

Meeting summary
---
* LINK:
  
https://wiki.openstack.org/wiki/Meetings/app-catalog#Proposed_Agenda_for_December_17th.2C_2015_.281700_UTC.29
  (docaedo, 17:00:57)
* rollcall  (docaedo, 17:01:04)
* Status updates  (docaedo, 17:02:00)
  * LINK: https://review.openstack.org/#/c/257663/  (docaedo, 17:02:34)
  * LINK: https://review.openstack.org/#/c/254710/  - the api
refactoring spec  (ativelkov, 17:09:54)
  * LINK: https://review.openstack.org/#/q/topic:bp/move-v3-to-glare -
patches to move the glare api to separate process / keystone
endpoint  (ativelkov, 17:10:28)
* Dead link check script  (docaedo, 17:13:38)
  * LINK: http://paste.openstack.org/show/482152  (docaedo, 17:13:58)
  * LINK: http://paste.openstack.org/show/482151  (docaedo, 17:14:03)
  * LINK: http://cdimage.debian.org/cdimage/openstack/testing/
(docaedo, 17:15:00)
  * LINK: https://etherpad.openstack.org/p/app-cat-glare  (docaedo,
17:27:28)
* Open discussion  (docaedo, 17:29:14)

Meeting ended at 17:44:16 UTC.

People present (lines said)
---
* docaedo (70)
* ativelkov (29)
* kfox (14)
* olaph (4)
* openstack (3)

Generated by `MeetBot`_ 0.1.4



Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-17 Thread Mike Bayer


On 12/17/2015 04:00 PM, Thomas Goirand wrote:
> On 12/16/2015 06:04 PM, Mike Bayer wrote:
>>
>>
>> On 12/16/2015 11:53 AM, Sean Dague wrote:
>>> On 12/16/2015 11:37 AM, Sean Dague wrote:
 On 12/16/2015 11:22 AM, Mike Bayer wrote:
>
>
> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
>>
>>
>> Le 16/12/2015 14:59, Sean Dague a écrit :
>>> oslo.db test_migrations is using methods for alembic, which changed in
>>> the 0.8.4 release. This ends up causing a unit test failure (at least in
>>> the Nova case) that looks like this -
>>> http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404
>>>
>>>
>>> There is an oslo.db patch out there
>>> https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
>>> has been pretty quiet this morning, so no idea how fast this can get out
>>> into a release.
>>>
>>> -Sean
>>>
>>
>> So, it seems that the issue came when
>> https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
>> Fortunatelt, Mike seems to have a patch in place for Nova in order to
>> fix this https://review.openstack.org/#/c/253859/
>>
>> I'd suggest an intensive review pass on that one to make sure it's OK.
>
> do you folks have a best practice suggestion on this?  My patch kind of
> stayed twisting in the wind for a week even though those who read it
> would have seen "hey, this is going to break on Alembic's next minor
> release!"I pinged the important people and all on it, but it still
> got no attention.

 Which people were those? I guess none of us this morning knew this was
 going to be an issue and were surprised that 12 hours worth of patches
 had all failed.

-Sean
>>>
>>> Best practice is send an email to the openstack-dev list:
>>>
>>> Subject: [all] the following test jobs will be broken by Alembic 0.8.4
>>> release
>>>
>>> The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
>>> will break Nova unit tests on all branches.
>>>
>>> The following patch will fix master - .
>>>
>>> You all will need to backport it as well to all branches.
>>>
>>>
>>> Instead of just breaking the world, and burning 10s to 100 engineer
>>> hours in redo tests and investigating and addressing the break after the
>>> fact.
>>
>> I was hoping to get a thanks for even *testing* unreleased versions of
>> my entirely non-Openstack, upstream projects against Openstack itself.
>>  If I put in *less* effort here, and just didn't bother the way 100% of all
>> other non-Openstack projects do, then I'd not have been scolded by you.
> 
> IMHO, it shouldn't be *tested*, but *gated*. Meaning that such a
> disruptive patch should be accepted only when there's a fix in all of
> OpenStack (if that is possible, of course, as I don't really know the
> details, just this thread...).
> 
> Or do you still consider SQLA / Alembic as just a 3rd party lib for
> OpenStack? Wouldn't it be nice to have it maintained directly in
> OpenStack infra? Your thoughts?

Alembic / SQLAlchemy are completely outside of Openstack and are
intrinsic to thousands of non-Openstack environments and userbases.  I
wonder why we don't ask the same question of other Openstack
dependencies, like numpy, lxml, boto, requests, rabbitMQ, and everything
else.

As far as it being *gated*, that is already the plan within Openstack
itself via the upper-constraints system discussed in this thread, which
I mistakenly thought was already in use across the board.  That is, new
release of library X hits pypi, some series of CI only involved with
testing new releases of libs above that of upper-constraints runs tests
on it to see if it breaks current openstack applications, and if so, the
constraints file stays unchanged and the bulk of gate jobs remain
unaffected.



> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> 


[openstack-dev] [gate] job failure rate at ~ 12% (check queue) <= issue?

2015-12-17 Thread Markus Zoeller
The job failure rates had an unusual rise at 06:30 UTC this morning [1].
I couldn't figure out if this is a real issue or somehow related to
the gerrit update ~ 18 hours ago. The only thing I found was a time
frame of ~ 1h where the jobs failed to update the apt repos [2]. As
this issue is no longer present in logstash, I expected the job
failure rate to drop, but that didn't happen. Long story short:
do we have an issue? Or is this the aftermath of bug 1526675?

[1] http://grafana.openstack.org/dashboard/db/tempest-failure-rate
[2] logstash query: http://bit.ly/1O8qjtn

Regards, Markus Zoeller (markus_z)




Re: [openstack-dev] Regarding Designate install through Openstack-Ansible

2015-12-17 Thread Jesse Pretorius
Hi Swati,

It looks like you're doing well so far! In addition to my review feedback
via IRC, let me try to answer your questions.

The directory containing the files which hold the SHA's is here:
https://github.com/openstack/openstack-ansible/tree/master/playbooks/defaults/repo_packages

Considering that Designate is an OpenStack Service, the appropriate entries
should be added into this file:
https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml

The order of the services is generally alphabetical, so Designate should be
added after Cinder and before Glance.

I'm not sure I understand your second question, but let me try and respond
with what I think you're asking. Assuming a running system with all the
other components, and an available container for Designate, the workflow
will be:

1 - you execute the os-designate-install.yml playbook.
2 - Ansible executes the pre-tasks, then the role at
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/os-designate-install.yml#L64
3 - Ansible then executes
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/main.yml
4 - Handlers are triggered when you notify them, for example:
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/designate_post_install.yml#L54

Does that help you understand how the tasks and handlers are included for
execution? Does that answer your question?

With regards to creating a DB user & DB: as you've modeled the role on
Aodh, which doesn't use Galera, you're missing that part. An example you can
model from is here:
https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_glance/tasks/glance_db_setup.yml
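For reference, the missing Galera part could look roughly like the sketch below, modeled on the glance_db_setup.yml linked above. Every variable name here is a placeholder for illustration, not a tested role:

```yaml
# Hypothetical designate_db_setup.yml sketch; all variable names are placeholders.
- name: Create database for designate
  mysql_db:
    login_host: "{{ designate_galera_address }}"
    login_user: root
    login_password: "{{ galera_root_password }}"
    name: "{{ designate_galera_database }}"
    state: present

- name: Grant the designate user access to the database
  mysql_user:
    login_host: "{{ designate_galera_address }}"
    login_user: root
    login_password: "{{ galera_root_password }}"
    name: "{{ designate_galera_user }}"
    password: "{{ designate_container_mysql_password }}"
    priv: "{{ designate_galera_database }}.*:ALL"
    state: present
```

The mysql_db and mysql_user Ansible modules are idempotent, so re-running the playbook against an existing database is safe.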

Question 4 is a complex one, and I don't know enough about Designate to
answer properly. From what I can see you're already doing the following in
the role:

1 - preparing the host/container, RabbitMQ (and soon will be doing the DB)
for a Designate deployment
2 - installing the apt and python packages required for Designate to be
able to run
3 - placing down the config files and upstart scripts for Designate to run
4 - registering the Designate service endpoint

Once that's done, I'm not entirely sure what else needs to be done to make
Designate do what it needs to do. At that point, are you able to see the
service in the Keystone service catalog? Can you interact with it via the
CLI?

A few housekeeping items relating to the use of email and the mailing list:

If you wish to gain the attention of particular communities on the
openstack-dev mailing list, the best is to tag the subject line. In this
particular case as you're targeting the OpenStack-Ansible community with
questions you should add '[openstack-ansible]' as a tag in your subject
line. If you were also targeting questions regarding Designate, or wish for
the Designate community to also be informed then similarly add
'[designate]' as a tag in the subject line.

Secondly, note that you're addressing a community of people. There's no
need to single anyone out and very definitely no need to address or CC our
email addresses. We're all members of the openstack-dev mailing list and
will all respond to you via the mailing list.

I'd appreciate it if you could respect these conventions from now on. It'll
help improve your experience in the OpenStack community.

Thanks,

Jesse
IRC: odyssey4me

On 16 December 2015 at 04:46, Sharma Swati6  wrote:

> Hi
> *Major, Jean, Jesse and Kevin,*
> I have added some part of *Designate* code and uploaded it on
> https://github.com/sharmaswati6/designate_files
>
> Could you please review this and help me in answering the following
> questions:
>
>1. Is there some specific location for the server-side code for all
>Openstack components? And will I be downloading the actual designate
>git code to the same location?
>2. Is there some specific file where I have to give the reference for
>"tasks:" and "handlers:", so that they can be called via roles?
>3. To create the Designate mysql database, is there a reference to be
>given somewhere?
>4. How are the hooks (setup details) of a new component associated with
>it in Openstack-Ansible? *Eg* - the setup details for Designate
>
> http://git.openstack.org/cgit/openstack/designate/tree/setup.cfg?wb48617274=B56AA8FF
>should map to which file in the Openstack-Ansible structure?
>
> Thanks in advance.
> Regards,
> Swati Sharma
> System Engineer
> Tata Consultancy Services
> Gurgaon - 122 004,Haryana
> India
> Cell:- +91-9717238784
> Mailto: sharma.swa...@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty. IT Services
> Business Solutions
> Consulting
> 
>
>
> -Sharma Swati6/DEL/TCS wrote: -
> To: ma...@mhtx.net, 

Re: [openstack-dev] [stable] meeting time proposal

2015-12-17 Thread Ihar Hrachyshka

Matt Riedemann  wrote:




On 12/16/2015 2:45 PM, Mark McClain wrote:
On Dec 16, 2015, at 2:12 PM, Matt Riedemann  
 wrote:


I'm not entirely sure what the geo distribution is for everyone that  
works on stable, but I know we have people in Europe and some people in  
Australia.  So I was thinking alternating weekly meetings:


Mondays at 2100 UTC

Tuesdays at 1500 UTC


Were you thinking of putting these on the opposite weeks as Neutron’s  
Monday/Tuesday schedule?


Does that at least sort of work for people that would be interested in  
attending a meeting about stable? I wouldn't expect a full hour  
discussion, my main interests are highlighting status, discussing any  
issues that come up in the ML or throughout the week, and whatever else  
people want to go over (work items, questions, process discussion, etc).


Thanks for considering neutron overlaps. I am really interested in both  
meetings, so it makes my life easier if they are not going at the same time.


I am also fine starting in January.

Ihar



[openstack-dev] [neutron][grenade] l3.filters fail to update at test server

2015-12-17 Thread Zong Kai YL Li
Hi, all.
I met a problem when I submitted a neutron l3 related patch: I updated
l3.filters to add a rootwrap filter for the arp command, but on the test
server grenade doesn't seem to update l3.filters, which causes
gate-grenade-dsvm-neutron to fail.

Has anyone met this problem before? If not, can someone please help
review [1]? And [2] is the bug report for this.

Thanks.
Best regards.

[1] https://review.openstack.org/#/c/258758/
[2] https://bugs.launchpad.net/grenade/+bug/1527018
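For context, the kind of entry such a patch adds to l3.filters looks roughly like this. The exact filter line depends on how the agent invokes arp; this is a sketch, not the content of the patch under review:

```ini
# etc/neutron/rootwrap.d/l3.filters (excerpt)
[Filters]
# allow the l3 agent to run "arp" as root inside the router namespace
arp: CommandFilter, arp, root
```

CommandFilter is the simplest oslo.rootwrap filter class: it allows the named executable to run as the given user without restricting its arguments.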


Re: [openstack-dev] [vitrage] Gerrit Upgrade 12/16

2015-12-17 Thread MALIN, Eylon (Eylon)
Please remove the [vitrage] tag from the title. It has nothing to do with the 
vitrage project.

From: ZhiQiang Fan [mailto:aji.zq...@gmail.com]
Sent: Thursday, December 17, 2015 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [vitrage] Gerrit Upgrade 12/16

thanks for the effort, new feature is welcomed

but the new UI is really worse than the old one; it is not only a matter of 
habit, it is just ugly. At first view I thought it had a CSS bug.

And it is not easy to find out the current review status; the old UI has a 
clear and nice table for the review list

On Thu, Dec 17, 2015 at 6:24 PM, Marc Koderer 
> wrote:
+1 I am also very confused about the new UI... but maybe it takes time to get 
used to it.

Regards
Marc

Am 17.12.2015 um 10:55 schrieb Mohan Kumar 
>:

> Eugene:  +1 , Old Gerrit page was better than new one . Please fix
>
> On Thu, Dec 17, 2015 at 2:11 PM, Eugene Nikanorov 
> > wrote:
> I'm sorry to say that, but the new front page design is horrible and totally 
> confusing.
>
> I hope it'll change soon in the new release.
>
> E.
>
>
> On Tue, Dec 15, 2015 at 10:53 AM, AFEK, Ifat (Ifat) 
> > wrote:
> Hi,
>
> Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.
>
> Ifat.
>
>
> -Original Message-
> From: Spencer Krum [mailto:n...@spencerkrum.com]
> Sent: Monday, December 14, 2015 9:53 PM
> To: 
> openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Gerrit Upgrade 12/16
>
> This is a gentle reminder that the downtime will be this Wednesday starting 
> at 17:00 UTC.
>
> Thank you for your patience,
> Spencer
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
>


[openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

2015-12-17 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
I would like to organize a meeting between the Nova and Neutron teams to work 
on refining the Nova/Neutron notifications for live migration.

Today we only have a notification from Neutron to Nova on any port status update.

But we don't have any similar notification from Nova on any migration state 
change.
Neutron L3 would be interested in knowing the state change for VM migration, and 
could proactively take the necessary action to create the L3-related 
plumbing that is required.

Here are some of the bugs that are currently filed with respect to nova live 
migration and neutron.
https://bugs.launchpad.net/neutron/+bug/1456073
https://bugs.launchpad.net/neutron/+bug/1414559

Please let me know who would be interested in participating in the discussion.
It would be great if we could get some cores' attention from Nova and Neutron.

Thanks.
Swaminathan Vasudevan
Systems Software Engineer (TC)


Hewlett Packard Enterprise
Networking Business Unit
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hpe.com




Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-17 Thread Brant Knudson
On Thu, Dec 17, 2015 at 1:51 PM, Davanum Srinivas  wrote:

> Thinking more about it, the only change we'll have is that if someone
> files an oslo-specs spec for oslo.policy we need to tell them to switch
> over to keystone-specs. We could add notes in the README etc. to make this
> apparent. So I am +1 on making this move.
>
> Brant, other keystone cores,
> Can you please file the governance review request and we can make sure
> oslo cores chime in there? and make it official?
>
>
Here's the proposed governance change:
https://review.openstack.org/#/c/259169/

 - Brant



> Thanks,
> Dims
>
>
> On Thu, Dec 17, 2015 at 2:40 PM, Flavio Percoco  wrote:
> > On 16/12/15 18:51 -0800, Morgan Fainberg wrote:
> >>
> >> For what is is worth, we originally proposed oslo.policy to graduate to
> >> Keystone when we were converting to the library. I still think it
> belongs
> >> in
> >> keystone (as long as the oslo team doesn't mind that long-term keystone
> >> team
> >> owns something in the oslo. namespace).
> >>
> >> The short term adding keystone-core should get some more eyes on the
> >> reviews,
> >> so +1 to that.
> >
> >
> >
> > Just want to +1 all the above.
> >
> > It'd be great if we can finally hand the library over to the keystone
> > team, where I think it belongs.
> >
> > Cheers,
> > Flavio
> >
> >>
> >> --Morgan
> >>
> >> On Wed, Dec 16, 2015 at 4:08 PM, Davanum Srinivas 
> >> wrote:
> >>
> >>As an interim measure, added keystone-core to oslo-policy-core[1]
> >>
> >>Thanks,
> >>Dims
> >>
> >>[1] https://review.openstack.org/#/admin/groups/556,members
> >>
> >>On Wed, Dec 16, 2015 at 10:40 PM, Dolph Mathews
> >> 
> >>wrote:
> >>>
> >>> On Wed, Dec 16, 2015 at 1:33 PM, Davanum Srinivas <
> dava...@gmail.com>
> >>wrote:
> >>>>
> >>>> Brant,
> >>>>
> >>>> I am ok either way, guess the alternative was to add keystone-core
> >>>> directly to the oslo.policy core group (can't check right now).
> >>>
> >>>
> >>> That's certainly reasonable, and kind of what we did with pycadf.
> >>>
> >>>>
> >>>>
> >>>> The name is very possibly going to create confusion
> >>>
> >>>
> >>> I assume you're not referring to unnecessarily changing the name of
> >> the
> >>> project itself (oslo.policy) just because there might be a shift in
> >> the
> >>> group of maintainers! Either way, let's definitely not do that.
> >>>
> >>>>
> >>>> -- Dims
> >>>>
> >>>> On Wed, Dec 16, 2015 at 7:22 PM, Jordan Pittier
> >>>>  wrote:
> >>>> > Hi,
> >>>> > I am sure oslo.policy would be good under Keystone's governance.
> >> But I
> >>>> > am
> >>>> > not sure I understood what's wrong in having oslo.policy under
> the
> >>oslo
> >>>> > program ?
> >>>> >
> >>>> > Jordan
> >>>> >
> >>>> > On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson 
> >> wrote:
> >>>> >>
> >>>> >>
> >>>> >> I'd like to propose moving oslo.policy from the oslo program to
> >> the
> >>>> >> keystone program. Keystone developers know what's going on with
> >>>> >> oslo.policy
> >>>> >> and I think are more interested in what's going on with it so
> >> that
> >>>> >> reviews
> >>>> >> will get proper vetting, and it's not like oslo doesn't have
> >> enough
> >>>> >> going on
> >>>> >> with all the other repos. Keystone core has equivalent
> stringent
> >>>> >> development
> >>>> >> policy that we already enforce with keystoneclient and
> >> keystoneauth,
> >>so
> >>>> >> oslo.policy isn't going to be losing any stability.
> >>>> >>
> >>>> >> If there aren't any objections, let's go ahead with this. I
> heard
> >>this
> >>>> >> requires a change to a governance repo, and gerrit permission
> >> changes
> >>>> >> to
> >>>> >> make keystone-core core, and updates in oslo.policy to change
> >> some
> >>docs
> >>>> >> or
> >>>> >> links. Any oslo.policy specs that are currently proposed
> >>>> >>
> >>>> >> - Brant
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >
> >>>> >
> >>>> >
> >>>> >

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-17 Thread Sergii Golovatiuk
+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Dec 17, 2015 at 3:43 PM, Mike Scherbakov 
wrote:

> If there are no concrete points why we should wait, I'm +1 to go ahead
> with merges.
>
> On Thu, Dec 17, 2015 at 1:32 AM Oleg Gelbukh 
> wrote:
>
>> Evgeniy,
>>
>> True, and I fully support merging this particular change as soon as
>> possible, i.e. the moment the 'master' is open for 9.0 development.
>>
>> -Oleg
>>
>> On Thu, Dec 17, 2015 at 12:28 PM, Evgeniy L  wrote:
>>
>>> Hi Oleg,
>>>
>>> With the same degree of confidence we can say that anything we have at
>>> the beginning of the release cycle is not urgent enough. We pushed early
>>> branching specifically for big changes such as the Docker removal and
>>> the repo-structure changes, and for merging invasive patches for new
>>> release features.
>>>
>>> Vladimir Kuklin,
>>>
>>> I'm not sure what you mean by "fixing 2 different environments". An
>>> environment without containers will simplify the debugging process.
>>>
>>> Thanks,
>>>
>>> On Wed, Dec 16, 2015 at 10:12 PM, Oleg Gelbukh 
>>> wrote:
>>>
 Hi

 Although I agree that it should be done, the removal of Docker doesn't
 seem an urgent feature to me. It is not blocking anything besides the move
 to full package-based deployment of Fuel, as far as I understand. So it
 could easily be delayed by one milestone, especially as it is already
 almost done and submitted for review, so it could be merged quickly before
 any other significant changes land in 'master' after it opens.

 --
 Best regards,
 Oleg Gelbukh

 On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Vladimir,
>
> I have other activities planned for the time immediately after SCF
> (separating the UI from fuel-web, maybe even more invasive :-)) and it is
> not a big deal to postpone this feature or another. I am against the
> approach itself of postponing something because it is too invasive. If we
> create the stable branch, master becomes open. Opening master earlier
> rather than later was our primary intention when we decided to move
> stable branch creation.
>
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin  > wrote:
>
>> Vladimir
>>
>> I am pretty much for removing docker, but I do not think that we
>> should startle our developers/QA folks with additional efforts on fixing
>> 2 different environments. Let's just think from the point of development
>> velocity here and delay such changes until at least after NY. Because if
>> we do it immediately after SCF there will be a whole bunch of holidays
>> (Russian holidays are Jan 1st-10th) and you (who is the SME for docker
>> removal) will be offline. Do you really want to fix things instead of
>> enjoying the holidays?
>>
>> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:
>>
>>> +1 to Vladimir Kozhukalov,
>>>
>>> Entire point of moving branches creation to SCF was to perform such
>>> changes as
>>> early as possible in the release, I see no reasons to wait for HCF.
>>>
>>> Thanks,
>>>
>>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 -1

 We already discussed this and we have made a decision to move
 stable branch creation from HCF to SCF. There were reasons for this. We
 agreed that once stable branch is created, master becomes open for new
 features. Let's avoid discussing this again.

 Vladimir Kozhukalov

 On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
 bgaiful...@mirantis.com> wrote:

> +1
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> On 15 Dec 2015, at 22:19, Andrew Maksimov 
> wrote:
>
> +1
>
> Regards,
> Andrey Maximov
> Fuel Project Manager
>
> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin <
> vkuk...@mirantis.com> wrote:
>
>> Folks
>>
>> This email is a proposal to push Docker containers removal from
>> the master node to the date beyond 8.0 HCF.
>>
>> Here is why I propose to do so.
>>
>> Removal of Docker is a rather invasive change and may introduce a
>> lot of regressions. It may well affect how bugs are fixed - we might
>> have 2 ways of fixing them, and during SCF of 8.0 this may affect the
>> velocity of bug fixing, as you need to fix bugs 

[openstack-dev] [Fuel] Meeting Schedule for Dec / Jan and Holidays

2015-12-17 Thread Andrew Woodward
As we discussed in the meeting today, the normal meeting schedule of every
Thursday overlaps with a number of holiday times

Dec 24, Christmas Eve
Dec 31, New Years Eve
Jan 7, Russian Orthodox Christmas day // Part of New Years rest

We agreed for the following schedule.

Dec 24 will be moved to Dec 23.
Dec 31 is canceled.
Jan 7 is canceled.
Jan 14, return to our normal schedule.

For the Dec 23rd meeting, I will look to see if there is an opening in
the #openstack-meeting* rooms for us in our regular time slot; otherwise
we will conduct the meeting in #fuel-dev.

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] [radvd] Advertise tenant prefixes from router to outside

2015-12-17 Thread Carl Baldwin
On Thu, Dec 17, 2015 at 4:09 PM, Vladimir Eremin  wrote:
> Hi Carl,
>
> I’ll file an RFE for sure, thank you for the link to the process )
>
> So actually, we should announce all SUBNETS we’ve attached to the router.
> Otherwise it will not work, because the external network router will have
> no idea where the traffic should be routed back. It is an actual viability
> discriminator: subnets that are not attached count as unviable in the
> external network context.

The subnets attached to the router which are not controlled by a
subnet pool come straight from the user and there is no validation of
the addresses used, no overlap prevention, nor any other kind of
control.  We can't leave it up to tenants to advertise whatever
subnets they want to the external network.  The advertisements must
be limited to subnets allocated to the tenant by the operator of the
cloud with some mechanism for preventing overlap of addresses between
subnets.

A subnet that has an address scope was allocated from a pool defined
under that scope.  We know where the address came from and that it
will not overlap any other subnet in the same scope.
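The no-overlap guarantee that subnet pools provide can be illustrated with a small sketch (this is not Neutron code; the pool and the existing allocations below are made-up values) using Python's stdlib `ipaddress` module:

```python
import ipaddress


def allocate_from_pool(pool, existing, prefixlen):
    """Return the first subnet of the given prefix length carved from
    `pool` that does not overlap any already-allocated subnet."""
    for candidate in ipaddress.ip_network(pool).subnets(new_prefix=prefixlen):
        if not any(candidate.overlaps(s) for s in existing):
            return candidate
    raise ValueError("pool exhausted")


# Hypothetical address-scope pool and two prior allocations.
pool = "2001:db8::/48"
existing = [ipaddress.ip_network("2001:db8::/64"),
            ipaddress.ip_network("2001:db8:0:1::/64")]

# Because every allocation is carved from the same pool and checked
# against prior allocations, overlap is impossible by construction.
new = allocate_from_pool(pool, existing, 64)
print(new)  # 2001:db8:0:2::/64
```

Because every subnet in the scope comes from the same pool, the router can safely advertise all of them without the risk of clashing prefixes.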

For subnets that don't meet these criteria, their traffic should not
even be routed out to the external network in the first place let
alone get a route back to the router.  The address pools blueprint
covers this too.

> BTW, could you please point me to the spec for address scopes.

Sure:  https://blueprints.launchpad.net/neutron/+spec/address-scopes

Carl



[openstack-dev] [telemetry][ceilometer][aodh] Only support sql for alarm data

2015-12-17 Thread liu sheng
Hi operators and developers,


We have supported a dedicated database for alarm data[1], which allows
deployers to configure Ceilometer to store alarm data in a separate
database. In Liberty, we split the alarming service out of the Ceilometer
tree as the separate project Aodh. Continuing from the Ceilometer
implementation, we currently support MongoDB, HBase, and SQLAlchemy storage
drivers for alarm data, but we have agreed to deprecate the MongoDB and
HBase support for alarm data and keep only SQL. The reasons for and
advantages of this change are described in the proposal[2]. We will
deprecate them in this release cycle, try to provide a data migration tool
for migrating data from MongoDB/HBase to SQL, and remove the MongoDB and
HBase support in a future O* cycle.


For now, I want to confirm the effect of this change on your current
deployments/products; your feedback is appreciated.


[1] https://blueprints.launchpad.net/ceilometer/+spec/dedicated-alarm-database
[2] 
https://review.openstack.org/#/c/258283/2/specs/mitaka/only-support-sqlalchemy-in-aodh.rst
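The migration tool mentioned above does not exist yet; a minimal sketch of the idea, with `sqlite3` standing in for the SQL backend and a plain list of dicts standing in for documents exported from MongoDB (the field names are illustrative, not the actual Aodh schema), might look like:

```python
import json
import sqlite3

# Hypothetical alarm documents as they might be exported from MongoDB.
alarms = [
    {"alarm_id": "a1", "name": "cpu_high", "state": "alarm",
     "rule": {"threshold": 90.0, "metric": "cpu_util"}},
    {"alarm_id": "a2", "name": "disk_full", "state": "ok",
     "rule": {"threshold": 95.0, "metric": "disk.usage"}},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alarm (alarm_id TEXT PRIMARY KEY, "
             "name TEXT, state TEXT, rule TEXT)")

# Nested schemaless fields are serialized to JSON text columns, a common
# way to map document stores onto a relational schema.
conn.executemany(
    "INSERT INTO alarm VALUES (?, ?, ?, ?)",
    [(a["alarm_id"], a["name"], a["state"], json.dumps(a["rule"]))
     for a in alarms])

rows = conn.execute(
    "SELECT alarm_id, state FROM alarm ORDER BY alarm_id").fetchall()
print(rows)  # [('a1', 'alarm'), ('a2', 'ok')]
```

A real tool would read from the deprecated backend's driver and write through the SQLAlchemy models, but the shape of the work, serializing nested fields and bulk-inserting rows, is the same.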


Best regards
Liu Sheng


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-17 Thread Robert Collins
On 18 December 2015 at 16:58, Mike Bayer  wrote:
>
>

>> Or do you still consider SQLA / Alembic as just a 3rd-party lib for
>> OpenStack? Wouldn't it be nice to have it maintained directly in
>> OpenStack infra? Your thoughts?
>
> Alembic / SQLAlchemy are completely outside of OpenStack and are
> intrinsic to thousands of non-OpenStack environments and userbases.  I
> wonder why we don't ask the same question of other OpenStack
> dependencies, like numpy, lxml, boto, requests, RabbitMQ, and everything
> else.

What's happening organically is that many contributing orgs also
contribute to projects like libvirt, SQLAlchemy and so on - so there's
some 'common funding source' pattern happening - but IMO it's entirely
appropriate to consider these projects as independent. Because.. they
are :).

> As far as it being *gated*, that is already the plan within OpenStack
> itself via the upper-constraints system discussed in this thread, which
> I mistakenly thought was already in use across the board.  That is, a
> new release of library X hits PyPI, a series of CI jobs dedicated to
> testing new releases of libs above the upper-constraints pins runs tests
> on it to see if it breaks current OpenStack applications, and if so, the
> constraints file stays unchanged and the bulk of gate jobs remain
> unaffected.

Yeah, we're rolling that out more broadly at the moment. cookiecutter
has been updated, and there's now a review in the infra manual setting
-constraints as the expected pattern.
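The mechanism described above amounts to pinning every dependency to a known-good version until CI proves a newer one safe. A small sketch of how such a pin list gates new releases (the file excerpt is made up; real gate jobs simply pass the file to pip with `-c upper-constraints.txt`):

```python
def parse_constraints(text):
    """Parse 'name===version' lines as used by upper-constraints.txt."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, version = line.partition("===")
            pins[name.lower()] = version
    return pins


# Hypothetical excerpt of an upper-constraints file.
constraints = parse_constraints("""\
# known-good versions for the integrated gate
SQLAlchemy===1.0.11
alembic===0.8.4
""")


def is_allowed(name, version):
    """A new release is only used by the gate once its pin is bumped."""
    pinned = constraints.get(name.lower())
    return pinned is None or pinned == version


print(is_allowed("sqlalchemy", "1.0.11"))  # True: matches the pin
print(is_allowed("sqlalchemy", "1.1.0"))   # False: pin unchanged, gate unaffected
```

So a breaking SQLAlchemy release sitting on PyPI never reaches the bulk of gate jobs; only the constraint-bump review runs against it.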

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud
