Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Konstantin Kalin
I would add, on top of what Dmitry said, that HA queues also increase the
probability of message duplication under certain scenarios (besides being
~10x slower). Would OpenStack services tolerate a duplicated RPC request?
What I've learned so far - no. Also, with
cluster_partition_handling=autoheal (what we currently have), messages may
be lost during failover scenarios just like with non-HA queues. Honestly, I
believe there is no difference between HA queues and non-HA queues in
RPC-layer fault tolerance, given the way we use RabbitMQ.
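To make the duplication point concrete, here is a toy sketch (not how any
OpenStack service actually works) of why a non-idempotent RPC handler needs
an explicit de-duplication guard to survive redelivery:

    # Toy sketch only: why duplicated RPC requests are dangerous for a
    # non-idempotent handler, and what a de-duplication guard looks like.
    created = []            # stands in for "VMs spawned"
    processed_ids = set()   # message ids we have already handled

    def handle_spawn(msg_id, payload):
        if msg_id in processed_ids:
            return          # duplicate redelivery after failover - ignore it
        processed_ids.add(msg_id)
        created.append(payload)   # without the guard this would run twice

    handle_spawn("req-1", {"name": "vm-1"})
    handle_spawn("req-1", {"name": "vm-1"})   # redelivered duplicate
    assert len(created) == 1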

Thank you,
Konstantin. 

> On Dec 2, 2015, at 4:05 AM, Dmitry Mescheryakov  
> wrote:
> 
> 
> 
> 2015-12-02 12:48 GMT+03:00 Sergii Golovatiuk  >:
> Hi,
> 
> 
> On Tue, Dec 1, 2015 at 11:34 PM, Peter Lemenkov  > wrote:
> Hello All!
> 
> Well, side-effects (or any other effects) are quite obvious and
> predictable - this will decrease availability of RPC queues a bit.
> That's for sure.
> 
> Imagine the case when user creates VM instance, and some nova messages are 
> lost. I am not sure we want half-created instances. Who is going to clean up 
> them? Since we do not have results of destructive tests, I vote -2 for FFE 
> for this feature.
> 
> Sergii, actually messaging layer can not provide any guarantee that it will 
> not happen even if all messages are preserved. Assume the following scenario:
> 
>  * nova-scheduler (or conductor?) sends request to nova-compute to spawn a VM
>  * nova-compute receives the message and spawned the VM
>  * due to some reason (rabbitmq unavailable, nova-compute lagged) 
> nova-compute did not respond within timeout (1 minute, I think)
>  * nova-scheduler does not get response within 1 minute and marks the VM with 
> Error status.
> 
> In that scenario no message was lost, but still we have a VM half spawned and 
> it is up to Nova to handle the error and do the cleanup in that case.
> 
> Such issue already happens here and there when something glitches. For 
> instance our favorite MessagingTimeout exception could be caused by such 
> scenario. Specifically, in that example when nova-scheduler times out waiting 
> for reply, it will throw exactly that exception. 
> 
> My point is simple - lets increase our architecture scalability by 2-3 times 
> by _maybe_ causing more errors for users during failover. The failover time 
> itself should not get worse (to be tested by me) and errors should be 
> correctly handler by services anyway.
> 
> 
> However, Dmitry's guess is that the overall messaging backplane
> stability increase (RabitMQ won't fail too often in some cases) would
> compensate for this change. This issue is very much real - speaking of
> me I've seen an awful cluster's performance degradation when a failing
> RabbitMQ node was killed by some watchdog application (or even worse
> wasn't killed at all). One of these issues was quite recently, and I'd
> love to see them less frequently.
> 
> That said I'm uncertain about the stability impact of this change, yet
> I see a reasoning worth discussing behind it.
> 
> 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk  >:
> > Hi,
> >
> > -1 for FFE for disabling HA for RPC queue as we do not know all side effects
> > in HA scenarios.
> >
> > On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
> > > wrote:
> >>
> >> Folks,
> >>
> >> I would like to request feature freeze exception for disabling HA for RPC
> >> queues in RabbitMQ [1].
> >>
> >> As I already wrote in another thread [2], I've conducted tests which
> >> clearly show benefit we will get from that change. The change itself is a
> >> very small patch [3]. The only thing which I want to do before proposing to
> >> merge this change is to conduct destructive tests against it in order to
> >> make sure that we do not have a regression here. That should take just
> >> several days, so if there will be no other objections, we will be able to
> >> merge the change in a week or two timeframe.
> >>
> >> Thanks,
> >>
> >> Dmitry
> >>
> >> [1] https://review.openstack.org/247517 
> >> 
> >> [2]
> >> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html
> >>  
> >> 
> >> [3] https://review.openstack.org/249180 
> >> 
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> >> 
> >> 

Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-12-02 Thread Dulko, Michal
On Tue, 2015-12-01 at 11:45 -0800, Vilobh Meshram wrote:

> Having worked in the area of Quotas for a while now by introducing
> features like Cinder Nested Quota Driver [1] [2] I strongly feel that
> something like a Nova Quota sub-team will definitely help. Mentioning
> about Cinder Quota driver since it was accepted in Mitaka design
> summit that Nova Nested Quota Driver[3] would like to pursue the route
> taken by Cinder.  Since Nested quota is a one part of Quota subsystem
> and working in small team helped to iterate quickly for Nested Quota
> patches[4][5][6][7] so IMHO forming a Nova quota subteam will help.

Just FYI - recently we've identified several caveats in Cinder's nested
quotas approach. The main issue is the inability to function without the
Keystone V3 API. I'm not sure if dropping support for V2 was intentional.
Apart from that, some exceptions are silenced, which results in odd behavior
when calling quotas as a non-admin user.

I don't want to diminish the work you've done, but just to signal that
quota management functionality isn't trivial to work on.

On the whole topic - I think this may even be a cross-project effort. In
the case of Cinder, we have quotas code that's very similar to Nova's, so I
will be watching the subteam's work very closely for any improvements
that can be applied to Cinder. We're struggling with quotas getting out
of sync all the time.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [oslo.messaging] [fuel] [ha] Is Swift going to support oslo.messaging?

2015-12-02 Thread Denis Egorenko
Hi Mehdi,

Thank you for your reply! It works for us.

2015-12-01 20:40 GMT+03:00 Mehdi Abaakouk :

> Hi,
>
> Current scheme supports only one RabbitMQ node with url parameter.
>>
>
> That's not true, you can pass many hosts via the url like that:
> rabbit://user:pass@host1:port1,user:pass@host2:port2/vhost
>
>
> http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo_messaging.TransportURL
>
> But this is perhaps not enough for your use-case.
>
> Cheers,
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
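For reference, a minimal sketch of how such a multi-host URL can be consumed
on our side (host names and credentials below are made up):

    from oslo_config import cfg
    import oslo_messaging

    url = "rabbit://user:pass@host1:5672,user:pass@host2:5672/myvhost"

    # TransportURL.parse() splits the comma-separated host list into
    # individual hosts; the rabbit driver fails over between them.
    parsed = oslo_messaging.TransportURL.parse(cfg.CONF, url)
    for host in parsed.hosts:
        print(host.hostname, host.port, host.username)

    # The same URL string can also be handed straight to get_transport().
    transport = oslo_messaging.get_transport(cfg.CONF, url)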



-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Dmitry Mescheryakov
2015-12-02 13:11 GMT+03:00 Bogdan Dobrelya :

> On 01.12.2015 23:34, Peter Lemenkov wrote:
> > Hello All!
> >
> > Well, side-effects (or any other effects) are quite obvious and
> > predictable - this will decrease availability of RPC queues a bit.
> > That's for sure.
>
> And consistency. Without messages and queues being synced between all of
> the rabbit_hosts, how exactly dispatching rpc calls would work then
> workers connected to different AMQP urls?
>

There will be no problem with consistency here. Since we will disable HA,
queues will not be synced across the cluster and there will be exactly one
node hosting messages for a queue.
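For illustration only, the effect boils down to scoping the mirroring policy
so that it no longer matches the RPC queues. A rough sketch via the RabbitMQ
management HTTP API follows; the credentials, vhost and queue-name pattern
are made up, and the actual patch changes how the policy is set up during
deployment rather than doing it like this:

    import json
    import requests

    # Queues not matched by any ha-* policy are not mirrored and live only
    # on the node where they were declared.
    MGMT = "http://rabbit-host:15672/api/policies/%2F/ha-all"

    policy = {
        "pattern": "^(?!amq\\.)(?!.*reply).*",   # illustrative: skip RPC reply queues
        "definition": {"ha-mode": "all"},
        "apply-to": "queues",
        "priority": 0,
    }

    resp = requests.put(MGMT, auth=("guest", "guest"),
                        headers={"Content-Type": "application/json"},
                        data=json.dumps(policy))
    resp.raise_for_status()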


> Perhaps that change would only raise the partitions tolerance to the
> very high degree? But this should be clearly shown by load tests - under
> network partitions with mirroring against network partitions w/o
> mirroring. Rally could help here a lot.


Nope, the change will not increase partitioning tolerance at all. What I
expect is that it will not get worse. Regarding tests, sure we are going to
perform destructive testing to verify that there is no regression in
recovery time.


>
> >
> > However, Dmitry's guess is that the overall messaging backplane
> > stability increase (RabitMQ won't fail too often in some cases) would
> > compensate for this change. This issue is very much real - speaking of
>
> Agree, that should be proven by (rally) tests for the specific case I
> described in the spec [0]. Please correct it as I may understand things
> wrong, but here it is:
> - client 1 submits RPC call request R to the server 1 connected to the
> AMQP host X
> - worker A listens for jobs topic to the AMQP host X
> - worker B listens for jobs topic to the AMQP host Y
> - a job by the R was dispatched to the worker B
> Q: would the B never receive its job message because it just cannot see
> messages at the X?
> Q: timeout failure as the result.
>
> And things may go even much more weird for more complex scenarios.
>

Yes, in the described scenario B will receive the job: node Y will proxy
B's consumption of the queue hosted on node X, so we will not experience a
timeout. Also, I have replied in the review.


>
> [0] https://review.openstack.org/247517
>
> > me I've seen an awful cluster's performance degradation when a failing
> > RabbitMQ node was killed by some watchdog application (or even worse
> > wasn't killed at all). One of these issues was quite recently, and I'd
> > love to see them less frequently.
> >
> > That said I'm uncertain about the stability impact of this change, yet
> > I see a reasoning worth discussing behind it.
>
> I would support this to the 8.0 if only proven by the load tests within
> scenario I described plus standard destructive tests


As I said in my initial email, I've run the boot_and_delete_server_with_secgroups
Rally scenario to verify my change. I think I should provide more details:

The scale team considers this test to be the worst case we have for RabbitMQ.
I ran the test on a 200-node lab and what I saw is that when I disable
HA, the test time is cut in half. That clearly shows that there is a
test where our current messaging system is the bottleneck, and just tuning it
considerably improves the performance of OpenStack as a whole. Also, while
there was a small failure rate in HA mode (around 1-2%), in non-HA mode all
tests always completed successfully.

Overall, I think the current results are already enough to consider the change
useful. What is left is to confirm that it does not make our failover behavior worse.


> >
> > 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk :
> >> Hi,
> >>
> >> -1 for FFE for disabling HA for RPC queue as we do not know all side
> effects
> >> in HA scenarios.
> >>
> >> On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
> >>  wrote:
> >>>
> >>> Folks,
> >>>
> >>> I would like to request feature freeze exception for disabling HA for
> RPC
> >>> queues in RabbitMQ [1].
> >>>
> >>> As I already wrote in another thread [2], I've conducted tests which
> >>> clearly show benefit we will get from that change. The change itself
> is a
> >>> very small patch [3]. The only thing which I want to do before
> proposing to
> >>> merge this change is to conduct destructive tests against it in order
> to
> >>> make sure that we do not have a regression here. That should take just
> >>> several days, so if there will be no other objections, we will be able
> to
> >>> merge the change in a week or two timeframe.
> >>>
> >>> Thanks,
> >>>
> >>> Dmitry
> >>>
> >>> [1] https://review.openstack.org/247517
> >>> [2]
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html
> >>> [3] https://review.openstack.org/249180
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> 

Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Vladimir Kuklin
Dmitry

Although I am a big fan of disabling replication for RPC, I think it is
too late to introduce it as the default now. I would suggest that we
control this part of the OCF script with a specific parameter (e.g. 'enable
RPC replication') and set it to 'true' by default. Then we can set this
option to false as an experimental feature, run some tests and decide whether
it should be enabled by default or not. In this case, users who are interested
in this will be able to enable it when they need it, while we still stick
to our old and tested approach.

On Wed, Dec 2, 2015 at 5:52 PM, Konstantin Kalin 
wrote:

> I would add on top of that Dmirty said that HA queues also increases
> probability to have messages duplications under certain scenarios (besides
> of that they are ~10x slower). Would Openstack services tolerate if RPC
> request will be duplicated? What I've already learned - No. Also if
> cluster_partition_handling=autoheal (what we currently have) the messages
> may be lost as well during the failover scenarios like non-HA
> queues. Honestly I believe there is no difference between HA queues and non
> HA-queues in RPC layer fail-tolerance in the way how we use RabbitMQ.
>
> Thank you,
> Konstantin.
>
> On Dec 2, 2015, at 4:05 AM, Dmitry Mescheryakov <
> dmescherya...@mirantis.com> wrote:
>
>
>
> 2015-12-02 12:48 GMT+03:00 Sergii Golovatiuk :
>
>> Hi,
>>
>>
>> On Tue, Dec 1, 2015 at 11:34 PM, Peter Lemenkov 
>> wrote:
>>
>>> Hello All!
>>>
>>> Well, side-effects (or any other effects) are quite obvious and
>>> predictable - this will decrease availability of RPC queues a bit.
>>> That's for sure.
>>>
>>
>> Imagine the case when user creates VM instance, and some nova messages
>> are lost. I am not sure we want half-created instances. Who is going to
>> clean up them? Since we do not have results of destructive tests, I vote -2
>> for FFE for this feature.
>>
>
> Sergii, actually messaging layer can not provide any guarantee that it
> will not happen even if all messages are preserved. Assume the following
> scenario:
>
>  * nova-scheduler (or conductor?) sends request to nova-compute to spawn a
> VM
>  * nova-compute receives the message and spawned the VM
>  * due to some reason (rabbitmq unavailable, nova-compute lagged)
> nova-compute did not respond within timeout (1 minute, I think)
>  * nova-scheduler does not get response within 1 minute and marks the VM
> with Error status.
>
> In that scenario no message was lost, but still we have a VM half spawned
> and it is up to Nova to handle the error and do the cleanup in that case.
>
> Such issue already happens here and there when something glitches. For
> instance our favorite MessagingTimeout exception could be caused by such
> scenario. Specifically, in that example when nova-scheduler times out
> waiting for reply, it will throw exactly that exception.
>
> My point is simple - lets increase our architecture scalability by 2-3
> times by _maybe_ causing more errors for users during failover. The
> failover time itself should not get worse (to be tested by me) and errors
> should be correctly handler by services anyway.
>
>
>>> However, Dmitry's guess is that the overall messaging backplane
>>> stability increase (RabitMQ won't fail too often in some cases) would
>>> compensate for this change. This issue is very much real - speaking of
>>> me I've seen an awful cluster's performance degradation when a failing
>>> RabbitMQ node was killed by some watchdog application (or even worse
>>> wasn't killed at all). One of these issues was quite recently, and I'd
>>> love to see them less frequently.
>>>
>>> That said I'm uncertain about the stability impact of this change, yet
>>> I see a reasoning worth discussing behind it.
>>>
>>> 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk :
>>> > Hi,
>>> >
>>> > -1 for FFE for disabling HA for RPC queue as we do not know all side
>>> effects
>>> > in HA scenarios.
>>> >
>>> > On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
>>> >  wrote:
>>> >>
>>> >> Folks,
>>> >>
>>> >> I would like to request feature freeze exception for disabling HA for
>>> RPC
>>> >> queues in RabbitMQ [1].
>>> >>
>>> >> As I already wrote in another thread [2], I've conducted tests which
>>> >> clearly show benefit we will get from that change. The change itself
>>> is a
>>> >> very small patch [3]. The only thing which I want to do before
>>> proposing to
>>> >> merge this change is to conduct destructive tests against it in order
>>> to
>>> >> make sure that we do not have a regression here. That should take just
>>> >> several days, so if there will be no other objections, we will be
>>> able to
>>> >> merge the change in a week or two timeframe.
>>> >>
>>> >> Thanks,
>>> >>
>>> >> Dmitry
>>> >>
>>> >> [1] https://review.openstack.org/247517
>>> >> [2]
>>> >>
>>> 

Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-02 Thread Sylvain Bauza



On 02/12/2015 15:23, Sean Dague wrote:

We have previously agreed that scheduler hints in Nova are an open ended
thing. It's expected for sites to have additional scheduler filters
which expose new hints. The way we handle that with our strict
jsonschema is that we allow additional properties -
https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/api/openstack/compute/schemas/scheduler_hints.py#L65

This means that if you specify some garbage hint, you don't get feedback
that it was garbage in your environment. That cost us a couple of days
when building multinode tests in the gate. Having gotten used to being told
"you've given us bad stuff", this was a stark change back to the
old world.

Would it be possible to make it so that the schema could be explicitly
extended (instead of implicitly extended)? That is, additionalProperties=False,
but with a mechanism for a scheduler filter to register its jsonschema?


I'm pretty much +1 for that, because we want in-tree filters to be clear
about the UX they provide when asking for scheduler hints.


For the moment, it's possible to have 2 different filters asking for the
same hint without providing a way to explain the semantics, so I would
want to make sure that an in-tree filter has the same behaviour for *all
OpenStack deployments.*


That said, I remember some discussion we had about that in the past, and
the implementation details we discussed about having the Nova API know
the list of filters and filter by those.
To be clear, I want to make sure that we do not leak the deployment
by returning a 401 if a filter is not deployed, but rather just make
sure that all our in-tree filters are checked, even if they aren't
deployed.


That leaves the out-of-tree discussion about custom filters and how we
could have consistent behaviour given that. Should we accept something
in a specific deployment while another deployment would return a 401 for it?
Mmm, that seems bad to me IMHO.



-Sylvain


-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][inspector] CMDB integration

2015-12-02 Thread Serge Kovaleff
On Wed, Dec 2, 2015 at 5:23 PM, Dmitry Tantsur  wrote:

> What tripleo currently does is creating a JSON file with credentials in
> advance, then enroll nodes with it.


We were considering CSV as a starter.
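Something as small as the following would probably do for a first iteration
(a sketch; the CSV column layout and the driver fields are made up for
illustration):

    import csv

    # nodes.csv (illustrative layout): ipmi_address,ipmi_user,ipmi_password,mac
    def load_nodes(path):
        nodes = []
        with open(path) as f:
            for row in csv.DictReader(f):
                nodes.append({
                    "driver": "ipmi",   # hypothetical; whatever driver is in use
                    "driver_info": {
                        "ipmi_address": row["ipmi_address"],
                        "ipmi_username": row["ipmi_user"],
                        "ipmi_password": row["ipmi_password"],
                    },
                    "mac": row["mac"],
                })
        return nodes

    # The resulting dicts could then be fed to ironic enrollment or to
    # inspector's data filters.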

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [glance] Proposal to add Abhishek to Glance core team

2015-12-02 Thread Flavio Percoco

On 01/12/15 01:21 -0500, Nikhil Komawar wrote:

Hi,

As the requested (re-voting) on [1] seemed to conflict with the thread
title, I am __init__ing a new thread for the sake of clarity, closure
and ease of vote.

Please do provide feedback on the proposal by me on this thread [1].
Other reference links are [2] and [3].

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/thread.html#80279
[2]
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-10-01-14.01.log.html#l-70

[3] https://launchpad.net/~abhishek-kekane


Hey Nikhil,

First and foremost, thanks for sending this out and helping with
growing the community.

Back when you proposed both Sabari and Abhishek in that meeting, I
mentioned that I wanted to first get our priorities straight before we
added more folks to the team.

Now that we've done that, it sounds like a great opportunity to expand
our cores team. However, I still feel we should move forward one step
at a time, at least during Mitaka.

My main reason for the above is that we have very specific areas that
we'd like to focus on during this cycle and we need more expertise in
those areas for now. That is, I'd like our team to be more focused on
helping with fixes, blueprints and discussion in those specific areas.

I agree with you that Abhishek has been doing a great job and that he's
helped w/ reviews quite a bit. Unfortunately, Glance is going through
a period where focus is at the top of its priorities and, while I
don't think Abhishek would harm that, I do believe our actions and
steps as a community need to be clear and respect our goals.

We'll be doing a clean up soon, now that M-1 is about to go out. I'd
be more than happy to reconsider this later on.

The above is, of course, my personal opinion. I do want others to
chime in and provide their feedback.

All that said, I hope my intentions are clear and they don't come across as
harsh. Abhishek, I have huge respect for your work and your dedication
to the project. I would hate for you to take my words the wrong way.

I'm happy to expand and discuss this further if needed.
Flavio



--

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] Hardware composition

2015-12-02 Thread Dmitry Tantsur

On 12/01/2015 02:44 PM, Vladyslav Drok wrote:

Hi list!

There is an idea of making use of hardware composition (e.g.
http://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-architecture/intel-rack-scale-architecture-resources.html)
to create nodes for ironic.

The current proposal is:

1. To create hardware-compositor service under ironic umbrella to manage
this composition process. Its initial implementation will support Intel
RSA, other technologies may be added in future. At the beginning, it
will contain the most basic CRUD logic for composed system.


My concern with this idea is that it would have to have its own drivers,
maybe overlapping with ironic drivers. I'm not sure what prevents you
from bringing it into ironic (e.g. in the case of ironic-inspector the
blockers were mostly HA problems; I don't see anything that problematic in
your proposal).




2. Add logic to nova to compose a node using this new project and
register it in ironic if the scheduler is not able to find any ironic
node matching the flavor. An alternative (as pointed out by Devananda
during yesterday's meeting) could be using it in ironic by claims API
when it's implemented (https://review.openstack.org/204641).

3. If implemented in nova, there will be no changes to ironic right now
(apart from needing the driver to manage these composed nodes, which is
redfish I believe), but there are cases when it may be useful to call
this service from ironic directly, e.g. to free the resources when a
node is deleted.


That's why I suggest just implementing it in ironic.

As a side note, some people (myself included) would really appreciate 
notifications on node deletion, and I think it's being worked on right now.




Thoughts?

Thanks,
Vlad


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze is soon

2015-12-02 Thread Sheena Gregson
Is the meeting at 8am PST today?



*From:* Mike Scherbakov [mailto:mscherba...@mirantis.com]
*Sent:* Wednesday, December 02, 2015 1:57 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel] Feature Freeze is soon



In order to be effective, I created an etherpad to go over:

https://etherpad.openstack.org/p/8.0-features-status



I'd like to call everyone to update status of blueprints, so that we can
have accurate picture of 8.0 deliverables. During the meeting, I'd like us
to quickly sync on FFEs and clarify status of major blueprints (if it won't
be updated by some reason).



In fact, we'd need to go over two first sections of etherpad (around 15
items now). I assume that 1 hour will be enough, and ideally to go quicker.
If I'm missing anything believed to be major, please move it in there.



Thanks,



On Tue, Dec 1, 2015 at 1:37 AM Vladimir Kuklin  wrote:

Mike I think, it is rather good idea. I guess we can have a couple of
requests still - although everyone is shy, we might get a little storm of
FFE's. BTW, I will file at least one.



On Tue, Dec 1, 2015 at 10:28 AM, Mike Scherbakov 
wrote:

Hi Fuelers,

we are a couple of days away from FF [1]. I have not noticed any requests for
feature freeze exceptions, so I assume that we have pretty much decided what
is going into 8.0 and what is not.



If there are items which we'd like to ask exception for, I'd like us to
have this requested now - so that we all can spend some time on analysis of
what is done and what is left, and on risks assessment. I'd suggest to not
consider any exception requests on the day of FF, as it doesn't leave us
time to spend on it.



To make a formal checkpoint of what is in and what is out, I suggest to get
together on FF day, Wednesday, and go over all the items we have been
working on in 8.0. What do you think folks? For instance, in #fuel-dev IRC
at 8am PST (4pm UTC)?



[1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule

-- 

Mike Scherbakov
#mihgen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 

Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] jsonschema for scheduler hints

2015-12-02 Thread Sean Dague
We have previously agreed that scheduler hints in Nova are an open ended
thing. It's expected for sites to have additional scheduler filters
which expose new hints. The way we handle that with our strict
jsonschema is that we allow additional properties -
https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/api/openstack/compute/schemas/scheduler_hints.py#L65

This means that if you specify some garbage hint, you don't get feedback
that it was garbage in your environment. That cost us a couple of days
when building multinode tests in the gate. Having gotten used to being told
"you've given us bad stuff", this was a stark change back to the
old world.

Would it be possible to make it so that the schema could be explicitly
extended (instead of implicitly extended)? That is, additionalProperties=False,
but with a mechanism for a scheduler filter to register its jsonschema?
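For illustration, a rough sketch of such a registration mechanism (the hint
names, their types and the registry itself are made up, not the actual Nova
schema):

    import jsonschema

    # Base schema: scheduler hints are closed by default.
    hints_schema = {
        "type": "object",
        "properties": {},
        "additionalProperties": False,
    }

    def register_hint(name, schema):
        """A scheduler filter declares the hint(s) it understands at load time."""
        hints_schema["properties"][name] = schema

    # Illustrative registrations by in-tree filters:
    register_hint("group", {"type": "string"})
    register_hint("different_host",
                  {"type": ["string", "array"], "items": {"type": "string"}})

    # A typo'd or unknown hint now fails fast instead of being silently ignored:
    jsonschema.validate({"group": "some-uuid"}, hints_schema)        # ok
    try:
        jsonschema.validate({"difrent_host": "x"}, hints_schema)     # typo
    except jsonschema.ValidationError as exc:
        print("rejected:", exc.message)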

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze is soon

2015-12-02 Thread Igor Kalnitsky
Sheena,

Yeah, we will have a meeting in #fuel-dev IRC channel. :)

- Igor

On Wed, Dec 2, 2015 at 4:25 PM, Sheena Gregson  wrote:
> Is the meeting at 8am PST today?
>
>
>
> From: Mike Scherbakov [mailto:mscherba...@mirantis.com]
> Sent: Wednesday, December 02, 2015 1:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Fuel] Feature Freeze is soon
>
>
>
> In order to be effective, I created an etherpad to go over:
>
> https://etherpad.openstack.org/p/8.0-features-status
>
>
>
> I'd like to call everyone to update status of blueprints, so that we can
> have accurate picture of 8.0 deliverables. During the meeting, I'd like us
> to quickly sync on FFEs and clarify status of major blueprints (if it won't
> be updated by some reason).
>
>
>
> In fact, we'd need to go over two first sections of etherpad (around 15
> items now). I assume that 1 hour will be enough, and ideally to go quicker.
> If I'm missing anything believed to be major, please move it in there.
>
>
>
> Thanks,
>
>
>
> On Tue, Dec 1, 2015 at 1:37 AM Vladimir Kuklin  wrote:
>
> Mike I think, it is rather good idea. I guess we can have a couple of
> requests still - although everyone is shy, we might get a little storm of
> FFE's. BTW, I will file at least one.
>
>
>
> On Tue, Dec 1, 2015 at 10:28 AM, Mike Scherbakov 
> wrote:
>
> Hi Fuelers,
>
> we are couple of days away from FF [1]. I have not noticed any request for
> feature freeze exception, so I assume that we pretty much decided what is
> going into 8.0 and what is not.
>
>
>
> If there are items which we'd like to ask exception for, I'd like us to have
> this requested now - so that we all can spend some time on analysis of what
> is done and what is left, and on risks assessment. I'd suggest to not
> consider any exception requests on the day of FF, as it doesn't leave us
> time to spend on it.
>
>
>
> To make a formal checkpoint of what is in and what is out, I suggest to get
> together on FF day, Wednesday, and go over all the items we have been
> working on in 8.0. What do you think folks? For instance, in #fuel-dev IRC
> at 8am PST (4pm UTC)?
>
>
>
> [1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule
>
> --
>
> Mike Scherbakov
> #mihgen
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
>
> Mike Scherbakov
> #mihgen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh][vitrage] The purpose of notification about alarm updating

2015-12-02 Thread AFEK, Ifat (Ifat)
Hi,

In the Vitrage [3] project, we would like to be notified about every alarm that
is triggered, and respond immediately (e.g. by generating RCA insights, or by
triggering new alarms on other resources). We are now in the process of
designing our integration with AODH.

If I understood you correctly, you want to remove the notifications to the bus,
but keep the alarm_actions in the alarm definition?
I'd be happy to get some more details about the difference between these two
approaches, and why you think the notifications should be removed.

[3] https://wiki.openstack.org/wiki/Vitrage
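For context, the kind of consumer we have in mind is an oslo.messaging
notification listener along these lines (a sketch only; the topic and the
event-type pattern are assumptions on our side):

    from oslo_config import cfg
    import oslo_messaging

    class AlarmEndpoint(object):
        # Assumed event-type pattern; the exact names depend on what aodh emits.
        filter_rule = oslo_messaging.NotificationFilter(event_type='alarm.*')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Feed the alarm into the RCA / deduction logic here.
            print(event_type, payload)

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [AlarmEndpoint()], executor='blocking')
    listener.start()
    listener.wait()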

Thanks,
Ifat.


>
> From: liusheng [mailto:liusheng1...@126.com] 
> Sent: Tuesday, December 01, 2015 4:32 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [telemetry][aodh] The purpose of notification about 
> alarm updating
>
> Hi folks,
>
> Currently, a notification message will be emitted when updating an alarm 
> (state  transition, attribute updating, creation),  this > functionality was 
> added by change[1], but the change didn't describe any purpose. So I wonder 
> whether there is any usage of this 
> type of notification, we can get the whole details about alarm change by 
> alarm-history API.  the notification is implicitly 
> ignored by default, because the "notification_driver" config option won't be 
> configured by default.  if we enable this option in 
> aodh.conf and enable the "store_events" in ceilometer.conf, this type of 
> notifications will be stored as events. so maybe some 
> users want to aggregate this with events ? what's your opinion ?
>
> I have made a change try to deprecate this notification, see [2].
>
> [1] https://review.openstack.org/#/c/48949/
> [2] https://review.openstack.org/#/c/246727/
>
> BR
> Liu sheng

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][inspector] CMDB integration

2015-12-02 Thread Dmitry Tantsur

On 11/30/2015 03:07 PM, Pavlo Shchelokovskyy wrote:

Hi all,

we are looking at how ironic-inspector could integrate with external
CMDB solutions and be able to fetch a minimal set of data needed for
discovery (e.g. IPMI credentials and IPs) from the CMDB. This could probably
be achieved with data filters framework that is already in place, but we
have one question:

what are people actually using? There are simple (but not conceivably
used in real life) choices to make a first implementation, like fetching
a csv file from HTTP link. Thus we want to learn if there is an already
known and working solution operators are actually using, either open
source or at least with open API.


What tripleo currently does is creating a JSON file with credentials in 
advance, then enroll nodes with it. There's no CMDB there, but the same 
flow might be preferred in this case as well.




We really appreciate if you chime in :) This would help us design this
feature the way that will benefit community the most.

Best regards,
--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][FFE] Provide possibilities to change VMware clusters on operational env

2015-12-02 Thread Andrian Noga
Colleagues,
I would like to request a feature freeze exception for "Provide possibilities
to change VMware clusters on operational env":
https://blueprints.launchpad.net/fuel/+spec/add-vmware-clusters

The specification is ready for merge: https://review.openstack.org/#/c/250469/12
The main UI patch is already merged: https://review.openstack.org/#/c/252358/
We still need to merge the changes in https://review.openstack.org/#/c/251278/
The change itself is a low-risk patch.
That should take just several days, so if there are no other objections, we
will be able to merge the change within a week.


Regards,
Andrian Noga
Project manager
Partners Centric Engineering Team,
Mirantis, Inc.
+38 (063) 966-21-24
Skype: bigfoot_ua
www.mirantis.com
an...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-02 Thread Bogdan Dobrelya
On 01.12.2015 11:28, Aleksandr Didenko wrote:
> Hi,
> 
>> pregenerated catalogs for the Noop tests to become the very first
>> committed state in the data regression process has to be put in the
>> *separate repo*
> 
> +1 to that, we can put this new repo into .fixtures.yml
> 
>> note, we could as well move the tests/noop/astute.yaml/ there
> 
> +1 here too, astute.yaml files are basically configuration fixtures, we
> can put them into .fixtures.yml as well

I found a better - and easier for patch authors - way to use the data
regression checks. The originally suggested workflow was:
1.
"The check should be done for every modular component (aka deployment
task). Data generated in the noop catalog run for all classes and
defines of a given deployment task should be verified against its
"acknowledged" (committed) state."

This part remains the same, with the only comment that the astute.yaml
fixtures of deployment cases should be fetched from the
fuel-noop-fixtures repo, and the committed state for the generated catalogs
should be stored there as well.

2.
"And fail the test gate, if changes has been found, like new parameter
with a defined value, removed a parameter, changed a parameter's value."

This should be changed as follows:
- the data checks gate should be just a non-voting helper for reviewers
and patch authors. Its only task would be to show the introduced data
changes in a pretty and fast view, to help accept/update/reject a patch
on review.
- the data checks gate job should fetch the committed data state from
the fuel-noop-fixtures repo and run the regression check with the patch
under review checked out in the fuel-library repo.
- the Noop tests gate should be changed to fetch the astute.yaml
fixtures from the fuel-noop-fixtures repo in order to run noop tests as
usual.

3.
"In order to remove a regression, a patch author will have to add (and
reviewers should acknowledge) detected changes in the committed state of
the deployment data. This may be done manually, with a tool like [3] or
by a pre-commit hook, or even at the CI side!"

Instead, the patch authors should not need to do anything extra. Once accepted
with wf+1, the patch on review should be merged with a pre-commit zuul
hook (is that possible?). The hook should just regenerate catalogs with
the changes introduced by the patch and update the committed state of
data in the fuel-noop-fixtures repo. After that, the patch may be safely
merged to fuel-library and everything will be up to date with the
committed data state.

4.
"The regression check should show the diff between committed state and a
new state proposed in a patch. Changed state should be *reviewed* and
accepted with a patch, to became a committed one. So the deployment data
will evolve with *only* approved changes. And those changes would be
very easy to be discovered for each patch under review process!"

So this part would work even better now, with no additional actions
required from either side of the review process.
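For illustration, the comparison itself can stay very simple; a sketch over
the sorted resource/parameter dumps the Noop framework already produces (the
file layout below is assumed):

    import yaml

    def load_catalog(path):
        # Each catalog dump is assumed to be a mapping:
        #   {"Resource[title]": {"param": "value", ...}, ...}
        with open(path) as f:
            return yaml.safe_load(f) or {}

    def diff_catalogs(committed, proposed):
        changes = []
        for res in sorted(set(committed) | set(proposed)):
            old, new = committed.get(res), proposed.get(res)
            if old == new:
                continue
            if old is None:
                changes.append("ADDED    %s" % res)
            elif new is None:
                changes.append("REMOVED  %s" % res)
            else:
                for p in sorted(set(old) | set(new)):
                    if old.get(p) != new.get(p):
                        changes.append("CHANGED  %s[%s]: %r -> %r"
                                       % (res, p, old.get(p), new.get(p)))
        return changes

    # e.g. (hypothetical paths):
    # for line in diff_catalogs(load_catalog("committed/networks.yaml"),
    #                           load_catalog("proposed/networks.yaml")):
    #     print(line)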

> 
> Regards,
> Alex
> 
> 
> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya  > wrote:
> 
> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
> >> Hi,
> >>
> >> let me try to rephrase this a bit and Bogdan will correct me if
> I'm wrong
> >> or missing something.
> >>
> >> We have a set of top-scope manifests (called Fuel puppet tasks)
> that we use
> >> for OpenStack deployment. We execute those tasks with "puppet
> apply". Each
> >> task supposed to bring target system into some desired state, so
> puppet
> >> compiles a catalog and applies it. So basically, puppet catalog =
> desired
> >> system state.
> >>
> >> So we can compile* catalogs for all top-scope manifests in master
> branch
> >> and store those compiled* catalogs in fuel-library repo. Then for
> each
> >> proposed patch CI will compare new catalogs with stored ones and
> print out
> >> the difference if any. This will pretty much show what is going to be
> >> changed in system configuration by proposed patch.
> >>
> >> We were discussing such checks before several times, iirc, but we
> did not
> >> have right tools to implement such thing before. Well, now we do
> :) I think
> >> it could be quite useful even in non-voting mode.
> >>
> >> * By saying compiled catalogs I don't mean actual/real puppet
> catalogs, I
> >> mean sorted lists of all classes/resources with all parameters
> that we find
> >> during puppet-rspec tests in our noop test framework, something like
> >> standard puppet-rspec coverage. See example [0] for networks.pp
> task [1].
> >>
> >> Regards,
> >> Alex
> >>
> >> [0] http://paste.openstack.org/show/477839/
> >> [1]
> >>
> 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
> >
> > 

[openstack-dev] [heat][tripleo] User Initiated Rollback

2015-12-02 Thread Steven Hardy
So, chatting with Giulio today about
https://bugs.launchpad.net/heat/+bug/1521944
has me thinking about $subject.

The root cause of that issue is essentially a corner case of a stack-update,
combined with some coupling within the Neutron API which prevents the
update traversal from working.

But it raises the broader question of what a "rollback" actually is, and
how a user can potentially use it to get out of the kind of mess described
in that bug (where, otherwise, your only option is to delete the entire
stack).

Currently, we treat rollback as a special type of update, where, if an
in-progress update fails, we then try to update again, to the previous
stack definition[1], but as Giulio has discovered, there are times when
that doesn't work, because what you actually want is to recover the
existing resource from the backup stack, not create a new one with the same
properties.

Then, looking at convergence, we have a different definition of rollback,
it's not yet clear to me how this should behave in a similar scenario, e.g
when the resource we want to roll back to failed to get deleted but still
exists (so, the resource is FAILED, but the underlying resource is fine)?

Finally, the interface to rollback - atm you have to know before something
fails that you'd like to enable rollback for a specific update.  This seems
suboptimal, since invariably by the time you know you need rollback, it's
too late.  Can we enable a user-initiated rollback from a FAILED state, via
one of:

 - Introduce a new heat API that allows an explicit heat stack-rollback?
 - (ab)use PATCH to trigger rollback on heat stack-update -x --rollback=True?

The former approach fits better with the current stack.Stack
implementation, because the ROLLBACK stack state already exists.  The
latter has the advantage that it doesn't need a new API so might be
backportable.
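Purely to make the two options concrete (both calls below are hypothetical
sketches of the proposed semantics, not an existing heat API; endpoint,
tenant and token are placeholders):

    import json
    import requests

    HEAT = "http://heat-api:8004/v1/TENANT_ID"
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # (a) hypothetical explicit rollback action, following the existing
    #     stack actions pattern (suspend/resume/check/cancel_update):
    requests.post(HEAT + "/stacks/mystack/STACK_ID/actions",
                  headers=HEADERS, data=json.dumps({"rollback": None}))

    # (b) hypothetical PATCH-based variant: re-use the PATCH update
    #     endpoint to ask for rollback of the failed update:
    requests.patch(HEAT + "/stacks/mystack/STACK_ID",
                   headers=HEADERS,
                   data=json.dumps({"disable_rollback": False}))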

Any thoughts on how we might proceed to make this situation better, and
enable folks to roll back in the least destructive way possible when they
end up in a FAILED state?

Steve

[1] https://github.com/openstack/heat/blob/master/heat/engine/stack.py#L1331
[2] https://github.com/openstack/heat/blob/master/heat/engine/stack.py#L1143

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Liberty release for L2GW

2015-12-02 Thread Sukhdev Kapur
Folks,

This is to let everybody know that Liberty release for L2GW project is
released and is available at - https://pypi.python.org/pypi/networking-l2gw

Feel free to download it and use it. Please let us know if you see any
issue with it.

Authors of outstanding or new patches, please use your best judgement to
back-port them appropriately.

Thanks
-Sukhdev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Patch size limit

2015-12-02 Thread Igor Kalnitsky
Hey folks,

I agree that patches must be as small as possible. I believe it will
significantly improve our review experience - faster reviews and,
therefore, faster landing to master.

However, I don't agree that we should introduce criteria based on LOC,
because of the reasons mentioned above. I believe that patches must be
atomic, no matter how many LOC they have. At the same time, we must not
treat the whole feature as the atomic unit here.

So basically my points are:

* Let's not go with a strict LOC limit. The decision whether it's ok to go
with one patch or not should be up to code reviewers.
* If a reviewer thinks that a patch could and should be split into a few,
then he/she sets -1 and asks the contributor to split it.
* Reviewers shouldn't hesitate to set -1 and ask to split a patch.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [ceph] Puppet Ceph CI

2015-12-02 Thread David Moreau Simard
I pushed an overly optimistic review [1] for updating OpenStack to Liberty.
I haven't had the time to look back at it yet.

The general idea was to defer the repository setup to openstack_extras and
pull in
the keystone setup mostly as-is directly from puppet-openstack-integration.

[1]: https://review.openstack.org/#/c/251531/



David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Dec 2, 2015 at 5:45 AM, David Gurtner  wrote:

> So from the discussion I gather we should do the following:
>
> - Update the jobs to run Infernalis
> - Split the RGW jobs into smaller chunks where one tests just the RGW and
> another one tests Keystone integration
> - Use Liberty (or at least Kilo) for the Keystone integration job
> - Split the tests more to have a test specifically for cephx functionality
> - re-enable the tests for CentOS once they work again
>
> Open points from my POV are:
>
> - should we test older Ceph versions via Jenkins (this would increase the
> runtime again)
> - should we still test CentOS 6 and Ubuntu 12.04
> - if yes, where
> - should we port more of the deprecated rspec-puppet-system tests? things
> I can think of are: 1) the profile tests 2) the
> scenario_node_terminus/hiera tests
>
> I'm happy to start working on the split of tests and the
> Infernalis/Liberty version bump tonight.
>
> Cheers,
> David
>
> - Original Message -
> > Hey Adam,
> >
> > A bit late here, sorry.
> > Ceph works fine with OpenStack Kilo but at the time we developed the
> > integration tests for puppet-ceph with Kilo, there were some issues
> > specific to our test implementation and we chose to settle with Juno
> > at the time.
> >
> > On the topic of CI, I can no longer sponsor the third party CI
> > (through my former employer, iWeb) as I am with Red Hat now.
> > I see this as an opportunity to drop the custom system tests with
> > vagrant and instead improve the acceptance tests.
> >
> > What do you think ?
> >
> >
> > David Moreau Simard
> > Senior Software Engineer | Openstack RDO
> >
> > dmsimard = [irc, github, twitter]
> >
> >
> > On Mon, Nov 23, 2015 at 6:45 PM, Adam Lawson  wrote:
> > > I'm confused, what is the context here? We use Ceph with OpenStack Kilo
> > > without issue.
> > >
> > > On Nov 23, 2015 2:28 PM, "David Moreau Simard"  wrote:
> > >>
> > >> Last I remember, David Gurtner tried to use Kilo instead of Juno but
> > >> he bumped into some problems and we settled for Juno at the time [1].
> > >> At this point we should already be testing against both Liberty and
> > >> Infernalis, we're overdue for an upgrade in that regard.
> > >>
> > >> But, yes, +1 to split acceptance tests:
> > >> 1) Ceph
> > >> 2) Ceph + Openstack
> > >>
> > >> Actually learning what failed is indeed challenging sometimes, I don't
> > >> have enough experience with the acceptance testing to suggest anything
> > >> better.
> > >> We have the flexibility of creating different logfiles, maybe we can
> > >> find a way to split out the relevant bits into another file.
> > >>
> > >> [1]: https://review.openstack.org/#/c/153783/
> > >>
> > >> David Moreau Simard
> > >> Senior Software Engineer | Openstack RDO
> > >>
> > >> dmsimard = [irc, github, twitter]
> > >>
> > >>
> > >> On Mon, Nov 23, 2015 at 2:45 PM, Andrew Woodward 
> wrote:
> > >> > I think I have a good lead on the recent failures in openstack /
> swift /
> > >> > radosgw integration component that we have since disabled. It looks
> like
> > >> > there is a oslo.config version upgrade conflict in the Juno repo we
> > >> > where
> > >> > using for CentOS. I think moving to Kilo will help sort this out,
> but at
> > >> > the
> > >> > same time I think it would be prudent to separate the Ceph v.s.
> > >> > OpenStack
> > >> > integration into separate jobs so that we have a better idea of
> which is
> > >> > a
> > >> > problem. If there is census for this, I'd need some direction /
> help, as
> > >> > well as set them up as non-voting for now.
> > >> >
> > >> > Looking into this I also found that the only place that we do
> > >> > integration
> > >> > any of the cephx logic was in the same test so we will need to
> create a
> > >> > component for it in the ceph integration as well as use it in the
> > >> > OpenStack
> > >> > side.
> > >> >
> > >> > Lastly un-winding the integration failure seemed overly complex. Is
> > >> > there a
> > >> > way that we can correlate the test status inside the job at a high
> level
> > >> > besides the entire job passed / failed without breaking them into
> > >> > separate
> > >> > jobs?
> > >> > --
> > >> >
> > >> > --
> > >> >
> > >> > Andrew Woodward
> > >> >
> > >> > Mirantis
> > >> >
> > >> > Fuel Community Ambassador
> > >> >
> > >> > Ceph Community
> > >> >
> > >> >
> > >> >
> > >> >
> __
> > >> > OpenStack Development Mailing List 

Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Davanum Srinivas
Vova, Folks,

+1 to "set this option to false as an experimental feature"

Thanks,
Dims

On Wed, Dec 2, 2015 at 10:08 AM, Vladimir Kuklin  wrote:
> Dmitry
>
> Although, I am a big fan of disabling replication for RPC, I think it is too
> late to introduce it so late by default. I would suggest that we control
> this part of OCF script with a specific parameter 'e.g. enable RPC
> replication' and set it to 'true' by default. Then we can set this option to
> false as an experimental feature, run some tests and decide whether it
> should be enabled by default or not. In this case, users who are interested
> in this, will be able to enable it when they need it, while we still stick
> to our old and tested approach.
>
> On Wed, Dec 2, 2015 at 5:52 PM, Konstantin Kalin 
> wrote:
>>
>> I would add on top of that Dmirty said that HA queues also increases
>> probability to have messages duplications under certain scenarios (besides
>> of that they are ~10x slower). Would Openstack services tolerate if RPC
>> request will be duplicated? What I've already learned - No. Also if
>> cluster_partition_handling=autoheal (what we currently have) the messages
>> may be lost as well during the failover scenarios like non-HA queues.
>> Honestly I believe there is no difference between HA queues and non
>> HA-queues in RPC layer fail-tolerance in the way how we use RabbitMQ.
>>
>> Thank you,
>> Konstantin.
>>
>> On Dec 2, 2015, at 4:05 AM, Dmitry Mescheryakov
>>  wrote:
>>
>>
>>
>> 2015-12-02 12:48 GMT+03:00 Sergii Golovatiuk :
>>>
>>> Hi,
>>>
>>>
>>> On Tue, Dec 1, 2015 at 11:34 PM, Peter Lemenkov 
>>> wrote:

 Hello All!

 Well, side-effects (or any other effects) are quite obvious and
 predictable - this will decrease availability of RPC queues a bit.
 That's for sure.
>>>
>>>
>>> Imagine the case when user creates VM instance, and some nova messages
>>> are lost. I am not sure we want half-created instances. Who is going to
>>> clean up them? Since we do not have results of destructive tests, I vote -2
>>> for FFE for this feature.
>>
>>
>> Sergii, actually messaging layer can not provide any guarantee that it
>> will not happen even if all messages are preserved. Assume the following
>> scenario:
>>
>>  * nova-scheduler (or conductor?) sends request to nova-compute to spawn a
>> VM
>>  * nova-compute receives the message and spawned the VM
>>  * due to some reason (rabbitmq unavailable, nova-compute lagged)
>> nova-compute did not respond within timeout (1 minute, I think)
>>  * nova-scheduler does not get response within 1 minute and marks the VM
>> with Error status.
>>
>> In that scenario no message was lost, but still we have a VM half spawned
>> and it is up to Nova to handle the error and do the cleanup in that case.
>>
>> Such issue already happens here and there when something glitches. For
>> instance our favorite MessagingTimeout exception could be caused by such
>> scenario. Specifically, in that example when nova-scheduler times out
>> waiting for reply, it will throw exactly that exception.
>>
>> My point is simple - lets increase our architecture scalability by 2-3
>> times by _maybe_ causing more errors for users during failover. The failover
>> time itself should not get worse (to be tested by me) and errors should be
>> correctly handler by services anyway.
>>

 However, Dmitry's guess is that the overall messaging backplane
 stability increase (RabitMQ won't fail too often in some cases) would
 compensate for this change. This issue is very much real - speaking of
 me I've seen an awful cluster's performance degradation when a failing
 RabbitMQ node was killed by some watchdog application (or even worse
 wasn't killed at all). One of these issues was quite recently, and I'd
 love to see them less frequently.

 That said I'm uncertain about the stability impact of this change, yet
 I see a reasoning worth discussing behind it.

 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk :
 > Hi,
 >
 > -1 for FFE for disabling HA for RPC queue as we do not know all side
 > effects
 > in HA scenarios.
 >
 > On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
 >  wrote:
 >>
 >> Folks,
 >>
 >> I would like to request feature freeze exception for disabling HA for
 >> RPC
 >> queues in RabbitMQ [1].
 >>
 >> As I already wrote in another thread [2], I've conducted tests which
 >> clearly show benefit we will get from that change. The change itself
 >> is a
 >> very small patch [3]. The only thing which I want to do before
 >> proposing to
 >> merge this change is to conduct destructive tests against it in order
 >> to
 >> make sure that we do not have a regression here. That should take
 

Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Sheena Gregson
This seems like a totally reasonable solution, and would enable us to more
thoroughly test the performance implications of this change between the 8.0
and 9.0 releases.

+1

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Wednesday, December 02, 2015 9:32 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in
RabbitMQ

Vova, Folks,

+1 to "set this option to false as an experimental feature"

Thanks,
Dims

On Wed, Dec 2, 2015 at 10:08 AM, Vladimir Kuklin 
wrote:
> Dmitry
>
> Although I am a big fan of disabling replication for RPC, I think it
> is too late to introduce it by default. I would suggest that
> we control this part of the OCF script with a specific parameter (e.g.
> 'enable RPC replication') and set it to 'true' by default. Then we can
> set this option to false as an experimental feature, run some tests
> and decide whether it should be enabled by default or not. In this
> case, users who are interested in this will be able to enable it when
> they need it, while we still stick to our old and tested approach.
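
For reference, outside of the Fuel OCF script the same idea is usually
expressed as a RabbitMQ policy that simply stops matching the RPC queues.
A rough sketch only - the reply_/_fanout_ name patterns are an assumption
about oslo.messaging's queue naming, and the actual patch may do this
differently:

    # Mirror everything except the short-lived oslo.messaging reply and
    # fanout queues (a sketch, not the actual Fuel change):
    rabbitmqctl set_policy ha-all \
        '^(?!amq\.|reply_|.*_fanout_).*' '{"ha-mode":"all"}'

A complete change would also need to decide what to do with the RPC topic
queues themselves.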
>
> On Wed, Dec 2, 2015 at 5:52 PM, Konstantin Kalin 
> wrote:
>>
>> I would add on top of that Dmirty said that HA queues also increases
>> probability to have messages duplications under certain scenarios
>> (besides of that they are ~10x slower). Would Openstack services
>> tolerate if RPC request will be duplicated? What I've already learned
>> - No. Also if cluster_partition_handling=autoheal (what we currently
>> have) the messages may be lost as well during the failover scenarios
like non-HA queues.
>> Honestly I believe there is no difference between HA queues and non
>> HA-queues in RPC layer fail-tolerance in the way how we use RabbitMQ.
>>
>> Thank you,
>> Konstantin.
>>
>> On Dec 2, 2015, at 4:05 AM, Dmitry Mescheryakov
>>  wrote:
>>
>>
>>
>> 2015-12-02 12:48 GMT+03:00 Sergii Golovatiuk
:
>>>
>>> Hi,
>>>
>>>
>>> On Tue, Dec 1, 2015 at 11:34 PM, Peter Lemenkov 
>>> wrote:

 Hello All!

 Well, side-effects (or any other effects) are quite obvious and
 predictable - this will decrease availability of RPC queues a bit.
 That's for sure.
>>>
>>>
>>> Imagine the case when user creates VM instance, and some nova
>>> messages are lost. I am not sure we want half-created instances. Who
>>> is going to clean up them? Since we do not have results of
>>> destructive tests, I vote -2 for FFE for this feature.
>>
>>
>> Sergii, actually messaging layer can not provide any guarantee that
>> it will not happen even if all messages are preserved. Assume the
>> following
>> scenario:
>>
>>  * nova-scheduler (or conductor?) sends request to nova-compute to
>> spawn a VM
>>  * nova-compute receives the message and spawned the VM
>>  * due to some reason (rabbitmq unavailable, nova-compute lagged)
>> nova-compute did not respond within timeout (1 minute, I think)
>>  * nova-scheduler does not get response within 1 minute and marks the
>> VM with Error status.
>>
>> In that scenario no message was lost, but still we have a VM half
>> spawned and it is up to Nova to handle the error and do the cleanup in
that case.
>>
>> Such issue already happens here and there when something glitches.
>> For instance our favorite MessagingTimeout exception could be caused
>> by such scenario. Specifically, in that example when nova-scheduler
>> times out waiting for reply, it will throw exactly that exception.
>>
>> My point is simple - lets increase our architecture scalability by
>> 2-3 times by _maybe_ causing more errors for users during failover.
>> The failover time itself should not get worse (to be tested by me)
>> and errors should be correctly handler by services anyway.
>>

 However, Dmitry's guess is that the overall messaging backplane
 stability increase (RabitMQ won't fail too often in some cases)
 would compensate for this change. This issue is very much real -
 speaking of me I've seen an awful cluster's performance degradation
 when a failing RabbitMQ node was killed by some watchdog
 application (or even worse wasn't killed at all). One of these
 issues was quite recently, and I'd love to see them less frequently.

 That said I'm uncertain about the stability impact of this change,
 yet I see a reasoning worth discussing behind it.

 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk
:
 > Hi,
 >
 > -1 for FFE for disabling HA for RPC queue as we do not know all
 > side effects in HA scenarios.
 >
 > On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov
 >  wrote:
 >>
 >> Folks,
 >>
 >> I would like to request feature freeze exception for disabling
 >> HA for RPC 

[openstack-dev] [Fuel][FFE] Component registry

2015-12-02 Thread Andrian Noga
Colleagues,

Folks,
I would like to request feature freeze exception for Component registry
https://blueprints.launchpad.net/fuel/+spec/component-registry

Specification is already merged https://review.openstack.org/#/c/229306/
Main patch is also merged https://review.openstack.org/#/c/247913/
We still need to merge UI changes https://review.openstack.org/#/c/246889/53
The change itself is a very small patch.
That should take just a few days, so if there are no other
objections, we will be able to merge the change within a week.

Regards,
Andrian Noga
Project manager
Partners Centric Engineering Team,
Mirantis, Inc.
+38 (063) 966-21-24
Skype: bigfoot_ua
www.mirantis.com
an...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-02 Thread Derek Higgins



On 02/12/15 12:53, Steven Hardy wrote:

On Tue, Dec 01, 2015 at 05:10:57PM -0800, Devananda van der Veen wrote:

On Tue, Dec 1, 2015 at 3:22 AM, Steven Hardy  wrote:

  On Mon, Nov 30, 2015 at 03:35:13PM -0800, Devananda van der Veen wrote:
  >    On Mon, Nov 30, 2015 at 3:07 PM, Zane Bitter wrote:
  >
  >        On 30/11/15 12:51, Ruby Loo wrote:
  >
  >            On 30 November 2015 at 10:19, Derek Higgins wrote:
  >
  >                Hi All,
  >
  >                A few months ago tripleo switched from its devtest based
  >                CI to one that was based on instack. Before doing this we
  >                anticipated disruption in the ci jobs and removed them
  >                from non tripleo projects.
  >
  >                We'd like to investigate adding it back to heat and ironic
  >                as these are the two projects where we find our ci
  >                provides the most value. But we can only do this if the
  >                results from the job are treated as voting.
  >
  >            What does this mean? That the tripleo job could vote and do a
  >            -1 and block ironic's gate?
  >
  >                In the past most of the non tripleo projects tended to
  >                ignore the results from the tripleo job as it wasn't
  >                unusual for the job to be broken for days at a time. The
  >                thing is, ignoring the results of the job is the reason
  >                (the majority of the time) it was broken in the first
  >                place.
  >                To decrease the number of breakages we are now no longer
  >                running master code for everything (for the non tripleo
  >                projects we bump the versions we use periodically if they
  >                are working). I believe with this model the CI jobs we run
  >                have become a lot more reliable, there are still breakages
  >                but far less frequently.
  >
  >                What I am proposing is we add at least one of our tripleo
  >                jobs back to both heat and ironic (and other projects
  >                associated with them, e.g. clients, ironic-inspector
  >                etc.), tripleo will switch to running latest master of
  >                those repositories and the cores approving on those
  >                projects should wait for a passing CI job before hitting
  >                approve.
  >                So how do people feel about doing this? Can we give it a
  >                go? A couple of people have already expressed an interest
  >                in doing this, but I'd like to make sure we're all in
  >                agreement before switching it on.
  >
  >            This seems to indicate that the tripleo jobs are non-voting,
  >            or at least won't block the gate -- so I'm fine with adding
  >            tripleo jobs to ironic. But if you want cores to wait/make
  >            sure they pass, then shouldn't they be voting? (Guess I'm a
  >            bit confused.)
  >
  >        +1
  >
  >        I don't think it hurts to turn it on, but tbh I'm uncomfortable
  >        with the mental overhead of a non-voting job that I have to
  >        manually treat as a voting job. If it's stable enough to make it a
  >        voting job, I'd prefer we just make it voting. And if it's not
  >        then I'd like to see it be made stable enough to be a voting job
  >        and then make it voting.
  >
  >    This is roughly where I sit as well -- if it's non-voting, experience
  >    tells me that it will largely be ignored, and as such, isn't a good
  >    use of resources.

  I'm sure you can appreciate it's something of a chicken/egg problem though
  - if everyone always ignores non-voting jobs, they never become voting.

  That effect is magnified with TripleO though, because it consumes so many
  OpenStack projects, any one of which has the capability to break our CI, so
  in an ideal 

Re: [openstack-dev] [Neutron] need help in translating sql query to sqlalchemy query

2015-12-02 Thread Sean M. Collins
Was perusing the documentation again this morning and there is another
thing I found - you can call join() with the aliased=True flag to get
similar results.

Check out the "Constructing Aliases Anonymously" section.

http://docs.sqlalchemy.org/en/latest/orm/query.html
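
For anyone finding this thread later, a minimal self-contained sketch of
that aliased=True form (the Node model below is just an illustration, not
Neutron's schema):

    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, sessionmaker

    Base = declarative_base()

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('node.id'))
        data = Column(String(50))
        children = relationship('Node')

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # join(..., aliased=True) makes SQLAlchemy generate an anonymous alias
    # for the joined entity, so the filter() that follows applies to the
    # child row rather than the original Node - no explicit aliased() needed.
    query = (session.query(Node)
             .filter(Node.data == 'root')
             .join(Node.children, aliased=True)
             .filter(Node.data == 'child1'))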
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][inspector] CMDB integration

2015-12-02 Thread Serge Kovaleff
Other possible candidates from our discussions are *itop* and
*Foreman*. Anything else?

Cheers,
Serge Kovaleff


On Wed, Dec 2, 2015 at 5:34 PM, Serge Kovaleff 
wrote:

>
> On Wed, Dec 2, 2015 at 5:23 PM, Dmitry Tantsur 
> wrote:
>
>> What tripleo currently does is creating a JSON file with credentials in
>> advance, then enroll nodes with it.
>
>
> We were considering CSV as a starter.
>
> Cheers,
> Serge Kovaleff
> http://www.mirantis.com
> cell: +38 (063) 83-155-70
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-12-02 Thread Jim Rollenhagen
On Mon, Nov 30, 2015 at 12:48:44PM -0500, Anita Kuno wrote:
> On 11/30/2015 12:33 PM, Dmitry Tantsur wrote:
> > I was there and I already said that I'm not buying into "spamming the
> > list" argument. There are much less important things that I see here
> > right now, even though I do actively use filters to only see potentially
> > relevant things. We've been actively (and not very successfully)
> > encouraging people to use ML instead of IRC conversations (or even
> > private messages and video chats), and this thread does not seem in line
> > with it.
> 
> Please discuss this with the leadership of your project.
> 
> All announcements about the existence of a third party ci will be
> redirected to the third party systems wikipage.

While I agree with Dmitry that I don't tend to think posting these
announcements to the list is very spammy, I also don't know the history
behind these decisions, and I'm not opinionated enough to get involved in
that discussion.

As for third party CI announcements, I think we should do whatever the
rest of the OpenStack projects do; being different here just creates
pain for everyone.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] On Python 3, request_id must by Unicode, not bytes

2015-12-02 Thread Joshua Harlow
Seems OK to me. It's not ideal, but given the state of Python 3.x support in
OpenStack it doesn't seem harmful to fix this while we still can; and in
general, if it's 'req-$uuid', ASCII/Unicode should be fine (since that's all
it is).


-Josh

Victor Stinner wrote:

Hi,

The next oslo.context release including the following change (still
under review) might break the voting Python 3 gate of your project:

https://review.openstack.org/#/c/250731/

Please try to run Python 3 tests of your project with this change. I
already ran the tests on Python 3 of the following projects:

- ceilometer
- cinder
- heat
- neutron
- nova


The type of the request_id attribute of oslo_context.RequestContext was
changed from Unicode (str) to bytes on Python 3 in April "to fix a unit
test". According to the author of the change, it was a mistake.

The request_id is a string that looks like 'req-83a6...': 'req-' followed by
a UUID. On Python 3, it's annoying to manipulate a bytes string. For
example, print(request_id) writes b'req-83a6...' instead of req-83a6..., and
request_id.startswith('req-83a6...') raises a TypeError, etc.

I propose to modify request_id type again to make it a Unicode string to
fix oslo.log (don't log b'req-...' anymore, but req-...):

https://review.openstack.org/#/c/250731/

It looks like it doesn't break services, only one specific unit test
duplicated in some projects. That unit test relies on the exact
request_id type; it looks like request_id.startswith(b'req-'). I fixed
this unit test in Glance, Neutron and oslo.middleware to accept bytes and
Unicode request_id.
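
For projects that want their tests to pass with both the old and the new
oslo.context behaviour while this lands, a tolerant assertion is enough.
This is just a sketch, not the exact patch that was merged:

    def _request_id_as_text(request_id):
        # Accept both the old (bytes) and the new (str) request_id type.
        if isinstance(request_id, bytes):
            request_id = request_id.decode('utf-8')
        return request_id

    # in the unit test:
    self.assertTrue(_request_id_as_text(ctxt.request_id).startswith('req-'))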

I also searched for b'req-' in http://codesearch.openstack.org/ to find
all projects relying on the exact request_id type.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-12-02 Thread Jeremy Stanley
On 2015-11-16 20:57:09 -0600 (-0600), Matt Riedemann wrote:
[...]
> Arguably we could still be testing grenade on stable/kilo by just
> installing Juno 2014.2.4 (last Juno point release before EOL) and
> then upgrading to stable/kilo.

I encourage you, say in a month's time, to try "just" installing
(and running) Juno DevStack. We have enough trouble doing that in
our CI system right now.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Release Notes for *aaS projects

2015-12-02 Thread Ihar Hrachyshka

Kyle Mestery  wrote:


We're hoping to cut Neutron M-1 this week [1]. We have implemented
release notes in the main Neutron repository [2] , but not in the *aaS
repositories. At the time, I thought this was a good approach and we
could collect all releasenotes there. But I think it makes sense to
have releasenotes in the *aaS repositories as well.

What I'm going to propose is we cut Neutron M-1 as-is now, with any
*aaS releasenotes done in the main repository. Once Neutron M-1 is
cut, I'll add the releasenotes stuff into the *aaS repositories, and
we can start using releasenotes independently in those repositories.

If anyone has issues with this approach please reply on this thread.


I believe that is the best way to do it. Otherwise we would need to push  
multiple patches for the same change: one into *aas, and another one into  
neutron tree just for release notes. It defeats the whole usefulness of  
being able to enforce release notes requirements as part of the patches  
making the actual changes. We all know ‘I will follow up’ often means ‘I  
won’t do it, ever’.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [tripleo] stop maintaining puppet-tuskar

2015-12-02 Thread Emilien Macchi
Hi,

I can't find any official statement on the Internet, but I've heard
Tuskar is not going to be maintained anymore (tell me if I'm wrong).

If that's correct, I suggest we stop maintaining puppet-tuskar, with
stable/liberty being the last release that we maintain. I
would also drop all the code in master and update the README to explain
that the module is no longer maintained.

Thanks for your help,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FUEL] FFE request for erlang and rabbitmq-server packaged for centos7

2015-12-02 Thread Artem Silenkov
Hello!

We have got
- erlang=18.1 https://review.fuel-infra.org/#/c/12896/
- rabbitmq-server=3.5.6 https://review.fuel-infra.org/#/c/12901/
packaged for ubuntu trusty in corresponding requests.

Those requests are not merged yet but will probably be merged this evening.
We need some time to backport them for CentOS7 in order to keep versions
in sync.

It is not rocket science to package them for CentOS7, and it should take no
more than one working day.
This work can be done as soon as the Ubuntu packages have landed.

Regards,
Artem Silenkov
---
MOS~Packaging
mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-02 Thread Andrew Maksimov
Thank you, Dmitry, for the very detailed plan and risk assessment.
Do we want to run swarm against a custom ISO with CentOS7 on Thu evening to
measure the level of regression? I remember that we were considering this
approach.

Regards,
Andrey Maximov


On Wed, Dec 2, 2015 at 12:48 AM, Dmitry Borodaenko  wrote:

> With a bit more detail, I hope this covers all the risks and decision
> points now.
>
> First of all, current list of outstanding commits:
> https://etherpad.openstack.org/p/fuel_on_centos7
>
> The above list has two sections: backwards compatible changes that can
> be merged one at a time even if the rest of CentOS7 support isn't
> merged, and backwards incompatible changes that break support for
> CentOS6 and must be merged (and, if needed, reverted) all at once.
>
> Decision point 1: FFE for CentOS7
>
> CentOS7 support cannot be fully merged on Dec 2, so it misses FF. Can it
> be allowed a Feature Freeze Exception? So far, the disruption of the
> Fuel development process implied by the proposed merge plan is
> acceptable, if anything goes wrong and we become unable to have a stable
> ISO with merged CentOS7 support on Monday, December 7, the FFE will be
> revoked.
>
> Wed, Dec 2: Merge party
>
> Merge party before 8.0 FF, we should do our best to merge all remaining
> feature commits before end of day (including backwards compatible
> CentOS7 support commits), without breaking the build too much.
>
> At the end of the day we'll start a swarm test over the result of the
> merge party, and we expect QA to analyze and summarize the results by
> 17:00 MSK (6:00 PST) on Thu Dec 3.
>
> Risk 1: Merge party breaks the build
>
> If there is a large regression in swarm pass percentage, we won't be
> able to afford a merge freeze which is necessary to merge CentOS7
> support, we'll have to be merging bugfixes until swarm test pass rate is
> back around 70%.
>
> Risk 2: More features get FFE
>
> If some essential 8.0 features are not completely merged by end of day
> Wed Dec 2 and are granted FFE, merging the remaining commits can
> interfere with merging CentOS7 support, not just from merge conflicts
> perspective, but also invalidating swarm results and making it
> practically impossible to bisect and attribute potential regressions.
>
> Thu, Dec 3: Start merge freeze for CentOS7
>
> Decision point 2: Other FFEs
>
> In the morning MSK time, we will assess Risk 2 and decide what to do
> with the other FFEs. The options are: integrate remaining commits into
> CentOS7 merge plan, block remaining commits until Monday, revoke CentOS7
> FFE.
>
> If the decision is to go ahead with CentOS7 merge, we announce merge
> freeze for all git repositories that go into Fuel ISO, and spend the
> rest of the day rebasing and cleaning up the rest of the CentOS7 commits
> to make sure they're all in mergeable state by the end of the day. The
> outcome of this work must be a custom ISO image with all remaining
> commits, with additional requirement that it must not use Jenkins job
> parameters (only patches to fuel-main that change default repository
> paths) to specify all required package repositories. This will validate
> the proposed fuel-main patches and ensure that no unmerged package
> changes are used to produce the ISO.
>
> Decision point 3: Swarm pass rate
>
> After swarm results from Wed are available, we will assess the Risk 1.
> If the pass rate regression is significant, CentOS7 FFE is revoked and
> merge freeze is lifted. If regression is acceptable, we proceed with
> merging remaining CentOS7 commits through Thu Dec 3 and Fri Dec 4.
>
> Fri, Dec 4: Merge and test CentOS7
>
> The team will have until 17:00 MSK to produce a non-custom ISO that
> passes BVT and can be run through swarm.
>
> Sat, Dec 5: Assess CentOS7 swarm and bugfix
>
> First of all, someone from CI and QA teams should commit to monitoring
> the CentOS7 swarm run and report the results as soon as possible. Based
> on the results (which once again must be available by 17:00 MSK), we can
> decide on the final step of the plan.
>
> Decision point 4: Keep or revert
>
> If CentOS7 based swarm shows significant regression, we have to spend
> the rest of the weekend including Sunday reverting all CentOS7 commits
> that were merged during merge freeze. Once revert is completed, we will
> lift the merge freeze.
>
> If the regression is acceptable, we lift the merge freeze straight away
> and proceed with bugfixing as usual. At this point CI team will need to
> update the Fuel ISO used for deployment tests in our CI to this same
> ISO.
>
> One way or the other, we will be able to resume bugfixing on Monday
> morning MSK time, and will have lost 2 business days (Thu-Fri) during
> which we won't be able to merge bugfixes. In addition to that, someone
> from QA and everyone from CentOS7 support team has to work on Saturday,
> and someone from CI will have to work a few hours on Sunday.
>
> --
> Dmitry Borodaenko
>
>
> On Tue, Dec 01, 2015 

Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-12-02 Thread Jim Rollenhagen
On Wed, Dec 02, 2015 at 03:58:01PM +, Derek Higgins wrote:


> >
> >Ah, I think all we have here is a terminology mismatch around "non voting"
> >vs "non gating".
> >
> >AFAIK what is being proposed is to reinstate the TripleO jobs so they *do*
> >vote on any change (+1/-1), but they do not block the gate, so we won't get
> >in the way if occasional outages happen.
> 
> Yes, this is exactly what I wanted to do, nothing would be changing from how
> it used to be, the tripleo jobs would vote with a -1/+1 but approvers could
> still approve if they wanted to (i.e. not in the gate). The only thing I am
> asking we do differently to the way it used to be is an agreement to not
> blindly ignore the results of the tripleo job as ignoring the results is
> what causes a lot of the breakages in the first place.

+1, this is what I was imagining when we first talked about this. Let's
go ahead and add this to ironic; I'll be sure to make an announcement in
Monday's meeting.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-02 Thread Julien Danjou
On Wed, Dec 02 2015, AFEK, Ifat (Ifat) wrote:

> As we understand it, if we take the first approach you describe, then we can
> have an alarm refer to all the VMs in the system, but then if the alarm is
> triggered by one VM or by five VMs, the result will be the same - only one
> alarm will be active. What we want is to be able to distinguish between the
> different VMs - to know which alarms were triggered on each specific VM.
>
> One of the motivations for this is that in Horizon we would like to display 
> all
> the alarms, where we would like to be able to see that a problem occurred on
> instance1, instance2 and instance8, not just that there was a problem on some
> VMs out of a group.

Ok, that's clearer.

> Can this be supported without defining an alarm for every VM separately?

No, it's not possible. You'd have to create the alarm for each instance
for now.
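
If you do go the one-alarm-per-instance route, it is mostly a loop over your
instances. A very rough sketch - 'alarm_client' stands for whichever alarm
client you deploy (ceilometerclient today, aodhclient later), and the field
names below are illustrative assumptions rather than copied from either
client:

    # Sketch: create a CPU alarm per existing instance. nova_client is an
    # authenticated python-novaclient instance; alarm_client is hypothetical.
    for server in nova_client.servers.list():
        alarm_client.alarms.create(
            name='cpu_high_%s' % server.id,
            meter_name='cpu_util',
            comparison_operator='gt',
            threshold=80.0,
            statistic='avg',
            period=600,
            evaluation_periods=1,
            alarm_actions=['http://vitrage.example.com/webhook'],
            matching_metadata={'metadata.resource_id': server.id})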

Honestly, I'd say start with this as a first step, and if it starts
becoming a problem, we can envision a better way to define some sort of
alarm template, for example in Aodh. I wouldn't put the cart before the
horse.

> This is what Ryota Mibu wrote us:
>
>> The reason is that aodh evaluator may not be aware of new alarm
>> definitions and won't send notification until its alarm definition >
>> cache is refreshed in less than 60 sec (default value).
>
> Did we misunderstand?

Oh no, but I thought you were saying it was slow. This is a
cache; you can lower it to 1s if you want, with whatever potential
performance impact that may have. :)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Release Notes for *aaS projects

2015-12-02 Thread Martin Hickey

+1

Regards,
Martin




From:   Kyle Mestery 
To: openstack-dev@lists.openstack.org
Date:   02/12/2015 16:40
Subject:[openstack-dev] [neutron] Release Notes for *aaS projects



We're hoping to cut Neutron M-1 this week [1]. We have implemented
release notes in the main Neutron repository [2] , but not in the *aaS
repositories. At the time, I thought this was a good approach and we
could collect all releasenotes there. But I think it makes sense to
have releasenotes in the *aaS repositories as well.

What I'm going to propose is we cut Neutron M-1 as-is now, with any
*aaS releasenotes done in the main repository. Once Neutron M-1 is
cut, I'll add the releasenotes stuff into the *aaS repositories, and
we can start using releasenotes independently in those repositories.

If anyone has issues with this approach please reply on this thread.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/251959/
[2] https://review.openstack.org/241758

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-12-02 Thread Jeremy Stanley
On 2015-11-17 02:49:30 + (+), Rochelle Grober wrote:
> I would like to make a plea that while Juno is locked down so as
> no changes can be made against it, the branch remains on the
> git.openstack.org site.

We don't so much delete the stable/juno branch as replace its final
state with a tag called juno-eol. It's a trivial Git operation
downstream to recreate your own stable/juno branch from our juno-eol
tag and continue using it however you like. This is simply our way
of indicating clearly that we won't ever be adding new patches to
stable/juno (one might say that makes it _extremely_ stable).
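
For example, something along these lines recreates the branch locally from
the tag once stable/juno is gone upstream:

    git fetch origin --tags
    git checkout -b stable/juno juno-eol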

> Please? One area that could be better investigated with the branch
> in place is upgrade. Kilo will continue to get patches, as will
> Liberty, so an occasional grenade run
[...]

TL;DR: no.

While I can understand the desire, the reality is that we stop doing
upgrade testing from unsupported branches for very practical
reasons. In our CI, Grenade relies on environments built by
DevStack. To test an upgrade from Juno to Kilo, you first need
DevStack working on Juno. I know it probably doesn't make a lot of
sense that we would suddenly cease to be able to install a version
of DevStack which worked at one point in time, but there are two
primary factors that cause this to be the case: instability within
the Python packaging ecosystem, and system support requirements.

Our DevStack deployments rely on pip to install Python dependencies
of OpenStack from PyPI. The way we traditionally declared these
dependencies was to track loose or even open-ended ranges of version
numbers in an attempt to keep our testing as current with the state
of those dependencies as possible. This means that after a time, the
behavior of some of those dependencies may change in such a way that
new versions we expect to work with OpenStack in fact do not. Given
that our dependencies number in the hundreds and have for a while,
and that our direct dependencies also in turn often declare loose or
open-ended version ranges for their own dependencies (our transitive
dependencies), this breaks down so often that we're usually unable
to start DevStack in our CI by the time we reach the EOL timeframe
for a given branch.

OpenStack also has (direct and indirect) dependencies outside of
Python, such that it's frequently impossible to run DevStack on a
non-contemporary Linux distribution. As a result, our CI ties
OpenStack versions to specific Linux distro release series, and so
continuing to support installing a given version of OpenStack
requires our CI maintainers to continue supporting Linux distro
releases contemporary with those which existed during the
development cycle leading up to the initial version of a given
stable branch. For example, we maintained Ubuntu 12.04 as a test
platform through the lifetime of stable/icehouse because that's what
was current at the start of the Icehouse development cycle. We're
maintaining CentOS 6 as a test platform through the end of
stable/juno for similar reasons. The more stable branches we commit
to supporting, the more different distro releases our Project
Infrastructure team ends up being on the hook to maintain.

The Python packaging ecosystem situation is improving, and during
the Liberty development cycle OpenStack also grew the beginnings of
some new mechanisms to more accurately reproduce the previously
known-working conditions under which releases were developed and
tested. Reproducibility of DevStack operation has much greater
long-term promise for the stable/liberty branch, and by extension
upgrade testing with Grenade should be able to take advantage of
that for the stable/mitaka branch. However, the long and short of it
is that to support upgrade testing we need to (at least to some
extent) support the version from which the test is upgrading. This
is contradictory with performing upgrade testing *to* every
supported stable branch, since there will always be a starting point
we test upgrading *from* without being able to continue supporting
the version which came before it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-02 Thread AFEK, Ifat (Ifat)
Hi Julien,

Please see our questions below.

Ifat and Elisha.

> -Original Message-
> From: Julien Danjou [mailto:jul...@danjou.info]
> 
> On Wed, Dec 02 2015, ROSENSWEIG, ELISHA (ELISHA) wrote:
> > Regarding the second point: Say we have 30 different types of alarms
> > we might want to raise on an OpenStack instance (VM). What I
> > understand from your explanation is that when we create a new
> > instance, we need to create 30 new alarms in Aodh that can be
> > triggered some time in the future. If we have 100 instances, we will
> > effectively have 3,000 alarms created in Aodh, and so on with more
> instances.
> 
> Not necessarily. You can create one alarm that has conditions large
> enough to match e.g. all your VMs, and an alarm action that can be
> generic enough so that it will do the right thing for each VM.
> 

As we understand it, if we take the first approach you describe, then we can 
have an alarm refer to all the VMs in the system, but then if the alarm is 
triggered by one VM or by five VMs, the result will be the same - only one 
alarm will be active. What we want is to be able to distinguish between the 
different VMs - to know which alarms were triggered on each specific VM.

One of the motivations for this is that in Horizon we would like to display all 
the alarms, where we would like to be able to see that a problem occurred on 
instance1, instance2 and instance8, not just that there was a problem on some 
VMs out of a group. 

Can this be supported without defining an alarm for every VM separately?

> The alarm system provided by Aodh is really a simple event -> trigger
> system in this area. How precise or large is your event really depends
> on the granularity that your trigger (which is usually a Web hook) can
> handle.
> 
> > A different approach might be to create a new alarm in Aodh on-the-
> fly.
> > However, we are under the impression that the creation time can be up
> > to one minute, which will cause a large delay. Is there any way to
> shorten this?
> 
> Creation time of an alarm of one minute? That's not normal. It should
> consist of just a record in the database so it should be pretty fast.
> 

This is what Ryota Mibu wrote us:

> The reason is that aodh evaluator may not be aware of new alarm definitions 
> and won't send notification until its alarm definition > cache is refreshed 
> in less than 60 sec (default value).

Did we misunderstand?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-12-02 Thread Anita Kuno
On 12/02/2015 11:56 AM, Jim Rollenhagen wrote:
> On Mon, Nov 30, 2015 at 12:48:44PM -0500, Anita Kuno wrote:
>> On 11/30/2015 12:33 PM, Dmitry Tantsur wrote:
>>> I was there and I already said that I'm not buying into "spamming the
>>> list" argument. There are much less important things that I see here
>>> right now, even though I do actively use filters to only see potentially
>>> relevant things. We've been actively (and not very successfully)
>>> encouraging people to use ML instead of IRC conversations (or even
>>> private messages and video chats), and this thread does not seem in line
>>> with it.
>>
>> Please discuss this with the leadership of your project.
>>
>> All announcements about the existence of a third party ci will be
>> redirected to the third party systems wikipage.
> 
> While I agree with Dmitry that I don't tend to think posting these
> announcements to the list is very spammy, I also don't know the history
> behind these decisions, and I'm not opinionated enough to get involved in
> that discussion.
> 
> As for third party CI announcements, I think we should do whatever the
> rest of the OpenStack projects do; being different here just creates
> pain for everyone.
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thanks Jim:

The expectation is that ci systems report their existence and current
status on: https://wiki.openstack.org/wiki/ThirdPartySystems

That way we have one place for people wanting to consume that
information to find it. If we have multiple places, operators feel they
have done their duty informing their community of status and don't
update their wikipage. Developers use the wikipage and individual ci
system pages to ascertain the status of a given system prior to taking
action with that system.

Just yesterday someone started to announce a system outage in infra:
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-12-01.log.html#t2015-12-01T14:30:08
(this person is a responsible operator and has demonstrated to me they
are worthy of my trust, this is simply an example of what some folks
think is the right thing to do). I assured them that updating their
wikipage (which they already had done) was sufficient and that channel
updates in infra were just noise and not helpful in achieving their goal.

I don't expect anyone to have to grep channel logs to know the status of
a system. That is the purpose of the wikipage. People wishing to convey
and consume information about the system will continue to be directed there.

Thanks for listening,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Release Notes for *aaS projects

2015-12-02 Thread Doug Wiegley
I’m in favor of things living in the repo where their code lives. The fewer 
dependencies the better. Especially if we stop adding them. So, I agree.

doug


> On Dec 2, 2015, at 9:38 AM, Kyle Mestery  wrote:
> 
> We're hoping to cut Neutron M-1 this week [1]. We have implemented
> release notes in the main Neutron repository [2] , but not in the *aaS
> repositories. At the time, I thought this was a good approach and we
> could collect all releasenotes there. But I think it makes sense to
> have releasenotes in the *aaS repositories as well.
> 
> What I'm going to propose is we cut Neutron M-1 as-is now, with any
> *aaS releasenotes done in the main repository. Once Neutron M-1 is
> cut, I'll add the releasenotes stuff into the *aaS repositories, and
> we can start using releasenotes independently in those repositories.
> 
> If anyone has issues with this approach please reply on this thread.
> 
> Thanks!
> Kyle
> 
> [1] https://review.openstack.org/#/c/251959/
> [2] https://review.openstack.org/241758
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-02 Thread Ben Nemec
On 12/02/2015 03:59 AM, Dmitry Tantsur wrote:
> On 12/01/2015 06:55 PM, Ben Nemec wrote:
>> Sorry for not getting to this earlier.  Some thoughts inline.
>>
>> On 11/09/2015 08:51 AM, Dmitry Tantsur wrote:
>>> Hi folks!
>>>
>>> I spent some time thinking about bringing profile matching back in, so
>>> I'd like to get your comments on the following near-future plan.
>>>
>>> First, the scope of the problem. What we do is essentially kind of
>>> capability discovery. We'll help nova scheduler with doing the right
>>> thing by assigning a capability like "suits for compute", "suits for
>>> controller", etc. The most obvious path is to use inspector to assign
>>> capabilities like "profile=1" and then filter nodes by it.
>>>
>>> A special care, however, is needed when some of the nodes match 2 or
>>> more profiles. E.g. if we have all 4 nodes matching "compute" and then
>>> only 1 matching "controller", nova can select this one node for
>>> "compute" flavor, and then complain that it does not have enough hosts
>>> for "controller".
>>>
>>> We also want to conduct some sanity check before even calling to
>>> heat/nova to avoid cryptic "no valid host found" errors.
>>>
>>> (1) Inspector part
>>>
>>> During the liberty cycle we've landed a whole bunch of API's to
>>> inspector that allow us to define rules on introspection data. The plan
>>> is to have rules saying, for example:
>>>
>>>rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
>>>rule 2: if local_gb >= 100, add capability "controller_profile=1"
>>>
>>> Note that these rules are defined via inspector API using a JSON-based
>>> DSL [1].
>>>
>>> As you see, one node can receive 0, 1 or many such capabilities. So we
>>> need the next step to make a final decision, based on how many nodes we
>>> need of every profile.
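
For illustration, rule 1 above would look roughly like the following as an
inspector rule document. This is an editor's sketch only; the exact
condition ops and action names should be checked against the
ironic-inspector rules API documentation:

    {
        "description": "nodes with >= 8 GB of RAM suit the compute profile",
        "conditions": [
            {"op": "ge", "field": "memory_mb", "value": 8192}
        ],
        "actions": [
            {"action": "set-capability",
             "name": "compute_profile", "value": "1"}
        ]
    }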
>>
>> Is the intent that this will replace the standalone ahc-match call that
>> currently assigns profiles to nodes?  In general I'm +1 on simplifying
>> the process (which is why I'm finally revisiting this) so I think I'm
>> onboard with that idea.
> 
> Yes
> 
>>
>>>
>>> (2) Modifications of `overcloud deploy` command: assigning profiles
>>>
>>> New argument --assign-profiles will be added. If it's provided,
>>> tripleoclient will fetch all ironic nodes, and try to ensure that we
>>> have enough nodes with all profiles.
>>>
>>> Nodes with existing "profile:xxx" capability are left as they are. For
>>> nodes without a profile it will look at "xxx_profile" capabilities
>>> discovered on the previous step. One of the possible profiles will be
>>> chosen and assigned to "profile" capability. The assignment stops as
>>> soon as we have enough nodes of a flavor as requested by a user.
>>
>> And this assignment would follow the same rules as the existing AHC
>> version does?  So if I had a rules file that specified 3 controllers, 3
>> cephs, and an unlimited number of computes, it would first find and
>> assign 3 controllers, then 3 cephs, and finally assign all the other
>> matching nodes to compute.
> 
> There's no longer a spec file, though we could create something like 
> that. The spec file had 2 problems:
> 1. it was used to maintain state in local file system
> 2. it was completely out of sync with what was later passed to the 
> deploy command. So you could, for example, request 1 controller and the 
> remaining to be computes in a spec file, and then request deploy with 2 
> controllers, which was doomed to fail.
> 
>>
>> I guess there's still a danger if ceph nodes also match the controller
>> profile definition but not the other way around, because a ceph node
>> might get chosen as a controller and then there won't be enough matching
>> ceph nodes when we get to that.  IIRC (it's been a while since I've done
>> automatic profile matching) that's how it would work today so it's an
>> existing problem, but it would be nice if we could fix that as part of
>> this work.  I'm not sure how complex the resolution code for such
>> conflicts would need to be.
> 
> My current patch does not deal with it. Spec file only had ordering, so 
> you could process 'ceph' before 'controller'. We can do the same by 
> accepting something like --profile-ordering=ceph,controller,compute. WDYT?
> 
> I can't think of something smarter for now, any ideas are welcome.

I'm not coming up with any scenarios that this wouldn't be able to handle,
and if there are any then the user might just have to manually assign
one of the profiles.  I could see people doing that anyway for profiles
that will have limited nodes assigned, like controller.  It might be
simpler than trying to write the profile rules in a way that doesn't
allow problematic overlaps.

> 
>>
>>>
>>> (3) Modifications of `overcloud deploy` command: validation
>>>
>>> To avoid 'no valid host found' errors from nova, the deploy command will
>>> fetch all flavors involved and look at the "profile" capabilities. If
>>> they are set for any flavors, it will check if we have enough ironic
>>> 

Re: [openstack-dev] [heat][tripleo] User Initiated Rollback

2015-12-02 Thread Zane Bitter

On 02/12/15 11:02, Steven Hardy wrote:

So, chatting with Giulio today about 
https://bugs.launchpad.net/heat/+bug/1521944
has be thinking about $subject.

The root cause of that issue is essentially a corner case of a stack-update,
combined with some coupling within the Neutron API which prevents the
update traversal from working.

But it raises the broader question of what a "rollback" actually is, and
how a user can potentially use it to get out of the kind of mess described
in that bug (where, otherwise, your only option is to delete the entire
stack).


I'm not sure it does raise that question; the same issue crops up 
whether you try to roll back or roll forward.



Currently, we treat rollback as a special type of update, where, if an
in-progress update fails, we then try to update again, to the previous
stack definition[1], but as Giulio has discovered, there are times when
that doesn't work, because what you actually want is to recover the
existing resource from the backup stack, not create a new one with the same
properties.


The rollback flow isn't the problem here. The problem is that the 
resource is marked as DELETE_FAILED, and Heat has no mechanism in 
general for knowing if that means it's still good and we can restore it 
or if it is, as we say in New Zealand, completely munted[1].


Since Heat can't know, it assumes the latter and replaces the resource. 
If we wanted to fix this, we'd need a mechanism to verify the health of 
the resource - and obviously it would have to be resource-specific. We 
already have an interface for that kind of mechanism in the form of 
handle_check(), so there's a chance we could repurpose that to do this.


[1] http://dictionary.reference.com/browse/munted?s=t
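
To make that concrete, the existing convention in resource plugins already
looks roughly like this (a simplified sketch modelled on the Nova server
resource; not a proposal for a new API):

    from heat.engine import resource

    class Server(resource.Resource):
        def handle_check(self):
            # Probe the real resource; _verify_check_conditions() raises if
            # any expectation fails, marking the resource CHECK_FAILED.
            server = self.client().servers.get(self.resource_id)
            self._verify_check_conditions([{
                'attr': 'status',
                'expected': 'ACTIVE',
                'current': server.status,
            }])

A restore-vs-replace decision could run the same probe against the backup
stack's resource before deciding whether it is safe to reuse.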


Then, looking at convergence, we have a different definition of rollback,
it's not yet clear to me how this should behave in a similar scenario, e.g
when the resource we want to roll back to failed to get deleted but still
exists (so, the resource is FAILED, but the underlying resource is fine)?


It's essentially the same. Convergence behaves a bit better when 
multiple failed versions of the same resource start stacking up, but it 
won't solve the problem.



Finally, the interface to rollback - atm you have to know before something
fails that you'd like to enable rollback for a specific update.  This seems
suboptimal, since invariably by the time you know you need rollback, it's
too late.  Can we enable a user-initiated rollback from a FAILED state, via
one of:

  - Introduce a new heat API that allows an explicit heat stack-rollback?
  - (ab)use PATCH to trigger rollback on heat stack-update -x --rollback=True?


In convergence there's no distinction between a rollback and an update 
using the previous template, so IMHO there's not much need for a 
separate API.



The former approach fits better with the current stack.Stack
implementation, because the ROLLBACK stack state already exists.  The
latter has the advantage that it doesn't need a new API so might be
backportable.


Convergence does store a copy of the previous template (not 100% sure 
when it deletes it at the moment - I suspect after the update succeeds), 
so a rollback API would be feasible if we decided we needed it. I'd 
prefer the first approach if so.



Any thoughts on how we might proceed to make this situation better, and
enable folks to roll back in the least destructive way possible when they
end up in a FAILED state?


Note that the root cause of this problem is that Heat doesn't have a 
global view of dependencies across stacks - if it did it would never 
have tried to delete the subnet with ports still in it. For the benefit 
of those who weren't at the design summit, we discussed potential fixes 
there:


https://etherpad.openstack.org/p/mitaka-heat-break-stack-barrier

cheers,
Zane.


Steve

[1] https://github.com/openstack/heat/blob/master/heat/engine/stack.py#L1331
[2] https://github.com/openstack/heat/blob/master/heat/engine/stack.py#L1143

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Request for comment on requiring running Linux as DefCore capability

2015-12-02 Thread Chris Hoge
A recent change request for the DefCore guidelines to “Flag validation
tests as being OS specific[1]" has sparked a larger discussion about
whether DefCore should explictly require running Linux as a compute
capability. The DefCore Committee has prepared a document[2] covering
the issue and possible actions, and is requesting review and comment
from the community, Board[3], and Technical Committee. The DefCore
committee would like to bring this topic for formal discussion to the
next TC meeting on December 8 to get input from TC on this issue.

Thanks,
Chris Hoge

[1] https://review.openstack.org/#/c/244782/
[2] 
https://docs.google.com/document/d/1Q_N93hJ-8WK4C3Ktcrex0mxv4VqoAjBzP9g6cDe0JoY/edit?usp=sharing
[3] https://wiki.openstack.org/wiki/Governance/Foundation/3Dec2015BoardMeeting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze is soon

2015-12-02 Thread Mike Scherbakov
Thank you all for participation. I'll write up a summary here in the next
few hours.
IRC log is unfortunately totally useless [1] due to freenode issues. Please
help me to get the full log here:
https://etherpad.openstack.org/p/fuel-8.0-FF-meeting

[1] http://irclog.perlgeek.de/fuel-dev/2015-12-02
Thanks,

On Wed, Dec 2, 2015 at 6:37 AM Igor Kalnitsky 
wrote:

> Sheena,
>
> Yeah, we will have a meeting in #fuel-dev IRC channel. :)
>
> - Igor
>
> On Wed, Dec 2, 2015 at 4:25 PM, Sheena Gregson 
> wrote:
> > Is the meeting at 8am PST today?
> >
> >
> >
> > From: Mike Scherbakov [mailto:mscherba...@mirantis.com]
> > Sent: Wednesday, December 02, 2015 1:57 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Fuel] Feature Freeze is soon
> >
> >
> >
> > In order to be effective, I created an etherpad to go over:
> >
> > https://etherpad.openstack.org/p/8.0-features-status
> >
> >
> >
> > I'd like to call everyone to update status of blueprints, so that we can
> > have accurate picture of 8.0 deliverables. During the meeting, I'd like
> us
> > to quickly sync on FFEs and clarify status of major blueprints (if it
> won't
> > be updated by some reason).
> >
> >
> >
> > In fact, we'd need to go over two first sections of etherpad (around 15
> > items now). I assume that 1 hour will be enough, and ideally to go
> quicker.
> > If I'm missing anything believed to be major, please move it in there.
> >
> >
> >
> > Thanks,
> >
> >
> >
> > On Tue, Dec 1, 2015 at 1:37 AM Vladimir Kuklin 
> wrote:
> >
> > Mike I think, it is rather good idea. I guess we can have a couple of
> > requests still - although everyone is shy, we might get a little storm of
> > FFE's. BTW, I will file at least one.
> >
> >
> >
> > On Tue, Dec 1, 2015 at 10:28 AM, Mike Scherbakov <
> mscherba...@mirantis.com>
> > wrote:
> >
> > Hi Fuelers,
> >
> > we are couple of days away from FF [1]. I have not noticed any request
> for
> > feature freeze exception, so I assume that we pretty much decided what is
> > going into 8.0 and what is not.
> >
> >
> >
> > If there are items which we'd like to ask exception for, I'd like us to
> have
> > this requested now - so that we all can spend some time on analysis of
> what
> > is done and what is left, and on risks assessment. I'd suggest to not
> > consider any exception requests on the day of FF, as it doesn't leave us
> > time to spend on it.
> >
> >
> >
> > To make a formal checkpoint of what is in and what is out, I suggest to
> get
> > together on FF day, Wednesday, and go over all the items we have been
> > working on in 8.0. What do you think folks? For instance, in #fuel-dev
> IRC
> > at 8am PST (4pm UTC)?
> >
> >
> >
> > [1] https://wiki.openstack.org/wiki/Fuel/8.0_Release_Schedule
> >
> > --
> >
> > Mike Scherbakov
> > #mihgen
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> > --
> >
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuk...@mirantis.com
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> >
> > Mike Scherbakov
> > #mihgen
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Component registry

2015-12-02 Thread Igor Kalnitsky
Fuelers,

As we decided at today's IRC meeting in #fuel-dev, FFE is granted for
1 week only.

Thanks,
igor

On Wed, Dec 2, 2015 at 5:42 PM, Andrian Noga  wrote:
> Colleagues,
>
> Folks,
> I would like to request feature freeze exception for Component registry
> https://blueprints.launchpad.net/fuel/+spec/component-registry
>
> Specification is already merged https://review.openstack.org/#/c/229306/
> Main patch is also merged https://review.openstack.org/#/c/247913/
> We still need to merge UI changes https://review.openstack.org/#/c/246889/53
> The change itself is a very small patch.
>  That should take just several days, so if there will be no other
> objections, we will be able to merge the change in a week timeframe.
>
> Regards,
> Andrian Noga
> Project manager
> Partners Centric Engineering Team,
> Mirantis, Inc.
> +38 (063) 966-21-24
> Skype: bigfoot_ua
> www.mirantis.com
> an...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-12-02 Thread Coffman, Joel M.

From: "duncan.tho...@gmail.com"
Reply-To: "openstack-dev@lists.openstack.org"
Date: Monday, November 30, 2015 at 9:13 AM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

On 30 November 2015 at 16:04, Coffman, Joel M. wrote:
On 11/25/15, 11:33 AM, "Ben Swartzlander" wrote:

On 11/24/2015 03:27 PM, Nathan Reller wrote:
Trying to design a system where we expect nova to do data encryption but
not cinder will not work in the long run. The eventual result will be
that nova will have to take on most of the functionality of cinder and
we'll be back to the nova-volume days.
Could you explain further what you mean by "nova will have to take on most of 
the functionality of cinder"? In the current design, Nova is still passing data 
blocks to Cinder for storage – they're just encrypted instead of plaintext. 
That doesn't seem to subvert the functionality of Cinder or reimplement it.

The functionality of cinder is more than blindly storing blocks - in particular 
it has create-from/upload-to image, backup, and retype, all of which do some 
degree of manipulation of the data and/or volume encryption metadata.
From a security perspective, it is advantageous for users to be able to upload 
an encrypted image, copy that image to a volume, and boot from that volume 
without decrypting the image until it is booted.

We are suffering from somewhat incompatible requirements with encryption 
between those who want fully functional cinder and encryption on disk (the 
common case I think), and those who have enhanced security requirements.
The original design supports this distinction: there is a "control-location" 
parameter that indicates where encryption is to be performed (see 
http://docs.openstack.org/user-guide-admin/dashboard_manage_volumes.html).
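
As a rough, hedged illustration of that "control-location" parameter (the
client calls, volume-type name, and provider path below are assumptions, not
something taken from this thread), front-end encryption could be configured
with python-cinderclient roughly like this:

# Hedged sketch: attach an encryption spec to an existing volume type so that
# encryption happens on the front end (compute host) rather than in the backend.
from cinderclient.v2 import client

cinder = client.Client('admin', 'secret', 'admin',
                       'http://keystone.example.com:5000/v2.0')  # placeholder credentials

specs = {
    'provider': 'nova.volume.encryptors.luks.LuksEncryptor',  # assumed provider path
    'cipher': 'aes-xts-plain64',
    'key_size': 256,
    'control_location': 'front-end',  # the parameter discussed above
}

vol_type = cinder.volume_types.find(name='LUKS')  # assumes a 'LUKS' volume type exists
cinder.volume_encryption_types.create(vol_type, specs)
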
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request: Task Based Deployment in Astute

2015-12-02 Thread Igor Kalnitsky
Hey folks,

As we decided on today's IRC meeting in #fuel-dev, FFE exception is
granted on the following conditions (if I get them right):

* the feature is marked as experimental
* patches should be merged by the end of next week

Thanks,
igor

On Tue, Dec 1, 2015 at 10:01 PM, Vladimir Kuklin  wrote:
> Hi, Folks
>
> * Intro
>
> During Iteration 3 our Enhancements Team as long as other folks worked on
> the feature called "Task Based Deployment with Astute". Here is a link to
> its blueprint:
> https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute
>
> The major implication of this feature's completion is that our deployment
> process will be drastically optimized, allowing us to decrease deployment
> time of typical clusters by at least 2.5 times (for BVT/CI cases) and by an
> order of magnitude for 100-node clusters.
>
> This is achieved by real parallelization of deployment task execution, which
> assumes that we do not wait for the whole 'deployment group/role' to deploy,
> but only wait for particular tasks to finish. For example, we could
> deploy the 'database' task on secondary controllers as soon as the 'database'
> task is ready on the first controller. As our deployment workflow contains
> only a small number of such synchronization points as the 'database' task, we
> will be able to run the majority of deployment tasks in parallel, shrinking
> deployment time to the "time-of-deployment-of-the-longest-node". This actually
> means that our standard deployment case for development and testing will
> take 30 minutes on our CI servers, thus drastically improving the developer
> and user experience, as well as shrinking the time of overall acceptance
> testing, time for bug reproduction, and so on. This feature also allows one to
> use the 7.0 role-as-a-plugin feature in a much more effective way, as the
> current split-services-with-plugins feature may lead to a very suboptimal
> deployment flow which might take up to 6 hours even for the simplest HA
> cluster, while it would again take 30 minutes with the task-based approach.
> Also, when multi-roles were used we ran several tasks for each role each
> time it was used, making deployment suboptimal again.
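
A purely illustrative sketch of such a cross-node dependency (the key names
below are assumptions, not the merged 2.0.0 task format):

database_on_secondary = {
    'id': 'database',
    'version': '2.0.0',
    'type': 'puppet',
    'roles': ['controller'],              # runs on the secondary controllers
    'cross_depends': [
        # do not start until the same task finishes on the primary controller
        {'name': 'database', 'role': 'primary-controller'},
    ],
}
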
>
>
> * Short List of Work Items
>
> As we started a little bit late during iteration 3, we worked on the design
> and specification of this feature in a way that its introduction brings
> almost zero chance of regression, with the ability to disable it. Here is
> the summary.
>
> So far we introduce several pieces of code:
> 1. New version of tasks format introducing cross-node dependencies between
> tasks
> 2. Changes to Nailgun
>   a. deduplication of tasks for roles [In Progress]
>   b. support for new tasks format [In Progress]
>   c. new engine that generates an array of hashes of tasks info consumable
> by new Astute engine [In Progress].
> 3. Changes to Astute
>  a. Tasks dependencies parser and visualizer [Ready for review]
>  b. Deployment engine capable of graph traversal and reporting [Ready for
> Review]
>  c. Async wrapper for shell-based tasks [Ready for review]
> 4. Changes to Fuel Library
>  a. Add additional fields into existing Fuel Library deployment tasks for
> cross-dependencies [In Progress].
>
> * Ensuring Minimal Regression and Backward Compatibility
>
> As we worked on being backward-compatible from the day one, this engine is
> enabled ONLY when 2 requirements are met:
>
> 1. It is globally enabled in Nailgun settings.yaml
> 2. ALL tasks scheduled for deployment execution have v2.0.0
>
> This list seems a little bit huge, but these changes are isolated and
> granular and only affect the sequence in which tasks are executed on the
> nodes. This means that there will be no actual difference in the resulting
> functioning of the cluster. This feature can be safely disabled if the
> user does not want to use it.
>
> But if a user wants to work with it, they can gain an enormous improvement
> in speed and in their own engineering/development/testing velocity, as well
> as in the Fuel user experience.
>
> * Additional Benefits of the Feature
>
> Moreover, this feature improves how the following use cases are also
> addressed:
>
> 1. When a user deploys a specific set of nodes or tasks
> It will be possible to introduce an additional flag for the deploy/task run
> handler so that Nailgun picks up the dependencies of the specified tasks,
> even if they are not currently in place in the current deployment graph.
> This means that instead of running
>
> fuel nodes --node-id 2,3 --deploy
>
> and seeing it fail because node-1 contains some of the tasks that are
> required by nodes 2 and 3, the user can stay calm, as they will be able to
> specify an option to populate the deployment flow with the needed tasks. No more
>
> fuel nodes --node-id 2 --tasks netconfig  -> Fail, because you forgot to
> specify some of the required tasks, e.g. hiera, globals.
>
> 2. Post-deployment plugin installation
>
> This feature also makes post-deployment plugin installation much easier as
> plugin 

Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-02 Thread Julien Danjou
On Wed, Dec 02 2015, ROSENSWEIG, ELISHA (ELISHA) wrote:

> Regarding the second point: Say we have 30 different types of alarms we might
> want to raise on an OpenStack instance (VM). What I understand from your
> explanation is that when we create a new instance, we need to create 30 new
> alarms in Aodh that can be triggered some time in the future. If we have 100
> instances, we will effectively have 3,000 alarms created in Aodh, and so on
> with more instances.

Not necessarily. You can create one alarm that has conditions large
enough to match e.g. all your VMs, and an alarm action that can be
generic enough so that it will do the right thing for each VM.

The alarm system provided by Aodh is really a simple event -> trigger
system in this area. How precise or broad your event matching is really
depends on the granularity that your trigger (which is usually a webhook) can
handle.
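
As a hedged illustration of such a single generic event alarm (the event type,
query, endpoint, and webhook URL below are assumptions, not something
prescribed in this thread), it could be created through the Aodh v2 API
roughly like this:

# Hedged sketch: one event alarm covering all instances, with a generic webhook
# that decides per-VM what to do with the matching event.
import json
import requests

token = 'KEYSTONE_TOKEN'  # placeholder auth token

alarm = {
    'name': 'instance-in-error-generic',
    'type': 'event',
    'event_rule': {
        'event_type': 'compute.instance.update',  # assumed event type
        'query': [{'field': 'traits.state', 'op': 'eq', 'value': 'error'}],
    },
    'alarm_actions': ['http://vitrage.example.com/alarm-hook'],  # assumed webhook
}

requests.post('http://aodh.example.com:8042/v2/alarms',
              headers={'X-Auth-Token': token, 'Content-Type': 'application/json'},
              data=json.dumps(alarm))
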

> A different approach might be to create a new alarm in Aodh on-the-fly.
> However, we are under the impression that the creation time can be up to one
> minute, which will cause a large delay. Is there any way to shorten this?

Creation time of an alarm of one minute? That's not normal. It should
consist of just a record in the database so it should be pretty fast.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-02 Thread Rosa, Andrea (HP Cloud Services)
Hi

Thanks, Sean, for bringing up this point. I have been working on the change and
on the (abandoned) spec.
I'll try here to summarize all the discussions we had and what we decided.

> From: Sean Dague [mailto:s...@dague.net]
> Sent: 02 December 2015 13:31
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova] what are the key errors with volume detach
> 
> This patch to add a bunch of logic to nova-manage for forcing volume detach
> raised a bunch of questions
> https://review.openstack.org/#/c/184537/24/nova/cmd/manage.py,cm

On this specific review there are some valid concerns that I am happy to 
address, but first we need to understand if we want this change.
FWIW I think it is still a valid change, please see below.

> In thinking about this for the last day, I think the real concern is that we 
> have
> so many safety checks on volume delete, that if we failed with a partially
> setup volume, we have too many safety latches to tear it down again.
> 
> Do we have some detailed bugs about how that happens? Is it possible to
> just fix DELETE to work correctly even when we're in these odd states?

In a simplified view of a volume detach, we can say that the Nova code does:
1. detach the volume from the instance
2. inform Cinder about the detach and call terminate_connection on the
Cinder API
3. delete the BDM record in the Nova DB
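
A rough pseudo-Python sketch of that sequence (all names here are hypothetical
placeholders, not Nova's actual internals) makes the failure mode described
below easier to see:

# Hedged, conceptual sketch of the simplified detach flow above.
def detach_volume(virt_driver, cinder_api, instance, volume_id, bdm, connector):
    virt_driver.detach_volume(instance, volume_id)         # step 1
    cinder_api.terminate_connection(volume_id, connector)  # step 2: if this (or the
    cinder_api.detach(volume_id)                           #   next) call fails, the
                                                           #   volume stays 'detaching'
    bdm.destroy()                                          # step 3: drop the BDM record
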

If step 2 fails, the volume gets stuck in a "detaching" status and any further
attempt to delete or detach the volume will fail:
"Delete for volume  failed: Volume  is still attached, 
detach volume first. (HTTP 400)"

And if you try to detach:
"ERROR (BadRequest): Invalid input received: Invalid volume: Unable to detach
volume. Volume status must be 'in-use' and attach_status must be 'attached' to
detach. Currently: status: 'detaching', attach_status: 'attached.' (HTTP 400)"

At the moment the only way to clean up the situation is to hack the Nova DB to
delete the BDM record, plus some hacking on the Cinder side as well.
We wanted a way to clean up the situation without manually hacking the Nova
DB.

Solution proposed #1
Move the deletion of the BDM record so that it happens before calling Cinder. I
thought that was OK, since from the Nova side we would be done (no leaking BDM)
and the problem was only on the Cinder side, but I was wrong.
We have to call terminate_connection, otherwise the device may show back up on
the Nova host; for example, that is true for iSCSI volumes:
 "if an iSCSI session from the compute host to the storage backend still exists
(because other volumes are connected), then the volume you just removed will
show back up on the next scsi bus rescan."
The key point here is that Nova must call terminate_connection because only
Nova has the "connector info" needed to call the terminate_connection method,
so Cinder can't fix this on its own.

Solution proposed #2
Then I thought, OK, let's expose a new Nova API called "force delete volume"
which skips all the checks and allows detaching a volume in "detaching" status.
I thought it was OK, but I was wrong (again).
The main concern here is that we do not want to have the concept of "force
delete": the user already asked to detach the volume, and the call should be
idempotent and just work.
So adding a new API was just adding technical debt in the REST API for a
buggy/weak interaction between the Cinder API and Nova; in other words, we
would be adding a Nova API to fix a bug in Cinder, which is very odd.

Solution proposed #3
OK, so the solution is to fix the Cinder API and make the interaction between
the Nova volume manager and that API robust.
This time I was right (yay!), but as you can imagine this fix is not going to
be an easy one, and after talking with the Cinder folks they clearly told me
that it is going to be a massive change in the Cinder API and is unlikely to
land in the N(utella) or O(melette) release.

Solution proposed #4: the trade-off
This is the solution I am proposing in the patch you mentioned. The idea is to
have a temporary solution which gives operators a handy tool to easily fix a
problem that can occur quite often.
The solution should have a low impact on the Nova code and be easy to remove
once we have the proper solution for the root cause.
"Quick", "useful", "dirty", "tool", "trade-off", "db"... we call it
nova-manage!
The idea is to put a new method in compute/api.py which skips all the checks
on the volume status and goes ahead with calling detach_volume on the compute
manager to detach the volume, calls terminate_connection, and cleans up the
BDM entry.
nova-manage will have a command to call that new method directly.
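
Schematically, and only as a hedged sketch of the idea (the class and method
names are hypothetical, not the actual patch; force_detach_volume stands in
for the proposed new compute API method), the nova-manage-driven path could
look like:

# Hedged sketch only; not the real change.
from nova import context
from nova import objects
from nova.compute import api as compute_api


class VolumeAttachmentCommands(object):
    """nova-manage entry point to clean up a stuck 'detaching' volume."""

    def force_detach(self, instance_uuid, volume_id):
        ctxt = context.get_admin_context()
        instance = objects.Instance.get_by_uuid(ctxt, instance_uuid)
        # the proposed new compute API method that skips the volume-status checks
        compute_api.API().force_detach_volume(ctxt, instance, volume_id)
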

That is a recap that I hope helps explain how we decided to use nova-manage
instead of other solutions. It was a bit long, but I tried to condense the
comments from 53 spec patches + 24 code change patches (and counting).

PS: I added the cinder tag as they are 

[openstack-dev] [release][barbican][heat][manila] finishing reno integration work

2015-12-02 Thread Doug Hellmann
We have 3 managed projects following the release:cycle-with-milestones
release model (barbican, heat, manila) that haven't completed the
work to add reno to their projects for managing release notes.

Please complete the work today so you don't miss the M-1 milestone
deadline tomorrow.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominating Dmitry Burmistrov to core reviewers of fuel-mirror

2015-12-02 Thread Vladimir Kozhukalov
Mike,

Yes, probably the best place to describe further plans is the README file. I'll
create a patch.

Vladimir Kozhukalov

On Tue, Dec 1, 2015 at 8:30 PM, Mike Scherbakov 
wrote:

> Vladimir,
> if you've been behind of this, could you please share further plans in
> separate email thread or (better) provide plans in README in the repo, so
> everyone can be aware of planned changes and can review them too? If you or
> someone else propose a change, please post a link here...
>
> Thanks,
>
> On Tue, Dec 1, 2015 at 6:27 AM Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Thomas,
>>
>> You are right about two independent modules in the repo. That is because
>> the former intention was to get rid of fuel-mirror (and fuel-createmirror)
>> and perestroika and leave only packetary there. Packetary is to be
>> developed so that it is able to build not only repositories but packages as
>> well. So we'll be able to remove perestroika once it is ready. Two major
>> capabilities of fuel-mirror are:
>> 1) create mirror (and partial mirror) and packetary can be used for this
>> instead
>> 2) apply mirror to nailgun (which is rather a matter of python-fuelclient)
>> So fuel-mirror also should be removed in the future to avoid
>> functionality duplication.
>>
>> Those were the reasons not to put them separately. (C) "There can be only
>> one".
>>
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Tue, Dec 1, 2015 at 1:25 PM, Thomas Goirand  wrote:
>>
>>> On 12/01/2015 09:25 AM, Mike Scherbakov wrote:
>>> >  4. I don't quite understand how the repo is organized. I see a lot of
>>> > Python code related to fuel-mirror itself and packetary, which is
>>> > used as fuel-mirror's core and is written and maintained mostly by
>>> > Bulat [5]. There seem to be bash scripts now related to
>>> > Perestroika, and I don't quite get how these things relate to each
>>> > other, and whether we expect core reviewers to be merging code into both
>>> > Perestroika and Packetary. Unless the mission of the repo and its code
>>> > get clear, I'd abstain from giving +1...
>>>
>>> Also, why isn't packetary living in its own repository? It seems wrong
>>> to me to have 2 python modules living in the same source repo, unless
>>> they share the same egg-info. It feels weird to have to call setup.py
>>> install twice in the resulting Debian source package. That's not how
>>> things are done elsewhere, and I'd like to avoid special cases, just
>>> because it's fuel...
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-02 Thread Igor Marnat
Dmitry,
thank you!

I confirm that the plan looks good for our team, we'll follow it.

Regards,
Igor Marnat

On Wed, Dec 2, 2015 at 9:39 PM, Andrew Maksimov 
wrote:

> Thank you Dmitry for very detailed plan and risks assessment.
> Do we want to run swarm against custom iso with centos7 on Thu evening to
> measure level of regression? I remember that we were considering this
> approach.
>
> Regards,
> Andrey Maximov
>
>
> On Wed, Dec 2, 2015 at 12:48 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> With a bit more detail, I hope this covers all the risks and decision
>> points now.
>>
>> First of all, current list of outstanding commits:
>> https://etherpad.openstack.org/p/fuel_on_centos7
>>
>> The above list has two sections: backwards compatible changes that can
>> be merged one at a time even if the rest of CentOS7 support isn't
>> merged, and backwards incompatible changes that break support for
>> CentOS6 and must be merged (and, if needed, reverted) all at once.
>>
>> Decision point 1: FFE for CentOS7
>>
>> CentOS7 support cannot be fully merged on Dec 2, so it misses FF. Can it
>> be allowed a Feature Freeze Exception? So far, the disruption of the
>> Fuel development process implied by the proposed merge plan is
>> acceptable; if anything goes wrong and we become unable to have a stable
>> ISO with merged CentOS7 support on Monday, December 7, the FFE will be
>> revoked.
>>
>> Wed, Dec 2: Merge party
>>
>> Merge party before 8.0 FF, we should do our best to merge all remaining
>> feature commits before end of day (including backwards compatible
>> CentOS7 support commits), without breaking the build too much.
>>
>> At the end of the day we'll start a swarm test over the result of the
>> merge party, and we expect QA to analyze and summarize the results by
>> 17:00 MSK (6:00 PST) on Thu Dec 3.
>>
>> Risk 1: Merge party breaks the build
>>
>> If there is a large regression in swarm pass percentage, we won't be
>> able to afford a merge freeze which is necessary to merge CentOS7
>> support, we'll have to be merging bugfixes until swarm test pass rate is
>> back around 70%.
>>
>> Risk 2: More features get FFE
>>
>> If some essential 8.0 features are not completely merged by end of day
>> Wed Dec 2 and are granted FFE, merging the remaining commits can
>> interfere with merging CentOS7 support, not just from merge conflicts
>> perspective, but also invalidating swarm results and making it
>> practically impossible to bisect and attribute potential regressions.
>>
>> Thu, Dec 3: Start merge freeze for CentOS7
>>
>> Decision point 2: Other FFEs
>>
>> In the morning MSK time, we will assess Risk 2 and decide what to do
>> with the other FFEs. The options are: integrate remaining commits into
>> CentOS7 merge plan, block remaining commits until Monday, revoke CentOS7
>> FFE.
>>
>> If the decision is to go ahead with CentOS7 merge, we announce merge
>> freeze for all git repositories that go into Fuel ISO, and spend the
>> rest of the day rebasing and cleaning up the rest of the CentOS7 commits
>> to make sure they're all in mergeable state by the end of the day. The
>> outcome of this work must be a custom ISO image with all remaining
>> commits, with additional requirement that it must not use Jenkins job
>> parameters (only patches to fuel-main that change default repository
>> paths) to specify all required package repositories. This will validate
>> the proposed fuel-main patches and ensure that no unmerged package
>> changes are used to produce the ISO.
>>
>> Decision point 3: Swarm pass rate
>>
>> After swarm results from Wed are available, we will assess the Risk 1.
>> If the pass rate regression is significant, CentOS7 FFE is revoked and
>> merge freeze is lifted. If regression is acceptable, we proceed with
>> merging remaining CentOS7 commits through Thu Dec 3 and Fri Dec 4.
>>
>> Fri, Dec 4: Merge and test CentOS7
>>
>> The team will have until 17:00 MSK to produce a non-custom ISO that
>> passes BVT and can be run through swarm.
>>
>> Sat, Dec 5: Assess CentOS7 swarm and bugfix
>>
>> First of all, someone from CI and QA teams should commit to monitoring
>> the CentOS7 swarm run and report the results as soon as possible. Based
>> on the results (which once again must be available by 17:00 MSK), we can
>> decide on the final step of the plan.
>>
>> Decision point 4: Keep or revert
>>
>> If CentOS7 based swarm shows significant regression, we have to spend
>> the rest of the weekend including Sunday reverting all CentOS7 commits
>> that were merged during merge freeze. Once revert is completed, we will
>> lift the merge freeze.
>>
>> If the regression is acceptable, we lift the merge freeze straight away
>> and proceed with bugfixing as usual. At this point CI team will need to
>> update the Fuel ISO used for deployment tests in our CI to this same
>> ISO.
>>
>> One way or the other, we will be able to resume bugfixing on Monday
>> morning MSK time, and will 

Re: [openstack-dev] [Fuel] Feature Freeze Exception Request: Task Based Deployment in Astute

2015-12-02 Thread Mike Scherbakov
Correct. See my summary email at
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081131.html
.

On Wed, Dec 2, 2015 at 10:11 AM Igor Kalnitsky 
wrote:

> Hey folks,
>
> As we decided on today's IRC meeting in #fuel-dev, FFE exception is
> granted on the following conditions (if get them right):
>
> * the feature is marked as experimental
> * patches should be merged by the end of next week
>
> Thanks,
> igor
>
> On Tue, Dec 1, 2015 at 10:01 PM, Vladimir Kuklin 
> wrote:
> > Hi, Folks
> >
> > * Intro
> >
> > During Iteration 3 our Enhancements Team as long as other folks worked on
> > the feature called "Task Based Deployment with Astute". Here is a link to
> > its blueprint:
> > https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute
> >
> > Major implication of this feature complition is that our deployment
> process
> > will be drastically optimized allowing us to decrease deployment time of
> > typical clusters at least by 2,5 times (for BVT/CI cases) and by order of
> > magnitude for 100-node clusters.
> >
> > This is achieved by real parallelization of deployment tasks execution
> which
> > assumes that we do not wait for the whole 'deployment group/role' to
> deploy,
> > but we only wait for particular tasks to finish. For example, we could
> > deploy 'database' task on secondary controllers as soon as 'database'
> task
> > is ready on the first controller. As our deployment workflow consists
> only
> > of a small amount of such synchronization points as 'database' task, we
> will
> > be able to deploy majority of deployment tasks in parallel shrinking
> > deployment time to "time-of-deployment-of-the-longest-node". This
> actually
> > means that our standard deployment case for development and testing will
> > take 30 minutes on our CI servers thus drastically improving developers
> and
> > users experience, as well as shrinking down time of overall acceptance
> > testing, time for bug reproducing and so on. This feature also allows
> one to
> > use 7.0 role-as-a-plugin feature in much more effective way as current
> > split-services-with-plugins feature may lead to very inoptimal deployment
> > flow which might take up to 6 hours even for the simplest HA cluster,
> while
> > it would take again 30 minutes with Task-Based approach.
> > Also, when multi-roles were used we ran several tasks for each role each
> > time it was used, making deployment suboptimal again.
> >
> >
> > * Short List of Work Items
> >
> > As we started a little bit lately during iteration 3 we worked on design
> and
> > specification of this feature in a way so that its introduction will
> bring
> > in almost zero chance of regression with ability to disable it. Here is
> the
> > summary
> >
> > So far we introduce several pieces of code:
> > 1. New version of tasks format introducing cross-node dependencies
> between
> > tasks
> > 2. Changes to Nailgun
> >   a. deduplication of tasks for roles [In Progress]
> >   b. support for new tasks format [In Progress]
> >   c. new engine that generates an array of hashes of tasks info
> consumable
> > by new Astute engine [In Progress].
> > 3. Changes to Astute
> >  a. Tasks dependencies parser and visualizer [Ready for review]
> >  b. Deployment engine capable of graph traversing and reporting [Read for
> > Review]
> >  c. Async wrapper for shell-based tasks [Ready for review]
> > 4. Changes to Fuel Library
> >  a. Add additional fields into existing Fuel Library deployment tasks for
> > cross-dependencies [In Progress].
> >
> > * Ensurance of Little Regression and Backward Compatibility
> >
> > As we worked on being backward-compatible from the day one, this engine
> is
> > enabled ONLY when 2 requirements are met:
> >
> > 1. It is globally enabled in Nailgun settings.yaml
> > 2. ALL tasks scheduled for deployment execution have v2.0.0
> >
> > This list seems a little bit huge, but this changes are isolated and
> > granular and actually affect the sequence in which tasks are executed on
> the
> > nodes. This means that there will be actually no difference from the
> view of
> > resulting functioning of the cluster. This feature can be safely
> disabled if
> > user does not want to use it.
> >
> > But if user wants to work with it, he can gain enormous improvement in
> > speed, his own engineering/development/testing velocity as well as in
> Fuel
> > user experience.
> >
> > * Additional Cons of the Feature
> >
> > Moreover, this feature improves how the following use cases are also
> > addressed:
> >
> > 1. When user deploys a specific set of nodes or tasks
> > It will be possible to introduce additional flag for deploy/task run
> handler
> > for Nailgun to pick up dependencies of specified tasks, even if they are
> > currently not in place in current deployment graph. This means that
> instead
> > of running
> >
> > fuel nodes --node-id 2,3 --deploy
> >
> > and see how it fails as node-1 contains some 

Re: [openstack-dev] [neutron] Release Notes for *aaS projects

2015-12-02 Thread Brandon Logan
Makes complete sense.

On Wed, 2015-12-02 at 10:38 -0600, Kyle Mestery wrote:
> We're hoping to cut Neutron M-1 this week [1]. We have implemented
> release notes in the main Neutron repository [2], but not in the *aaS
> repositories. At the time, I thought this was a good approach and we
> could collect all releasenotes there. But I think it makes sense to
> have releasenotes in the *aaS repositories as well.
> 
> What I'm going to propose is we cut Neutron M-1 as-is now, with any
> *aaS releasenotes done in the main repository. Once Neutron M-1 is
> cut, I'll add the releasenotes stuff into the *aaS repositories, and
> we can start using releasenotes independently in those repositories.
> 
> If anyone has issues with this approach please reply on this thread.
> 
> Thanks!
> Kyle
> 
> [1] https://review.openstack.org/#/c/251959/
> [2] https://review.openstack.org/241758
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Douglas Fish
+1 for both Richard and Timur. Great additions to the team!
Doug Fish
 
 
- Original message -
From: "Chen, Shaoquan"
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:
Subject: Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core
Date: Wed, Dec 2, 2015 2:03 PM

+1 for both Timur and Richard!

On 12/2/15, 10:57 AM, "David Lyle" wrote:
>Let's try that again.
>
>I propose adding Richard Jones[1] to horizon-core.
>
>Over the last several cycles Richard has consistently been providing
>great reviews, actively participating in the Horizon community, and
>making meaningful contributions around angularJS and overall project
>stability and health.
>
>Please respond with comments, +1s, or objections within one week.
>
>Thanks,
>David
>
>[1]
>http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all
>
>On Wed, Dec 2, 2015 at 11:56 AM, David Lyle wrote:
>> I propose adding Richard Jones[1] to horizon-core.
>>
>> Over the last several cycles Timur has consistently been providing
>> great reviews, actively participating in the Horizon community, and
>> making meaningful contributions around angularJS and overall project
>> stability and health.
>>
>> Please respond with comments, +1s, or objections within one week.
>>
>> Thanks,
>> David
>>
>> [1]
>> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Feature Freeze Exceptions

2015-12-02 Thread Mike Scherbakov
Hi all,
we ran a meeting and made a decision on feature freeze exceptions. Full log
is here:
https://etherpad.openstack.org/p/fuel-8.0-FF-meeting

The following features were granted with feature freeze exception:

   1. CentOS 7. ETA: Monday 7th. Blueprint:
   https://blueprints.launchpad.net/fuel/+spec/master-on-centos7. We have
   rather complicated plan on merges here:
   http://lists.openstack.org/pipermail/openstack-dev/2015-December/081026.html
   2. Disable queue mirroring for RPC queues in RabbitMQ. ETA: not
   defined. As a fairly small patch is involved, I propose Monday, 7th to be
   the last day.
   
https://blueprints.launchpad.net/fuel/+spec/rabbitmq-disable-mirroring-for-rpc
   3. Task based deployment with Astute. ETA: Friday, 11th.
   https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute.
   Decision is that new code must be disabled by default.
   4. Component Registry. ETA: Wednesday, 10th. Only
   https://review.openstack.org/#/c/246889/ is given with an exception. BP:
   https://blueprints.launchpad.net/fuel/+spec/component-registry
   5. Add vmware cluster after deployment. ETA: Tuesday, 8th. Only this
   patch is given with an exception:
   https://review.openstack.org/#/c/251278/. BP:
   https://blueprints.launchpad.net/fuel/+spec/add-vmware-clusters
   6. Support murano service broker. ETA: Tuesday, 8th. Only this patch is
   given an exception: https://review.openstack.org/#/c/252356. BP:
   
https://blueprints.launchpad.net/fuel/+spec/implement-support-for-murano-service-broker
   7. Ubuntu boostrap. Two patches requested for FFE:
   https://review.openstack.org/#/c/250504/,
   https://review.openstack.org/#/c/251873/. Both are merged. So I consider
   that this is actually done.

I'm calling on everyone to update blueprint statuses. I'm volunteering to go
over open blueprints targeting 8.0 tomorrow, and move all that are not in
"Implemented" status unless they are exceptions or test/docs related
things.

Thanks all for keeping a focused effort on getting code into master. I
strongly suggest that we don't push any exception further out, and if
something is not done by its second deadline, it has to be disabled / reverted
in 8.0.
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Component registry

2015-12-02 Thread Mike Scherbakov
ETA Wednesday, 10th
Summary of meeting:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081131.html

On Wed, Dec 2, 2015 at 10:07 AM Igor Kalnitsky 
wrote:

> Fuelers,
>
> As we decided on today's IRC meeting in #fuel-dev, FFE is granted for
> 1 week only.
>
> Thanks,
> igor
>
> On Wed, Dec 2, 2015 at 5:42 PM, Andrian Noga  wrote:
> > Colleagues,
> >
> > Folks,
> > I would like to request feature freeze exception for Component registry
> > https://blueprints.launchpad.net/fuel/+spec/component-registry
> >
> > Specification is already merged https://review.openstack.org/#/c/229306/
> > Main patch is also merged https://review.openstack.org/#/c/247913/
> > We still need to merge UI changes
> https://review.openstack.org/#/c/246889/53
> > The change itself is a very small patch.
> >  That should take just several days, so if there will be no other
> > objections, we will be able to merge the change in a week timeframe.
> >
> > Regards,
> > Andrian Noga
> > Project manager
> > Partners Centric Engineering Team,
> > Mirantis, Inc.
> > +38 (063) 966-21-24
> > Skype: bigfoot_ua
> > www.mirantis.com
> > an...@mirantis.com
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Timur Sufiev
Well, I'm not sure if I'm eligible to chime in (yet), but IMO Richard is
the first candidate here (and I'm the second one) :).

On Wed, Dec 2, 2015 at 9:59 PM Rob Cresswell (rcresswe) 
wrote:

> An equally big +1!
>
> On 02/12/2015 18:56, "David Lyle"  wrote:
>
> >I propose adding Richard Jones[1] to horizon-core.
> >
> >Over the last several cycles Timur has consistently been providing
> >great reviews, actively participating in the Horizon community, and
> >making meaningful contributions around angularJS and overall project
> >stability and health.
> >
> >Please respond with comments, +1s, or objections within one week.
> >
> >Thanks,
> >David
> >
> >[1]
> >
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s
> >=all
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread David Lyle
I propose adding Richard Jones[1] to horizon-core.

Over the last several cycles Timur has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions around angularJS and overall project
stability and health.

Please respond with comments, +1s, or objections within one week.

Thanks,
David

[1] 
http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Diana Whitten
٩(͡๏̮͡๏)۶

+1!

On Wed, Dec 2, 2015 at 11:58 AM, Rob Cresswell (rcresswe) <
rcres...@cisco.com> wrote:

> An equally big +1!
>
> On 02/12/2015 18:56, "David Lyle"  wrote:
>
> >I propose adding Richard Jones[1] to horizon-core.
> >
> >Over the last several cycles Timur has consistently been providing
> >great reviews, actively participating in the Horizon community, and
> >making meaningful contributions around angularJS and overall project
> >stability and health.
> >
> >Please respond with comments, +1s, or objections within one week.
> >
> >Thanks,
> >David
> >
> >[1]
> >
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s
> >=all
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug deputy process

2015-12-02 Thread Salvatore Orlando
I only have some historical, anecdotal, and rapidly waning memory of
previous releases.
Nevertheless my feeling is that the process has been a success so far.
In past times it would not have been a surprise if a bug fell under the
radar until that well known brownish matter hit the proverbial fan.

Also, only 17 bugs out of 373 are in "new" status, which means that, at
worst, only 4.6% of reported bugs have not yet been analysed by the team.
I reckon these numbers are rather impressive. Kudos to both the deputies
and most importantly to Armando who set up the process.

Salvatore


On 2 December 2015 at 19:49, Armando M.  wrote:

> Hi neutrinos,
>
> It's been a couple of months that the Bug deputy process has been in place
> [1,2]. Since the beginning of Mitaka we have collected the following
> statistics (for neutron and neutronclient):
>
> Total bug reports: 373
>
>- Fix committed: 144
>- Unassigned: 73
>   - New: 17
>   - Incomplete: 20
>   - Confirmed: 27
>   - Triaged: 6
>
>
> At first, it is clear that we do not fix issues nearly as fast as they
> come in, but at least we managed to keep the number of unassigned/unvetted
> bugs relatively small, so kudos to you all who participated in this
> experiment. I don't have data based on older releases, so I can't see
> whether we've improved or worsened, and I'd like to ask for feedback from
> the people who played with this first hand, especially on the amount of
> time that has taken them to do deputy duty for their assigned week.
>
>- ihrachys
>- regXboi
>- markmcclain
>- mestery
>- mangelajo
>- garyk
>- rossella_s
>- dougwig
>
> Many thanks,
> Armando
>
> [1] https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy
> [2]
> http://docs.openstack.org/developer/neutron/policies/bugs.html#neutron-bug-deputy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Tripp, Travis S
Definite +1 for me.  I think they are great additions to the team!

-Travis





On 12/2/15, 11:57 AM, "David Lyle"  wrote:

>Let's try that again.
>
>I propose adding Richard Jones[1] to horizon-core.
>
>Over the last several cycles Richard has consistently been providing
>great reviews, actively participating in the Horizon community, and
>making meaningful contributions around angularJS and overall project
>stability and health.
>
>Please respond with comments, +1s, or objections within one week.
>
>Thanks,
>David
>
>[1] 
>http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all
>
>On Wed, Dec 2, 2015 at 11:56 AM, David Lyle  wrote:
>> I propose adding Richard Jones[1] to horizon-core.
>>
>> Over the last several cycles Timur has consistently been providing
>> great reviews, actively participating in the Horizon community, and
>> making meaningful contributions around angularJS and overall project
>> stability and health.
>>
>> Please respond with comments, +1s, or objections within one week.
>>
>> Thanks,
>> David
>>
>> [1] 
>> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Fuel 8.0 Feature Freeze

2015-12-02 Thread Dmitry Borodaenko
Fuel 8.0 (based on Liberty) is now in Feature Freeze [0].

[0] https://wiki.openstack.org/wiki/FeatureFreeze

Aside from feature commits that have received feature freeze exceptions
listed in Mike's email [1], only bugfix commits are now allowed to be
merged.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-December/081131.html

Let's all be extra careful about what gets merged from now on and look
out for possible regressions and disruptions. The list of exceptions is
much longer than I'd like, and some have a larger impact than I'd like;
let's all make sure we don't come to regret granting these
exceptions.

As per my proposal back in August [2], master branches will remain
closed for feature work only until Soft Code Freeze. On Soft Code Freeze
(currently scheduled to December 23), stable/8.0 branches will be
created and master branches will be opened for feature work and Mitaka
support.

[2] http://lists.openstack.org/pipermail/openstack-dev/2015-August/073110.html

Note that our definitions of Soft Code Freeze and Hard Code Freeze on
the wiki [3] haven't been updated yet and still show the old scheme
(stable branches created at HCF); apologies for that.

[3] https://wiki.openstack.org/wiki/Fuel#Release_Milestones

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-12-02 Thread Jeremy Stanley
[Apologies for the delayed reply, after more than a week without
Internet access it's taking me more than a week to catch up with
everything on the mailing lists.]

On 2015-11-20 10:21:47 + (+), Kuvaja, Erno wrote:
[...]
> So we were brainstorming this with Rocky the other night. Would
> this be possible to do by following:
> 
> 1) we still tag juno EOL in few days time

Hopefully by the end of this week, once I finish making sure I'm up
to speed on everything that's been said while I was out (anything
less would be irresponsible of me).

> 2) we do not remove the stable/juno branch

As pointed out later in this thread by Alan, it's technically
possible to use a tag instead of a branch name (after all, both are
just Git refs in the end), and deleting the branch sends a clearer
message that there are no new commits coming for stable/juno ever
again.

> 3) we run periodic grenade jobs for kilo
> 
> I'm not that familiar with the grenade job itself so I'm doing
> couple of assumptions, please correct me if I'm wrong.
> 
> 1) We could do this with py27 only

Our Grenade jobs are only using Python 2.7 anyway.

> 2) We could do this with Ubuntu 1404 only

That's the only place we run Grenade now that stable/icehouse is EOL
(it was the last branch for which we supported Ubuntu 12.04).

> If this is doable would we need anything special for these jobs in
> infra point of view or can we just schedule these jobs from the
> pool running our other jobs as well?
> 
> If so is there still "quiet" slots on the infra utilization so
> that we would not be needing extra resources poured in for this?
> 
> Is there something else we would need to consider in QA/infra
> point of view?
[...]

There are no technical Infra-side blockers to changing how we've
done this in the past and instead continuing to run stable/kilo
Grenade jobs for some indeterminate period after stable/juno is
dead, but it's also not (entirely) up to Infra to decide this. I
defer to the Grenade maintainers and QA team to make this
determination, and they seem to be pretty heavily against the idea.

> Big question ref the 2), what can we do if the grenade starts
> failing? In theory we won't be merging anything to kilo that
> _should_ cause this and we definitely will not be merging anything
> to Juno to fix these issues anymore. How much maintenance those
> grenade jobs themselves needs?

That's the kicker. As I explained earlier in the thread from which
this one split, keeping Juno-era DevStack and Tempest and all the
bits on which they rely working in our CI without being able to make
any modifications to them is intractable (mainly because of the
potential for behavior changes in transitive dependencies not under
our control):

http://lists.openstack.org/pipermail/openstack-dev/2015-December/081109.html

> So all in all, is the cost doing above too much to get indicator
> that tells us when Juno --> Kilo upgrade is not doable anymore?

Yes. This is how we arrived at the EOL timeline for stable/juno in
the first place: gauging our ability to keep running things like
DevStack and Tempest on it. Now is not the time to discuss how we
can keep Juno on some semblance of life support (that discussion
concluded more than a year ago), it's time for discussing what we
can implement in Mitaka so we have more reasonable options for
keeping the stable/mitaka branch healthy a year from now.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [openstack-operators] Tools to move instances between projects?

2015-12-02 Thread Kris G. Lindgren
Hello,

I was wondering if someone has a set of tools/code to allow admins to move
VMs from one tenant to another?  We get asked this fairly frequently in our
internal cloud (at least once a week, more when we start going through and
cleaning up resources for people who are no longer with the company).  I have
searched and I was not able to find anything externally.

Matt Riedemann pointed me to an older spec for Nova:
https://review.openstack.org/#/c/105367/.  I realize that this will
most likely need to be a cross-project effort, since VMs consume resources
from multiple other projects, and moving a VM between projects would also
require that those other resources get updated as well.

Is anyone aware of a cross-project spec to handle this – or of specs in other
projects?
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread David Lyle
Let's try that again.

I propose adding Richard Jones[1] to horizon-core.

Over the last several cycles Richard has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions around angularJS and overall project
stability and health.

Please respond with comments, +1s, or objections within one week.

Thanks,
David

[1] 
http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

On Wed, Dec 2, 2015 at 11:56 AM, David Lyle  wrote:
> I propose adding Richard Jones[1] to horizon-core.
>
> Over the last several cycles Timur has consistently been providing
> great reviews, actively participating in the Horizon community, and
> making meaningful contributions around angularJS and overall project
> stability and health.
>
> Please respond with comments, +1s, or objections within one week.
>
> Thanks,
> David
>
> [1] 
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Timur Sufiev to horizon-core

2015-12-02 Thread Diana Whitten
+1!

On Wed, Dec 2, 2015 at 11:57 AM, Rob Cresswell (rcresswe) <
rcres...@cisco.com> wrote:

> A big +1 for me!
>
> On 02/12/2015 18:52, "David Lyle"  wrote:
>
> >I propose adding Timur Sufiev[1] to horizon-core.
> >
> >Over the last several cycles Timur has consistently been providing
> >great reviews, actively participating in the Horizon community, and
> >making meaningful contributions particularly around testing and
> >stability.
> >
> >Please respond with comments, +1s, or objections within one week.
> >
> >Thanks,
> >David
> >
> >[1]
> >
> http://stackalytics.com/?module=horizon-group_id=tsufiev-x=al
> >l
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Dmitry Mescheryakov
2015-12-02 16:52 GMT+03:00 Jordan Pittier :

>
> On Wed, Dec 2, 2015 at 1:05 PM, Dmitry Mescheryakov <
> dmescherya...@mirantis.com> wrote:
>
>>
>>
>> My point is simple - lets increase our architecture scalability by 2-3
>> times by _maybe_ causing more errors for users during failover. The
>> failover time itself should not get worse (to be tested by me) and errors
>> should be correctly handler by services anyway.
>>
>
> Scalability is great, but what about correctness ?
>

Jordan, users will encounter problems only when some of the RabbitMQ nodes go
down. Under normal circumstances it will not cause any additional errors.
And when RabbitMQ goes down and oslo.messaging fails over to the alive hosts,
we already have a couple of minutes of messaging downtime at the moment, which
disrupts almost all RPC calls. On the other hand, disabling mirroring
greatly reduces the chances that a RabbitMQ node goes down due to high load.
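
As a hedged illustration only (the queue-name pattern, vhost, credentials, and
host below are assumptions, not Fuel's actual configuration), keeping mirroring
for everything except oslo.messaging RPC reply and fanout queues could be
expressed as an HA policy via the RabbitMQ management API:

# Hedged sketch: mirror all queues except RPC reply/fanout queues.
import json
import requests

policy = {
    'pattern': '^(?!amq\\.)(?!reply_)(?!.*_fanout_).*',  # assumed RPC queue naming
    'definition': {'ha-mode': 'all'},
    'apply-to': 'queues',
    'priority': 0,
}

requests.put('http://rabbit.example.com:15672/api/policies/%2F/ha-non-rpc',
             auth=('guest', 'guest'),
             headers={'Content-Type': 'application/json'},
             data=json.dumps(policy))
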


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Matthias Runge
On 02/12/15 19:57, David Lyle wrote:
> Let's try that again.
> 
> I propose adding Richard Jones[1] to horizon-core.

> Please respond with comments, +1s, or objections within one week.
Yes, +1 from me.

Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Bug deputy process

2015-12-02 Thread Armando M.
Hi neutrinos,

It's been a couple of months that the Bug deputy process has been in place
[1,2]. Since the beginning of Mitaka we have collected the following
statistics (for neutron and neutronclient):

Total bug reports: 373

   - Fix committed: 144
   - Unassigned: 73
  - New: 17
  - Incomplete: 20
  - Confirmed: 27
  - Triaged: 6


At first, it is clear that we do not fix issues nearly as fast as they come
in, but at least we managed to keep the number of unassigned/unvetted bugs
relatively small, so kudos to you all who participated in this experiment.
I don't have data based on older releases, so I can't see whether we've
improved or worsened, and I'd like to ask for feedback from the people who
played with this first hand, especially on the amount of time that has
taken them to do deputy duty for their assigned week.

   - ihrachys
   - regXboi
   - markmcclain
   - mestery
   - mangelajo
   - garyk
   - rossella_s
   - dougwig

Many thanks,
Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy
[2]
http://docs.openstack.org/developer/neutron/policies/bugs.html#neutron-bug-deputy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Announcing Third Party CI for Proliant iLO Drivers

2015-12-02 Thread Ruby Loo
On 30 November 2015 at 11:25, Gururaj Grandhi 
wrote:

> Hi,
>
>
>
>  This is to announce that we have set up a Third Party CI
> environment for Proliant iLO Drivers. The results will be posted under the
> "HP Proliant CI check" section in non-voting mode.  We will be running
> the basic deploy tests for the iscsi_ilo and agent_ilo drivers for the
> check queue.  We will first work to make the results consistent, and
> over a period of time we will try to promote it to voting mode.
>
>
>
>For more information check the Wiki:
> https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/third-party-ci
> For any issues, please contact ilo_driv...@groups.ext.hpe.com
>
>
>
>
>
> Thanks & Regards,
>
> Gururaja Grandhi
>
> R Project Manager
>
> HPE Proliant  Ironic  Project
>
>
>
(I now know that these announcements shouldn't be posted but since this
was, I think it merits ...)

Yay, that's awesome! Thanks for setting this up so quickly. Good thing
we're not competitive; otherwise I'd say this is a challenge to the other
vendors out there :D

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Proposal to add Timur Sufiev to horizon-core

2015-12-02 Thread David Lyle
I propose adding Timur Sufiev[1] to horizon-core.

Over the last several cycles Timur has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions particularly around testing and
stability.

Please respond with comments, +1s, or objections within one week.

Thanks,
David

[1] http://stackalytics.com/?module=horizon-group_id=tsufiev-x=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Rob Cresswell (rcresswe)
An equally big +1!

On 02/12/2015 18:56, "David Lyle"  wrote:

>I propose adding Richard Jones[1] to horizon-core.
>
>Over the last several cycles Timur has consistently been providing
>great reviews, actively participating in the Horizon community, and
>making meaningful contributions around angularJS and overall project
>stability and health.
>
>Please respond with comments, +1s, or objections within one week.
>
>Thanks,
>David
>
>[1] 
>http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s
>=all
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Timur Sufiev to horizon-core

2015-12-02 Thread Rob Cresswell (rcresswe)
A big +1 for me!

On 02/12/2015 18:52, "David Lyle"  wrote:

>I propose adding Timur Sufiev[1] to horizon-core.
>
>Over the last several cycles Timur has consistently been providing
>great reviews, actively participating in the Horizon community, and
>making meaningful contributions particularly around testing and
>stability.
>
>Please respond with comments, +1s, or objections within one week.
>
>Thanks,
>David
>
>[1] 
>http://stackalytics.com/?module=horizon-group_id=tsufiev-x=al
>l
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Diana Whitten
Timur,

I'm sure you can chime in even if your vote doesn't count [yet]!

On Wed, Dec 2, 2015 at 12:11 PM, Tripp, Travis S 
wrote:

> Definite +1 for me.  I think they are great additions to the team!
>
> -Travis
>
>
>
>
>
> On 12/2/15, 11:57 AM, "David Lyle"  wrote:
>
> >Let's try that again.
> >
> >I propose adding Richard Jones[1] to horizon-core.
> >
> >Over the last several cycles Richard has consistently been providing
> >great reviews, actively participating in the Horizon community, and
> >making meaningful contributions around angularJS and overall project
> >stability and health.
> >
> >Please respond with comments, +1s, or objections within one week.
> >
> >Thanks,
> >David
> >
> >[1]
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all
> >
> >On Wed, Dec 2, 2015 at 11:56 AM, David Lyle  wrote:
> >> I propose adding Richard Jones[1] to horizon-core.
> >>
> >> Over the last several cycles Timur has consistently been providing
> >> great reviews, actively participating in the Horizon community, and
> >> making meaningful contributions around angularJS and overall project
> >> stability and health.
> >>
> >> Please respond with comments, +1s, or objections within one week.
> >>
> >> Thanks,
> >> David
> >>
> >> [1]
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Announce List

2015-12-02 Thread Jeremy Stanley
On 2015-11-20 11:41:43 +0100 (+0100), Thierry Carrez wrote:
> Tom Fifield wrote:
[...]
> > * Important security advisories
> 
> Actually it's all security advisories, not just "important" ones.
[...]

I would counter that the VMT don't bother to write and publish
"unimportant" security advisories, as it would be a waste of all our
time and we already have plenty of other things to do. ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Timur Sufiev to horizon-core

2015-12-02 Thread Matthias Runge
On 02/12/15 19:52, David Lyle wrote:
> I propose adding Timur Sufiev[1] to horizon-core.
> 
> Over the last several cycles Timur has consistently been providing
> great reviews, actively participating in the Horizon community, and
> making meaningful contributions particularly around testing and
> stability.
> 
> Please respond with comments, +1s, or objections within one week.
> 
Oh, yes please! Timur has been doing a great job, not only over the past
cycle.

Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] weekly meeting agenda

2015-12-02 Thread Antoine Cabot
Hello!

Here is the agenda for our weekly meeting, today at 1400 UTC
on #openstack-meeting-4 [1]

Feel free to add any items you'd like to discuss.

Thanks,

Antoine

[1] https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda#12.2F2.2F2015
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Neutron] dashboard repository for neutron subprojects

2015-12-02 Thread Akihiro Motoki
Thanks all.
All comments so far are from the neutron side. I would like to wait for input
from the horizon side, especially David.

Option (c) is what we do for neutron sub-projects under the neutron stadium
model, and I agree it makes sense and sounds natural to neutron folks.

My initial mail did not cover the technical points or the horizon developer
perspective if we go with option (c). Let me share them.

[Horizon developer perspective]

I think we need some collaboration points between the neutron subprojects
and the horizon team (+ UX team) to share knowledge and conventions for
dashboard development. Not many neutron developers are aware of horizon-side
changes, so I think the horizon side needs to look after these repositories
to some extent, for better UX consistency and to keep up with framework changes.

We are moving to a self-management model in individual repos, so I expect
each team to watch horizon-side changes to some extent and keep their
dashboard up to date.

From the Horizon point of view, it seems good to me if the following are done:

- Use a consistent directory name for dashboard support in each
  repository (e.g., "dashboard").
  Gerrit supports filename-based queries, so this lets horizon developers
  reach dashboard-related reviews.
- Keep the Horizon plugin registry up to date:
  http://docs.openstack.org/developer/horizon/plugins.html
- Use the horizon plugin model rather than an ad-hoc approach (a sketch of a
  plugin "enabled" file follows this list).
- Document the config options (at the moment, horizon does not support the
  oslo.config generator).
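
For illustration, a minimal sketch of the kind of "enabled" file the horizon
plugin model uses; the dashboard slug "mydashboard" and the Django app name
"mydashboard_ui" are made-up placeholders, not an existing plugin:

  # openstack_dashboard/local/enabled/_90_mydashboard.py
  # Registers a hypothetical out-of-tree dashboard with horizon.

  # Slug of the dashboard being added.
  DASHBOARD = 'mydashboard'

  # Django apps providing the dashboard's panels, templates and static files.
  ADD_INSTALLED_APPS = ['mydashboard_ui']

  # Keep the dashboard enabled by default.
  DISABLED = False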

[Technical topics]

- We need to have two testing setups, one for neutron and one for horizon.
  I think most dashboard tests depend on Horizon (or at least Django).

- Should (test-)requirements.txt contain the neutron and horizon dependencies?
  For horizon itself, perhaps not: our test tool chains should install horizon,
  as we already do for the neutron dependency.
  For other requirements, I am not sure at this moment.

- Separate translation support is needed for dashboard and server code.
  Django and oslo.i18n (python gettext) use different approaches to find the
  translation catalog, so we need to prepare a separate tool chain for each
  catalog. That requires an infra script change.

  # Normal Horizon plugin translation support is an ongoing effort,
  # but option (c) needs extra effort.

[Packaging perspective]

I am not sure how this affects packaging.
There is one concern as a package consumer:

> Getting additional packages through distro channels can be surprisingly 
> difficult for new packages. :/

How can the neutron team answer this?
I think it is not specific to the neutron subproject dashboard discussion;
the neutron stadium model already has this problem.
Input from the packaging side would be appreciated.

Thanks,
Akihiro

2015-11-25 14:46 GMT+09:00 Akihiro Motoki :
> Hi,
>
> Neutron has now various subprojects and some of them would like to
> implement Horizon supports. Most of them are additional features.
> I would like to start the discussion where we should have horizon support.
>
> [Background]
> Horizon team introduced a plugin mechanism and we can add horizon panels
> from external repositories. Horizon team is recommending external repos for
> additional services for faster iteration and features.
> We have various horizon related repositories now [1].
>
> In Neutron related world, we have neutron-lbaas-dashboard and
> horizon-cisco-ui repos.
>
> [Possible options]
> There are several possible options for neutron sub-projects.
> My current vote is (b), and the next is (a). It looks a good balance to me.
> I would like to gather broader opinions,
>
> (a) horizon in-tree repo
> - [+] It was a legacy approach and there is no initial effort to setup a repo.
> - [+] Easy to share code conventions.
> - [-] it does not scale. Horizon team can be a bottleneck.
>
> (b) a single dashboard repo for all neutron sub-projects
> - [+] No need to set up a repo by each sub-project
> - [+] Easier to share the code convention. Can get horizon reviewers.
> - [-] who will be a core reviewer of this repo?
>
> (c) neutron sub-project repo
> - [+] Each sub-project can develop a dashboard fast.
> - [-] It is doable, but the directory tree can be complicated.
> - [-] Lead to too many repos and the horizon team/liaison cannot cover all.
>
> (d) a separate repo per neutron sub-project
> Similar to (c)
> - [+] A dedicate repo for dashboard simplifies the directory tree.
> - [-] Need to setup a separate repo.
> - [-] Lead to too many repos and the horizon team/liaison cannot cover all.
>
>
> Note that this mail is not intended to move the current neutron
> support in horizon
> to outside of horizon tree. I would like to discuss Horizon support of
> additional features.
>
> Akihiro
>
> [1] http://docs.openstack.org/developer/horizon/plugins.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-02 Thread Jan Hilberath

On 2015年12月01日 19:14, Takashi Yamamoto wrote:

On Tue, Dec 1, 2015 at 7:08 PM, Antoni Segura Puimedon
 wrote:



On Tue, Dec 1, 2015 at 10:59 AM, Ivan Kelly  wrote:


+1 for #2



PS: Beware of the top-posting! It makes vote counting harder ;-)




On Tue, Dec 1, 2015 at 10:57 AM, Sandro Mathys 
wrote:

Hi,

Our IRC channels have been neglected for a long time, and as a result
we lost ownership of #midonet-dev, which is now owned by
freenode-staff. In theory, it should be very easy to get ownership
back, particularly since we still own #midonet. But in reality, it
seems like none of the freenode staff feel responsible for these
requests, so we still aren't owners after requesting it for 3 weeks
already.

Therefore, Toni Segura suggested we just ditch it and move to
#openstack-midonet instead.

However, several people have also said we don't need two channels,
i.e. we should merge #midonet and #midonet-dev.

So, here's three proposals:

Proposal #1:
* keep #midonet
* replace #midonet-dev with #openstack-midonet

Proposal #2:
* keep #midonet
* merge #midonet-dev into #midonet



+1


+1


+1









Proposal #3:
* replace both #midonet and #midonet-dev with #openstack-midonet

I don't have any strong feelings for any of the proposals, but suggest
we go with proposal #2. Traffic in both #midonet and #midonet-dev is
rather low, so one channel should do - there's way busier OpenStack
channels out there. Furthermore, #midonet is shorter than
#openstack-midonet and already established. I also think people will
rather look in #midonet than #openstack-midonet if they're looking for
us.

Thoughts?

Cheers,
Sandro


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-02 Thread Thierry Carrez
Armando M. wrote:
>> One solution is, like you mentioned, to make some (or all) of them
>> full-fledged project teams. Be aware that this means the TC would judge
>> those new project teams individually and might reject them if we feel
>> the requirements are not met. We might want to clarify what happens
>> then.
> 
> That's a good point. Do we have existing examples of this or would we be
> sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations clearly :)

> That said, I didn't see you comment on the
> possible introduction of neutron-relevant tags, is something that the TC
> would be open to?

Totally. We've been pondering "support/relationship" tags (describing
which project has a horizon UI, or a devstack plugin, or...) for a while
now. Boris had one in mind for Rally (to describe projects that have a
Rally profile). We'd likely need to bikeshed on names and colors, but I
think the idea of describing project relationships using tags is a
well-accepted one.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-12-02 Thread Ben Swartzlander

On 11/30/2015 09:04 AM, Coffman, Joel M. wrote:



On 11/25/15, 11:33 AM, "Ben Swartzlander" > wrote:

On 11/24/2015 03:27 PM, Nathan Reller wrote:

the cinder admin and the nova admin are ALWAYS the same people


There is interest in hybrid clouds where the Nova and Cinder
services
are managed by different providers. The customer would place higher
trust in Nova because you must trust the compute service, and the
customer would place less trust in Cinder. One way to achieve this
would be to have all encryption done by Nova. Cinder would
simply see
encrypted data and provide a good cheap storage solution for data.

Consider a company with sensitive data. They can run the compute
nodes
themselves and offload Cinder service to some third-party service.
This way they are the only ones who can manage the machines that see
the plaintext.


If you have that level of paranoia, I suggest running LUKS inside the
guest VM and not relying on OpenStack to handle your encryption. Then
you don't have to worry about whether nova is sharing your keys with
cinder because even nova won't have them.

That approach isn't actually more secure — anyone with root access to
the compute host can dump the VM's memory to extract the encryption keys.


I agree; however, in the above case there was implied trust in the 
compute infrastructure -- at least more so than in the storage 
infrastructure. If you don't trust your hypervisor admin to not read 
your VM memory and steal encryption keys, then relying on your 
hypervisor admin (or nova) to perform the encryption is kind of silly. 
In every case, the hypervisor admin can see the plaintext and the keys.


The suggestion was a way to achieve the goal of doing encryption WITHOUT 
trusting the storage admin and WITHOUT CHANGING ANY CODE. I assert that 
any attempt to implement encryption at the nova level and not sharing 
keys with cinder will break the existing model. There are 2 solutions I 
can see:

1) don't break it (see above)
2) you break it, you fix it (nova takes over responsibility for all the 
operations cinder currently performs that involve plaintext).



Trying to design a system where we expect nova to do data encryption but
not cinder will not work in the long run. The eventual result will be
that nova will have to take on most of the functionality of cinder and
we'll be back to the nova-volume days.

Could you explain further what you mean by "nova will have to take on
most of the functionality of cinder"? In the current design, Nova is
still passing data blocks to Cinder for storage – they're just encrypted
instead of plaintext. That doesn't seem to subvert the functionality of
Cinder or reimplement it.


I think Duncan covered it, but the data operations cinder currently 
performs that require Cinder to touch plaintext data are:

1) Create volume from glance image
2) Create glance image from volume
3) Backup volume
4) Restore volume

I'm not claiming that we can't redefine or alter the above operations to 
deal with encryption, but someone needs to propose how they should work 
differently or work at all when the volume isn't storing plaintext data.



Also in case it's not obvious, if you use different providers for
compute and storage, your performance is going to be absolutely
terrible.

The general idea is probably separation of duties, which contradicts the
original statement that "the cinder admin and the nova admin are ALWAYS
the same people." Is there an operational reason that these admins
must be the same person, or is that just typical?


My assertion was to try to combat confusion about the roles of the 
"storage admin". Many people who don't deal with storage all the time 
tend to forgot that cinder is a management tool that's separate from 
actual hardware which stores bits. It's common to have a "storage admin" 
who is responsible for configuration and management of storage hardware 
and software but to have Cinder run by an "openstack admin" who is a 
customer/client of the storage admin.


My belief is that it's FAR more common for cinder to be installed and 
run by the same guy (or group) who installs/runs nova, than for the 
2 services to be run by 2 entirely different groups, whereas it _is_ 
fairly common to have different groups dedicated to managing physical 
servers vs physical storage controllers.



Joel



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-12-02 Thread Jeremy Stanley
On 2015-11-23 21:20:56 + (+), David Chadwick wrote:
> Since the ultimate arbiter is the PTL, then it would be wrong to allow
> members of the same organisation as the PTL to perform all three code
> functions without the input of anyone from any other organisation. This
> places too much power in the hands of one organisation to the detriment
> of the overall community.

The ultimate arbiter is the body of contributors who elect the PTL.
If I begin making questionable or downright abusive decisions, a
majority of contributors can hold an interim election and oust me at
any time they see fit.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS7 Merging Plan

2015-12-02 Thread Dmitry Teselkin
Hello,

Here is our status (Dec, 2).

We are on track now.

This (Dec 2) morning we got the results of 6 swarm tests - 3
succeeded, 3 failed. The failures were caused by issues in the ISO and QA
scripts, and were fixed during the day. By the end of the day (Dec 2) we got
another green custom ISO, which is now running BVT and the same 6 swarm tests.

There were several merges during the day; we rebased our patches
without significant issues.

We have around 17 patch sets in the 'open' state:
* compatible - 5 (can be merged at any time)
* not compatible - 12 (can be merged only if we merge CentOS7)

On Tue, 1 Dec 2015 13:48:00 -0800
Dmitry Borodaenko  wrote:

> With bit more details, I hope this covers all the risks and decision
> points now.
> 
> First of all, current list of outstanding commits:
> https://etherpad.openstack.org/p/fuel_on_centos7
> 
> The above list has two sections: backwards compatible changes that can
> be merged one at a time even if the rest of CentOS7 support isn't
> merged, and backwards incompatible changes that break support for
> CentOS6 and must be merged (and, if needed, reverted) all at once.
> 
> Decision point 1: FFE for CentOS7
> 
> CentOS7 support cannot be fully merged on Dec 2, so it misses FF. Can
> it be allowed a Feature Freeze Exception? So far, the disruption of
> the Fuel development process implied by the proposed merge plan is
> acceptable, if anything goes wrong and we become unable to have a
> stable ISO with merged CentOS7 support on Monday, December 7, the FFE
> will be revoked.
> 
> Wed, Dec 2: Merge party
> 
> Merge party before 8.0 FF, we should do our best to merge all
> remaining feature commits before end of day (including backwards
> compatible CentOS7 support commits), without breaking the build too
> much.
> 
> At the end of the day we'll start a swarm test over the result of the
> merge party, and we expect QA to analyze and summarize the results by
> 17:00 MSK (6:00 PST) on Thu Dec 3.
> 
> Risk 1: Merge party breaks the build
> 
> If there is a large regression in swarm pass percentage, we won't be
> able to afford a merge freeze which is necessary to merge CentOS7
> support, we'll have to be merging bugfixes until swarm test pass rate
> is back around 70%.
> 
> Risk 2: More features get FFE
> 
> If some essential 8.0 features are not completely merged by end of day
> Wed Dec 2 and are granted FFE, merging the remaining commits can
> interfere with merging CentOS7 support, not just from merge conflicts
> perspective, but also invalidating swarm results and making it
> practically impossible to bisect and attribute potential regressions.
> 
> Thu, Dec 3: Start merge freeze for CentOS7
> 
> Decision point 2: Other FFEs
> 
> In the morning MSK time, we will assess Risk 2 and decide what to do
> with the other FFEs. The options are: integrate remaining commits into
> CentOS7 merge plan, block remaining commits until Monday, revoke
> CentOS7 FFE.
> 
> If the decision is to go ahead with CentOS7 merge, we announce merge
> freeze for all git repositories that go into Fuel ISO, and spend the
> rest of the day rebasing and cleaning up the rest of the CentOS7
> commits to make sure they're all in mergeable state by the end of the
> day. The outcome of this work must be a custom ISO image with all
> remaining commits, with additional requirement that it must not use
> Jenkins job parameters (only patches to fuel-main that change default
> repository paths) to specify all required package repositories. This
> will validate the proposed fuel-main patches and ensure that no
> unmerged package changes are used to produce the ISO.
> 
> Decision point 3: Swarm pass rate
> 
> After swarm results from Wed are available, we will assess the Risk 1.
> If the pass rate regression is significant, CentOS7 FFE is revoked and
> merge freeze is lifted. If regression is acceptable, we proceed with
> merging remaining CentOS7 commmits through Thu Dec 3 and Fri Dec 4.
> 
> Fri, Dec 4: Merge and test CentOS7
> 
> The team will have until 17:00 MSK to produce a non-custom ISO that
> passes BVT and can be run through swarm.
> 
> Sat, Dec 5: Assess CentOS7 swarm and bugfix
> 
> First of all, someone from CI and QA teams should commit to monitoring
> the CentOS7 swarm run and report the results as soon as possible.
> Based on the results (which once again must be available by 17:00
> MSK), we can decide on the final step of the plan.
> 
> Decision point 4: Keep or revert
> 
> If CentOS7 based swarm shows significant regression, we have to spend
> the rest of the weekend including Sunday reverting all CentOS7 commits
> that were merged during merge freeze. Once revert is completed, we
> will lift the merge freeze.
> 
> If the regression is acceptable, we lift the merge freeze straight
> away and proceed with bugfixing as usual. At this point CI team will
> need to update the Fuel ISO used for deployment tests in our CI to
> this same ISO.
> 
> One way or the other, we will be 

Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-02 Thread Adrian Otto
Thanks team for stepping up to fill this important role. Please let me know if 
there is anything I can do to assist you.

Adrian

On Dec 2, 2015, at 2:19 PM, Everett Toews 
> wrote:

On Dec 2, 2015, at 12:32 AM, 王华 
> wrote:

Adrian,
I would like to be an alternate.

Regards
Wanghua


On Wed, Dec 2, 2015 at 10:19 AM, Adrian Otto 
> wrote:
Everett,

Thanks for reaching out. Eli is a good choice for this role. We should also 
identify an alternate as well.

Adrian

--
Adrian

> On Dec 1, 2015, at 6:15 PM, Qiao,Liyong 
> > wrote:
>
> hi Everett
> I'd like to take it.
>
> thanks
> Eli.

Great!

Eli and Wanghua, clone the api-wg repo as you would any repo and add yourselves 
to this file

http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json
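
If you haven't done this before, it is the usual Gerrit workflow; a rough
sketch, assuming git-review is installed and that the repo clones from
git.openstack.org (the branch name and commit message below are just
illustrative):

  git clone https://git.openstack.org/openstack/api-wg
  cd api-wg
  # edit doc/source/liaisons.json and add your entry for magnum
  git checkout -b magnum-liaisons
  git commit -a -m "Add Magnum liaisons"
  git review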

Please make sure you use your name *exactly* as it appears in Gerrit. It should 
be the same as the name that appears in the Reviewer field on any review in 
Gerrit. Also, double check that you have only one account in Gerrit.

If you need help, just ask in #openstack-sdks where the API WG hangs out on IRC.

Cheers,
Everett

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Post-release bump after 2014.2.4?

2015-12-02 Thread Jeremy Stanley
On 2015-11-24 14:34:19 +0100 (+0100), Alan Pevec wrote:
> > But to illustrate better the issue I'm seeing:
> > http://tarballs.openstack.org/cinder/cinder-stable-juno.tar.gz contains
> > a directory cinder-2014.2.4.dev24, which is kind of wrong. That's the
> > bit that I'd like to see fixed.
> 
> That version was correct at the time tarball was generated, if it were
> regenerated now that tag was pushed, it would be cinder-2014.2.4
> You could either ask infra to regenerate those tarballs or generate
> tarballs yourself (python ./setup.py sdist in git checkout) instead of
> relying on tarballs.o.o.

In the past I think we've just assumed that you'd consume
http://tarballs.openstack.org/cinder/cinder-2014.2.4.tar.gz from
that point on. Keep in mind that the cinder-stable-juno.tar.gz and
cinder-2014.2.4.tar.gz tarballs, while they may have basically
identical contents, were not generated at the same time nor by the
same event so it's perfectly reasonable that the former would be
versioned as a dev commit working toward the release tag of the
latter.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Learning to Debug the Gate

2015-12-02 Thread Anita Kuno
On 11/23/2015 02:19 PM, Anita Kuno wrote:
> On 11/18/2015 06:03 PM, Mikhail Medvedev wrote:
>> We had a mini tutorial today in #openstack-infra, were Clark Boylan
>> explained how one can bring up an environment to debug logstash-2.0.
>> This is tangential to "debugging the gate", but still could be useful to
>> better understand logstash/es pipeline. 
>>
>> I did condense the conversation into the doc, excuse any grammar /
>> punctuation that I missed:
>>
>> http://paste.openstack.org/show/479346/
>>
> 
> Thanks for this post, Mikhail.
> 
> Continuing on with our learning, today we ask the question, "Is it in
> Logstash?"
> http://logstash.openstack.org
> 
> The answer is in this file:
> http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/logstash/jenkins-log-client.yaml
> which is the yaml file logstash.openstack.org uses to get the logs for
> indexing.
> 
> Thanks for following along,
> Anita.
> 

Near the top of the build log for a job (found in the console log) is
the information about which host provided the vm for the job.

Building remotely on devstack-trusty-rax-dfw-6245483 (devstack-trusty)
in workspace /home/jenkins/workspace/gate-tempest-dsvm-neutron-full

We have images with different names; this job used the devstack-trusty
image.

The host is rax-dfw, so the provider is Rackspace and the VM came from
their DFW region.

The 6245483 is the number of nodes we have built since we started
tracking; this node was the 6,245,483rd.

From:
http://logs.openstack.org/00/189500/23/check/gate-tempest-dsvm-neutron-full/c4c0ab5/console.html
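
As a concrete follow-up to the earlier "Is it in Logstash?" question, you can
check whether a job's console log was indexed with a query like the one below
in the logstash.openstack.org UI. This is only a sketch: build_name and
message are the field names commonly used in elastic-recheck queries, and the
values are taken from the example job above:

  build_name:"gate-tempest-dsvm-neutron-full" AND message:"Building remotely on devstack-trusty-rax-dfw-6245483"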

Thanks for reading,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Neill Cox
Another +1 from me

From: David Lyle 
Sent: Thursday, 3 December 2015 5:57 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [horizon] Proposal to add Richard Jones to 
horizon-core

Let's try that again.

I propose adding Richard Jones[1] to horizon-core.

Over the last several cycles Richard has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions around angularJS and overall project
stability and health.

Please respond with comments, +1s, or objections within one week.

Thanks,
David

[1] 
http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

On Wed, Dec 2, 2015 at 11:56 AM, David Lyle  wrote:
> I propose adding Richard Jones[1] to horizon-core.
>
> Over the last several cycles Timur has consistently been providing
> great reviews, actively participating in the Horizon community, and
> making meaningful contributions around angularJS and overall project
> stability and health.
>
> Please respond with comments, +1s, or objections within one week.
>
> Thanks,
> David
>
> [1] 
> http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Rackspace Hosting Australia PTY LTD a company registered in the state of 
Victoria, Australia (company registered number ACN 153 275 524) whose 
registered office is at Level 1, 37 Pitt Street, Sydney, NSW 2000, Australia. 
Rackspace Hosting Australia PTY LTD privacy policy can be viewed at 
www.rackspace.com.au/company/legal-privacy-statement.php - This e-mail message 
may contain confidential or privileged information intended for the recipient. 
Any dissemination, distribution or copying of the enclosed material is 
prohibited. If you receive this transmission in error, please notify us 
immediately by e-mail at ab...@rackspace.com and delete the original message. 
Your cooperation is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [openstack-operators] Tools to move instances between projects?

2015-12-02 Thread Matt Riedemann



On 12/2/2015 2:52 PM, Kris G. Lindgren wrote:

Hello,

I was wondering if someone has a set of tools/code that would allow admins
to move VMs from one tenant to another?  We get asked this fairly
frequently in our internal cloud (at least once a week, more when we
start going through and cleaning up resources for people who are no
longer with the company).  I have searched, and I was not able to find
anything externally.

Matt Riedemann pointed me to an older spec for nova:
https://review.openstack.org/#/c/105367/.  I realize that this will most
likely need to be a cross-project effort, since VMs consume resources
from multiple other projects, and moving a VM between projects would also
require that those other resources get updated as well.

Is anyone aware of a cross project spec to handle this – or of specs in
other projects?
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



I think we need a good understanding of what the use case is first. I 
have to assume that these are pets, and that's why we can't just snapshot 
an instance and have the new user/project boot an instance from that.


Quotas are going to be a big issue here I'd think, along with any 
orchestration that nova would need to do with other services like 
cinder/glance/neutron to transfer ownership of volumes or network 
resources (ports), and those projects also have their own quota frameworks.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] How to debug a failed deployment?

2015-12-02 Thread Vahid S Hashemian
Hello,

I am trying to deploy a hello world HOT package in murano but the 
deployment runs forever and does not stop (does not succeed or fail).
Stan advised me to modify the config setting below to set a timeout:

# Time for waiting for a response from murano agent during the
# deployment (integer value)
agent_timeout = 300

As you can see, I set the timeout to 300 seconds and restarted the murano API,
but even after 15 minutes my deployment is still going.
I suspect my environment is not configured correctly, which is why this is happening.

I have devstack running on a VM, and my murano development is done on a 
separate VM.
I bring up murano API and UI on my dev VM that successfully connect to my 
devstack VM.

I am able to deploy the HOT yaml using heat without an issue.
From what I can see, it seems that murano is not able to talk to heat, 
because I don't see any stack being created as a result of my deployment.

I have also enabled murano logging in the config file as below:

# (Optional) Name of log file to output to. If no default is set,
# logging will go to stdout. (string value)
# Deprecated group/name - [DEFAULT]/logfile
log_file = /opt/stack/logs/murano.log

But no error  is reported while the deployment is in progress.

Any tips on how to force the timeout, and also on how to resolve the 
deployment problem, would be appreciated.

Regards,
--Vahid

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-12-02 Thread Jeremy Stanley
On 2015-11-25 14:02:47 +0800 (+0800), Tom Fifield wrote:
[...]
> Putting this out there - over at the Foundation, we're here to
> Protect and Empower you. So, if you've ever been reprimanded by
> management for choosing not to abuse the community process,
> perhaps we should arrange an education session with that manager
> (or their manager) on how OpenStack works.

Further, it's been my observation so far that those heavily involved
in upstream OpenStack development have no shortage of alternative
job offers (and I would definitely count core reviewers of larger
projects such as those under discussion to fall into this category).
If they're finding themselves in an oppressive employment situation,
that employer will quickly lose their influence within the community
through key employee attrition. It's a problem which has a tendency
to self-correct over time, so hopefully our member companies are
figuring this dynamic out for themselves. ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Instance customization for Bootstrapping the cluster nodes

2015-12-02 Thread Sergey Lukjanov
Hi,

we don't have such a feature right now. It could be implemented by adding
support for defining user data at the template level. We already pass
some user data to the VMs, so it'll be really easy to implement this in
Sahara.

You can submit a spec or blueprint for it.
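
To make the request concrete, the sort of node customization being asked for
is usually expressed as a cloud-init user-data blob that would be attached to
the node group or cluster template. A purely illustrative #cloud-config sketch
(the package name, file path and command are made up):

  #cloud-config
  packages:
    - htop
  write_files:
    - path: /etc/myapp/site.conf
      content: |
        # site-specific tuning injected at boot
        workers=4
  runcmd:
    - [ sh, -c, "echo 'custom bootstrap step done' >> /var/log/bootstrap.log" ]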

Thanks.

On Wed, Nov 25, 2015 at 11:13 PM, Ashish Billore 
wrote:

> Hello Everyone,
>
> I am looking for a way to customize my Cluster nodes at cluster creation
> time. This is something like: AWS EMR BootStrap Action:
> http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html
>
> While creating an instance through nova boot command / UI, there is an
> option to pass user-data through with cloud-init or cloud-config scripts,
> that can customize the instance at creation time based on specific
> environment or user need. Is there a similar way to pass cloud-init /
> cloud-config scripts while creating cluster using Sahara?
>
> Please Note: I do not prefer to bake these customization in the Sahara
> image itself, I need to do these at the Cluster creation time, dynamically.
>
> Thanks for the help.
>
> --
> Thanks,
> Ashish
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] preparing your mitaka-1 milestone tag

2015-12-02 Thread Zane Bitter

On 27/11/15 09:32, Doug Hellmann wrote:

Next week (Dec 1-3) is the Mitaka 1 milestone deadline. Release
liaisons for all managed projects using the cycle-with-milestones
release model will need to propose tags for their repositories by
Thursday. Tag requests submitted after Dec 3 will be rejected.

As a one-time change, we are also going to simplify how we specify
the versions for projects by moving to only using tags, and removing
the version entry from setup.cfg. As with most of the other changes
we are making this cycle, switching to only using tags for versioning
will simplify some of the automation and release management processes.

Because of the way pbr calculates development version numbers, we
need to be careful to tag the new milestone before removing the
version entry to avoid having our versions decrease on master (for
example, from something in the 12.0.0 series to something in the
11.0.0 series), which would disrupt users deploying from trunk
automatically.

Here are the steps we need to follow, for each project to tag the
milestone and safely remove the version entry:

1. Complete the reno integration so that release notes are building
correctly, and add any release notes for work done up to this
point.  Changes to project-config should be submitted and changes
to add reno to each repository should be landed.

2. Prepare a patch to the deliverable file in the openstack/releases
repository adding a *beta 1* tag for the upcoming release,
selecting an appropriate SHA close to the tip of the master
branch.

For example, a project with version number 8.0.0 in setup.cfg
right now should propose a tag 8.0.0.0b1 for this milestone.

The SHA should refer to a patch merged *after* all commits
containing release notes intended for the milestone to ensure the
notes are picked up in the right version.

3. Prepare a patch to the project repository removing the version
line from setup.cfg.

Set the patch to depend on the release patch from step 1, and


I believe this should say the milestone tag request in 
openstack/releases from step 2?



use the topic "remove-version-from-setup".

4. Add a comment to the milestone tag request linking to the
review from step 3.

We will wait to tag the milestone for a project until the reno
integration is complete and until the tag request includes a link
to a patch removing the version entry. Again, late submissions will
be rejected.

After your milestone is tagged, the patches to remove the version
entry from setup.cfg should be given high priority for reviews and
merged as quickly as possible.

Projects following the cycle-with-intermediary release model will need
to complete these steps around the time of their next release, but if
there is no release planned for the milestone week the work can wait.

As always, let me know if you have questions.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-02 Thread AFEK, Ifat (Ifat)
Hi Ryota,

Thanks for your response, please see my comments below.

Ifat.

> -Original Message-
> From: Ryota Mibu [mailto:r-m...@cq.jp.nec.com]
> 
> Hi,
> 
> 
> Sorry for my late response...
> 
> It seems like a fundamental question whether we should have rich
> function or intelligence in on-the-fly event alarm evaluation. I think
> we can add simple operations (like aggregating alarm) in aodh
> evaluator, and other operations (like deducing with referring some
> external DB) should be done outside of the evaluation process to reduce
> impact on other evaluations. But, if we separate too much, then there
> will be many interactions between two services that makes slow to
> finish sequence of alarm handling.
> 
> One approach we can take, is that you configure aodh to pass each row
> event (e.g. each VM downed) wrapped in alarm notification to vitrage,
> then do some operation (e.g. deducing, aggregating) and store resource-
> level alarm without any alarm_actions, so that users can see the alarms
> in horizon view. This may not require alarm evaluation, so we can
> forget the problem I raised (cache refresh interval).

Let me see if I got this right: are you suggesting that we create 
on-the-fly alarm definitions with no alarm_actions, for every deduced 
alarm that we want to raise? And this will spare us the extra alarm 
evaluation in AODH?

It does make sense. 

My next question is how exactly we should create these resource-level 
alarms. Can we create an alarm definition with no rule, no actions, 
and initial state set to "alarm"? (I'm not sure it can be done in the 
current AODH API)

Another question is our need to get alarms from other sources, like 
Nagios, zabbix, ganglia, etc. We thought that Vitrage would query these 
Alarms from each source directly, and then create alarms in AODH in the 
same way as our deduced alarms: for example create nagios_ovs_vswitchd 
alarm if nagios check_ovs_vswitchd test failed. 
An alternative could be to integrate nagios directly with AODH. 
What do you think?

> BTW, is it useful to have on-the-fly evaluation of combination alarm
> with event alarms for alarm aggregation or other cases?

I'm not sure I understand. Can you give a detailed example?

> Horizon view is the different topic. Maybe we can reduce the number of
> alarms listed in user view by creating raw alarms in admin space that
> is not visible from end user, or using relevant severity or tag so that
> user can filter out uninterested alarms.

Referring to this[1] blueprint, do you have specific concerns regarding 
the usability/performance of Horizon view when there are many alarms? 
I think that your ideas make sense, and we can implement them if there 
is a need. 

In addition, in Vitrage we plan to handle alarm aggregation by creating 
aggregation rule templates, for example based on the RCA information. 
The user will be able to see only the root cause alarms, and then drill 
down to all specific alarms. But I doubt if this will be done for Mitaka.


[1] 
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Config support for oslo.config.cfg.MultiStrOpt

2015-12-02 Thread Cody Herriges
Martin,

I see no reason this shouldn't just be pushed into puppetlabs-inifile.  I
can't actually find a real "spec" for the INI file format, and even the
Wikipedia link [3] calls out that there is no actual spec.
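
For readers who haven't hit MultiStrOpt before, the pattern under discussion is
simply the same key repeated within one section. A rough illustration of the
Neutron LBaaSv2 case Martin mentions below; the driver strings are placeholders,
not exact class paths:

  [service_providers]
  service_provider = LOADBALANCERV2:Haproxy:illustrative.haproxy.driver.path:default
  service_provider = LOADBALANCERV2:Octavia:illustrative.octavia.driver.path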

On Fri, Nov 27, 2015 at 5:04 AM, Martin Mágr  wrote:

> Greetings,
>
>   I've submitted a patch to puppet-openstacklib [1] which adds a provider for
> parsing INI files containing duplicated variables (a.k.a. MultiStrOpt [2]).
> Such parameters are used, for example, to set
> service_providers/service_provider for Neutron LBaaSv2. There has been a
> thought raised that the patch should rather be submitted to the
> puppetlabs-inifile module instead. The reason I did not submit the patch
> to the inifile module is that IMHO duplicate variables are not in the INI file
> spec [3]. Thoughts?
>
> Regards,
> Martin
>
>
> [1] https://review.openstack.org/#/c/234727/
> [2]
> https://docs.openstack.org/developer/oslo.config/api/oslo.config.cfg.html#oslo.config.cfg.MultiStrOpt
> [3] https://en.wikipedia.org/wiki/INI_file#Duplicate_names
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] preparing your mitaka-1 milestone tag

2015-12-02 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2015-12-02 19:17:25 -0500:
> On 27/11/15 09:32, Doug Hellmann wrote:
> > Next week (Dec 1-3) is the Mitaka 1 milestone deadline. Release
> > liaisons for all managed projects using the cycle-with-milestones
> > release model will need to propose tags for their repositories by
> > Thursday. Tag requests submitted after Dec 3 will be rejected.
> >
> > As a one-time change, we are also going to simplify how we specify
> > the versions for projects by moving to only using tags, and removing
> > the version entry from setup.cfg. As with most of the other changes
> > we are making this cycle, switching to only using tags for versioning
> > will simplify some of the automation and release management processes.
> >
> > Because of the way pbr calculates development version numbers, we
> > need to be careful to tag the new milestone before removing the
> > version entry to avoid having our versions decrease on master (for
> > example, from something in the 12.0.0 series to something in the
> > 11.0.0 series), which would disrupt users deploying from trunk
> > automatically.
> >
> > Here are the steps we need to follow, for each project to tag the
> > milestone and safely remove the version entry:
> >
> > 1. Complete the reno integration so that release notes are building
> > correctly, and add any release notes for work done up to this
> > point.  Changes to project-config should be submitted and changes
> > to add reno to each repository should be landed.
> >
> > 2. Prepare a patch to the deliverable file in the openstack/releases
> > repository adding a *beta 1* tag for the upcoming release,
> > selecting an appropriate SHA close to the tip of the master
> > branch.
> >
> > For example, a project with version number 8.0.0 in setup.cfg
> > right now should propose a tag 8.0.0.0b1 for this milestone.
> >
> > The SHA should refer to a patch merged *after* all commits
> > containing release notes intended for the milestone to ensure the
> > notes are picked up in the right version.
> >
> > 3. Prepare a patch to the project repository removing the version
> > line from setup.cfg.
> >
> > Set the patch to depend on the release patch from step 1, and
> 
> I believe this should say the milestone tag request in 
> openstack/releases from step 2?

Yes, that's correct.

Doug

> 
> > use the topic "remove-version-from-setup".
> >
> > 4. Add a comment to the milestone tag request linking to the
> > review from step 3.
> >
> > We will wait to tag the milestone for a project until the reno
> > integration is complete and until the tag request includes a link
> > to a patch removing the version entry. Again, late submissions will
> > be rejected.
> >
> > After your milestone is tagged, the patches to remove the version
> > entry from setup.cfg should be given high priority for reviews and
> > merged as quickly as possible.
> >
> > Projects following the cycle-with-intermediary release model will need
> > to complete these steps around the time of their next release, but if
> > there is no release planned for the milestone week the work can wait.
> >
> > As always, let me know if you have questions.
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Timur Sufiev to horizon-core

2015-12-02 Thread Lin Hua Cheng
+1 thanks for all the hard work! And Timur will fix pagination too :)

On Wed, Dec 2, 2015 at 7:52 PM, David Lyle  wrote:

> I propose adding Timur Sufiev[1] to horizon-core.
>
> Over the last several cycles Timur has consistently been providing
> great reviews, actively participating in the Horizon community, and
> making meaningful contributions particularly around testing and
> stability.
>
> Please respond with comments, +1s, or objections within one week.
>
> Thanks,
> David
>
> [1]
> http://stackalytics.com/?module=horizon-group_id=tsufiev-x=all
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Proposal to add Richard Jones to horizon-core

2015-12-02 Thread Adam Young

On 12/02/2015 01:57 PM, David Lyle wrote:

Let's try that again.

I propose adding Richard Jones[1] to horizon-core.

Over the last several cycles Richard has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions around angularJS and overall project
stability and health.

Please respond with comments, +1s, or objections within one week.


As an outsider (Keystoner) I have to say that working with Richard was 
great, and he had a fantastic strategic vision for Horizon. FWIW, +1 from me.




Thanks,
David

[1] 
http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

On Wed, Dec 2, 2015 at 11:56 AM, David Lyle  wrote:

I propose adding Richard Jones[1] to horizon-core.

Over the last several cycles Timur has consistently been providing
great reviews, actively participating in the Horizon community, and
making meaningful contributions around angularJS and overall project
stability and health.

Please respond with comments, +1s, or objections within one week.

Thanks,
David

[1] 
http://stackalytics.com/?module=horizon-group_id=r1chardj0n3s=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-02 Thread Adam Young

On 12/01/2015 03:50 PM, Fox, Kevin M wrote:

I just upgraded to keystone liberty for one of my production clouds, and went with apache 
since eventlet was listed as deprecated. It was pretty easy. Just ran into one issue: 
RadosGW wouldn't work against it until I added "WSGIChunkedRequest On" in the 
config. Otherwise, the config as shipped with RDO worked fine. I am running Giant 
radosgw, so future versions may not require that.
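
For anyone hitting the same issue, that directive lives inside the keystone vhost
definition. A minimal sketch of where it might go (the paths, port, and process
settings below are assumptions for illustration, not Kevin's actual RDO config):

    <VirtualHost *:5000>
        WSGIScriptAlias / /var/www/cgi-bin/keystone/main
        WSGIDaemonProcess keystone-public processes=2 threads=4 user=keystone
        WSGIProcessGroup keystone-public
        # Accept chunked request bodies (RadosGW sends these on some calls)
        WSGIChunkedRequest On
    </VirtualHost>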


Thanks for the note. Should this be a bug?



Thanks,
Kevin

From: Sean Dague [s...@dague.net]
Sent: Tuesday, December 01, 2015 4:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] Removing 
functionality that was deprecated in Kilo and upcoming deprecated functionality 
in Mitaka

On 12/01/2015 01:57 AM, Steve Martinelli wrote:

Trying to summarize here...

- There isn't much interest in keeping eventlet around.
- Folks are OK with running keystone in a WSGI server, but feel they are
constrained by Apache.

 From an interop perspective, this concerns me a bit. My understanding is
that Apache is specifically needed for Federation. Federation is the
norm that we want for environments in the future.

I'd hate to go down a path where the reference architecture we put out
there doesn't support this. It's going to be all the pain of Nova's
cells / non-cells split, or the nova-net / neutron bifurcation.

Whatever the reference architecture is, it should support Federation. A
non-federation-capable keystone should be the exception.


- uWSGI could help to support multiple web servers.


--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-12-02 Thread Jay Lau
Hi Bharath,

Actually I have already filed a bp here:
https://blueprints.launchpad.net/magnum/+spec/unify-coe-api , sorry for the
late notice.

We may need some discussion on this in next week's meeting. I will attend
next week's meeting and we can discuss it with the team then; hope that is
OK. ;-)

Thanks!

On Wed, Dec 2, 2015 at 8:48 AM, bharath thiruveedula <
bharath_...@hotmail.com> wrote:

> Hi,
>
> Sorry I was off for some days because of health issues.
>
> So I think the work items for this BP[1] are:
>
> 1) Add support to accept a json file in the container-create command
> 2) Handle JSON input in docker_conductor
> 3) Implement the mesos conductor for container create, delete and list.
>
> Correct me if I am wrong. And let me know the process for implementing a BP
> in magnum. I think we need approval for this BP before implementation?
>
>  [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor
>
> Regards
> Bharath T(tbh)
>
>
> --
> Date: Fri, 20 Nov 2015 07:44:49 +0800
> From: jay.lau@gmail.com
> To: openstack-dev@lists.openstack.org
>
> Subject: Re: [openstack-dev] [magnum] Mesos Conductor
>
> It's great that we have come to some agreement on unifying the client call ;-)
>
> As I proposed in the previous thread, I think that "magnum app-create" may be
> better than "magnum create"; I want to use "magnum app-create" to
> distinguish it from "magnum container-create". "app-create" may also not be a
> good name, as k8s also has the concept of a service, which is not really an
> app. Comments?
>
> I think we can file a bp for this and it will be a great feature in the M
> release!
>
> On Fri, Nov 20, 2015 at 4:59 AM, Egor Guz  wrote:
>
> +1, I found that 'kubectl create -f FILENAME' (
> https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/kubectl/kubectl_create.md)
> works very well for different types of objects and I think we should try to
> use it.
>
> but I think we should support two use-cases:
>  - 'magnum container-create', with a simple list of options that works for
> Swarm/Mesos/Kub. It will be a good option for users who just want to try
> containers.
>  - 'magnum create ', with a file which has a Swarm/Mesos/Kub-specific payload.
>
> —
> Egor
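
For reference, 'kubectl create -f' takes a plain object definition file; a
minimal sketch of the kind of manifest it consumes (the names and image below
are made up for illustration, not from this thread):

    # pod.yaml -- bare-bones pod definition
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

    $ kubectl create -f pod.yaml

The same single entry point handles services, replication controllers and so
on, which is what makes a file-driven 'magnum create' appealing.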
>
> From: Adrian Otto
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, November 19, 2015 at 10:36
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Mesos Conductor
>
> I’m open to allowing magnum to pass a blob of data (such as a lump of JSON
> or YAML) to the Bay's native API. That approach strikes a balance that’s
> appropriate.
>
> Adrian
>
> On Nov 19, 2015, at 10:01 AM, bharath thiruveedula <
> bharath_...@hotmail.com> wrote:
>
> Hi,
>
> In the present scenario, we can have the mesos conductor with the existing
> attributes[1]. Or we can add extra options like 'portMappings',
> 'instances', 'uris'[2]. The other option is to take a json file as input
> to 'magnum container-create' and dispatch it to the corresponding conductor,
> which will then handle the json input. Let me know your opinions.
>
>
> Regards
> Bharath T
>
>
>
>
> [1]https://goo.gl/f46b4H
> [2]https://mesosphere.github.io/marathon/docs/application-basics.html
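
A rough sketch of the kind of Marathon application definition that [2]
describes, showing where 'instances', 'uris' and 'portMappings' would fit (the
values are purely illustrative, not taken from this thread):

    {
      "id": "basic-web-app",
      "cmd": "python -m SimpleHTTPServer 8000",
      "cpus": 0.25,
      "mem": 64.0,
      "instances": 2,
      "uris": ["https://example.com/app-assets.tar.gz"],
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "python:2.7",
          "network": "BRIDGE",
          "portMappings": [{"containerPort": 8000, "hostPort": 0}]
        }
      }
    }

Accepting such a blob as the json-file input to 'magnum container-create' (as
proposed above) would let the mesos conductor pass it to Marathon largely
unchanged.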
> 
> To: openstack-dev@lists.openstack.org
> From: wk...@cn.ibm.com
> Date: Thu, 19 Nov 2015 10:47:33 +0800
> Subject: Re: [openstack-dev] [magnum] Mesos Conductor
>
> @bharath,
>
> 1) Actually, if you mean using container-create (and delete) on a mesos bay
> for apps, I am not sure how different the docker interface and the mesos
> interface would be. One point: when you introduce that feature, please do
> not make the docker container interface more complicated than it is now. I
> worry that it would confuse end-users more than the unification benefits
> are worth. (Maybe add an optional parameter to pass a json file to
> create containers in mesos.)
>
> 2) For the unified interface, I think it needs more thought. We should not
> burden end-users with learning new concepts or interfaces unless we can
> offer a clearer interface, but different COEs vary a lot. It is
> very challenging.
>
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> 
