Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long running task

2015-10-06 Thread Steve Gordon
- Original Message -
> From: "Roman Dobosz" 
> To: "OpenStack Development Mailing List" 
> 
> Hi all,
> 
> The case of automatic evacuation (or resurrection, currently) is a topic
> which surfaces once in a while, but it isn't yet fully supported by
> OpenStack and/or by the cluster services. There were some attempts to
> bring the feature into OpenStack, however it turned out it cannot be
> easily integrated. On the other hand, evacuation may be initiated from
> the outside using the Nova client or Nova API calls.
> 
> I did some research regarding the ways it could be designed, based
> on Russell Bryant's blog post[1] as a starting point. Apart from that, I've
> also taken high availability and reliability into consideration when
> designing the solution.
> 
> Together with a coworker, we did a first PoC[2] to enable the cluster to
> perform evacuation. The idea behind that PoC was simple - provide an
> additional, small service which would trigger and supervise the
> evacuation process, initiated from the outside (in this example we were
> using the Pacemaker fencing facility, but it might be anything) using
> RabbitMQ directly. Those services run on the control plane in
> active-active (AA) fashion.

Hi Roman,

Another aspect of this which we discussed briefly a few weeks back was whether 
external HA solutions like that proposed by Russell should be "opt-in" on a 
per-instance basis via an image property or flavor extra specification. That is, 
the external instance high-availability solution would only automatically 
move virtual machines that had this attribute associated with them, whatever it 
ends up being.

I'm wondering if there is any appetite in the community for standardizing on 
what this literal property or extra specification would be, even though the 
delivery of the HA solutions themselves is not part of Nova but rather 
handled by deployers/distributors using external tools like Pacemaker?
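
For illustration, opting an instance in might look something like this
(the property and key names here are purely hypothetical, since nothing
has been standardized yet):

    # via a flavor extra specification
    nova flavor-key m1.ha set ha:automatic_evacuation=true

    # or via an image property
    glance image-update --property ha_automatic_evacuation=true <image-id>

The external HA tooling would then only consider instances carrying that
attribute when deciding what to move.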

Thanks,

Steve

> That worked well for us. So we started exploring other possibilities like
> oslo.messaging, just to use it in the same manner as we did in the PoC.
> It turns out that the implementation will not be as easy, because there
> is no facility in oslo.messaging for sending an ACK from the
> client after the job is done (rather than as soon as it gets the message). We
> also looked at the existing OpenStack projects for a candidate which
> provides a service for managing long running tasks.
> 
> There is the Mistral project, which gives us almost all the features we
> need. The one missing feature is HA of Mistral task execution.
> 
> The question is, how such problem (long running tasks) could be resolved
> in OpenStack?
> 
> [1] http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
> [2] https://github.com/dawiddeja/evacuationd

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] ports management

2015-10-06 Thread Peter V. Saveliev

…


The problem.



There are use cases where it is needed to attach a vnic to some specific 
network interface instead of br-int.


For example, when working with trunk ports, it is better to attach the vnic 
to a specific trunk bridge, and to add that bridge to br-int. But this 
doesn't fit in the current design.
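
A rough sketch of the intended topology with OVS (the bridge and port 
names here are invented for illustration):

   ovs-vsctl add-br tbr-1
   ovs-vsctl add-port tbr-1 tpt-1 -- set Interface tpt-1 type=patch options:peer=tpi-1
   ovs-vsctl add-port br-int tpi-1 -- set Interface tpi-1 type=patch options:peer=tpt-1

The vnic would then be attached to tbr-1 instead of directly to br-int.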


There are several possible ways to solve the issue:

1. make the user responsible for passing a ready-to-use port to nova, so 
nova will not care about libvirt adding the port to the bridge
2. make the neutron service synchronously call the agent to create the 
required interface, e.g. the trunk bridge.

3. make neutron somehow delay the vif plug
4. make nova create the required port

Also two hack-like alternatives:

5. do not use another interface like the trunk bridge, and instead install 
another flow table
6. intercept vnic plug, and forcibly reattach it to an on-demand created 
interface



Option 5 leads us to problems with MAC learning. Option 6 is a pure hack 
that will cause issues with literally everything, like migration etc.


Option 1 is not a solution sensu stricto. Option 2 has a scalability 
issue, effectively making neutron synchronous across the whole 
cluster in that case.


Options 3 and 4 sound more reasonable, but it is not so clear (to me) how 
to do 3, and 4 impacts nova.


…

And there is a solution that can be used in that case as well: binding 
negotiation, see the references below.


…


The question


I would like to see your opinions on how it could be managed. I believe I 
may have missed something. Thanks for comments.


…

References
--

https://review.openstack.org/#/c/190917/7/specs/mitaka/approved/nova-neutron-binding-negotiation.rst
https://review.openstack.org/#/c/213644/1/specs/mitaka/approved/trunk-port.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][PTL] PTL Candidates Q Session

2015-10-06 Thread Vladimir Kuklin
Which is actually contradictory and ambiguous, and shows that the PTL has less
power than CLs while CLs at the same time have less power than the PTL. I think
this is the time when the universe should collapse, as we have found that
time-space contradicts the laws of propositional calculus.

On Tue, Oct 6, 2015 at 6:26 PM, Tomasz Napierala 
wrote:

> Hi
>
> That’s right, but we made a slight change here:
> "Define architecture direction & review majority of design specs. Rely on
> Component Leads and Core Reviewers"
>
> So we assume that detailed architectural work will be delegated to Component
> Leads
>
>
> > On 02 Oct 2015, at 10:12, Evgeniy L  wrote:
> >
> > Hi Mike,
> >
> > According to the description of the role, I wouldn't say that the role
> is less architectural than
> > political, since PTL should review designs and resolve conflicts between
> cores (which are
> > usually technical), PTL should also have strong skills in software
> architecture, and understanding
> > of what Fuel should look like.
> >
> > Thanks,
> >
> > On Thu, Oct 1, 2015 at 11:32 PM, Mike Scherbakov <
> mscherba...@mirantis.com> wrote:
> > > we may mix technical direction / tech debt roadmap and process,
> political, and people management work of PTL.
> > sorry, of course I meant that we rather should NOT mix these things.
> >
> > To make my email very short, I'd say PTL role is more political and
> process-wise rather than architectural.
> >
> > On Wed, Sep 30, 2015 at 5:48 PM Mike Scherbakov <
> mscherba...@mirantis.com> wrote:
> > Vladimir,
> > we may mix technical direction / tech debt roadmap and process,
> political, and people management work of PTL.
> >
> > PTL definition in OpenStack [1] reflects many things which PTL becomes
> responsible for. This applies to Fuel as well.
> >
> > I'd like to reflect some things here which I'd expect the PTL to do, most of
> which will intersect with [1]:
> > - Participate in cross-project initiatives & resolution of issues around
> it. Great example is puppet-openstack vs Fuel [2]
> > - Organize required processes around launchpad bugs & blueprints
> > - Personal feedback to Fuel contributors & public suggestions
> when needed
> > - Define architecture direction & review majority of design specs. Rely
> on Component Leads and Core Reviewers
> > - Ensure that roadmap & use cases are aligned with architecture work
> > - Resolve conflicts between core reviewers, component leads. Get people
> to the same page
> > - Watch for code review queues and quality of reviews. Ensure discipline
> of code review.
> > - Testing / coverage have to be kept at a high level
> >
> > Considering all of the above, contributors have actually been working with all
> of us and know who could best handle such hard work. I don't think a
> special Q session is needed. If there are concerns / particular process/tech
> questions we'd like to discuss - those should just be opened as email threads.
> >
> > [1] https://wiki.openstack.org/wiki/PTL_Guide
> > [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/066685.html
> >
> > Thank you,
> >
> > On Tue, Sep 29, 2015 at 3:47 AM Vladimir Kuklin 
> wrote:
> > Folks
> >
> > I think it is awesome we have three candidates for the PTL position in Fuel.
> I read all candidates' emails (including my own, several times :-) ) and I
> got the slight impression of not being able to really differentiate the
> candidates' platforms, as they are almost identical from a high-level point
> of view. But we all know that the devil is in the details. And these details
> will actually affect the project's future.
> >
> > Thus I thought about a Q session in the #fuel-dev channel on IRC. I think
> that this will be mutually beneficial for everyone, to make our platforms a
> little bit clearer.
> >
> > Let's do it before or right at the start of actual voting so that our
> contributors can make better decisions based on this session.
> >
> > I suggest the following format:
> >
> > 1) 3 questions from electorate members - let's put them onto an etherpad
> > 2) 2 questions from a candidate to his opponents (1 question per
> opponent)
> > 3) external moderator - I suppose, @xarses as our weekly meeting
> moderator could help us
> > 4) time and date - Wednesday or Thursday comfortable for both timezones,
> e.g. after 4PM UTC or right after fuel weekly meeting.
> >
> > What do you think, folks?
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuk...@mirantis.com
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Neutron][Kuryr] - IRC Meetings

2015-10-06 Thread Gal Sagie
Hello Everyone,

Our current IRC meeting time is 15:00 UTC every Monday.
I have received several requests from US people saying it's hard
for them to attend at this early time.

We want to allow anyone that shows interest in the project to join, so we
discussed this in yesterday's IRC meeting and decided to run an
"experiment".
We will have an additional, informal IRC meeting next week and the week
after, at a time more suitable for people in that timezone.

If we see there is enough interest, we will consider conducting
alternating meetings.

If you are in that area and would like to join our meetings, please
suggest a time that fits you, but please still try to make it as early as
possible for people from Europe.

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Sebastian Kalinowski
I've already written in the review that caused this thread that I do not want
to blindly follow rules for using one or the other. We should always consider
technical requirements. And I do not see a reason to leave py.test (and
nobody has shown me such a reason) and replace it with something else.

Additionally, other folks showed that this is not a blocker for moving under
the big tent.

Best,
Sebastian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] What to do when a controller runs out of space

2015-10-06 Thread Vladimir Kuklin
Eugene

With all due respect to you and other OpenStack developers, I as a system
administrator do not simply believe it when someone says that something
works that way. Actually, what I would prefer to do is to stress-test these
services on their 'statelessness'. Currently we have an l3-agent that is not
so stateless and that lacks centralized synchronization in a proper way,
which you have not actually denied. So I agree - let's move this into a
different thread and not hijack this one.

On Tue, Oct 6, 2015 at 5:11 PM, Eugene Nikanorov 
wrote:

>
> On Tue, Oct 6, 2015 at 4:22 PM, Vladimir Kuklin 
> wrote:
>
>> Eugene
>>
>> For example, each time that you need to have one instance (e.g. master
>> instance) of something non-stateless running in the cluster.
>>
>
> Right. This is theoretical. Practically, there are no such services in
> OpenStack.
>
> You are right that currently lots of things are fixed already - heat
>> engine is fine, for example. But I still see this issue with l3 agents and
>> I will not change my mind until we conduct complete scale and destructive
>> testing with new neutron code.
>>
>> Secondly, if we cannot reliably identify when to engage - then we need to
>> write the code that will tell us when to engage. If this code is already in
>> place and we can trigger a couple of commands to figure out Neutron agent
>> state, then we can add them to OCF script monitor and that is all. I agree
>> that we have some issues with our OCF scripts, for example some suboptimal
>> cleanup code that has issues at big scale, but I am almost sure we can
>> fix it.
>>
>> Finally, let me show an example of when you need a centralized cluster
>> manager to manage such situations - you have a temporary issue with
>> connectivity to neutron server over management network for some reason.
>> Your agents are not cleaned up and neutron server starts new l3 agent
>> instances on different node. In this case you will have IP duplication in
>> the network and will bring down the whole cluster as connectivity through
>> 'public' network will be working just fine. In case when we are using
>> Pacemaker - such node will be either fenced or will stop all the services
>> controlled by pacemaker as it is a part of non-quorate partition of the
>> cluster. When this happens, l3 agent OCF script will run its cleanup
>> section and purge all the stale IPs thus saving us from the trouble. I
>> obviously may be mistaken, so please correct me if this is not the case.
>>
> I think this deserves discussion in a separate thread, which I'll start
> soon.
> My initial point was (to state it clearly), that I will be -2 on any new
> additions of openstack services to pacemaker kingdom.
>
> Thanks,
> Eugene.
>
>>
>>
>> On Tue, Oct 6, 2015 at 3:46 PM, Eugene Nikanorov > > wrote:
>>
>>>
>>>
 2) I think you misunderstand what the difference is between
 upstart/systemd and Pacemaker in this case. There are many cases when you
 need to have a synchronized view of the cluster. Otherwise you will hit
 split-brain situations and have your cluster malfunctioning. Until
 OpenStack provides us with such means there is no other way than using
 Pacemaker/Zookeeper/etc.

>>>
>>> Could you please give some examples of those 'many cases' for openstack
>>> specifically?
>>> As for my 'misunderstanding' - openstack services only need to be always
>>> up, not more than that.
>>> Upstart does a perfect job there.
>>>
>>>
 3) Regarding Neutron agents - we discussed it many times - you need to
 be able to control and clean up stuff after some service crashed.
 Currently, Neutron does not provide reliable ways to do it. If your agent
 dies and does not clean up ip addresses from the network namespace you will
 get into the situation of ARP duplication which will be a kind of split
 brain described in item #2. I personally, as a system architect and
 administrator, do not believe this will change in at least several years
 for OpenStack, so we will be using Pacemaker for a very long period of time.

>>>
>>> This has been changed already, and a while ago.
>>> OCF infrastructure around neutron agents has never helped neutron in any
>>> meaningful way and is just an artifact from the dark past.
>>> The reasons are: pacemaker/ocf doesn't have enough intelligence to know
>>> when to engage, as a result, any cleanup could only be achieved through
>>> manual operations. I don't need to remind you how many bugs were in ocf
>>> scripts which brought whole clusters down after those manual operations.
>>> So it's just way better to go with simple standard tools with
>>> fine-grained control.
>>> Same applies to any other openstack service (again, not rabbitmq/galera)
>>>
>>> > so we will be using Pacemaker for a very long period of time.
>>> Not for neutron, sorry. As soon as we finish the last bit of such
>>> cleanup, which is targeted for 8.0
>>>
>>> 

Re: [openstack-dev] [Fuel][PTL] PTL Candidates Q Session

2015-10-06 Thread Tomasz Napierala
Hi

That’s right, but we made a slight change here:
"Define architecture direction & review majority of design specs. Rely on 
Component Leads and Core Reviewers"

So we assume that detailed architectural work will be delegated to Component Leads


> On 02 Oct 2015, at 10:12, Evgeniy L  wrote:
> 
> Hi Mike,
> 
> According to the description of the role, I wouldn't say that the role is 
> less architectural than
> political, since PTL should review designs and resolve conflicts between 
> cores (which are
> usually technical), PTL should also have strong skills in software 
> architecture, and understanding
> of what Fuel should look like.
> 
> Thanks,
> 
> On Thu, Oct 1, 2015 at 11:32 PM, Mike Scherbakov  
> wrote:
> > we may mix technical direction / tech debt roadmap and process, political, 
> > and people management work of PTL.
> sorry, of course I meant that we rather should NOT mix these things.
> 
> To make my email very short, I'd say PTL role is more political and 
> process-wise rather than architectural.
> 
> On Wed, Sep 30, 2015 at 5:48 PM Mike Scherbakov  
> wrote:
> Vladimir,
> we may mix technical direction / tech debt roadmap and process, political, 
> and people management work of PTL.
> 
> PTL definition in OpenStack [1] reflects many things which PTL becomes 
> responsible for. This applies to Fuel as well.
> 
> I'd like to reflect some things here which I'd expect the PTL to do, most of 
> which will intersect with [1]:
> - Participate in cross-project initiatives & resolution of issues around it. 
> Great example is puppet-openstack vs Fuel [2]
> - Organize required processes around launchpad bugs & blueprints
> - Personal feedback to Fuel contributors & public suggestions when 
> needed
> - Define architecture direction & review majority of design specs. Rely on 
> Component Leads and Core Reviewers
> - Ensure that roadmap & use cases are aligned with architecture work
> - Resolve conflicts between core reviewers, component leads. Get people to 
> the same page
> - Watch for code review queues and quality of reviews. Ensure discipline of 
> code review.
> - Testing / coverage have to be kept at a high level
> 
> Considering all of the above, contributors have actually been working with all of us 
> and know who could best handle such hard work. I don't think a special 
> Q session is needed. If there are concerns / particular process/tech questions we'd 
> like to discuss - those should just be opened as email threads.
> 
> [1] https://wiki.openstack.org/wiki/PTL_Guide
> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066685.html
> 
> Thank you,
> 
> On Tue, Sep 29, 2015 at 3:47 AM Vladimir Kuklin  wrote:
> Folks
> 
> I think it is awesome we have three candidates for the PTL position in Fuel. I 
> read all candidates' emails (including my own, several times :-) ) and I got 
> the slight impression of not being able to really differentiate the candidates' 
> platforms, as they are almost identical from a high-level point of view. But 
> we all know that the devil is in the details. And these details will actually 
> affect the project's future.
> 
> Thus I thought about a Q session in the #fuel-dev channel on IRC. I think that 
> this will be mutually beneficial for everyone, to make our platforms a little 
> bit clearer.
> 
> Let's do it before or right at the start of actual voting so that our 
> contributors can make better decisions based on this session.
> 
> I suggest the following format:
> 
> 1) 3 questions from electorate members - let's put them onto an etherpad
> 2) 2 questions from a candidate to his opponents (1 question per opponent)
> 3) external moderator - I suppose, @xarses as our weekly meeting moderator 
> could help us
> 4) time and date - Wednesday or Thursday comfortable for both timezones, e.g. 
> after 4PM UTC or right after fuel weekly meeting.
> 
> What do you think, folks?
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> -- 
> Mike Scherbakov
> #mihgen


-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-06 Thread Mike Spreitzer
[Sorry, but I do not know if the thundering silence is because these 
questions are too hard, too easy, grossly off-topic, or simply because 
nobody cares.]

I have been looking at 
http://docs.openstack.org/developer/devstack/guides/neutron.html and wonder 
about a few things.

In the section 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
there is a helpful display of localrc contents.  It says, among other 
things,

   OVS_PHYSICAL_BRIDGE=br-ex
   PUBLIC_BRIDGE=br-ex

In the next top-level section, 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
, there is no display of revised localrc contents and no mention of 
changing either bridge setting.  That is an oversight, right?  I am 
guessing I need to set OVS_PHYSICAL_BRIDGE and PUBLIC_BRIDGE to different 
values, and the exhibited `ovs-vsctl` commands in this section apply to 
$OVS_PHYSICAL_BRIDGE.  Is that right?  Are there other revisions I need to 
make to localrc?
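
My best guess at the revised settings, just for concreteness (these names
are my own invention and may well be wrong):

   OVS_PHYSICAL_BRIDGE=br-eth1
   PUBLIC_INTERFACE=eth1
   PUBLIC_BRIDGE=br-ex

Is something along those lines what was intended?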

Looking at 
http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html (or, in 
former days, the doc now preserved at 
http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/under_the_hood_openvswitch.html)
I see the name br-ex used for $PUBLIC_BRIDGE --- not $OVS_PHYSICAL_BRIDGE, 
right?  Wouldn't it be less confusing if 
http://docs.openstack.org/developer/devstack/guides/neutron.html used a 
name other than "br-ex" for the exhibited commands that apply to 
$OVS_PHYSICAL_BRIDGE?

The section 
http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch
builds on 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
NOT 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
--- right?  Could I stop after reading that section, or must I go on to 
http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks
?

The exhibited localrc contents in section 
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
include both of these:

   Q_L3_ENABLED=True
   Q_USE_PROVIDERNET_FOR_PUBLIC=True

and nothing gainsays either of them until section 
http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks
--- where we first see

   Q_L3_ENABLED=False

Is it true that all the other sections want both Q_L3_ENABLED and 
Q_USE_PROVIDERNET_FOR_PUBLIC to be True?

I tried adding IPv6 support to the recipe of the first section (
http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
).  I added this to my localrc:

IP_VERSION=4+6
IPV6_PUBLIC_RANGE=fddf:2::/64
IPV6_PUBLIC_NETWORK_GATEWAY=fddf:2::1
IPV6_ROUTER_GW_IP=fddf:2::231

At first I had tried setting a different set of IPv6 variables (having 
only IP_VERSION in common with what I exhibit here), but found that those: (a) 
duplicated the defaults and (b) caused problems due to lack of the ones I 
mention here.  Even the ones mentioned here led to a problem.  There is a 
bit of scripting that replaces my setting for IPV6_ROUTER_GW_IP with 
something dug out of Neutron.  That went wrong.  It replaced my setting 
with fddf:2::2, but that address was already in use by something else.

Thanks,
Mike



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-06 Thread Daniel P. Berrange
On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:
> https://review.openstack.org/#/c/85048/ was raised to address the
> migration of instances that are not running but people did not warm to
> the idea of bringing a stopped/suspended instance to a paused state to
> migrate it.  Is there any work in progress to get libvirt enhanced to
> perform the migration of non active virtual machines?

Libvirt can "migrate" the configuration of an inactive VM, but does
not plan to do anything related to storage migration. OpenStack could
already solve this itself by using the libvirt storage pool APIs to
copy storage volumes across, but the storage pool work in Nova
is stalled:

https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z
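
For reference, an (untested) sketch of copying a volume between hosts with
the libvirt storage pool APIs and streams; the pool and volume names are
invented for illustration:

    import libvirt

    src = libvirt.open('qemu+ssh://source/system')
    dst = libvirt.open('qemu+ssh://dest/system')

    vol = src.storagePoolLookupByName('default').storageVolLookupByName('disk0')
    # Create an equivalent empty volume on the destination...
    new_vol = dst.storagePoolLookupByName('default').createXML(vol.XMLDesc(0), 0)

    # ...and pump the contents across via two streams.
    st_in, st_out = src.newStream(0), dst.newStream(0)
    vol.download(st_in, 0, 0, 0)        # offset 0, length 0 == whole volume
    new_vol.upload(st_out, 0, 0, 0)
    while True:
        data = st_in.recv(256 * 1024)
        if not data:                    # zero-length read == end of stream
            break
        st_out.send(data)
    st_in.finish()
    st_out.finish()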

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Flavio Percoco

Greetings,

Not so long ago, Erno started a thread[0] in this list to discuss the
abandon policies for patches that haven't been updated in Glance.

I'd like to go forward and start following that policy with some
changes that you can find below:

1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.

2) Do this just on master, for all patches regardless of whether they fix a
bug or implement a spec, and for all patches regardless of their review
status.

3) The patch will be first marked as WIP and then abandoned if the
patch is not updated in 1 week. This will put these patches at the
beginning of the queue, but using the Glance review dashboard should
help keep focus.
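
For reference, the candidate patches could be listed with a Gerrit query
along these lines (the exact age syntax may need tweaking):

    status:open project:openstack/glance branch:master age:2mon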

Unless there are some critical things missing in the above or strong
opinions against this, I'll make this effective starting next Monday,
October 12th.

Best regards,
Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-February/056829.html


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long running task

2015-10-06 Thread Matthew Booth
Hi, Roman,

Evacuated has been on my radar for a while and this post has prodded me to
take a look at the code. I think it's worth starting by explaining the
problems in the current solution. Nova client is currently responsible for
doing this evacuate. It does:

1. List all instances on the source host
2. Initiate evacuate for each instance

Evacuating a single instance does:

API:
1. Set instance task state to rebuilding
2. Create a migration record with source and dest if specified

Conductor:
3. Call the scheduler to get a destination host if not specified
4. Get the migration object from the db

Compute:
5. Rebuild the instance on dest
6. Update instance.host to dest

Examining single instance evacuation, the first obvious thing to look at is
what happens if 2 are initiated simultaneously. Because step 1 is atomic, it
should not be possible to initiate 2 evacuations of a single instance at once.
However, note that this atomic action hasn't updated the instance host,
meaning the source host remains the owner of this instance. If the
evacuation process fails to complete, the source host will automatically
delete it if it comes back up because it will find a migration record, but
it will not be rebuilt anywhere else. Evacuating it again will fail,
because its task state is already rebuilding.

Also, let's imagine that the conductor crashes. There is not enough state
for any tool, whether internal or external, to be able to know if the
rebuild is ongoing somewhere or not, and therefore whether it is safe to
retry even if that retry would succeed, which it wouldn't.

Which is to say that we can't currently robustly evacuate one instance!

Looking at the nova client side, there is an obvious race there: there is
no guarantee in step 2 that instances returned in step one have not already
been evacuated by another process. We're protected here, though because
evacuating a single instance twice will fail the second time. Note that the
process isn't idempotent, though, because an evacuation which falls into a
hole will never be retried.

Moving on to what evacuated does. Evacuated uses rabbit to distribute jobs
reliably. There are 2 jobs in evacuated:

1. Evacuate host:
  1.1 Get list of all instances on the source host from Nova
  1.2 Send an evacuate vm job for each instance
2. Evacuate vm:
  2.1 Tell Nova to start evacuating an instance

Because we're using rabbit as a reliable message bus, the initiator of one
of the tasks knows that it will eventually run to completion at least once.
Note that there's nothing to prevent the task being executed more than once
per call, though. A task may crash before sending an ack, or may just be
really slow. However, in both cases, for exactly the same reasons as for
the implementation in nova client, running more than once should not race.
It is still not idempotent, though, again for exactly the same reasons as
nova client.

Also notice that, exactly as in the nova client implementation, we are not
asserting that an instance has been evacuated. We are only asserting that
we called nova.evacuate, which is to say that we got as far as step 2 in
the evacuation sequence above.

In other words, in terms of robustness, calling evacuated's evacuate host
is identical to asserting that nova client's evacuate host ran to
completion at least once, which is quite a lot simpler to do. That's still
not very robust, though: we don't recover from failures, and we don't
ensure that an instance is evacuated, only that we started an attempt to
evacuate at least once. I'm obviously not satisfied with nova client,
however as the implementation is simpler I would favour it over evacuated.

I believe we can solve this problem, but I think that without fixing
single-instance evacuate we're just pushing the problem around (or creating
new places for it to live). I would base the robustness of my
implementation on a single principle:

  An instance has a single owner, which is exclusively responsible for
rebuilding it.

In outline, I would redefine the evacuate process to do:

API:
1. Call the scheduler to get a destination for the evacuate if none was
given.
2. Atomically update instance.host to this destination, and task state to
rebuilding.

Compute:
3. Rebuild the instance.

This would be supported by a periodic task on the compute host which looks
for rebuilding instances assigned to this host which aren't currently
rebuilding, and kicks off a rebuild for them. This would cover the compute
going down during a rebuild, or the api going down before messaging the
compute.
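
A minimal sketch of that periodic task (the method and attribute names here
are made up for illustration, not actual Nova internals):

    @periodic_task.periodic_task
    def _retry_orphaned_rebuilds(self, context):
        # Instances this host owns which are marked rebuilding, but for
        # which no rebuild is actually running locally, get kicked again.
        instances = objects.InstanceList.get_by_host(context, self.host)
        for instance in instances:
            if (instance.task_state == task_states.REBUILDING
                    and instance.uuid not in self._running_rebuilds):
                # Safe because the instance was atomically assigned to this
                # host in step 2; the rebuild itself must be idempotent.
                self._rebuild_instance(context, instance)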

Implementing this gives us several things:

1. The list instances, evacuate all instances process becomes idempotent,
because as soon as the evacuate is initiated, the instance is removed from
the source host.
2. We get automatic recovery of failure of the target compute. Because we
atomically moved the instance to the target compute immediately, if the
target compute also has to be evacuated, our instance won't fall through
the 

[openstack-dev] [Keystone] Mitaka design summit schedule

2015-10-06 Thread Steve Martinelli

Keystoners and Keystone enthusiasts,

At our last Keystone meeting we decided on the topics for the design
summit; the schedule is now available online:
  - http://mitakadesignsummit.sched.org/type/Keystone#.VhSpPBNVhBc

Please note the start times and locations.

I've gone ahead and created etherpads for each of these (in order of how
they appear in the schedule):
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-tokens
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-multitenancy
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-policy
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-deprecations
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-federation
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-server
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-testing
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-oslo-and-docs
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-libraries
  - https://etherpad.openstack.org/p/keystone-mitaka-summit-x-project

Please refrain from modifying our brain dump etherpad[1] and instead modify
the specific etherpad above.

I'll be adding these to the Mitaka Design Summit Wiki soon.

See you in a few weeks!

[1] http://mitakadesignsummit.sched.org/type/Keystone#.VhSpPBNVhBc

Thanks,

Steve Martinelli
OpenStack Keystone PTL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [openstack-infra] stable changes to python-neutronclient unable to merge

2015-10-06 Thread Armando M.
Hi folks,

We are unable to merge stable changes to python-neutronclient (as shown in
[1,2]) because of the missing master fixes [3,4]. We should be able to
untangle Liberty with [5], but to unblock Kilo, I may need to squash [6]
with a cherry pick of [3] and wait for [5] to merge.
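
(Roughly, for the Kilo case, something like the following, modulo the
usual conflict resolution:

    git checkout stable/kilo
    git review -d 231797     # fetch [6]
    git review -x 231731     # cherry-pick [3] on top
    # squash the two commits, then re-submit
    git review stable/kilo
)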

Please bear with us until we get the situation sorted.

Cheers,
Armando

[1]
https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient+branch:stable/kilo,n,z
[2]
https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient+branch:stable/liberty,n,z
[3] https://review.openstack.org/#/c/231731/
[4] https://review.openstack.org/#/c/231797/
[5] https://review.openstack.org/#/c/231796/
[6] https://review.openstack.org/#/c/231797/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Blueprints for upgrades

2015-10-06 Thread Zane Bitter
I've started an etherpad collecting a list of potential blueprints for 
achieving major version upgrades in TripleO, with an initial target 
of upgrading Kilo to Liberty using a stable/liberty undercloud. There's 
still a bunch of unanswered questions (especially around compute nodes), 
and some important considerations are likely to be missing. I'd like to 
ask everyone to take a look, add comments, answer questions if you can, 
add in any other blueprints we should be considering, and propose specs 
if you're in a good position to do so.


Have at it:

https://etherpad.openstack.org/p/tripleo-kilo-to-liberty-upgrades

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][keystone]

2015-10-06 Thread David Chadwick
Dear All

One of my students, Anton Brida, has developed an Attribute Mapping GUI
for Horizon as part of his MSc project. Attribute mappings are an
essential, though complex, part of federated Keystone. Currently they
can only be created as JSON objects in the config file. The Horizon code
allows them to be dynamically created via an easy-to-use GUI.

Since Anton has now left the university for full time employment, he is
not able to go through the process of submitting his code to the next
release of Horizon. His design, however, was submitted to InVision and
commented on by various people at the time of development.

I am now looking for someone who would like to take a copy of this code
and go through the process of submitting this to the next release of
Horizon. I have a copy of Anton's MSc dissertation as well which
explains the work that he has done.

All the attribute mapping features are supported in Anton's code
(groups, users, direct mapping, multiple attribute values etc.)
However the whitelist/blacklist feature is not, since this was not fully
incorporated into Keystone when Anton was doing his implementation. (I
am still not sure if it has been.)

The code has a couple of known bugs:

1. when a user tries to enter an email address as an attribute value
(e.g. usern...@example.com) and saves the mapping rule into the
database, after reloading the new list of mapping rules the interface
does not work as intended. The particular reason why this is happening
is as yet unknown. The only way to avoid such disruption is to delete the
faulty mapping rule from the table. After removing the faulty rule the
interface works as intended.

2. Some of the descriptive text needs improvement due to incorrect grammar.

There is also the following suggested enhancement which can be added later:

1. After the mapping rules are created with the GUI, when they are
displayed, they are still in JSON format. It would be nice to be able to
display the rules in a table or similar.

If you would like to take on the job of submitting this code to Horizon
for review and incorporation, please contact me.

regards

David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Liberty release finalization

2015-10-06 Thread Nikhil Komawar
Woohoo! It's a feat!

On 10/6/15 12:19 PM, Tripp, Travis S wrote:
>
> On 10/6/15, 2:28 AM, "Thierry Carrez"  wrote:
>
>> The "intermediary" model requires the project following it to be mature
>> enough (and the project team following it to be disciplined enough) to
>> internalize the QA process.
>>
>> In the "with-milestones" model, you produce development milestones and
>> release candidates to get the features out early and progressively get
>> more and more outside testing on proposed artifacts. It's "ok" if a
>> development milestone is revealed to be unusable: that shows lack of
>> proper testing coverage, and there is still time to fix things before
>> the "real" release.
>>
>> In the "intermediary" model, you deliver fully-usable releases that you
>> recommend production deployments to upgrade to. There is no alpha, beta
>> or RC. You directly tag a release. That means you need to be confident
>> enough in your own testing and testing coverage. Mistakes can still
>> happen (in which case we rush a subsequent point release) but should
>> really be exceptional, otherwise nobody will trust your deliverables.
>>
>> This is why we recommend the "intermediary" model to mature projects and
>> project teams -- that model requires excellent test coverage and
>> discipline inside the team to slow down development as you get closer to
>> a release tag and spend time on testing.
>>
>> -- 
>> Thierry Carrez (ttx)
> Thierry,
>
> Thanks again for the information. After quite a bit of discussion in our IRC 
> channel this morning, we think it does make sense to start with the 
> milestones as recommended.  So, I’ve gone ahead and applied the rc1 tag and 
> will follow up with you in the openstack-relmgr-office for next steps!
>
> Thanks,
> Travis
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-06 Thread Chris Friesen

On 10/06/2015 11:27 AM, Paul Carlton wrote:



On 06/10/15 17:30, Chris Friesen wrote:

On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:

On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:

https://review.openstack.org/#/c/85048/ was raised to address the
migration of instances that are not running but people did not warm to
the idea of bringing a stopped/suspended instance to a paused state to
migrate it.  Is there any work in progress to get libvirt enhanced to
perform the migration of non active virtual machines?


Libvirt can "migrate" the configuration of an inactive VM, but does
not plan to do anything related to storage migration. OpenStack could
already solve this itself by using the libvirt storage pool APIs to
copy storage volumes across, but the storage pool work in Nova
is stalled:

https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z



What is the libvirt API to migrate a paused/suspended VM? Currently nova uses
dom.managedSave(), so it doesn't know what file libvirt used to save the
state.  Can libvirt migrate that file transparently?

I had thought we might switch to virDomainSave() and then use the cold
migration framework, but that requires passwordless ssh.  If there's a way to
get libvirt to handle it internally via the storage pool API then that would
be better.




So my reading of this is the issue could be addressed in Mitaka by
implementing
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html

and
https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst


is there any prospect of this being progressed?


Paul, that would avoid the need for cold migrations to use passwordless ssh 
between nodes.  However, I think there may be additional work to handle 
migrating paused/suspended instances--still waiting for Daniel to address that bit.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2015-10-06 Thread Christopher Aedo
On Sun, Oct 4, 2015 at 9:16 PM, Mike Spreitzer  wrote:
> [Apologies for re-posting, but I botched the subject line the first time and
> know that people use filters.]
>
> I have been looking at
> http://docs.openstack.org/developer/devstack/guides/neutron.html and wonder
> about a few things.
>
> In the section
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface there
> is a helpful display of localrc contents.  It says, among other things,
>
>OVS_PHYSICAL_BRIDGE=br-ex
>PUBLIC_BRIDGE=br-ex
>
> In the next top-level section,
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces,
> there is no display of revised localrc contents and no mention of changing
> either bridge setting.  That is an oversight, right?  I am guessing I need
> to set OVS_PHYSICAL_BRIDGE and PUBLIC_BRIDGE to different values, and the
> exhibited `ovs-vsctl` commands in this section apply to
> $OVS_PHYSICAL_BRIDGE.  Is that right?  Are there other revisions I need to
> make to localrc?
>
> Looking at
> http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html (or, in
> former days, the doc now preserved at
> http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/under_the_hood_openvswitch.html)
> I see the name br-ex used for $PUBLIC_BRIDGE --- not $OVS_PHYSICAL_BRIDGE,
> right?  Wouldn't it be less confusing if
> http://docs.openstack.org/developer/devstack/guides/neutron.html used a name
> other than "br-ex" for the exhibited commands that apply to
> $OVS_PHYSICAL_BRIDGE?
>
> The section
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch builds
> on
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces NOT
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface ---
> right?  Could I stop after reading that section, or must I go on to
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks?
>
> The exhibited localrc contents in section
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface include
> both of these:
>
>Q_L3_ENABLED=True
>Q_USE_PROVIDERNET_FOR_PUBLIC=True
>
> and nothing gainsays either of them until section
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks ---
> where we first see
>
>Q_L3_ENABLED=False
>
> Is it true that all the other sections want both Q_L3_ENABLED and
> Q_USE_PROVIDERNET_FOR_PUBLIC to be True?

I'd love to see a response from someone who can make sense of this
too.  With my evangelist hat on, I usually tell people who want to get
started with OpenStack development to start with Devstack.  More often
than not, they have trouble with the networking side.  As discussed
and hoped for in the "just get me a network" spec, there's definitely
a need for a less painful path for users.  Likewise we should be able
to share a devstack config that just works, but at the same time shows
off some of the great capabilities of neutron (and all the other good
bits of OpenStack).

Can anyone weigh in here on this issue?

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ironic] Announcing ironic-python-agent 1.0.0

2015-10-06 Thread Jim Rollenhagen
Hi all,

We're ecstatic to announce the release of ironic-python-agent 1.0.0. This is
the first ever release of this project, and is the basis for a stable/liberty
branch. We will continue to do intermediary releases and stable branches for
IPA, following Ironic's release model.

IPA is the heart of Ironic's deploy ramdisk. It was brought to the project
during the Icehouse cycle[0] as an alternate deploy mechanism to the existing
bash ramdisk. Since then, it has evolved to support every Ironic driver and
also provide a pluggable cleaning mechanism. It is now the recommended ramdisk
for Ironic deployments, and soon the only ramdisk, as the bash ramdisk is now
deprecated.

IPA is a python application that exposes a REST API for performing actions on
the server it is running on. It has a heartbeat mechanism, adding two-way
communication with Ironic. IPA allows for pluggable "hardware managers"; by
default it supports a wide variety of hardware in a generic way, and additional
hardware managers can be added to expose support or optimization for
specialized hardware. For example, one might add a hardware manager to make
disk erasure use a vendor-specific binary, rather than shred or hdparm's secure
erase.
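
For a taste, a custom hardware manager is just a Python class registered
via a stevedore entry point. A minimal sketch (see the docs linked below
for the authoritative API; details here may be off):

    from ironic_python_agent import hardware

    class VendorHardwareManager(hardware.HardwareManager):

        def evaluate_hardware_support(self):
            # Claim priority over the generic manager when this vendor's
            # hardware is detected (detection logic omitted here).
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def erase_block_device(self, node, block_device):
            # Shell out to the vendor-specific erase binary here instead
            # of shred or hdparm.
            ...

It would then be exposed through the 'ironic_python_agent.hardware_managers'
entry point namespace in setup.cfg.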

Instructions for including ironic-python-agent in a ramdisk, and links to
pre-built ramdisks from the master branch, are available at:
http://docs.openstack.org/developer/ironic-python-agent/#image-builders

Instructions for setting up Ironic to use a given ramdisk are here:
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#image-requirements

Please do try this out! For more information, see the links below.

Source code: http://git.openstack.org/cgit/openstack/ironic-python-agent
Documentation: http://docs.openstack.org/developer/ironic-python-agent/
PyPI package: https://pypi.python.org/pypi/ironic-python-agent/1.0.0
Bug tracker: https://bugs.launchpad.net/ironic-python-agent

// jim

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029270.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2015-10-06 23:36:53 +0900:
> Greetings,
> 
> Not so long ago, Erno started a thread[0] in this list to discuss the
> abandon policies for patches that haven't been updated in Glance.
> 
> I'd like to go forward and start following that policy with some
> changes that you can find below:
> 
> 1) Let's do this on patches that haven't had any activity in the last 2
> months. This adds one more month to Erno's proposal. The reason being
> that during the last cycle, there were some ups and downs in the review
> flow that caused some patches to get stuck.
> 
> 2) Do this just on master, for all patches regardless of whether they fix a
> bug or implement a spec, and for all patches regardless of their review
> status.
> 
> 3) The patch will be first marked as WIP and then abandoned if the
> patch is not updated in 1 week. This will put these patches at the
> beginning of the queue, but using the Glance review dashboard should
> help keep focus.
> 
> Unless there are some critical things missing in the above or strong
> opinions against this, I'll make this effective starting next Monday,
> October 12th.
> 
> Best regards,
> Flavio
> 
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-February/056829.html
> 

In the past we've had discussions on the list about how abandoning
patches can be perceived as hostile to contributors, and that using
a review dashboard with good filters is a better solution. Since
you already have a dashboard, I suggest adding a section for patches
that are old but have no review comments (maybe you already have
that) and another for patches where the current viewer has voted
-1. The first highlights the patches for reviewers, and ignores
them when they are in a state where we're waiting for feedback or
an update, and the latter provides a list of patches the current
reviewer is involved in and may need to recheck for new comments.
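
For example, with gerrit-dash-creator, sections along these lines (query
syntax untested):

    [section "Old, never reviewed"]
    query = NOT label:Code-Review>=1 NOT label:Code-Review<=-1 age:2mon

    [section "You voted -1"]
    query = label:Code-Review=-1,self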

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] external projects extending ceilometer

2015-10-06 Thread gord chung

hi,

telemetry is a big space and its requirements differ from company to 
company. there are existing projects that extend/leverage the 
functionality of Ceilometer to either customise, fill in gaps or resolve 
complementary problems.


as the projects are not all managed by the Ceilometer team, i've added a 
section to the wiki[1] so there's a place for users to see what 
extensions exist and for developers to promote their customisations. 
feel free to add your own projects.


[1] https://wiki.openstack.org/wiki/Ceilometer#Ceilometer_Extensions

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

2015-10-06 Thread Gal Sagie
Hello All,

I have opened a Trello board to track all assigned Kuryr tasks and their
assignees, in addition to all the unassigned tasks we have defined.

You can visit and look at the board here [1].
Please email back if I missed you or any task that you are working on, or
a task that you think needs to be on that list.

This is only a temporary solution until we get everything organised; we
plan to track everything with launchpad bugs (and the assigned blueprints).

If you see any task on this list which doesn't have an assignee, and you
feel you have the time and the desire to contribute, please contact me and
I will provide guidance.

Thanks
Gal

[1] https://trello.com/b/cbIAXrQ2/project-kuryr
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] backwards compat issue with PXEDeply and AgentDeploy drivers

2015-10-06 Thread Ramakrishnan G
Well, it's nice to fix, but I really don't know if we should be fixing it.
As discussed in one of the Ironic meetings before, we might need to define
what our driver API or SDK or DDK or whatever we choose to call it is.
Please see inline for my thoughts.

On Tue, Oct 6, 2015 at 5:54 AM, Devananda van der Veen <
devananda@gmail.com> wrote:

> tldr; the boot / deploy interface split we did broke an out of tree
> driver. I've proposed a patch. We should get a fix into stable/liberty too.
>
> Longer version...
>
> I was rebasing my AMTTool driver [0] on top of master because the in-tree
> one still does not work for me, only to discover that my driver suddenly
> failed to deploy. I have filed this bug
>   https://bugs.launchpad.net/ironic/+bug/1502980
> because we broke at least one out of tree driver (mine). I highly suspect
> we've broken many other out of tree drivers that relied on either the
> PXEDeploy or AgentDeploy interfaces that were present in Kilo release. Both
> classes in Liberty are making explicit calls to "task.driver.boot" -- and
> kilo-era driver classes did not define this interface.
>


I would like to think more about what really our driver API is ? We have a
couple of well defined interfaces in ironic/drivers/base.py which people
may follow, implement an out-of-tree driver, make it a stevedore entrypoint
and get it working with Ironic.

But

1) Do we promise them that in-tree implementations of these interfaces will
always exist?  For example, in the boot/deploy work done in Liberty, we removed
the class PxeDeploy [1].  It actually got broken down into PXEBoot and
ISCSIDeploy.  In the first place, do we guarantee that they will exist
forever in the same place with the same name? :)

2) Do we really promise that the in-tree implementations of these interfaces
will behave the same way? For example, the broken AgentDeploy, which
is an implementation of our DeployInterface.  Do we guarantee that this
implementation will always keep doing whatever it was doing every time code is
rebased?

[1] https://review.openstack.org/#/c/166513/19/ironic/drivers/modules/pxe.py



>
> I worked out a patch for the AgentDeploy driver and have proposed it here:
>   https://review.openstack.org/#/c/231215/1
>
> I'd like to ask folks to review it quickly -- we should fix this ASAP and
> backport it to stable/liberty before the next release, if possible. We
> should also make a similar fix for the PXEDeploy class. If anyone gets to
> this before I do, please reply here and let me know so we don't duplicate
> effort.
>


This isn't going to be the same as above, as there is no longer a PXEDeploy
class any more.  We might need to create a new class PXEDeploy which
probably inherits from ISCSIDeploy and has task.driver.boot worked around
in the same way as the above patch.
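
For illustration, the shape of such a shim might be roughly the following.
This is a sketch only - the module paths and the fallback mechanics are
assumptions for discussion, not the actual patch:

    # Hypothetical compat shim, illustrative only.
    from ironic.drivers.modules import iscsi_deploy
    from ironic.drivers.modules import pxe


    class PXEDeploy(iscsi_deploy.ISCSIDeploy):
        """Keep Kilo-era drivers without a boot interface working."""

        def _boot(self, task):
            # Kilo-era out-of-tree drivers may not define
            # task.driver.boot at all, so fall back to PXEBoot
            # behaviour in that case.
            boot = getattr(task.driver, 'boot', None)
            return boot if boot is not None else pxe.PXEBoot()

        def prepare(self, task):
            self._boot(task).prepare_ramdisk(task, ramdisk_params={})

        def clean_up(self, task):
            self._boot(task).clean_up_ramdisk(task)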



>
> Also, Jim already spotted something in the review that is a bit
> concerning. It seems like the IloVirtualMediaAgentVendorInterface class
> expects the driver it is attached to *not* to have a boot interface and
> *not* to call boot.clean_up_ramdisk. Conversely, other drivers may be
> expecting AgentVendorInterface to call boot.clean_up_ramdisk -- since that
> was its default behavior in Kilo. I'm not sure what the right way to fix
> this is, but I lean towards updating the in-tree driver so we remain
> backwards-compatible for out of tree drivers.
>
>
> -Devananda
>
> [0] https://github.com/devananda/ironic/tree/new-amt-driver
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] perfomance benchmark metrics of heat-api

2015-10-06 Thread Christian Berendt

On 10/06/2015 05:20 AM, ESWAR RAO wrote:

Has anyone done any performance tests on heat-api servers on any
standard setup, so as to know how many stack requests they can handle
before they stumble, so that we can plan scaling of heat servers?


It depends on your environment and you should run your own tests. Have a 
look at 
https://github.com/openstack/rally/tree/master/samples/tasks/scenarios/heat 
for a lot of prepared scenarios for Heat.
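
For instance, a minimal Heat task in the style of those samples could look
like the following; the scenario name comes from that samples directory,
while the template path and load numbers are placeholders to tune for your
environment:

    ---
    HeatStacks.create_and_delete_stack:
      - args:
          template_path: "templates/default.yaml.template"
        runner:
          type: "constant"
          times: 100
          concurrency: 10
        context:
          users:
            tenants: 2
            users_per_tenant: 3

You would then kick it off with "rally task start" pointing at that file and
read the resulting load/latency report.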


HTH, Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-06 Thread Steve Martinelli

Using `message flavor` works for me, and having two words is just fine.

I'm in the process of collecting all of the existing "object" words and
putting them online; there are a lot of them. Hopefully this will reduce
collisions in the future.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:   Shifali Agrawal 
To: openstack-dev@lists.openstack.org
Date:   2015/10/06 03:40 PM
Subject:[openstack-dev] [Zaqar][cli][openstack-client] conflict in nova
flavor and zaqar flavor



Greetings,

I am implementing CLI commands for Zaqar flavors; the command should look
like:

"openstack flavor "

But the same command is already present for Nova flavors. After
discussing with Zaqar devs, we thought to change all Zaqar commands such
that they include the `message` word after "openstack"; thus the above Zaqar
flavor command will become:

"openstack message flavor "

Do openstack-client devs have something to say about this? Or do they also
feel it's good to add the `message` word to all Zaqar CLI
commands?

Already existing Zaqar commands will keep working but will get a deprecation
message/warning; I will also implement them all to work with the `message`
word, and all new commands will be implemented so that they work only with
the `message` word.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core Reviewers groups restructure

2015-10-06 Thread Mike Scherbakov
Update here: patch was marked as WIP for now due to comment from Anita Kuno:

> On Oct. 17 all active stackforge projects that have themselves listed on
the stackforge retirement wikipage will be moved. This includes reviewing
all acl files for that move.

> Can we mark this patch wip until after the Oct. 17 stackforge rename and
then change the paths on these files to the openstack namespace?

On Thu, Oct 1, 2015 at 4:03 PM Dmitry Borodaenko 
wrote:

> This commit brings Fuel ACLs in sync with each other and in line with
> the agreement on this thread:
> https://review.openstack.org/230195
>
> Please review carefully. Note that I intentionally didn't touch any of
> the plugins ACLs, primarily to save time for us and the
> openstack-infra team until after the stackforge->openstack namespace
> migration.
>
> On Mon, Sep 21, 2015 at 4:17 PM, Mike Scherbakov
>  wrote:
> > Thanks guys.
> > So for fuel-octane then there are no actions needed.
> >
> > For fuel-agent-core group [1], looks like we are already good (it doesn't
> > have fuel-core group nested). But it would need to include fuel-infra
> group
> > and remove Aleksandra Fedorova (she will be a part of fuel-infra group).
> >
> > python-fuel-client-core [2] is good as well (no nested fuel-core).
> However,
> > there is another group python-fuelclient-release [3], which has to be
> > eliminated, and main python-fuelclient-core would just have fuel-infra
> group
> > included for maintenance purposes.
> >
> > [1] https://review.openstack.org/#/admin/groups/995,members
> > [2] https://review.openstack.org/#/admin/groups/551,members
> > [3] https://review.openstack.org/#/admin/groups/552,members
> >
> >
> > On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh 
> wrote:
> >>
> >> FYI, we have a separate core group for stackforge/fuel-octane repository
> >> [1].
> >>
> >> I'm supporting the move to modularization of Fuel with cleaner
> separation
> >> of authority and better defined interfaces. Thus, I'm +1 to such a
> change as
> >> a part of that move.
> >>
> >> [1] https://review.openstack.org/#/admin/groups/1020,members
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov
> >>  wrote:
> >>>
> >>> Hi all,
> >>> as of my larger proposal on improvements to code review workflow [1],
> we
> >>> need to have cores for repositories, not for the whole Fuel. It is the
> path
> >>> we are taking for a while, and new core reviewers added to specific
> repos
> >>> only. Now we need to complete this work.
> >>>
> >>> My proposal is:
> >>>
> >>> Get rid of one common fuel-core [2] group, members of which can merge
> >>> code anywhere in Fuel. Some members of this group may cover a couple of
> >>> repositories, but can't really be cores in all repos.
> >>> Extend existing groups, such as fuel-library [3], with members from
> >>> fuel-core who are keeping up with large number of reviews / merges.
> This
> >>> data can be queried at Stackalytics.
> >>> Establish a new group "fuel-infra", and ensure that it's included into
> >>> any other core group. This is for maintenance purposes, it is expected
> to be
> >>> used only in exceptional cases. Fuel Infra team will have to decide
> whom to
> >>> include into this group.
> >>> Ensure that fuel-plugin-* repos will not be affected by removal of
> >>> fuel-core group.
> >>>
> >>> #2 needs specific details. Stackalytics can show active cores easily,
> we
> >>> can look at people with *:
> >>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> >>> fuel-web, change the link for other repos accordingly. If people are
> added
> >>> specifically to the particular group, leaving as is (some of them are
> no
> >>> longer active. But let's clean them up separately from this group
> >>> restructure process).
> >>>
> >>> fuel-library-core [3] group will have following members: Bogdan D.,
> >>> Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
> >>> fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
> Vitaly
> >>> Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
> >>> fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
> >>> fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
> >>> fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
> >>> Urlapova
> >>> fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
> >>> Konstantinov, Olga Gusarenko
> >>> fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry Pyzhov,
> >>> Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
> >>> fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
> >>> fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
> >>> Sledzinsky, Dmitry Shulyak
> >>> fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
> >>> fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
> Urlapova
> >>> fuel-stats-core [14]: 

[openstack-dev] [Zaqar][cli][openstack-client] conflict in nova flavor and zaqar flavor

2015-10-06 Thread Shifali Agrawal
Greetings,

I am implementing CLI commands for Zaqar flavors; the command should look
like:

"openstack flavor "

But the same command is already present for Nova flavors. After
discussing with Zaqar devs, we thought to change all Zaqar commands such
that they include the `message` word after "openstack"; thus the above Zaqar
flavor command will become:

"openstack message flavor "

Do openstack-client devs have something to say about this? Or do they also
feel it's good to add the `message` word to all Zaqar CLI commands?

Already existing Zaqar commands will keep working but will get a deprecation
message/warning; I will also implement them all to work with the `message`
word, and all new commands will be implemented so that they work only with
the `message` word.
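
For context, OSC plugin commands are wired up through setup.cfg entry
points, where underscores in the entry point name map to spaces in the
command, so the rename is mostly a matter of renaming those entry points.
An illustrative sketch (the namespace and class paths are assumptions, not
zaqarclient's actual configuration):

    [entry_points]
    openstack.messaging.v1 =
        message_flavor_list = zaqarclient.queues.v1.cli:ListFlavors
        message_flavor_create = zaqarclient.queues.v1.cli:CreateFlavor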
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-06 Thread Rich Megginson

On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:

Rich Megginson  writes:


On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 30/09/15 03:43, Rich Megginson wrote:

On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:

On 15/09/15 19:55, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 15/09/15 06:53, Rich Megginson wrote:

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:

Hi,

Gilles Dubreuil  writes:


A. The 'composite namevar' approach:

   keystone_tenant {'projectX::domainY': ... }
 B. The 'meaningless name' approach:

  keystone_tenant {'myproject': name='projectX',
domain=>'domainY',
...}

Notes:
 - Actually using both combined should work too, with the domain
parameter supposedly overriding the domain part of the title.
 - Please look at [1] this for some background between the two
approaches:

The question
-
Decide between the two approaches, the one we would like to
retain for
puppet-keystone.

Why it matters?
---
1. Domain names are mandatory in every user, group or project.
Besides
the backward compatibility period mentioned earlier, where no domain
means using the default one.
2. Long term impact
3. Both approaches are not completely equivalent, with different
consequences on future usage.

I can't see why they couldn't be equivalent, but I may be missing
something here.

I think we could support both.  I don't see it as an either/or
situation.


4. Being consistent
5. Therefore the community to decide

Pros/Cons
--
A.

I think it's the B: meaningless approach here.


  Pros
- Easier names

That's subjective; creating unique and meaningful names doesn't look
easy to me.

The point is that this allows choice - maybe the user already has some
naming scheme, or wants to use a more "natural" meaningful name -
rather
than being forced into a possibly "awkward" naming scheme with "::"

 keystone_user { 'heat domain admin user':
   name => 'admin',
   domain => 'HeatDomain',
   ...
 }

 keystone_user_role {'heat domain admin user@::HeatDomain':
   roles => ['admin']
   ...
 }


  Cons
- Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.


- Cases where 2 or more resources could exist

This seems to be the hardest part - I still cannot figure out how
to use
"compound" names with Puppet.


- More difficult to debug

More difficult than it is already? :P


- Titles mismatch when listing the resources (self.instances)

B.
  Pros
- Unique titles guaranteed
- No ambiguity between resource found and their title
  Cons
- More complicated titles
My vote

I would love to have the approach A for easier name.
But I've seen the challenge of maintaining the providers behind the
curtains and the confusion it creates with name/titles and when
not sure
about the domain we're dealing with.
Also I believe that supporting self.instances consistently with
meaningful names is saner.
Therefore I vote B

+1 for B.

My view is that this should be the advertised way, but the other
method
(meaningless) should be there if the user need it.

So as far as I'm concerned the two idioms should co-exist.  This
would
mimic what is possible with all puppet resources.  For instance
you can:

  file { '/tmp/foo.bar': ensure => present }

and you can

  file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
present }

The two refer to the same resource.

Right.


I disagree, using the name for the title is not creating a composite
name. The latter requires adding at least another parameter to be part
of the title.

Also in the case of the file resource, a path/filename is a unique
name,
which is not the case of an Openstack user which might exist in several
domains.

I actually added the meaningful name case in:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html


But that doesn't work very well because without adding the domain to
the
name, the following fails:

keystone_tenant {'project_1': domain => 'domain_A', ...}
keystone_tenant {'project_1': domain => 'domain_B', ...}

And adding the domain makes it a de-facto 'composite name'.
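
As an aside on the "compound names" difficulty mentioned above: the usual
Puppet mechanism for this is title_patterns on the type. A generic sketch
(the regexes and parameter names are illustrative, not proposed
puppet-keystone code):

    # Split 'name::domain' composite titles into two namevars, falling
    # back to treating the whole title as the name (illustrative only).
    def self.title_patterns
      identity = lambda { |x| x }
      [
        [ /^(.+)::(.+)$/, [ [:name, identity], [:domain, identity] ] ],
        [ /^(.+)$/,       [ [:name, identity] ] ]
      ]
    end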

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that user in puppet should be used
to have this kind of *interface*, one where your put something
meaningful in the title and one where you put something meaningless.
The fact that the meaningful one is a compound one shouldn't matter to
the user.


There is a big blocker to making use of the domain name as a parameter:
the limitation of autorequire.

This is because autorequire doesn't support any parameter other than the
resource type, and expects the resource title (or a list of them) [1].

So for instance, keystone_user requires the tenant project1 from

Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Monty Taylor

On 10/06/2015 10:52 AM, Sebastian Kalinowski wrote:

I've already written in the review that caused this thread that I do not want
to blindly follow rules for using one or the other. We should always consider
technical requirements. And I do not see a reason to leave py.test (and
nobody has shown me such a reason) and replace it with something else.


Hi!

The reason is that testrepository is what OpenStack uses and as I 
understand it, Fuel wants to join the Big Tent.


The use of testr is documented in the Project Testing Interface:

http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst#n78

There are many reasons for it, but in large part we are continually 
adding more and more tools to process subunit output across the board in 
the Gate. subunit2sql is an important one, as it will be feeding into 
expanded test result dashboards.


We also have zuul features in the pipeline to be able to watch the 
subunit streams in real time to respond more quickly to issues in test runs.


We also have standard job builders based around tox and testr. Having 
project divergence in this area is a non-starter when there are over 800 
repositories.
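
For what it's worth, the per-project boilerplate for this is small: a tox 
environment that invokes testr, plus a .testr.conf roughly like the 
following (the ./fuel/tests discovery path is just an example):

    [DEFAULT]
    test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
                 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
                 ${PYTHON:-python} -m subunit.run discover -t ./ ./fuel/tests $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list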


In short, while I understand that this seems like an area where a 
project can do whatever it wants to, it really isn't. If it's causing 
you excessive pain, I recommend connecting with Robert on ways to make 
improvements to testrepository. Those improvements will also have the 
effect of improving life for the rest of OpenStack, which is also a 
great reason why we all use the same tools rather than foster an 
environment of per-project snowflakes.



Additionally other folks showed that this is not a blocker for moving
under big tent.


I apologize for any confusion that may have resulted from you being 
given erroneous information.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Davanum Srinivas
Sebastian,

I am really hoping that the items Monty listed below are enough. Also, if
you are interested, folks do use other things for running their tests,
especially to find problems when testr hides some errors. For example, please
see:

https://davanum.wordpress.com/2015/01/13/quickly-running-a-single-openstack-nova-test/
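
(The gist of that post: when testr obscures a failure you can bypass it and
run a single test module or test case directly with testtools; the test id
below is only an example.)

    python -m testtools.run nova.tests.unit.compute.test_compute_utils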

Thanks,
Dims



On Tue, Oct 6, 2015 at 5:47 PM, Monty Taylor  wrote:

> On 10/06/2015 10:52 AM, Sebastian Kalinowski wrote:
>
>> I've already written in the review that caused this thread that I do not
>> want to blindly follow rules for using one or the other. We should always
>> consider technical requirements. And I do not see a reason to leave py.test
>> (and nobody has shown me such a reason) and replace it with something else.
>>
>
> Hi!
>
> The reason is that testrepository is what OpenStack uses and as I
> understand it, Fuel wants to join the Big Tent.
>
> The use of testr is documented in the Project Testing Interface:
>
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst#n78
>
> There are many reasons for it, but in large part we are continually adding
> more and more tools to process subunit output across the board in the Gate.
> subunit2sql is an important one, as it will be feeding into expanded test
> result dashboards.
>
> We also have zuul features in the pipeline to be able to watch the subunit
> streams in real time to respond more quickly to issues in test runs.
>
> We also have standard job builders based around tox and testr. Having
> project divergence in this area is a non-starter when there are over 800
> repositories.
>
> In short, while I understand that this seems like an area where a project
> can do whatever it wants to, it really isn't. If it's causing you excessive
> pain, I recommend connecting with Robert on ways to make improvements to
> testrepository. Those improvements will also have the effect of improving
> life for the rest of OpenStack, which is also a great reason why we all use
> the same tools rather than foster an environment of per-project snowflakes.
>
> Additionally other folks showed that this is not a blocker for moving
>> under big tent.
>>
>
> I apologize for any confusion that may have resulted from you being given
> erroneous information.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] conflicting names in python-openstackclient: could we have some exception handling please?

2015-10-06 Thread Thomas Goirand
Hi,

tl;dr: let's add some exception handling so that python-*clients having
conflicting command names aren't a problem anymore, and "openstack help"
always works as much as it can.

Longer version:

This is just a suggestion for contributors to python-openstackclient.

I saw a few packages that had conflicts with the namespace of others
within openstackclient, to the point that typing "openstack help" just
fails. Here's an example:

# openstack help
[ ...]
  project create  Create new project
  project delete  Delete project(s)
  project list   List projects
  project setSet project properties
  project show   Display project details
Could not load EntryPoint.parse('ptr_record_list =
designateclient.v2.cli.reverse:ListFloatingIPCommand')
'ArgumentParser' object has no attribute 'debug'

This first happened to me with saharaclient. Luckily, upgrading to the latest
version fixed it. Then I had the problem with zaqarclient, which I fixed
with a few patches to its setup.cfg. Now it's designate, but this time,
patching setup.cfg doesn't seem to cut it (i.e.: after changing the name
of the command, "openstack help" just fails).

Note: I don't care which project is at fault, this isn't the point here.
The point is that command name conflicts aren't handled (see below),
which is the problem.

With Horizon being a large consumer of nearly all python-*client
packages, removing one of them also removes Horizon in my CI, which is
not what I want to (or can) do just to debug a tempest problem. End of the
story: since Liberty b3, I have never been able to get "openstack help" to
work correctly in my CI... :(

Which leads me to write this:

Since we have a very large number of projects, with each and every one of
them adding new commands to openstackclient, it would be really nice if we
could have some kind of check to make sure that conflicts are either 1/
not possible or 2/ handled gracefully.
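
Something along these lines would already go a long way - a minimal sketch,
assuming pkg_resources-style entry points and a purely illustrative
namespace name (this is not the actual openstackclient loader):

    import logging

    import pkg_resources

    LOG = logging.getLogger(__name__)


    def load_commands(namespace='openstack.cli'):
        """Load command plugins, skipping broken or duplicate ones."""
        commands = {}
        for ep in pkg_resources.iter_entry_points(namespace):
            try:
                factory = ep.load()
            except Exception as exc:
                # A broken plugin should not take down "openstack help".
                LOG.warning('Could not load command %s: %s', ep.name, exc)
                continue
            if ep.name in commands:
                LOG.warning('Duplicate command %s from %s ignored',
                            ep.name, ep.dist)
                continue
            commands[ep.name] = factory
        return commands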

Your thoughts?
Cheers,

Thomas Goirand (zigo)

P.S: It wasn't the point of this message, but do we have a fix for
designateclient? It'd be nice to have this fixed before Liberty is out.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Defect management

2015-10-06 Thread Miguel Angel Ajo
Hi, I was trying to help a bit for now, but I don't have access in
Launchpad to update importance, etc.

I will add comments on the bugs themselves, and set a ? in the
spreadsheet.

Cheers, and thanks Armando for leading this, it's a good change. I will
be happy to take the bug deputy position for 1-2 weeks anytime soon.

Cheers,
Miguel Ángel.


Armando M. wrote:

Hi neutrinos,

Whilst we go down the path of revising the way we manage/process bugs in
Neutron, and transition to the proposed model [1], I was wondering if I can
solicit some volunteers to screen the bugs outlined in [2]. It's only 24
bugs so it should be quick ;)

Btw, you can play with filters and Google sheets Insights to see how well
(or bad) we've done this week.

Cheers,
Armando

[1] https://review.openstack.org/#/c/228733/
[2]
https://docs.google.com/spreadsheets/d/1UpxSOsFKQWN0IF-mN0grFJJap-j-8tnZHmG4f3JYmIQ/edit#gid=1296831500

On 28 September 2015 at 23:06, Armando M.  wrote:


Hi folks,

One of the areas I would like to look into during the Mitaka cycle is
'stability' [1]. The team has done a great job improving test coverage, and
at the same time increasing reliability of the product.

However, regressions are always around the corner, and there is a huge
backlog of outstanding bugs (800+ of new/confirmed/triaged/in progress
actively reported) that pressure the team. Having these slip through the
cracks or leave them lingering is not cool.

To this aim, I would like to propose a number of changes in the way the
team manages defects, and I will be going through the process of proposing
these changes via code review by editing [2] (like done in [3]).

Feedback most welcome.

Many thanks,
Armando


[1]
http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/Neutron/Armando_Migliaccio.txt#n25
[2] http://docs.openstack.org/developer/neutron/policies/index.html
[3] https://review.openstack.org/#/c/228733/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Liberty release finalization

2015-10-06 Thread Thierry Carrez
Tripp, Travis S wrote:
> Thanks for the info! We had discussed the release models a couple times in 
> our IRC meeting and we thought that the release cycle with intermediary 
> releases sounded good to us.  One reason is that we actually wanted to be 
> able to release more frequently if needed, to support deployers and developers 
> interested in moving searchlight into production more quickly. Possibly we 
> would be looking to release whenever we improve an integration with an 
> existing project, support an integration with a new project, enable a new 
> feature, address major bugs, or to address UI integration needs.
> 
> As far as the version number, we feel that we have a good basis for the 
> functionality and API at this point. We’re wanting to start getting deployer 
> feedback and want to be able to make changes needed without getting too hung 
> up on major vs minor version changes. So we’ve voted to go with 0.1.0 to 
> allow us time to solidify based on that with a goal of going to 1.0 by the 
> end of the Mitaka release cycle.
> 
>  
> However, in reading the page you sent below it says the following about 
> common cycle with intermediary releases.
> 
> "This is especially suitable to more stable projects which add a limited set 
> of new features and don’t plan to go through large architectural changes. 
> Getting the latest and greatest out as often as possible, while ensuring 
> stability and upgradeability."
> 
> This description of the release model sounds a bit dissimilar from our ideas 
> above, so is this okay with you that we stay on that release model?

The "intermediary" model requires the project following it to be mature
enough (and the project team following it to be disciplined enough) to
internalize the QA process.

In the "with-milestones" model, you produce development milestones and
release candidates to get the features out early and progressively get
more and more outside testing on proposed artifacts. It's "ok" if a
development milestone is revealed to be unusable: that shows lack of
proper testing coverage, and there is still time to fix things before
the "real" release.

In the "intermediary" model, you deliver fully-usable releases that you
recommend production deployments to upgrade to. There is no alpha, beta
or RC. You directly tag a release. That means you need to be confident
enough in your own testing and testing coverage. Mistakes can still
happen (in which case we rush a subsequent point release) but should
really be exceptional, otherwise nobody will trust your deliverables.

This is why we recommend the "intermediary" model to mature projects and
project teams -- that model requires excellent test coverage and
discipline inside the team to slow down development as you get closer to
a release tag and spend time on testing.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] perfomance benchmark metrics of heat-api

2015-10-06 Thread Sergey Kraynev
On 6 October 2015 at 10:18, Christian Berendt  wrote:
> On 10/06/2015 05:20 AM, ESWAR RAO wrote:
>>
>> Has anyone done any performance tests on heat-api servers on any
>> standard setup, so as to know how many stack requests they can handle
>> before they stumble, so that we can plan scaling of heat servers?
>
>
> It depends on your environment and you should run your own tests. Have a
> look at
> https://github.com/openstack/rally/tree/master/samples/tasks/scenarios/heat
> for a lot of prepared scenarios for Heat.
>
> HTH, Christian.
>
> --
> Christian Berendt
> Cloud Solution Architect
> Mail: bere...@b1-systems.de
>
> B1 Systems GmbH
> Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
> GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


ESWAR RAO, good question!

I don't know about any published results for such performance testing :)

As Christian Berendt said: it really depends on the particular
deployment installation, so the suggestion to use Rally is the best
option :)
Btw, if you do it, feel free to share with the community - it will be
really helpful for us.

-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Nikhil Komawar


On 10/6/15 1:53 PM, Doug Hellmann wrote:
> Excerpts from Flavio Percoco's message of 2015-10-06 23:36:53 +0900:
>> Greetings,
>>
>> Not so long ago, Erno started a thread[0] in this list to discuss the
>> abandon policies for patches that haven't been updated in Glance.
>>
>> I'd like to go forward and start following that policy with some
>> changes that you can find below:
>>
>> 1) Let's do this on patches that haven't had any activity in the last 2
>> months. This adds one more month to Erno's proposal. The reason being
>> that during the last cycle, there were some ups and downs in the review
>> flow that caused some patches to get stuck.
>>
>> 2) Do this just on master, for all patches regardless of whether they fix a
>> bug or implement a spec, and regardless of their review
>> status.
>>
>> 3) The patch will first be marked as WIP and then abandoned if the
>> patch is not updated in 1 week. This will put these patches at the
>> beginning of the queue, but using the Glance review dashboard should
>> help keep focus.
>>
>> Unless there are some critical things missing in the above or strong
>> opinions against this, I'll make this effective starting next Monday,
>> October 12th.
>>
>> Best regards,
>> Flavio
>>
>> [0] 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-February/056829.html
>>
> In the past we've had discussions on the list about how abandoning
> patches can be perceived as hostile to contributors, and that using
> a review dashboard with good filters is a better solution. Since
> you already have a dashboard, I suggest adding a section for patches
> that are old but have no review comments (maybe you already have
> that) and another for patches where the current viewer has voted
> -1. The first highlights the patches for reviewers, and ignores
> them when they are in a state where we're waiting for feedback or
> an update, and the latter provides a list of patches the current
> reviewer is involved in and may need to recheck for new comments.
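
For example, a dashboard section for old, never-reviewed master patches
could be driven by a Gerrit query along these lines (the operators and the
age window are illustrative and worth tuning):

    status:open project:openstack/glance branch:master age:2mon -is:reviewed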

That hostility aspect is the main reason why abandoning has been avoided
until now.

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-06 Thread Sofer Athlan-Guyot
Rich Megginson  writes:

> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 30/09/15 03:43, Rich Megginson wrote:
 On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 15/09/15 06:53, Rich Megginson wrote:
 On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> Hi,
>
> Gilles Dubreuil  writes:
>
>> A. The 'composite namevar' approach:
>>
>>   keystone_tenant {'projectX::domainY': ... }
>> B. The 'meaningless name' approach:
>>
>>  keystone_tenant {'myproject': name='projectX',
>> domain=>'domainY',
>> ...}
>>
>> Notes:
>> - Actually using both combined should work too, with the domain
>> parameter supposedly overriding the domain part of the title.
>> - Please look at [1] this for some background between the two
>> approaches:
>>
>> The question
>> -
>> Decide between the two approaches, the one we would like to
>> retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---
>> 1. Domain names are mandatory in every user, group or project.
>> Besides
>> the backward compatibility period mentioned earlier, where no domain
>> means using the default one.
>> 2. Long term impact
>> 3. Both approaches are not completely equivalent, with different
>> consequences on future usage.
> I can't see why they couldn't be equivalent, but I may be missing
> something here.
 I think we could support both.  I don't see it as an either/or
 situation.

>> 4. Being consistent
>> 5. Therefore the community to decide
>>
>> Pros/Cons
>> --
>> A.
> I think it's the B: meaningless approach here.
>
>>  Pros
>>- Easier names
> That's subjective; creating unique and meaningful names doesn't look
> easy to me.
 The point is that this allows choice - maybe the user already has some
 naming scheme, or wants to use a more "natural" meaningful name -
 rather
 than being forced into a possibly "awkward" naming scheme with "::"

 keystone_user { 'heat domain admin user':
   name => 'admin',
   domain => 'HeatDomain',
   ...
 }

 keystone_user_role {'heat domain admin user@::HeatDomain':
   roles => ['admin']
   ...
 }

>>  Cons
>>- Titles have no meaning!
 They have meaning to the user, not necessarily to Puppet.

>>- Cases where 2 or more resources could exist
 This seems to be the hardest part - I still cannot figure out how
 to use
 "compound" names with Puppet.

>>- More difficult to debug
 More difficult than it is already? :P

>>- Titles mismatch when listing the resources (self.instances)
>>
>> B.
>>  Pros
>>- Unique titles guaranteed
>>- No ambiguity between resource found and their title
>>  Cons
>>- More complicated titles
>> My vote
>> 
>> I would love to have the approach A for easier name.
>> But I've seen the challenge of maintaining the providers behind the
>> curtains and the confusion it creates with name/titles and when
>> not sure
>> about the domain we're dealing with.
>> Also I believe that supporting self.instances consistently with
>> meaningful names is saner.
>> Therefore I vote B
> +1 for B.
>
> My view is that this should be the advertised way, but the other
> method
> (meaningless) should be there if the user need it.
>
> So as far as I'm concerned the two idioms should co-exist.  This
> would
> mimic what is possible with all puppet resources.  For instance
> you can:
>
>  file { '/tmp/foo.bar': ensure => present }
>
> and you can
>
>  file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
> present }
>
> The two refer to the same resource.
 Right.

>>> I disagree, using the name for the title is not creating a composite
>>> name. The latter requires adding at least another parameter to be part
>>> of the title.
>>>
>>> Also in the case of the file resource, a 

Re: [openstack-dev] [Fuel][PTL] PTL Candidates Q Session

2015-10-06 Thread Mike Scherbakov
Looks like we need to make it totally clear in our policy...

> So we assume that detailed architectural work will be relayed to
Component Leads
I don't agree with this statement, as it implies that _all_ architectural
work will be relayed to component leads.

The PTL is fully responsible for the technical direction of the project.
However, since the project is large, I'd expect that the PTL will rely on
Component Leads for most particular technical decisions.
At the same time, the PTL defines the technical direction for the project,
and ensures that component leads as well as others are aligned to it.

If people are not aligned, it's the job of the PTL, in the first order, to:
a) delegate alignment work to Component Lead if possible
b) make yourself available to participate in alignment and resolving
disputes, if component leads can't do it or if there is misalignment
between Component Leads or Component Leads and PTL or Component Leads and
PTLs of other OpenStack projects.

I hope that most Fuelers will find this definition reasonable... but as
I said, we need to spend some time and get it crystal clear in our policy.

On Tue, Oct 6, 2015 at 8:45 AM Vladimir Kuklin  wrote:

> Which is actually contradictory and ambiguous and shows that PTL has less
> power than CLs while CLs at the same time have less power than PTL. I think
> this is the time when universe should collapse as we found that time-space
> is contradicting laws of propositional calculus.
>
> On Tue, Oct 6, 2015 at 6:26 PM, Tomasz Napierala 
> wrote:
>
>> Hi
>>
>> That’s right, but we made slight change here:
>> "Define architecture direction & review majority of design specs. Rely on
>> Component Leads and Core Reviewers"
>>
>> So we assume that detailed architectural work will be relayed to
>> Component Leads
>>
>>
>> > On 02 Oct 2015, at 10:12, Evgeniy L  wrote:
>> >
>> > Hi Mike,
>> >
>> > According to the description of the role, I wouldn't say that the role
>> > is less architectural than political, since the PTL should review designs
>> > and resolve conflicts between cores (which are usually technical). The PTL
>> > should also have strong skills in software architecture, and an
>> > understanding of what Fuel should look like.
>> >
>> > Thanks,
>> >
>> > On Thu, Oct 1, 2015 at 11:32 PM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>> > > we may mix technical direction / tech debt roadmap and process,
>> political, and people management work of PTL.
>> > sorry, of course I meant that we rather should NOT mix these things.
>> >
>> > To make my email very short, I'd say PTL role is more political and
>> process-wise rather than architectural.
>> >
>> > On Wed, Sep 30, 2015 at 5:48 PM Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>> > Vladimir,
>> > we may mix technical direction / tech debt roadmap and process,
>> political, and people management work of PTL.
>> >
>> > PTL definition in OpenStack [1] reflects many things which PTL becomes
>> responsible for. This applies to Fuel as well.
>> >
>> > I'd like to reflect some things here which I'd expect PTL doing, most
>> of which will intersect with [1]:
>> > - Participate in cross-project initiatives & resolution of issues
>> around it. Great example is puppet-openstack vs Fuel [2]
>> > - Organize required processes around launchpad bugs & blueprints
>> > - Personal personal feedback to Fuel contributors & public suggestions
>> when needed
>> > - Define architecture direction & review majority of design specs. Rely
>> on Component Leads and Core Reviewers
>> > - Ensure that roadmap & use cases are aligned with architecture work
>> > - Resolve conflicts between core reviewers, component leads. Get people
>> to the same page
>> > - Watch for code review queues and quality of reviews. Ensure
>> discipline of code review.
>> > - Testing / coverage have to be at the high level
>> >
>> > Considering all above, contributors actually have been working with all
>> of us and know who could be better handling such a hard work. I don't think
>> special Q is needed. If there are concerns / particular process/tech
>> questions we'd like to discuss - those should be just open as email threads.
>> >
>> > [1] https://wiki.openstack.org/wiki/PTL_Guide
>> > [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/066685.html
>> >
>> > Thank you,
>> >
>> > On Tue, Sep 29, 2015 at 3:47 AM Vladimir Kuklin 
>> wrote:
>> > Folks
>> >
>> > I think it is awesome we have three candidates for PTL position in
>> Fuel. I read all candidates' emails (including mine own several times :-) )
>> and I got a slight thought of not being able to really differentiate the
>> candidates platforms as they are almost identical from the high-level point
>> of view. But we all know that the devil is in details. And this details
>> will actually affect project future.
>> >
>> > Thus I thought about Q session at #fuel-dev channel in IRC. 

Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Monty Taylor

On 10/06/2015 06:01 PM, Thomas Goirand wrote:

On 10/06/2015 01:14 PM, Yuriy Taraday wrote:

On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko wrote:

 Atm I have the following pros. and cons. regarding testrepository:

 pros.:

 1. It’s ”standard" in OpenStack so using it gives Fuel more karma
 and moves it more under big tent


I don't think that big tent model aims at eliminating diversity of tools
we use in our projects. A collection of web frameworks used in big tent
is an example of that.


 From the downstream distro point of view, I don't agree in general, and
with the web framework in particular. (though it's less a concern for
the testr vs pbr). We keep adding dependencies and duplicates, but never
remove them. For example, tablib and suds/sudsjurko need to be removed
because they are not maintainable; it's not much work to do so, but
nobody does the work...


The Big Tent has absolutely no change in opinion about eliminating 
diversity of tools. OpenStack has ALWAYS striven to reduce diversity of 
tools. Big Tent applies OpenStack to more things that request to be part 
of OpenStack.


Nothing has changed in the intent.

Diversity of tools in a project this size is a bad idea. Always has 
been. Always will be.


The number of web frameworks in use is a bug.


 2. It’s in global requirements, so it doesn’t cause dependency hell

That can be solved by adding py.test to openstack/requirements.


No, it cannot. py.test/testr is not about dependency management. It's 
about a much bigger picture of how OpenStack does development and how 
that development can be managed.



I'd very much prefer if we could raise the barrier for getting a 3rd
party new dependency in. I hope we can talk about this in Tokyo. That
being said, indeed, adding py.test isn't so much of a problem, as it is
widely used, already packaged, and maintained upstream. I'd still prefer
if all projects were using the same testing framework and test runner
though.


As I said earlier in this thread, it has already been decided by the TC 
long ago that we will use testr. Barring a (very unlikely) TC rescinding 
of that decision, OpenStack projects use testr. There is zero value in 
expanding the number of test runners.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Couple new bugs for liberty

2015-10-06 Thread Tim Hinrichs
Hi all,

Another round of manual testing revealed a couple more bugs.  The ones at
the bottom without the Fix-committed:

https://bugs.launchpad.net/congress/+bugs/?field.tag=liberty-rc2

I have a patch for 1503392 in review.

It'd be great if someone could pick up
https://bugs.launchpad.net/congress/+bug/1503443

After that, I think we'll be about done with liberty.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long running task

2015-10-06 Thread Renat Akhmerov
Roman,

Here are some things that may help you:

1. In Mistral we've been aware of this post-message-processing ACK problem 
since we began to use oslo.messaging, and we've been communicating with the 
oslo team in order to fix it. Patch [1] is supposed to help us finally solve 
it. I would encourage you to participate in that effort too, to make sure it 
matches your understanding of the problem. We've also seen the bug [2] that 
you filed at Launchpad, so we'll be updating its status.

2. As far as Mistral HA goes: it is actually supported by design, but there 
are a number of issues with its implementation. Not that it's HA info, but 
FYI, there are existing Mistral installations working in production with 
multiple Mistral engines, executors and API servers, although I have to 
admit that it's not so easy yet to make such installations work reliably. 
Generally, we keep working on it, and we have huge plans for making Mistral 
HA in the Mitaka cycle. A significant part of the design sessions in Tokyo 
will be exactly about HA, which includes a lot of things: proper testing, 
profiling, identifying points of failure and overall performance improvement 
(which is also one of the things influencing overall robustness).

3. As far as the task you're trying to solve, I can say that, IMO, Mistral 
is a good candidate for this, just because it's really a standalone reliable 
service that can take execution of a long process under its control. This is 
one of the main ideas behind it. Currently we are planning to address 
similar cases with Mistral within our company. I think we'll share the 
results once we get something done and described.
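
To make that concrete, a sketch of what such a workflow could look like in 
Mistral's v2 DSL is below. Everything in it is illustrative - the 
nova.servers_evacuate action name, the inputs and the retry numbers are 
assumptions, not a tested workflow:

    ---
    version: '2.0'

    evacuate_host:
      description: Illustrative sketch of a supervised evacuation workflow.
      input:
        - host
        - servers  # instance ids on the failed host (assumed to be passed in)
      tasks:
        evacuate_servers:
          # Assumed auto-generated nova action; argument names illustrative.
          with-items: server in <% $.servers %>
          action: nova.servers_evacuate server=<% $.server %>
          retry:
            count: 3
            delay: 10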

Thanks for bringing this up. And I'll say what I usually do: you're very 
welcome to contribute to Mistral; it should be fun to do.

Looking forward to hear more from you about your discoveries.

[1] https://review.openstack.org/#/c/229186/ 

[2] https://bugs.launchpad.net/mistral/+bug/1502120 


Renat Akhmerov
@ Mirantis Inc.



> On 02 Oct 2015, at 19:05, Roman Dobosz  wrote:
> 
> Hi all,
> 
> The case of automatic evacuation (or resurrection currently), is a topic 
> which surfaces once in a while, but it isn't yet fully supported by 
> OpenStack and/or by the cluster services. There were some attempts to 
> bring the feature into OpenStack; however, it turns out it cannot be 
> easily integrated. On the other hand, evacuation may be executed 
> from the outside using Nova client or Nova API calls for evacuation 
> initiation.
> 
> I did some research regarding the ways how it could be designed, based 
> on Russel Bryant blog post[1] as a starting point. Apart from it, I've 
> also taken high availability and reliability into consideration when 
> designing the solution.
> 
> Together with a coworker, we did a first PoC[2] to enable the cluster to be able 
> to perform evacuation. The idea behind that PoC was simple - providing 
> additional, small service which would trigger and supervise the 
> evacuation process, which would be triggered from the outside (in this 
> example we were using Pacemaker fencing facility, but it might be 
> anything) using RabbitMQ directly. Those services are running on the 
> control plane in AA fashion.
> 
> That worked well for us. So we started exploring other possibilities like 
> oslo.messaging, just to use it in the same manner as we did in the PoC.  
> It turns out that the implementation will not be as easy, because there 
> is no facility in oslo.messaging for letting the client send an ACK 
> after the job is done (not as soon as it gets the message). We 
> also looked at the existing OpenStack projects for a candidate which 
> provides a service for managing long-running tasks.
> 
> There is the Mistral project, which gives us almost all the features we 
> need. The one missing feature is the HA of the Mistral tasks execution.
> 
> The question is, how such problem (long running tasks) could be resolved 
> in OpenStack?
> 
> [1] http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
> [2] https://github.com/dawiddeja/evacuationd
> 
> -- 
> Cheers,
> Roman Dobosz
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Thomas Goirand
On 10/06/2015 01:14 PM, Yuriy Taraday wrote:
> On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko wrote:
> 
> Atm I have the following pros. and cons. regarding testrepository:
> 
> pros.:
> 
> 1. It’s ”standard" in OpenStack so using it gives Fuel more karma
> and moves it more under big tent
> 
> 
> I don't think that big tent model aims at eliminating diversity of tools
> we use in our projects. A collection of web frameworks used in big tent
> is an example of that.

From the downstream distro point of view, I don't agree in general, and
with the web framework in particular. (though it's less a concern for
the testr vs pbr). We keep adding dependencies and duplicates, but never
remove them. For example, tablib and suds/sudsjurko need to be removed
because they are not maintainable; it's not much work to do so, but
nobody does the work...

> 2. It’s in global requirements, so it doesn’t cause dependency hell
> 
> That can be solved by adding py.test to openstack/requirements.

I'd very much prefer if we could raise the barrier for getting a 3rd
party new dependency in. I hope we can talk about this in Tokyo. That
being said, indeed, adding py.test isn't so much of a problem, as it is
widely used, already packaged, and maintained upstream. I'd still prefer
if all projects were using the same testing framework and test runner
though.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Security spec status update

2015-10-06 Thread Kevin Carter
Great work on this Major! I look forward to seeing the role built out and 
adding it into the stack.

If anyone out there is interested, the greater OpenStack-Ansible community 
would love feedback on the initial role import [0].

[0] - https://review.openstack.org/#/c/231165 

--

Kevin Carter
IRC: cloudnull



From: Major Hayden 
Sent: Friday, October 2, 2015 2:19 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Security spec status update

Hello there,

A couple of people were asking me about the status of the security spec[1] for 
openstack-ansible.  Here are a few quick updates as of today:

  * We've moved away from considering CIS temporarily due to licensing and 
terms of use issues
  * We're currently adapting the RHEL 6 STIG[2] for Ubuntu 14.04
  * There are lots of tasks coming together in a temporary repository[3]
  * Documentation is up on ReadTheDocs[4] (temporarily)

At this point, we have 181 controls left to evaluate (out of 264[5]).  Feel 
free to hop into #openstack-ansible and ask any questions you have about the 
work.
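
As an example of the shape these tasks take, a STIG-style hardening task 
might look like the sketch below; the control chosen and its value are 
illustrative, not taken from the actual role:

    # Illustrative STIG-style hardening task; not from the actual role.
    - name: Disable the kernel magic SysRq key (example hardening control)
      sysctl:
        name: kernel.sysrq
        value: 0
        state: present
      tags:
        - security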

[1] 
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/security-hardening.html
[2] http://iase.disa.mil/stigs/Pages/index.aspx
[3] https://github.com/rackerlabs/openstack-ansible-security
[4] http://openstack-ansible-security.readthedocs.org/en/latest/
[5] https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #54

2015-10-06 Thread Colleen Murphy
On Mon, Oct 5, 2015 at 5:48 AM, Emilien Macchi <emil...@redhat.com> wrote:

> Hello!
>
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
>
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151006
>
> Feel free to add any additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.
>
> Note: I'll be at the airport for my trip to Portland (Puppetconf 2015).
> Colleen will lead the meeting if my flight is on-time and I'll be
> probably afk.
>
> Regards,
> --
> Emilien Macchi
>
Our meeting notes from this morning are here:

http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-06-15.00.html

Thanks!

Colleen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-06 Thread Deepak Shetty
On Thu, Oct 1, 2015 at 3:32 PM, John Spray  wrote:

> On Thu, Oct 1, 2015 at 8:36 AM, Deepak Shetty  wrote:
> >
> >
> > On Thu, Sep 24, 2015 at 7:19 PM, John Spray  wrote:
> >>
> >> Hi all,
> >>
> >> I've recently started work on a CephFS driver for Manila.  The (early)
> >> code is here:
> >> https://github.com/openstack/manila/compare/master...jcsp:ceph
> >>
> >
> > 1) README says driver_handles_share_servers=True, but code says
> >
> > + if share_server is not None:
> > + log.warning("You specified a share server, but this driver doesn't use
> > that")
>
> The warning is just for my benefit, so that I could see which bits of
> the API were pushing a share server in.  This driver doesn't care
> about the concept of a share server, so I'm really just ignoring it
> for the moment.
>
> > 2) Would it good to make the data_isolated option controllable from
> > manila.conf config param ?
>
> That's the intention.
>
> > 3) CephFSVolumeClient - it sounds more like CephFSShareClient; any reason
> > you chose the word 'Volume' instead of 'Share'? Volumes remind me of RBD
> > volumes, hence the Q.
>
> The terminology here is not standard across the industry, so there's
> not really any right term.  For example, in docker, a
> container-exposed filesystem is a "volume".  I generally use volume to
> refer to a piece of storage that we're carving out, and share to refer
> to the act of making that visible to someone else.  If I had been
> writing Manila originally I wouldn't have called shares shares :-)
>
> The naming in CephFSVolumeClient will not be the same as Manilas,
> because it is not intended to be Manila-only code, though that's the
> first use for it.
>
> > 4) IIUC there is no need to do access_allow/deny in the cephfs use case?
> > It looks like after create_share you put the cephx keyring in the client
> > and it can access the share, as long as the client has network access to
> > the ceph cluster. The doc says you don't use the IP-address-based access
> > method, so which method is used in the access_allow flow?
>
> Currently, as you say, a share is accessible to anyone who knows the
> auth key (created at the time the share is created).
>
> For adding the allow/deny path, I'd simply create and remove new ceph
> keys for each entity being allowed/denied.
>
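
(For illustration, the approach John describes would presumably surface in
the driver as something like the sketch below; the 'cephx' access type, the
volume_client.authorize() call and the exception class are assumptions, not
actual driver code.)

    def allow_access(self, context, share, access, share_server=None):
        # Hypothetical sketch: map an access rule onto a ceph auth
        # identity instead of an IP-based rule.
        if access['access_type'] != 'cephx':
            raise exception.InvalidShareAccess(
                reason="only 'cephx' access type is supported")
        # Create (or look up) a ceph auth key for this entity and hand
        # it back to the caller.
        auth = self.volume_client.authorize(
            self._share_path(share), access['access_to'])
        return auth['auth_key']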

Ok, but how does that map to the existing Manila access types (IP, User,
Cert)?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Defect management

2015-10-06 Thread Ihar Hrachyshka
> 
> On 05 Oct 2015, at 17:27, Armando M.  wrote:
> 
> 
> 
> On 5 October 2015 at 03:14, Ihar Hrachyshka  wrote:
> > On 02 Oct 2015, at 02:37, Armando M.  wrote:
> >
> > Hi neutrinos,
> >
> > Whilst we go down the path of revising the way we manage/process bugs in 
> > Neutron, and transition to the proposed model [1], I was wondering if I can 
> > solicit some volunteers to screen the bugs outlined in [2]. It's only 24 
> > bugs so it should be quick ;)
> 
> I am fine to be a guinea pig, though I don’t have edit access to the 
> spreadsheet.
> 
> Ihar
> 
> Thanks Ihar, the sheet is editable now. Polishing Launchpad should suffice 
> though.

I walked through all the bugs shown in the spreadsheet, and
tagged/marked/assigned where applicable.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] What to do when a controller runs out of space

2015-10-06 Thread Eugene Nikanorov
On Tue, Oct 6, 2015 at 4:22 PM, Vladimir Kuklin 
wrote:

> Eugene
>
> For example, each time that you need to have one instance (e.g. master
> instance) of something non-stateless running in the cluster.
>

Right. This is theoretical. Practically, there are no such services in
OpenStack.

You are right that currently lots of things are fixed already - heat engine
> is fine, for example. But I still see this issue with l3 agents and I will
> not change my mind until we conduct complete scale and destructive testing
> with new neutron code.
>
> Secondly, if we cannot reliably identify when to engage - then we need to
> write the code that will tell us when to engage. If this code is already in
> place and we can trigger a couple of commands to figure out Neutron agent
> state, then we can add them to the OCF script monitor and that is all. I agree
> that we have some issues with our OCF scripts, for example some suboptimal
> cleanup code that has issues at big scale, but I am almost sure we can
> fix it.
>
> Finally, let me show an example of when you need a centralized cluster
> manager to manage such situations - you have a temporary issue with
> connectivity to neutron server over management network for some reason.
> Your agents are not cleaned up and neutron server starts new l3 agent
> instances on different node. In this case you will have IP duplication in
> the network and will bring down the whole cluster as connectivity through
> 'public' network will be working just fine. In case when we are using
> Pacemaker - such node will be either fenced or will stop all the services
> controlled by pacemaker as it is a part of non-quorate partition of the
> cluster. When this happens, l3 agent OCF script will run its cleanup
> section and purge all the stale IPs thus saving us from the trouble. I
> obviously may be mistaken, so please correct me if this is not the case.
>
I think this deserves discussion in a separate thread, which I'll start
soon.
My initial point, to state it clearly, was that I will be -2 on any new
additions of OpenStack services to the Pacemaker kingdom.

Thanks,
Eugene.

>
>
> On Tue, Oct 6, 2015 at 3:46 PM, Eugene Nikanorov 
> wrote:
>
>>
>>
>>> 2) I think you misunderstand the difference between
>>> upstart/systemd and Pacemaker in this case. There are many cases when you
>>> need to have a synchronized view of the cluster. Otherwise you will hit
>>> split-brain situations and have your cluster malfunctioning. Until
>>> OpenStack provides us with such means there is no other way than using
>>> Pacemaker/Zookeeper/etc.
>>>
>>
>> Could you please give some examples of those 'many cases' for openstack
>> specifically?
>> As for my 'misunderstanding' - openstack services only need to be always
>> up, not more than that.
>> Upstart does a perfect job there.
>>
>>
>>> 3) Regarding Neutron agents - we discussed it many times - you need to
>>> be able to control and clean up stuff after some service crashed.
>>> Currently, Neutron does not provide reliable ways to do it. If your agent
>>> dies and does not clean up IP addresses from the network namespace you will
>>> get into the situation of ARP duplication which will be a kind of split
>>> brain described in item #2. I personally as a system architect and
>>> administrator do not believe this will change in at least several years
>>> for OpenStack so we will be using Pacemaker for a very long period of time.
>>>
>>
>> This has been changed already, and a while ago.
>> OCF infrastructure around neutron agents has never helped neutron in any
>> meaningful way and is just an artifact from the dark past.
>> The reasons are: pacemaker/ocf doesn't have enough intelligence to know
>> when to engage; as a result, any cleanup could only be achieved through
>> manual operations. I don't need to remind you how many bugs were in ocf
>> scripts which brought whole clusters down after those manual operations.
>> So it's just way better to go with simple standard tools with
>> fine-grained control.
>> Same applies to any other openstack service (again, not rabbitmq/galera)
>>
>> > so we will be using Pacemaker for a very long period of time.
>> Not for neutron, sorry. As soon as we finish the last bit of such
>> cleanup, which is targeted for 8.0
>>
>> Now, back to the topic - we may decide to use some more sophisticated
>>> integral node health attribute which can be used with Pacemaker as well as
> to put a node into some kind of maintenance mode. We can leverage the User
>>> Maintenance Mode feature here or just simply stop particular services and
>>> disable particular haproxy backends.
>>>
>>
>> I think this kind of attribute, although being analyzed by pacemaker/ocf,
>> doesn't need any new OS service to be put under pacemaker control.
>>
>> Thanks,
>> Eugene.
>>
>>
>>>
>>> On Mon, Oct 5, 2015 at 11:57 PM, Eugene Nikanorov <
>>> enikano...@mirantis.com> wrote:
>>>

>>
> Mirantis does 

Re: [openstack-dev] [fuel] What to do when a controller runs out of space

2015-10-06 Thread Eugene Nikanorov
> 2) I think you misunderstand the difference between
> upstart/systemd and Pacemaker in this case. There are many cases when you
> need to have a synchronized view of the cluster. Otherwise you will hit
> split-brain situations and have your cluster malfunctioning. Until
> OpenStack provides us with such means there is no other way than using
> Pacemaker/Zookeeper/etc.
>

Could you please give some examples of those 'many cases' for openstack
specifically?
As for my 'misunderstanding' - openstack services only need to be always
up, not more than that.
Upstart does a perfect job there.
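
To make the comparison concrete, the kind of upstart job meant here is just
(a generic sketch, not an actual Fuel job file; names and paths are
placeholders):

    # /etc/init/neutron-server.conf
    description "neutron-server"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    respawn limit 10 5
    exec /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf

The 'respawn' stanza is all that's needed for the always-up behaviour.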


> 3) Regarding Neutron agents - we discussed it many times - you need to be
> able to control and clean up stuff after some service crashed. Currently,
> Neutron does not provide reliable ways to do it. If your agent dies and
> does not clean up IP addresses from the network namespace you will get into
> the situation of ARP duplication which will be a kind of split brain
> described in item #2. I personally as a system architect and administrator
> do not believe this will change in at least several years for OpenStack
> so we will be using Pacemaker for a very long period of time.
>

This has been changed already, and a while ago.
OCF infrastructure around neutron agents has never helped neutron in any
meaningful way and is just an artifact from the dark past.
The reasons are: pacemaker/ocf doesn't have enough intelligence to know
when to engage; as a result, any cleanup could only be achieved through
manual operations. I don't need to remind you how many bugs were in ocf
scripts which brought whole clusters down after those manual operations.
So it's just way better to go with simple standard tools with fine-grained
control.
Same applies to any other openstack service (again, not rabbitmq/galera)

> so we will be using Pacemaker for a very long period of time.
Not for neutron, sorry. As soon as we finish the last bit of such cleanup,
which is targeted for 8.0

Now, back to the topic - we may decide to use some more sophisticated
> integral node health attribute which can be used with Pacemaker as well as
> to put a node into some kind of maintenance mode. We can leverage the User
> Maintenance Mode feature here or just simply stop particular services and
> disable particular haproxy backends.
>

I think this kind of attribute, although being analyzed by pacemaker/ocf,
doesn't need any new OS service to be put under pacemaker control.

Thanks,
Eugene.


>
> On Mon, Oct 5, 2015 at 11:57 PM, Eugene Nikanorov  > wrote:
>
>>

>>> Mirantis controls neither RabbitMQ nor Galera. Mirantis cannot assure
>>> their quality either.
>>>
>>
>> Correct, and rabbitmq was always the pain in the back, preventing any
>> *real* enterprise usage of openstack where reliability does matter.
>>
>>
>>> > 2) it has terrible UX

>>>
>>> It looks like a personal opinion. I'd like to see surveys or operators'
>>> feedback. Also, this statement is not constructive as it doesn't have
>>> alternative solutions.
>>>
>>
>> The solution is to get rid of terrible UX wherever possible (I'm not
>> saying it is always possible, of course).
>> Upstart is just so much better.
>> And yes, this is my personal opinion and is a summary of escalation
>> team's experience.
>>
>>
>>>
 > 3) it is not reliable

>>>
>>> I would say openstack services are not HA reliable, so OCF scripts are
>>> the operators' reaction to these problems. Many of them have childish
>>> issues from release to release. Operators made OCF scripts to fix these
>>> problems. A lot of openstack services are stateful, so they require some
>>> kind of stickiness or synchronization. Openstack services don't have
>>> simple health-check functionality, so it's hard to say whether a service
>>> is running well or not. SIGHUP is still a problem for many openstack
>>> services. Etc., etc. So, let's be constructive here.
>>>
>>
>> Well, I prefer to be responsible for what I know and maintain. Thus, I
>> state that neutron doesn't need to be managed by pacemaker, neither the
>> server nor any of the agents, and that's the path that the neutron team
>> will be taking.
>>
>> Thanks,
>> Eugene.
>>
>>>
>>>
 >

I disagree with #1 as I do not agree that should be a criterion for an
 open-source project.  Considering pacemaker is at the core of our
 controller setup, I would argue that if these are in fact true we need
 to be using something else.  I would agree that it is a terrible UX
 but all the clustering software I've used fall in this category.  I'd
 like more information on how it is not reliable. Do we have numbers to
back up these claims?

> (3) is not evaluation of the project itself, but just a logical consequence
> of (1) and (2).
> As a part of the escalation team I can say that it has cost our team
> thousands of man-hours of head-scratching, staring at pacemaker logs whose
> value is usually slightly below zero.

Re: [openstack-dev] [Neutron] Defect management

2015-10-06 Thread Ihar Hrachyshka
> On 06 Oct 2015, at 10:36, Miguel Angel Ajo  wrote:
> 
> Hi, I was trying to help a bit, for now, but I don't have access in launchpad 
> to update importance,
> etc.
> 
> I will add comments on the bugs themselves, and set a ? in the 
> spreadsheet.

I believe you need to be a member of one of the following groups to have access 
to all LP fields:

https://launchpad.net/~neutron-bugs
https://launchpad.net/~neutron-drivers

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] What to do when a controller runs out of space

2015-10-06 Thread Vladimir Kuklin
Eugene

I would prefer to tackle your points in the following way:

1) Regarding rabbitmq - you and I both know that this is a major flaw in how
OpenStack operates - it uses the message broker in a very suboptimal way,
sending lots of unneeded data through it when it may not need to send it at
all. So far we have hardened our automated control of rabbitmq as much as
possible, and the only issues we see are those when nodes are already under
very stressful conditions, such as one of the OpenStack services consuming
95% of available memory. I doubt that such a case should be handled by
Pacemaker or any other supervisor - they just will not help you. The proper
thing to do is fix OpenStack itself so that it does not overload the
messaging bus, and use the built-in capabilities of the RDBMS and other
underlying components.

2) I think you misunderstand the difference between upstart/systemd
and Pacemaker in this case. There are many cases when you need to have a
synchronized view of the cluster. Otherwise you will hit split-brain
situations and have your cluster malfunctioning. Until OpenStack provides
us with such means there is no other way than using Pacemaker/Zookeeper/etc.

3) Regarding Neutron agents - we discussed it many times - you need to be
able to control and clean up stuff after some service crashed. Currently,
Neutron does not provide reliable ways to do it. If your agent dies and
does not clean up IP addresses from the network namespace you will get into
the situation of ARP duplication which will be a kind of split brain
described in item #2. I personally as a system architect and administrator
do not believe this will change in at least several years for OpenStack
so we will be using Pacemaker for a very long period of time.

Now, back to the topic - we may decide to use some more sophisticated
integral node health attribute which can be used with Pacemaker as well as
to put a node into some kind of maintenance mode. We can leverage the User
Maintenance Mode feature here or just simply stop particular services and
disable particular haproxy backends.

On Mon, Oct 5, 2015 at 11:57 PM, Eugene Nikanorov 
wrote:

>
>>>
>> Mirantis controls neither RabbitMQ nor Galera. Mirantis cannot assure
>> their quality either.
>>
>
> Correct, and rabbitmq was always the pain in the back, preventing any
> *real* enterprise usage of openstack where reliability does matter.
>
>
>> > 2) it has terrible UX
>>>
>>
>> It looks like a personal opinion. I'd like to see surveys or operators'
>> feedback. Also, this statement is not constructive as it doesn't have
>> alternative solutions.
>>
>
> The solution is to get rid of terrible UX wherever possible (I'm not
> saying it is always possible, of course).
> Upstart is just so much better.
> And yes, this is my personal opinion and is a summary of escalation team's
> experience.
>
>
>>
>>> > 3) it is not reliable
>>>
>>
>> I would say openstack services are not HA reliable, so OCF scripts are
>> the operators' reaction to these problems. Many of them have childish
>> issues from release to release. Operators made OCF scripts to fix these
>> problems. A lot of openstack services are stateful, so they require some
>> kind of stickiness or synchronization. Openstack services don't have
>> simple health-check functionality, so it's hard to say whether a service
>> is running well or not. SIGHUP is still a problem for many openstack
>> services. Etc., etc. So, let's be constructive here.
>>
>
> Well, I prefer to be responsible for what I know and maintain. Thus, I
> state that neutron doesn't need to be managed by pacemaker, neither the
> server nor any of the agents, and that's the path that the neutron team
> will be taking.
>
> Thanks,
> Eugene.
>
>>
>>
>>> >
>>>
>>> I disagree with #1 as I do not agree that should be a criterion for an
>>> open-source project.  Considering pacemaker is at the core of our
>>> controller setup, I would argue that if these are in fact true we need
>>> to be using something else.  I would agree that it is a terrible UX
>>> but all the clustering software I've used fall in this category.  I'd
>>> like more information on how it is not reliable. Do we have numbers to
>>> back up these claims?
>>>
>>> > (3) is not evaluation of the project itself, but just a logical
>>> > consequence of (1) and (2).
>>> > As a part of the escalation team I can say that it has cost our team
>>> > thousands of man-hours of head-scratching, staring at pacemaker logs
>>> > whose value is usually slightly below zero.
>>> >
>>> > Most of openstack services (in fact, ALL api servers) are stateless;
>>> > they don't require any cluster management (also, they don't need to be
>>> > moved in case of lack of space).
>>> > Stateful services like neutron agents have their states being a
>>> > function of db state and are able to synchronize it with the server
>>> > without external "help".
>>> >
>>>
>>> So it's not an issue with moving services so much as being able to
>>> stop the services 

Re: [openstack-dev] [fuel] What to do when a controller runs out of space

2015-10-06 Thread Vladimir Kuklin
Eugene

For example, each time that you need to have one instance (e.g. master
instance) of something non-stateless running in the cluster. You are right
that currently lots of things are fixed already - heat engine is fine, for
example. But I still see this issue with l3 agents and I will not change my
mind until we conduct complete scale and destructive testing with new
neutron code.

Secondly, if we cannot reliably identify when to engage - then we need to
write the code that will tell us when to engage. If this code is already in
place and we can trigger a couple of commands to figure out Neutron agent
state, then we can add them to the OCF script monitor and that is all. I agree
that we have some issues with our OCF scripts, for example some suboptimal
cleanup code that has issues at big scale, but I am almost sure we can
fix it.

Finally, let me show an example of when you need a centralized cluster
manager to manage such situations - you have a temporary issue with
connectivity to neutron server over management network for some reason.
Your agents are not cleaned up and neutron server starts new l3 agent
instances on different node. In this case you will have IP duplication in
the network and will bring down the whole cluster as connectivity through
'public' network will be working just fine. In case when we are using
Pacemaker - such node will be either fenced or will stop all the services
controlled by pacemaker as it is a part of non-quorate partition of the
cluster. When this happens, l3 agent OCF script will run its cleanup
section and purge all the stale IPs thus saving us from the trouble. I
obviously may be mistaken, so please correct me if this is not the case.


On Tue, Oct 6, 2015 at 3:46 PM, Eugene Nikanorov 
wrote:

>
>
>> 2) I think you misunderstand the difference between
>> upstart/systemd and Pacemaker in this case. There are many cases when you
>> need to have a synchronized view of the cluster. Otherwise you will hit
>> split-brain situations and have your cluster malfunctioning. Until
>> OpenStack provides us with such means there is no other way than using
>> Pacemaker/Zookeeper/etc.
>>
>
> Could you please give some examples of those 'many cases' for openstack
> specifically?
> As for my 'misunderstanding' - openstack services only need to be always
> up, not more than that.
> Upstart does a perfect job there.
>
>
>> 3) Regarding Neutron agents - we discussed it many times - you need to be
>> able to control and clean up stuff after some service crashed. Currently,
>> Neutron does not provide reliable ways to do it. If your agent dies and
>> does not clean up IP addresses from the network namespace you will get into
>> the situation of ARP duplication which will be a kind of split brain
>> described in item #2. I personally as a system architect and administrator
>> do not believe this will change in at least several years for OpenStack
>> so we will be using Pacemaker for a very long period of time.
>>
>
> This has been changed already, and a while ago.
> OCF infrastructure around neutron agents has never helped neutron in any
> meaningful way and is just an artifact from the dark past.
> The reasons are: pacemaker/ocf doesn't have enough intelligence to know
> when to engage; as a result, any cleanup could only be achieved through
> manual operations. I don't need to remind you how many bugs were in ocf
> scripts which brought whole clusters down after those manual operations.
> So it's just way better to go with simple standard tools with fine-grained
> control.
> Same applies to any other openstack service (again, not rabbitmq/galera)
>
> > so we will be using Pacemaker for a very long period of time.
> Not for neutron, sorry. As soon as we finish the last bit of such cleanup,
> which is targeted for 8.0
>
> Now, back to the topic - we may decide to use some more sophisticated
>> integral node health attribute which can be used with Pacemaker as well as
>> to put a node into some kind of maintenance mode. We can leverage the User
>> Maintenance Mode feature here or just simply stop particular services and
>> disable particular haproxy backends.
>>
>
> I think this kind of attribute, although being analyzed by pacemaker/ocf,
> doesn't need any new OS service to be put under pacemaker control.
>
> Thanks,
> Eugene.
>
>
>>
>> On Mon, Oct 5, 2015 at 11:57 PM, Eugene Nikanorov <
>> enikano...@mirantis.com> wrote:
>>
>>>
>
Mirantis controls neither RabbitMQ nor Galera. Mirantis cannot
assure their quality either.

>>>
>>> Correct, and rabbitmq was always the pain in the back, preventing any
>>> *real* enterprise usage of openstack where reliability does matter.
>>>
>>>
 > 2) it has terrible UX
>

It looks like a personal opinion. I'd like to see surveys or operators'
feedback. Also, this statement is not constructive as it doesn't have
 alternative solutions.

>>>
>>> The solution is to get rid of terrible 

Re: [openstack-dev] [Neutron] Defect management

2015-10-06 Thread Armando M.
On 6 October 2015 at 06:10, Ihar Hrachyshka  wrote:

> > On 06 Oct 2015, at 10:36, Miguel Angel Ajo  wrote:
> >
> > Hi, I was trying to help a bit, for now, but I don't have access in
> launchpad to update importance,
> > etc.
> >
> > I will add comments on the bugs themselves, and set a ? in the
> spreadsheet.
>
> I believe you need to be a member of one of the following groups to have
> access to all LP fields:
>
> https://launchpad.net/~neutron-bugs
> https://launchpad.net/~neutron-drivers


No, I think only neutron-bugs should do. I saw Akihiro gave you rights, so
you should be good.

Kudos to you for helping!

Cheers,
Armando


>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate][Ceilometer] Liberty RC2 available

2015-10-06 Thread Thierry Carrez
Hello everyone,

Due to release-critical issues spotted in Designate and Ceilometer
during RC1 testing (as well as last-minute translations imports), new
release candidates were created for Liberty. The list of RC2 fixes, as
well as RC2 tarballs are available at:

https://launchpad.net/designate/liberty/liberty-rc2
https://launchpad.net/ceilometer/liberty/liberty-rc2

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, these tarballs will be formally released as
final "Liberty" versions on October 15. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/liberty branch at:
http://git.openstack.org/cgit/openstack/designate/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/ceilometer/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/designate/+filebug
or
https://bugs.launchpad.net/ceilometer/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-06 Thread Paul Carlton

https://review.openstack.org/#/c/85048/ was raised to address the
migration of instances that are not running, but people did not warm to
the idea of bringing a stopped/suspended instance to a paused state to
migrate it.  Is there any work in progress to get libvirt enhanced to
perform the migration of non-active virtual machines?

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile: +44 (0)7768 994283
Email: paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Yuriy Taraday
On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko  wrote:

> Atm I have the following pros. and cons. regarding testrepository:
>
> pros.:
>
> 1. It's "standard" in OpenStack, so using it gives Fuel more karma and
> moves it more under the big tent
>

I don't think that the big tent model aims at eliminating the diversity of
tools we use in our projects. The collection of web frameworks used in the
big tent is an example of that.

2. It’s in global requirements, so it doesn’t cause dependency hell
>

That can be solved by adding py.test to openstack/requirements.

cons.:
> 1. Debugging is really hard
>

I'd say that debugging here is not the right term. Every aspect of
developing with testr is harder than with py.test. py.test tends to just
work, where with testr you need additional tools and effort.

In general I don't see any benefit the project can get from using testr
while its limitations will bite developers at every turn.
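
To make the debugging point concrete (the test path below is made up, the
flags are not):

    # py.test: run a single test, stop on first failure, drop into pdb
    py.test nailgun/test/unit/test_node.py -k test_assignment -x --pdb

    # testr captures stdout/stderr, so interactive pdb doesn't work under
    # it; the usual workaround is to bypass the runner entirely, e.g.:
    python -m testtools.run nailgun.test.unit.test_node.TestNode.test_assignment
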
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-06 Thread Vilobh Meshram
Thanks everyone!

I really appreciate this. Happy to join Magnum-Core :)

We have a great team, very diverse and very dedicated. It's a pleasure to
work with all of you.

Thanks,
Vilobh

On Mon, Oct 5, 2015 at 5:26 PM, Adrian Otto 
wrote:

> Team,
>
> In accordance with our consensus and the current date/time, I hereby
> welcome Vilobh and Hua as new core reviewers, and have added them to the
> magnum-core group. I will announce this addition at tomorrow’s team meeting
> at our new time of 1600 UTC (no more alternating schedule, remember?).
>
> Thanks,
>
> Adrian
>
> On Oct 1, 2015, at 7:33 PM, Jay Lau  wrote:
>
> +1 for both! Welcome!
>
> On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu  wrote:
>
>> +1 for both. Welcome!
>>
>>
>>
>> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
>> *Sent:* September-30-15 7:00 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>>
>>
>>
>> +1 from me for both Vilobh and Hua.
>>
>>
>>
>> Thanks,
>>
>> Dims
>>
>>
>>
>> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
>> wrote:
>>
>> Core Reviewers,
>>
>> I propose the following additions to magnum-core:
>>
>> +Vilobh Meshram (vilobhmm)
>> +Hua Wang (humble00)
>>
>> Please respond with +1 to agree or -1 to veto. This will be decided by
>> either a simple majority of existing core reviewers, or by lazy consensus
>> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>>
>> Thanks,
>>
>> Adrian Otto
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> --
>>
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Julien Danjou
On Tue, Oct 06 2015, Flavio Percoco wrote:

I send patches to Glance from time to time, and they usually get 0
reviews for *weeks* (sometimes months), because, well, there are no
reviewers active in Glance, so:

> 1) Let's do this on patches that haven't had any activity in the last 2
> months. This adds one more month to Erno's proposal. The reason being
> that during the last cycle, there were some ups and downs in the review
> flow that caused some patches to get stuck.

This is going to expire my patches that nobody cares about and that are
improving the code or fixing stuff people didn't encounter (yet).

> 3) The patch will be first marked as a WIP and then abandoned if the
> patch is not updated in 1 week. This will put these patches at the
> beginning of the queue, but using the Glance review dashboard should
> help keep focus.

Why WIP? If a patch is complete and waiting for reviewers I'm not sure
it helps.

The problem is that nobody is reviewing Glance patches (except you
recently it seems). That's not going to solve that. That's just going to
hide the issues under the carpet by lowering the total of patches that
need review…

My 2c,

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Victor Stinner

Hi,

Le 06/10/2015 16:36, Flavio Percoco a écrit :

Not so long ago, Erno started a thread[0] in this list to discuss the
abandon policies for patches that haven't been updated in Glance.
(...)
1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.


Please don't do that. I sent a patch in June (20) and it was only 
reviewed in October (4)... There was no activity simply because I had 
nothing to add, everything was explained in the commit message, I was 
only waiting for a review...


I came on #openstack-glance to ask for reviews several times between
August and September, but nobody reviewed my patches (there was al.


Example of patch: https://review.openstack.org/#/c/193786/ (now merged)

It would be very frustrating to have to resend the same patch over and over.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-06 Thread Paul Carlton



On 06/10/15 17:30, Chris Friesen wrote:

On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:

On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:

https://review.openstack.org/#/c/85048/ was raised to address the
migration of instances that are not running, but people did not warm to
the idea of bringing a stopped/suspended instance to a paused state to
migrate it.  Is there any work in progress to get libvirt enhanced to
perform the migration of non-active virtual machines?


Libvirt can "migrate" the configuration of an inactive VM, but does
not plan to do anything related to storage migration. OpenStack could
already solve this itself by using libvirt storage pool APIs to
copy storage volumes across, but the storage pool work in Nova
has stalled

https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z 



What is the libvirt API to migrate a paused/suspended VM? Currently 
nova uses dom.managedSave(), so it doesn't know what file libvirt used 
to save the state.  Can libvirt migrate that file transparently?


I had thought we might switch to virDomainSave() and then use the cold 
migration framework, but that requires passwordless ssh.  If there's a 
way to get libvirt to handle it internally via the storage pool API 
then that would be better.


Chris

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


So my reading of this is that the issue could be addressed in Mitaka by
implementing
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
and
https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst

Is there any prospect of this being progressed?

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile: +44 (0)7768 994283
Email: paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Prepare for expiration bugs without activity

2015-10-06 Thread ZZelle
Hi everyone,


As decided during the last neutron meeting[1], we are letting Launchpad
expire outdated bugs.

The status of every bug without activity in the last year has been set to
Incomplete and their assignee/milestone unset in order to let Launchpad
expire them in 60 days[2].
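
For reference, a bulk update like that can be scripted with launchpadlib
roughly as follows (a sketch, not necessarily the exact script that was
used):

    from datetime import datetime, timedelta
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('bug-expiry', 'production')
    neutron = lp.projects['neutron']
    cutoff = datetime.utcnow() - timedelta(days=365)

    for task in neutron.searchTasks(
            status=['New', 'Confirmed', 'Triaged', 'In Progress']):
        bug = task.bug
        if bug.date_last_updated.replace(tzinfo=None) < cutoff:
            task.status = 'Incomplete'  # expirable once unassigned
            task.assignee = None
            task.milestone = None
            task.lp_save()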

It gives us a 60-day window to "revive" expirable bugs[3] which are (sadly
:)) still valid (by changing their status).


Cedric/ZZelle@IRC

PS: you can contact me if you have any questions

[1]
http://eavesdrop.openstack.org/meetings/networking/2015/networking.2015-10-06-14.00.txt
[2] https://help.launchpad.net/Bugs/Expiry
[3] https://bugs.launchpad.net/neutron/+expirable-bugs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Liberty release finalization

2015-10-06 Thread Tripp, Travis S


On 10/6/15, 2:28 AM, "Thierry Carrez"  wrote:

>
>The "intermediary" model requires the project following it to be mature
>enough (and the project team following it to be disciplined enough) to
>internalize the QA process.
>
>In the "with-milestones" model, you produce development milestones and
>release candidates to get the features out early and progressively get
>more and more outside testing on proposed artifacts. It's "ok" if a
>development milestone is revealed to be unusable: that shows lack of
>proper testing coverage, and there is still time to fix things before
>the "real" release.
>
>In the "intermediary" model, you deliver fully-usable releases that you
>recommend production deployments to upgrade to. There is no alpha, beta
>or RC. You directly tag a release. That means you need to be confident
>enough in your own testing and testing coverage. Mistakes can still
>happen (in which case we rush a subsequent point release) but should
>really be exceptional, otherwise nobody will trust your deliverables.
>
>This is why we recommend the "intermediary" model to mature projects and
>project teams -- that model requires excellent test coverage and
>discipline inside the team to slow down development as you get closer to
>a release tag and spend time on testing.
>
>-- 
>Thierry Carrez (ttx)

Thierry,

Thanks again for the information. After quite a bit of discussion in our IRC 
channel this morning, we think it does make sense to start with the milestones 
as recommended.  So, I’ve gone ahead and applied the rc1 tag and will follow up 
with you in the openstack-relmgr-office for next steps!

Thanks,
Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] A larger batch of questions about configuring DevStack to use Neutron

2015-10-06 Thread Sean M. Collins
On Tue, Oct 06, 2015 at 11:25:03AM EDT, Mike Spreitzer wrote:
> [Sorry, but I do not know if the thundering silence is because these 
> questions are too hard, too easy, grossly off-topic, or simply because 
> nobody cares.]

You sent your first e-mail on a Saturday. I saw it and flagged it for
reply, but have not had a chance yet. It's only Tuesday. I do care and
your questions are important. I will say though that it's a little
difficult to answer your e-mail because of formatting and your thoughts
seem to jump around. This is not intended as a personal criticism, it's
just a little difficult to follow your e-mail in order to reply.


> In the section 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> there is a helpful display of localrc contents.  It says, among other 
> things,
> 
>OVS_PHYSICAL_BRIDGE=br-ex
>PUBLIC_BRIDGE=br-ex
> 
> In the next top-level section, 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
> , there is no display of revised localrc contents and no mention of 
> changing either bridge setting.  That is an oversight, right?

No, this is deliberate. Each section is meant to be independent, since
each networking configuration and corresponding DevStack configuration
is different. Of course, this may need to be explicitly stated in the
guide, so there is always room for improvement. For example, there needs
to be some editing done for that doc - the part about disabling the
firewall is just dropped in the middle of the doc and breaks the flow -
among other things. This is obviously not helpful to a new reader and we
need to fix it.


> I am 
> guessing I need to set OVS_PHYSICAL_BRIDGE and PUBLIC_BRIDGE to different
> values, and the exhibited `ovs-vsctl` commands in this section apply to 
> $OVS_PHYSICAL_BRIDGE.  Is that right?  Are there other revisions I need to 
> make to localrc?

No, this is not correct.

What does your networking layout look like on the DevStack node that you
are trying to configure?


> 
> Looking at 
> http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html (or, in
> former days, the doc now preserved at 
> http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/under_the_hood_openvswitch.html
> ) I see the name br-ex used for $PUBLIC_BRIDGE --- not $OVS_PHYSICAL_BRIDGE
> , right?  Wouldn't it be less confusing if 
> http://docs.openstack.org/developer/devstack/guides/neutron.html used a
> name other than "br-ex" for the exhibited commands that apply to 
> $OVS_PHYSICAL_BRIDGE?

No, this is deliberate - br-ex is the bridge that is used for external
network traffic - such as floating IPs and public IP address ranges. On
the network node, a physical interface is attached to br-ex so that
traffic will flow.
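
Typically something along these lines, where the physical interface name
(eth1 here) is site-specific:

    sudo ovs-vsctl add-br br-ex
    sudo ovs-vsctl add-port br-ex eth1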

PUBLIC_BRIDGE is a carryover from DevStack's Nova-Network support and is
used in some places, with OVS_PHYSICAL_BRIDGE being used by DevStack's
Neutron support, for the Open vSwitch driver specifically. They are two
variables that for the most part serve the same purpose. Frankly,
DevStack has a lot of problems with configuration knobs, and
PUBLIC_BRIDGE and OVS_PHYSICAL_BRIDGE are just a symptom.


> The section 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch
> builds on 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-multiple-interfaces
> NOT 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> --- right?  Could I stop after reading that section, or must I go on to 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks

See my previous statement - each section is supposed to be independent.

> ?
> 
> The exhibited localrc contents in section 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> include both of these:
> 
>Q_L3_ENABLED=True
>Q_USE_PROVIDERNET_FOR_PUBLIC=True

Yes they do. They are there for a reason: it's how to configure DevStack
in such a way that you end up with a network configuration in Neutron
that fits the diagram that is shown in the section.



> 
> and nothing gainsays either of them until section 
> http://docs.openstack.org/developer/devstack/guides/neutron.html#neutron-networking-with-open-vswitch-and-provider-networks
> --- where we first see
> 
>Q_L3_ENABLED=False
> 
> Is it true that all the other sections want both Q_L3_ENABLED and
> Q_USE_PROVIDERNET_FOR_PUBLIC to be True?

No. If they are omitted from the other sections, that is intentional.

> 
> I tried adding IPv6 support to the recipe of the first section (
> http://docs.openstack.org/developer/devstack/guides/neutron.html#using-neutron-with-a-single-interface
> ).  I added this to my localrc:
> 
> IP_VERSION=4+6

[openstack-dev] [searchlight] Mitaka Summit Sessions

2015-10-06 Thread Tripp, Travis S
Hi Searchlighters,

We need to have our summit sessions decided by October 15.

I’ve put up an etherpad capturing some of the initial ideas we had discussed
previously in IRC.

https://etherpad.openstack.org/p/searchlight-mitaka-summit

Please jump on there and add comments, suggestions, alternate proposals and 
let’s see if we can finalize the sessions in our IRC meeting this week.

Thanks!
Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-06 Thread Ihar Hrachyshka
> On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
> 
> On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
>> Hi all,
>> 
>> I talked recently with several contributors about what each of us plans for 
>> the next cycle, and found it’s quite useful to share thoughts with others, 
>> because you have immediate yay/nay feedback, and maybe find companions for 
>> next adventures, and what not. So I’ve decided to ask everyone what you see 
>> the team and you personally doing the next cycle, for fun or profit.
>> 
>> That’s like a PTL nomination letter, but open to everyone! :) No 
>> commitments, no deadlines, just list random ideas you have in mind or in 
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no 
>> one will ever have time to implement even if scheduled for Xixao release.
>> 
>> To start the fun, I will share my silly ideas in the next email.
>> 
>> Ihar
> 
> Could we have an oslo-config-generator flat neutron.conf as a release goal
> for Mitaka as well? The current configuration layout makes it difficult
> for distributions to catch up with a working-by-default config.

Good idea. I think we had some patches for that. I will try to keep it on my 
plate for M.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Nikhil Komawar
Overall I think this is a good idea and the time frame proposal also
looks good. A few suggestions inline.

On 10/6/15 10:36 AM, Flavio Percoco wrote:
> Greetings,
>
> Not so long ago, Erno started a thread[0] in this list to discuss the
> abandon policies for patches that haven't been updated in Glance.
>
> I'd like to go forward and start following that policy with some
> changes that you can find below:
>
> 1) Let's do this on patches that haven't had any activity in the last 2
> months. This adds one more month to Erno's proposal. The reason being
> that during the last cycle, there were some ups and downs in the review
> flow that caused some patches to get stuck.
>

+2. I think 2 months is a reasonable time frame. Though, I think this
should be done on the glance, python-glanceclient and glance-store repos
and not glance-specs. Specs sometimes need to sit and wait while
discussion happens in other places, and then a gist is added back to the
spec.

> 2) Do this just on master, for all patches regardless of whether they fix a
> bug or implement a spec, and for all patches regardless of their review
> status.
>

+2. No comments, looks clean.

> 3) The patch will be first marked as a WIP and then abandoned if the
> patch is not updated in 1 week. This will put these patches at the
> beginning of the queue, but using the Glance review dashboard should
> help keep focus.
>

While I think one may give someone an email/IRC heads-up if the
proposer doesn't show up, and we will use the context and wisdom of the
feedback, this sorta seems to imply a general case where a developer
is new and their intent to get a patch in within one cycle isn't clear.

> Unless there are some critical things missing in the above or strong
> opinions against this, I'll make this effective starting next Monday
> October 12th.
>

I added some comments above for possible brainstorming. No serious
objections, looking forward to this cleanup process.
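
As an aside, candidates for such a sweep should be easy to list with a
Gerrit search along these lines (exact operator syntax depends on the
Gerrit version):

    status:open project:openstack/glance age:2mon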

> Best regards,
> Flavio
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2015-February/056829.html
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Nikhil Komawar
This again becomes a part of the undefined priority concept. It's hard
to come up with a list of priorities all at the beginning of the cycle,
so conflicting priority items may get missed every now and then. I hope
the dashboard will help; however, what Victor is describing is more of a
people problem than a process problem.

To solve that, here's the context:
If a change is big, has a spec associated with it, or doesn't have a bug
associated with it, it is unlikely to catch the notice of the cores
easily, let alone other reviewers. Something that would help is to
propose a topic for the weekly meeting and, if you can't attend, delegate
it to someone who will -- I have seen it help catch the attention of the
cores. Specs and other big changes are often discussed at the drivers
meeting, so that's a good place to bring this up too. It doesn't have to
be a long discussion; just a heads-up often suffices.

On 10/6/15 11:54 AM, Victor Stinner wrote:
> Hi,
>
> Le 06/10/2015 16:36, Flavio Percoco a écrit :
>> Not so long ago, Erno started a thread[0] in this list to discuss the
>> abandon policies for patches that haven't been updated in Glance.
>> (...)
>> 1) Let's do this on patches that haven't had any activity in the last 2
>> months. This adds one more month to Erno's proposal. The reason being
>> that during the last cycle, there were some ups and downs in the review
>> flow that caused some patches to get stuck.
>
> Please don't do that. I sent a patch in June (20) and it was only
> reviewed in October (4)... There was no activity simply because I had
> nothing to add, everything was explained in the commit message, I was
> only waiting for a review...
>
> I came on #openstack-glance to ask for reviews several times between
> August and September, but nobody reviewed my patches (there was al.
>
> Example of patch: https://review.openstack.org/#/c/193786/ (now merged)
>
> It would be very frustrating to have to resend the same patch over and
> over.
>
> Victor
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-06 Thread Nikhil Komawar
I think Glance reviewer bandwidth is pretty low and the feedback time
can be quite high. For specs, I had requested a section describing the
core reviewer who will be vouching for your spec to be added to the spec
itself. I think in general we have not seen anyone doing that strongly.

If a spec isn't proposed as a priority at the beginning of a cycle at the
summit, and it's a big change that is not yet decided to an extent, it
becomes quite tricky to get it into the project in that cycle. As general
feedback, I have always recommended bringing feature proposals to summit
sessions, mid-cycles, meetings, etc. If a feature has missed those, other
proposals have gotten priority. Also, there were priorities set at the
beginning of the cycle, and if a proposal isn't on that list, it becomes
tricky to make a call for a new spec to get merged in one cycle.



Happy to help more on the process, review speed, and activities' context
if you want input.

On 10/6/15 11:52 AM, Julien Danjou wrote:
> On Tue, Oct 06 2015, Flavio Percoco wrote:
>
> I send patches to Glance from time to time, and they usually get 0
> reviews for *weeks* (sometimes months), because, well, there are no
> reviewers active in Glance, so:
>
>> 1) Let's do this on patches that haven't had any activity in the last 2
>> months. This adds one more month to Erno's proposal. The reason being
>> that during the last cycle, there were some ups and downs in the review
>> flow that caused some patches to get stuck.
> This is going to expire my patches that nobody cares about and that are
> improving the code or fixing stuff people didn't encounter (yet).
>
>> 3) The patch will be first marked as a WIP and then abandoned if the
>> patch is not updated in 1 week. This will put these patches at the
>> beginning of the queue, but using the Glance review dashboard should
>> help keep focus.
> Why WIP? If a patch is complete and waiting for reviewers I'm not sure
> it helps.
>
> The problem is that nobody is reviewing Glance patches (except you
> recently it seems). That's not going to solve that. That's just going to
> hide the issues under the carpet by lowering the total of patches that
> need review…
>
> My 2c,
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-06 Thread Chris Friesen

On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:

On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:

https://review.openstack.org/#/c/85048/ was raised to address the
migration of instances that are not running, but people did not warm to
the idea of bringing a stopped/suspended instance to a paused state to
migrate it.  Is there any work in progress to get libvirt enhanced to
perform the migration of non-active virtual machines?


Libvirt can "migrate" the configuration of an inactive VM, but does
not plan to do anything related to storage migration. OpenStack could
already solve this itself by using libvirt storage pool APIs to
copy storage volumes across, but the storage pool work in Nova
has stalled

https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z


What is the libvirt API to migrate a paused/suspended VM?  Currently nova uses 
dom.managedSave(), so it doesn't know what file libvirt used to save the state. 
 Can libvirt migrate that file transparently?


I had thought we might switch to virDomainSave() and then use the cold migration 
framework, but that requires passwordless ssh.  If there's a way to get libvirt 
to handle it internally via the storage pool API then that would be better.
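
For reference, the two libvirt-python calls being contrasted here look
roughly like this (domain name and paths are made up):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # managedSave: libvirt picks and tracks the state file itself, so the
    # caller never learns its name.
    dom.managedSave(0)

    # virDomainSave: the caller chooses the file, so it could in principle
    # be copied to the target host and restored there:
    # dom.save('/var/lib/nova/instances/save/instance-00000001.sav')
    # conn.restore('/var/lib/nova/instances/save/instance-00000001.sav')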


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-06 Thread Thomas Goirand
On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
> Hi all,
> 
> I talked recently with several contributors about what each of us plans for 
> the next cycle, and found it’s quite useful to share thoughts with others, 
> because you have immediate yay/nay feedback, and maybe find companions for 
> next adventures, and what not. So I’ve decided to ask everyone what you see 
> the team and you personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
> no deadlines, just list random ideas you have in mind or in your todo lists, 
> and we’ll all appreciate the huge pile of awesomeness no one will ever have 
> time to implement even if scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.
> 
> Ihar

Could we have an oslo-config-generator flat neutron.conf as a release goal
for Mitaka as well? The current configuration layout makes it difficult
for distributions to catch up with a working-by-default config.
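
For context, the generator is driven by a small config file roughly like
this (the namespace list is illustrative; neutron's real one would be
longer):

    # etc/oslo-config-generator/neutron.conf
    [DEFAULT]
    output_file = etc/neutron/neutron.conf.sample
    wrap_width = 79
    namespace = neutron
    namespace = oslo.messaging
    namespace = oslo.db

    # generate with:
    #   oslo-config-generator --config-file etc/oslo-config-generator/neutron.conf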

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Igor Kalnitsky
Hey Roman,

> It's "standard" in OpenStack, so using it gives Fuel more karma
> and moves it more under the big tent

As far as I understand, it doesn't affect our movement under the big tent.

> It’s in global requirements, so it doesn’t cause dependency hell

Honestly, I have no idea how py.test could cause dependency hell. It
has almost no dependencies:

* py
* colorama for windows
* argparse for python 2.6 and 3.0

So the only possible intersection with global-requirements would be the
'py' package. Well, I checked, and I didn't find the 'py' package in there.

Summarizing, let's be practical - using py.test is convenient. It has
beautiful reports out of the box, can be extended with a lot of
available plugins, and so on. I'd prefer to keep it.
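
To make the convenience argument concrete, a minimal example (the module
under test is made up):

    # test_sample.py
    def reverse_words(s):
        return ' '.join(reversed(s.split()))

    def test_reverse_words():
        # a plain assert is enough; on failure py.test prints both
        # sides of the comparison, no assertEqual boilerplate needed
        assert reverse_words('hello world') == 'world hello'

Running it is just `py.test test_sample.py` -- no .testr.conf, no
discovery configuration.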

Thanks,
Igor

On Tue, Oct 6, 2015 at 2:14 PM, Yuriy Taraday  wrote:
> On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko  wrote:
>>
>> Atm I have the following pros. and cons. regarding testrepository:
>>
>> pros.:
>>
>> 1. It’s "standard" in OpenStack so using it gives Fuel more karma and
>> moves it more under big tent
>
>
> I don't think that the big tent model aims at eliminating the diversity of
> tools we use in our projects. The collection of web frameworks used in the
> big tent is an example of that.
>
>> 2. It’s in global requirements, so it doesn’t cause dependency hell
>
>
> That can be solved by adding py.test to openstack/requirements.
>
>> cons.:
>> 1. Debugging is really hard
>
>
> I'd say that debugging here is not the right term. Every aspect of
> developing with testr is harder than with py.test. py.test tends to just
> work, whereas with testr you need additional tools and effort.
>
> In general I don't see any benefit the project can get from using testr
> while its limitations will bite developers at every turn.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Keystone] Liberty RC2 available

2015-10-06 Thread Thierry Carrez
Hello everyone,

Due to release-critical issues spotted in Cinder and Keystone during RC1
testing (as well as last-minute translations imports), new release
candidates were created for Liberty. The list of RC2 fixes, as well as
RC2 tarballs are available at:

https://launchpad.net/cinder/liberty/liberty-rc2
https://launchpad.net/keystone/liberty/liberty-rc2

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, these tarballs will be formally released as
final "Liberty" versions on October 15. You are therefore strongly
encouraged to test and validate these tarballs!

Alternatively, you can directly test the stable/liberty branch at:
http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug
or
https://bugs.launchpad.net/keystone/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] AZ Support

2015-10-06 Thread Takashi Yamamoto
On Mon, Oct 5, 2015 at 10:41 PM, Ihar Hrachyshka  wrote:
>> On 05 Oct 2015, at 15:32, Gary Kotton  wrote:
>>
>>
>>
>> On 10/5/15, 3:21 AM, "Ihar Hrachyshka"  wrote:
>>
 On 04 Oct 2015, at 19:21, Gary Kotton  wrote:

 Sorry, it is not a result of the AZ support (humble apologies)

 It is a result of https://review.openstack.org/#/c/226362/
>>>
>>> So you use the DHCP agent with a non-ml2 plugin, and it broke you. Do you
>>> think it's ok for you to change the RPC topic to work with Mitaka, or
>>> we'll need to handle it more gracefully, e.g. by detecting the reply
>>> timeout and switching back to the old topic?
>>
>> My thinking is that we should fail to use the old topic. But then again, I
>> would expect that the Neutron service would first be upgraded and then the
>> agents. Updating the agents first would be a recipe for disaster.
>
> Yes, controller first, then agents. That’s why there was no fallback
> mechanism in the patch that broke you, even though we looked into backwards
> compatibility. Though no one considered the case of 3rd-party plugins that
> may rely on some in-tree agents. We can accommodate them, if it seems too
> much effort for them to change the topic name they report on. But note that
> it would also mean that those plugins don’t utilize the separate threads to
> handle reports, which can be bad for their scalability.

I made a fix for midonet, just FYI:
https://review.openstack.org/#/c/230333/
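
For anyone else hitting this: the failure mode is just an RPC topic
mismatch. A rough oslo.messaging sketch (topic names are illustrative):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)

    # Agents now send state reports to a dedicated reports topic...
    reports = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='q-reports-plugin'))

    # ...so an agent or out-of-tree plugin still calling report_state
    # on the old shared topic will see oslo_messaging.MessagingTimeout,
    # because nothing consumes that call there any more.
    legacy = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='q-plugin'), timeout=60)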

>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev