Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Matthew Treinish
On Wed, Dec 16, 2015 at 01:12:13PM -0600, Matt Riedemann wrote:
> I'm not entirely sure what the geo distribution is for everyone that works
> on stable, but I know we have people in Europe and some people in Australia.
> So I was thinking alternating weekly meetings:
> 
> Mondays at 2100 UTC
> 
> Tuesdays at 1500 UTC
> 
> Does that at least sort of work for people that would be interested in
> attending a meeting about stable? I wouldn't expect a full hour discussion,
> my main interests are highlighting status, discussing any issues that come
> up in the ML or throughout the week, and whatever else people want to go
> over (work items, questions, process discussion, etc).
> 

Works for me.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Naming Polls for N and O are open

2015-12-16 Thread Monty Taylor

Hey everybody!

The naming polls for N and O have started. You should have received an 
email for each of them. They'll be open until the end of Dec 22 UTC. 
Once there are presumptive winners in each, please remember that we will 
then have the names vetted by the OpenStack Foundation's lawyers, so it 
may still be a little bit before we announce official winners.


Good luck to all of the names.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][gnocchi] 'bad' resource_id

2015-12-16 Thread Lu, Lianhao
Hi stackers,

In ceilometer, some metrics (e.g. network.incoming.bytes for a VM net interface,
hardware.network.incoming.bytes for a host net interface,
compute.node.cpu.percentage for nova compute node host CPU utilization, etc.)
don't have their resource_id in UUID format (which is required by gnocchi).
Instead, they have something like <id>.<sub-id> as their resource_id, and in
some cases even the <id> part won't be in UUID format. Gnocchi will treat this
kind of resource_id as a bad id and build a new UUID-format resource_id for it.
Since users mostly use the resource_id to identify their resources, changing
the resource_id they passed in requires extra effort from them to identify the
resources in gnocchi and link them back with the resources they originally
passed in.
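
For illustration only (this is not necessarily gnocchi's exact code): the
one-way translation described above can be pictured as a deterministic
uuid5 mapping. A minimal sketch, with a made-up namespace UUID:

    import uuid

    # Hypothetical namespace; the actual namespace UUID gnocchi uses may differ.
    RESOURCE_ID_NAMESPACE = uuid.UUID('00000000-0000-0000-0000-000000000000')

    def to_gnocchi_resource_id(original_id):
        """Keep valid UUIDs as-is; otherwise derive a deterministic UUID."""
        try:
            return str(uuid.UUID(original_id))
        except ValueError:
            return str(uuid.uuid5(RESOURCE_ID_NAMESPACE, original_id))

    # to_gnocchi_resource_id('compute1.eth0') always yields the same UUID,
    # but the original string cannot be recovered from that UUID alone.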

It seems there are several options to handle this kind of 'bad' resource_id
problem. I'm writing this email to ask for your opinions.

1. Create new resource types in gnocchi and store the original resource_id
information as new resource attributes on each specific type. This might
require adding different new code in gnocchi for each type of metric with a
'bad' resource_id, but it would give users fine-grained control and make them
aware they're dealing with special resource types with 'bad' resource_ids.

2. Add a new resource attribute, original_resource_id, to the generic resource
type, so that it is inherited by all resource types in gnocchi. This won't
require adding new code for resources with 'bad' ids, but might require adding
a new db index on original_resource_id for resource searches.

Any comments or suggestions?

Best Regards,
-Lianhao Lu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][gnocchi] 'bad' resource_id

2015-12-16 Thread Chris Dent

On Wed, 16 Dec 2015, Lu, Lianhao wrote:


In ceilometer, some metrics (e.g. network.incoming.bytes for a VM net
interface, hardware.network.incoming.bytes for a host net interface,
compute.node.cpu.percentage for nova compute node host CPU
utilization, etc.) don't have their resource_id in UUID format (which
is required by gnocchi). Instead, they have something like
<id>.<sub-id> as their resource_id, and in some cases even the <id>
part won't be in UUID format. Gnocchi will treat this kind of
resource_id as a bad id and build a new UUID-format resource_id for
it. Since users mostly use the resource_id to identify their
resources, changing the resource_id they passed in requires extra
effort from them to identify the resources in gnocchi and link them
back with the resources they originally passed in.


Just for the sake of completeness, can you describe the use cases
where the resource_id translation that gnocchi does fails to help?
The one-way translation is used in the body of search queries as well
as in any URL that contains a resource_id.

I'm sure there are use cases where it breaks down, but I've not
heard them enumerated explicitly.

Thanks.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Vilobh Meshram
The intent here in Magnum is to enforce quotas on resources owned by Magnum
(the number of bays, etc., that a user in a project is allowed to create).

+1 to Lee that "Resources created by Magnum COEs should be governed by
existing quota policies governing said resources (e.g. Nova and vCPUs)."

-Vilobh
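
For illustration only (not part of the actual proposal or existing Magnum
code): a minimal sketch of the kind of per-project hard-limit check being
discussed, assuming SQLAlchemy and a table shaped like the magnum.quotas
schema quoted below. Names and defaults are placeholders:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Quota(Base):
        __tablename__ = 'quotas'
        id = Column(Integer, primary_key=True)
        project_id = Column(String(255), index=True)
        resource = Column(String(255), nullable=False)   # e.g. 'bay'
        hard_limit = Column(Integer)

    def quota_allows(session, project_id, resource, current_count,
                     default_limit=None):
        """Return True if creating one more `resource` stays within quota."""
        limit = (session.query(Quota.hard_limit)
                 .filter_by(project_id=project_id, resource=resource)
                 .scalar())
        if limit is None:
            limit = default_limit      # fall back to a configured default
        return limit is None or current_count + 1 <= limit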


On Wed, Dec 16, 2015 at 12:19 PM, Fox, Kevin M  wrote:

> keypairs are real though. they consume database resource at the moment.
> You don't want a user blowing out your db. Quota's should be for things
> that ops will get sad over, if the users consume too many of them.
>
> Thanks,
> Kevin
> 
> From: Tim Bell [tim.b...@cern.ch]
> Sent: Wednesday, December 16, 2015 11:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources
>
> > -Original Message-
> > From: Clint Byrum [mailto:cl...@fewbar.com]
> > Sent: 15 December 2015 22:40
> > To: openstack-dev 
> > Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> > Resources
> >
> > Hi! Can I offer a counter point?
> >
> > Quotas are for _real_ resources.
> >
>
> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
>
> > Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost
> > real money and cannot be conjured from thin air. As such, the user being
> > able to allocate 1 billion or 2 containers is not limited by Magnum, but
> by real
> > things that they must pay for. If they have enough Nova quota to allocate
> 1
> > billion tiny pods, why would Magnum stop them? Who actually benefits from
> > that limitation?
> >
> > So I suggest that you not add any detailed, complicated quota system to
> > Magnum. If there are real limitations to the implementation that Magnum
> > has chosen, such as we had in Heat (the entire stack must fit in memory),
> > then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> > memory quotas be the limit, and enjoy the profit margins that having an
> > unbound force multiplier like Magnum in your cloud gives you and your
> > users!
> >
> > Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
> > > Hi All,
> > >
> > > Currently, it is possible to create unlimited number of resource like
> > > bay/pod/service/. In Magnum, there should be a limitation for user or
> > > project to create Magnum resource, and the limitation should be
> > > configurable[1].
> > >
> > > I proposed following design :-
> > >
> > > 1. Introduce new table magnum.quotas
> > > +------------+--------------+------+-----+---------+----------------+
> > > | Field      | Type         | Null | Key | Default | Extra          |
> > > +------------+--------------+------+-----+---------+----------------+
> > > | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> > > | created_at | datetime     | YES  |     | NULL    |                |
> > > | updated_at | datetime     | YES  |     | NULL    |                |
> > > | deleted_at | datetime     | YES  |     | NULL    |                |
> > > | project_id | varchar(255) | YES  | MUL | NULL    |                |
> > > | resource   | varchar(255) | NO   |     | NULL    |                |
> > > | hard_limit | int(11)      | YES  |     | NULL    |                |
> > > | deleted    | int(11)      | YES  |     | NULL    |                |
> > > +------------+--------------+------+-----+---------+----------------+
> > >
> > > resource can be Bay, Pod, Containers, etc.
> > >
> > >
> > > 2. API controller for quota will be created to make sure basic CLI
> > > commands work.
> > >
> > > quota-show, quota-delete, quota-create, quota-update
> > >
> > > 3. When the admin specifies a quota of X number of resources to be
> > > created the code should abide by that. For example if hard limit for
> Bay
> is 5
> > (i.e.
> > > a project can have maximum 5 Bay's) if a user in a project tries to
> > > exceed that hardlimit it won't be allowed. Similarly goes for other
> > resources.
> > >
> > > 4. Please note the quota validation only works for resources created
> > > via Magnum. Could not think of a way that Magnum to know if a COE
> > > specific utilities created a resource in background. One way could be
> > > to see the difference between whats stored in magnum.quotas and the
> > > information of the actual resources created for a particular bay in
> k8s/COE.
> > >
> > > 5. Introduce a config variable to set quotas values.
> > >
> > > If everyone agrees will start the changes by introducing quota
> > > restrictions on Bay creation.
> > >
> > > Thoughts ??
> > >
> > >
> > > -Vilobh
> > >
> > > [1] 

[openstack-dev] [Manila] Generic share groups

2015-12-16 Thread Knight, Clinton
Hello, Manila-philes.

In last week's Manila IRC meeting, I briefly outlined a proposal for a generic 
share grouping facility in Manila.  It is on the agenda for tomorrow (17 Dec), 
and I've described the ideas more fully on the Manila wiki.  We think this 
would benefit every driver, improve the consistency of the user experience, and 
simplify life for us developers and testers.  Please have a look before the 
meeting!

https://wiki.openstack.org/wiki/Manila/design/manila-generic-groups

Thanks,
Clinton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] ping cannot work from VM to external gateway IP.

2015-12-16 Thread Duarte Nunes
On Wed, Dec 16, 2015 at 12:20 PM Li Ma  wrote:

> Updated:
>
> Lots of ARP requests from external physical router to VM are catched
> on the physical NIC binded to provider router port.
>
> It seems that external physical router doesn't get answers to these
> ARP requests.
>

Just in case, could you open an issue in our jira, when you have time,
with debug logs from the compute and gateway node while running this ping?

Thanks,
Duarte


>
> On Wed, Dec 16, 2015 at 8:08 PM, Li Ma  wrote:
> > Hi Midoers,
> >
> > I have a platform running Midonet 2015 (I think it is the last release
> > when you switch to 5.0).
> > I cannot ping from VM to external gateway IP (which is set up at the
> > physical router side).
> >
> > VM inter-connectivity is OK.
> >
> > When I tcpdump packets on the physical interface located in the gateway
> node,
> > I just grabbed lots of ARP requests to external gateway IP.
> >
> > I'm not sure how midonet gateway manages ARP?
> > Will the ARP be cached on the gateway host?
> >
> > Can I specify a static ARP record by 'ip command' on gateway node to
> > solve it quickly (not gracefully)?
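
(For illustration only, not MidoNet-specific guidance: the quick stop-gap
being asked about here is a static neighbour entry pinned with iproute2 on
the gateway node. The IP, MAC and interface names below are placeholders.)

    # Equivalent to running:
    #   ip neigh replace 203.0.113.1 lladdr aa:bb:cc:dd:ee:ff dev eth1 nud permanent
    import subprocess

    subprocess.check_call([
        "ip", "neigh", "replace", "203.0.113.1",   # external gateway IP
        "lladdr", "aa:bb:cc:dd:ee:ff",             # its MAC address
        "dev", "eth1",                             # NIC bound to the provider port
        "nud", "permanent",
    ])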
> >
> > (Currently I'm in the business trip that cannot touch the environment.
> > So, I'd like to get some ideas first and then I can tell my partners
> > to work on it.)
> >
> > Thanks a lot,
> >
> > --
> >
> > Li Ma (Nick)
> > Email: skywalker.n...@gmail.com
>
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Lee Calcote
Food for thought - there is a cost to FIPs (in the case of public IP 
addresses), security groups (to a lesser extent, but in terms of the 
computation of many hundreds of them), etc. Administrators may wish to enforce 
quotas on a variety of resources that are direct costs or indirect costs (e.g. 
# of bays, where a bay consists of a number of multi-VM / multi-host pods and 
services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on 
Magnum-specific constructs only, correct? Resources created by Magnum COEs 
should be governed by existing quota policies governing said resources (e.g. 
Nova and vCPUs).

Lee

> On Dec 16, 2015, at 1:56 PM, Tim Bell  wrote:
> 
>> -Original Message-
>> From: Clint Byrum [mailto:cl...@fewbar.com ]
>> Sent: 15 December 2015 22:40
>> To: openstack-dev > >
>> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> Resources
>> 
>> Hi! Can I offer a counter point?
>> 
>> Quotas are for _real_ resources.
>> 
> 
> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
> 
>> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost
>> real money and cannot be conjured from thin air. As such, the user being
>> able to allocate 1 billion or 2 containers is not limited by Magnum, but
> by real
>> things that they must pay for. If they have enough Nova quota to allocate
> 1
>> billion tiny pods, why would Magnum stop them? Who actually benefits from
>> that limitation?
>> 
>> So I suggest that you not add any detailed, complicated quota system to
>> Magnum. If there are real limitations to the implementation that Magnum
>> has chosen, such as we had in Heat (the entire stack must fit in memory),
>> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
>> memory quotas be the limit, and enjoy the profit margins that having an
>> unbound force multiplier like Magnum in your cloud gives you and your
>> users!
>> 
>> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>>> Hi All,
>>> 
>>> Currently, it is possible to create unlimited number of resource like
>>> bay/pod/service/. In Magnum, there should be a limitation for user or
>>> project to create Magnum resource, and the limitation should be
>>> configurable[1].
>>> 
>>> I proposed following design :-
>>> 
>>> 1. Introduce new table magnum.quotas
>>> +------------+--------------+------+-----+---------+----------------+
>>> | Field      | Type         | Null | Key | Default | Extra          |
>>> +------------+--------------+------+-----+---------+----------------+
>>> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
>>> | created_at | datetime     | YES  |     | NULL    |                |
>>> | updated_at | datetime     | YES  |     | NULL    |                |
>>> | deleted_at | datetime     | YES  |     | NULL    |                |
>>> | project_id | varchar(255) | YES  | MUL | NULL    |                |
>>> | resource   | varchar(255) | NO   |     | NULL    |                |
>>> | hard_limit | int(11)      | YES  |     | NULL    |                |
>>> | deleted    | int(11)      | YES  |     | NULL    |                |
>>> +------------+--------------+------+-----+---------+----------------+
>>> 
>>> resource can be Bay, Pod, Containers, etc.
>>> 
>>> 
>>> 2. API controller for quota will be created to make sure basic CLI
>>> commands work.
>>> 
>>> quota-show, quota-delete, quota-create, quota-update
>>> 
>>> 3. When the admin specifies a quota of X number of resources to be
>>> created the code should abide by that. For example if hard limit for Bay
> is 5
>> (i.e.
>>> a project can have maximum 5 Bay's) if a user in a project tries to
>>> exceed that hardlimit it won't be allowed. Similarly goes for other
>> resources.
>>> 
>>> 4. Please note the quota validation only works for resources created
>>> via Magnum. Could not think of a way that Magnum to know if a COE
>>> specific utilities created a resource in background. One way could be
>>> to see the difference between whats stored in magnum.quotas and the
>>> information of the actual resources created for a particular bay in
> k8s/COE.
>>> 
>>> 5. Introduce a config variable to set quotas values.
>>> 
>>> If everyone agrees will start the changes by introducing quota
>>> restrictions on Bay creation.
>>> 
>>> Thoughts ??
>>> 
>>> 
>>> -Vilobh
>>> 
>>> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
>Affinity is mostly meaningless with baremetal. It's entirely a
>virtualization related thing. If you try and group things by TOR, or
>chassis, or anything else, it's going to start meaning something entirely
>different than it means in Nova,

I disagree, in fact, we need TOR and power affinity/anti-affinity for VMs
as well as baremetal. As an example, there are cases where certain compute
resources move significant amounts of data between one or two other
instances, but you want to ensure those instances are not on the same
hypervisor. In that scenario it makes sense to have instances on different
hypervisors, but on the same TOR to reduce unnecessary traffic across the
fabric.

>and it would probably be better to just
>make lots of AZ's and have users choose their AZ mix appropriately,
>since that is the real meaning of AZ's.

Yes, at some level certain things should be expressed in the form of an AZ;
power seems like a good candidate for that. But expressing something like
a TOR as an AZ in an environment with hundreds of thousands of physical
hosts would not scale. Further, it would require users to have a deeper
understanding of datacenter topology, which is exactly the opposite of why
IaaS exists.

The whole point of a service-oriented infrastructure is to be able to give
the end user the ability to boot compute resources that match a variety of
constraints, and have those resources selected and provisioned for them. IE
"Give me 12 instances of m1.blah, all running Linux, and make sure they're
spread across 6 different TORs and 2 different power domains in network
zone Blah."
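
(As a point of reference only: instance-level anti-affinity is expressed
today through Nova server groups. A rough python-novaclient sketch follows,
with auth details, names and IDs as placeholders; the TOR-level and
power-domain-level grouping described above has no equivalent hint today.)

    from keystoneauth1 import loading, session as ks_session
    from novaclient import client as nova_client

    # Placeholder credentials and endpoint.
    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://keystone.example.com:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    nova = nova_client.Client('2', session=ks_session.Session(auth=auth))

    # Instances booted with this hint are scheduled onto different hosts.
    group = nova.server_groups.create(name='spread-me',
                                      policies=['anti-affinity'])
    nova.servers.create(name='web-01', image='<image-id>', flavor='m1.blah',
                        scheduler_hints={'group': group.id})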







On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum  wrote:

> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
> > Nobody is talking about running a compute per flavor or capability. All
> > compute hosts will be able to handle all ironic nodes. We *do* still
> > need to figure out how to handle availability zones or host aggregates,
> > but I expect we would pass along that data to be matched against. I
> > think it would just be metadata on a node. Something like
> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
> > you. Ditto for host aggregates - add the metadata to ironic to match
> > what's in the host aggregate. I'm honestly not sure what to do about
> > (anti-)affinity filters; we'll need help figuring that out.
> >
>
> Affinity is mostly meaningless with baremetal. It's entirely a
> virtualization related thing. If you try and group things by TOR, or
> chassis, or anything else, it's going to start meaning something entirely
> different than it means in Nova, and it would probably be better to just
> make lots of AZ's and have users choose their AZ mix appropriately,
> since that is the real meaning of AZ's.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Carl Baldwin
On Wed, Dec 16, 2015 at 11:32 AM, Jeremy Stanley  wrote:
[...]
> Yes, it's progressing nicely. DevStack-based jobs are already
> covered this way for master and stable/liberty, and Neutron is
> piloting the same solution for its other non-DevStack-based jobs. If

Is someone from Neutron actively helping out here?  Need more?

> Nova's unit test jobs were already switched to their
> upper-constraints equivalents then there's a chance this wouldn't
> have impacted there (though we still need to work out the bit where
> we run a representative sample of jobs like neutron/nova unit tests
> on proposed constraints bumps to block any with this sort of impact,
> right now we're really just relying on
> devstack-tempest/grenade/other integration test jobs as canaries).
>
> Anyway, the solution seems to be working (modulo unforeseen issues
> like people thinking it's sane to delete their latest releases of
> some dependencies from PyPI) but it's a long road to full
> implementation.

Thanks for the report Jeremy.  I'm very happy to see progress here.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Dan Smith
> It was in the queue for 11 days, Dan Smith took a look, he added Jay
> Pipes, and I also added Matt Riedemann, there were also a bunch of
> neutron folks on it since this fix originated from their end.

Yeah, and both of those other people have had serious other commitments
in the last two weeks. The commit message in that patch set was a far
cry from "merge this or we break when 0.8.x is released".

Please, in the future, if you know you're going to destroy a half day's
worth of work because a patch isn't getting looked at, spend a little
more effort trying to raise it up the flagpole. Like Sean said, an email
to the list will get serious attention (as this one did), or an agenda
item on the weekly meeting will _certainly_ get enough eyes on the
problem to get it handled.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Flavio Percoco

On 16/12/15 11:53 -0500, Sean Dague wrote:

On 12/16/2015 11:37 AM, Sean Dague wrote:

On 12/16/2015 11:22 AM, Mike Bayer wrote:



On 12/16/2015 09:10 AM, Sylvain Bauza wrote:



Le 16/12/2015 14:59, Sean Dague a écrit :

oslo.db test_migrations is using methods for alembic, which changed in
the 0.8.4 release. This ends up causing a unit test failure (at least in
the Nova case) that looks like this -
http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404


There is an oslo.db patch out there
https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
has been pretty quiet this morning, so no idea how fast this can get out
into a release.

-Sean



So, it seems that the issue came when
https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
Fortunately, Mike seems to have a patch in place for Nova in order to
fix this: https://review.openstack.org/#/c/253859/

I'd suggest an intensive review pass on that one to make sure it's OK.


do you folks have a best practice suggestion on this?  My patch kind of
stayed twisting in the wind for a week even though those who read it
would have seen "hey, this is going to break on Alembic's next minor
release!"I pinged the important people and all on it, but it still
got no attention.


Which people were those? I guess none of us this morning knew this was
going to be an issue and were surprised that 12 hours worth of patches
had all failed.

-Sean


Best practice is send an email to the openstack-dev list:

Subject: [all] the following test jobs will be broken by Alembic 0.8.4
release

The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
will break Nova unit tests on all branches.

The following patch will fix master - .

You all will need to backport it as well to all branches.


Instead of just breaking the world, and burning 10s to 100 engineer
hours in redo tests and investigating and addressing the break after the
fact.


I know you didn't want to come off harsh but I think there are better
ways to express recommendations and best practices than this. I don't
think this is the best way to communicate with other members of the
community, especially when they are asking for feedback in good faith,
regardless of how bad the breakage was or how ugly/intolerable the
mistake could've been.

Other than that, I think sending an email out to raise awareness is
probably the best thing to do in these cases and what's normally been
done in the past.

Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Julien Danjou
On Wed, Dec 16 2015, Carl Baldwin wrote:

> We need to vet new package releases before they wreak havoc.  We need
> to accept new package releases by proposing a patch to update the
> version and take it through the gate.  Weren't we working on this at
> one point?  I understand if it isn't quite possible to do this yet but
> we need to be working toward this and accelerating our efforts rather
> than lashing out at package maintainers.

Projects need to take care of patches that are sent to avoid such things
in the first place.

Blocking new packages (upper cap and all) is just proof that the
projects fail to cope with their development rate. They should address
that with proper means, not by blocking the situation and burying
themselves under more technical debt, which seems to be a chronic reflex
in many OpenStack projects.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] New testing guidelines

2015-12-16 Thread Boris Pavlovic
Assaf,

We can as well add Rally testing for scale/performance/regression testing.

Best regards,
Boris Pavlovic

On Wed, Dec 16, 2015 at 7:00 AM, Fawad Khaliq  wrote:

> Very useful information. Thanks, Assaf.
>
> Fawad Khaliq
>
>
> On Thu, Dec 10, 2015 at 6:26 AM, Assaf Muller  wrote:
>
>> Today we merged [1] which adds content to the Neutron testing guidelines:
>>
>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>
>> The document details Neutron's different testing infrastructures:
>> * Unit
>> * Functional
>> * Fullstack (Integration testing with services deployed by the testing
>> infra itself)
>> * In-tree Tempest
>>
>> The new documentation provides:
>> * Examples
>> * Do's and don'ts
>> * Good and bad usage of mock
>> * The anatomy of a good unit test
>>
>> And primarily the advantages and use cases for each testing framework.
>>
>> It's short - I encourage developers to go through it. Reviewers may
>> use it as a reference / link when testing anti-patterns pop up.
>>
>> Please send feedback on this thread or better yet in the form of a
>> devref patch. Thank you!
>>
>>
>> [1] https://review.openstack.org/#/c/245984/
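
(A generic illustration only, not taken from the Neutron devref: the kind
of mock guidance such a document gives usually boils down to mocking the
collaborator at its boundary rather than the code under test. All names
below are invented for the example.)

    import unittest
    from unittest import mock

    class DhcpNotifier(object):
        """Toy code under test: notifies an RPC client when a port is created."""

        def __init__(self, rpc_client):
            self.rpc_client = rpc_client

        def port_created(self, port_id):
            self.rpc_client.cast('port_create_end', {'port_id': port_id})

    class TestDhcpNotifier(unittest.TestCase):
        def test_port_created_notifies_rpc(self):
            rpc = mock.Mock()                  # mock only the collaborator
            DhcpNotifier(rpc).port_created('port-1')
            rpc.cast.assert_called_once_with('port_create_end',
                                             {'port_id': 'port-1'})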
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] adding puppet-rally to OpenStack

2015-12-16 Thread Cody Herriges
Emilien Macchi wrote:
> I just noticed we have a second module written by Cody:
> https://github.com/ody/puppet-rally
> 
> We might want to collaborate on that.
> 

Yeah...I'd actually completely forgotten that existed until Emilien
mentioned it to me in IRC.

It was my responsibility to deploy rally for our production cloud so I
started down the road of building a rally module that I intended to ship
upstream, but things never panned out.  I pretty much ran cookiecutter
and then added the openstack-rally package resource that was dependent
on our internal RPM repository, which contained an openstack-rally package
I built myself because one existed nowhere else.  I never actually ran
the module through Puppet.

Since we were behind on deployment I put rally on the back burner until
I had something fully functional in pre-production to start running
rally against.  Plus, at that time we were pre-liberty release so no one
was shipping an openstack-rally package which would have made putting
the module I started into the upstream project difficult since it would
be unusable and untestable.  So, instead I focussed on tying off some
other internal things and my OpenStack time was spent working on already
established Puppet OpenStack community items, reviews and CI.

Now we are in December and Mitaka is in full swing, there are
openstack-rally packages upstream in Mitaka repos, and our internal
pre-production install is fully functional so...I am back to working on
a rally deployment.  I am happy to collaborate on merging/just promoting
one of these modules to upstream.  I do not have a preference for which
one becomes upstream; in their current state they probably both need
a fresh run through cookiecutter and msync.

-- 
Cody



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] IRC Meeting Thursday December 17th at 17:00UTC

2015-12-16 Thread Christopher Aedo
Greetings! Our next OpenStack Community App Catalog meeting will take
place this Thursday, December 17th, at 17:00 UTC in #openstack-meeting-3.

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss (or of course if the meeting time is not convenient for you
join us on IRC #openstack-app-catalog).

This will be our last meeting for 2015 as the next two Thursdays fall
on dates which make attendance a challenge for some attendees.  Please
join us if you can!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance][Holidays] IRC meeting schedule

2015-12-16 Thread Dina Belova
Folks,

we had an IRC meeting yesterday, and the holiday schedule was one of the
topics covered. However, a final decision was not made, so let's
finalise everything here :)

There are a few options for 2015 and 2016 now:
1) Do we *have no more meetings this year, or meet one last time on
Dec 22nd*?
2) Should we have the first 2016 meeting on *Jan 5th or Jan 12th*?

Based on the opinions voiced at the meeting, *I suggest we finish
meetings for this year and have the first meeting on Jan 12, 2016* (given
that the Christmas holidays have almost started already, and in Russia
there are no X-mas holidays, but we'll have about a week off after NY).

Any pros / cons?

Cheers,
Dina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Adrian Otto
Clint,

> On Dec 16, 2015, at 11:56 AM, Tim Bell  wrote:
> 
>> -Original Message-
>> From: Clint Byrum [mailto:cl...@fewbar.com]
>> Sent: 15 December 2015 22:40
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> Resources
>> 
>> Hi! Can I offer a counter point?
>> 
>> Quotas are for _real_ resources.

No. Beyond billable resources, quotas are a mechanism for limiting abusive use
patterns from hostile users. The rate at which bays are created, and how many
of them you can have in total, are important limits to put in the hands of cloud
operators. Each bay contains a keypair, which takes resources to generate and
securely distribute. Updates to and deletions of bays cause a storm of activity
in Heat, and even more activity in Nova. Cloud operators should have the
ability to control the rate of activity by enforcing rate controls on Magnum
resources before they become problematic further down in the control plane.
Admission controls are best managed at the entrance to a system, not at the
core.
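
(For illustration only, not Magnum code: the admission-side rate control
described above is commonly implemented as a token bucket at the API entry
point. A minimal sketch, with arbitrary rate and burst values:)

    import time

    class TokenBucket(object):
        """Allow at most `rate` operations per second, with bursts up to `burst`."""

        def __init__(self, rate, burst):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill based on elapsed time, capped at the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # e.g. allow bay creation at 2 per minute with a burst of 5:
    bay_create_limiter = TokenBucket(rate=2 / 60.0, burst=5)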

Adrian

> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
> 
>> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost
>> real money and cannot be conjured from thin air. As such, the user being
>> able to allocate 1 billion or 2 containers is not limited by Magnum, but
> by real
>> things that they must pay for. If they have enough Nova quota to allocate
> 1
>> billion tiny pods, why would Magnum stop them? Who actually benefits from
>> that limitation?
>> 
>> So I suggest that you not add any detailed, complicated quota system to
>> Magnum. If there are real limitations to the implementation that Magnum
>> has chosen, such as we had in Heat (the entire stack must fit in memory),
>> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
>> memory quotas be the limit, and enjoy the profit margins that having an
>> unbound force multiplier like Magnum in your cloud gives you and your
>> users!
>> 
>> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>>> Hi All,
>>> 
>>> Currently, it is possible to create unlimited number of resource like
>>> bay/pod/service/. In Magnum, there should be a limitation for user or
>>> project to create Magnum resource, and the limitation should be
>>> configurable[1].
>>> 
>>> I proposed following design :-
>>> 
>>> 1. Introduce new table magnum.quotas
>>> +------------+--------------+------+-----+---------+----------------+
>>> | Field      | Type         | Null | Key | Default | Extra          |
>>> +------------+--------------+------+-----+---------+----------------+
>>> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
>>> | created_at | datetime     | YES  |     | NULL    |                |
>>> | updated_at | datetime     | YES  |     | NULL    |                |
>>> | deleted_at | datetime     | YES  |     | NULL    |                |
>>> | project_id | varchar(255) | YES  | MUL | NULL    |                |
>>> | resource   | varchar(255) | NO   |     | NULL    |                |
>>> | hard_limit | int(11)      | YES  |     | NULL    |                |
>>> | deleted    | int(11)      | YES  |     | NULL    |                |
>>> +------------+--------------+------+-----+---------+----------------+
>>> 
>>> resource can be Bay, Pod, Containers, etc.
>>> 
>>> 
>>> 2. API controller for quota will be created to make sure basic CLI
>>> commands work.
>>> 
>>> quota-show, quota-delete, quota-create, quota-update
>>> 
>>> 3. When the admin specifies a quota of X number of resources to be
>>> created the code should abide by that. For example if hard limit for Bay
> is 5
>> (i.e.
>>> a project can have maximum 5 Bay's) if a user in a project tries to
>>> exceed that hardlimit it won't be allowed. Similarly goes for other
>> resources.
>>> 
>>> 4. Please note the quota validation only works for resources created
>>> via Magnum. Could not think of a way that Magnum to know if a COE
>>> specific utilities created a resource in background. One way could be
>>> to see the difference between whats stored in magnum.quotas and the
>>> information of the actual resources created for a particular bay in
> k8s/COE.
>>> 
>>> 5. Introduce a config variable to set quotas values.
>>> 
>>> If everyone agrees will start the changes by introducing quota
>>> restrictions on Bay creation.
>>> 
>>> Thoughts ??
>>> 
>>> 
>>> -Vilobh
>>> 
>>> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

[openstack-dev] [Tacker] No IRC weekly meetings next two weeks

2015-12-16 Thread Sridhar Ramaswamy
We are skipping the next two weekly meetings,

   - Dec 22
   - Dec 29

We will reconvene Jan 5th at our usual 1700 UTC slot.

BTW, though we are skipping these meetings, the tacker core team is available
to continue reviewing and merging patchsets over the next two weeks, so keep
them coming!

- Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] New testing guidelines

2015-12-16 Thread Assaf Muller
On Wed, Dec 16, 2015 at 2:32 PM, Boris Pavlovic  wrote:
> Assaf,
>
> We can as well add Rally testing for scale/performance/regression testing.

There's a mention of it in the doc, but not the rationale for using it as there
is for the other testing frameworks. I'd appreciate it if a Rally dev could
send the patch and add me as a reviewer.

>
> Best regards,
> Boris Pavlovic
>
> On Wed, Dec 16, 2015 at 7:00 AM, Fawad Khaliq  wrote:
>>
>> Very useful information. Thanks, Assaf.
>>
>> Fawad Khaliq
>>
>>
>> On Thu, Dec 10, 2015 at 6:26 AM, Assaf Muller  wrote:
>>>
>>> Today we merged [1] which adds content to the Neutron testing guidelines:
>>>
>>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>>
>>> The document details Neutron's different testing infrastructures:
>>> * Unit
>>> * Functional
>>> * Fullstack (Integration testing with services deployed by the testing
>>> infra itself)
>>> * In-tree Tempest
>>>
>>> The new documentation provides:
>>> * Examples
>>> * Do's and don'ts
>>> * Good and bad usage of mock
>>> * The anatomy of a good unit test
>>>
>>> And primarily the advantages and use cases for each testing framework.
>>>
>>> It's short - I encourage developers to go through it. Reviewers may
>>> use it as a reference / link when testing anti-patterns pop up.
>>>
>>> Please send feedback on this thread or better yet in the form of a
>>> devref patch. Thank you!
>>>
>>>
>>> [1] https://review.openstack.org/#/c/245984/
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Fox, Kevin M
Keypairs are real though; they consume database resources at the moment. You
don't want a user blowing out your db. Quotas should be for things that ops
will get sad over if the users consume too many of them.

Thanks,
Kevin

From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, December 16, 2015 11:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 15 December 2015 22:40
> To: openstack-dev 
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
> Hi! Can I offer a counter point?
>
> Quotas are for _real_ resources.
>

The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.

> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
cost
> real money and cannot be conjured from thin air. As such, the user being
> able to allocate 1 billion or 2 containers is not limited by Magnum, but
by real
> things that they must pay for. If they have enough Nova quota to allocate
1
> billion tiny pods, why would Magnum stop them? Who actually benefits from
> that limitation?
>
> So I suggest that you not add any detailed, complicated quota system to
> Magnum. If there are real limitations to the implementation that Magnum
> has chosen, such as we had in Heat (the entire stack must fit in memory),
> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> memory quotas be the limit, and enjoy the profit margins that having an
> unbound force multiplier like Magnum in your cloud gives you and your
> users!
>
> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
> > Hi All,
> >
> > Currently, it is possible to create unlimited number of resource like
> > bay/pod/service/. In Magnum, there should be a limitation for user or
> > project to create Magnum resource, and the limitation should be
> > configurable[1].
> >
> > I proposed following design :-
> >
> > 1. Introduce new table magnum.quotas
> > +------------+--------------+------+-----+---------+----------------+
> > | Field      | Type         | Null | Key | Default | Extra          |
> > +------------+--------------+------+-----+---------+----------------+
> > | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> > | created_at | datetime     | YES  |     | NULL    |                |
> > | updated_at | datetime     | YES  |     | NULL    |                |
> > | deleted_at | datetime     | YES  |     | NULL    |                |
> > | project_id | varchar(255) | YES  | MUL | NULL    |                |
> > | resource   | varchar(255) | NO   |     | NULL    |                |
> > | hard_limit | int(11)      | YES  |     | NULL    |                |
> > | deleted    | int(11)      | YES  |     | NULL    |                |
> > +------------+--------------+------+-----+---------+----------------+
> >
> > resource can be Bay, Pod, Containers, etc.
> >
> >
> > 2. API controller for quota will be created to make sure basic CLI
> > commands work.
> >
> > quota-show, quota-delete, quota-create, quota-update
> >
> > 3. When the admin specifies a quota of X number of resources to be
> > created the code should abide by that. For example if hard limit for Bay
is 5
> (i.e.
> > a project can have maximum 5 Bay's) if a user in a project tries to
> > exceed that hardlimit it won't be allowed. Similarly goes for other
> resources.
> >
> > 4. Please note the quota validation only works for resources created
> > via Magnum. Could not think of a way that Magnum to know if a COE
> > specific utilities created a resource in background. One way could be
> > to see the difference between whats stored in magnum.quotas and the
> > information of the actual resources created for a particular bay in
k8s/COE.
> >
> > 5. Introduce a config variable to set quotas values.
> >
> > If everyone agrees will start the changes by introducing quota
> > restrictions on Bay creation.
> >
> > Thoughts ??
> >
> >
> > -Vilobh
> >
> > [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Mark McClain

> On Dec 16, 2015, at 2:12 PM, Matt Riedemann  
> wrote:
> 
> I'm not entirely sure what the geo distribution is for everyone that works on 
> stable, but I know we have people in Europe and some people in Australia.  So 
> I was thinking alternating weekly meetings:
> 
> Mondays at 2100 UTC
> 
> Tuesdays at 1500 UTC
> 

Were you thinking of putting these on the opposite weeks as Neutron’s 
Monday/Tuesday schedule?

> Does that at least sort of work for people that would be interested in 
> attending a meeting about stable? I wouldn't expect a full hour discussion, 
> my main interests are highlighting status, discussing any issues that come up 
> in the ML or throughout the week, and whatever else people want to go over 
> (work items, questions, process discussion, etc).
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Dolph Mathews
On Wed, Dec 16, 2015 at 1:33 PM, Davanum Srinivas  wrote:

> Brant,
>
> I am ok either way, guess the alternative was to add keystone-core
> directly to the oslo.policy core group (can't check right now).
>

That's certainly reasonable, and kind of what we did with pycadf.


>
> The name is very possibly going to create confusion
>

I assume you're not referring to unnecessarily changing the name of the
project itself (oslo.policy) just because there might be a shift in the
group of maintainers! Either way, let's definitely not do that.


> -- Dims
>
> On Wed, Dec 16, 2015 at 7:22 PM, Jordan Pittier
>  wrote:
> > Hi,
> > I am sure oslo.policy would be good under Keystone's governance. But I am
> > not sure I understood what's wrong in having oslo.policy under the oslo
> > program ?
> >
> > Jordan
> >
> > On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson  wrote:
> >>
> >>
> >> I'd like to propose moving oslo.policy from the oslo program to the
> >> keystone program. Keystone developers know what's going on with
> oslo.policy
> >> and I think are more interested in what's going on with it so that
> reviews
> >> will get proper vetting, and it's not like oslo doesn't have enough
> going on
> >> with all the other repos. Keystone core has equivalent stringent
> development
> >> policy that we already enforce with keystoneclient and keystoneauth, so
> >> oslo.policy isn't going to be losing any stability.
> >>
> >> If there aren't any objections, let's go ahead with this. I heard this
> >> requires a change to a governance repo, and gerrit permission changes to
> >> make keystone-core core, and updates in oslo.policy to change some docs
> or
> >> links. Any oslo.policy specs that are currently proposed
> >>
> >> - Brant
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
Someone else expressed this more gracefully than I:

"Because sans Ironic, compute-nodes still have physical characteristics
that make grouping on them attractive for things like anti-affinity. I
don't really want my HA instances 'not on the same compute node', I want
them 'not in the same failure domain'. It becomes a way for all
OpenStack workloads to have more granularity than 'availability zone'."
(https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg14891.html)

^That guy definitely has a good head on his shoulders ;)

-James


On Wed, Dec 16, 2015 at 12:40 PM, James Penick  wrote:

> >Affinity is mostly meaningless with baremetal. It's entirely a
> >virtualization related thing. If you try and group things by TOR, or
> >chassis, or anything else, it's going to start meaning something entirely
> >different than it means in Nova,
>
> I disagree, in fact, we need TOR and power affinity/anti-affinity for VMs
> as well as baremetal. As an example, there are cases where certain compute
> resources move significant amounts of data between one or two other
> instances, but you want to ensure those instances are not on the same
> hypervisor. In that scenario it makes sense to have instances on different
> hypervisors, but on the same TOR to reduce unnecessary traffic across the
> fabric.
>
> >and it would probably be better to just
> >make lots of AZ's and have users choose their AZ mix appropriately,
> >since that is the real meaning of AZ's.
>
> Yes, at some level certain things should be expressed in the form of an
> AZ; power seems like a good candidate for that. But expressing something
> like a TOR as an AZ in an environment with hundreds of thousands of
> physical hosts would not scale. Further, it would require users to have a
> deeper understanding of datacenter topology, which is exactly the opposite
> of why IaaS exists.
>
> The whole point of a service-oriented infrastructure is to be able to give
> the end user the ability to boot compute resources that match a variety of
> constraints, and have those resources selected and provisioned for them. IE
> "Give me 12 instances of m1.blah, all running Linux, and make sure they're
> spread across 6 different TORs and 2 different power domains in network
> zone Blah."
>
>
>
>
>
>
>
> On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum  wrote:
>
>> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
>> > Nobody is talking about running a compute per flavor or capability. All
>> > compute hosts will be able to handle all ironic nodes. We *do* still
>> > need to figure out how to handle availability zones or host aggregates,
>> > but I expect we would pass along that data to be matched against. I
>> > think it would just be metadata on a node. Something like
>> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>> > you. Ditto for host aggregates - add the metadata to ironic to match
>> > what's in the host aggregate. I'm honestly not sure what to do about
>> > (anti-)affinity filters; we'll need help figuring that out.
>> >
>>
>> Affinity is mostly meaningless with baremetal. It's entirely a
>> virtualization related thing. If you try and group things by TOR, or
>> chassis, or anything else, it's going to start meaning something entirely
>> different than it means in Nova, and it would probably be better to just
>> make lots of AZ's and have users choose their AZ mix appropriately,
>> since that is the real meaning of AZ's.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Clint Byrum

Excerpts from Fox, Kevin M's message of 2015-12-16 12:19:59 -0800:
> keypairs are real though. they consume database resource at the moment. You 
> don't want a user blowing out your db. Quota's should be for things that ops 
> will get sad over, if the users consume too many of them.
> 

Limit every user to 1 keys. It's not going to break _anything_.

Rows in a table are not free, I agree. However, these are what I'm
referring to as "implementation limitations". Nobody is looking for a
cloud to store keys cost effectively. So put a sane per-user limit on
them that guards your implementation against incompetence and/or malice,
and otherwise focus on quotas for things that users pay for.

(oh also, keys are special because they're kind of big, and big things
in the database are a bad idea. I say move keys to swift and bill the
user for them as objects. ;)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Davanum Srinivas
Brant,

I am ok either way, guess the alternative was to add keystone-core
directly to the oslo.policy core group (can't check right now).

The name is very possibly going to create confusion

-- Dims

On Wed, Dec 16, 2015 at 7:22 PM, Jordan Pittier
 wrote:
> Hi,
> I am sure oslo.policy would be good under Keystone's governance. But I am
> not sure I understood what's wrong in having oslo.policy under the oslo
> program ?
>
> Jordan
>
> On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson  wrote:
>>
>>
>> I'd like to propose moving oslo.policy from the oslo program to the
>> keystone program. Keystone developers know what's going on with oslo.policy
>> and I think are more interested in what's going on with it so that reviews
>> will get proper vetting, and it's not like oslo doesn't have enough going on
>> with all the other repos. Keystone core has equivalent stringent development
>> policy that we already enforce with keystoneclient and keystoneauth, so
>> oslo.policy isn't going to be losing any stability.
>>
>> If there aren't any objections, let's go ahead with this. I heard this
>> requires a change to a governance repo, and gerrit permission changes to
>> make keystone-core core, and updates in oslo.policy to change some docs or
>> links. Any oslo.policy specs that are currently proposed
>>
>> - Brant
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Tim Bell
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 15 December 2015 22:40
> To: openstack-dev 
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
> 
> Hi! Can I offer a counter point?
> 
> Quotas are for _real_ resources.
>

The CERN container specialist agrees with you ... it would be good to
reflect on the actual needs, given that ironic, neutron and nova are already
policing the underlying resource usage. Quotas have in the past been used for
things like key pairs, which are not really real.
 
> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost real money and cannot be conjured from thin air. As such, the user
> being able to allocate 1 billion or 2 containers is not limited by Magnum,
> but by real things that they must pay for. If they have enough Nova quota
> to allocate 1 billion tiny pods, why would Magnum stop them? Who actually
> benefits from that limitation?
> 
> So I suggest that you not add any detailed, complicated quota system to
> Magnum. If there are real limitations to the implementation that Magnum
> has chosen, such as we had in Heat (the entire stack must fit in memory),
> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> memory quotas be the limit, and enjoy the profit margins that having an
> unbound force multiplier like Magnum in your cloud gives you and your
> users!
> 
> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
> > Hi All,
> >
> > Currently, it is possible to create unlimited number of resource like
> > bay/pod/service/. In Magnum, there should be a limitation for user or
> > project to create Magnum resource, and the limitation should be
> > configurable[1].
> >
> > I proposed following design :-
> >
> > 1. Introduce new table magnum.quotas
> > +------------+--------------+------+-----+---------+----------------+
> > | Field      | Type         | Null | Key | Default | Extra          |
> > +------------+--------------+------+-----+---------+----------------+
> > | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> > | created_at | datetime     | YES  |     | NULL    |                |
> > | updated_at | datetime     | YES  |     | NULL    |                |
> > | deleted_at | datetime     | YES  |     | NULL    |                |
> > | project_id | varchar(255) | YES  | MUL | NULL    |                |
> > | resource   | varchar(255) | NO   |     | NULL    |                |
> > | hard_limit | int(11)      | YES  |     | NULL    |                |
> > | deleted    | int(11)      | YES  |     | NULL    |                |
> > +------------+--------------+------+-----+---------+----------------+
> >
> > resource can be Bay, Pod, Containers, etc.
> >
> >
> > 2. API controller for quota will be created to make sure basic CLI
> > commands work.
> >
> > quota-show, quota-delete, quota-create, quota-update
> >
> > 3. When the admin specifies a quota of X number of resources to be
> > created the code should abide by that. For example if hard limit for Bay
> > is 5 (i.e. a project can have maximum 5 Bay's) if a user in a project
> > tries to exceed that hardlimit it won't be allowed. Similarly goes for
> > other resources.
> >
> > 4. Please note the quota validation only works for resources created
> > via Magnum. Could not think of a way that Magnum to know if a COE
> > specific utilities created a resource in background. One way could be
> > to see the difference between whats stored in magnum.quotas and the
> > information of the actual resources created for a particular bay in
> > k8s/COE.
> >
> > 5. Introduce a config variable to set quotas values.
> >
> > If everyone agrees will start the changes by introducing quota
> > restrictions on Bay creation.
> >
> > Thoughts ??
> >
> >
> > -Vilobh
> >
> > [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
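
To make the proposal above a bit more concrete, here is a rough sketch of
what the model and the check in step 3 could look like. This is illustrative
only (the class and helper names below are invented for the example, not
actual Magnum code) and it assumes SQLAlchemy as the DB layer:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Quota(Base):
    """Trimmed-down version of the proposed magnum.quotas table."""
    __tablename__ = 'quotas'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    project_id = sa.Column(sa.String(255), index=True)
    resource = sa.Column(sa.String(255), nullable=False)  # e.g. 'bay'
    hard_limit = sa.Column(sa.Integer)


class QuotaExceeded(Exception):
    pass


def check_quota(session, project_id, resource, current_count, default_limit=5):
    """Raise QuotaExceeded if one more resource would exceed the limit."""
    quota = (session.query(Quota)
             .filter_by(project_id=project_id, resource=resource)
             .first())
    limit = quota.hard_limit if quota else default_limit
    if current_count + 1 > limit:
        raise QuotaExceeded('%s quota (%d) exceeded for project %s'
                            % (resource, limit, project_id))

Bay creation would then call check_quota() with the project's current bay
count before inserting the new record, which is essentially what step 3
describes.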


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron][keystone] how to reauth the token

2015-12-16 Thread Dolph Mathews
On Wed, Dec 16, 2015 at 9:59 AM, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> Hi all,
>
> I'd like to start discussion on how Ironic is using Neutron when Keystone
> is involved.
>
> Recently the patch [0] was merged in Ironic to fix a bug when the token
> with which to create the neutronclient is expired. For that Ironic now
> passes both username/password of its own service user and the token from
> the request to the client. But that IMO is a wrong thing to do.
>
> When token is given but happens to be expired, neutronclient will
> reauthenticate [1] using provided credentials for service tenant and user
> - but in fact the original token might have come from completely different
> tenant. Thus the action neutron is performing might look for / change
> resources in the service tenant instead of the tenant for which the
> original token was issued.
>
> Ironic by default is admin-only service, so the token that is accepted is
> admin-scoped, but still it might be coming from different tenants (e.g.
> service tenant or actual admin tenant, or some other tenant that admin is
> logged into). And even in the case of admin-scoped token I'm not sure how
> this will work for domain-separated tenants in Keystone v3. Does
> admin-scoped neutronclient show all ports including those created by
> tenants in domains other than the domain of admin tenant?
>
> If I understand it right, the best we could do is use keystoneauth *token
> auth plugins that can reauth when the token is about to expire (but of
> course not when it is already expired).
>

Yeah, when the user's token expires, implementing a privilege escalation
vulnerability as a workaround is not the ideal solution. Keystone does not
implement a way to extend the expiration on bearer tokens - as that would
present another security vulnerability - but you can increase the lifespan
of all the tokens in your deployment using keystone.conf [token] expiration:


https://github.com/openstack/keystone/blob/70f3401d0b526fbb731df70512ad427a198990fd/etc/keystone.conf.sample#L1975-L1976
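
For reference, the session-based approach Pavlo describes would look roughly
like the sketch below. It is untested and the names are placeholders, but the
point is that the keystoneauth session (not the client) owns authentication,
and it re-scopes the caller's token instead of silently falling back to the
service user's credentials:

from keystoneauth1 import session
from keystoneauth1.identity import v3
from neutronclient.v2_0 import client as neutron_client


def get_neutron_client(auth_url, token, project_id):
    # Build an auth plugin from the incoming token, scoped to the caller's
    # project; keystoneauth can then fetch a fresh token before this one
    # expires (it cannot help once the token has already expired).
    auth = v3.Token(auth_url=auth_url, token=token, project_id=project_id)
    sess = session.Session(auth=auth)
    return neutron_client.Client(session=sess)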


> [0] https://review.openstack.org/#/c/255885
> [1]
> https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L173
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] "Mistral HA and multi-regional support" meeting minutes

2015-12-16 Thread ELISHA, Moshe (Moshe)
Hi all,

Renat and I had an action item to think about "Mistral HA and multi-regional 
support".
No big surprises. These are the meeting minutes:

Mistral Multi-Region:
* A blueprint already exists [1]
* Most topics were already discussed in Mitaka OpenStack summit and are 
described in the blueprint.

Mistral HA
* Add a gate that runs Mistral in HA mode (Ask akuznetsova for 
more info as she looked into this once).
* Add more functional tests that are focused on HA tests
* Put together a list of known HA issues that are currently not handled (For 
example, if an executor dies immediately after dequeuing a task) and think of 
solutions.
* Expose Mistral load metrics to allow some external system to decide if it 
needs to scale Mistral components in / out.

[1] https://blueprints.launchpad.net/mistral/+spec/mistral-multi-region-support

Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Kevin Benton
Yeah. The twisted take-away from this thread is to not test your changes
against openstack so you have plausible deniability. :)

On Wed, Dec 16, 2015 at 9:04 AM, Mike Bayer  wrote:

>
>
> On 12/16/2015 11:53 AM, Sean Dague wrote:
> > On 12/16/2015 11:37 AM, Sean Dague wrote:
> >> On 12/16/2015 11:22 AM, Mike Bayer wrote:
> >>>
> >>>
> >>> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
> 
> 
>  Le 16/12/2015 14:59, Sean Dague a écrit :
> > oslo.db test_migrations is using methods for alembic, which changed
> in
> > the 0.8.4 release. This ends up causing a unit test failure (at
> least in
> > the Nova case) that looks like this -
> >
> http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404
> >
> >
> > There is an oslo.db patch out there
> > https://review.openstack.org/#/c/258478 to fix it, but
> #openstack-oslo
> > has been pretty quiet this morning, so no idea how fast this can get
> out
> > into a release.
> >
> > -Sean
> >
> 
>  So, it seems that the issue came when
>  https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
>  Fortunatelt, Mike seems to have a patch in place for Nova in order to
>  fix this https://review.openstack.org/#/c/253859/
> 
>  I'd suggest an intensive review pass on that one to make sure it's OK.
> >>>
> >>> do you folks have a best practice suggestion on this?  My patch kind of
> >>> stayed twisting in the wind for a week even though those who read it
> >>> would have seen "hey, this is going to break on Alembic's next minor
> >>> release!"I pinged the important people and all on it, but it still
> >>> got no attention.
> >>
> >> Which people were those? I guess none of us this morning knew this was
> >> going to be an issue and were surprised that 12 hours worth of patches
> >> had all failed.
> >>
> >>  -Sean
> >
> > Best practice is send an email to the openstack-dev list:
> >
> > Subject: [all] the following test jobs will be broken by Alembic 0.8.4
> > release
> >
> > The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
> > will break Nova unit tests on all branches.
> >
> > The following patch will fix master - .
> >
> > You all will need to backport it as well to all branches.
> >
> >
> > Instead of just breaking the world, and burning 10s to 100 engineer
> > hours in redo tests and investigating and addressing the break after the
> > fact.
>
> I was hoping to get a thanks for even *testing* unreleased versions of
> my entirely non-Openstack, upstream projects against Openstack itself.
>  If I did *less* effort here, and just didn't bother the way 100% of all
> other non-Openstack projects do, then I'd not have been scolded by you.
>
>
>
>
>
>
> >
> >   -Sean
> >
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-16 Thread Vladimir Kuklin
Vladimir

I am pretty much for removing docker, but I do not think that we should
burden our developers/QA folks with additional effort on fixing 2
different environments. Let's just think from the point of view of
development velocity here and delay such changes until at least after NY.
Because if we do it immediately after SCF there will be a whole bunch of
holidays (Russian holidays are Jan 1st-10th) and you (who is the SME for
docker removal) will be offline. Do you really want to fix things instead
of enjoying the holidays?

On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:

> +1 to Vladimir Kozhukalov,
>
> Entire point of moving branches creation to SCF was to perform such
> changes as
> early as possible in the release, I see no reasons to wait for HCF.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> -1
>>
>> We already discussed this and we have made a decision to move stable
>> branch creation from HCF to SCF. There were reasons for this. We agreed
>> that once stable branch is created, master becomes open for new features.
>> Let's avoid discussing this again.
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin > > wrote:
>>
>>> +1
>>>
>>> Regards,
>>> Bulat Gaifullin
>>> Mirantis Inc.
>>>
>>>
>>>
>>> On 15 Dec 2015, at 22:19, Andrew Maksimov 
>>> wrote:
>>>
>>> +1
>>>
>>> Regards,
>>> Andrey Maximov
>>> Fuel Project Manager
>>>
>>> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Folks

 This email is a proposal to push Docker containers removal from the
 master node to the date beyond 8.0 HCF.

 Here is why I propose to do so.

 Removal of Docker is a rather invasive change and may introduce a lot
 of regressions. It is well may affect how bugs are fixed - we might have 2
 ways of fixing them, while during SCF of 8.0 this may affect velocity of
 bug fixing as you need to fix bugs in master prior to fixing them in stable
 branches. This actually may significantly increase our bugfixing pace and
 put 8.0 GA release on risk.



 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com 
 www.mirantis.ru
 vkuk...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>>> ?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-16 Thread Vladimir Kozhukalov
Vladimir,

I have other activities planned for the time immediately after SCF
(separating UI from fuel-web, which is maybe even more invasive :-)) and it
is not a big deal to postpone one feature or another. What I am against is
the approach itself of postponing something because it is too invasive. Once
we create the stable branch, master becomes open. Opening master earlier
rather than later was our primary intention when we decided to move stable
branch creation.




Vladimir Kozhukalov

On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin 
wrote:

> Vladimir
>
> I am pretty much for removing docker, but I do not think that we should
> startle our developers/QA folks with additional efforts on fixing 2
> different environments. Let's just think from the point of development
> velocity here and at delay such changes for at least after NY. Because if
> we do it immediately after SCF there will be a whole bunch of holidays and
> Russian holidays are Jan 1st-10th and you (who is the SME for docker
> removal) will be offline. Do you really want to fix things instead of
> enjoying holidays?
>
> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:
>
>> +1 to Vladimir Kozhukalov,
>>
>> Entire point of moving branches creation to SCF was to perform such
>> changes as
>> early as possible in the release, I see no reasons to wait for HCF.
>>
>> Thanks,
>>
>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> -1
>>>
>>> We already discussed this and we have made a decision to move stable
>>> branch creation from HCF to SCF. There were reasons for this. We agreed
>>> that once stable branch is created, master becomes open for new features.
>>> Let's avoid discussing this again.
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
>>> bgaiful...@mirantis.com> wrote:
>>>
 +1

 Regards,
 Bulat Gaifullin
 Mirantis Inc.



 On 15 Dec 2015, at 22:19, Andrew Maksimov 
 wrote:

 +1

 Regards,
 Andrey Maximov
 Fuel Project Manager

 On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin 
 wrote:

> Folks
>
> This email is a proposal to push Docker containers removal from the
> master node to the date beyond 8.0 HCF.
>
> Here is why I propose to do so.
>
> Removal of Docker is a rather invasive change and may introduce a lot
> of regressions. It is well may affect how bugs are fixed - we might have 2
> ways of fixing them, while during SCF of 8.0 this may affect velocity of
> bug fixing as you need to fix bugs in master prior to fixing them in 
> stable
> branches. This actually may significantly increase our bugfixing pace and
> put 8.0 GA release on risk.
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 

Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Mike Bayer


On 12/16/2015 11:37 AM, Sean Dague wrote:
> On 12/16/2015 11:22 AM, Mike Bayer wrote:
>>
>>
>> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
>>>
>>>
>>> Le 16/12/2015 14:59, Sean Dague a écrit :
 oslo.db test_migrations is using methods for alembic, which changed in
 the 0.8.4 release. This ends up causing a unit test failure (at least in
 the Nova case) that looks like this -
 http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404


 There is an oslo.db patch out there
 https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
 has been pretty quiet this morning, so no idea how fast this can get out
 into a release.

 -Sean

>>>
>>> So, it seems that the issue came when
>>> https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
>>> Fortunatelt, Mike seems to have a patch in place for Nova in order to
>>> fix this https://review.openstack.org/#/c/253859/
>>>
>>> I'd suggest an intensive review pass on that one to make sure it's OK.
>>
>> do you folks have a best practice suggestion on this?  My patch kind of
>> stayed twisting in the wind for a week even though those who read it
>> would have seen "hey, this is going to break on Alembic's next minor
>> release!"I pinged the important people and all on it, but it still
>> got no attention.
> 
> Which people were those? I guess none of us this morning knew this was
> going to be an issue and were surprised that 12 hours worth of patches
> had all failed.

It was in the queue for 11 days. Dan Smith took a look and added Jay
Pipes, I also added Matt Riedemann, and there were also a bunch of
neutron folks on it since this fix originated from their end.







> 
>   -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Carl Baldwin
On Wed, Dec 16, 2015 at 10:04 AM, Mike Bayer  wrote:
>> Instead of just breaking the world, and burning 10s to 100 engineer
>> hours in redo tests and investigating and addressing the break after the
>> fact.

We shouldn't let this happen in the first place.  I think this is our fault.

We need to vet new package releases before they wreak havoc.  We need
to accept new package releases by proposing a patch to update the
version and take it through the gate.  Weren't we working on this at
one point?  I understand if it isn't quite possible to do this yet but
we need to be working toward this and accelerating our efforts rather
than lashing out at package maintainers.

With the scale that openstack development has grown to, we can't
afford to let package updates do this to us.  If we have a patch to
propose accepting new versions, we could provide feedback to package
maintainers in a much more civilized and pleasant way.

> I was hoping to get a thanks for even *testing* unreleased versions of
> my entirely non-Openstack, upstream projects against Openstack itself.
>  If I did *less* effort here, and just didn't bother the way 100% of all
> other non-Openstack projects do, then I'd not have been scolded by you.

Requiring any package maintainer to be on top of every consumer of his
package doesn't scale.  We can expect that packages have good tests
and take care to maintain quality and stability.  But to put the
responsibility for "100s of engineering hours" squarely on Mike's
shoulders because he didn't "raise it up the flagpole" is
irresponsible.  We need to take our own responsibility for the choice
to consume the package and blindly accept updates.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] ping cannot work from VM to external gateway IP.

2015-12-16 Thread Ivan Kelly
>>> I cannot ping from VM to external gateway IP (which is set up at the
>>> physical router side).
This is a known issue. We have it in the backlog to fix, but no one has
gotten to it yet.

-Ivan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron][keystone] how to reauth the token

2015-12-16 Thread Clint Byrum
Excerpts from Pavlo Shchelokovskyy's message of 2015-12-16 07:59:42 -0800:
> Hi all,
> 
> I'd like to start discussion on how Ironic is using Neutron when Keystone
> is involved.
> 
> Recently the patch [0] was merged in Ironic to fix a bug when the token
> with which to create the neutronclient is expired. For that Ironic now
> passes both username/password of its own service user and the token from
> the request to the client. But that IMO is a wrong thing to do.
> 
> When token is given but happens to be expired, neutronclient will
> reauthenticate [1] using provided credentials for service tenant and user
> - but in fact the original token might have come from completely different
> tenant. Thus the action neutron is performing might look for / change
> resources in the service tenant instead of the tenant for which the
> original token was issued.
> 
> Ironic by default is admin-only service, so the token that is accepted is
> admin-scoped, but still it might be coming from different tenants (e.g.
> service tenant or actual admin tenant, or some other tenant that admin is
> logged into). And even in the case of admin-scoped token I'm not sure how
> this will work for domain-separated tenants in Keystone v3. Does
> admin-scoped neutronclient show all ports including those created by
> tenants in domains other than the domain of admin tenant?
> 
> If I understand it right, the best we could do is use keystoneauth *token
> auth plugins that can reauth when the token is about to expire (but of
> course not when it is already expired).

Why not use trusts the way Heat does?
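
Roughly, and only as an untested sketch with placeholder names and roles: the
user delegates a role on their project to the ironic service user once, and
the service can then get trust-scoped tokens on its own whenever it needs
them, with no token-expiry juggling at all.

from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as ks_client


def create_trust(user_auth, trustee_user_id, project_id):
    # user_auth is the end user's auth plugin (e.g. v3.Password or v3.Token).
    sess = session.Session(auth=user_auth)
    keystone = ks_client.Client(session=sess)
    return keystone.trusts.create(
        trustor_user=sess.get_user_id(),
        trustee_user=trustee_user_id,   # the ironic service user
        project=project_id,
        role_names=['admin'],           # placeholder role
        impersonation=True)

# The service later authenticates with its own credentials plus the trust,
# e.g. v3.Password(auth_url=..., username='ironic', password=..., trust_id=...).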

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cloudkitty] Feature freeze for release 0.5

2015-12-16 Thread Stéphane Albert
Hi,

As you might already know, we are using the independent release model, which
means that we can release whenever we want.

As we discussed during the Tokyo summit, our goal is to release a
couple of versions before the Mitaka release, which should correspond to
the 1.0 version.

Every version will implement a new feature, or improve some part of the
code, deprecating the old one.

If we want to implement all the features we envisioned, we'll need to
release the 0.5 version ASAP so we can focus on the next features. Most of
the code targeted at 0.5 is pending review.

The last goals are to integrate bugfixes and Gnocchi support before
releasing the first RC. I got positive feedback from people testing the
gnocchi integration. The code only needs minor fixes and review; people
should start reviewing it now, as it's mostly finished and only a few minor
parts are subject to change.

Some gerrit permission problems are being fixed so that we can create a
0.5 branch and start focusing on the next version while the RC is being
tested. You can still send patches for review, but their integration will be
postponed until we have a 0.5 branch.
Any blueprint that has not yet been approved will not be part of the 0.5
release.

Just a quick reminder that 0.6 will feature new collector and resource
models, which will help people needing multiple collector instances for
different backends (for example, multiple ceilometer backends). A
compatibility layer will be added to support the legacy code and data
format.

We're only a few steps away from releasing gnocchi support in
CloudKitty. Let's focus and make it a success :)

Cheers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Upgrading containers in Mesos/Marathon environment - request for feedback

2015-12-16 Thread Marek Zawadzki

Hello all,

I described the use case and a simple tool that I'd like to implement as a
first step in this topic - would you please review it and provide me with
feedback before I start coding?
Is the use case realistic? Is this tool going to be useful given the
features I described? Any other comments?


https://etherpad.openstack.org/p/kolla_upgrading_containers

Thank you,

-marek zawadzki

--
Marek Zawadzki
Mirantis Kolla Team


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Operational tools support

2015-12-16 Thread Emilien Macchi
Hi TripleO,

I recently sent an email about my progress on adding monitoring in TripleO.
I've made more progress this week on logging and I would like to present
the work to you.

If you're interested, please have a look at this 12-minute presentation:
https://youtu.be/SJWA_vwc4wI

Summary:
* About operational tools
* Operational tools in TripleO
* Demo

The next steps are hearing your feedback and following up on my patches to
instack-undercloud.

Thank you,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] adding puppet-rally to OpenStack

2015-12-16 Thread Emilien Macchi
I just noticed we have a second module written by Cody:
https://github.com/ody/puppet-rally

We might want to collaborate on that.

On 12/16/2015 02:03 PM, Emilien Macchi wrote:
> [adding openstack-dev mailing list for public discussion]
> 
> Hi Andy,
> 
> I was wondering if you would be interested to move puppet-rally [1] to
> OpenStack?
> We have some documentation about the process [2] but feel free to ping
> us on IRC #puppet-openstack or by this thread if you're interested to
> contribute.
> 
> I think that would be awesome for OpenStack to have this module part of
> our list. We would have a bit of work to bring it consistent but from
> what I'm seeing, that's not a ton of things.
> We also could add it in our integration testing CI and eventually run
> more testing beside Tempest.
> 
> What do you think?
> 
> Thanks,
> 
> [1] https://github.com/NeCTAR-RC/puppet-rally
> (2] https://wiki.openstack.org/wiki/Puppet/New_module
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Mike Bayer


On 12/16/2015 11:53 AM, Sean Dague wrote:
> On 12/16/2015 11:37 AM, Sean Dague wrote:
>> On 12/16/2015 11:22 AM, Mike Bayer wrote:
>>>
>>>
>>> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:


 Le 16/12/2015 14:59, Sean Dague a écrit :
> oslo.db test_migrations is using methods for alembic, which changed in
> the 0.8.4 release. This ends up causing a unit test failure (at least in
> the Nova case) that looks like this -
> http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404
>
>
> There is an oslo.db patch out there
> https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
> has been pretty quiet this morning, so no idea how fast this can get out
> into a release.
>
> -Sean
>

 So, it seems that the issue came when
 https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
 Fortunatelt, Mike seems to have a patch in place for Nova in order to
 fix this https://review.openstack.org/#/c/253859/

 I'd suggest an intensive review pass on that one to make sure it's OK.
>>>
>>> do you folks have a best practice suggestion on this?  My patch kind of
>>> stayed twisting in the wind for a week even though those who read it
>>> would have seen "hey, this is going to break on Alembic's next minor
>>> release!"I pinged the important people and all on it, but it still
>>> got no attention.
>>
>> Which people were those? I guess none of us this morning knew this was
>> going to be an issue and were surprised that 12 hours worth of patches
>> had all failed.
>>
>>  -Sean
> 
> Best practice is send an email to the openstack-dev list:
> 
> Subject: [all] the following test jobs will be broken by Alembic 0.8.4
> release
> 
> The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
> will break Nova unit tests on all branches.
> 
> The following patch will fix master - .
> 
> You all will need to backport it as well to all branches.
> 
> 
> Instead of just breaking the world, and burning 10s to 100 engineer
> hours in redo tests and investigating and addressing the break after the
> fact.

I was hoping to get a thanks for even *testing* unreleased versions of
my entirely non-OpenStack, upstream projects against OpenStack itself.
If I had put in *less* effort here, and just not bothered the way 100% of
all other non-OpenStack projects do, then I'd not have been scolded by you.






> 
>   -Sean
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Joshua Harlow

Carl Baldwin wrote:

On Wed, Dec 16, 2015 at 10:04 AM, Mike Bayer  wrote:

Instead of just breaking the world, and burning 10s to 100 engineer
hours in redo tests and investigating and addressing the break after the
fact.


We shouldn't let this happen in the first place.  I think this is our fault.

We need to vet new package releases before they wreak havoc.  We need
to accept new package releases by proposing a patch to update the
version and take it through the gate.  Weren't we working on this at
one point?  I understand if it isn't quite possible to do this yet but
we need to be working toward this and accelerating our efforts rather
than lashing out at package maintainers.

With the scale that openstack development has grown to, we can't
afford to let package updates do this to us.  If we have a patch to
propose accepting new versions, we could provide feedback to package
maintainers in a much more civilized and pleasant way.


I was hoping to get a thanks for even *testing* unreleased versions of
my entirely non-Openstack, upstream projects against Openstack itself.
  If I did *less* effort here, and just didn't bother the way 100% of all
other non-Openstack projects do, then I'd not have been scolded by you.


Requiring any package maintainer to be on top of every consumer of his
package doesn't scale.  We can expect that packages have good tests
and take care to maintain quality and a stability.  But to put the
responsibility for "100s of engineering hours" squarely on Mike's
shoulders because he didn't "raise it up the flagpole" is
irresponsible.  We need to take our own responsibility for the choice
to consume the package and blindly accept updates.


+100. Treating people that provide underlying components of openstack
that everyone depends upon in this way seems, um, sorta evil... Mike
(and many others, in oslo and outside) have built things that many,
many others in our community have built on top of; so in the spirit of
xmas (and being good humans in general) and all that, I'd like to
say thank you, Mike, for your hard work and for what you have
helped do that others have built on. We should all be grateful IMHO for
the things we don't have to build ourselves that others have built for
us... (saving us pain, problems, bad APIs... and much much more).




Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Cammann, Tom
I don’t see a benefit from supporting the old API through a microversion
when the same functionality will be available through the native API. We
are still early enough in Magnum to make significant API changes; no one
is using Magnum as a whole in production.

Have we had any discussion on adding a v2 API and what changes (beyond
removing pod, rc and service) we would include in that change? What sort of
timeframe would we expect for removing the v1 API? I would like to move to a
v2 in this cycle; then we can think about removing v1 in N.

Tom



On 16/12/2015, 15:57, "Hongbin Lu"  wrote:

>Hi Tom,
>
>If I remember correctly, the decision is to drop the COE-specific API 
>(Pod, Service, Replication Controller) in the next API version. I think a 
>good way to do that is to put a deprecated warning in current API version 
>(v1) for the removed resources, and remove them in the next API version 
>(v2).
>
>An alternative is to drop them in current API version. If we decide to do 
>that, we need to bump the micro-version [1], and ask users to specify the 
>microversion as part of the requests when they want the removed APIs.
>
>[1]
>http://docs.openstack.org/developer/nova/api_microversions.html#removing-an-api-method
>
>Best regards,
>Hongbin
>
>-Original Message-
>From: Cammann, Tom [mailto:tom.camm...@hpe.com] 
>Sent: December-16-15 8:21 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>
>I have been noticing a fair amount of redundant work going on in magnum, 
>python-magnumclient and magnum-ui with regards to APIs we have been 
>intending to drop support for. During the Tokyo summit it was decided 
>that we should support for only COE APIs that all COEs can support which 
>means dropping support for Kubernetes specific APIs for Pod, Service and 
>Replication Controller.
>
>Egor has submitted a blueprint[1] “Unify container actions between all 
>COEs” which has been approved to cover this work and I have submitted the 
>first of many patches that will be needed to unify the APIs.
>
>The controversial patches are here: 
>https://review.openstack.org/#/c/258485/ and 
>https://review.openstack.org/#/c/258454/
>
>These patches are more a forcing function for our team to decide how to 
>correctly deprecate these APIs as I mention there is a lot of redundant 
>work going on these APIs. Please let me know your thoughts.
>
>Tom
>
>[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
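
For anyone not familiar with the nova-style mechanism referenced in [1]: each
handler advertises the microversion range it supports, and requests outside
that range are rejected, so bumping the maximum version is what signals the
removal to clients. A self-contained sketch of the idea (illustrative only;
magnum's controllers are not actually written this way):

import functools


class VersionNotSupported(Exception):
    pass


def _parse(ver):
    major, minor = ver.split('.')
    return (int(major), int(minor))


def api_version(min_ver, max_ver):
    """Only dispatch to the handler for requests within [min_ver, max_ver]."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, req, *args, **kwargs):
            requested = getattr(req, 'api_version', min_ver)
            if not _parse(min_ver) <= _parse(requested) <= _parse(max_ver):
                raise VersionNotSupported(requested)
            return func(self, req, *args, **kwargs)
        return wrapper
    return decorator


class PodController(object):
    # The pod API answers microversions 1.1-1.4 and disappears from 1.5 on.
    @api_version('1.1', '1.4')
    def index(self, req):
        return {'pods': []}  # placeholder body
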
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread Clint Byrum
Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
> Nobody is talking about running a compute per flavor or capability. All
> compute hosts will be able to handle all ironic nodes. We *do* still
> need to figure out how to handle availability zones or host aggregates,
> but I expect we would pass along that data to be matched against. I
> think it would just be metadata on a node. Something like
> node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
> you. Ditto for host aggregates - add the metadata to ironic to match
> what's in the host aggregate. I'm honestly not sure what to do about
> (anti-)affinity filters; we'll need help figuring that out.
> 

Affinity is mostly meaningless with baremetal. It's entirely a
virtualization-related thing. If you try to group things by TOR, or
chassis, or anything else, it's going to start meaning something entirely
different than it means in Nova. It would probably be better to just
make lots of AZs and have users choose their AZ mix appropriately,
since that is the real meaning of AZs.
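
To make the node-metadata idea concrete, tagging a node is a small JSON patch
through python-ironicclient. A sketch only, with placeholder credentials and
node UUID, and with the property name being whatever ironic and the matching
nova filter end up agreeing on:

from ironicclient import client as ironic_client

ironic = ironic_client.get_client(
    '1',  # ironic API major version
    os_auth_url='http://keystone.example.com:5000/v2.0',
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin')

# JSON-patch style update; 'availability_zone' is an assumed property name.
patch = [{'op': 'add',
          'path': '/properties/availability_zone',
          'value': 'rackspace-iad-az3'}]
ironic.node.update('11111111-2222-3333-4444-555555555555', patch)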

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Davanum Srinivas
Mike,

I for one wholeheartedly thank you for everything you do, including the
extra nova review in this case.

-- dims


> On Dec 16, 2015, at 12:04 PM, Mike Bayer  wrote:
> 
> 
> 
>> On 12/16/2015 11:53 AM, Sean Dague wrote:
>>> On 12/16/2015 11:37 AM, Sean Dague wrote:
 On 12/16/2015 11:22 AM, Mike Bayer wrote:
 
 
> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
> 
> 
> Le 16/12/2015 14:59, Sean Dague a écrit :
>> oslo.db test_migrations is using methods for alembic, which changed in
>> the 0.8.4 release. This ends up causing a unit test failure (at least in
>> the Nova case) that looks like this -
>> http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404
>> 
>> 
>> There is an oslo.db patch out there
>> https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
>> has been pretty quiet this morning, so no idea how fast this can get out
>> into a release.
>> 
>>-Sean
> 
> So, it seems that the issue came when
> https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
> Fortunatelt, Mike seems to have a patch in place for Nova in order to
> fix this https://review.openstack.org/#/c/253859/
> 
> I'd suggest an intensive review pass on that one to make sure it's OK.
 
 do you folks have a best practice suggestion on this?  My patch kind of
 stayed twisting in the wind for a week even though those who read it
 would have seen "hey, this is going to break on Alembic's next minor
 release!"I pinged the important people and all on it, but it still
 got no attention.
>>> 
>>> Which people were those? I guess none of us this morning knew this was
>>> going to be an issue and were surprised that 12 hours worth of patches
>>> had all failed.
>>> 
>>>-Sean
>> 
>> Best practice is send an email to the openstack-dev list:
>> 
>> Subject: [all] the following test jobs will be broken by Alembic 0.8.4
>> release
>> 
>> The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
>> will break Nova unit tests on all branches.
>> 
>> The following patch will fix master - .
>> 
>> You all will need to backport it as well to all branches.
>> 
>> 
>> Instead of just breaking the world, and burning 10s to 100 engineer
>> hours in redo tests and investigating and addressing the break after the
>> fact.
> 
> I was hoping to get a thanks for even *testing* unreleased versions of
> my entirely non-Openstack, upstream projects against Openstack itself.
> If I did *less* effort here, and just didn't bother the way 100% of all
> other non-Openstack projects do, then I'd not have been scolded by you.
> 
> 
> 
> 
> 
> 
>> 
>>-Sean
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Sean Dague
On 12/16/2015 11:37 AM, Sean Dague wrote:
> On 12/16/2015 11:22 AM, Mike Bayer wrote:
>>
>>
>> On 12/16/2015 09:10 AM, Sylvain Bauza wrote:
>>>
>>>
>>> Le 16/12/2015 14:59, Sean Dague a écrit :
 oslo.db test_migrations is using methods for alembic, which changed in
 the 0.8.4 release. This ends up causing a unit test failure (at least in
 the Nova case) that looks like this -
 http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404


 There is an oslo.db patch out there
 https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
 has been pretty quiet this morning, so no idea how fast this can get out
 into a release.

 -Sean

>>>
>>> So, it seems that the issue came when
>>> https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
>>> Fortunatelt, Mike seems to have a patch in place for Nova in order to
>>> fix this https://review.openstack.org/#/c/253859/
>>>
>>> I'd suggest an intensive review pass on that one to make sure it's OK.
>>
>> do you folks have a best practice suggestion on this?  My patch kind of
>> stayed twisting in the wind for a week even though those who read it
>> would have seen "hey, this is going to break on Alembic's next minor
>> release!"I pinged the important people and all on it, but it still
>> got no attention.
> 
> Which people were those? I guess none of us this morning knew this was
> going to be an issue and were surprised that 12 hours worth of patches
> had all failed.
> 
>   -Sean

Best practice is to send an email to the openstack-dev list:

Subject: [all] the following test jobs will be broken by Alembic 0.8.4
release

The Alembic 0.8.4 release is scheduled on 12/15. When it comes out it
will break Nova unit tests on all branches.

The following patch will fix master - .

You all will need to backport it as well to all branches.


Instead of just breaking the world, and burning tens to hundreds of
engineer hours in rerunning tests and investigating and addressing the
break after the fact.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Will domain be removed in Keystone Mitaka?

2015-12-16 Thread Dolph Mathews
On Tue, Dec 15, 2015 at 10:08 PM, darren wang 
wrote:

> Hi Dolph,
>
>
>
> We are doing something on “domain” now, but I saw bp-reseller which will
> integrate domain with project and remove domain finally, I’m pretty
> concerned that will domain be removed in Mitaka?
>

No, the API-facing concept of domains will not be removed. Documented APIs
will still behave as expected:


https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3.rst


>
>
> Yet this bp has neither series goal nor milestone target.
>

You can track implementation progress in gerrit under the bp/reseller topic:

  https://review.openstack.org/#/q/branch:master+topic:bp/reseller,n,z


>
>
> Looking forward to your reply, Thanks a lot!
>
>
>
> Darren
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Brant Knudson
I'd like to propose moving oslo.policy from the oslo program to the
keystone program. Keystone developers know what's going on with oslo.policy
and, I think, are more interested in it, so reviews will get proper
vetting - and it's not like oslo doesn't have enough going on with all the
other repos. Keystone core has an equivalently stringent development policy
that we already enforce with keystoneclient and keystoneauth, so
oslo.policy isn't going to lose any stability.

If there aren't any objections, let's go ahead with this. I heard this
requires a change to a governance repo, and gerrit permission changes to
make keystone-core core, and updates in oslo.policy to change some docs or
links. Any oslo.policy specs that are currently proposed

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Jordan Pittier
Hi,
I am sure oslo.policy would be good under Keystone's governance. But I am
not sure I understood what's wrong in having oslo.policy under the oslo
program ?

Jordan

On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson  wrote:

>
> I'd like to propose moving oslo.policy from the oslo program to the
> keystone program. Keystone developers know what's going on with oslo.policy
> and I think are more interested in what's going on with it so that reviews
> will get proper vetting, and it's not like oslo doesn't have enough going
> on with all the other repos. Keystone core has equivalent stringent
> development policy that we already enforce with keystoneclient and
> keystoneauth, so oslo.policy isn't going to be losing any stability.
>
> If there aren't any objections, let's go ahead with this. I heard this
> requires a change to a governance repo, and gerrit permission changes to
> make keystone-core core, and updates in oslo.policy to change some docs or
> links. Any oslo.policy specs that are currently proposed
>
> - Brant
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using oslo.db test_migrations is currently blocked

2015-12-16 Thread Jeremy Stanley
On 2015-12-16 11:03:38 -0700 (-0700), Carl Baldwin wrote:
[...]
> We need to vet new package releases before they wreak havoc.  We need
> to accept new package releases by proposing a patch to update the
> version and take it through the gate.  Weren't we working on this at
> one point?  I understand if it isn't quite possible to do this yet but
> we need to be working toward this and accelerating our efforts rather
> than lashing out at package maintainers.
[...]

Yes, it's progressing nicely. DevStack-based jobs are already
covered this way for master and stable/liberty, and Neutron is
piloting the same solution for its other non-DevStack-based jobs. If
Nova's unit test jobs had already been switched to their
upper-constraints equivalents, there's a chance this wouldn't have had
an impact there (though we still need to work out the bit where we run
a representative sample of jobs, like neutron/nova unit tests, on
proposed constraints bumps to block any with this sort of impact;
right now we're really just relying on
devstack-tempest/grenade/other integration test jobs as canaries).

Anyway, the solution seems to be working (modulo unforeseen issues
like people thinking it's sane to delete their latest releases of
some dependencies from PyPI) but it's a long road to full
implementation.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] ping cannot work from VM to external gateway IP.

2015-12-16 Thread Richard Raseley
Are the gateway nodes virtualized? If so, are you allowing promiscuous mode / 
MAC spoofing?

> On Dec 16, 2015, at 4:19 AM, Li Ma  wrote:
> 
> Updated:
> 
> Lots of ARP requests from the external physical router to the VM are caught
> on the physical NIC bound to the provider router port.
> 
> It seems that external physical router doesn't get answers to these
> ARP requests.
> 
> On Wed, Dec 16, 2015 at 8:08 PM, Li Ma  wrote:
>> Hi Midoers,
>> 
>> I have a platform running Midonet 2015 (I think it is the last release
>> when you switch to 5.0).
>> I cannot ping from VM to external gateway IP (which is set up at the
>> physical router side).
>> 
>> VM inter-connectivity is OK.
>> 
>> When I tcpdump packets on the physical interface located in the gateway node,
>> I just grabbed lots of ARP requests to external gateway IP.
>> 
>> I'm not sure how midonet gateway manages ARP?
>> Will the ARP be cached on the gateway host?
>> 
>> Can I specify a static ARP record by 'ip command' on gateway node to
>> solve it quickly (not gracefully)?
>> 
>> (Currently I'm in the business trip that cannot touch the environment.
>> So, I'd like to get some ideas first and then I can tell my partners
>> to work on it.)
>> 
>> Thanks a lot,
>> 
>> --
>> 
>> Li Ma (Nick)
>> Email: skywalker.n...@gmail.com
> 
> 
> 
> --
> 
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Matt Riedemann
I'm not entirely sure what the geo distribution is for everyone that 
works on stable, but I know we have people in Europe and some people in 
Australia.  So I was thinking alternating weekly meetings:


Mondays at 2100 UTC

Tuesdays at 1500 UTC

Does that at least sort of work for people that would be interested in 
attending a meeting about stable? I wouldn't expect a full hour 
discussion, my main interests are highlighting status, discussing any 
issues that come up in the ML or throughout the week, and whatever else 
people want to go over (work items, questions, process discussion, etc).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2015-12-16 Thread James Bottomley
On Fri, 2015-12-04 at 08:46 -0500, Sean Dague wrote:
> On 12/04/2015 08:34 AM, Daniel P. Berrange wrote:
> > On Fri, Dec 04, 2015 at 07:43:41AM -0500, Sean Dague wrote:
> > > That seems weird enough that I'd rather push back on our Platinum
> > > Board
> > > member to fix the licensing before we let this in. Especially as
> > > this
> > > feature is being drive by Intel.
> > 
> > As copyright holder, Intel could choose to change the license of
> > their
> > code to make it free software avoiding all the problems. None the
> > less,
> > as above, I don't think this is a blocker for inclusion of the
> > feature
> > in Nova, nor our testing of it.

Actually, it's a bit oversimplified to claim this.  The origins of
this clause are in the covenants not to sue in the FAT spec:

http://download.microsoft.com/download/1/6/1/161ba512-40e2-4cc9-843a-923143f3456c/fatgen103.doc

It's clause 1(e).  The reason for the clause is a complex negotiation
over the UEFI spec (Microsoft committed to a royalty free
implementation and UEFI needed to use FAT for backward compatibility
with older BIOS).  The problem is that the litigation history no longer
supports claiming the patents are invalid:

http://en.swpat.org/wiki/Microsoft_FAT_patents

As you can see, they're mostly expired (in the US) but the last one
will expire in 2020 (if I calculate the date correctly).  No
corporation (including Intel) can safely release a driver under a
licence that doesn't respect the FAT covenant not to sue without being
subject to potential accusations of contributory infringement.  So,
you're right, Intel could release the FAT32 driver under a non-restricted
licence, as you say, but only if they effectively take on
liability for potential infringement for every downstream user ...
amazingly enough they don't want to do that.  Red Hat could do the
same, of course: just strip the additional restrictions clause; Intel
won't enforce it; then Red Hat would take on all the liability ...

The FAT driver is fully separated from the EDKII source:

https://github.com/tianocore/tianocore.github.io/wiki/Edk2-fat-driver

So it can easily be replaced.  The problem is how, when every UEFI
driver or update comes on a FAT32-formatted system.

> That's fair. However we could also force having this conversation 
> again, and pay it forward to the larger open source community by 
> getting this ridiculous licensing fixed. We did the same thing with 
> some other libraries in the past.

The only way to "fix" the licence is to either get Microsoft to extend
the covenant not to sue to all open source projects (I suppose not
impossible given they're making friendlier open source noises) or wait
for the patents to expire.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][openstack-operators][chef] kitchen-openstack 2.2.0 released!

2015-12-16 Thread JJ Asghar

Hey everyone!

I just released 2.2.0[1] of the kitchen-openstack driver. If you're
curious, the CHANGELOG is located here[2].

We're doing great work with kitchen-openstack, and would love to see
some more feedback on features or other things the community would
like to see supported. Please open an issue[3] for a feature request,
so we can start the discussion there.

If you have any questions or thoughts don't hesitate to reach out.

[1]: https://rubygems.org/gems/kitchen-openstack/versions/2.2.0
[2]:
https://github.com/test-kitchen/kitchen-openstack/blob/master/CHANGELOG.md#220
[3]: https://github.com/test-kitchen/kitchen-openstack/issues

-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-16 Thread Oleg Gelbukh
Hi

Although I agree that it should be done, the removal of Docker doesn't seem
an urgent feature to me. It is not blocking anything besides moving to full
package-based deployment of Fuel, as far as I understand. So it could be
easily delayed for one milestone, especially if it is already almost done
and submitted for review, so it could be merged fast before any other
significant changes land in 'master' after it is open.

--
Best regards,
Oleg Gelbukh

On Wed, Dec 16, 2015 at 8:56 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Vladimir,
>
> I have other activities planned for the time immediately after SCF
> (separating UI from fuel-web, maybe it is even more invasive :-)) and it is
> not a big deal to postpone this feature or another. I am against the
> approach itself of postponing something because it is too invasive. If we
> create the stable branch, master becomes open. Opening master earlier rather
> than later was our primary intention when we decided to move stable branch
> creation.
>
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Dec 16, 2015 at 8:28 PM, Vladimir Kuklin 
> wrote:
>
>> Vladimir
>>
>> I am pretty much for removing docker, but I do not think that we should
>> burden our developers/QA folks with additional effort on fixing 2
>> different environments. Let's just think from the point of development
>> velocity here and delay such changes until at least after NY. Because if
>> we do it immediately after SCF there will be a whole bunch of holidays and
>> Russian holidays are Jan 1st-10th and you (who is the SME for docker
>> removal) will be offline. Do you really want to fix things instead of
>> enjoying holidays?
>>
>> On Wed, Dec 16, 2015 at 4:09 PM, Evgeniy L  wrote:
>>
>>> +1 to Vladimir Kozhukalov,
>>>
>>> The entire point of moving branch creation to SCF was to perform such
>>> changes as early as possible in the release; I see no reason to wait
>>> for HCF.
>>>
>>> Thanks,
>>>
>>> On Wed, Dec 16, 2015 at 10:19 AM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 -1

 We already discussed this and we have made a decision to move stable
 branch creation from HCF to SCF. There were reasons for this. We agreed
 that once stable branch is created, master becomes open for new features.
 Let's avoid discussing this again.

 Vladimir Kozhukalov

 On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin <
 bgaiful...@mirantis.com> wrote:

> +1
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> On 15 Dec 2015, at 22:19, Andrew Maksimov 
> wrote:
>
> +1
>
> Regards,
> Andrey Maximov
> Fuel Project Manager
>
> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin  > wrote:
>
>> Folks
>>
>> This email is a proposal to push Docker containers removal from the
>> master node to the date beyond 8.0 HCF.
>>
>> Here is why I propose to do so.
>>
>> Removal of Docker is a rather invasive change and may introduce a lot
>> of regressions. It may well affect how bugs are fixed - we might have 2
>> ways of fixing them, while during SCF of 8.0 this may affect the velocity of
>> bug fixing, as you need to fix bugs in master prior to fixing them in stable
>> branches. This actually may significantly impact our bugfixing pace and
>> put the 8.0 GA release at risk.
>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 

Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Brant Knudson
On Wed, Dec 16, 2015 at 11:13 AM, Brant Knudson  wrote:

>
> I'd like to propose moving oslo.policy from the oslo program to the
> keystone program. Keystone developers know what's going on with oslo.policy
> and I think are more interested in what's going on with it so that reviews
> will get proper vetting, and it's not like oslo doesn't have enough going
> on with all the other repos. Keystone core has equivalent stringent
> development policy that we already enforce with keystoneclient and
> keystoneauth, so oslo.policy isn't going to be losing any stability.
>
> If there aren't any objections, let's go ahead with this. I heard this
> requires a change to a governance repo, and gerrit permission changes to
> make keystone-core core, and updates in oslo.policy to change some docs or
> links. Any oslo.policy specs that are currently proposed
>

will have to be reproposed to keystone-specs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Ubuntu bootstrap] WebUI notification

2015-12-16 Thread Aleksey Zvyagintsev
>
> > What if user choose CentOS bootstrap? We ship it on ISO, so why do
> > we need to show error message?
>
> CentOS bootstrap still is not activated
>

It's a pretty simple case flow:

If selected ubuntu:
 - Try build
 -- Notify if build and activate fail
 -- Notify if build and activate ok

If selected centos:
 - Call "fuel-bootstrap activate centos" and remove the message (activating
the CentOS bootstrap is still supported, with a deprecation message in the CLI)

If selected ubuntu+skip
- do nothing





On Wed, Dec 16, 2015 at 4:30 PM, Artur Svechnikov 
wrote:

> > Bootstrap building *is* a part of master node deployment.
>
> Not always; the user can set the flag `skip_default_img_build`, and then the
> bootstrap build will not be executed.
>
> > If you guys show "deployment is successful" before running building
> bootstrap,
> > then it's something you have to fix.
>
> fuel-bootstrap-cli is only responsible for removing the error, and for
> setting it in case activation fails.
>
> > What if user choose CentOS bootstrap? We ship it on ISO, so why do
> > we need to show error message?
>
> CentOS bootstrap still is not activated
>
> > it's something unrelated to Nailgun itself
>
> I think that notifying user about errors or something else is related to
> Nailgun itself.
>
> Ok, it looks like a workaround to me, but we can set the error message at
> the beginning of deployment.
> But it shouldn't be done using fuel-bootstrap-cli. It can be curl or
> something else.
>
>
>
> Best regards,
> Svechnikov Artur
>
> On Wed, Dec 16, 2015 at 4:48 PM, Igor Kalnitsky 
> wrote:
>
>> >  As I already told deployment was finished, but bootstrap wasn't built.
>>
>> Bootstrap building *is* a part of master node deployment. If you guys
>> show "deployment is successful" before running building bootstrap,
>> then it's something you have to fix.
>>
>>
>> > Fuel deploying => WebUI blocked => deployment is failed due to some
>> minor
>> > thing => I fix it => Ooops how can I activate WebUI
>>
>> I see no problem here. You fix the problem, run deployment script
>> again and it unblocks everything for you. Usually it won't be enough
>> to fix something without re-running deployment, simply because a lot
>> of steps may be skipped due to the error.
>>
>> > I really can't understand why is it bad to set error message by default
>>
>> So far I can provide two reasons:
>>
>> * What if user choose CentOS bootstrap? We ship it on ISO, so why do
>> we need to show error message?
>> * Nailgun should have good defaults, and showing error by default is
>> bad practice (it's something unrelated to Nailgun itself). Moreover,
>> it's a good practice to separate areas of responsibility, and it's
>> building script who's responsible to show and hide error message if
>> necessary.
>>
>> - Igor
>>
>>
>> On Wed, Dec 16, 2015 at 3:31 PM, Artur Svechnikov
>>  wrote:
>> >> We keep it As Is, and say "user should not use Fuel until Fuel
>> >> Master deployment is finished".
>> >
>> > Yep deployment can be finished, but was it successful? As I already told
>> > deployment was finished, but bootstrap wasn't built. Command for
>> building
>> > bootstrap wasn't called because of some reason.
>> >
>> >> We make API / Web UI unaccessible externally until Fuel Master is
>> >> deployed (e.g. iptables rules or nginx ones).
>> >
>> > This approach seems too suspicious for me, due to the same reason as
>> above.
>> > I can imagine some workflow: Fuel deploying => WebUI blocked =>
>> deployment
>> > is failed due to some minor thing => I fix it => Ooops how can I
>> activate
>> > WebUI... But maybe I'm wrong, anyway this approach required serious
>> change
>> > of nailgun by handling deployment process.
>> >
>> > I really can't understand why it is bad to set the error message by
>> > default. Before deployment is finished the master doesn't have any valid
>> > bootstrap image, hence this error message is not bad or weird, it's in the
>> > right place. The error message will be disabled by fuel-bootstrap-cli after
>> > the bootstrap image is built and activated.
>> >
>> > Best regards,
>> > Svechnikov Artur
>> >
>> > On Wed, Dec 16, 2015 at 4:05 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> > wrote:
>> >>
>> >> > I really don't like setting the error message as the default one in
>> >> > the DB schema and consider it as a last resort solution. If
>> >> > possible update the message to error one just before you start
>> >> > to build the image.
>> >>
>> >> +1.
>> >>
>> >> > What about add some check or some message
>> >> > "Fuel-master Deployment in progress, please wait %s" ?
>> >>
>> >> I don't like this idea, since I believe it's something that the user
>> >> shouldn't care about at all. I see two possible *right* approaches to handle
>> >> this:
>> >>
>> >> 1. We keep it As Is, and say "user should not use Fuel until Fuel
>> >> Master deployment is finished".
>> >> 2. We make API / Web UI unaccessible externally until Fuel Master is
>> >> deployed (e.g. iptables rules or nginx ones).

[openstack-dev] [puppet] adding puppet-rally to OpenStack

2015-12-16 Thread Emilien Macchi
[adding openstack-dev mailing list for public discussion]

Hi Andy,

I was wondering if you would be interested to move puppet-rally [1] to
OpenStack?
We have some documentation about the process [2] but feel free to ping
us on IRC #puppet-openstack or by this thread if you're interested to
contribute.

I think it would be awesome for OpenStack to have this module as part of
our list. We would have a bit of work to make it consistent, but from
what I'm seeing, that's not a ton of work.
We could also add it to our integration testing CI and eventually run
more testing besides Tempest.

What do you think?

Thanks,

[1] https://github.com/NeCTAR-RC/puppet-rally
[2] https://wiki.openstack.org/wiki/Puppet/New_module
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][gnocchi] 'bad' resource_id

2015-12-16 Thread Lu, Lianhao
On Dec 16, 2015 14:13, Chris Dent wrote:
> On Wed, 16 Dec 2015, Lu, Lianhao wrote:
> 
>> In ceilometer, some metrics(e.g. network.incoming.bytes for VM net 
>> interface, hardware.network.incoming.bytes for host net interface, 
>> compute.node.cpu.percentage for nova compute node host cpu 
>> utilization,
>> etc.) don't have their resource_id in UUID format(which is required 
>> by gnocchi). Instead, they have something like . as 
>> their resource_id, in some cases even the  part won't be in uuid 
>> format. Gnocchi will treat these kind of resource_id as bad id, and 
>> build a new UUID format resource_id for them. Since users are mostly 
>> using resource_id to identify their resources, changing user passed 
>> in resource_id would require the users extra effort to identify the 
>> resources in gnocchi and link them with the resources they original 
>> passed in.
> 
> Just for the sake of completeness can you describe the use cases where 
> the resource_id translation that gnocchi does does not help the use 
> case. The one way translation is used in the body of search queries as 
> well as in any URL which contains a resource_id.
> 
> I'm sure there are use cases where it breaks down, but I've not heard 
> them enumerated explicitly.
> 

I'm not saying the translation will break anything. It's just that in the
case of using ceilometer/gnocchi together, when ceilometer samples are stored
into gnocchi, users need to take extra steps to figure out which resource to
query to find its related metrics in the bad resource_id case. By simply
looking at
http://docs.openstack.org/admin-guide-cloud/telemetry-measurements.html, users
cannot easily identify the resource and its related metrics in gnocchi. Users
need to be able to do a resource search based on resource attributes, such as
original_resource_id, because in the ceilometer/gnocchi case they don't get a
chance to see the new resource_id gnocchi generated unless they search.

Say we have configured nova to send out compute node metrics notifications,
which will be turned into compute.node.cpu.percentage samples by ceilometer and
stored into gnocchi; the original resource_id would be constructed as
<host>_<nodename> of the nova compute node machine. But when the admin wants to
search for that resource in gnocchi, he either searches for a specific new type
of resource with some conditions, or searches for a generic resource with the
condition original_resource_id="<host>_<nodename>"; otherwise he has no way to
find the resource identified by the original resource_id.
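
As a rough illustration (not a tested recipe), such a lookup against Gnocchi's
generic resource search endpoint might look like the following, assuming the
original_resource_id attribute from option 2 exists and using made-up
endpoint/token/id values:

    import json

    import requests

    # Illustrative values only.
    GNOCCHI = 'http://gnocchi.example.com:8041'
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
               'Content-Type': 'application/json'}

    # Look up the resource by the id ceilometer originally passed in.
    query = {'=': {'original_resource_id': 'compute1_compute1'}}
    resp = requests.post(GNOCCHI + '/v1/search/resource/generic',
                         headers=HEADERS, data=json.dumps(query))
    for resource in resp.json():
        print(resource['id'], resource['metrics'])

The admin would then query measures on the metrics attached to whichever
resource comes back, instead of having to guess the UUID gnocchi generated.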

-Lianhao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
>We actually called out this problem in the Ironic midcycle and the Tokyo
>summit - we decided to report Ironic's total capacity from each compute
>host (resulting in over-reporting from Nova), and real capacity (for
>purposes of reporting, monitoring, whatever) should be fetched by
>operators from Ironic (IIRC, you specifically were okay with this
>limitation). This is still wrong, but it's the least wrong of any option
>(yes, all are wrong in some way). See the spec[1] for more details.

I do recall that discussion, but the merged spec says:

"In general, a nova-compute running the Ironic virt driver should expose
(total resources)/(number of compute services). This allows for resources
to be
sharded across multiple compute services without over-reporting resources."

I agree that what you said via email is Less Awful than what I read on the
spec (Did I misread it? Am I full of crazy?)

>We *do* still
>need to figure out how to handle availability zones or host aggregates,
>but I expect we would pass along that data to be matched against. I
>think it would just be metadata on a node. Something like
>node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>you. Ditto for host aggregates - add the metadata to ironic to match
>what's in the host aggregate. I'm honestly not sure what to do about
>(anti-)affinity filters; we'll need help figuring that out.
&
>Right, I didn't mean gantt specifically, but rather "splitting out the
>scheduler" like folks keep talking about. That's why I said "actually
>exists". :)

 I think splitting out the scheduler isn't going to be realistic. My
feeling is, if Nova is going to fulfill its destiny of being The Compute
Service, then the scheduler will stay put and the VM pieces will split out
into another service (Which I think should be named "Seamus" so I can refer
to it as "The wee baby Seamus").

(re: ironic maintaining host aggregates)
>Yes, and yes, assuming those things are valuable to our users. The
>former clearly is, the latter will clearly depend on the change but I
>expect we will evolve to continue to fit Nova's model of the world
>(after all, fitting into Nova's model is a huge chunk of what we do, and
>is exactly what we're trying to do with this work).

It's a lot easier to fit into the nova model if we just use what's there
and don't bother trying to replicate it.

>Again, the other solutions I'm seeing that *do* solve more problems are:
>* Rewrite the resource tracker

>Do you have an entire team (yes, it will take a relatively large team,
>especially when you include some cores dedicated to reviewing the code)
>that can dedicate a couple of development cycles to one of these?

 We can certainly help.

>I sure
>don't. If and when we do, we can move forward on that and deprecate this
>model, if we find that to be a useful thing to do at that time. Right
>now, this is the best plan I have, that we can commit to completing in a
>reasonable timeframe.

I respect that you're trying to solve the problem we have right now to make
operators lives Suck Less. But I think that a short term decision made now
would hurt a lot more later on.

-James

On Wed, Dec 16, 2015 at 8:03 AM, Jim Rollenhagen 
wrote:

> On Tue, Dec 15, 2015 at 05:19:19PM -0800, James Penick wrote:
> > > getting rid of the raciness of ClusteredComputeManager in my
> > >current deployment. And I'm willing to help other operators do the same.
> >
> >  You do alleviate race, but at the cost of complexity and
> > unpredictability.  Breaking that down, let's say we go with the current
> > plan and the compute host abstracts hardware specifics from Nova.  The
> > compute host will report (sum of resources)/(sum of managed compute).  If
> > the hardware beneath that compute host is heterogenous, then the
> resources
> > reported up to nova are not correct, and that really does have
> significant
> > impact on deployers.
> >
> >  As an example: Let's say we have 20 nodes behind a compute process.
> Half
> > of those nodes have 24T of disk, the other have 1T.  An attempt to
> schedule
> > a node with 24T of disk will fail, because Nova scheduler is only aware
> of
> > 12.5T of disk.
>
> We actually called out this problem in the Ironic midcycle and the Tokyo
> summit - we decided to report Ironic's total capacity from each compute
> host (resulting in over-reporting from Nova), and real capacity (for
> purposes of reporting, monitoring, whatever) should be fetched by
> operators from Ironic (IIRC, you specifically were okay with this
> limitation). This is still wrong, but it's the least wrong of any option
> (yes, all are wrong in some way). See the spec[1] for more details.
>
> >  Ok, so one could argue that you should just run two compute processes
> per
> > type of host (N+1 redundancy).  If you have different raid levels on two
> > otherwise identical hosts, you'll now need a new compute process for each
> > variant of hardware.  What about host aggregates or 

Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-16 Thread Matt Riedemann



On 12/14/2015 11:24 AM, Andrea Rosa wrote:



On 10/12/15 15:29, Matt Riedemann wrote:


In a simplified view of a detach volume we can say that the nova code
does:
1. detach the volume from the instance
2. inform cinder about the detach and call terminate_connection on
the cinder API
3. delete the bdm record in the nova DB


We actually:

1. terminate the connection in cinder:

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2312


2. detach the volume

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2315


3. delete the volume (if marked for delete_on_termination):

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2348


4. delete the bdm in the nova db:

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L908




I am confused here, why are are you referring to the _shutdown_instance
code?


Because that's the code in the compute manager that calls cinder to 
terminate the connection to the storage backend and detaches the volume 
from the instance, which you pointed out in your email as part of 
terminating the instance.






So if terminate_connection fails, we shouldn't get to detach. And if
detach fails, we shouldn't get to delete.



If 2 fails the volumes get stuck in a detaching status and any further
attempt to delete or detach the volume will fail:
"Delete for volume <volume-id> failed: Volume <volume-id> is still
attached, detach volume first. (HTTP 400)"

And if you try to detach:
"ERROR (BadRequest): Invalid input received: Invalid volume: Unable to
detach volume. Volume status must be 'in-use' and attach_status must
be 'attached' to detach. Currently: status: 'detaching',
attach_status: 'attached.' (HTTP 400)"

at the moment the only way to clean up the situation is to hack the
nova DB for deleting the bdm record and do some hack on the cinder
side as well.
We wanted a way to clean up the situation avoiding the manual hack to
the nova DB.


Can't cinder rollback state somehow if it's bogus or failed an
operation? For example, if detach failed, shouldn't we not be in
'detaching' state? This is like auto-reverting task_state on server
instances when an operation fails so that we can reset or delete those
servers if needed.


I think that is an option, but probably it is part of the redesign of the
cinder API (see solution proposed #3); it would be nice to get the
cinder guys commenting here.


Solution proposed #3
Ok, so the solution is to fix the Cinder API and makes the interaction
between Nova volume manager and that API robust.
This time I was right (YAY) but as you can imagine this fix is not
going to be an easy one and after talking with Cinder guys they
clearly told me that that is going to be a massive change in the
Cinder API and it is unlikely to land in the N(utella) or O(melette)
release.



As Sean pointed out in another reply, I feel like what we're really
missing here is some rollback code in the case that delete fails so we
don't get in this stuck state and have to rely on deleting the BDMs
manually in the database just to delete the instance.

We should rollback on delete fail 1 so that delete request 2 can pass
the 'check attach' checks again.


The communication with cinder is async; Nova doesn't wait or check whether
the detach on the cinder side has been executed correctly.


Yeah, I guess nova gets the 202 back:

http://logs.openstack.org/18/258118/2/check/gate-tempest-dsvm-full-ceph/7a5290d/logs/screen-n-cpu.txt.gz#_2015-12-16_03_30_43_990

Should nova be waiting for detach to complete before it tries deleting 
the volume (in the case that delete_on_termination=True in the bdm)?


Should nova be waiting (regardless of volume delete) for the volume 
detach to complete - or timeout and fail the instance delete if it doesn't?
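
If we go that way, a minimal sketch of the wait using python-cinderclient
(the actual integration point in the compute manager, and proper error
handling, are glossed over here; names are illustrative):

    import time

    from cinderclient import client as cinder_client

    def wait_for_volume_detach(cinder, volume_id, timeout=300, interval=5):
        # Poll cinder until the volume is no longer detaching/attached.
        deadline = time.time() + timeout
        while time.time() < deadline:
            volume = cinder.volumes.get(volume_id)
            if volume.status == 'available':
                return True
            if volume.status not in ('detaching', 'in-use'):
                # e.g. 'error_detaching' - give up and let the caller decide
                return False
            time.sleep(interval)
        return False

    # e.g. cinder = cinder_client.Client('2', session=keystone_session)
    # if not wait_for_volume_detach(cinder, bdm.volume_id):
    #     fail (or warn) before attempting the volume delete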




Thanks
--
Andrea Rosa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread James Bottomley
On Wed, 2015-12-16 at 22:48 +, Adrian Otto wrote:
> On Dec 16, 2015, at 2:25 PM, James Bottomley
> <james.bottom...@hansenpartnership.com> wrote:
> 
> On Wed, 2015-12-16 at 20:35 +, Adrian Otto wrote:
> Clint,
> 
> On Dec 16, 2015, at 11:56 AM, Tim Bell > wrote:
> 
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 15 December 2015 22:40
> To: openstack-dev >
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
> 
> Hi! Can I offer a counter point?
> 
> Quotas are for _real_ resources.
> 
> No. Beyond billable resources, quotas are a mechanism for limiting
> abusive use patterns from hostile users.
> 
> Actually, I believe this is the wrong way to look at it.  You're
> confusing policy and mechanism.  Quotas are policy on resources.  The
> mechanisms by which you implement quotas can also be used to limit
> abuse by hostile users, but that doesn't mean that this limitation
> should be part of the quota policy.
> 
> I’m not convinced. Cloud operators already use quotas as a mechanism
> for limiting abuse (intentional or accidental). They can be
> configured with a system wide default, and can be set to a different
> value on a per-tenant basis. It would be silly to have a second
> mechanism for doing the same thing we already use quotas for.
> Quotas/limits can also be queried by a user so they can determine why
> they are getting a 4XX Rate Limit responses when they try to act on
> resources too rapidly.

I think we might be talking a bit past each other.  My definition of
"real" is end user visible.  So in the fork bomb example below the end
user visible (and billable) panel just gives a choice for "memory". 
 The provider policy divides this into user memory and kernel memory,
usually in a fixed ratio and then imposes that on the cgroup.
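
To make that concrete, here is a rough sketch of what that provider-side
split could look like with the cgroup v1 memory controller (the paths and
the ratio are illustrative, not what any particular provider does):

    import os

    # Assumes the v1 memory controller is mounted at /sys/fs/cgroup/memory.
    CGROUP = '/sys/fs/cgroup/memory/tenant-42'
    TOTAL = 2 * 1024 ** 3  # the single "memory" figure the end user sees

    if not os.path.isdir(CGROUP):
        os.makedirs(CGROUP)

    with open(os.path.join(CGROUP, 'memory.limit_in_bytes'), 'w') as f:
        f.write(str(TOTAL))

    # Provider policy: also cap kernel memory at a fixed fraction of the
    # total, so a fork bomb exhausts the tenant's kmem limit creating task
    # structures instead of bringing down the host.
    with open(os.path.join(CGROUP, 'memory.kmem.limit_in_bytes'), 'w') as f:
        f.write(str(TOTAL // 10))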

> The idea of hard coding system wide limits into the system is making
> my stomach turn. If you wanted to change the limit you’d need to edit
> the production system’s configuration, and restart the API services.
> Yuck! That’s why we put quotas/limits into OpenStack to begin with,
> so that we had a sensible, visible, account-level configurable place
> to configure limits.

I don't believe anyone advocated for hard coding.  I was just saying
that the view that Quota == Real End User Visible resource limits is a
valid way of looking at things because it forces you to think about
what the end user sees.  The fact that the service provided uses the
mechanism for abuse prevention is also valid, but you wouldn't usually
want the end user to see it.  Even in a private cloud, you'll have this
distinction between end user and cloud administrator.  Conversely,
taking the mechanistic view that anything you can do with the mechanism
constitutes a quota and should be exposed pushes the issue up to the
UI/UX layer to sort out.

Perhaps this whole thing is just a semantic question of does quota mean
mechanism or policy.  I think the latter, but I suppose it's possible
to take the view it's the former ... in which case we just need more
precision.

James

> Adrian
> 
> 
> For instance, in Linux, the memory limit policy is implemented by the
> memcg.  The user usually sees a single figure for "memory" but inside
> the cgroup, that memory is split into user and kernel.  Kernel memory
> limiting prevents things like fork bombs because you run out of your
> kernel memory limit creating task structures before you can bring
> down
> the host system.  However, we don't usually expose the kernel/user
> split or the fact that the kmem limit mechanism can prevent fork and
> inode bombs.
> 
> James
> 
> The rate at which Bays are created, and how many of them you can
> have in total are important limits to put in the hands of cloud
> operators. Each Bay contains a keypair, which takes resources to
> generate and securely distribute. Updates to and Deletion of bays
> causes a storm of activity in Heat, and even more activity in Nova.
> Cloud operators should have the ability to control the rate of
> activity by enforcing rate controls on Magnum resources before they
> become problematic further down in the control plane. Admission
> controls are best managed at the entrance to a system, not at the
> core.
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread Andrew Laski

On 12/16/15 at 12:40pm, James Penick wrote:

Affinity is mostly meaningless with baremetal. It's entirely a
virtualization related thing. If you try and group things by TOR, or
chassis, or anything else, it's going to start meaning something entirely
different than it means in Nova,


I disagree, in fact, we need TOR and power affinity/anti-affinity for VMs
as well as baremetal. As an example, there are cases where certain compute
resources move significant amounts of data between one or two other
instances, but you want to ensure those instances are not on the same
hypervisor. In that scenario it makes sense to have instances on different
hypervisors, but on the same TOR to reduce unnecessary traffic across the
fabric.


I think the point was that affinity/anti-affinity as it's defined today 
within Nova does not have any real meaning for baremetal.  The scope is 
a single host and baremetal won't have two instances on the same host so 
by default you have anti-affinity and asking for affinity doesn't make 
sense.


There's a WIP spec proposing scoped policies for server groups that I 
think addresses the case you outlined 
https://review.openstack.org/#/c/247654/.  It's affinity/anti-affinity 
at a different level.  It may help the discussion to differentiate 
between the general concept of affinity/anti-affinity which could 
apply to many different scopes and the current Nova definition of those 
concepts which has a very specific scope.
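
For reference, the host-scoped behaviour being discussed is what today's
server groups API gives you; a minimal sketch with python-novaclient
(placeholders for the session and ids, not a tested snippet):

    from novaclient import client as nova_client

    # Placeholders -- substitute a real keystoneauth session and real ids.
    keystone_session = None
    image_id = 'IMAGE_UUID'
    flavor_id = 'FLAVOR_ID'

    nova = nova_client.Client('2', session=keystone_session)

    # Today the anti-affinity policy is evaluated per hypervisor host only;
    # the spec above is about extending that idea to other scopes (e.g. TOR).
    group = nova.server_groups.create(name='data-pipeline',
                                      policies=['anti-affinity'])
    nova.servers.create(name='worker-1', image=image_id, flavor=flavor_id,
                        scheduler_hints={'group': group.id})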






and it would probably be better to just
make lots of AZ's and have users choose their AZ mix appropriately,
since that is the real meaning of AZ's.


Yes, at some level certain things should be expressed in the form of an AZ,
power seems like a good candidate for that. But expressing something like
a TOR as an AZ in an environment with hundreds of thousands of physical
hosts, would not scale. Further, it would require users to have a deeper
understanding of datacenter toplogy, which is exactly the opposite of why
IaaS exists.

The whole point of a service-oriented infrastructure is to be able to give
the end user the ability to boot compute resources that match a variety of
constraints, and have those resources selected and provisioned for them. IE
"Give me 12 instances of m1.blah, all running Linux, and make sure they're
spread across 6 different TORs and 2 different power domains in network
zone Blah."



I think the above spec covers this.  The difference to me is that AZs 
require the user to think about absolute placements while the spec 
offers a means to think about relative placements.











On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum  wrote:


Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
> Nobody is talking about running a compute per flavor or capability. All
> compute hosts will be able to handle all ironic nodes. We *do* still
> need to figure out how to handle availability zones or host aggregates,
> but I expect we would pass along that data to be matched against. I
> think it would just be metadata on a node. Something like
> node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
> you. Ditto for host aggregates - add the metadata to ironic to match
> what's in the host aggregate. I'm honestly not sure what to do about
> (anti-)affinity filters; we'll need help figuring that out.
>

Affinity is mostly meaningless with baremetal. It's entirely a
virtualization related thing. If you try and group things by TOR, or
chassis, or anything else, it's going to start meaning something entirely
different than it means in Nova, and it would probably be better to just
make lots of AZ's and have users choose their AZ mix appropriately,
since that is the real meaning of AZ's.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2015-12-16 Thread Davanum Srinivas
Jeremy,

Thanks. +1 from me.

-- dims

On Wed, Dec 16, 2015 at 11:41 PM, Jeremy Stanley  wrote:
> On 2015-12-16 15:35:23 -0600 (-0600), Matt Riedemann wrote:
>> That should be deleted, right? Or are there random projects that still have
>> stable/icehouse branches in projects.txt and we care about them?
>
> I believe I simply missed it during the stable/icehouse batch EOL.
> That was the timeframe where we were first rolling out the release
> management governance tags and there was still some hesitation about
> which repos were fair game in the new order. I'll go ahead and give
> it a proper burial this week when I also get around to taking care
> of the stable/juno EOL requested by the openstack-manuals
> maintainers. Thanks for spotting this and bringing it up!
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2015-12-16 12:35:10 -0800:
> Clint,
> 
> > On Dec 16, 2015, at 11:56 AM, Tim Bell  wrote:
> > 
> >> -Original Message-
> >> From: Clint Byrum [mailto:cl...@fewbar.com]
> >> Sent: 15 December 2015 22:40
> >> To: openstack-dev 
> >> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> >> Resources
> >> 
> >> Hi! Can I offer a counter point?
> >> 
> >> Quotas are for _real_ resources.
> 
> No. Beyond billable resources, quotas are a mechanism for limiting abusive 
> use patterns from hostile users. The rate at which Bays are created, and how 
> many of them you can have in total are important limits to put in the hands 
> of cloud operators. Each Bay contains a keypair, which takes resources to 
> generate and securely distribute. Updates to and Deletion of bays causes a 
> storm of activity in Heat, and even more activity in Nova. Cloud operators 
> should have the ability to control the rate of activity by enforcing rate 
> controls on Magnum resources before they become problematic further down in 
> the control plane. Admission controls are best managed at the entrance to a 
> system, not at the core.
> 

I'm agreeing that users need limits of course. The implementation has a
capacity, and those limitations will be quickly found by any unlimited
user.

What I'm suggesting is that those limitations be the same for every user,
and that their billing would be the limit that stops them from doing
more than they can afford _way_ before they can really break Magnum. This
will make the service more of a force-multiplier for the real services,
and thus drive as much spending as possible toward using your cloud, which is the
actual goal.

I think of things like Heat and Magnum like freeways. They cost an order
of magnitude less to maintain than the benefit they have on the economy as
a whole. So, have weight limits (per-request sanity checks) , set speed
limits (per-user global limit), meter the onramps during busy periods
(rate limiting), etc. But you wouldn't require every user of the freeway
to prove that it didn't have too many cars on the freeway right now before
letting it get on. The expense of doing that, and the complexity of it,
would make it untenable in any useful way. The users of this piece of
infrastructure are already limited enough by the actual cost of the cars,
cargo, commerce, or on the cloud, the disk/cpu/mem/network.

So I'm suggesting that you should not be tuning things up and down per
user, but testing the true safe limits of what your service can do,
and configuring it to do that in aggregate, which is _far_ simpler.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-16 Thread Michał Dulko
On 12/16/2015 10:22 PM, Zaro wrote:
> Thanks to everyone for their patience while we upgraded to Gerrit
> 2.11.  I'm happy to announce that we were able to successfully
> complete this task at around 21:00 UTC.  You may hack away once more.
>
> If you encounter any problems, please let us know here or in
> #openstack-infra on Freenode.
>
> Enjoy,
> -Khai
>

Good job! :)

In Cinder we have an impressive number of Third-Party CIs. Even with
the "Toggle CI" option set to not show CI comments, the comment frame is
still displayed, e.g. [1]. This makes reading reviewers' comments harder. Is
there any way of disabling that? Or any chance of fixing it in the
Gerrit deployment itself?

[1] https://review.openstack.org/#/c/248768/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2015-12-16 Thread Matt Riedemann
That should be deleted, right? Or are there random projects that still 
have stable/icehouse branches in projects.txt and we care about them?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2015-12-16 Thread Jeremy Stanley
On 2015-12-16 15:35:23 -0600 (-0600), Matt Riedemann wrote:
> That should be deleted, right? Or are there random projects that still have
> stable/icehouse branches in projects.txt and we care about them?

I believe I simply missed it during the stable/icehouse batch EOL.
That was the timeframe where we were first rolling out the release
management governance tags and there was still some hesitation about
which repos were fair game in the new order. I'll go ahead and give
it a proper burial this week when I also get around to taking care
of the stable/juno EOL requested by the openstack-manuals
maintainers. Thanks for spotting this and bringing it up!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][DVR] - No IRC Meeting for the next two weeks.

2015-12-16 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
We will not be having our DVR sub-team meeting for the next two weeks.

Dec 23rd 2015
Dec 30th 2015.

We will resume our meeting on "Jan 6th 2016".

If you have any questions please ping us in IRC or send an email to the 
distribution list.

https://wiki.openstack.org/wiki/Meetings/Neutron-DVR


Wish you all Happy Holidays and Happy New Year.

Thanks
Swami

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-16 Thread Jeremy Stanley
On 2015-12-16 16:50:39 -0500 (-0500), Mike Bayer wrote:
[...]
> That said, as the upper-constraints system is available, and not using
> it means that any upstream package release that breaks any test will
> cause 12 hours of gate downtime, I'm surprised that rolling this system
> out across the board isn't an emergency priority. Because right now,
> literally any of the 50 or so projects I see in Nova requirements.txt
> can release something on Pypi at any moment and cause another 12 hours
> of downtime.
[...]

Yep, it happens pretty continually and has for years. The
constraints-based solution is certainly a cross-project priority
effort, but the developers who are in the process of implementing it
are wary of rushing it into place and causing new problems.
Sometimes the devil you know is easier to deal with than the one you
don't.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Tony Breeds
On Wed, Dec 16, 2015 at 01:12:13PM -0600, Matt Riedemann wrote:
> I'm not entirely sure what the geo distribution is for everyone that works
> on stable, but I know we have people in Europe and some people in Australia.
> So I was thinking alternating weekly meetings:
> 
> Mondays at 2100 UTC
> 
> Tuesdays at 1500 UTC
> 
> Does that at least sort of work for people that would be interested in
> attending a meeting about stable? I wouldn't expect a full hour discussion,
> my main interests are highlighting status, discussing any issues that come
> up in the ML or throughout the week, and whatever else people want to go
> over (work items, questions, process discussion, etc).

The Monday meeting works for me :)

Yours Tony.


pgpH_2C3UvQqG.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread melanie witt
On Dec 10, 2015, at 15:57, Devananda van der Veen  
wrote:

> So, at this point, I think we need to accept that the scheduling of 
> virtualized and bare metal workloads are two different problem domains that 
> are equally complex.
> 
> Either, we:
> * build a separate scheduler process in Ironic, forking the Nova scheduler as 
> a starting point so as to be compatible with existing plugins; or
> * begin building a direct integration between nova-scheduler and ironic, and 
> create a non-virtualization-centric resource tracker within Nova; or
> * proceed with the plan we previously outlined, accept that this isn't going 
> to be backwards compatible with nova filter plugins, and apologize to any 
> operators who rely on the using the same scheduler plugins for baremetal and 
> virtual resources; or
> * keep punting on this, bringing pain and suffering to all operators of bare 
> metal clouds, because nova-compute must be run as exactly one process for all 
> sizes of clouds.

Speaking only for myself, I find the current direction unfortunate, but at the 
same time understandable, given how long it’s been discussed and the need to 
act now.

It becomes apparent to me when I think about the future picture, if I imagine 
what the Compute API should look like for all end users of 
vm/baremetal/container. They should be able to call one API to create an 
instance and the cloud will do the right things. I can see Nova being that API 
(entrypoint + scheduling, then handoff via driver to vm/baremetal/container 
API). An alternative would be a separate, new frontend API that hands off to a 
separate scheduling API (scheduler break out) that hands off to the various 
compute APIs (vm/baremetal/container).

I realized that if we were able to do a 1:1 ratio of nova-compute to Ironic 
node, everything would work fine as-is. But I understand the problems with that 
as nova-compute processes can’t be run on the inventory nodes themselves, so 
you’re left with a ton of processes that you would have to find a place to run 
and it’s wasteful. Ironic doesn’t “fit in” to the model of 1:1 nova-compute to 
resource.

My concern with the current plan is the need to sync constructs like aggregates 
and availability zones from one system (Nova) to the other (Ironic) in 
perpetuity. Users will have to set them up in both systems and keep them in 
sync. The code itself also has to be effectively duplicated along with filters 
and kept in sync. Eventually each of Nova and Ironic would be separate 
standalone systems, I imagine, to avoid having the sync issues.

I’d rather we provided something like a more generic “Resource View API” in 
Nova that allows baremetal/container/clustered hypervisor environments to 
report resources via a REST API, and scheduling would occur based on the 
resources table (instead of having resource trackers). Each environment 
reporting resources would provide corresponding in-tree Nova scheduler filters 
that know what to do with resources related to them. Then scheduling would 
select a resource and lookup the compute host responsible for that resource, 
and nova-compute would delegate the chosen resource to, for example, Ironic.

This same concept could exist in a separate scheduler service instead of Nova, 
but I don’t see why it can’t be in Nova. I figure we could either enhance Nova 
and eventually forklift the virtualization driver code out into a thin service 
that manages vms, or we could build a new frontend service and a scheduling 
service, and forklift the scheduling bits out of Nova so that it ends up being 
a thin service. The end result seems really similar to me, though one could 
argue that there are other systems that want to share scheduling code that 
aren’t provisioning compute, and thus scheduling would have to move out of Nova 
anyway.

With the current direction, I see things going separate standalone with 
duplicated constructs and then eventually refactored to use common services 
down the road if and when they exist.

I would personally prefer a direction toward something like a Resource View API 
in Nova that generalizes resources to avoid compute services, like Ironic, 
having to duplicate scheduling, aggregates, availability zones, etc.

-melanie








signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-16 Thread Jeremy Stanley
On 2015-12-16 13:59:55 -0700 (-0700), Carl Baldwin wrote:
> Is someone from Neutron actively helping out here?  Need more?
[...]

I believe all of the jobs currently voting on changes proposed to
the master branch of the openstack/neutron repo are using centrally
constrained requirements (they all either have "constraints" or
"dsvm" in their job names).

I'll let Robert and Sachi, who have been spearheading this effort up
to this point, comment on whether additional assistance is needed
and where they see next steps leading.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread Jim Rollenhagen
On Wed, Dec 16, 2015 at 01:51:47PM -0800, James Penick wrote:
> >We actually called out this problem in the Ironic midcycle and the Tokyo
> >summit - we decided to report Ironic's total capacity from each compute
> >host (resulting in over-reporting from Nova), and real capacity (for
> >purposes of reporting, monitoring, whatever) should be fetched by
> >operators from Ironic (IIRC, you specifically were okay with this
> >limitation). This is still wrong, but it's the least wrong of any option
> >(yes, all are wrong in some way). See the spec[1] for more details.
> 
> I do recall that discussion, but the merged spec says:
> 
> "In general, a nova-compute running the Ironic virt driver should expose
> (total resources)/(number of compute services). This allows for resources
> to be
> sharded across multiple compute services without over-reporting resources."
> 
> I agree that what you said via email is Less Awful than what I read on the
> spec (Did I misread it? Am I full of crazy?)

Oh wow, that was totally missed when we figured that problem out. If you
look down a few paragraphs (under what the reservation request looks
like), it's got more correct words. Sorry about that.

This should clear it up: https://review.openstack.org/#/c/258687/

> >We *do* still
> >need to figure out how to handle availability zones or host aggregates,
> >but I expect we would pass along that data to be matched against. I
> >think it would just be metadata on a node. Something like
> >node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
> >you. Ditto for host aggregates - add the metadata to ironic to match
> >what's in the host aggregate. I'm honestly not sure what to do about
> >(anti-)affinity filters; we'll need help figuring that out.
> &
> >Right, I didn't mean gantt specifically, but rather "splitting out the
> >scheduler" like folks keep talking about. That's why I said "actually
> >exists". :)
> 
>  I think splitting out the scheduler isn't going to be realistic. My
> feeling is, if Nova is going to fulfill its destiny of being The Compute
> Service, then the scheduler will stay put and the VM pieces will split out
> into another service (Which I think should be named "Seamus" so I can refer
> to it as "The wee baby Seamus").

Sure, that's honestly the best option, but will take even longer. :)

> (re: ironic maintaining host aggregates)
> >Yes, and yes, assuming those things are valuable to our users. The
> >former clearly is, the latter will clearly depend on the change but I
> >expect we will evolve to continue to fit Nova's model of the world
> >(after all, fitting into Nova's model is a huge chunk of what we do, and
> >is exactly what we're trying to do with this work).
> 
> It's a lot easier to fit into the nova model if we just use what's there
> and don't bother trying to replicate it.

The problem is, the Nova model is "one compute service per physical
host". This is actually *much* easier to implement, if you want to run a
compute service per physical host. :)

> >Again, the other solutions I'm seeing that *do* solve more problems are:
> >* Rewrite the resource tracker
> 
> >Do you have an entire team (yes, it will take a relatively large team,
> >especially when you include some cores dedicated to reviewing the code)
> >that can dedicate a couple of development cycles to one of these?
> 
>  We can certainly help.
> 
> >I sure
> >don't. If and when we do, we can move forward on that and deprecate this
> >model, if we find that to be a useful thing to do at that time. Right
> >now, this is the best plan I have, that we can commit to completing in a
> >reasonable timeframe.
> 
> I respect that you're trying to solve the problem we have right now to make
> operators lives Suck Less. But I think that a short term decision made now
> would hurt a lot more later on.

Yeah, I think that's the biggest disagreement here; I don't think we're
blocking any work to make this even better in the future, just taking a
step toward that. It will be extra work to unwind, and I think it's
worth the tradeoff.

// jim

> -James
> 
> On Wed, Dec 16, 2015 at 8:03 AM, Jim Rollenhagen 
> wrote:
> 
> > On Tue, Dec 15, 2015 at 05:19:19PM -0800, James Penick wrote:
> > > > getting rid of the raciness of ClusteredComputeManager in my
> > > >current deployment. And I'm willing to help other operators do the same.
> > >
> > >  You do alleviate race, but at the cost of complexity and
> > > unpredictability.  Breaking that down, let's say we go with the current
> > > plan and the compute host abstracts hardware specifics from Nova.  The
> > > compute host will report (sum of resources)/(sum of managed compute).  If
> > > the hardware beneath that compute host is heterogenous, then the
> > resources
> > > reported up to nova are not correct, and that really does have
> > significant
> > > impact on deployers.
> > >
> > >  As an example: Let's say we have 20 nodes behind a compute process.
> > Half

Re: [openstack-dev] [ceilometer][gnocchi] 'bad' resource_id

2015-12-16 Thread gord chung



On 16/12/2015 4:24 PM, Lu, Lianhao wrote:

On Dec 16, 2015 14:13, Chris Dent wrote:

On Wed, 16 Dec 2015, Lu, Lianhao wrote:


In ceilometer, some metrics(e.g. network.incoming.bytes for VM net
interface, hardware.network.incoming.bytes for host net interface,
compute.node.cpu.percentage for nova compute node host cpu
utilization,
etc.) don't have their resource_id in UUID format(which is required
by gnocchi). Instead, they have something like . as
their resource_id, in some cases even the  part won't be in uuid
format. Gnocchi will treat these kind of resource_id as bad id, and
build a new UUID format resource_id for them. Since users are mostly
using resource_id to identify their resources, changing user passed
in resource_id would require the users extra effort to identify the
resources in gnocchi and link them with the resources they original
passed in.

Just for the sake of completeness can you describe the use cases where
the resource_id translation that gnocchi does does not help the use
case. The one way translation is used in the body of search queries as
well as in any URL which contains a resource_id.

I'm sure there are use cases where it breaks down, but I've not heard
them enumerated explicitly.


I'm not saying the translation will break anything. It's just that when 
ceilometer and gnocchi are used together and ceilometer samples are stored 
into gnocchi, users need to take extra steps to figure out which resource to 
query for its related metrics in the bad resource_id case. By simply 
looking at 
http://docs.openstack.org/admin-guide-cloud/telemetry-measurements.html , 
users cannot easily identify the resource and its related metrics in gnocchi. 
Users need to be able to do a resource search based on resource attributes, 
such as original_resource_id, because in the ceilometer/gnocchi case they 
never get to see the new resource_id gnocchi generated unless they search.

Say we have configured nova to send out compute node metrics notifications, which will be turned into 
compute.node.cpu.percentage samples by ceilometer and stored into gnocchi; the original resource_id would be 
constructed as _ of the nova compute node machine. 
But when an admin wants to search for that resource in gnocchi, he either searches for a specific new type of resource with 
some conditions or searches for a generic resource with the condition original_resource_id="_"; otherwise he has no way to find the resource identified by 
the original resource_id.
but when you query, you do use the original resource_id.  the 
translation happens on both writes and reads. while in reality the db 
will store a different id, users shouldn't really be aware of this.


that said, because of pecan, translations don't help when our ids have 
'/' in them... we should definitely fix that.
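
for what it's worth, a minimal sketch of the kind of deterministic mapping
being described -- the namespace below is purely illustrative, not the one
gnocchi actually uses, and the helper name is made up:

    import uuid

    # illustrative namespace only -- gnocchi's real namespace differs
    RESOURCE_ID_NAMESPACE = uuid.UUID('00000000-0000-0000-0000-000000000000')

    def encode_resource_id(original_id):
        """Map any resource_id to a UUID, deterministically.

        A valid UUID passes through untouched; a "bad" id (say, one that
        glues an instance id and an interface name together) is hashed
        into a stable uuid5, so the same original id always maps to the
        same stored id.
        """
        try:
            return uuid.UUID(original_id)
        except ValueError:
            return uuid.uuid5(RESOURCE_ID_NAMESPACE, str(original_id))

since the same mapping runs on both the write and the read path, callers can
keep using whatever id they already know.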


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-16 Thread Zaro
On Tue, Dec 1, 2015 at 6:38 PM, Spencer Krum  wrote:
> Hi All,
>
> The infra team will be taking gerrit offline for an upgrade on December
> 16th. We
> will start the operation at 17:00 UTC and will continue until about
> 21:00 UTC.
>
> This outage is to upgrade Gerrit to version 2.11. The IP address of
> Gerrit will not be changing.
>
> There is a thread beginning here:
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> which covers what to expect from the new software.
>
> If you have questions about the Gerrit outage you are welcome to post a
> reply to this thread or find the infra team in the #openstack-infra irc
> channel on freenode. If you have questions about the version of Gerrit
> we are upgrading to please post a reply to the email thread linked
> above, or again you are welcome to ask in the #openstack-infra channel.
>


Thanks to everyone for their patience while we upgraded to Gerrit
2.11.  I'm happy to announce that we were able to successfully
complete this task at around 21:00 UTC.  You may hack away once more.

If you encounter any problems, please let us know here or in
#openstack-infra on Freenode.

Enjoy,
-Khai

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Matt Riedemann



On 12/16/2015 2:45 PM, Mark McClain wrote:



On Dec 16, 2015, at 2:12 PM, Matt Riedemann  wrote:

I'm not entirely sure what the geo distribution is for everyone that works on 
stable, but I know we have people in Europe and some people in Australia.  So I 
was thinking alternating weekly meetings:

Mondays at 2100 UTC

Tuesdays at 1500 UTC



Were you thinking of putting these on the opposite weeks as Neutron’s 
Monday/Tuesday schedule?


Does that at least sort of work for people that would be interested in 
attending a meeting about stable? I wouldn't expect a full hour discussion, my 
main interests are highlighting status, discussing any issues that come up in 
the ML or throughout the week, and whatever else people want to go over (work 
items, questions, process discussion, etc).




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I could, I guess I didn't get into that level of detail yet. I was 
really just looking for reasonable morning/afternoon times for me to 
chair the meeting and try to get some overlap with Australian and 
European timezones.


I have a change up to the irc-meetings repo:

https://review.openstack.org/#/c/258646/

I can make it such that the Monday meeting is the opposite week from the 
Neutron team meeting at the same time.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-16 Thread Mike Bayer


On 12/16/2015 01:32 PM, Jeremy Stanley wrote:
> On 2015-12-16 11:03:38 -0700 (-0700), Carl Baldwin wrote:
> [...]
>> We need to vet new package releases before they wreak havoc.  We need
>> to accept new package releases by proposing a patch to update the
>> version and take it through the gate.  Weren't we working on this at
>> one point?  I understand if it isn't quite possible to do this yet but
>> we need to be working toward this and accelerating our efforts rather
>> than lashing out at package maintainers.
> [...]
> 
> Yes, it's progressing nicely. DevStack-based jobs are already
> covered this way for master and stable/liberty, and Neutron is
> piloting the same solution for its other non-DevStack-based jobs. If
> Nova's unit test jobs were already switched to their
> upper-constraints equivalents then there's a chance this wouldn't
> have impacted there (though we still need to work out the bit where
> we run a representative sample of jobs like neutron/nova unit tests
> on proposed constraints bumps to block any with this sort of impact,
> right now we're really just relying on
> devstack-tempest/grenade/other integration test jobs as canaries).
> 
> Anyway, the solution seems to be working (modulo unforeseen issues
> like people thinking it's sane to delete their latest releases of
> some dependencies from PyPI) but it's a long road to full
> implementation.

So just FTR I mistakenly thought that the upper-constraints system was
in place for all gate jobs, and that the release of Alembic 0.8.4 would
at worst have raised a message somewhere within the system that updates
the upper-constraints file, and prevented the version bump from
proceeding.  Had I known that the raw master gate jobs for Nova were
going to go down overnight, I would not have released Alembic in this way.

That said, as the upper-constraints system is available, and not using
it means that any upstream package release that breaks any test will
cause 12 hours of gate downtime, I'm surprised that rolling this system
out across the board isn't an emergency priority. Because right now,
literally any of the 50 or so projects I see in Nova requirements.txt
can release something on Pypi at any moment and cause another 12 hours
of downtime.   Lots of them don't even have major version upper bounds,
even complex and intricate dependencies like lxml and boto.   If a
single test breakage due to an incompatible change in an upstream
release means 100 man hours lost and 12 hours of downtime, then Nova/
others are essentially a giant boulder balanced on the edge of a cliff,
with 50 or so loaded BB guns on Pypi pointed at it.








> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Weekly IRC meeting cancelled December 17th & 24th

2015-12-16 Thread Tripp, Travis S
After some more conversation, it sounds like New Year's Eve is also a bad day 
for most people. Our next meeting will be Thursday, January 7th.




On 12/15/15, 5:42 PM, "Tripp, Travis S"  wrote:

>We will not be holding our weekly IRC meeting this week due to the
>busy holiday season with many people out.  Our regular meeting will
>resume Thursday, December 31st.
>
>As always, you can find the meeting schedule and agenda here:
>http://eavesdrop.openstack.org/#Searchlight_Team_Meeting
>
>
>Thanks,
>Travis
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Weekly IRC meetings cancelled Dec 23rd and 30th

2015-12-16 Thread David Lyle
There will be no Horizon and HorizonDrivers meetings on Dec 23rd and Dec 30th.

We will resume on Jan 6th.

Thanks,
David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-16 Thread Zaro
We have identified this and will look into it soon.  Thanks for
reporting the issue.

On Wed, Dec 16, 2015 at 2:02 PM, Michał Dulko  wrote:
> On 12/16/2015 10:22 PM, Zaro wrote:
>> Thanks to everyone for their patience while we upgraded to Gerrit
>> 2.11.  I'm happy to announce that we were able to successfully
>> complete this task at around 21:00 UTC.  You may hack away once more.
>>
>> If you encounter any problems, please let us know here or in
>> #openstack-infra on Freenode.
>>
>> Enjoy,
>> -Khai
>>
>
> Good job! :)
>
> In Cinder we have an impressive number of Third-Party CIs. Even with the
> "Toggle CI" option set to not show CI comments, the comment frame is still
> displayed, e.g. [1]. This makes reading reviewers' comments harder. Is
> there any way of disabling that? Or any chance of fixing it up in the
> Gerrit deployment itself?
>
> [1] https://review.openstack.org/#/c/248768/
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Ian Cordasco


On 12/16/15, 16:41, "Tony Breeds"  wrote:

>On Wed, Dec 16, 2015 at 01:12:13PM -0600, Matt Riedemann wrote:
>> I'm not entirely sure what the geo distribution is for everyone that
>>works
>> on stable, but I know we have people in Europe and some people in
>>Australia.
>> So I was thinking alternating weekly meetings:
>> 
>> Mondays at 2100 UTC
>> 
>> Tuesdays at 1500 UTC
>> 
>> Does that at least sort of work for people that would be interested in
>> attending a meeting about stable? I wouldn't expect a full hour
>>discussion,
>> my main interests are highlighting status, discussing any issues that
>>come
>> up in the ML or throughout the week, and whatever else people want to go
>> over (work items, questions, process discussion, etc).
>
>The Monday meeting works for me :)
>
>Yours Tony.

I'm not a stable team member, but I'm intrigued and I'll definitely join
all of you, at least to lurk. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Adrian Otto
Tom,

> On Dec 16, 2015, at 9:31 AM, Cammann, Tom  wrote:
> 
> I don’t see a benefit from supporting the old API through a microversion 
> when the same functionality will be available through the native API.

+1

[snip]

> Have we had any discussion on adding a v2 API and what changes (beyond 
> removing pod, rc, service) we would include in that change. What sort of 
> timeframe would we expect to remove the v1 API. I would like to move to a 
> v2 in this cycle, then we can think about removing v1 in N.

Yes, when we drop functionality from the API that’s a contract breaking change, 
and requires a new API major version. We can drop the v1 API in N if we set 
expectations in advance. I’d want that plan to be supported with some evidence 
that maintaining the v1 API was burdensome in some way. Because adoption is 
limited, deprecation of v1 is not likely to be a contentious issue.

Adrian

> 
> Tom
> 
> 
> 
> On 16/12/2015, 15:57, "Hongbin Lu"  wrote:
> 
>> Hi Tom,
>> 
>> If I remember correctly, the decision is to drop the COE-specific API 
>> (Pod, Service, Replication Controller) in the next API version. I think a 
>> good way to do that is to put a deprecated warning in current API version 
>> (v1) for the removed resources, and remove them in the next API version 
>> (v2).
>> 
>> An alternative is to drop them in current API version. If we decide to do 
>> that, we need to bump the micro-version [1], and ask users to specify the 
>> microversion as part of the requests when they want the removed APIs.
>> 
>> [1] 
>> http://docs.openstack.org/developer/nova/api_microversions.html#removing-a
>> n-api-method
>> 
>> Best regards,
>> Hongbin
>> 
>> -Original Message-
>> From: Cammann, Tom [mailto:tom.camm...@hpe.com] 
>> Sent: December-16-15 8:21 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>> 
>> I have been noticing a fair amount of redundant work going on in magnum, 
>> python-magnumclient and magnum-ui with regards to APIs we have been 
>> intending to drop support for. During the Tokyo summit it was decided 
>> that we should support only COE APIs that all COEs can support, which 
>> means dropping support for Kubernetes specific APIs for Pod, Service and 
>> Replication Controller.
>> 
>> Egor has submitted a blueprint[1] “Unify container actions between all 
>> COEs” which has been approved to cover this work and I have submitted the 
>> first of many patches that will be needed to unify the APIs.
>> 
>> The controversial patches are here: 
>> https://review.openstack.org/#/c/258485/ and 
>> https://review.openstack.org/#/c/258454/
>> 
>> These patches are more a forcing function for our team to decide how to 
>> correctly deprecate these APIs as I mention there is a lot of redundant 
>> work going on these APIs. Please let me know your thoughts.
>> 
>> Tom
>> 
>> [1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] any project using olso.db test_migrations is currently blocked

2015-12-16 Thread Matt Riedemann



On 12/16/2015 10:22 AM, Mike Bayer wrote:



On 12/16/2015 09:10 AM, Sylvain Bauza wrote:



Le 16/12/2015 14:59, Sean Dague a écrit :

oslo.db test_migrations is using methods for alembic, which changed in
the 0.8.4 release. This ends up causing a unit test failure (at least in
the Nova case) that looks like this -
http://logs.openstack.org/44/258444/1/check/gate-nova-python27/2ed0401/console.html#_2015-12-16_12_20_17_404


There is an oslo.db patch out there
https://review.openstack.org/#/c/258478 to fix it, but #openstack-oslo
has been pretty quiet this morning, so no idea how fast this can get out
into a release.

 -Sean



So, it seems that the issue came when
https://bitbucket.org/zzzeek/alembic/issues/341 was merged.
Fortunatelt, Mike seems to have a patch in place for Nova in order to
fix this https://review.openstack.org/#/c/253859/

I'd suggest an intensive review pass on that one to make sure it's OK.


do you folks have a best practice suggestion on this?  My patch kind of
stayed twisting in the wind for a week even though those who read it
would have seen "hey, this is going to break on Alembic's next minor
release!"I pinged the important people and all on it, but it still
got no attention.

I thought of adding a launchpad bug but I had the impression that too
would just be more idle webpages sitting there until I just put the
release out, and then the whole thing got attention / fixed in just a
few hours!

I'm not sure there are any other upstream (non openstack/stackforge)
projects that actually run Openstack tests on their own CI against
upcoming versions like I do (the SQLAlchemy project actually spends
money on Amazon EC2 instances that are utilized for running these
suites, among others).









-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



A mailing list post or having it in the nova meeting agenda would have 
gotten attention better than adding people to a gerrit review. Even 
pinging on IRC would be better than just adding people to gerrit.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread Chris Dent

On Wed, 16 Dec 2015, melanie witt wrote:


I’d rather we provided something like a more generic “Resource
View API” in Nova that allows baremetal/container/clustered
hypervisor environments to report resources via a REST API, and
scheduling would occur based on the resources table (instead of having
resource trackers). Each environment reporting resources would provide
corresponding in-tree Nova scheduler filters that know what to do with
resources related to them. Then scheduling would select a resource and
lookup the compute host responsible for that resource, and nova-
compute would delegate the chosen resource to, for example, Ironic.


That ^ makes me think of this: https://review.openstack.org/#/c/253187/

Seem to be in at least similar veins.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread James Bottomley
On Wed, 2015-12-16 at 20:35 +, Adrian Otto wrote:
> Clint,
> 
> > On Dec 16, 2015, at 11:56 AM, Tim Bell  wrote:
> > 
> > > -Original Message-
> > > From: Clint Byrum [mailto:cl...@fewbar.com]
> > > Sent: 15 December 2015 22:40
> > > To: openstack-dev 
> > > Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> > > Resources
> > > 
> > > Hi! Can I offer a counter point?
> > > 
> > > Quotas are for _real_ resources.
> 
> No. Beyond billable resources, quotas are a mechanism for limiting 
> abusive use patterns from hostile users.

Actually, I believe this is the wrong way to look at it.  You're
confusing policy and mechanism.  Quotas are policy on resources.  The
mechanisms by which you implement quotas can also be used to limit
abuse by hostile users, but that doesn't mean that this limitation
should be part of the quota policy.

For instance, in Linux, the memory limit policy is implemented by the
memcg.  The user usually sees a single figure for "memory" but inside
the cgroup, that memory is split into user and kernel.  Kernel memory
limiting prevents things like fork bombs because you run out of your
kernel memory limit creating task structures before you can bring down
the host system.  However, we don't usually expose the kernel/user
split or the fact that the kmem limit mechanism can prevent fork and
inode bombs.

James

>  The rate at which Bays are created, and how many of them you can
> have in total are important limits to put in the hands of cloud
> operators. Each Bay contains a keypair, which takes resources to
> generate and securely distribute. Updates to and Deletion of bays
> causes a storm of activity in Heat, and even more activity in Nova.
> Cloud operators should have the ability to control the rate of
> activity by enforcing rate controls on Magnum resources before they
> become problematic further down in the control plane. Admission
> controls are best managed at the entrance to a system, not at the
> core.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Adrian Otto

On Dec 16, 2015, at 2:25 PM, James Bottomley wrote:

On Wed, 2015-12-16 at 20:35 +, Adrian Otto wrote:
Clint,

On Dec 16, 2015, at 11:56 AM, Tim Bell wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.

No. Beyond billable resources, quotas are a mechanism for limiting
abusive use patterns from hostile users.

Actually, I believe this is the wrong way to look at it.  You're
confusing policy and mechanism.  Quotas are policy on resources.  The
mechanisms by which you implement quotas can also be used to limit
abuse by hostile users, but that doesn't mean that this limitation
should be part of the quota policy.

I’m not convinced. Cloud operators already use quotas as a mechanism for 
limiting abuse (intentional or accidental). They can be configured with a 
system wide default, and can be set to a different value on a per-tenant basis. 
It would be silly to have a second mechanism for doing the same thing we 
already use quotas for. Quotas/limits can also be queried by a user so they can 
determine why they are getting 4XX Rate Limit responses when they try to act 
on resources too rapidly.
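
To be concrete about the shape I mean -- a trivial sketch, with hypothetical
names rather than actual Magnum code, of the lookup an API handler would do:

    def effective_limit(tenant_id, per_tenant_overrides, system_default):
        """Return the limit to enforce for a tenant: the per-tenant
        override when one has been set, otherwise the system-wide default.
        Both live in data rather than code, so they can be adjusted while
        the system is running."""
        return per_tenant_overrides.get(tenant_id, system_default)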

The idea of hard coding system wide limits into the system is making my stomach 
turn. If you wanted to change the limit you’d need to edit the production 
system’s configuration, and restart the API services. Yuck! That’s why we put 
quotas/limits into OpenStack to begin with, so that we had a sensible, visible, 
account-level configurable place to configure limits.

Adrian


For instance, in Linux, the memory limit policy is implemented by the
memcg.  The user usually sees a single figure for "memory" but inside
the cgroup, that memory is split into user and kernel.  Kernel memory
limiting prevents things like fork bombs because you run out of your
kernel memory limit creating task structures before you can bring down
the host system.  However, we don't usually expose the kernel/user
split or the fact that the kmem limit mechanism can prevent fork and
inode bombs.

James

The rate at which Bays are created, and how many of them you can
have in total are important limits to put in the hands of cloud
operators. Each Bay contains a keypair, which takes resources to
generate and securely distribute. Updates to and Deletion of bays
causes a storm of activity in Heat, and even more activity in Nova.
Cloud operators should have the ability to control the rate of
activity by enforcing rate controls on Magnum resources before they
become problematic further down in the control plane. Admission
controls are best managed at the entrance to a system, not at the
core.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-16 Thread Soichi Shigeta


 Thank you for your helpful comments.

 I'd like to update my proposal as follows:

   1. Set an integer cookie value on taas flows.
      Maybe all "1"s as a short-term tentative value.
   2. Modify the cleanup logic in ovs-agent not to delete flows
      which have a reserved integer cookie value.
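
  As a minimal sketch of 1 and 2 (the reserved value and the helper below
  are hypothetical, not existing neutron/taas code):

    # hypothetical reserved cookie: all "1"s in OpenFlow's 64-bit cookie field
    TAAS_RESERVED_COOKIE = 0xffffffffffffffff

    def cookies_to_clean(dumped_cookies, agent_uuid_stamp):
        """Return cookies of flows the ovs-agent may delete on restart.

        Flows carrying the agent's own per-run stamp or the reserved taas
        cookie are kept; anything else is treated as stale.
        """
        keep = {agent_uuid_stamp, TAAS_RESERVED_COOKIE}
        return [c for c in dumped_cookies if c not in keep]

    # deletion itself can stay cookie-scoped, e.g. per stale cookie:
    #   ovs-ofctl del-flows <bridge> "cookie=<cookie>/-1"
    # so flows stamped with the reserved taas value survive the cleanup.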



Sorry that I didn't see this earlier. Yes, cookies have integer values, so
we won't be able to set a string there. Maybe we can have a reserved integer
cookie value for the project, like all "1"s.

I won't support the idea of modifying the cleanup logic not to drop 0x0 cookies.
During the implementation of graceful restart it was not dropped at first, but
I got rid of it, as having a lot of flows not related to anything was not
desirable, so we should try to avoid it here, too.

On Wed, Dec 16, 2015 at 7:46 AM, Soichi Shigeta <
shigeta.soi...@jp.fujitsu.com> wrote:



o) An idea to fix:


   1. Set "taas" stamp(*) to taas flows.
   2. Modify the cleanup logic in ovs-agent not to delete entries
  stamped as "taas".

   * Maybe a static string.
 If we need to use a string which generated dynamically
 (e.g. uuid), API to interact with ovs-agent is required.



   Last week I proposed to set a static string (e.g. "taas") as cookie
   of flows created by taas agent.

   But I found that the value of a cookie should not be a string,
   but an integer.

   At line 187 in
"neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py":
   self.agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK

   In case of we set an integer value to cookie, coordination
   (reservation of range) is required to avoid conflict of cookies with
   other neutron sub-projects.

   As an alternative (*** short term ***) solution, my idea is:
   Modify the clean up logic in ovs agent not to delete flows whose
   "cookie = 0x0".
   Because old flows created by ovs agent have an old stamp, "cookie =
   0x0" means it was created by other than ovs agent.

   # But, this idea has a disadvantage:
 If there are flows which have been created by older version of ovs
 agent, they can not be cleaned.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Kai Qiang Wu
Hi Adrian,

Right now, I think:

The unify-COE-container actions bp needs more discussion and a good
design to make it happen (I think a spec is needed for this).
And the k8s-related objects need a deprecation path instead of being
dropped directly, especially while we don't yet have any spec or design
for the unify-COE-container bp.


Right now the work mostly happens on the UI part; for the UI, we can
discuss whether to implement those views or not (instead of directly
dropping the API part while no consistent design has come out for the
unify-COE-container actions bp).


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   17/12/2015 07:00 am
Subject:Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs



Tom,

> On Dec 16, 2015, at 9:31 AM, Cammann, Tom  wrote:
>
> I don’t see a benefit from supporting the old API through a microversion
> when the same functionality will be available through the native API.

+1

[snip]

> Have we had any discussion on adding a v2 API and what changes (beyond
> removing pod, rc, service) we would include in that change. What sort of
> timeframe would we expect to remove the v1 API. I would like to move to a

> v2 in this cycle, then we can think about removing v1 in N.

Yes, when we drop functionality from the API that’s a contract breaking
change, and requires a new API major version. We can drop the v1 API in N
if we set expectations in advance. I’d want that plan to be supported with
some evidence that maintaining the v1 API was burdensome in some way.
Because adoption is limited, deprecation of v1 is not likely to be a
contentious issue.

Adrian

>
> Tom
>
>
>
> On 16/12/2015, 15:57, "Hongbin Lu"  wrote:
>
>> Hi Tom,
>>
>> If I remember correctly, the decision is to drop the COE-specific API
>> (Pod, Service, Replication Controller) in the next API version. I think
a
>> good way to do that is to put a deprecated warning in current API
version
>> (v1) for the removed resources, and remove them in the next API version
>> (v2).
>>
>> An alternative is to drop them in current API version. If we decide to
do
>> that, we need to bump the micro-version [1], and ask users to specify
the
>> microversion as part of the requests when they want the removed APIs.
>>
>> [1]
>>
http://docs.openstack.org/developer/nova/api_microversions.html#removing-a
>> n-api-method
>>
>> Best regards,
>> Hongbin
>>
>> -Original Message-
>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> Sent: December-16-15 8:21 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>>
>> I have been noticing a fair amount of redundant work going on in magnum,

>> python-magnumclient and magnum-ui with regards to APIs we have been
>> intending to drop support for. During the Tokyo summit it was decided
>> that we should support for only COE APIs that all COEs can support which

>> means dropping support for Kubernetes specific APIs for Pod, Service and

>> Replication Controller.
>>
>> Egor has submitted a blueprint[1] “Unify container actions between all
>> COEs” which has been approved to cover this work and I have submitted
the
>> first of many patches that will be needed to unify the APIs.
>>
>> The controversial patches are here:
>> https://review.openstack.org/#/c/258485/ and
>> https://review.openstack.org/#/c/258454/
>>
>> These patches are more a forcing function for our team to decide how to
>> correctly deprecate these APIs as I mention there is a lot of redundant
>> work going on these APIs. Please let me know your thoughts.
>>
>> Tom
>>
>> [1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-16 Thread SURO

Please find the reply inline.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 7:19 PM, Adrian Otto wrote:

On Dec 16, 2015, at 6:24 PM, Joshua Harlow  wrote:

SURO wrote:

Hi all,
Please review and provide feedback on the following design proposal for
implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time, Magnum-conductor(Mcon) receives a
container-operation-request from Magnum-API(Mapi), it will do the
initial validation, housekeeping and then pick a thread from the
executor_threadpool to execute the rest of the operations. Thus Mcon
will return from the RPC request context much faster without blocking
the Mapi. If the executor_threadpool is empty, Mcon will execute in a
manner it does today, i.e. synchronously - this will be the
rate-limiting mechanism - thus relaying the feedback of exhaustion.
[Phase0]
How often we are hitting this scenario, may be indicative to the
operator to create more workers for Mcon.
3. Blocking class of operations - There will be a class of operations,
which can not be made async, as they are supposed to return
result/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for NonBlocking class of operations -
there is a possible race around condition for create followed by
start/delete of a container, as things would happen in parallel. To
solve this, we will maintain a map of a container and executing thread,
for current execution. If we find a request for an operation for a
container-in-execution, we will block till the thread completes the
execution. [Phase0]

Does whatever does these operations (mcon?) run in more than one process?

Yes, there may be multiple copies of magnum-conductor running on separate hosts.


Can it be requested to create in one process then delete in another? If so is 
that map some distributed/cross-machine/cross-process map that will be 
inspected to see what else is manipulating a given container (so that the 
thread can block until that is not the case... basically the map is acting like 
an operation-lock?)

Suro> @Josh, just after this, I had mentioned

"The approach above puts a prerequisite that operations for a given
container on a given Bay would go to the same Magnum-conductor instance."

That suggested multiple instances of magnum-conductor. Also, my idea 
for implementing this was as follows - each magnum-conductor has an 'id' 
associated with it, which carries the notion of being the [0 - (N-1)]th 
instance of magnum-conductor. Given a request for a container operation, we 
would always have the bay-id and container-id. I was planning to use 
'hash(bay-id, container-id) modulo N' as the logic to ensure that the right 
instance picks up the intended request. Let me know if I am missing any 
nuance of AMQP here.
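
A rough sketch of that routing idea (the names are hypothetical, not
existing Magnum code), assuming N conductor instances that each know their
own index:

    import hashlib

    def conductor_index_for(bay_id, container_id, num_conductors):
        """Pick the conductor instance responsible for a (bay, container).

        A stable digest is used rather than Python's built-in hash(),
        since hash() is randomized per process and would not give every
        conductor the same answer.
        """
        key = ('%s:%s' % (bay_id, container_id)).encode('utf-8')
        digest = int(hashlib.md5(key).hexdigest(), 16)
        return digest % num_conductors

The conductor whose id equals conductor_index_for(bay_id, container_id, N)
would consume the request, so all operations for the same container land on
the same instance and keep their causal order.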

That’s how I interpreted it as well. This is a race prevention technique so 
that we don’t attempt to act on a resource until it is ready. Another way to 
deal with this is check the state of the resource, and return a “not ready” 
error if it’s not ready yet. If this happens in a part of the system that is 
unattended by a user, we can re-queue the call to retry after a minimum delay 
so that it proceeds only when the ready state is reached in the resource, or 
terminated after a maximum number of attempts, or if the resource enters an 
error state. This would allow other work to proceed while the retry waits in 
the queue.
Suro> @Adrian, I think the async model is to let the user issue a sequence of 
operations, which might be causally ordered.  I suggest we honor 
the causal ordering rather than implementing the implicit retry model. As per 
my proposal above, if we can arbitrate operations for a given bay and a given 
container, we should be able to achieve this ordering.







If it's just local in one process, then I have a library for u that can solve 
the problem of correctly ordering parallel operations ;)

What we are aiming for is a bit more distributed.

Suro> +1


Adrian


This mechanism can be further refined to achieve more asynchronous
behavior. [Phase2]
The approach above puts a prerequisite that operations for a given
container on a given Bay would go to the same Magnum-conductor instance.
[Phase0]
5. The hand-off between Mcon and a thread from executor_threadpool can
be reflected through new states on the 'container' object. These states
can be helpful to recover/audit, in case of Mcon restart. [Phase1]

Other considerations -
1. Using eventlet.greenthread instead of real threads => This approach
would require further refactoring of the execution code to embed yield
logic; otherwise a single greenthread would block others from progressing.
Given that we will extend the mechanism to multiple COEs, and to keep the
approach straightforward to begin with, we will use 'threading.Thread'
instead of 

Re: [openstack-dev] [midonet] how to configure static route for uplink

2015-12-16 Thread Jan Hilberath

On 12/17/2015 12:57 AM, Li Ma wrote:
> Hi, I'm following [1] to configure static route for uplink. I'm not
> sure whether I'm getting it.
>
> (1) Floating-IP subnet (gateway?) configuration in the guide?

You will have to replace 200.200.200.0/24 with the CIDR that you are 
using for your Neutron external network (that provides the floating IPs).


> (2) eth0 configuration in the guide?

The eth0 in the guide is the interface on the gateway node that provides 
connectivity to the Internet. eth0 is just an example, the interface 
name may of course be different on your system.


> (3) Do I need to configure uplink physical router? back route to eth0?

No, because the outgoing traffic will be NATed and masqueraded by 
iptables. The uplink router thus will see the packets as coming from 
eth0's IP address.


> (4) What if I just bind router0:port0 to physical nic: eth0, and don't
> do the uplink bridge and veth pair stuff. Can it work?

No. MidoNet's datapath would then grab all of eth0's traffic and things 
will get messy in the underlay.


>
> [1] 
https://docs.midonet.org/docs/v2015.06/en/operations-guide/content/static_setup.html

>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Tony Breeds
On Wed, Dec 16, 2015 at 08:17:03PM -0600, Matt Riedemann wrote:

> 2100 UTC is 3pm for me. The latest I could probably do is 2200 UTC but I
> can't do anything later than that since I have a 4 year old that has to be
> picked up every day.
> 
> The only way we probably get a meeting time that's good for China/Japan is
> if someone else runs one of those at like 00:00 UTC, which maybe Tony Breeds
> would be willing to do.

I'd be willing to do that BUT ...

> I'd kind of like to get a (stable) regular meeting going before we start
> switching it up though, especially since I'm not sure how many people are in
> an Asian timezone that are involved in stable branch maintenance (Tony is
> the closest lead I can think of near that time really).

I really think we need a period to "bootstrap" this.  If we move the APAC
meeting to a time that Matt can't attend then we'd dilute the value of that
meeting.

Excluding people isn't my aim, especially as we've been asking for help for a
long time.  I'd like to see us stick with this schedule for a while[1]; then,
once we have some traction and trust, we can look at including more people.

I do note that attendance at the meeting isn't *required*[2]. Many of us are in
#openstack-stable; a quick chat there can be very productive.

Yours Tony.

[1] Perhaps post N election?
[2] I agree it is valuable.


pgpZEWAaXCH0k.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2015-12-16 16:56:39 -0800:
> Clint,
> 
> I think you are categorically dismissing a very real ops challenge of how to 
> set correct system limits, and how to adjust them in a running system. I have 
> been stung by this challenge repeatedly over the years. As developers we 
> *guess* at what a sensible default for a value will be for a limit, but we 
> are sometimes wrong. When we are, that guess has a very real, and very 
> negative impact on users of production systems. The idea of using one limit 
> for all users is idealistic. I’m convinced based on my experience that it's 
> not the best approach in practice. What we usually want to do is bump up a 
> limit for a single user, or dynamically drop a limit for all users. The 
> problem is that very few systems implement limits in a way they can be 
> adjusted while the system is running, and very rarely on a per-tenant basis. 
> So yes, I will assert that having a quota implementation and the related 
> complexity is justified by the ability to adapt limit levels while the system 
> is running.
> 
> Think for a moment about the pain that an ops team goes through when they 
> have to take a service down that’s affecting thousands or tens of thousands 
> of users. We have to send zillions of emails to customers, we need to hold 
> emergency change management meetings. We have to answer questions like “why 
> didn’t you test for this?” when we did test for it, and it worked fine under 
> simulation, but not in a real production environment under this particular 
> stimulus. "Why can’t you take the system down in sections to keep the service 
> up?" When the answer to all this is “because the developers never put 
> themselves in the shoes of the ops team when they designed it.”
> 
> Those who know me will attest to the fact that I care deeply about applying 
> the KISS principle. The principle guides us to keep our designs as simple as 
> possible unless it’s essential to make them more complex. In this case, the 
> complexity is justified.
> 
> Now if there are production ops teams for large scale systems that argue that 
> dynamic limits and per-user overrides are pointless, then I’ll certainly 
> reconsider my position.
> 

Hm, I think we agree that ops need ways to enact policies smoothly,
that's for sure, and I am sorry I've failed to communicate that. My
experience has been somewhat different, and I tend to treat every single
request that comes in as the one that will crash your service and trigger
those emails. With thousands of users, weakening the limitations that
protect the service for any subset of them seems like a huge undertaking
and carries a large risk. Making the system resilient no matter who is
talking to it would be my focus.

However, I'm not directly involved and we're just going in circles,
so we'll just have to agree to disagree. I hope, sincerely, that I'm
wrong. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-16 Thread Kevin Benton
If you go this route, taas will have to implement flow recovery logic
itself as well if you want hitless restarts.

On Wed, Dec 16, 2015 at 5:28 PM, Soichi Shigeta <
shigeta.soi...@jp.fujitsu.com> wrote:

>
>  Thank you for your helpful comments.
>
>  I'd like to update my proposal as follows:
>
>   1. Set an integer cookie value to taas flows.
>  Maybe all "1" for short term tentative value.
>   2. Modify the cleanup logic in ovs-agent not to delete flows
>  which have a reserved integer cookie value.
>
>
>
> Sorry, that I don't see this earlier. Yes, cookies have integer values, so
>> we won't be able to set string there. May be we can have a reserved
>> integer
>> cookie value for a project like all "1".
>>
>> I won't support idea of modifying cleanup logic not to drop 0x0 cookies.
>> During implementation of graceful restart it was not dropped at first, but
>> I get rid of it  as having a lot of flows not related to anything was not
>> desirable, so we should try to avoid it here, too.
>>
>> On Wed, Dec 16, 2015 at 7:46 AM, Soichi Shigeta <
>> shigeta.soi...@jp.fujitsu.com> wrote:
>>
>>
>>> o) An idea to fix:
>>>

1. Set "taas" stamp(*) to taas flows.
2. Modify the cleanup logic in ovs-agent not to delete entries
   stamped as "taas".

* Maybe a static string.
  If we need to use a string which generated dynamically
  (e.g. uuid), API to interact with ovs-agent is required.


Last week I proposed to set a static string (e.g. "taas") as cookie
>>>of flows created by taas agent.
>>>
>>>But I found that the value of a cookie should not be a string,
>>>but an integer.
>>>
>>>At line 187 in
>>>
>>> "neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py":
>>>self.agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK
>>>
>>>In case of we set an integer value to cookie, coordination
>>>(reservation of range) is required to avoid conflict of cookies with
>>>other neutron sub-projects.
>>>
>>>As an alternative (*** short term ***) solution, my idea is:
>>>Modify the clean up logic in ovs agent not to delete flows whose
>>>"cookie = 0x0".
>>>Because old flows created by ovs agent have an old stamp, "cookie =
>>>0x0" means it was created by other than ovs agent.
>>>
>>># But, this idea has a disadvantage:
>>>  If there are flows which have been created by older version of ovs
>>>  agent, they can not be cleaned.
>>>
>>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] "Mistral HA and multi-regional support" meeting minutes

2015-12-16 Thread Anastasia Kuznetsova
Moshe,

thanks for sharing these meeting minutes.
I am happy to help with gate and new functional tests (probably some
destructive scenarios) which will help us to test Mistral HA.

On Wed, Dec 16, 2015 at 8:01 PM, ELISHA, Moshe (Moshe) <
moshe.eli...@alcatel-lucent.com> wrote:

> Hi all,
>
>
>
> Renat and I had an action item to think about "Mistral HA and
> multi-regional support".
>
> No big surprises. These are the meeting minutes:
>
>
>
> Mistral Multi-Region:
>
> * A blueprint already exists [1]
>
> * Most topics were already discussed in Mitaka OpenStack summit and are
> described in the blueprint.
>
>
>
> Mistral HA
>
> * Add a gate that runs Mistral in HA mode (Ask akuznetsova
> for more info as she looked into this once).
>
> * Add more functional tests that are focused on HA tests
>
> * Put together a list of known HA issues that are currently not handled
> (For example, if an executor dies immediately after dequeuing a task) and
> think of solutions.
>
> * Expose Mistral load metrics to allow some external system to decide if
> it needs to scale Mistral components in / out.
>
>
>
> [1]
> https://blueprints.launchpad.net/mistral/+spec/mistral-multi-region-support
>
>
>
> Thanks.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Disable 3.[3-7] gates for master?

2015-12-16 Thread Bartłomiej Piotrowski
On 2015-12-16 15:42, Alex Schultz wrote:
> On Wed, Dec 16, 2015 at 3:33 AM, Bartłomiej Piotrowski
>  wrote:
>> Fuelers,
>>
>> with the switch to CentOS 7, we also started using Puppet 3.8 in place
>> of 3.4. Is there any reason to run entire range of
>> gate-fuel-library-puppet-unit-3.*-dsvm-centos7 tests?
>>
>> I suppose we could leave only 3.8 and 4.0 there (at least for master).
>> For stable branches we could keep just 3.4, 3.8 and 4.0 and disable the
>> rest.
>>
>> What do you think?
>>
> 
> We should probably figure out what versions are supported by the
> distributions and target those. I would say we need to keep 3.4 since
> that's what ships with Ubuntu.  That being said as we move to
> supporting Fuel being installed via packages and not relying on the
> existing packages being provided by MOS, the end user could use any
> version of puppet they so desire via the puppetlabs repositories.
> We're just using the same set of tests that the Puppet OpenStack folks
> are using so it would continue to benefit us to support the same set
> of tests if there is no compelling reason not to.  Are we running into
> a particular issue with the other jobs?
> 
> Thanks,
> -Alex

So all these releases are actually supported? Looking at GitHub, it
doesn't look like they received much maintenance, e.g. no bugfix release
for 3.3.x since November 6, 2013.

I'm bringing this up because I wanted to use the execute method of
Puppet::Util::Execution, but it was only introduced in 3.4. I guess if
we expect users to run any of these Puppet releases with our code, I'll
just work around it somehow.

Thanks for explanation,
Bartłomiej


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-12-16 16:05:29 -0800:
> Yeah, as an op, I've run into a few things that need quotas that just have 
> basically hardcoded values. Heat stacks, for example: it's a single global in 
> /etc/heat/heat.conf:max_stacks_per_tenant=100. Instead of being able to tweak 
> it for just our one project that legitimately has to create over 200 stacks, 
> I had to set it cloud wide and I had to bounce services to do it. Please 
> don't do that.
> 
> Ideally, it would be nice if the quota stuff could be pulled out into its own 
> shared lib (oslo?) and shared amongst projects so that they don't have to 
> spend much effort implementing quotas. Maybe then things that need quotas 
> that don't currently have them can more easily get them.
> 

You had to change a config value, once, and that's worse than the added
code complexity and server load that would come from tracking quotas for
a distributed service?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking guide meeting] Meeting Tomorrow!

2015-12-16 Thread Edgar Magana
Hello All,

Friendly reminder email that we have our networking guide team meeting tomorrow 
Thursday at 16:00 UTC in #openstack-meeting


Agenda:

https://wiki.openstack.org/wiki/Documentation/NetworkingGuide/Meetings


Thanks,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] meeting time proposal

2015-12-16 Thread Rochelle Grober
Any chance you could make the Monday meeting a few hours later?  Japan and 
China are still mostly in bed then, but two hours later would allow both to 
participate.

--Rocky

> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Wednesday, December 16, 2015 11:12 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [stable] meeting time proposal
> 
> I'm not entirely sure what the geo distribution is for everyone that
> works on stable, but I know we have people in Europe and some people in
> Australia.  So I was thinking alternating weekly meetings:
> 
> Mondays at 2100 UTC
> 
> Tuesdays at 1500 UTC
> 
> Does that at least sort of work for people that would be interested in
> attending a meeting about stable? I wouldn't expect a full hour
> discussion, my main interests are highlighting status, discussing any
> issues that come up in the ML or throughout the week, and whatever else
> people want to go over (work items, questions, process discussion,
> etc).
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][tempest][defcore] Process to improve test coverage in tempest

2015-12-16 Thread Egle Sigler
Thank you Flavio for bringing this up! We are using tempest tests for
DefCore testing, and we would like to work with anyone willing to increase
coverage in any of the currently covered capabilities. We would also like
to hear from the teams when they are planning on removing, changing, or
renaming tests, as that could affect what DefCore tests.

Upcoming DefCore guidelines and tests:
https://github.com/openstack/defcore/blob/master/2016.01.json

Thank you,
Egle

On 12/8/15, 1:25 PM, "Flavio Percoco"  wrote:

>Greetings,
>
>I just reviewed a patch in tempest that proposed adding new tests for
>Glance's task API. While I believe it's awesome that folks in the
>tempest team keep adding missing tests for projects, I think it'd be
>better for the tempest team, the project's team and defcore if we'd
>discuss these tests before they are worked on. This should help people
>avoid wasting time.
>
>I believe these cases are rare but the benefits of discussing missing
>tests across teams could also help prioritizing the work based on what
>the teams goals are, what the defcore team needs are, etc.
>
>So, I'd like to start improving this process by inviting folks from
>the tempest team to join project's meeting whenever new tests are
>going to be worked on.
>
>I'd also like to invite PTLs (or anyone, really) from each team to go
>through what's in tempest and what's missing and help this team
>improve the test suite. Remember that these tests are also used by the
>defcore team and they are not important just for the CI but have an
>impact on other areas as well.
>
>I'm doing the above for Glance and I can't stress enough how important
>it is for projects to do the same.
>
>Do teams have a different workflow/process to increase tests coverage
>in tempest?
>
>Cheers,
>Flavio
> 
>-- 
>@flaper87
>Flavio Percoco
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >