Re: [Openstack-operators] [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-23 Thread John Griffith
On Mon, May 23, 2016 at 8:32 AM, Ivan Kolodyazhny  wrote:

> Hi developers and operators,
> I would like to get feedback from you on my idea before I start
> working on a spec.
>
> In Nova, we've got the max_concurrent_builds option [1] to set the 'Maximum
> number of instance builds to run concurrently' per compute node. There is no
> equivalent in Cinder.
>
> Why do we need it for Cinder? IMO, it could help us address the following
> issues:
>
>    - Creating N volumes at the same time greatly increases resource usage
>    by the cinder-volume service. The image caching feature [2] could help us
>    a bit when we create a volume from an image, but we still have to upload
>    N images to the volume backend at the same time.
>    - Deleting N volumes in parallel. Usually this is not a very hard task
>    for Cinder, but if you have to delete 100+ volumes at once, you can hit
>    various issues with DB connections, CPU, and memory usage. In the case of
>    LVM, it may also use the 'dd' command to clean up volumes.
>    - It would also provide a kind of load balancing in HA mode: if a
>    cinder-volume process is busy with its current operations, it will not
>    pick up the next message from RabbitMQ, and another cinder-volume service
>    will handle it.
>    - From the user's perspective, it is better to create/delete N volumes
>    a bit more slowly than to fail after X volumes have been created/deleted.
>
>
> [1]
> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
> [2]
> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
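To make the proposal concrete: Nova's option is essentially an integer that sizes a
semaphore around instance builds in the compute manager. A rough sketch of what an
equivalent knob could look like for cinder-volume is below; the option name, default,
and wiring are illustrative only and do not exist in Cinder today.

```python
# Illustrative sketch only: neither this option nor this wiring exists in Cinder.
import threading

from oslo_config import cfg

volume_opts = [
    cfg.IntOpt('max_concurrent_volume_creates',
               default=0,
               help='Maximum number of volume create operations to run '
                    'concurrently on this cinder-volume service. '
                    '0 means unlimited.'),
]

CONF = cfg.CONF
CONF.register_opts(volume_opts)


class _Unlimited(object):
    """No-op guard used when the limit is disabled (0)."""

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        return False


def build_create_guard(conf=CONF):
    """Return a context manager sized by max_concurrent_volume_creates."""
    limit = conf.max_concurrent_volume_creates
    return threading.Semaphore(limit) if limit > 0 else _Unlimited()


# In a (hypothetical) volume manager, the guard would wrap the driver call:
#
#     self._create_guard = build_create_guard()
#     ...
#     with self._create_guard:
#         self.driver.create_volume(volume)
```

The same pattern would apply to deletes; whether such a guard belongs in the manager
or in a per-driver setting is exactly the question raised in the reply below.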
Just curious about a couple of things: is this attempting to solve a
problem in the actual Cinder volume service, or is this trying to solve
problems with backends that can't keep up and deliver resources under heavy
load?  I get the copy-image-to-volume case; that's a special case that
certainly does impact the Cinder services and the Cinder node itself, but
there's already throttling going on there, at least in terms of the IO allowed.

Also, I'm curious... would the existing API Rate Limit configuration achieve
the same sort of thing you want to do here?  Granted, it's not selective, but
maybe it's worth mentioning.

If we did do something like this I would like to see it implemented as a
driver config; but that wouldn't help if the problem lies in the Rabbit or
RPC space.  That brings me back to wondering exactly where we want to solve
these problems, and which ones.  If delete is causing problems like you
describe, I'd suspect we have an issue in our DB code (too many calls to
start with) and that we've got some overhead elsewhere that should be
eradicated.  Delete is a super simple operation on the Cinder side of
things (and on most backends), so I'm a bit freaked out thinking that it's
taxing resources heavily.

Thanks,
John


Re: [Openstack-operators] [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread John Griffith
On Wed, May 18, 2016 at 9:20 AM, Sean Dague  wrote:

> nova-net is now deprecated - https://review.openstack.org/#/c/310539/
>
> And we're in the process in Nova of doing some spring cleaning and
> deprecating the proxies to other services -
> https://review.openstack.org/#/c/312209/
>
> At some point in the future, after deprecation, the proxy code is going to
> stop working: either accidentally, because we're not going to test or
> fix it forever (and we aren't going to track upstream API changes to
> the proxy targets), or intentionally, when we decide to delete it to make
> it easier to address core features and bugs that everyone wants addressed.
>
> However, the world moves forward slowly. Consider the following scenario.
>
> We delete nova-net & the network proxy entirely in Peru (a not entirely
> unrealistic idea). At that release there are a bunch of people just
> getting around to Newton. Their deployments allow all these things to
> happen, which are going to 100% break when they upgrade, and people are
> writing more and more OpenStack software every cycle.
>
> How do we signal to users this kind of deprecation? Can we give sites
> tools to help prevent new software being written to deprecated (and
> scheduled for deletion) APIs?
>
> One idea was a "big red switch" in the form of a config option
> ``disable_deprecated_apis=True`` (defaults to False), which would set
> all deprecated APIs to 404 routes.
>
> One of the nice ideas here is that this would allow some API servers to
> have this set, and others not. So users could point to the "clean" API
> server and figure out that they will break, while the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint that still
> included these deprecated things could be provided, for now, for projects
> that really needed it. Either way it would allow some site-assisted
> transition, something like the -Werror flag in gcc.
>
> In the Nova case the kinds of things ending up in this bucket are going
> to be interfaces that people *really* shouldn't be using any more. Many
> of them date back to when OpenStack was only 2 projects, and the concept
> of splitting out function wasn't really thought about (note: we're
> getting ahead of this one for the 'placement' REST API, so it won't have
> any of these issues). At some point this house cleaning was going to
> have to happen, and now seems to be the time to get it rolling.
>
> Feedback on this idea would be welcomed. We're going to deprecate the
> proxy APIs regardless; however, disable_deprecated_apis is its own idea
> with its own consequences, and we really want feedback before pushing
> forward on this.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
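To make the switch concrete, here is a minimal sketch of how it could behave, written
as generic WSGI middleware. The option name comes from the proposal above; the route
table and the wiring are illustrative and are not Nova's actual implementation.

```python
# Illustrative sketch only: not Nova's implementation of the proposed switch.
from oslo_config import cfg
import webob.dec
import webob.exc

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('disable_deprecated_apis',
                default=False,
                help='When true, deprecated API routes return 404.'),
])

# Example route prefixes an operator might treat as deprecated (illustrative).
DEPRECATED_PREFIXES = (
    '/os-networks',   # nova-net proxy
    '/os-volumes',    # volume proxy
    '/images',        # image proxy
)


class DisableDeprecatedAPIs(object):
    """WSGI middleware that turns deprecated routes into 404s when enabled."""

    def __init__(self, application):
        self.application = application

    @webob.dec.wsgify
    def __call__(self, req):
        deprecated = any(prefix in req.path for prefix in DEPRECATED_PREFIXES)
        if CONF.disable_deprecated_apis and deprecated:
            return webob.exc.HTTPNotFound()
        # Otherwise pass the request through to the wrapped application.
        return self.application
```

Deployed this way, one API endpoint could load the filter with the option set to True
(the "clean" server) while another leaves it False, which is the split described above.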
I like the idea of a switch in the config file.  To Dean's point, would it
also be worth considering a "list-deprecated-calls" that could give him a
list without having to do the roundtrip every time?  That might not
actually solve anything for him, but perhaps something along those lines
would help?
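Along those lines, a "list-deprecated-calls" report could be as simple as publishing
the table that drives the 404 behavior, so a site can audit its API logs against it
instead of probing endpoints. A purely illustrative, standalone sketch:

```python
# Illustrative only: a standalone helper in the spirit of "list-deprecated-calls".
# The table would be maintained alongside whatever marks the routes deprecated.
DEPRECATED_CALLS = {
    'os-networks': 'proxy to Neutron, scheduled for removal',
    'os-volumes': 'proxy to Cinder, scheduled for removal',
    'images': 'proxy to Glance, scheduled for removal',
}


def list_deprecated_calls():
    """Return (route, reason) pairs so deployers can audit client usage."""
    return sorted(DEPRECATED_CALLS.items())


if __name__ == '__main__':
    for route, reason in list_deprecated_calls():
        print('%s: %s' % (route, reason))
```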


Re: [Openstack-operators] [Openstack] [all] OpenStack voting by the numbers

2015-07-30 Thread John Griffith
On Wed, Jul 29, 2015 at 10:36 AM, David Medberry openst...@medberry.net wrote:

 Nice writeup, Maish! Very nice.

 On Wed, Jul 29, 2015 at 10:27 AM, Maish Saidel-Keesing mais...@maishsk.com wrote:

 Some of my thoughts on the Voting process.


 http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html

 Guess which category has the most number of submissions??
 ;)

 --
 Best Regards,
 Maish Saidel-Keesing





Well, I would expect most people attending the Summit to be most
interested in a general category like Operations.  Most of the audience
here would naturally be most interested in how to deploy and manage; that
just makes sense, and I don't think anybody would argue with that.

I'm sort of confused how this correlates to the operator community
providing feedback, unless I'm misinterpreting some of your writing here.

While there are some great talks about deploying and operating an OpenStack
cloud in there, I wouldn't make the general assumption that these are
operators giving feedback.  At a quick glance, it appears that the bulk of
the talks are vendors talking about their monitoring and deployment tools,
which IMO is different from the voice of the operators.  This is, in my
opinion, the sort of expected makeup of the talks at the summit these days.

Just my two cents; great write-up, just take it with a grain of salt, so to speak.

Thanks,
John