Re: [Openstack-operators] [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Matt Riedemann

On 6/7/2018 1:54 PM, Jay Pipes wrote:


If Cinder tracks volume attachments as consumable resources, then this 
would be my preference.


Cinder does:

https://developer.openstack.org/api-ref/block-storage/v3/#attachments

However, there is no limit in Cinder on those as far as I know.
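If you want to see what Cinder tracks, the attachments API above is queryable
like any other REST resource. A rough sketch with python-requests (endpoint,
project ID, token and server UUID are placeholders, and the attachments API
needs block storage microversion 3.27 or later if I remember right):

    import requests

    # List volume attachments from the Cinder v3 attachments API and count
    # the ones that belong to a single instance. Identifiers are placeholders.
    CINDER = "https://cinder.example.com/v3/<project_id>"
    HEADERS = {
        "X-Auth-Token": "<token>",
        "OpenStack-API-Version": "volume 3.27",
    }

    resp = requests.get(CINDER + "/attachments", headers=HEADERS)
    resp.raise_for_status()
    attachments = resp.json()["attachments"]
    mine = [a for a in attachments if a.get("instance") == "<server_uuid>"]
    print("attachments for this instance:", len(mine))

But as noted, nothing in Cinder caps that count today.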

--

Thanks,

Matt



Re: [Openstack-operators] [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Jay Pipes

On 06/07/2018 01:56 PM, melanie witt wrote:

Hello Stackers,

Recently, we've received interest in increasing the maximum number of volumes 
allowed to attach to a single instance beyond 26. The limit of 26 comes from a 
historical limitation in libvirt (if I remember correctly) that no longer 
applies at the libvirt level today. So, we're looking at providing a way to 
attach more than 26 volumes to a single instance and we want your feedback.

We'd like to hear from operators and users about their use cases for attaching 
a large number of volumes to a single instance. If you could share your use 
cases, it would help us greatly in moving forward with an approach for 
increasing the maximum.

Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable performance on 
a single compute host (64 or 128, for example). Pros: helps prevent the 
potential for poor performance on a compute host from attaching too many 
volumes. Cons: doesn't let anyone opt in to a higher maximum if their 
environment can handle it.

B) Creating a config option to let operators choose how many volumes are 
allowed to attach to a single instance. Pros: lets operators opt in to a 
maximum that works in their environment. Cons: it's not discoverable for those 
calling the API.

C) Creating a configurable API limit for the maximum number of volumes to 
attach to a single instance that is either a quota or similar to a quota. 
Pros: lets operators opt in to a maximum that works in their environment. 
Cons: it's yet another quota?


If Cinder tracks volume attachments as consumable resources, then this 
would be my preference.


Best,
-jay



Re: [Openstack-operators] [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Matt Riedemann

+operators (I forgot)

On 6/7/2018 1:07 PM, Matt Riedemann wrote:

On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest in increasing the maximum number of 
volumes allowed to attach to a single instance beyond 26. The limit of 26 
comes from a historical limitation in libvirt (if I remember correctly) 
that no longer applies at the libvirt level today. So, we're looking at 
providing a way to attach more than 26 volumes to a single instance and we 
want your feedback.


The 26 volumes thing is a libvirt driver restriction.

There was a bug at one point where powervm (or powervc) was capping 
out at 80 volumes per instance because of restrictions in the 
build_requests table in the API DB:


https://bugs.launchpad.net/nova/+bug/1621138

They wanted to get to 128, because that's how power rolls.



We'd like to hear from operators and users about their use cases for 
attaching a large number of volumes to a single instance. If you could 
share your use cases, it would help us greatly in moving forward with an 
approach for increasing the maximum.

Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable 
performance on a single compute host (64 or 128, for example). Pros: 
helps prevent the potential for poor performance on a compute host 
from attaching too many volumes. Cons: doesn't let anyone opt in to a 
higher maximum if their environment can handle it.

B) Creating a config option to let operators choose how many volumes are 
allowed to attach to a single instance. Pros: lets operators opt in to 
a maximum that works in their environment. Cons: it's not discoverable 
for those calling the API.


I'm not a fan of a non-discoverable config option that impacts API 
behavior indirectly, e.g. on cloud A I can boot from volume with 64 
volumes but not on cloud B.
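To make that concrete, option B is basically just this shape of thing (purely
illustrative -- the option name and default here are made up, this is not an
actual nova option), and nothing in the API ever tells the user what the
operator picked:

    from oslo_config import cfg

    # Illustrative sketch of approach B only; 'max_volumes_per_instance' is a
    # hypothetical option name, not something that exists in nova.
    _opts = [
        cfg.IntOpt('max_volumes_per_instance',
                   default=26,
                   min=1,
                   help='Hypothetical per-instance volume attach cap.'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(_opts, group='compute')

    def attach_allowed(current_attachment_count):
        # The compute service would reject attach requests over the cap.
        return current_attachment_count < CONF.compute.max_volumes_per_instance

The client's only way to find the limit is to hit it and get an error back, 
which is the interop problem.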




C) Creating a configurable API limit for the maximum number of volumes to 
attach to a single instance that is either a quota or similar to a 
quota. Pros: lets operators opt in to a maximum that works in their 
environment. Cons: it's yet another quota?


This seems the most reasonable to me if we're going to do this, but I'm 
probably in the minority. Yes, more quota limits suck, but this one is (1) 
discoverable by API users and therefore (2) interoperable.


If we did the quota thing, I'd probably default to unlimited and let the 
cinder volume quota cap it for the project as it does today. Then admins 
can tune it as needed.
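For comparison, quota-style limits are already discoverable today through the
limits API, so a client can check before it tries and fails. A rough sketch
(endpoint and token are placeholders, and 'maxVolumesPerInstance' is a
hypothetical key showing where such a limit *could* show up, not an existing
one):

    import requests

    # Nova's absolute limits are visible via GET /limits; a quota-style
    # attachment cap could surface the same way.
    resp = requests.get(
        "https://nova.example.com/v2.1/limits",
        headers={"X-Auth-Token": "<token>"},
    )
    resp.raise_for_status()
    absolute = resp.json()["limits"]["absolute"]
    print(absolute["maxTotalInstances"])           # exists today
    print(absolute.get("maxVolumesPerInstance"))   # hypothetical new limit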





--

Thanks,

Matt



[Openstack-operators] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread melanie witt

Hello Stackers,

Recently, we've received interest in increasing the maximum number of volumes 
allowed to attach to a single instance beyond 26. The limit of 26 comes from a 
historical limitation in libvirt (if I remember correctly) that no longer 
applies at the libvirt level today. So, we're looking at providing a way to 
attach more than 26 volumes to a single instance and we want your feedback.

We'd like to hear from operators and users about their use cases for attaching 
a large number of volumes to a single instance. If you could share your use 
cases, it would help us greatly in moving forward with an approach for 
increasing the maximum.

Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable performance on 
a single compute host (64 or 128, for example). Pros: helps prevent the 
potential for poor performance on a compute host from attaching too many 
volumes. Cons: doesn't let anyone opt in to a higher maximum if their 
environment can handle it.

B) Creating a config option to let operators choose how many volumes are 
allowed to attach to a single instance. Pros: lets operators opt in to a 
maximum that works in their environment. Cons: it's not discoverable for those 
calling the API.

C) Creating a configurable API limit for the maximum number of volumes to 
attach to a single instance that is either a quota or similar to a quota. 
Pros: lets operators opt in to a maximum that works in their environment. 
Cons: it's yet another quota?


Please chime in with your use cases and/or thoughts on the different 
approaches.


Thanks for your help,
-melanie



[Openstack-operators] Berlin Summit CFP Deadline July 17

2018-06-07 Thread Ashlee Ferguson
Hi everyone,

The Call for Presentations is open for the Berlin Summit, November 13-15. The 
deadline to submit your presentation has been extended to July 17.

At the Vancouver Summit, we focused on open infrastructure integration as the 
Summit has evolved over the years to cover more than just OpenStack. We had 
over 30 different projects from the open infrastructure community join, 
including Kubernetes, Docker, Ansible, OpenShift and many more. 

The Tracks were organized around specific use cases and will remain the same 
for Berlin with the addition of Hands on Workshops as its own dedicated Track. 
We encourage you to submit presentations covering the open infrastructure tools 
you’re using, as well as the integration work needed to address these use 
cases. We also encourage you to invite peers from other open source communities 
to speak and collaborate.

The Tracks are:

• CI/CD
• Container Infrastructure
• Edge Computing
• Hands on Workshops
• HPC / GPU / AI
• Open Source Community
• Private & Hybrid Cloud
• Public Cloud
• Telecom & NFV

Community voting, the first step in building the Summit schedule, will open in 
mid July. Once community voting concludes, a Programming Committee for each 
Track will build the schedule. Programming Committees are made up of 
individuals from many different open source communities working in open 
infrastructure, in addition to people who have participated in the past.

If you’re interested in nominating yourself or someone else to be a member of 
the Summit Programming Committee for a specific Track, please fill out the 
nomination form. Nominations will close on June 28.

Again, the deadline to submit proposals is July 17. Please note topic 
submissions for the OpenStack Forum (planning/working sessions with OpenStack 
devs and operators) will open at a later date. The Early Bird registration 
deadline will be in mid August.

We’re working hard to make it the best Summit yet, and look forward to bringing 
together different open infrastructure communities to solve these hard problems 
together. 

Want to provide feedback on this process? Please focus discussion on the 
openstack-community mailing list, or contact the Summit Team directly at 
sum...@openstack.org.

See you in Berlin!
Ashlee



Ashlee Ferguson
OpenStack Foundation
ash...@openstack.org







Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-06-07 Thread Matt Riedemann

On 2/6/2018 6:44 PM, Matt Riedemann wrote:

On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than 
compute nodes falling over in the future.


Note that once a compute has a successful build, the consecutive build 
failures counter is reset. So if your limit is the default (10) and you 
have 10 failures in a row, the compute service is auto-disabled. But if 
you have say 5 failures and then a pass, it's reset to 0 failures.


Obviously if you're doing a pack-first scheduling strategy rather than 
spreading instances across the deployment, a burst of failures could 
easily disable a compute, especially if that host is overloaded like you 
saw. I'm not sure if rescheduling is helping you or not - that would be 
useful information since we consider the need to reschedule off a failed 
compute host as a bad thing. At the Forum in Boston when this idea came 
up, it was specifically for the case that operators in the room didn't 
want a bad compute to become a "black hole" in their deployment causing 
lots of reschedules until they get that one fixed.


Just an update on this. There is a change merged in Rocky [1] which is 
also going through backports to Queens and Pike. If you've already 
disabled the "consecutive_build_service_disable_threshold" config option 
then it's a no-op. If you haven't, 
"consecutive_build_service_disable_threshold" is now used to count build 
failures but no longer auto-disables the compute service when the 
configured threshold (10 by default) is met. The build failure count is 
then used by a new weigher (enabled by default) to sort hosts with build 
failures to the back of the list of candidate hosts for new builds. Once 
there is a successful build on a given host, the failure count is reset. 
The idea here is that hosts which are failing are given lower priority 
during scheduling.


[1] https://review.openstack.org/#/c/572195/
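For anyone curious, the effect is roughly this (toy illustration only, not the
merged code -- the real change is a scheduler weigher, see [1] for details):

    # Toy sketch: a large negative multiplier on the per-host build failure
    # count pushes failing hosts to the back of the candidate list instead of
    # disabling them outright.
    hosts = [
        {'name': 'compute1', 'failed_builds': 0},
        {'name': 'compute2', 'failed_builds': 3},
        {'name': 'compute3', 'failed_builds': 1},
    ]
    multiplier = -1000000.0
    ranked = sorted(hosts, key=lambda h: multiplier * h['failed_builds'],
                    reverse=True)
    print([h['name'] for h in ranked])  # ['compute1', 'compute3', 'compute2']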

--

Thanks,

Matt



[Openstack-operators] [nova] Need feedback on spec for handling down cells in the API

2018-06-07 Thread Matt Riedemann
We have a nova spec [1] which is at the point that it needs some API 
user (and operator) feedback on what the nova API should do when 
listing servers and there are down cells (unable to reach the cell DB or 
it times out).


tl;dr: the spec proposes to return "shell" instances which have the 
server uuid and created_at fields set, and maybe some other fields we 
can set, but otherwise a bunch of fields in the server response would be 
set to UNKNOWN sentinel values. This would be unversioned, and therefore 
could wreak havoc on existing client side code that expects fields like 
'config_drive' and 'updated' to be of a certain format.
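To make that concrete, a "shell" entry in a list response would look roughly
like this (purely illustrative -- which fields end up real vs. UNKNOWN is
exactly what the spec is trying to settle):

    # Illustrative sketch of a "shell" server record for a down cell; values
    # and field selection are placeholders, not the spec's final answer.
    shell_server = {
        "id": "<uuid known from the API database>",
        "created_at": "<timestamp known from the API database>",
        "status": "UNKNOWN",
        "config_drive": "UNKNOWN",  # clients may expect '' or 'True' here
        "updated": "UNKNOWN",       # clients may expect an ISO 8601 timestamp
    }

Existing client code that parses 'updated' as a timestamp or 'config_drive' as 
a boolean-ish string is what would break.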


There are alternatives listed in the spec so please read this over and 
provide feedback since this is a pretty major UX change.


Oh, and no pressure, but today is the spec freeze deadline for Rocky.

[1] https://review.openstack.org/#/c/557369/

--

Thanks,

Matt



[Openstack-operators] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-06-07 Thread Tobias Rydberg

Hi folks,

Time for a new meeting for the Public Cloud WG.  Agenda can be found at 
https://etherpad.openstack.org/p/publiccloud-wg


See you all on IRC at 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED




Re: [Openstack-operators] [openstack-ansible][releases][governance] Change in OSA roles tagging

2018-06-07 Thread Jean-Philippe Evrard
> Right, you can set the stable-branch-type field to 'tagless' (see
> http://git.openstack.org/cgit/openstack/releases/tree/README.rst#n462) and
> then set the branch location field to the SHA you want to use.

Exactly what I thought.

> If you would be ready to branch all of the roles at one time, you could
> put all of them into 1 deliverable file. Otherwise, you will want to
> split them up into their own files.

Same.

>
> And since you have so many, I will point out that we're really into
> automation over here on the release team, and if you wanted to work on
> making the edit-deliverable command smart enough to determine the SHA
> for you I could walk you through that code to get you started.

Cool.
