Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-18 Thread Belmiro Moreira
Hi,
this looks reasonable to me, but I would prefer B.
In this case the operator can configure the hard limit.
I don't think we need more granularity or to expose it via the API.

Belmiro

On Fri, Jun 8, 2018 at 3:46 PM Dan Smith  wrote:

> > Some ideas that have been discussed so far include:
>
> FYI, these are already in my order of preference.
>
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host
> > from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> > higher maximum if their environment can handle it.
>
> I prefer this because I think it can be done per virt driver, for
> whatever actually makes sense there. If powervm can handle 500 volumes
> in a meaningful way on one instance, then that's cool. I think libvirt's
> limit should likely be 64ish.
>
> > B) Creating a config option to let operators choose how many volumes are
> > allowed to attach to a single instance. Pros: lets operators opt-in to
> > a maximum that works in their environment. Cons: it's not discoverable
> > for those calling the API.
>
> This is a fine compromise, IMHO, as it lets operators tune it per
> compute node based on the virt driver and the hardware. If one compute
> is using nothing but iSCSI over a single 10g link, then they may need to
> clamp that down to something more sane.
>
> Like the per virt driver restriction above, it's not discoverable via
> the API, but if it varies based on compute node and other factors in a
> single deployment, then making it discoverable isn't going to be very
> easy anyway.
>
> > C) Create a configurable API limit for maximum number of volumes to
> > attach to a single instance that is either a quota or similar to a
> > quota. Pros: lets operators opt-in to a maximum that works in their
> > environment. Cons: it's yet another quota?
>
> Do we have any other quota limits that are per-instance like this would
> be? If not, then this would likely be weird, but if so, then this would
> also be an option, IMHO. However, it's too much work for what is really
> not a hugely important problem, IMHO, and both of the above are
> lighter-weight ways to solve this and move on.
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-15 Thread Dan Smith
> I thought we were leaning toward the option where nova itself doesn't
> impose a limit, but lets the virt driver decide.
>
> I would really like NOT to see logic like this in any nova code:
>
>> if kvm|qemu:
>> return 256
>> elif POWER:
>> return 4000
>> elif:
>> ...

It's insanity to try to find a limit that will work for
everyone. PowerVM supports a billion, libvirt/kvm has some practical and
theoretical limits, both of which are higher than what is actually
sane. It depends on your virt driver, and how you're attaching your
volumes, maybe how tightly you pack your instances, probably how many
threads you give to an instance, how big your compute nodes are, and
definitely what your workload is.

That's a really big matrix, and even if we decide on something, IBM will
come out of the woodwork with some other hypervisor that has been around
since the Nixon era that uses BCD-encoded volume numbers and thus can
only support 10. It's going to depend, and a user isn't going to be able
to reasonably probe it using any of our existing APIs.

If it's going to depend on all the above factors, I see no reason not to
put a conf value in so that operators can pick a reasonably sane
limit. Otherwise, the limit we pick will be wrong for everyone.
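
To make that concrete, here is a minimal sketch of what such a knob could
look like with oslo.config; the option name, group and default below are
hypothetical placeholders, not an agreed-upon Nova design:

    # Hypothetical sketch only: option name, group and default are
    # placeholders, not an agreed-upon Nova design.
    from oslo_config import cfg

    volume_opts = [
        cfg.IntOpt('max_volumes_per_instance',
                   default=26,
                   min=1,
                   help='Maximum number of volumes that may be attached to '
                        'a single instance on this compute node. Operators '
                        'can raise or lower this to match the virt driver, '
                        'storage backend and hardware.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(volume_opts, group='compute')


    def check_volume_limit(current_attachment_count):
        # Reject an attach request that would exceed the configured limit.
        if current_attachment_count >= CONF.compute.max_volumes_per_instance:
            raise ValueError('volume attach limit reached for this instance')

Since the value would live in the config on each compute node, it can differ
per node, which is exactly why it would not be discoverable through the API.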

Plus... if we do a conf option we can put this to rest and stop talking
about it, which I for one am *really* looking forward to :)

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-15 Thread Kashyap Chamarthy
On Mon, Jun 11, 2018 at 10:14:33AM -0500, Eric Fried wrote:
> I thought we were leaning toward the option where nova itself doesn't
> impose a limit, but lets the virt driver decide.

Yeah, I agree with that, if we can't arrive at a sensible limit for
Nova after testing with all the drivers that matter (which I doubt will
happen anytime soon).

[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-11 Thread Eric Fried
I thought we were leaning toward the option where nova itself doesn't
impose a limit, but lets the virt driver decide.

I would really like NOT to see logic like this in any nova code:

> if kvm|qemu:
> return 256
> elif POWER:
> return 4000
> elif:
> ...

On 06/11/2018 10:06 AM, Kashyap Chamarthy wrote:
> On Mon, Jun 11, 2018 at 11:55:29AM +0200, Sahid Orentino Ferdjaoui wrote:
>> On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote:
>>> On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:
> 
> [...]
> 
 The 26 volumes thing is a libvirt driver restriction.
>>>
>>> The original limitation of 26 disks was because at that time there was
>>> no 'virtio-scsi'.  
>>>
>>> (With 'virtio-scsi', each of its controllers allows up to 256 targets, and
>>> each target can use any LUN (Logical Unit Number) from 0 to 16383
>>> (inclusive).  Therefore, the maximum allowable disks on a single
>>> 'virtio-scsi' controller is 256 * 16384 == 4194304.)  Source[1].
>>
>> Not totally true for Nova. Nova handles one virtio-scsi controller per
>> guest and plugs all the volumes into one target, so in theory that would
>> be 16384 LUNs (only).
> 
> Yeah, I could've been clearer that I was only talking about the maximum
> allowable disks, regardless of how Nova handles it.
> 
>> But you made a good point: the 26 volumes thing is not a libvirt driver
>> restriction. For example, the native QEMU SCSI implementation handles
>> 256 disks.
>>
>> About the virtio-blk limitation, I made the same finding, but Tsuyoshi
>> Nagata shared an interesting point saying that virtio-blk is no longer
>> limited by the number of PCI slots available, at least with recent kernel
>> and QEMU versions [0].
>>
>> I could go along with what you are suggesting at the bottom and fix the
>> limit at 256 disks.
> 
> Right, that's for KVM-based hypervisors.
> 
> Eric Fried on IRC said the other day that for the IBM POWER hypervisor they
> have tested (not with OpenStack) up to 4000 disks.  But I have yet to see
> any more concrete details from POWER hypervisor users on this thread.
> 
> If people can't seem to reach an agreement on the limits, we may have to
> settle with conditionals:
> 
> if kvm|qemu:
> return 256
> elif POWER:
> return 4000
> elif:
> ...
> 
> Before that, we need concrete data showing that it is a _reasonable_ limit
> for the POWER hypervisor (and possibly others).
> 
>> [0] 
>> https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py@162
> 
> [...]
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-11 Thread Kashyap Chamarthy
On Mon, Jun 11, 2018 at 11:55:29AM +0200, Sahid Orentino Ferdjaoui wrote:
> On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote:
> > On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:

[...]

> > > The 26 volumes thing is a libvirt driver restriction.
> > 
> > The original limitation of 26 disks was because at that time there was
> > no 'virtio-scsi'.  
> > 
> > (With 'virtio-scsi', each of its controllers allows up to 256 targets, and
> > each target can use any LUN (Logical Unit Number) from 0 to 16383
> > (inclusive).  Therefore, the maximum allowable disks on a single
> > 'virtio-scsi' controller is 256 * 16384 == 4194304.)  Source[1].
> 
> Not totally true for Nova. Nova handles one virtio-scsi controller per
> guest and plugs all the volumes into one target, so in theory that would
> be 16384 LUNs (only).

Yeah, I could've been clearer that I was only talking about the maximum
allowable disks, regardless of how Nova handles it.

> But you made a good point: the 26 volumes thing is not a libvirt driver
> restriction. For example, the native QEMU SCSI implementation handles
> 256 disks.
> 
> About the virtio-blk limitation, I made the same finding, but Tsuyoshi
> Nagata shared an interesting point saying that virtio-blk is no longer
> limited by the number of PCI slots available, at least with recent kernel
> and QEMU versions [0].
> 
> I could go along with what you are suggesting at the bottom and fix the
> limit at 256 disks.

Right, that's for KVM-based hypervisors.

Eric Fried on IRC said the other day that for the IBM POWER hypervisor they
have tested (not with OpenStack) up to 4000 disks.  But I have yet to see
any more concrete details from POWER hypervisor users on this thread.

If people can't seem to reach an agreement on the limits, we may have to
settle with conditionals:

if kvm|qemu:
return 256
elif POWER:
return 4000
elif:
...

Before that, we need concrete data showing that it is a _reasonable_ limit
for the POWER hypervisor (and possibly others).

> [0] 
> https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py@162

[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-11 Thread Sahid Orentino Ferdjaoui
On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote:
> On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:
> > On 6/7/2018 12:56 PM, melanie witt wrote:
> > > Recently, we've received interest about increasing the maximum number of
> > > allowed volumes to attach to a single instance > 26. The limit of 26 is
> > > because of a historical limitation in libvirt (if I remember correctly)
> > > and is no longer limited at the libvirt level in the present day. So,
> > > we're looking at providing a way to attach more than 26 volumes to a
> > > single instance and we want your feedback.
> > 
> > The 26 volumes thing is a libvirt driver restriction.
> 
> The original limitation of 26 disks was because at that time there was
> no 'virtio-scsi'.  
> 
> (With 'virtio-scsi', each of its controllers allows up to 256 targets, and
> each target can use any LUN (Logical Unit Number) from 0 to 16383
> (inclusive).  Therefore, the maximum allowable disks on a single
> 'virtio-scsi' controller is 256 * 16384 == 4194304.)  Source[1].

Not totally true for Nova. Nova handles one virtio-scsi controller per
guest and plugs all the volumes into one target, so in theory that would
be 16384 LUNs (only).
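
As a quick sanity check of the figures above, the arithmetic is just:

    # Worked arithmetic for the virtio-scsi figures quoted above.
    targets_per_controller = 256
    luns_per_target = 16384            # LUNs 0..16383 inclusive

    # Theoretical ceiling with one fully populated controller:
    print(targets_per_controller * luns_per_target)  # 4194304

    # With everything plugged into a single target, as described above, the
    # ceiling drops to the per-target LUN count:
    print(luns_per_target)                           # 16384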

But you made a good point: the 26 volumes thing is not a libvirt driver
restriction. For example, the native QEMU SCSI implementation handles
256 disks.

About the virtio-blk limitation, I made the same finding, but Tsuyoshi
Nagata shared an interesting point saying that virtio-blk is no longer
limited by the number of PCI slots available, at least with recent kernel
and QEMU versions [0].

I could go along with what you are suggesting at the bottom and fix the
limit at 256 disks.

[0] 
https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py@162

> [...]
> 
> > > Some ideas that have been discussed so far include:
> > > 
> > > A) Selecting a new, higher maximum that still yields reasonable
> > > performance on a single compute host (64 or 128, for example). Pros:
> > > helps prevent the potential for poor performance on a compute host from
> > > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher
> > > maximum if their environment can handle it.
> 
> Option (A) can still be considered: We can limit it to 256 disks.  Why?
> 
> FWIW, I did some digging here:
> 
> The upstream libguestfs project, after some thorough testing, arrived at
> a limit of 256 disks, and suggests the same for Nova.  And if anyone
> wants to increase that limit, the proposer should come up with a fully
> worked-through test plan. :-) (Try doing any meaningful I/O to so many
> disks at once, and see how well that works out.)
> 
> What's more, libguestfs upstream tests 256 disks, and even _that_
> sometimes fails:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs
> out of memory with 256 virtio-scsi disks"
> 
> The above bug is fixed now in kernel-4.17.0-0.rc3.git1.2. (And also
> required a corresponding fix in QEMU[2], which is available from version
> v2.11.0 onwards.)
> 
> [...]
> 
> 
> [1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html
> -- virtio-scsi limits
> [2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d 
> 
> -- 
> /kashyap
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Gerald McBrearty
Dan Smith  wrote on 06/08/2018 08:46:01 AM:

> From: Dan Smith
> To: melanie witt
> Cc: "OpenStack Development Mailing List (not for usage questions)",
> openstack-operat...@lists.openstack.org
> Date: 06/08/2018 08:48 AM
> Subject: Re: [openstack-dev] [nova] increasing the number of allowed
> volumes attached per instance > 26
> 
> > Some ideas that have been discussed so far include:
> 
> FYI, these are already in my order of preference.
> 
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host
> > from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> > higher maximum if their environment can handle it.
> 
> I prefer this because I think it can be done per virt driver, for
> whatever actually makes sense there. If powervm can handle 500 volumes
> in a meaningful way on one instance, then that's cool. I think libvirt's
> limit should likely be 64ish.
> 

As long as this can be done on a per-virt-driver basis, as Dan says, I think
I would also prefer this option.

Actually, the meaningful number is much higher than 500 for powervm. I'm
thinking the powervm limit could likely be 4096ish. On powervm we have an OS
where the meaningful limit is 4096 volumes, but routinely most operators
would have between 1000-2000.

-Gerald

> > B) Creating a config option to let operators choose how many volumes are
> > allowed to attach to a single instance. Pros: lets operators opt-in to
> > a maximum that works in their environment. Cons: it's not discoverable
> > for those calling the API.
> 
> This is a fine compromise, IMHO, as it lets operators tune it per
> compute node based on the virt driver and the hardware. If one compute
> is using nothing but iSCSI over a single 10g link, then they may need to
> clamp that down to something more sane.
> 
> Like the per virt driver restriction above, it's not discoverable via
> the API, but if it varies based on compute node and other factors in a
> single deployment, then making it discoverable isn't going to be very
> easy anyway.
> 
> > C) Create a configurable API limit for maximum number of volumes to
> > attach to a single instance that is either a quota or similar to a
> > quota. Pros: lets operators opt-in to a maximum that works in their
> > environment. Cons: it's yet another quota?
> 
> Do we have any other quota limits that are per-instance like this would
> be? If not, then this would likely be weird, but if so, then this would
> also be an option, IMHO. However, it's too much work for what is really
> not a hugely important problem, IMHO, and both of the above are
> lighter-weight ways to solve this and move on.
> 
> --Dan
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Dan Smith
> Some ideas that have been discussed so far include:

FYI, these are already in my order of preference.

> A) Selecting a new, higher maximum that still yields reasonable
> performance on a single compute host (64 or 128, for example). Pros:
> helps prevent the potential for poor performance on a compute host
> from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> higher maximum if their environment can handle it.

I prefer this because I think it can be done per virt driver, for
whatever actually makes sense there. If powervm can handle 500 volumes
in a meaningful way on one instance, then that's cool. I think libvirt's
limit should likely be 64ish.
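
As a rough illustration of the per-virt-driver idea, something along these
lines could work; the class names and attribute are hypothetical, not an
existing Nova interface, and the numbers are just the ones mentioned in this
thread:

    # Hypothetical sketch: a per-driver attribute the compute service could
    # consult before attaching another volume. Not an existing Nova interface.
    class ComputeDriver(object):
        # Conservative default for drivers that do not override it.
        max_volumes_per_instance = 26


    class LibvirtDriver(ComputeDriver):
        max_volumes_per_instance = 64


    class PowerVMDriver(ComputeDriver):
        max_volumes_per_instance = 500


    def can_attach(driver, current_volume_count):
        # True if one more volume may be attached to the instance.
        return current_volume_count < driver.max_volumes_per_instance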

> B) Creating a config option to let operators choose how many volumes are
> allowed to attach to a single instance. Pros: lets operators opt-in to
> a maximum that works in their environment. Cons: it's not discoverable
> for those calling the API.

This is a fine compromise, IMHO, as it lets operators tune it per
compute node based on the virt driver and the hardware. If one compute
is using nothing but iSCSI over a single 10g link, then they may need to
clamp that down to something more sane.

Like the per virt driver restriction above, it's not discoverable via
the API, but if it varies based on compute node and other factors in a
single deployment, then making it discoverable isn't going to be very
easy anyway.

> C) Create a configurable API limit for maximum number of volumes to
> attach to a single instance that is either a quota or similar to a
> quota. Pros: lets operators opt-in to a maximum that works in their
> environment. Cons: it's yet another quota?

Do we have any other quota limits that are per-instance like this would
be? If not, then this would likely be weird, but if so, then this would
also be an option, IMHO. However, it's too much work for what is really
not a hugely important problem, IMHO, and both of the above are
lighter-weight ways to solve this and move on.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Kashyap Chamarthy
On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:
> On 6/7/2018 12:56 PM, melanie witt wrote:
> > Recently, we've received interest about increasing the maximum number of
> > allowed volumes to attach to a single instance > 26. The limit of 26 is
> > because of a historical limitation in libvirt (if I remember correctly)
> > and is no longer limited at the libvirt level in the present day. So,
> > we're looking at providing a way to attach more than 26 volumes to a
> > single instance and we want your feedback.
> 
> The 26 volumes thing is a libvirt driver restriction.

The original limitation of 26 disks was because at that time there was
no 'virtio-scsi'.  
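
(The 26 itself presumably falls out of the single-letter /dev/vdX naming
used for virtio-blk devices: the alphabet gives you exactly 26 names, vda
through vdz.) A trivial illustration of that naming scheme, not the actual
driver code:

    # Trivial illustration of why a single-letter device-naming scheme tops
    # out at 26; this is not the actual Nova libvirt driver code.
    import string

    def disk_device_names(prefix='vd'):
        # Yields vda, vdb, ..., vdz -- 26 names in total.
        for letter in string.ascii_lowercase:
            yield prefix + letter

    names = list(disk_device_names())
    print(len(names))           # 26
    print(names[0], names[-1])  # vda vdz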

(With 'virtio-scsi', each of its controllers allows up to 256 targets, and
each target can use any LUN (Logical Unit Number) from 0 to 16383
(inclusive).  Therefore, the maximum allowable disks on a single
'virtio-scsi' controller is 256 * 16384 == 4194304.)  Source[1].

[...]

> > Some ideas that have been discussed so far include:
> > 
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host from
> > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher
> > maximum if their environment can handle it.

Option (A) can still be considered: We can limit it to 256 disks.  Why?

FWIW, I did some digging here:

The upstream libguestfs project, after some thorough testing, arrived at
a limit of 256 disks, and suggests the same for Nova.  And if anyone
wants to increase that limit, the proposer should come up with a fully
worked-through test plan. :-) (Try doing any meaningful I/O to so many
disks at once, and see how well that works out.)

What's more, libguestfs upstream tests 256 disks, and even _that_
sometimes fails:

https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs
out of memory with 256 virtio-scsi disks"

The above bug is fixed now in kernel-4.17.0-0.rc3.git1.2. (And also
required a corresponding fix in QEMU[2], which is available from version
v2.11.0 onwards.)

[...]


[1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html
-- virtio-scsi limits
[2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d 

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Jay Bryant
On Thu, Jun 7, 2018, 4:17 PM Matt Riedemann  wrote:

> On 6/7/2018 1:54 PM, Jay Pipes wrote:
> >
> > If Cinder tracks volume attachments as consumable resources, then this
> > would be my preference.
>
> Cinder does:
>
> https://developer.openstack.org/api-ref/block-storage/v3/#attachments
>
> However, there is no limit in Cinder on those as far as I know.
>
There is no limit as we don't know what to limit at. It could depend on the
host, the protocol, or the backend.

Also, that is counting attachments for a volume. I don't think that helps us
determine how many attachments a host has without additional work.

>
> --
>
> Thanks,
>
> Matt
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Matt Riedemann

On 6/7/2018 1:54 PM, Jay Pipes wrote:


If Cinder tracks volume attachments as consumable resources, then this 
would be my preference.


Cinder does:

https://developer.openstack.org/api-ref/block-storage/v3/#attachments

However, there is no limit in Cinder on those as far as I know.
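
For what it's worth, a rough sketch of tallying those attachment records per
server via that endpoint might look like the following; the response field
name used for the server ('instance') is an assumption here, so double-check
it against the api-ref linked above:

    # Rough sketch, not production code: count Cinder v3 attachment records
    # per server using the attachments list endpoint linked above. The
    # 'instance' field name is an assumption -- verify against the api-ref.
    from collections import Counter

    import requests


    def attachments_per_server(cinder_endpoint, project_id, token):
        url = '%s/v3/%s/attachments' % (cinder_endpoint, project_id)
        headers = {
            'X-Auth-Token': token,
            # The attachments API needs a microversion that supports it.
            'OpenStack-API-Version': 'volume 3.27',
        }
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        counts = Counter()
        for attachment in resp.json().get('attachments', []):
            counts[attachment.get('instance')] += 1
        return counts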

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Chris Friesen

On 06/07/2018 12:07 PM, Matt Riedemann wrote:

On 6/7/2018 12:56 PM, melanie witt wrote:



C) Create a configurable API limit for maximum number of volumes to attach to
a single instance that is either a quota or similar to a quota. Pros: lets
operators opt-in to a maximum that works in their environment. Cons: it's yet
another quota?


This seems the most reasonable to me if we're going to do this, but I'm probably
in the minority. Yes more quota limits sucks, but it's (1) discoverable by API
users and therefore (2) interoperable.


Quota seems like kind of a blunt instrument, since it might not make sense for a 
little single-vCPU guest to get the same number of connections as a massive 
guest with many dedicated vCPUs.  (Since you can fit many more of the former on 
a given compute node.)


If what we care about is the number of connections per compute node, it
almost feels like a resource that should be tracked... but you wouldn't want
one instance to consume all of the connections on the node, so you're back to
needing a per-instance limit of some sort.
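
Purely to illustrate the "resource that should be tracked" idea, a
hypothetical Placement inventory for volume connections on a compute node
could look like the sketch below, where max_unit would play the role of the
per-instance cap; neither this resource class nor this usage exists today:

    # Purely hypothetical sketch: modelling volume connections as Placement
    # inventory on the compute node resource provider. Neither the resource
    # class nor this usage exists in Nova today.
    VOLUME_CONNECTION_INVENTORY = {
        'CUSTOM_VOLUME_CONNECTIONS': {
            'total': 256,        # connections this compute node can sustain
            'reserved': 0,
            'min_unit': 1,
            'max_unit': 64,      # the per-instance cap mentioned above
            'step_size': 1,
            'allocation_ratio': 1.0,
        },
    }

    # An instance booting with a root volume plus three data volumes would
    # then request:
    ALLOCATION_REQUEST = {'CUSTOM_VOLUME_CONNECTIONS': 4}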


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Jay Pipes

On 06/07/2018 01:56 PM, melanie witt wrote:

Hello Stackers,

Recently, we've received interest about increasing the maximum number of 
allowed volumes to attach to a single instance > 26. The limit of 26 is 
because of a historical limitation in libvirt (if I remember correctly) 
and is no longer limited at the libvirt level in the present day. So, 
we're looking at providing a way to attach more than 26 volumes to a 
single instance and we want your feedback.


We'd like to hear from operators and users about their use cases for 
wanting to be able to attach a large number of volumes to a single 
instance. If you could share your use cases, it would help us greatly in 
moving forward with an approach for increasing the maximum.


Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable 
performance on a single compute host (64 or 128, for example). Pros: 
helps prevent the potential for poor performance on a compute host from 
attaching too many volumes. Cons: doesn't let anyone opt-in to a higher 
maximum if their environment can handle it.


B) Creating a config option to let operators choose how many volumes are
allowed to attach to a single instance. Pros: lets operators opt-in to a
maximum that works in their environment. Cons: it's not discoverable for 
those calling the API.


C) Create a configurable API limit for maximum number of volumes to 
attach to a single instance that is either a quota or similar to a 
quota. Pros: lets operators opt-in to a maximum that works in their 
environment. Cons: it's yet another quota?


If Cinder tracks volume attachments as consumable resources, then this 
would be my preference.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Matt Riedemann

+operators (I forgot)

On 6/7/2018 1:07 PM, Matt Riedemann wrote:

On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest about increasing the maximum number 
of allowed volumes to attach to a single instance > 26. The limit of 
26 is because of a historical limitation in libvirt (if I remember 
correctly) and is no longer limited at the libvirt level in the 
present day. So, we're looking at providing a way to attach more than 
26 volumes to a single instance and we want your feedback.


The 26 volumes thing is a libvirt driver restriction.

There was a bug at one point because powervm (or powervc) was capping 
out at 80 volumes per instance because of restrictions in the 
build_requests table in the API DB:


https://bugs.launchpad.net/nova/+bug/1621138

They wanted to get to 128, because that's how power rolls.



We'd like to hear from operators and users about their use cases for 
wanting to be able to attach a large number of volumes to a single 
instance. If you could share your use cases, it would help us greatly 
in moving forward with an approach for increasing the maximum.


Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable 
performance on a single compute host (64 or 128, for example). Pros: 
helps prevent the potential for poor performance on a compute host 
from attaching too many volumes. Cons: doesn't let anyone opt-in to a 
higher maximum if their environment can handle it.


B) Creating a config option to let operators choose how many volumes are
allowed to attach to a single instance. Pros: lets operators opt-in to
a maximum that works in their environment. Cons: it's not discoverable 
for those calling the API.


I'm not a fan of a non-discoverable config option which will impact API 
behavior indirectly, i.e. on cloud A I can boot from volume with 64 
volumes but not on cloud B.




C) Create a configurable API limit for maximum number of volumes to 
attach to a single instance that is either a quota or similar to a 
quota. Pros: lets operators opt-in to a maximum that works in their 
environment. Cons: it's yet another quota?


This seems the most reasonable to me if we're going to do this, but I'm 
probably in the minority. Yes more quota limits sucks, but it's (1) 
discoverable by API users and therefore (2) interoperable.


If we did the quota thing, I'd probably default to unlimited and let the 
cinder volume quota cap it for the project as it does today. Then admins 
can tune it as needed.





--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread Matt Riedemann

On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest about increasing the maximum number of 
allowed volumes to attach to a single instance > 26. The limit of 26 is 
because of a historical limitation in libvirt (if I remember correctly) 
and is no longer limited at the libvirt level in the present day. So, 
we're looking at providing a way to attach more than 26 volumes to a 
single instance and we want your feedback.


The 26 volumes thing is a libvirt driver restriction.

There was a bug at one point because powervm (or powervc) was capping 
out at 80 volumes per instance because of restrictions in the 
build_requests table in the API DB:


https://bugs.launchpad.net/nova/+bug/1621138

They wanted to get to 128, because that's how power rolls.



We'd like to hear from operators and users about their use cases for 
wanting to be able to attach a large number of volumes to a single 
instance. If you could share your use cases, it would help us greatly in 
moving forward with an approach for increasing the maximum.


Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable 
performance on a single compute host (64 or 128, for example). Pros: 
helps prevent the potential for poor performance on a compute host from 
attaching too many volumes. Cons: doesn't let anyone opt-in to a higher 
maximum if their environment can handle it.


B) Creating a config option to let operators choose how many volumes are
allowed to attach to a single instance. Pros: lets operators opt-in to a
maximum that works in their environment. Cons: it's not discoverable for 
those calling the API.


I'm not a fan of a non-discoverable config option which will impact API 
behavior indirectly, i.e. on cloud A I can boot from volume with 64 
volumes but not on cloud B.




C) Create a configurable API limit for maximum number of volumes to 
attach to a single instance that is either a quota or similar to a 
quota. Pros: lets operators opt-in to a maximum that works in their 
environment. Cons: it's yet another quota?


This seems the most reasonable to me if we're going to do this, but I'm 
probably in the minority. Yes more quota limits sucks, but it's (1) 
discoverable by API users and therefore (2) interoperable.


If we did the quota thing, I'd probably default to unlimited and let the 
cinder volume quota cap it for the project as it does today. Then admins 
can tune it as needed.
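
A minimal sketch of how that "unlimited by default" behaviour might look,
using -1 for unlimited as quotas typically do; the quota name below is made
up for illustration:

    # Hypothetical sketch of option C: a per-instance, quota-style limit
    # where the default of -1 means unlimited. The quota name is made up.
    UNLIMITED = -1


    def exceeds_volume_limit(current_attachments, requested,
                             volumes_per_instance=UNLIMITED):
        # True if attaching `requested` more volumes would exceed the limit.
        if volumes_per_instance == UNLIMITED:
            return False
        return current_attachments + requested > volumes_per_instance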


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev