Thanks Nir for your answers and feedback.

Simon

> On Jun 28, 2018, at 10:55 AM, Nir Soffer <nsof...@redhat.com> wrote:
> 
> On Thu, Jun 28, 2018 at 10:48 AM Simon Coter <simon.co...@oracle.com> wrote:
> Hi Nir,
> 
> thanks for your feedback on this; inline:
> 
>> On Jun 27, 2018, at 8:10 PM, Nir Soffer <nsof...@redhat.com> wrote:
>> 
>> On Tue, Jun 26, 2018 at 8:45 PM Simon Coter <simon.co...@oracle.com> wrote:
>> I’m looking for a documentation reference on oVirt max limits, like the max
>> number of guests on a single compute node, the max number of LUNs in a storage
>> domain, or others.
>> 
>> There is no limit to the number of LUNs you can add to a storage domain,
>> but I don't think it is a good idea or very useful to have many LUNs in a
>> storage domain, since the number of volumes in a storage domain is limited to
>> about 1947.
> 
> Yep, I see. Where did you get this number for example (1947) ?
> 
> This number is an internal implementation detail, so don't depend on it,
> but it goes like this:
> 
> - we create a 2G volume for leases
> - each lease needs 1M of storage
> - we reserve 100 leases for future use
> - we create one lease for the SPM
> - the rest of the leases can be used for volumes
> - every volume has one lease, created when you create the volume
> 
> So we can have 2048 leases - 101 non-volume leases -> 1947 volume leases.
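The lease arithmetic above can be written out as a short calculation. This is a sketch only: the constants are the internal oVirt implementation details quoted in this thread and may change, so don't depend on them.

```python
# Sketch of the volume-lease arithmetic described in the thread.
# All constants are internal oVirt implementation details quoted above;
# do not rely on them for real deployments.
LEASES_VOLUME_SIZE_MB = 2 * 1024  # the leases volume is 2G
LEASE_SIZE_MB = 1                 # each lease needs 1M of storage
RESERVED_LEASES = 100             # reserved for future use
SPM_LEASES = 1                    # one lease for the SPM

total_leases = LEASES_VOLUME_SIZE_MB // LEASE_SIZE_MB       # 2048
volume_leases = total_leases - RESERVED_LEASES - SPM_LEASES

print(volume_leases)  # 1947
```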
> 
> This limit can be removed if we support extending the leases volume, but it is
> not supported yet, because we don't recommend having more than 1300 volumes per
> storage domain. The reason is that our volumes on block storage are LVM logical
> volumes, and LVM is not designed to manage thousands of logical volumes.
> 
> So the 1947 volume limit is likely to stay in the near future.
>  
> 
>> 
>> The number of LUNs in a system is limited by the kernel; I think the
>> number is around 16000.
>> 
>> Why do you care about the maximum number of LUNs?
> 
> I do not strictly care about the number of LUNs; I care about possible “max”
> limits around the solution, to better understand how an architecture should
> be created/designed while using oVirt.
> 
> Generally, you want to minimize the number of storage domains. Storage domains
> are not designed for unlimited scale. You should create storage domains only if
> there is a need to separate storage into different storage domains, for example
> having a separate storage domain for production or testing, or for different
> groups of users.
> 
> However, you cannot have unlimited disks in one storage domain, so one of the
> factors when designing the system is how many disks you need in one storage
> domain.
> 
> Every disk snapshot is a logical volume, so if you have one disk with 5
> snapshots, you are using 5 logical volumes. Additionally, if you include a
> memory snapshot, you need another logical volume for it. For example, if you
> have a VM with 5 disks and you want to keep memory snapshots, every snapshot
> will use 6 logical volumes.
> 
> So you can calculate the expected number of volumes in a storage domain based
> on the number of virtual machines, the number of disks per virtual machine,
> and whether you want to keep memory snapshots.
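The rule of thumb above can be sketched as a small helper. The function name and parameters are made up for this illustration; they are not part of any oVirt API, and this is an estimate following the thread's reasoning, not an official sizing formula.

```python
def expected_volumes(vms, disks_per_vm, snapshots_per_vm, memory_snapshots=False):
    """Rough estimate of logical volumes used in one block storage domain.

    Each disk snapshot is one logical volume; keeping a memory snapshot
    adds one more logical volume per VM snapshot, as described above.
    Illustrative only -- not an oVirt API or official sizing formula.
    """
    volumes_per_snapshot = disks_per_vm + (1 if memory_snapshots else 0)
    return vms * snapshots_per_vm * volumes_per_snapshot

# Example from the thread: one VM with 5 disks, keeping memory snapshots,
# uses 6 logical volumes per snapshot.
print(expected_volumes(vms=1, disks_per_vm=5, snapshots_per_vm=1,
                       memory_snapshots=True))  # 6
```

The result can then be checked against the ~1947 volume limit (or the recommended 1300) per block storage domain discussed above.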
> 
> For file-based storage domains none of these limits apply. Here a storage
> domain is a mountpoint and we have one directory per disk, so the number of
> disks is practically unlimited. However, since many operations require listing
> all images and snapshots in the mountpoint, having 10,000 disks will be slower
> than 1,000 disks.
> 
> In summary, oVirt is designed for the data center, not for the cloud, so you
> should not expect to manage an unlimited number of VMs, disks, or storage
> domains.
> 
> For 4.3 we are working on improving Cinder support. With Cinder-based storage,
> we have one LUN per disk, and snapshots are implemented by the storage server.
> Also, this LUN will be connected to only one host at a time, or two hosts
> during live migration, instead of having all LUNs connected to all hosts all
> the time. With this storage there will be no practical limit on the oVirt side
> for the number of disks.
> 
> Yaniv, do you have some document with general recommendations for sizing?
> 
> Nir
>  
> 
> Thanks
> 
> Simon
> 
>> 
>> Nir
>>  
>> Is there a reference in the oVirt documentation?
>> Actually the only thing I found is related to RHEV and memory
>> (https://access.redhat.com/articles/906543).
>> Thanks for the help.
>> 
>> Simon
>> _______________________________________________
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3BF7D6ZNVVI4ASJ44MYXEPGR3C4QBKS/

