Thanks a lot David,

It is a bit difficult for me to run tests because I would have to buy the
hardware first... and the price is quite different with an SSD cache tier than
without one.

If anybody has experience with VDI/login storms... their input will be really welcome!

Note: I have removed the ceph-users list because I get errors when I copy it.

2017-08-18 2:20 GMT+02:00 David Turner <drakonst...@gmail.com>:

> Get it set up and start running tests. You can always enable or disable
> the cache tier later. I don't know if Christian will chime in. And please
> stop removing the ceph-users list from your responses.
>
> On Thu, Aug 17, 2017, 7:41 PM Oscar Segarra <oscar.sega...@gmail.com>
> wrote:
>
>> Thanks a lot David!!!
>>
>> Let's wait for Christian's opinion on the suggested configuration for
>> VDI...
>>
>> Óscar Segarra
>>
>> 2017-08-18 1:03 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>
>>> `ceph df` and `ceph osd df` should give you enough information to know
>>> how full each pool, root, osd, etc. are.
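>>>
>>> For example (exact columns depend on your release):
>>>
>>>     ceph df           # GLOBAL usage plus per-pool USED / %USED / MAX AVAIL
>>>     ceph osd df tree  # per-OSD utilisation grouped by the CRUSH hierarchy
>>>
>>> With two separate roots, the MAX AVAIL column in `ceph df` is calculated per
>>> pool from the root its CRUSH rule uses, so the SSD pools and the HDD pools
>>> each report their own free space.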
>>>
>>> On Thu, Aug 17, 2017, 5:56 PM Oscar Segarra <oscar.sega...@gmail.com>
>>> wrote:
>>>
>>>> Hi David,
>>>>
>>>> Thanks a lot again for your quick answer...
>>>>
>>>>
>>>> *The rules in the CRUSH map will always be followed.  It is not
>>>> possible for Ceph to go against that and put data into a root that
>>>> shouldn't have it.*
>>>> --> I will work on your proposal of creating two roots in the CRUSH
>>>> map... just one more question:
>>>> --> Regarding space consumption, with this proposal, is it possible
>>>> to know how much disk space is free in each pool?
>>>>
>>>>
>>>> *The problem with a cache tier is that Ceph is going to need to promote
>>>> and evict stuff all the time (not free).  A lot of people that want to use
>>>> SSD cache tiering for RBDs end up with slower performance because of this.
>>>> Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
>>>> stance is that it's most likely a bad idea, but there are definite cases
>>>> where it's perfect.*
>>>> --> Christian, do you have any advice for VDI --> BASE IMAGE (raw) + 1000
>>>> linked clones (qcow2)?
>>>>
>>>> Thanks a lot.
>>>>
>>>>
>>>> 2017-08-17 22:42 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>>>
>>>>> The rules in the CRUSH map will always be followed.  It is not
>>>>> possible for Ceph to go against that and put data into a root that
>>>>> shouldn't have it.
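>>>>>
>>>>> As a rough sketch (assuming roots named "ssd" and "hdd" in the CRUSH map,
>>>>> as in the Sebastien Han guide linked further down), a rule that only
>>>>> descends from the ssd root would look something like this in the
>>>>> decompiled map:
>>>>>
>>>>>     rule ssd {
>>>>>             ruleset 1
>>>>>             type replicated
>>>>>             min_size 1
>>>>>             max_size 10
>>>>>             step take ssd
>>>>>             step chooseleaf firstn 0 type host
>>>>>             step emit
>>>>>     }
>>>>>
>>>>> Because "step take ssd" starts placement at the ssd root, any pool pointed
>>>>> at this rule can never land data on OSDs under the hdd root.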
>>>>>
>>>>> The problem with a cache tier is that Ceph is going to need to promote
>>>>> and evict stuff all the time (not free).  A lot of people that want to use
>>>>> SSD cache tiering for RBDs end up with slower performance because of this.
>>>>> Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
>>>>> stance is that it's most likely a bad idea, but there are definite cases
>>>>> where it's perfect.
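>>>>>
>>>>> If you do test one, the promotion and eviction behaviour is driven by
>>>>> per-pool settings on the cache pool, so that is where the tuning effort
>>>>> goes.  A hypothetical example ("rbd-hdd-cache" and all the values are
>>>>> placeholders to adjust for your hardware):
>>>>>
>>>>>     ceph osd pool set rbd-hdd-cache hit_set_type bloom
>>>>>     ceph osd pool set rbd-hdd-cache hit_set_count 4
>>>>>     ceph osd pool set rbd-hdd-cache hit_set_period 1200
>>>>>     ceph osd pool set rbd-hdd-cache target_max_bytes 500000000000
>>>>>     ceph osd pool set rbd-hdd-cache cache_target_dirty_ratio 0.4
>>>>>     ceph osd pool set rbd-hdd-cache cache_target_full_ratio 0.8
>>>>>     ceph osd pool set rbd-hdd-cache min_read_recency_for_promote 2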
>>>>>
>>>>>
>>>>> On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra <oscar.sega...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi David,
>>>>>>
>>>>>> Thanks a lot for your quick answer!
>>>>>>
>>>>>> *If I'm understanding you correctly, you want to have 2 different
>>>>>> roots that pools can be made using.  The first being entirely SSD storage.
>>>>>> The second being HDD Storage with an SSD cache tier on top of it.*
>>>>>> --> Yes, this is what I mean.
>>>>>>
>>>>>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>>>>>> --> I'm not an expert in CRUSH rules... With this configuration, is it
>>>>>> guaranteed that objects stored in the ssd pool do not "go" to the hdd disks?
>>>>>>
>>>>>> *The above guide explains how to set up the HDD root and the SSD root.
>>>>>> After that all you do is create a pool on the HDD root for RBDs, a pool on
>>>>>> the SSD root for a cache tier to use with the HDD pool, and then a pool on
>>>>>> the SSD root for RBDs.  There aren't actually a lot of use cases out there
>>>>>> where using an SSD cache tier on top of an HDD RBD pool is what you really
>>>>>> want.  I would recommend testing this thoroughly and comparing your
>>>>>> performance to just a standard HDD pool for RBDs without a cache tier.*
>>>>>> --> I'm working on a VDI solution with BASE IMAGES (raw) and qcow2 linked
>>>>>> clones... I expect that not all VDIs will be powered on at the same time,
>>>>>> and I want a configuration that avoids problems related to login storms
>>>>>> (1000 hosts).
>>>>>> --> Do you think it is not a good idea? Do you know what people usually
>>>>>> configure for this kind of scenario?
>>>>>>
>>>>>> Thanks a lot.
>>>>>>
>>>>>>
>>>>>> 2017-08-17 21:31 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>>>>>
>>>>>>> If I'm understanding you correctly, you want to have 2 different
>>>>>>> roots that pools can be made using.  The first being entirely SSD storage.
>>>>>>> The second being HDD Storage with an SSD cache tier on top of it.
>>>>>>>
>>>>>>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>>>>>>>
>>>>>>> The above guide explains how to set up the HDD root and the SSD root.
>>>>>>> After that all you do is create a pool on the HDD root for RBDs, a pool
>>>>>>> on the SSD root for a cache tier to use with the HDD pool, and then a
>>>>>>> pool on the SSD root for RBDs.  There aren't actually a lot of use cases
>>>>>>> out there where using an SSD cache tier on top of an HDD RBD pool is what
>>>>>>> you really want.  I would recommend testing this thoroughly and comparing
>>>>>>> your performance to just a standard HDD pool for RBDs without a cache
>>>>>>> tier.
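>>>>>>>
>>>>>>> A minimal sketch of that layout (pool names and PG counts are
>>>>>>> placeholders, and "hdd" / "ssd" stand for whatever CRUSH rules you
>>>>>>> created for each root):
>>>>>>>
>>>>>>>     ceph osd pool create rbd-hdd 1024 1024 replicated hdd
>>>>>>>     ceph osd pool create rbd-hdd-cache 128 128 replicated ssd
>>>>>>>     ceph osd pool create rbd-ssd 256 256 replicated ssd
>>>>>>>
>>>>>>>     ceph osd tier add rbd-hdd rbd-hdd-cache
>>>>>>>     ceph osd tier cache-mode rbd-hdd-cache writeback
>>>>>>>     ceph osd tier set-overlay rbd-hdd rbd-hdd-cache
>>>>>>>
>>>>>>> Clients only ever reference rbd-hdd (or rbd-ssd); with the overlay set,
>>>>>>> Ceph routes I/O through the cache pool transparently.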
>>>>>>>
>>>>>>> On Thu, Aug 17, 2017 at 3:18 PM Oscar Segarra <
>>>>>>> oscar.sega...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Sorry guys, these days I'm asking a lot of questions about how to
>>>>>>>> distribute my data.
>>>>>>>>
>>>>>>>> I have two kinds of VM:
>>>>>>>>
>>>>>>>> 1.- Management VMs (linux) --> Full SSD dedicated disks
>>>>>>>> 2.- Windows VMs --> SSD + HDD (with tiering).
>>>>>>>>
>>>>>>>> I'm working on installing two clusters on the same host, but I'm
>>>>>>>> encountering lots of problems, as named clusters do not appear to be
>>>>>>>> fully supported.
>>>>>>>>
>>>>>>>> Within the same cluster, is there any way to distribute my VMs as I
>>>>>>>> like?
>>>>>>>>
>>>>>>>> Thanks a lot!
>>>>>>>>
>>>>>>>> Ó.
>>>>>>>
>>>>>>
>>>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
