On 01/13/15 22:03, Roland Giesler wrote:
> I have a 4-node Ceph cluster, but the disks are not equally
> distributed across the machines (they differ substantially from one
> machine to another).
>
> One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS
> drives (s3), and two machines have only two 1TB drives each (s2 & s1).
>
> Now machine s3 has by far the most CPUs and RAM, so I'm running my
> VMs mostly from there, but I want to make sure that writes to the
> Ceph cluster land on the "local" OSDs on s3 first and that the
> additional copies then go out over the network.
>
> Is this possible with Ceph?  The VMs are KVM in Proxmox, in case
> it's relevant.

I don't think it is possible, because it would break Ceph's durability
guarantees: IIRC a write is only considered done once all replicas have
been written (with default settings, 3 replicas on 3 different hosts),
so the VM has to wait for 3 servers to acknowledge the write before it
completes.
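
For reference, the replica count that drives this lives on the pool. A
quick way to check or change it (assuming the pool is called "rbd" --
substitute whatever pool Proxmox actually uses):

    # how many copies each object gets, and how many must be
    # available before the pool accepts I/O
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

    # e.g. drop to 2 copies if the reduced durability is acceptable
    ceph osd pool set rbd size 2

Lowering "size" reduces the number of acknowledgements each write waits
for, but it does not make any particular replica local to s3.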

You could maybe achieve what you want with cache tiering: a cache pool
in writeback mode, restricted by a CRUSH rule to the disks on the s3
server, placed in front of the main pool. But given the user experience
reports on this list, it may actually perform worse than your current
setup.
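
If you want to experiment anyway, here is a rough sketch of the
commands involved (pool names "vm-data" and "s3-cache" are made up,
and the host bucket is assumed to be named "s3" in your CRUSH map --
check with "ceph osd tree"):

    # CRUSH rule that only picks OSDs under the host bucket "s3"
    ceph osd crush rule create-simple s3-only s3 osd

    # cache pool placed on s3's OSDs only (PG count is just an example)
    ceph osd pool create s3-cache 128 128 replicated s3-only

    # put the cache pool in front of the backing pool in writeback mode
    ceph osd tier add vm-data s3-cache
    ceph osd tier cache-mode s3-cache writeback
    ceph osd tier set-overlay vm-data s3-cache

    # a cache tier needs a hit set and a size limit to flush/evict
    ceph osd pool set s3-cache hit_set_type bloom
    ceph osd pool set s3-cache target_max_bytes 1099511627776  # ~1TB, adjust

Keep in mind that all replicas of the cache pool would then sit on a
single host, so losing s3 means losing whatever dirty data is still in
the cache -- the same durability trade-off mentioned above.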

Best regards,

Lionel Bouton