Wido, all,

Can you point me to the "recent benchmarks" so I can have a look?
How do you define "performance"? I would not expect CephFS throughput to change, but it is surprising to me that metadata on SSD would have no measurable effect on latency.

- mike

On 1/3/17 10:49 AM, Wido den Hollander wrote:

On 3 January 2017 at 2:49, Mike Miller <[email protected]> wrote:


Will metadata on SSD improve latency significantly?


No, as I said in my previous e-mail, recent benchmarks showed that storing 
CephFS metadata on SSD does not improve performance.

It still might be good to do, since the metadata pool holds relatively little data and recovery will 
go quickly, but don't expect a CephFS performance improvement.
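
For reference, a quick way to check how little data the metadata pool actually holds, and which CRUSH ruleset it currently uses, before moving it (the pool name cephfs_metadata is an assumption; adjust to your cluster):

  # Per-pool usage; the CephFS metadata pool is usually tiny compared to the data pool
  ceph df detail

  # Show which CRUSH ruleset the metadata pool currently uses
  # (the property is "crush_ruleset" on pre-Luminous releases, "crush_rule" on later ones)
  ceph osd pool get cephfs_metadata crush_ruleset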

Wido

Mike

On 1/2/17 11:50 AM, Wido den Hollander wrote:

On 2 January 2017 at 10:33, Shinobu Kinjo <[email protected]> wrote:


I've never done a migration of cephfs_metadata from spinning disks to
SSDs, but logically you could achieve this in two phases (a sketch follows after the list):

 #1 Configure a CRUSH rule that includes both spinning disks and SSDs
 #2 Configure a CRUSH rule that points only to SSDs
  * This would cause massive data shuffling.
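
For concreteness, a minimal sketch of the SSD-only part (#2), assuming a CRUSH root/bucket named "ssd" already exists in your map and the metadata pool is called "cephfs_metadata" (both names are assumptions; adjust to your cluster):

  # Create a replicated rule that chooses OSDs only under the "ssd" root,
  # with host as the failure domain
  ceph osd crush rule create-simple ssd_rule ssd host

  # Find the ruleset id of the new rule
  ceph osd crush rule dump

  # Point the metadata pool at the new rule; its objects then migrate to the SSDs.
  # On pre-Luminous releases the pool property is "crush_ruleset" and takes the
  # numeric ruleset id (on Luminous and later it is "crush_rule" and takes the name).
  ceph osd pool set cephfs_metadata crush_ruleset <id-of-ssd_rule>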

Not really; usually the CephFS metadata isn't that much data.

Recent benchmarks (can't find them now) show that storing CephFS metadata on 
SSD doesn't really improve performance though.

Wido



On Mon, Jan 2, 2017 at 2:36 PM, Mike Miller <[email protected]> wrote:
Hi,

Happy New Year!

Can anyone point me to specific walkthrough / howto instructions on how to move
CephFS metadata to SSD in a running cluster?

How should CRUSH be modified, step by step, so that the metadata migrates to
SSD?

Thanks and regards,

Mike
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
