Hi Vitaliy,

> You say you don't have access to raw drives. What does it mean? Do you
> run Ceph OSDs inside VMs? In that case you should probably disable
> Micron caches on the hosts, not just in VMs.

Sorry, I should have been more clear.  This cluster is in production, so I 
needed to schedule a maintenance window for the tests: I'll "out" an OSD and 
remove it from Ceph so I can test against the raw drive, and I'll also pull a 
host from the cluster so power off/on tests can be performed.  Right now, all 
I have access to is the VM level and the ability to enable/disable the write 
cache on the 5200's using hdparm (but no read/write tests directly against the 
5200's, of course, since those would be destructive).
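
For reference, the cache toggling itself is just the usual hdparm invocations, 
something along these lines (/dev/sdX is a placeholder for whichever 5200 I'm 
touching):

  # show the current volatile write cache setting
  hdparm -W /dev/sdX

  # disable the write cache
  hdparm -W 0 /dev/sdX

  # re-enable the write cache
  hdparm -W 1 /dev/sdX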


> Yes, disabling the write cache only takes place upon a power cycle... or
> upon the next hotplug of the drive itself.

I have a suspicion this is the reason we didn't see any change! :)  Definitely 
an important item.  Once I have test results, I will report back.  This may be 
something you want to add to your wiki article.
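
If a full power cycle turns out to be hard to schedule, my understanding is 
that a soft hotplug via sysfs should also count as a hotplug for this purpose 
(sketch only, with placeholder device/host names, and obviously only after the 
OSD has been stopped and marked out):

  # remove the disk from the SCSI layer (placeholder: sdc)
  echo 1 > /sys/block/sdc/device/delete

  # rescan the controller so the disk is re-detected (placeholder: host0)
  echo "- - -" > /sys/class/scsi_host/host0/scan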


> If you get the same ~15k or more iops with -rw=randwrite -fsync=1
> -iodepth=1 with both hdparm -W 0 and -W 1 you're good :) if you have
> cache problems you'll get much less.

Once I have a 5200 available to play with, I will definitely let you know the 
results.
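
For my own notes, the test I'm planning against the raw drive once it's out of 
the cluster is something along these lines (runtime and device are 
placeholders; the important parameters are the ones you listed):

  fio -name=test -filename=/dev/sdX -ioengine=libaio -direct=1 \
      -bs=4k -rw=randwrite -fsync=1 -iodepth=1 -runtime=60

run once with hdparm -W 0 and once with -W 1, comparing the iops.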


> About Micron 5300's, please benchmark them when you have them as
> described here
> https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit
> (instructions in the end of the sheet)

Most definitely.  Unfortunately, I suspect it will be another month before we 
get them. :(
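
In the meantime, my rough understanding of that kind of test matrix (I haven't 
cross-checked every parameter against the sheet, so treat this as a sketch) is 
the single-threaded sync random write above plus a linear write and a parallel 
random write, roughly:

  # peak linear write
  fio -name=test -filename=/dev/sdX -ioengine=libaio -direct=1 \
      -bs=4M -rw=write -iodepth=32 -runtime=60

  # parallel random write
  fio -name=test -filename=/dev/sdX -ioengine=libaio -direct=1 \
      -bs=4k -rw=randwrite -iodepth=128 -runtime=60

I'll of course follow whatever the sheet actually says when we get the 5300's.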

Eric

