Re: [ceph-users] Mysql performance on CephFS vs RBD

2017-05-01 Thread RDS
There is one more thing that I noticed when using CephFS instead of RBD for MySQL, and that is CPU usage on the client. When using RBD, I was using 99% of the CPUs. When I switched to CephFS, the same tests were using 60% of the CPUs. Performance was about equal. This test was an OLTP sysbench
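For context, a minimal sketch of that kind of sysbench OLTP run against a MySQL instance whose datadir sits on the RBD or CephFS mount; host, credentials, table sizing, and thread count here are placeholders rather than the values from the test above (sysbench 1.0 syntax shown; older releases used the --test=oltp form):

    sysbench oltp_read_write --mysql-host=<db-host> --mysql-user=sbtest \
        --mysql-password=<secret> --tables=10 --table-size=1000000 \
        --threads=25 --time=300 prepare
    # repeat with 'run' instead of 'prepare' to execute the benchmark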

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-03-06 Thread RDS
Maxime, I forgot to mention a couple more things that you can try when using SMR HDDs. You could try ext4 with “lazy” initialization. Another option is specifying the “lazytime” ext4 mount option. Depending on your workload, you could possibly see some big improvements. Rick
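As a rough sketch, assuming the SMR drive appears as /dev/sdX and with an illustrative OSD mount point, the two options look like this (lazytime needs a reasonably recent kernel):

    mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdX1
    mount -o lazytime /dev/sdX1 /var/lib/ceph/osd/ceph-N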

Re: [ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack

2016-11-25 Thread RDS
If I use slow HDDs, I can get the same outcome. Placing journals on fast SAS or NVMe SSDs will make a difference. If you are using SATA SSDs, those are much slower. Instead of guessing why Ceph is lagging, have you looked at ceph -w, iostat, and vmstat reports during your tests? iostat will
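For example, something along these lines on an OSD node during a test run (device names are placeholders):

    ceph -w                    # watch cluster events for slow/blocked request warnings
    iostat -x 5 sdb nvme0n1    # per-device utilization, await, and queue size for data disk and journal
    vmstat 5                   # CPU, run queue, and swap activity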

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread RDS
for RBD and RBD-NBD? > Best regards, > On Wed, Aug 31, 2016 at 10:11 PM, RDS <rs3...@me.com> wrote: > In my testing, using RBD-NBD is faster than using RBD or CephFS. > For a MySQL/sysbench test using 25 threads using OLTP, using a 40G network

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread RDS
In my testing, using RBD-NBD is faster than using RBD or CephFS. For a MySQL/sysbench OLTP test using 25 threads, over a 40G network between the client and Ceph, here are some of my results: using ceph-rbd: transactions per sec: 8620; using ceph rbd-nbd: transactions per sec: 9359; using
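For anyone wanting to reproduce the rbd-nbd case, a sketch of the mapping step; the pool name, image name, and filesystem choice are placeholders:

    rbd-nbd map mysql-pool/mysql-data    # prints the nbd device, e.g. /dev/nbd0
    mkfs.xfs /dev/nbd0
    mount /dev/nbd0 /var/lib/mysql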

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-12 Thread RDS
Mirror the OS disks, use 10 disks for 10 OSDs. > On Aug 12, 2016, at 7:41 AM, Félix Barbeira wrote: > Hi, I'm planning to make a ceph cluster but I have a serious doubt. At this moment we have ~10 servers DELL R730xd with 12x4TB SATA disks. The official ceph
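As a sketch of the mirroring suggestion, assuming two dedicated OS drives (sda/sdb) with the data disks left whole for OSDs; a hardware RAID1 on the controller achieves the same thing:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0    # root filesystem on the mirrored device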

Re: [ceph-users] performance decrease after continuous run

2016-07-27 Thread RDS
I have seen this, and some of our big customers have also seen it. I was using 8TB HDDs, and when running small tests on a fresh HDD setup, these tests resulted in very good performance. I then loaded the Ceph cluster so each of the 8TB HDDs used 4TB and reran the same tests. Performance was
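One way to reproduce this effect is to pre-fill the pool before re-running the benchmark; a hedged sketch with rados bench (pool name and runtimes are placeholders, and reaching ~50% raw capacity on 8TB drives takes far longer than a typical bench run):

    rados bench -p testpool 3600 write --no-cleanup    # leave the written objects in place
    rados bench -p testpool 300 seq                    # reread against the now-fuller cluster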

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread RDS
I had a similar issue when migrating from SSD to NVMe using Ubuntu. Read performance tanked using NVMe. Iostat showed each NVMe performing 30x more physical reads compared to SSD, but the MB/s was 1/6 the speed of the SSD. I set "blockdev --setra 128 /dev/nvmeX" and now performance is much
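blockdev --setra does not persist across reboots; a udev rule is one way to keep the 128-sector (64 KB) readahead. A sketch, assuming single-digit controller/namespace numbers in the device names:

    # /etc/udev/rules.d/60-nvme-readahead.rules
    ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/read_ahead_kb}="64"

Reload with 'udevadm control --reload-rules && udevadm trigger' or reboot to apply.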

[ceph-users] Upgrade from .94 to 10.0.5

2016-03-19 Thread RDS
Is there documentation on all the steps showing how to upgrade from .94 to 10.0.5? Thanks, Rick
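The release notes for the target version are the authoritative checklist, but the usual rolling order is packages first, then restart monitors one at a time, then OSDs, then any MDS/RGW daemons, verifying health in between. A rough sketch only, assuming Debian/Ubuntu packages and a systemd-based release on the nodes:

    apt-get update && apt-get install ceph ceph-common   # on each node, starting with the monitors
    systemctl restart ceph-mon@<id>    # one mon at a time, wait for quorum to reform
    systemctl restart ceph-osd@<id>    # then OSDs, a host at a time
    ceph -s                            # confirm HEALTH_OK before moving on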

Re: [ceph-users] Ceph RBD latencies

2016-03-03 Thread RDS
A couple of suggestions:
1) # of PGs per OSD should be 100-200 (see the sketch below)
2) When dealing with SSD or Flash, performance of these devices hinges on how you partition them and how you tune Linux:
   a) if using partitions, did you align the partitions on a 4k boundary? I start at sector 2048 using
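A sketch of both points, with pool name, OSD count, replica size, and device as placeholders; the PG count targets roughly 100 PGs per OSD after accounting for replicas, and a partition starting at sector 2048 is 1 MiB aligned:

    # e.g. ~60 OSDs, size 3: 60 * 100 / 3 = 2000, rounded up to a power of two
    ceph osd pool create mypool 2048 2048

    # journal partition aligned at sector 2048 (1 MiB) on the SSD
    sgdisk --new=1:2048:+20G /dev/sdX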