Re: [ceph-users] Cache pool tiering SSD journal

2015-01-18 Thread lidc...@redhat.com
No. If you use cache tiering, there is no need to use an SSD journal as well.
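As far as I understand, a cache pool overlays exactly one base pool (ceph osd tier set-overlay binds them one-to-one), so a single big SSD pool cannot front all the low-cost pools at once; you would carve one cache pool per base pool. A minimal sketch, with hypothetical pool names:

  ceph osd tier add sas1 ssd-cache1           # attach the cache pool to one base pool
  ceph osd tier cache-mode ssd-cache1 writeback
  ceph osd tier set-overlay sas1 ssd-cache1   # route client I/O through the cache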

From: Florent MONTHEL
Date: 2015-01-17 23:43
To: ceph-users
Subject: [ceph-users] Cache pool tiering SSD journal
Hi list,

With the cache pool tiering enhancement (in write-back mode), should I keep using SSD journals?
Can we have one big SSD pool acting as a cache for all the low-cost storage pools?
Thanks

Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] the performance issue for cache pool

2015-01-12 Thread lidc...@redhat.com
Hi everyone:

I used writeback mode for the cache pool:

  ceph osd tier add sas ssd
  ceph osd tier cache-mode ssd writeback
  ceph osd tier set-overlay sas ssd

and I also set the dirty ratio and the full ratio:

ceph osd pool set ssd cache_target_dirty_ratio .4 
ceph osd pool set ssd cache_target_full_ratio .8 
 
The capacity of the SSD cache pool is 4 TB.
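As I read the docs, the agent interprets these ratios as fractions of target_max_bytes / target_max_objects, so the cache pool also needs an absolute target. A sketch sized to that 4 TB capacity (the exact value is an assumption):

  ceph osd pool set ssd target_max_bytes 4398046511104   # 4 TiB; the dirty/full ratios apply against this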

I used fio to test performance:
fio -filename=/dev/rbd0 -direct=1 -iodepth 32 -thread -rw=randwrite 
-ioengine=libaio -bs=16M -size=2000G -group_reporting -name=mytest

At the beginning the performance is very good, but after half an hour, when the hot cache pool begins flushing dirty objects, the RADOS write throughput becomes unstable, swinging between 87851 kB/s and 860 MB/s.

Are there any tuning parameters to get more stable performance?
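The knobs I have found so far (the values below are illustrative, not validated on this cluster): lowering cache_target_dirty_ratio should make the agent start flushing earlier and in smaller bursts, and the min-age settings keep freshly written objects from being flushed right away:

  ceph osd pool set ssd cache_target_dirty_ratio .2   # start flushing earlier
  ceph osd pool set ssd cache_min_flush_age 600       # seconds before a dirty object may be flushed
  ceph osd pool set ssd cache_min_evict_age 1800      # seconds before a clean object may be evicted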

Thanks.

2014-12-23 22:46:24.844730 mon.0 [INF] pgmap v24101: 6144 pgs: 6144 
active+clean; 1246 GB data, 4012 GB used, 45109 GB / 49121 GB avail; 680 MB/s 
wr, 1007 op/s
2014-12-23 22:46:27.851431 mon.0 [INF] pgmap v24102: 6144 pgs: 6144 
active+clean; 1246 GB data, 4012 GB used, 45109 GB / 49121 GB avail; 161 MB/s 
wr, 299 op/s
2014-12-23 22:46:28.883866 mon.0 [INF] pgmap v24103: 6144 pgs: 6144 
active+clean; 1247 GB data, 4015 GB used, 45106 GB / 49121 GB avail; 308 MB/s 
wr, 1065 op/s
2014-12-23 22:46:29.885914 mon.0 [INF] pgmap v24104: 6144 pgs: 6144 
active+clean; 1247 GB data, 4016 GB used, 45105 GB / 49121 GB avail; 701 MB/s 
wr, 1621 op/s
2014-12-23 22:46:32.842955 mon.0 [INF] pgmap v24105: 6144 pgs: 6144 
active+clean; 1247 GB data, 4016 GB used, 45105 GB / 49121 GB avail; 116 MB/s 
wr, 160 op/s
2014-12-23 22:46:33.863964 mon.0 [INF] pgmap v24106: 6144 pgs: 6144 
active+clean; 1248 GB data, 4021 GB used, 45100 GB / 49121 GB avail; 344 MB/s 
wr, 923 op/s
2014-12-23 22:46:34.861011 mon.0 [INF] pgmap v24107: 6144 pgs: 6144 
active+clean; 1248 GB data, 4021 GB used, 45100 GB / 49121 GB avail; 706 MB/s 
wr, 1564 op/s
2014-12-23 22:46:38.176885 mon.0 [INF] pgmap v24108: 6144 pgs: 6144 
active+clean; 1249 GB data, 4024 GB used, 45097 GB / 49121 GB avail; 222 MB/s 
wr, 938 op/s
2014-12-23 22:46:39.177233 mon.0 [INF] pgmap v24109: 6144 pgs: 6144 
active+clean; 1250 GB data, 4026 GB used, 45095 GB / 49121 GB avail; 427 MB/s 
wr, 1292 op/s
2014-12-23 22:46:42.842279 mon.0 [INF] pgmap v24110: 6144 pgs: 6144 
active+clean; 1250 GB data, 4026 GB used, 45095 GB / 49121 GB avail; 320 MB/s 
wr, 570 op/s
2014-12-23 22:46:43.872017 mon.0 [INF] pgmap v24111: 6144 pgs: 6144 
active+clean; 1251 GB data, 4030 GB used, 45090 GB / 49121 GB avail; 405 MB/s 
wr, 992 op/s
2014-12-23 22:46:44.862873 mon.0 [INF] pgmap v24112: 6144 pgs: 6144 
active+clean; 1251 GB data, 4030 GB used, 45090 GB / 49121 GB avail; 729 MB/s 
wr, 1755 op/s
2014-12-23 22:46:47.847813 mon.0 [INF] pgmap v24113: 6144 pgs: 6144 
active+clean; 1251 GB data, 4031 GB used, 45090 GB / 49121 GB avail; 2053 kB/s 
wr, 135 op/s
2014-12-23 22:46:48.857285 mon.0 [INF] pgmap v24114: 6144 pgs: 6144 
active+clean; 1252 GB data, 4033 GB used, 45087 GB / 49121 GB avail; 272 MB/s 
wr, 433 op/s
2014-12-23 22:46:49.871775 mon.0 [INF] pgmap v24115: 6144 pgs: 6144 
active+clean; 1252 GB data, 4034 GB used, 45087 GB / 49121 GB avail; 535 MB/s 
wr, 586 op/s
2014-12-23 22:46:52.842098 mon.0 [INF] pgmap v24116: 6144 pgs: 6144 
active+clean; 1252 GB data, 4033 GB used, 45088 GB / 49121 GB avail; 3074 kB/s 
wr, 113 op/s
2014-12-23 22:46:53.845398 mon.0 [INF] pgmap v24117: 6144 pgs: 6144 
active+clean; 1254 GB data, 4037 GB used, 45084 GB / 49121 GB avail; 342 MB/s 
wr, 571 op/s
2014-12-23 22:46:57.844137 mon.0 [INF] pgmap v24118: 6144 pgs: 6144 
active+clean; 1254 GB data, 4037 GB used, 45084 GB / 49121 GB avail; 302 MB/s 
wr, 577 op/s
2014-12-23 22:46:58.848028 mon.0 [INF] pgmap v24119: 6144 pgs: 6144 
active+clean; 1255 GB data, 4039 GB used, 45082 GB / 49121 GB avail; 319 MB/s 
wr, 897 op/s
2014-12-23 22:47:02.844724 mon.0 [INF] pgmap v24120: 6144 pgs: 6144 
active+clean; 1255 GB data, 4039 GB used, 45082 GB / 49121 GB avail; 327 MB/s 
wr, 856 op/s
2014-12-23 22:47:03.850795 mon.0 [INF] pgmap v24121: 6144 pgs: 6144 
active+clean; 1256 GB data, 4043 GB used, 45078 GB / 49121 GB avail; 297 MB/s 
wr, 887 op/s
2014-12-23 22:47:08.169046 mon.0 [INF] pgmap v24122: 6144 pgs: 6144 
active+clean; 1256 GB data, 4045 GB used, 45076 GB / 49121 GB avail; 318 MB/s 
wr, 830 op/s
2014-12-23 22:47:09.169302 mon.0 [INF] pgmap v24123: 6144 pgs: 6144 
active+clean; 1257 GB data, 4046 GB used, 45075 GB / 49121 GB avail; 133 MB/s 
wr, 257 op/s
2014-12-23 22:47:12.844073 mon.0 [INF] pgmap v24124: 6144 pgs: 6144 
active+clean; 1257 GB data, 4046 GB used, 45075 GB / 49121 GB avail; 65702 kB/s 
wr, 124 op/s
2014-12-23 22:47:13.845286 mon.0 [INF] pgmap v24125: 6144 pgs: 6144 
active+clean; 1257 GB data, 4047 GB used, 45074 GB / 49121 GB avail; 142 MB/s 
wr, 284 op/s
2014-12-23 22:47:14.846753 mon.0 [INF] pgmap v24126: 6144 pgs: 6144 
active+clean; 1257 GB data, 4047 GB used, 45074 GB / 49121 GB avail; 461 

[ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
Hi everyone:
  I plan to use an SSD journal to improve performance.
  I have one 1.2T SSD disk per server.

  What is the best practice for an SSD journal?
  There are three choices for deploying the SSD journal (a partitioning sketch for choice 2 follows below):
  1. all OSDs use the same SSD partition
  ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd
  2. each OSD uses its own SSD partition
  ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2
  3. each OSD uses a file for its journal, with the file on the SSD
  ceph-deploy osd create ceph-node:sdb:/mnt/ssd/ssd1 ceph-node:sdc:/mnt/ssd/ssd2

  Any suggestions?
   Thanks.
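
A sketch for choice 2, pre-creating one journal partition per OSD before running ceph-deploy (the 20G size is an assumption; adjust to the workload):

  sgdisk --new=1:0:+20G --change-name=1:'ceph journal' /dev/ssd   # journal for the sdb OSD
  sgdisk --new=2:0:+20G --change-name=2:'ceph journal' /dev/ssd   # journal for the sdc OSD
  ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2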



Re: [ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
For the first choice:
 ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd
I find that ceph-deploy creates the partitions automatically, and each partition is 5G by default.
So the first choice and the second choice end up almost the same.
Compared to a file on a filesystem, I prefer a block device, which gives better performance.
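
The 5G default comes from the osd journal size setting, so bigger journals can be requested in ceph.conf before deploying, for example:

  [osd]
  osd journal size = 20480    # in MB; the default is 5120 (5 GB)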


 
From: lidc...@redhat.com
Date: 2015-01-12 12:35
To: ceph-us...@ceph.com
Subject: SSD Journal Best Practice
Hi everyone:
  I plan to use an SSD journal to improve performance.
  I have one 1.2T SSD disk per server.

  What is the best practice for an SSD journal?
  There are three choices for deploying the SSD journal:
  1. all OSDs use the same SSD partition
  ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd
  2. each OSD uses its own SSD partition
  ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2
  3. each OSD uses a file for its journal, with the file on the SSD
  ceph-deploy osd create ceph-node:sdb:/mnt/ssd/ssd1 ceph-node:sdc:/mnt/ssd/ssd2

  Any suggestions?
   Thanks.
