Hi,

 

We have managed to deploy Ceph with CloudStack. We are now running 3
monitors and 5 OSDs. We are sharing some benchmark output, and we are quite
proud to have Ceph up and running. We plan to move Ceph to production
shortly. We have also managed to build VSM (a GUI) to monitor Ceph.
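
In case it is useful, the standard Ceph CLI can confirm the monitor/OSD
counts and overall cluster health. A minimal sketch of the usual commands
(generic, output omitted):

ceph -s          # overall health, monitor quorum and OSD count
ceph mon stat    # the 3 monitors and their quorum status
ceph osd tree    # the 5 OSDs and their placement in the CRUSH map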

 

Results:

 

These tests were run from a single VM only, and the results look good.
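
We did not paste the exact fio command, but the parameters can be read back
from the output below (job name "test", 4k block size, libaio, iodepth 64,
a 512MB test file, fio 2.0.13). Something along these lines should reproduce
each run; the file path and --direct=1 are only illustrative assumptions,
not copied from the original invocation:

fio --name=test --filename=/root/fio-testfile --size=512M --bs=4k \
    --ioengine=libaio --iodepth=64 --direct=1 --rw=randrw
# repeat with --rw=randread and --rw=randwrite for the other two runs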

 

ceph-1

 

test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.13

Starting 1 process

Jobs: 1 (f=1): [m] [100.0% done] [9078K/9398K/0K /s] [2269 /2349 /0  iops] [eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=1167: Tue Jun 28 21:26:28 2016

  read : io=262184KB, bw=10323KB/s, iops=2580 , runt= 25399msec

  write: io=262104KB, bw=10319KB/s, iops=2579 , runt= 25399msec

  cpu          : usr=4.30%, sys=23.89%, ctx=69266, majf=0, minf=20

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued    : total=r=65546/w=65526/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

   READ: io=262184KB, aggrb=10322KB/s, minb=10322KB/s, maxb=10322KB/s, mint=25399msec, maxt=25399msec

  WRITE: io=262104KB, aggrb=10319KB/s, minb=10319KB/s, maxb=10319KB/s, mint=25399msec, maxt=25399msec

 

Disk stats (read/write):

    dm-0: ios=65365/65345, merge=0/0, ticks=501897/1094751, in_queue=1598532, util=99.75%, aggrios=65546/65542, aggrmerge=0/1, aggrticks=508542/1102418, aggrin_queue=1610856, aggrutil=99.70%

  vda: ios=65546/65542, merge=0/1, ticks=508542/1102418, in_queue=1610856, util=99.70%

 

test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.13

Starting 1 process

test: Laying out IO file(s) (1 file(s) / 512MB)

Jobs: 1 (f=1): [r] [100.0% done] [58279K/0K/0K /s] [14.6K/0 /0  iops] [eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=1174: Tue Jun 28 21:31:25 2016

  read : io=524288KB, bw=60992KB/s, iops=15248 , runt=  8596msec

  cpu          : usr=9.59%, sys=49.33%, ctx=88437, majf=0, minf=83

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued    : total=r=131072/w=0/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

   READ: io=524288KB, aggrb=60992KB/s, minb=60992KB/s, maxb=60992KB/s, mint=8596msec, maxt=8596msec

 

Disk stats (read/write):

    dm-0: ios=128588/3, merge=0/0, ticks=530897/81, in_queue=531587, util=98.88%, aggrios=131072/4, aggrmerge=0/0, aggrticks=542615/81, aggrin_queue=542605, aggrutil=98.64%

  vda: ios=131072/4, merge=0/0, ticks=542615/81, in_queue=542605, util=98.64%

 

test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.13

Starting 1 process

test: Laying out IO file(s) (1 file(s) / 512MB)

Jobs: 1 (f=1): [w] [100.0% done] [0K/2801K/0K /s] [0 /700 /0  iops] [eta 00m:00s]

test: (groupid=0, jobs=1): err= 0: pid=1178: Tue Jun 28 21:36:43 2016

  write: io=524288KB, bw=7749.4KB/s, iops=1937 , runt= 67656msec

  cpu          : usr=2.20%, sys=14.58%, ctx=51767, majf=0, minf=19

  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

     issued    : total=r=0/w=131072/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

  WRITE: io=524288KB, aggrb=7749KB/s, minb=7749KB/s, maxb=7749KB/s, mint=67656msec, maxt=67656msec

 

Disk stats (read/write):

    dm-0: ios=0/134525, merge=0/0, ticks=0/4563062, in_queue=4575253, util=100.00%, aggrios=0/131235, aggrmerge=0/3303, aggrticks=0/4276064, aggrin_queue=4275879, aggrutil=99.99%

  vda: ios=0/131235, merge=0/3303, ticks=0/4276064, in_queue=4275879, util=99.99%

