Hello again,

Thanks to all for the very helpful advice.

I have now reinstalled my ceph cluster.

Three nodes with ceph version 0.80.7 and one OSD per disk. The
journals are stored on an SSD.
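Each OSD's journal points at its own partition on the SSD, roughly like this in ceph.conf (the partition labels below are just placeholders for my SSD partitions; as far as I understand, with a raw partition as the journal no "osd journal size" is needed, since the whole partition gets used):

[osd.0]
osd journal = /dev/disk/by-partlabel/journal-osd0   # placeholder: dedicated SSD partition for osd.0
[osd.1]
osd journal = /dev/disk/by-partlabel/journal-osd1   # placeholder: one partition per OSD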

 
My ceph.conf:

[global]
fsid = bceade34-3c54-4a35-a759-7af631a19df7
mon_initial_members = ceph01
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.0.0.0/24
cluster_network = 10.0.1.0/24
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 4096
osd_pool_default_pgp_num = 4096
filestore_max_sync_interval = 30
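As a sanity check on the pg_num/pgp_num values, the usual rule of thumb from the Ceph docs is, roughly:

total PGs ~= (number of OSDs * 100) / replica count, rounded up to a power of two
          =  (24 * 100) / 2 = 1200  ->  2048

So my 4096 is on the generous side; this is just my own back-of-the-envelope check, not a recommendation.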

 


ceph osd tree:

-1 6.76  root default
-2 2.44      host ceph01
 0 0.55          osd.0   up  1
 3 0.27          osd.3   up  1
 4 0.27          osd.4   up  1
 5 0.27          osd.5   up  1
 6 0.27          osd.6   up  1
 7 0.27          osd.7   up  1
 1 0.27          osd.1   up  1
 2 0.27          osd.2   up  1
-3 2.16      host ceph02
 9 0.27          osd.9   up  1
11 0.27          osd.11  up  1
12 0.27          osd.12  up  1
13 0.27          osd.13  up  1
14 0.27          osd.14  up  1
15 0.27          osd.15  up  1
 8 0.27          osd.8   up  1
10 0.27          osd.10  up  1
-4 2.16      host ceph03
17 0.27          osd.17  up  1
18 0.27          osd.18  up  1
19 0.27          osd.19  up  1
20 0.27          osd.20  up  1
21 0.27          osd.21  up  1
22 0.27          osd.22  up  1
23 0.27          osd.23  up  1
16 0.27          osd.16  up  1

 


rados bench -p kvm 50 write --no-cleanup

Total time run:          50.494855
Total writes made:       1180
Write size:              4194304
Bandwidth (MB/sec):      93.475
Stddev Bandwidth:        16.3955
Max bandwidth (MB/sec):  112
Min bandwidth (MB/sec):  0
Average Latency:         0.684571
Stddev Latency:          0.216088
Max latency:             1.86831
Min latency:             0.234673



rados bench -p kvm 50 seq

Total time run:       15.009855
Total reads made:     1180
Read size:            4194304
Bandwidth (MB/sec):   314.460
Average Latency:      0.20296
Max latency:          1.06341
Min latency:          0.02983

 


I am really happy - these values are enough for my small number of VMs.
Inside the VMs I now get about 80 MB/s write and 130 MB/s read, with writeback
cache enabled.

But there is one little problem: with small files (4 kB to 50 kB) the cluster
is very slow.

Are there any tuning parameters for small files?
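To put a number on the small-write side, I assume something like the following rados bench run with a 4 kB object size would show it (the -b and -t values are just my guess):

rados bench -p kvm 50 write -b 4096 -t 16 --no-cleanup

These are the knobs I have been thinking of experimenting with - the option names are the standard filestore/journal/osd settings, but the values are placeholders, not recommendations:

[osd]
filestore min sync interval = 0.01    # placeholder value
filestore queue max ops = 500         # placeholder value
journal max write entries = 1000      # placeholder value
osd op threads = 4                    # placeholder value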



Thank you,

best regards



 
-----Original message-----
From: Lindsay Mathieson <[email protected]>
Sent: Friday 9th January 2015 0:59
To: [email protected]
Cc: Patrik Plank <[email protected]>
Subject: Re: [ceph-users] slow read-performance inside the vm


On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:

Hi Patrick, just a beginner myself, but have been through a similar process 
recently :)

> With these values above, I get a write performance of 90Mb/s and read
> performance of 29Mb/s, inside the VM. (Windows 2008/R2 with virtio driver
> and writeback-cache enabled) Are these values normal with my configuration
> and hardware? -> 

They do seem *very* odd. Your write performance is pretty good, but your read 
performance is abysmal - with a similar setup, with 3 OSDs slower than yours, 
I was getting 200 MB/s reads. 

Maybe your network setup is dodgy? Jumbo frames can be tricky. Have you run 
iperf between the nodes?
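Something along these lines is all it takes (just a sketch, using the public-network address from your ceph.conf):

iperf -s              # on ceph01 (10.0.0.20)
iperf -c 10.0.0.20    # from ceph02 and ceph03; should be close to line rate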

What are you using for benchmark testing on the windows guest?

Also, it is probably more useful to turn writeback caching off for benchmarking; the 
cache will totally obscure the real performance.

How is the VM mounted? rbd driver?

> The read performance seems slow. Would the
> read performance be better if I ran an OSD for every single disk?

I think so - in general, the more OSDs the better. Also, having 8 HDs in RAID0 
is a recipe for disaster: you'll lose the entire OSD if one of those disks 
fails.

I'd be creating an OSD for each HD (8 per node), with a 5-10 GB SSD partition 
per OSD for the journal. Tedious, but it should make a big difference to reads and 
writes.

Might be worthwhile trying
[global]
  filestore max sync interval = 30

as well.

-- 
Lindsay



_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
