Hi list,
I'm trying to get the best out of my 3-node "low cost" hardware for testing purposes:
3 Dell PowerEdge 2950.
Cluster/Public networks both with 2x1Gb LACP (layer3+4 hash)
No MDS running for now.
SAS disks (no SSD), a mix of 10,000 and 15,000 rpm.
sda = system
sdb, sdc, sdd, sde = OSDs
sdf = journal (osd_journal_size = 1024)
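For reference, each OSD's journal sits on its own partition of sdf; the relevant ceph.conf entries look roughly like this (partition numbers are illustrative, only ceph1 shown):

[osd]
    osd journal size = 1024
[osd.0]
    osd journal = /dev/sdf1
[osd.1]
    osd journal = /dev/sdf2
[osd.2]
    osd journal = /dev/sdf3
[osd.3]
    osd journal = /dev/sdf4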
My question is: how can I optimize my OSD weights (or something else) given this config? Is the hdparm -t test below useful for that? For instance, would it be a better choice to use sdd for the journals?
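If weights are indeed the right knob, I assume I can adjust them live with something like the following (assuming sdd maps to osd.2/osd.6/osd.10 on the three hosts; both the mapping and the 0.35 value are guesses on my part):

ceph osd crush reweight osd.2 0.35    # sdd on ceph1 (fastest disk in the hdparm runs)
ceph osd crush reweight osd.6 0.35    # sdd on ceph2
ceph osd crush reweight osd.10 0.35   # sdd on ceph3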
Some quick tests:
ceph1:
hdparm -t /dev/sdb ; hdparm -t /dev/sdc ; hdparm -t /dev/sdd ; hdparm -t /dev/sde ; hdparm -t /dev/sdf
/dev/sdb:
Timing buffered disk reads: 374 MB in 3.00 seconds = 124.62 MB/sec
/dev/sdc:
Timing buffered disk reads: 264 MB in 3.02 seconds = 87.51 MB/sec
/dev/sdd:
Timing buffered disk reads: 470 MB in 3.01 seconds = 156.19 MB/sec
/dev/sde:
Timing buffered disk reads: 264 MB in 3.01 seconds = 87.61 MB/sec
/dev/sdf:
Timing buffered disk reads: 268 MB in 3.01 seconds = 89.00 MB/sec
ceph2:
hdparm -t /dev/sdb ; hdparm -t /dev/sdc ; hdparm -t /dev/sdd ; hdparm -t /dev/sde ; hdparm -t /dev/sdf
/dev/sdb:
Timing buffered disk reads: 376 MB in 3.00 seconds = 125.15 MB/sec
/dev/sdc:
Timing buffered disk reads: 264 MB in 3.01 seconds = 87.68 MB/sec
/dev/sdd:
Timing buffered disk reads: 502 MB in 3.01 seconds = 166.71 MB/sec
/dev/sde:
Timing buffered disk reads: 264 MB in 3.02 seconds = 87.55 MB/sec
/dev/sdf:
Timing buffered disk reads: 268 MB in 3.01 seconds = 89.09 MB/sec
ceph3:
hdparm -t /dev/sdb ; hdparm -t /dev/sdc ; hdparm -t /dev/sdd ; hdparm -t /dev/sde ; hdparm -t /dev/sdf
/dev/sdb:
Timing buffered disk reads: 376 MB in 3.00 seconds = 125.14 MB/sec
/dev/sdc:
Timing buffered disk reads: 236 MB in 3.00 seconds = 78.64 MB/sec
/dev/sdd:
Timing buffered disk reads: 504 MB in 3.00 seconds = 167.77 MB/sec
/dev/sde:
Timing buffered disk reads: 258 MB in 3.01 seconds = 85.66 MB/sec
/dev/sdf:
Timing buffered disk reads: 274 MB in 3.00 seconds = 91.28 MB/sec
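Since hdparm -t only measures buffered sequential reads and the journal workload is write-heavy, I guess a direct write test on sdf would be more telling. Something like this, writing 1 GB in 4M blocks (destructive on the raw device, so only before the journal is in use):

dd if=/dev/zero of=/dev/sdf bs=4M count=256 oflag=direct   # WARNING: overwrites sdf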
Now some tests: rados bench against the "logs" pool, then dd on an RBD client mountpoint:
rados bench -p logs 100 write
Total time run: 101.479343
Total writes made: 1626
Write size: 4194304
Bandwidth (MB/sec): 64.092
Average Latency: 0.996858
Max latency: 3.23692
Min latency: 0.106049
rados bench -p logs 100 seq
Total time run: 35.625185
Total reads made: 1626
Read size: 4194304
Bandwidth (MB/sec): 182.567
Average Latency: 0.350297
Max latency: 1.5221
Min latency: 0.048679
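(Side note for anyone reproducing this: the seq phase reads the objects left behind by the write phase; rados bench has a --no-cleanup flag to keep them around:)

rados bench -p logs 100 write --no-cleanup
rados bench -p logs 100 seq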
dd on a remote RBD mountpoint:
dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 16.8977 s, 63.5 MB/s
dd if=/dev/zero of=here bs=200M count=100 oflag=direct
100+0 records in
100+0 records out
20971520000 bytes (21 GB) copied, 341.549 s, 61.4 MB/s
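These big sequential blocks look fine; for random IO on the same mountpoint I suppose something like this fio run would be more representative (/mnt/rbd is just where I happen to mount the image):

fio --name=randwrite --directory=/mnt/rbd --rw=randwrite --bs=4k --size=1G --direct=1 --runtime=60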
And my crushmap:
# begin crush map
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root
# buckets
host ceph1 {
    id -2        # do not change unnecessarily
    # weight 0.880
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 0.070
    item osd.1 weight 0.270
    item osd.2 weight 0.270
    item osd.3 weight 0.270
}
host ceph2 {
    id -3        # do not change unnecessarily
    # weight 0.880
    alg straw
    hash 0       # rjenkins1
    item osd.4 weight 0.070
    item osd.5 weight 0.270
    item osd.6 weight 0.270
    item osd.7 weight 0.270
}
host ceph3 {
    id -4        # do not change unnecessarily
    # weight 0.880
    alg straw
    hash 0       # rjenkins1
    item osd.8 weight 0.070
    item osd.9 weight 0.270
    item osd.10 weight 0.270
    item osd.11 weight 0.270
}
root default {
    id -1        # do not change unnecessarily
    # weight 2.640
    alg straw
    hash 0       # rjenkins1
    item ceph1 weight 0.880
    item ceph2 weight 0.880
    item ceph3 weight 0.880
}
# rules
rule data {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule metadata {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule rbd {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
# end crush map
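If hand-tuning this map is the way to go, I assume the edit cycle is the usual decompile/compile/inject round trip:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit weights in crushmap.txt, then:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new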