>
> Can you share your ceph.conf please?
>
> > > Another useful piece of information would be the ceph-osd's raw
> > > performance writing to the local disk+journal, which you can get with
> > >
> > > $ ceph tell osd.0 bench
> > >
> > > You might want to check it for several nodes to see if it's consistent,
> > > etc.
> > >
> > Below are the results from above command run against all osd's
> >
> >
> > 2012-03-15 13:06:19.980924 osd.0 -> 'bench: wrote 1024 MB in blocks of
> > 4096 KB in 67.474949 sec at 15540 KB/sec' (0)
> > 2012-03-15 13:09:20.573176 osd.1 -> 'bench: wrote 1024 MB in blocks of
> > 4096 KB in 70.815932 sec at 14807 KB/sec' (0)
> > 2012-03-15 13:11:57.895738 osd.2 -> 'bench: wrote 1024 MB in blocks of
> > 4096 KB in 60.370233 sec at 17369 KB/sec' (0)
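For a quick side-by-side comparison across nodes, the throughput figures can be pulled out of output like the above with a small awk one-liner. A sketch, assuming the three result lines (unwrapped) are saved to a file named bench.log:

```shell
# Save the bench result lines (one per line) to a file.
cat > bench.log <<'EOF'
2012-03-15 13:06:19.980924 osd.0 -> 'bench: wrote 1024 MB in blocks of 4096 KB in 67.474949 sec at 15540 KB/sec' (0)
2012-03-15 13:09:20.573176 osd.1 -> 'bench: wrote 1024 MB in blocks of 4096 KB in 70.815932 sec at 14807 KB/sec' (0)
2012-03-15 13:11:57.895738 osd.2 -> 'bench: wrote 1024 MB in blocks of 4096 KB in 60.370233 sec at 17369 KB/sec' (0)
EOF

# $3 is the osd id; the field after "at" is the throughput in KB/sec.
awk '/bench: wrote/ { for (i = 1; i <= NF; i++) if ($i == "at") print $3, $(i+1) " KB/sec" }' bench.log
# -> osd.0 15540 KB/sec
#    osd.1 14807 KB/sec
#    osd.2 17369 KB/sec
```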
>
> This is pretty slow, and probably due to the way your osd journals are
> configured. Please share your ceph.conf!
>
Below is my ceph.conf file:
[root@ceph-node-8 ~]# cat /etc/ceph/ceph.conf
[global]
        ;auth supported = cephx
        keyring = /etc/ceph/admin.keyring
        debug ms = 1
        debug mds = 10

[mon]
        mon data = /data/mon.$id

[mon.a]
        host = ceph-node-4
        mon addr = xx.xx.xx.xx

[mon.b]
        host = ceph-node-5
        mon addr = xx.xx.xx.xx

[mon.c]
        host = ceph-node-6
        mon addr = xx.xx.xx.xx

[mds]
        keyring = /etc/ceph/keyring.$name

[mds.ceph-node-1]
        host = ceph-node-7

[mds.ceph-node-2]
        host = ceph-node-8

[osd]
        osd data = /data/osd.$id
        keyring = /etc/ceph/keyring.$name
        osd journal = /journal/osd.$id.journal
        osd journal size = 10000
        debug ms = 1
        debug osd = 20
        debug filestore = 20
        debug journal = 20

[osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda4

[osd.1]
        host = ceph-node-2
        btrfs devs = /dev/sda4

[osd.2]
        host = ceph-node-3
        btrfs devs = /dev/sda4
In brief, I have separate partitions for the journal and the OSD data:
/journal is the mount point for the journal, and
/data is the mount point for the OSD data.
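One common way to speed up a slow journal is to point "osd journal" at a raw
block device (ideally on a separate disk or SSD) instead of a file on a mounted
filesystem. A sketch only, with /dev/sdb1 as a hypothetical dedicated journal
partition per node:

```ini
[osd]
        osd data = /data/osd.$id
        keyring = /etc/ceph/keyring.$name
        ; Hypothetical: a dedicated raw partition per node for the journal.
        osd journal = /dev/sdb1
        ; With a block-device journal, size 0 uses the whole partition.
        osd journal size = 0
```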
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html