[ceph-users] Fixing mark_unfound_lost revert failure

2014-08-31 Thread Loic Dachary
Hi Ceph, In a mixed dumpling/emperor cluster, osd 2 has been removed but is still listed in might_have_unfound: [ { "osd": 2, "status": "osd is down" }, { "osd": 6, "status": "already probed" } ], and because of that
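
For anyone hitting the same state, a minimal sketch of the usual recovery sequence (the PG id 2.5 is a placeholder, the OSD id is taken from the quote above; verify against your own "ceph pg query" output before running anything destructive):

    # Confirm which OSDs the PG is still waiting to probe for the unfound objects
    ceph pg 2.5 query | grep -A10 might_have_unfound

    # Declare the removed OSD permanently lost so recovery stops waiting on it
    ceph osd lost 2 --yes-i-really-mean-it

    # Give up on the unfound objects, reverting to the last version held by surviving OSDs
    ceph pg 2.5 mark_unfound_lost revert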

Re: [ceph-users] question about monitor and paxos relationship

2014-08-31 Thread Scott Laird
If you want your data to be N+2 redundant (able to handle 2 failures, more or less), then you need to set size=3 and have 3 replicas of your data. If you want your monitors to be N+2 redundant, then you need 5 monitors. If you feel that your data is worth size=3, then you should really try to
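
A worked sketch of the quorum arithmetic behind that advice (the monitor cluster stays usable only while a strict majority of monitors is alive):

    \text{quorum}(n) = \left\lfloor n/2 \right\rfloor + 1, \qquad \text{tolerated failures} = n - \text{quorum}(n)
    n = 3:\ \text{quorum} = 2,\ \text{tolerates } 1 \text{ failure}; \qquad n = 5:\ \text{quorum} = 3,\ \text{tolerates } 2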

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Mark Kirkwood
On 31/08/14 17:55, Mark Kirkwood wrote: On 29/08/14 22:17, Sebastien Han wrote: @Mark thanks for trying this :) Unfortunately, using nobarrier and another dedicated SSD for the journal (plus your ceph setting) didn’t bring much; now I can reach 3,5K IOPS. By any chance, would it be possible for
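
For context, the two tweaks mentioned are roughly the following (a sketch, assuming an XFS data disk and a journal partition on a separate SSD; the mount point, device paths, and osd id are placeholders):

    # Remount the OSD data filesystem without write barriers
    # (unsafe on power loss unless the hardware has a non-volatile cache)
    mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0

    # ceph.conf: put the OSD journal on a partition of the dedicated SSD
    [osd.0]
        osd journal = /dev/sdb1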

Re: [ceph-users] Difference between object rm and object unlink ?

2014-08-31 Thread Jason King
As the names suggest, the former removes the object from the store while the latter deletes the bucket index entry only. Check the code for more details. Jason 2014-08-29 19:09 GMT+08:00 zhu qiang zhu_qiang...@foxmail.com: Hi all, From the radosgw-admin command: # radosgw-admin object rm
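
A hedged illustration of the difference (bucket and object names are made up):

    # Removes the object data from the store (and drops it from the bucket index)
    radosgw-admin object rm --bucket=mybucket --object=myobject

    # Removes only the bucket-index entry; the underlying RADOS objects remain
    radosgw-admin object unlink --bucket=mybucket --object=myobject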

Re: [ceph-users] About IOPS num

2014-08-31 Thread Jason King
Guess you should multiply 27 by bs=4k? Jason 2014-08-29 15:52 GMT+08:00 lixue...@chinacloud.com.cn lixue...@chinacloud.com.cn: guys: There's a ceph cluster running, and its nodes are connected with 10Gb cable. We set fio's bs=4k, and the object size of the rbd is 4MB. The client node was
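
For reference, a 4k random-write job of the kind presumably being run (a sketch using fio's rbd engine; the pool, image, and client names are placeholders):

    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --runtime=60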

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Jian Zhang
Somnath, on the small workload performance, 2014-08-29 14:37 GMT+08:00 Somnath Roy somnath@sandisk.com: Thanks Haomai! Here is some of the data from my setup.

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Jian Zhang
Somnath, on the small workload performance, 107k is higher than the theoretical IOPS of 520, any idea why? A single client is ~14K IOPS, but it scales as the number of clients increases: 10 clients reach *~107K* IOPS, using ~25 CPU cores. 2014-09-01 11:52 GMT+08:00 Jian Zhang amberzhan...@gmail.com:

[ceph-users] Librbd log and configuration

2014-08-31 Thread Ding Dinghua
Hi all: Apologies if this question has been asked before. I noticed that since librbd doesn't have a daemon context, there seems to be no way to retrieve librbd logs or tune librbd configuration; but since librbd is an important part of the virtual machine IO stack, it would be helpful if we could get its logs.
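
In practice librbd reads the same ceph.conf as any other Ceph client, so logging and tuning can be done there (a sketch; the [client] section and admin socket are standard client options, but check that the qemu user can write to the paths, and the pid 12345 below is a placeholder):

    [client]
        log file = /var/log/ceph/client.$name.$pid.log
        debug rbd = 20
        admin socket = /var/run/ceph/$name.$pid.asok

    # Then inspect or change options on the live client through the socket:
    ceph --admin-daemon /var/run/ceph/client.admin.12345.asok config show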

Re: [ceph-users] About IOPS num

2014-08-31 Thread Mark Kirkwood
Yes, as Jason suggests: 27 IOPS doing 4k blocks is 27*4/1024 MB/s ≈ 0.1 MB/s. While the RBD volume is composed of 4MB objects, many of the (presumably random) 4k IOs can reside in the same 4MB object, so it is tricky to estimate how many 4MB objects need to be rewritten

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Mark Kirkwood
On 01/09/14 12:36, Mark Kirkwood wrote: Allegedly this model of SSD (128G M550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so that is reasonably believable). So anyway, we are not getting anywhere near the max IOPS from our devices. We use the Intel S3700 for
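
For comparison, a raw filesystem check of the sort presumably used to get that 70K figure (a sketch; the target path is a placeholder):

    fio --name=ssd-4k-randwrite --filename=/mnt/ssd/testfile --size=4G \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=32 --runtime=60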

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Alexandre DERUMIER
Allegedly this model of SSD (128G M550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so that is reasonably believable). So anyway, we are not getting anywhere near the max IOPS from our devices. Hi, Just check this:

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Stefan Priebe - Profihost AG
Yes, Crucial is not suitable for this. If you write sequential data like the journal for around 1-2 hours, the speed goes down to 80 MB/s. It also has very low performance in sync/flush mode, which the journal uses. Stefan Excuse my typo sent from my mobile phone. Am 01.09.2014 um 07:10
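
The sync/flush behaviour Stefan describes can be measured directly (a sketch of the widely-circulated journal-suitability test; /dev/sdX is a placeholder and the run destroys data on that device):

    # Journal-style workload: sequential 4k writes, each followed by a sync
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --runtime=60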

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

2014-08-31 Thread Mark Kirkwood
On 01/09/14 17:10, Alexandre DERUMIER wrote: Allegedly this model of SSD (128G M550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so that is reasonably believable). So anyway, we are not getting anywhere near the max IOPS from our devices. Hi, Just check this: