I used 2 copies, not 3, so it should be 1000MB/s in theory. Thanks.
2016-09-29 17:54 GMT+08:00 Nick Fisk <n...@fisk.me.uk>:
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *min fang
> *Sent:* 29 September 2016 10:34
> *To:* ceph-users <ceph
Hi, I created a 40-OSD ceph cluster with 8 PM863 960G SSDs as journals. One
SSD is shared by 5 OSD drives as their journal device. The SSD 512 random-write
performance is about 450MB/s, but the whole cluster's sequential write
throughput is only 800MB/s. Any suggestions for improving sequential write
performance?
Hi, as I understand it, at the PG level IOs are executed sequentially, as in
the following cases:
Case 1:
Write A, Write B, Write C to the same data area in a PG --> A committed,
then B committed, then C. The final data will be from write C. It is
impossible for mixed (A, B, C) data to end up in the data area.
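The last-write-wins behaviour described in Case 1 can be sketched with a toy model (illustrative Python only, not ceph internals):

```python
# Toy model: a PG commits writes to the same extent strictly in order,
# so the extent holds exactly the last committed write -- never a
# byte-wise mix of A, B and C.
def apply_in_order(extent_size, writes):
    """writes: list of (name, data) in commit order; returns the final extent."""
    extent = bytearray(extent_size)
    for _name, data in writes:
        extent[: len(data)] = data  # each commit fully replaces the extent
    return bytes(extent)

final = apply_in_order(4, [("A", b"AAAA"), ("B", b"BBBB"), ("C", b"CCCC")])
# final == b"CCCC": the result is always the last committed write
```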
chooseleaf firstn 0 type host
step emit
}
# end crush map
2016-06-22 18:27 GMT+08:00 Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de>:
> Hi,
>
> On 06/22/2016 12:10 PM, min fang wrote:
>
> Hi, I created a new ceph cluster, and created a pool, but see "stuck unclean"
Hi, I created a new ceph cluster and created a pool, but I see "stuck unclean
since forever" errors (as shown below). Can anyone point out the possible
reasons for this? Thanks.
ceph -s
cluster 602176c1-4937-45fc-a246-cc16f1066f65
health HEALTH_WARN
8 pgs degraded
Hi, is there a document describing librbd compatibility? For example,
something like this: librbd from Ceph 0.88 can also be used with
0.90, 0.91...
I hope librbd is kept relatively stable, so we can avoid extra code iteration
and testing.
Thanks.
___
Hi, I ran a random fio test with rwmixread=70, and found read IOPS is 707 and
write IOPS is 303 (see below). These values are lower than the pure random
write and random read results: the 4K random write IOPS is 529 and the 4K
randread IOPS is 11343. Apart from the rw type, all other parameters are the
same.
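As a rough sanity check (a back-of-envelope model, not from this thread): if a cluster does R pure-read IOPS and W pure-write IOPS, a 70/30 mix served from one queue is often estimated with a weighted harmonic mean, which already predicts far fewer IOPS than the pure-read number:

```python
def mixed_iops_estimate(read_iops, write_iops, read_fraction):
    # Average time per IO is the mix of per-read and per-write service
    # times, so the blended rate is a weighted harmonic mean of the two
    # pure rates.
    write_fraction = 1.0 - read_fraction
    return 1.0 / (read_fraction / read_iops + write_fraction / write_iops)

# Using the pure-run numbers from this message: 11343 randread, 529 randwrite.
est = mixed_iops_estimate(11343, 529, 0.70)  # roughly 1590 total IOPS
```

The measured total of 707 + 303 = 1010 IOPS is lower still, which is plausible: slow writes also delay queued reads, and this model only bounds the mix from above.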
I do
> On 21.04.2016 at 13:27, min fang wrote:
> > Hi, my ceph cluster has two pools, ssd cache tier pool and SATA backend
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend
pool. For this configuration, do I still need to use an SSD as the journal
device, or does the cache tier take over the journal role? Thanks
___
ceph-users mailing list
concurrently on the same image
> you need a clustering filesystem on top of RBD (e.g. GFS2) or the
> application needs to provide its own coordination to avoid concurrent
> writes to the same image extents.
>
> --
>
> Jason Dillaman
>
>
> - Original Message -
Hi, as I understand it, a ceph rbd image is divided into multiple
objects based on the LBA address.
My question here is:
if two clients write to the same LBA address, such as client A writing ""
to LBA 0x123456 and client B writing "" to the same LBA,
LBA address and data will only be in an
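For reference, the LBA-to-object mapping described here can be sketched as follows (assuming the default order-22, i.e. 4 MiB, objects; the prefix value is made up for illustration):

```python
def rbd_object_for_offset(block_name_prefix, offset, order=22):
    """Map an image byte offset to the name of its backing RADOS object.
    RBD data objects are named <block_name_prefix>.<16-hex-digit object
    number>, where the object number is offset >> order (order 22 = 4 MiB
    objects)."""
    return "%s.%016x" % (block_name_prefix, offset >> order)

# Both clients writing LBA 0x123456 hit the same object (number 0 here),
# so the single OSD serving that object serializes their writes.
name = rbd_object_for_offset("rbd_data.abc123", 0x123456)
# name == "rbd_data.abc123.0000000000000000"
```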
Dear all, I used osd dump to extract the osd map, and found up_from and
up_thru in the listing. What is the difference between up_from and up_thru?
osd.0 up in weight 1 up_from 673 up_thru 673 down_at 670
last_clean_interval [637,669)
Thanks.
___
> To optimize
> sequential read, try increasing /sys/class/block/rbd4/queue/read_ahead_kb
>
> Adrien
>
>
>
> On Tue, Mar 1, 2016 at 12:48 PM, min fang <louisfang2...@gmail.com> wrote:
>
>> I can use the following command to change parameter, for example as the
>> following, but
6 at 9:36 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>
>> You may want to set "ioengine=rbd", I guess.
>>
>> Cheers,
>>
>> - Original Message -
>> From: "min fang" <louisfang2...@gmail.com>
>> To: "ceph-user
Hi, I set the following parameters in ceph.conf:
[client]
rbd cache = true
rbd cache size = 25769803776
rbd readahead disable after bytes = 0
I then map an rbd image to an rbd device and run 4k read fio testing with the
command
./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read
-ioengine=aio
Hi, I did fio testing on my ceph cluster, and found that ceph random read
performance is better than sequential read. Does that match what you see?
Thanks.
___
Hi, can the rbd block_name_prefix be changed? Is it constant for an rbd
image? Thanks.
___
> this data will be blocked.
>
>
> Regards,
> Zhi Zhang (David)
> Contact: zhang.david2...@gmail.com
> zhangz.da...@outlook.com
>
>
> On Thu, Dec 31, 2015 at 10:33 AM, min fang <louisfang2...@gmail.com>
> wrote:
> > yes, the question here is, librbd
>
> 2015-12-31 10:15 GMT+08:00 min fang <louisfang2...@gmail.com>:
> > Hi, as my understanding, write IO will committed data to journal firstly,
> > then give a safe callback to ceph client. So it is possible that data
> still
> > in journal when I send a read IO
Hi, as I understand it, a write IO commits data to the journal first, and
then the safe callback is given to the ceph client. So it is possible that the
data is still in the journal when I send a read IO to the same area. What data
will be returned if the new data is still in the journal?
Thanks.
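The behaviour the quoted reply hints at ("this data will be blocked") can be modeled with a toy sketch (illustrative Python, not ceph internals): a read that overlaps committed-but-unapplied journal data waits for the apply, so it returns the new data, never the stale pre-write data.

```python
class ToyStore:
    """Toy model: writes commit to a journal first (the safe callback
    fires), then are applied to the backing store. A read that overlaps
    journaled data blocks until the apply completes, so it observes the
    new data."""
    def __init__(self):
        self.applied = {}   # offset -> data readable from the store
        self.journal = []   # committed but not yet applied (offset, data)

    def write(self, offset, data):
        self.journal.append((offset, data))  # commit; safe callback here

    def read(self, offset):
        self._apply_journal()  # reads of journaled extents wait for apply
        return self.applied.get(offset)

    def _apply_journal(self):
        for offset, data in self.journal:
            self.applied[offset] = data
        self.journal.clear()

store = ToyStore()
store.write(0x1000, b"new data")  # safe callback: data is in the journal
result = store.read(0x1000)       # waits for apply; returns the new data
```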
Hi, I am looking for an OS for my ceph cluster. From
http://docs.ceph.com/docs/master/start/os-recommendations/#infernalis-9-1-0,
two OSes have been fully tested, CentOS 7 and Ubuntu 14.04. Which
one is better? Thanks.
___
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I want to
use only one NIC port for ceph IO. The other port on the NIC is reserved
for other purposes.
Does ceph currently support choosing which NIC port to use for IO?
Thanks.
___
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I want to
use only one NIC port for ceph IO. How can I achieve this?
Thanks.
___
Is this function used to detach the rx buffer and complete the IO back to the
caller? From the code, I think this function does not interact with the OSD or
MON side, which means we only cancel the IO on the client side. Am I right?
Thanks.
___
Hi, I set up a ceph cluster for storing pictures. I want to introduce a data
mining program on the ceph osd nodes to dig out objects with certain
properties. I hope some kind of map-reduce framework can use the ceph object
interface directly, rather than the posix file system interface.
Can somebody help give
n a RADOS interface to Apache Hadoop once,
> maybe search for that?
> Your other option is to try and make use of object classes directly, but
> that's a bit primitive to build full map-reduce on top of without a lot of
> effort.
> -Greg
>
>
> On Friday, November 13, 2015, min fan
Hi cephers, I tried to use the following command to create an image, but
unfortunately the command hung for a long time until I interrupted it with
ctrl-z.
rbd -p hello create img-003 --size 512
so I checked the cluster status, and showed:
cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1
health
Hi, my code gets a segmentation fault when using librbd to do sync read IO.
From the trace, I can see that several read IOs completed successfully, but
the last read IO (2015-10-31 08:56:34.804383) was not returned and my code
got a segmentation fault. I used the rbd_read interface and malloc'd a
buffer. The segmentation fault should happen in the rbd_read function: I can
see the code call this function and then get the segmentation fault, which
means rbd_read had not completed when the segmentation fault happened.
2015-11-01 10:34 GMT+08:00 min fang <louisfang2...@gmail.com>:
>
Jason Dillaman <dilla...@redhat.com>:
> There is no such interface currently on the librados / OSD side to abort
> IO operations. Can you provide some background on your use-case for
> aborting in-flight IOs?
>
> --
>
> Jason Dillaman
>
>
> - Original
Can the librbd interface provide an abort API for aborting IO? If yes, can
the abort interface detach the write buffer immediately? I hope to reuse the
write buffer quickly after issuing the abort request, rather than waiting for
the IO to be aborted on the osd side.
Thanks.
___