Re: [ceph-users] ceph write performance issue

2016-09-29 Thread min fang
I used 2 copies, not 3, so it should be 1000MB/s in theory. Thanks. 2016-09-29 17:54 GMT+08:00 Nick Fisk <n...@fisk.me.uk>: > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *min fang *Sent:* 29 September 2016 10:34 > *To:* ceph-users <ceph

[ceph-users] ceph write performance issue

2016-09-29 Thread min fang
Hi, I created a 40-OSD ceph cluster with 8 PM863 960G SSDs as journals; one SSD serves as the journal for 5 OSD drives. The SSD 512 random write performance is about 450MB/s, but the whole cluster's sequential write throughput is only 800MB/s. Any suggestions on improving sequential write performance?
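A quick back-of-envelope sketch may help frame the expectation. Using only the figures from this thread (8 journal SSDs at ~450 MB/s each, 2 copies per the follow-up), and assuming every replica write passes through a journal SSD first, the journal-limited ceiling works out as:

```shell
# Rough journal-limited write ceiling, using the numbers from this thread.
# Assumes every replica write lands on a journal SSD before the data drive;
# the replica count of 2 is taken from the follow-up message.
ssd_count=8
ssd_mbps=450
replicas=2
ceiling=$(( ssd_count * ssd_mbps / replicas ))
echo "${ceiling} MB/s aggregate journal ceiling"   # prints "1800 MB/s aggregate journal ceiling"
```

This is only one possible model, but the observed 800 MB/s sits well below it, so the journal SSDs alone do not obviously explain the gap; network, CPU, or per-OSD limits are also worth profiling.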

[ceph-users] ceph pg level IO sequence

2016-06-23 Thread min fang
Hi, as I understand it, at the PG level IOs are executed sequentially, as in the following cases: Case 1: Write A, Write B, Write C to the same data area in a PG --> A committed, then B committed, then C. The final data will be from write C. It should be impossible for mixed (A, B, C) data to end up in the data

Re: [ceph-users] stuck unclean since forever

2016-06-22 Thread min fang
chooseleaf firstn 0 type host step emit } # end crush map 2016-06-22 18:27 GMT+08:00 Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de>: > Hi, > > On 06/22/2016 12:10 PM, min fang wrote: > > Hi, I created a new ceph cluster, and create a pool, but see

[ceph-users] stuck unclean since forever

2016-06-22 Thread min fang
Hi, I created a new ceph cluster and created a pool, but I see "stuck unclean since forever" errors (as shown below). Can anyone help point out the possible reasons for this? Thanks. ceph -s cluster 602176c1-4937-45fc-a246-cc16f1066f65 health HEALTH_WARN 8 pgs degraded

[ceph-users] librbd compatibility

2016-06-20 Thread min fang
Hi, is there a document describing librbd compatibility? For example, something like: librbd from Ceph 0.88 can also be used with 0.90, 0.91, etc. I hope librbd stays relatively stable, so I can avoid extra code iteration and testing. Thanks.

[ceph-users] performance drop a lot when running fio mix read/write

2016-05-02 Thread min fang
Hi, I ran random fio with rwmixread=70 and found read IOPS is 707 and write is 303 (see below). These values are lower than the pure random read and write values: the 4K random write IOPS is 529 and the 4K random read IOPS is 11343. Apart from the rw type, all other parameters are the same. I do
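One thing worth checking first is that the read/write split itself matches the requested mix, which tells us fio is behaving as configured and the anomaly is in the total throughput, not the ratio. A quick sanity check using the numbers from the post:

```shell
# Verify the observed IOPS split against rwmixread=70 (numbers from the post).
read_iops=707
write_iops=303
total=$(( read_iops + write_iops ))
read_pct=$(( read_iops * 100 / total ))
echo "total=${total} IOPS, read share=${read_pct}%"   # prints "total=1010 IOPS, read share=70%"
```

So the interesting question is why the mixed total (~1010 IOPS) sits so far below the pure random-read figure (11343); in a mixed queue, every in-flight write can hold up reads queued behind it on the same OSDs.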

Re: [ceph-users] cache tier

2016-04-21 Thread min fang
tsgericht Hanau > Geschäftsführung: Oliver Dzombic > > Steuer Nr.: 35 236 3622 1 > UST ID: DE274086107 > > > On 21.04.2016 at 13:27, min fang wrote: > > Hi, my ceph cluster has two pools, ssd cache tier pool and SATA backend

[ceph-users] cache tier

2016-04-21 Thread min fang
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend pool. For this configuration, do I need to use an SSD as the journal device, or does the cache tier take over the journal role? Thanks

Re: [ceph-users] ceph rbd object write is atomic?

2016-04-06 Thread min fang
concurrently on the same image > you need a clustering filesystem on top of RBD (e.g. GFS2) or the > application needs to provide its own coordination to avoid concurrent > writes to the same image extents. > > -- > > Jason Dillaman > > > - Original Message -

[ceph-users] ceph rbd object write is atomic?

2016-04-05 Thread min fang
Hi, as I understand it, a ceph rbd image is divided into multiple objects based on the LBA address. My question here is: if two clients write to the same LBA address, say client A writes data X to LBA 0x123456 and client B writes data Y to the same LBA, will the LBA's data only be in an

[ceph-users] osd up_from, up_thru

2016-03-06 Thread min fang
Dear list, I used `ceph osd dump` to extract the osd map and found up_from and up_thru values; what is the difference between up_from and up_thru? osd.0 up in weight 1 up_from 673 up_thru 673 down_at 670 last_clean_interval [637,669) Thanks.

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread min fang
ptimize > sequential read is increasing /sys/class/block/rbd4/queue/read_ahead_kb > > Adrien > > > > On Tue, Mar 1, 2016 at 12:48 PM, min fang <louisfang2...@gmail.com> wrote: > >> I can use the following command to change parameter, for example as the >> following, but

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread min fang
6 at 9:36 PM, Shinobu Kinjo <ski...@redhat.com> wrote: > >> You may want to set "ioengine=rbd", I guess. >> >> Cheers, >> >> - Original Message - >> From: "min fang" <louisfang2...@gmail.com> >> To: "ceph-user

[ceph-users] rbd cache did not help improve performance

2016-02-29 Thread min fang
Hi, I set the following parameters in ceph.conf: [client] rbd cache = true rbd cache size = 25769803776 rbd readahead disable after bytes = 0 I then map an rbd image to an rbd device and run fio 4k read testing with the command ./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read -ioengine=aio
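One likely explanation, consistent with the suggestion later in the thread to use ioengine=rbd: with ioengine=aio against the mapped /dev/rbd4 device, the IO goes through the kernel rbd driver, which does not use the librbd `rbd cache` settings from the `[client]` section at all (incidentally, 25769803776 bytes is a 24 GiB cache, likely far larger than intended). A sketch of a fio job that drives librbd directly so the cache settings actually apply; the pool and image names below are placeholders:

```shell
# Sketch: write a fio job that exercises librbd via fio's rbd engine.
# Pool/image/client names are placeholders. Unlike ioengine=aio on
# /dev/rbd4 (the kernel rbd path), this path goes through librbd and
# honours the [client] rbd cache settings in ceph.conf.
cat > rbd-read.fio <<'EOF'
[seq-read]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
rw=read
bs=4k
iodepth=64
EOF
# fio rbd-read.fio   # run against the cluster (not executed here)
```

Note also that the librbd cache mainly helps writes and re-reads; for a pure sequential read of cold data, the read_ahead_kb tuning mentioned elsewhere in the thread is the more relevant knob.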

[ceph-users] ceph random read performance is better than sequential read?

2016-02-02 Thread min fang
Hi, I did fio testing on my ceph cluster and found that ceph random read performance is better than sequential read. Does that match your experience? Thanks.

[ceph-users] can rbd block_name_prefix be changed?

2016-01-08 Thread min fang
Hi, can rbd block_name_prefix be changed? Is it constant for an rbd image? Thanks. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Read IO to object while new data still in journal

2015-12-30 Thread min fang
> this data will be blocked. > > > Regards, > Zhi Zhang (David) > Contact: zhang.david2...@gmail.com > zhangz.da...@outlook.com > > > On Thu, Dec 31, 2015 at 10:33 AM, min fang <louisfang2...@gmail.com> > wrote: > > yes, the question here is, librbd

Re: [ceph-users] Read IO to object while new data still in journal

2015-12-30 Thread min fang
> > 2015-12-31 10:15 GMT+08:00 min fang <louisfang2...@gmail.com>: > > Hi, as my understanding, write IO will committed data to journal firstly, > > then give a safe callback to ceph client. So it is possible that data > still > > in journal when I send a read IO

[ceph-users] Read IO to object while new data still in journal

2015-12-30 Thread min fang
Hi, as I understand it, a write IO first commits data to the journal, then gives a safe callback to the ceph client. So it is possible that the data is still in the journal when I send a read IO to the same area. What data will be returned if the new data is still in the journal? Thanks.

[ceph-users] ubuntu 14.04 or centos 7

2015-12-28 Thread min fang
Hi, I am looking for an OS for my ceph cluster. From http://docs.ceph.com/docs/master/start/os-recommendations/#infernalis-9-1-0, two OSes have been fully tested: CentOS 7 and Ubuntu 14.04. Which one is better? Thanks.

[ceph-users] Configure Ceph client network

2015-12-24 Thread min fang
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I only want to use one port for ceph IO; the other port is reserved for other purposes. Does ceph currently support choosing which NIC port to use for IO? Thanks.
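For what it's worth, ceph itself has no per-port selection knob on the client: the client simply opens TCP connections to the monitor and OSD addresses, and the kernel routing table decides which port the traffic leaves on. So the usual approach is to give only the dedicated port an address on the cluster's public network. A sketch with placeholder interface names and subnets (printed rather than applied, since the real commands need root):

```shell
# Pin ceph client traffic to one NIC port purely by IP addressing/routing.
# Interface name and subnets are placeholders for this example; the
# commands are echoed here rather than executed.
CEPH_NET="10.0.0.0/24"      # assumed ceph public network
CEPH_PORT="enp3s0f0"        # assumed dedicated NIC port
echo "ip addr add 10.0.0.5/24 dev ${CEPH_PORT}"
echo "ip route add ${CEPH_NET} dev ${CEPH_PORT}"
```

With the mons and OSDs reachable only via that subnet, all ceph IO egresses the chosen port and the second port stays free for other traffic.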

[ceph-users] How to configure ceph client network

2015-12-20 Thread min fang
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I only want to use one NIC port for ceph IO. How can I achieve this? Thanks.

[ceph-users] rados_aio_cancel

2015-11-15 Thread min fang
Is this function used to detach the rx buffer and complete the IO back to the caller? From the code, I think this function does not interact with the OSD or MON side, which means we just cancel the IO on the client side. Am I right? Thanks.

[ceph-users] Ceph object mining

2015-11-13 Thread min fang
Hi, I set up a ceph cluster for storing pictures. I want to run a data mining program on the ceph osd nodes to dig out objects with particular properties. I hope some kind of map-reduce framework can use the ceph object interface directly, rather than a POSIX file system interface. Can somebody help give

Re: [ceph-users] Ceph object mining

2015-11-13 Thread min fang
n a RADOS interface to Apache Hadoop once, > maybe search for that? > Your other option is to try and make use of object classes directly, but > that's a bit primitive to build full map-reduce on top of without a lot of > effort. > -Greg > > > On Friday, November 13, 2015, min fan

[ceph-users] can not create rbd image

2015-11-12 Thread min fang
Hi cephers, I tried to use the following command to create an image, but unfortunately the command hung for a long time until I killed it with ctrl-z. rbd -p hello create img-003 --size 512 So I checked the cluster status, which showed: cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1 health

[ceph-users] Fwd: segmentation fault when using librbd interface

2015-10-31 Thread min fang
Hi, my code gets a segmentation fault when using librbd to do sync read IO. From the trace, I can say several read IOs complete successfully, but the last read IO (2015-10-31 08:56:34.804383) does not return and my code gets a segmentation fault. I used the rbd_read interface and malloc'd a buffer

Re: [ceph-users] segmentation fault when using librbd interface

2015-10-31 Thread min fang
This segmentation fault should happen in the rbd_read function: I can see the code call this function and then get the segmentation fault, which means rbd_read had not completed successfully when the segmentation fault happened. 2015-11-01 10:34 GMT+08:00 min fang <louisfang2...@gmail.com>: >

Re: [ceph-users] How ceph client abort IO

2015-10-20 Thread min fang
Jason Dillaman <dilla...@redhat.com>: > There is no such interface currently on the librados / OSD side to abort > IO operations. Can you provide some background on your use-case for > aborting in-flight IOs? > > -- > > Jason Dillaman > > > - Original

[ceph-users] How ceph client abort IO

2015-10-19 Thread min fang
Can the librbd interface provide an abort API for aborting IO? If yes, can the abort interface detach the write buffer immediately? I would like to reuse the write buffer quickly after issuing the abort request, rather than waiting for the IO to be aborted on the osd side. Thanks.