Re: [ceph-users] libvirt + rbd questions

2017-08-27 Thread Z Will
or qemu-rbd support - I did so on the latest Debian (stretch).

[ceph-users] libvirt + rbd questions

2017-08-25 Thread Z Will
Hi all: I have tried to install a VM using RBD as its disk, following the steps from the Ceph docs, but ran into some problems. The package environment is as follows: CentOS Linux release 7.2.1511 (Core), libvirt-2.0.0-10.el7_3.9.x86_64, libvirt-python-2.0.0-2.el7.x86_64
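
For reference, the preparation the Ceph docs describe before defining such a VM is to create a cephx user for libvirt/qemu and register its key as a libvirt secret. A minimal sketch, with the pool name, capability string and secret XML file name as assumptions to adapt:

# cephx user that qemu/libvirt will authenticate as ("libvirt-pool" is only an example)
ceph auth get-or-create client.libvirt mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'

# register the key as a libvirt secret; ceph-secret.xml is the small usage/name stub
# from the docs (the file name here is hypothetical)
UUID=$(virsh secret-define --file ceph-secret.xml | awk '{print $2}')
virsh secret-set-value --secret "$UUID" --base64 "$(ceph auth get-key client.libvirt)"

The domain XML then uses a network disk with protocol "rbd" and references that secret UUID.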

Re: [ceph-users] Speeding up garbage collection in RGW

2017-07-24 Thread Z Will
I think if you want to delete through GC, increase this: OPTION(rgw_gc_processor_max_time, OPT_INT, 3600) // total run time for a single gc processor work; and decrease this: OPTION(rgw_gc_processor_period, OPT_INT, 3600) // gc processor cycle time. Or, I think there may be some option to bypass the gc
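
As a rough sketch of that tuning (the option names are the ones quoted above; the values are only illustrations, set in the radosgw section of ceph.conf), plus the admin commands that inspect and drain the GC queue by hand:

# ceph.conf, radosgw section
#   rgw_gc_processor_period   = 600    # start a GC cycle every 10 minutes instead of every hour
#   rgw_gc_processor_max_time = 600    # let a single cycle keep working for the whole interval
#   rgw_gc_obj_min_wait       = 300    # optional: shorter grace period before deleted tails are eligible

radosgw-admin gc list --include-all   # how much is currently queued
radosgw-admin gc process              # run a GC pass right now
# newer releases also accept --bypass-gc on "radosgw-admin bucket rm" to delete without queueing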

Re: [ceph-users] Monitor as local VM on top of the server pool cluster?

2017-07-10 Thread Z Will
For a large cluster there will be a lot of change at any time, which means the pressure on the mons can be high at times, because every change goes through the leader. So for this, the local storage for the mon should be good enough; I think this may be a consideration.

Re: [ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-10 Thread Z Will
to change a little code; any suggestion for this? Am I missing any considerations? On Wed, Jul 5, 2017 at 6:26 PM, Joao Eduardo Luis <j...@suse.de> wrote: > On 07/05/2017 08:01 AM, Z Will wrote: >> Hi Joao: I think this is all because we choose the monitor

Re: [ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-08 Thread Z Will
from other mons when needed to increase performance; other logic is the same as before. What do you think of it? On Thu, Jul 6, 2017 at 10:31 PM, Sage Weil <s...@newdream.net> wrote: > On Thu, 6 Jul 2017, Z Will wrote: >> Hi Joao: Thanks for the thorough

Re: [ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-06 Thread Z Will
to the current leader; then it will decide whether to stand by for a while and try later, or start a leader election, based on the information obtained in the probing phase. Do you think this will be OK? On Wed, Jul 5, 2017 at 6:26 PM, Joao Eduardo Luis <j...@suse.de> wrote:

Re: [ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-05 Thread Z Will
a view num. In the election phase they send the view num and rank num; when receiving an election message, a mon compares the view num (higher becomes leader) and the rank num (lower becomes leader). On Tue, Jul 4, 2017 at 9:25 PM, Joao Eduardo Luis <j...@suse.de> wrote:

Re: [ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-04 Thread Z Will
man? Is there any way to handle it automatically by design, as far as you know? On Tue, Jul 4, 2017 at 2:25 PM, Alvaro Soto <alsot...@gmail.com> wrote: > Z, You are forcing a byzantine failure; the Paxos implemented to form the consensus ring of the mon daemons does not support this

[ceph-users] ceph-mon leader election problem, should it be improved ?

2017-07-03 Thread Z Will
Hi: I am testing ceph-mon split-brain. I have read the code, and if I understand it right, it won't split-brain. But I think there is still another problem. My Ceph version is 0.94.10, and here is my test detail: 3 ceph-mons, whose ranks are 0, 1, 2 respectively. I stop the rank 1
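
For anyone wanting to repeat this kind of test, the sequence is roughly the following; the mon IDs and the systemd unit name are assumptions about the deployment (a 0.94 install may still be on sysvinit):

ceph mon stat                              # shows the ranks, e.g. mon.a=0, mon.b=1, mon.c=2
sudo systemctl stop ceph-mon@b             # stop the rank-1 monitor
ceph quorum_status --format json-pretty    # quorum_leader_name shows who won the re-election
sudo systemctl start ceph-mon@b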

Re: [ceph-users] Object storage performance tools

2017-06-16 Thread Z Will
opinion it is the only true S3 performance testing tool. -- Piotr Nowosielski, Senior Systems Engineer, Grupa Allegro sp. z o.o.

Re: [ceph-users] Help build a drive reliability service!

2017-06-15 Thread Z Will
Hi Patrick: I want to ask a very small question. How many 9s do you claim for your storage durability, and how is it calculated? Based on the data you provided, have you found a failure model to refine the durability estimate? On Thu, Jun 15, 2017 at 12:09 AM, David Turner

[ceph-users] ceph durability calculation and test method

2017-06-13 Thread Z Will
Hi all: I have some questions about the durability of Ceph. I am trying to measure it. I know it should be related to the host and disk failure probability, the failure detection time, when recovery is triggered, and the recovery time. I use it with multiple replication, say
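
A very crude, independent-failure sketch of that calculation, ignoring correlated failures, CRUSH placement detail and detection time, and only meant to show which inputs matter: with N OSDs, an annual failure rate AFR per disk and a recovery window of R hours, the yearly chance of a triple failure hitting the same data is roughly N*AFR times the probability that two of the remaining disks die inside the window.

# all numbers below are illustrative assumptions, not measurements
N=100; AFR=0.03; R=4
awk -v n=$N -v afr=$AFR -v r=$R 'BEGIN {
    p_win  = afr * r / 8760;               # P(a given disk fails inside the recovery window)
    pairs  = (n - 1) * (n - 2) / 2;        # upper bound on pairs that could hold the same PG
    p_loss = n * afr * pairs * p_win ^ 2;  # expected data-loss events per year, 3 replicas
    printf "P(loss)/year ~= %.2e  (~%.1f nines)\n", p_loss, -log(p_loss) / log(10);
}'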

[ceph-users] should I use rocksdb ?

2017-06-02 Thread Z Will
Hello gurus: My name is Will. I have just started studying Ceph and have a lot of interest in it. We are using Ceph 0.94.10, and I am trying to tune its performance to satisfy our requirements. We are using it as an object store now. I have tried some different configurations, but I

[ceph-users] radosgw 100-continue problem

2017-02-13 Thread Z Will
Hi: I used nginx + fastcgi + radosgw, with radosgw configured with "rgw print continue = true". In RFC 2616 it says: an origin server that sends a 100 (Continue) response MUST ultimately send a final status code, once the request body is received and processed, unless it terminates the
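
For what it is worth, "rgw print continue" is only safe with a frontend that really relays the interim 100 response; with plain fastcgi frontends the usual advice has been to turn it off. A sketch (the section name depends on how the gateway instance is named):

# ceph.conf, [client.radosgw.<instance>] section
#   rgw print continue = false   # radosgw skips the 100 (Continue) response the frontend
#                                # cannot forward, and only sends the final status code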

[ceph-users] radosgw fastcgi problem

2016-12-14 Thread Z Will
Hi: Very sorry for the last email, it was an accident. Recently I tried to configure radosgw (0.94.7) with nginx as the frontend and benchmark it with cosbench, and I found a strange thing. My OS-related configuration is: net.core.netdev_max_backlog = 1000, net.core.somaxconn = 1024. In rgw_main.cc,
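
The two sysctls quoted above are applied like this (the values are simply the ones from the mail; raising somaxconn only helps if the listening daemon also requests a larger backlog):

sudo sysctl -w net.core.netdev_max_backlog=1000
sudo sysctl -w net.core.somaxconn=1024
# persist across reboots
echo 'net.core.somaxconn = 1024' | sudo tee -a /etc/sysctl.d/90-rgw.conf
sudo sysctl --system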

[ceph-users] radosgw fastcgi problem

2016-12-14 Thread Z Will
Hi: Recently I tried to configure radosgw with nginx as the frontend and benchmark it with cosbench, and I found a strange thing. My OS-related configuration is: net.core.netdev_max_backlog = 1000

[ceph-users] any nginx + rgw best practice ?

2016-11-26 Thread Z Will
Hi: I have tried to use nginx + fastcgi + radosgw, and benchmarked it with cosbench. I tuned the nginx and CentOS configuration but couldn't get the desired performance. I ran into the following problem: 1. I tried to tune the CentOS net.ipv4... parameters, and I got a lot of CLOSE_WAIT
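
A minimal nginx front end for radosgw over fastcgi looks roughly like the block below; the socket path and server name are assumptions, and this is a sketch rather than a tuned config:

sudo tee /etc/nginx/conf.d/rgw.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name rgw.example.com;        # adjust to your endpoint
    client_max_body_size 0;             # do not block large S3 uploads

    location / {
        fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;  # assumed socket path
        include fastcgi_params;
        fastcgi_pass_header Authorization;             # the S3 auth header must reach radosgw
        fastcgi_param CONTENT_LENGTH $content_length;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx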

[ceph-users] Ceph-fuse single read limitation?

2015-11-20 Thread Z Zhang
Hi Guys, We now have a very small cluster with 3 OSDs but using a 40Gb NIC. We use ceph-fuse as the CephFS client and enable readahead, but a single sequential read of a large file from CephFS via fio, dd or cp can only achieve ~70+ MB/s, even if the fio or dd block size is set to 1MB or 4MB. From the
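
The client-side knobs involved here are the client_readahead_* options read by ceph-fuse, and a single-stream sequential read is easy to reproduce with fio. A sketch, with example values rather than recommendations:

# ceph.conf, [client] section
#   client_readahead_min         = 1048576    # read ahead at least 1 MB at a time
#   client_readahead_max_bytes   = 67108864   # cap a single readahead at 64 MB
#   client_readahead_max_periods = 16         # or cap it at N file-layout periods

# single-threaded sequential read against the fuse mount
fio --name=seqread --rw=read --bs=4M --size=10G --numjobs=1 \
    --ioengine=sync --directory=/mnt/cephfs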

[ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
Hi Guys, I am trying the latest ceph-9.1.0 with rocksdb 4.1, and ceph-9.0.3 with rocksdb 3.11, as the OSD backend. I use rbd to test performance, and the following is my cluster info: [ceph@xxx ~]$ ceph -s: cluster b74f3944-d77f-4401-a531-fa5282995808, health HEALTH_OK, monmap e1: 1 mons at
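
For context, the experimental KeyValueStore backend in those 9.x builds was switched on with roughly the options below. The names are from memory of that era and should be checked against what the build actually reports; the backend was explicitly marked experimental:

# ceph.conf, [osd] section (assumed option names, verify against your build)
#   enable experimental unrecoverable data corrupting features = keyvaluestore
#   osd objectstore       = keyvaluestore
#   keyvaluestore backend = rocksdb

# confirm what a running OSD actually uses
ceph daemon osd.0 config show | grep -E 'objectstore|keyvaluestore'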

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
don't provide this option now. On Tue, Oct 20, 2015 at 9:22 PM, Z Zhang <zhangz.da...@outlook.com> wrote: > Thanks, Sage, for pointing out the PR and ceph branch. I will take a closer look. Yes, I am trying the KVStore backend. The reason we are trying it is that

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang

[ceph-users] FW: Long tail latency due to journal aio io_submit takes long time to return

2015-08-25 Thread Z Zhang
FW to ceph-users. Thanks. Zhi Zhang (David). From: zhangz.da...@outlook.com To: ceph-de...@vger.kernel.org Subject: Long tail latency due to journal aio io_submit takes long time to return Date: Tue, 25 Aug 2015 18:46:34 +0800. Hi Ceph-devel,

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-08-06 Thread Z Zhang

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-29 Thread Z Zhang
We also hit a similar issue from time to time on CentOS with a 3.10.x kernel. In iostat we can see the kernel rbd client's util is 100% but no r/w IO, and we can't umount/unmap this rbd device. After restarting the OSDs it becomes normal again. @Ilya, could you please point us to the possible fixes on
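
When a mapped rbd device wedges like this, the requests that never complete are visible under debugfs, which usually shows which OSD they are stuck on. A sketch of the usual checks:

rbd showmapped                            # which /dev/rbdX maps to which image
cat /sys/kernel/debug/ceph/*/osdc         # in-flight object requests and their target OSDs
ceph -s                                   # is the cluster itself healthy, any stuck PGs?
dmesg | grep -iE 'rbd|libceph' | tail     # socket errors, osd session resets, etc.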

[ceph-users] Timeout mechanism in ceph client tick

2015-07-02 Thread Z Zhang
Hi Guys, Reading through the Ceph client code, there is a timeout mechanism in tick() when doing the mount. Recently we saw some client requests to the MDS take a long time to get a reply while running massive tests against CephFS. If we want the CephFS user to see a timeout instead of waiting for the reply, can we
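
The only tunable on that path I am aware of is the mount timeout itself; it does not bound individual MDS requests, so treat this as covering the mount step only (the value is an example):

# ceph.conf, [client] section
#   client_mount_timeout = 30   # seconds ceph-fuse waits for the mount before giving up
#                               # (default 300)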

Re: [ceph-users] krbd splitting large IO's into smaller IO's

2015-06-29 Thread Z Zhang

[ceph-users] krbd splitting large IO's into smaller IO's

2015-06-26 Thread Z Zhang
Hi Ilya, I saw your recent email talking about krbd splitting large IO's into smaller IO's, see the link below: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg20587.html I just tried it on my Ceph cluster using kernel 3.10.0-1. I adjusted both max_sectors_kb and max_hw_sectors_kb of
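
On the krbd side the request size actually issued is bounded by the rbd block device's queue limits; max_hw_sectors_kb is read-only, so only max_sectors_kb can be raised, and not beyond the hw value. For example:

cat /sys/block/rbd0/queue/max_hw_sectors_kb                  # ceiling advertised by the driver (read-only)
cat /sys/block/rbd0/queue/max_sectors_kb                     # current cap on a single request
echo 4096 | sudo tee /sys/block/rbd0/queue/max_sectors_kb    # raise the cap (must not exceed the hw value)
iostat -x 1 /dev/rbd0                                        # avgrq-sz shows whether larger IOs reach the device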

[ceph-users] CephFS client issue

2015-06-14 Thread David Z

Re: [ceph-users] rbd format v2 support

2015-06-08 Thread David Z

[ceph-users] rbd format v2 support

2015-06-04 Thread David Z
Hi Ceph folks, We want to use rbd format v2, but find it is not supported on kernel 3.10.0 of CentOS 7: [ceph@ ~]$ sudo rbd map zhi_rbd_test_1 -> rbd: sysfs write failed, rbd: map failed: (22) Invalid argument; [ceph@ ~]$ dmesg | tail -> [662453.664746] rbd: image zhi_rbd_test_1:
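
The quickest way to see why the kernel refuses the map is to compare the image's format/feature bits with what the client supports; whether extra feature bits can be dropped afterwards depends on the rbd CLI version, so the fallback below sticks to what old kernels can always handle (the new image name is just an example):

rbd info zhi_rbd_test_1     # "format: 2" plus the "features:" line show what the image requires
dmesg | tail                # krbd logs which format or feature bits it cannot handle
# fallback for stock 3.10 kernels: a format 1 image (loses cloning and other v2-only features)
rbd create --size 10240 --image-format 1 zhi_rbd_test_1_v1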

[ceph-users] The strategy of auto-restarting crashed OSD

2014-11-12 Thread David Z
Hi Guys, We have been experiencing some OSD crashing issues recently, like messenger crashes and some strange crashes (still being investigated), etc. Those crashes do not seem to reproduce after restarting the OSD. So we are thinking about a strategy of auto-restarting a crashed OSD 1 or 2 times, then
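
Where the OSDs are managed by systemd, this kind of bounded auto-restart can be expressed as a unit drop-in; the limits below are only an illustration, and clusters of that vintage running sysvinit/upstart would need their own respawn handling:

sudo mkdir -p /etc/systemd/system/ceph-osd@.service.d
sudo tee /etc/systemd/system/ceph-osd@.service.d/restart.conf >/dev/null <<'EOF'
[Service]
Restart=on-failure
RestartSec=20
StartLimitInterval=1800
StartLimitBurst=3
EOF
sudo systemctl daemon-reload
# after 3 failures within 30 minutes systemd stops restarting and leaves the OSD down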

Re: [ceph-users] CephFS - limit space available to individual users.

2013-04-05 Thread Vanja Z
Thanks Wido, I have to admit it's slightly disappointing (but completely understandable), since it basically means it's not safe for us to use CephFS :( Without user quotas, it would be sufficient to have multiple CephFS filesystems and to be able to set the size of each one. Is it part of the

[ceph-users] CephFS - limit space available to individual users.

2013-04-04 Thread Vanja Z
I have been testing CephFS on our computational cluster of about 30 computers. I've got 4 machines, 4 disks, 4 osd, 4 mon and 1 mds at the moment for testing. The testing has been going very well apart from one problem that needs to be resolved before we can use Ceph in place of our existing
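
For readers finding this thread later: directory quotas did eventually land in CephFS (enforced by ceph-fuse and newer clients, not by old kernel clients) and are set as extended attributes on a directory. A sketch, with the path purely as an example:

setfattr -n ceph.quota.max_bytes -v 1099511627776 /mnt/cephfs/users/alice   # 1 TB cap
setfattr -n ceph.quota.max_files -v 1000000       /mnt/cephfs/users/alice   # 1M files cap
getfattr -n ceph.quota.max_bytes /mnt/cephfs/users/alice                    # read it back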