Re: [ceph-users] How's cephfs going?

2017-07-21 Thread Ilya Dryomov
On Fri, Jul 21, 2017 at 4:06 PM, Дмитрий Глушенок wrote: > All three mons have the value "simple". OK, so http://tracker.ceph.com/issues/17664 is unrelated. Open a new kernel client ticket with all the ceph-fuse vs kernel client info and as many log excerpts as possible. If you've
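For anyone checking the same thing on their own cluster: the messenger type in use can be read from a monitor's admin socket, and the kernel client exposes its in-flight MDS/OSD requests via debugfs. A minimal sketch, assuming a monitor id of "mon01" (not taken from the thread) and default socket paths:

    # running value of the messenger type on one monitor
    ceph daemon mon.mon01 config get ms_type

    # kernel client state worth attaching to a tracker ticket (needs debugfs mounted)
    cat /sys/kernel/debug/ceph/*/mdsc
    cat /sys/kernel/debug/ceph/*/osdc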

Re: [ceph-users] How's cephfs going?

2017-07-21 Thread Ilya Dryomov
On Thu, Jul 20, 2017 at 6:35 PM, Дмитрий Глушенок wrote: > Hi Ilya, > > While trying to reproduce the issue I've found that: > - it is relatively easy to reproduce 5-6 minute hangs just by killing > the active MDS process (triggering failover) while writing a lot of data. >

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
Hi Ilya, While trying to reproduce the issue I've found that: - it is relatively easy to reproduce 5-6 minute hangs just by killing the active MDS process (triggering failover) while writing a lot of data. An unacceptable timeout, but not the case of http://tracker.ceph.com/issues/15255 - it is hard
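A minimal sketch of that kind of reproduction (mount point, MDS daemon name and sizes are placeholders, not taken from the post):

    # sustained streaming write from a CephFS client
    dd if=/dev/zero of=/mnt/cephfs/bigfile bs=4M count=25000 oflag=direct &

    # identify the currently active MDS, then stop it on its host to trigger failover
    ceph mds stat
    sudo systemctl stop ceph-mds@mds01

    # watch cluster events and note how long the dd above stalls
    ceph -w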

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Ilya Dryomov
On Thu, Jul 20, 2017 at 3:23 PM, Дмитрий Глушенок wrote: > Looks like I have a similar issue to the one described in this bug: > http://tracker.ceph.com/issues/15255 > The writer (dd in my case) can be restarted and then writing continues, but > until the restart dd looks hung on write. >

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
ht? >>> >>> Thanks again :-) >>> >>> From: Дмитрий Глушенок [mailto:gl...@jet.msk.su] >>> Sent: 19 July 2017 17:33 >>> To: 许雪寒 >>> Cc: ceph-users@lists.ceph.com

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Anish Gupta
Hello, Can anyone share their experience with the built-in FSCache support with or without CephFS? Interested in knowing the following: - Are you using FSCache in a production environment? - How large is your Ceph deployment? - If with CephFS, how many Ceph clients are using FSCache? - Which version
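For context, FSCache with the CephFS kernel client is enabled by running cachefilesd on the client and mounting with the fsc option; a minimal sketch (monitor address and secret file are placeholders):

    # kernel needs CONFIG_CEPH_FSCACHE; cachefilesd provides the local on-disk cache
    sudo systemctl start cachefilesd
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,fsc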

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Donny Davis
I had a corruption issue with the FUSE client on Jewel. I use CephFS for a Samba share with a light load, and I was using the FUSE client. I had a power flap and didn't realize my UPS batteries had gone bad, so the MDS servers were cycled a couple of times and somehow the file system had become

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
Unfortunately no. Using FUSE was discarded due to poor performance. > On 19 July 2017, at 13:45, Blair Bethwaite > wrote: > > Interesting. Any FUSE client data-points? > > On 19 July 2017 at 20:21, Дмитрий Глушенок wrote: >> RBD (via krbd) was

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Blair Bethwaite
Interesting. Any FUSE client data-points? On 19 July 2017 at 20:21, Дмитрий Глушенок wrote: > RBD (via krbd) was in action at the same time - no problems. > > On 19 July 2017, at 12:54, Blair Bethwaite > wrote: > > It would be worthwhile
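One way to collect such data points is to mount the same file system through both clients and run an identical workload against each; a rough sketch with placeholder monitor address and paths:

    # kernel client mount
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs-krn \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client mount
    sudo ceph-fuse -n client.admin /mnt/cephfs-fuse

    # identical streaming write against each mount
    dd if=/dev/zero of=/mnt/cephfs-krn/test bs=4M count=2500 conv=fdatasync
    dd if=/dev/zero of=/mnt/cephfs-fuse/test bs=4M count=2500 conv=fdatasync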

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
RBD (via krbd) was in action at the same time - no problems. > On 19 July 2017, at 12:54, Blair Bethwaite > wrote: > > It would be worthwhile repeating the first test (crashing/killing an > OSD host) again with just plain rados clients (e.g. rados bench) > and/or

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
hase, right? > > Thanks again :-) > > From: Дмитрий Глушенок [mailto:gl...@jet.msk.su] > Sent: 19 July 2017 17:33 > To: 许雪寒 > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] How's cephfs going? > > Hi, > > I can share negative test results (on Jewel 10.2.6).

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Blair Bethwaite
It would be worthwhile repeating the first test (crashing/killing an OSD host) again with just plain rados clients (e.g. rados bench) and/or rbd. It's not clear whether your issue is specifically related to CephFS or actually something else. Cheers, On 19 July 2017 at 19:32, Дмитрий Глушенок
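A minimal sketch of that kind of isolation test, with a placeholder pool and image name:

    # plain RADOS load to run while the OSD host is taken down
    rados bench -p testpool 120 write -b 4M -t 16

    # and/or an rbd-backed load via krbd
    rbd create testpool/bench --size 102400
    sudo rbd map testpool/bench
    dd if=/dev/zero of=/dev/rbd/testpool/bench bs=4M count=2500 oflag=direct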

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
Hi, I can share negative test results (on Jewel 10.2.6). All tests were performed while actively writing to CephFS from a single client (about 1300 MB/sec). The cluster consists of 8 nodes with 8 OSDs each (2 SSD for journals and metadata, 6 HDD RAID6 for data); MON/MDS are on dedicated nodes. 2 MDS at

Re: [ceph-users] How's cephfs going?

2017-07-18 Thread David McBride
On Mon, 2017-07-17 at 02:59 +, 许雪寒 wrote: > Hi, everyone. > > We intend to use the Jewel version of CephFS; however, we don’t know its status. > Is it production ready in Jewel? Does it still have lots of bugs? Is it a > major focus of current Ceph development? And who is using CephFS now?

Re: [ceph-users] How's cephfs going?

2017-07-17 Thread Brady Deetz
I feel that the correct answer to this question is: it depends. I've been running a 1.75PB Jewel-based CephFS cluster in production for about 2 years at Laureate Institute for Brain Research. Before that we had a good 6-8 month planning and evaluation phase. I'm running with active/standby
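For reference, an active/standby (or standby-replay) MDS pair on Jewel is usually expressed in ceph.conf on the standby daemon; a minimal sketch, with daemon names that are placeholders rather than anything from the original post:

    [mds.mds02]
        # follow the journal of rank 0 so takeover after a failure is faster
        mds standby replay = true
        mds standby for rank = 0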

Re: [ceph-users] How's cephfs going?

2017-07-17 Thread Deepak Naidu
Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How's cephfs going? It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multi-active MDS, and some security constraints) that may limit widespread usage.

Re: [ceph-users] How's cephfs going?

2017-07-16 Thread Blair Bethwaite
It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multi-active MDS, and some security constraints) that may limit widespread usage. Also note that userspace client support in e.g. nfs-ganesha and samba is a mixed bag
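As an illustration of the userspace re-export route, Samba can serve CephFS through its vfs_ceph module; a minimal smb.conf sketch (the share name and cephx user are placeholders, not anything recommended in the thread):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no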

[ceph-users] How's cephfs going?

2017-07-16 Thread 许雪寒
Hi, everyone. We intend to use the Jewel version of CephFS; however, we don’t know its status. Is it production ready in Jewel? Does it still have lots of bugs? Is it a major focus of current Ceph development? And who is using CephFS now?