Re: [ceph-users] ceph-fuse using excessive memory

2018-09-24 Thread Yan, Zheng
On Tue, Sep 25, 2018 at 2:23 AM Andras Pataki wrote: > > The whole cluster, including ceph-fuse, is version 12.2.7. > If this issue happens again, please set the "debug_objectcacher" option of ceph-fuse to 15 (for 30 seconds) and send the ceph-fuse log to us. Regards, Yan, Zheng > Andras > > On 9/24/18
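A minimal sketch of how that debug level could be raised at runtime through the client admin socket; the socket path and client name below are assumptions, so adjust them to the actual .asok on the client host:

    # raise objectcacher logging on the ceph-fuse client for ~30 seconds, then restore it
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok config set debug_objectcacher 15
    sleep 30
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok config set debug_objectcacher 0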

[ceph-users] PG inconsistent, "pg repair" not working

2018-09-24 Thread Sergey Malinin
Hello, During normal operation our cluster suddenly threw an error, and since then we have had 1 inconsistent PG, and one of the clients sharing the cephfs mount has started to occasionally log "ceph: Failed to find inode X". "ceph pg repair" deep scrubs the PG and fails with the same error in the log. Can
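For reference, a hedged sketch of the usual first diagnostic steps on Luminous; the PG id below is just a placeholder:

    # show which objects/shards the scrub flagged as inconsistent
    rados list-inconsistent-obj 2.3f --format=json-pretty
    # re-run the repair and watch the cluster log / primary OSD log for the failing object
    ceph pg repair 2.3f
    ceph -w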

Re: [ceph-users] All shards of PG missing object and inconsistent

2018-09-24 Thread Gregory Farnum
On Fri, Sep 21, 2018 at 5:04 PM Thomas White wrote: > Hi all, > > > > I have recently performed a few tasks, namely purging several buckets from > our RGWs and adding additional hosts into Ceph, causing some data movement > for a rebalance. As this is now almost completed, I kicked off some deep >

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
I would say that we consider Mimic production ready now -- it was released a few months ago, with the second point release in final testing right now. On Mon, Sep 24, 2018 at 2:49 PM Florian Florensa wrote: > > For me it's more about whether Mimic will be production ready for mid-October > > On Mon, 24

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
For me it's more about whether Mimic will be production ready for mid-October. On Mon, Sep 24, 2018 at 19:11, Jason Dillaman wrote: > On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa > wrote: > > > > Currently building 4.18.9 on Ubuntu to try it out, also wondering if I > should plan for

Re: [ceph-users] Mimic upgrade failure

2018-09-24 Thread KEVIN MICHAEL HRPCEK
The cluster is healthy and stable. I'll leave a summary for the archive in case anyone else has a similar problem. CentOS 7.5, Ceph Mimic 13.2.1, 3 mon/mgr/mds hosts, 862 OSDs (41 hosts). This was all triggered by an unexpected ~1 min network blip on our 10Gbit switch. The ceph cluster lost

Re: [ceph-users] ceph-fuse using excessive memory

2018-09-24 Thread Andras Pataki
The whole cluster, including ceph-fuse, is version 12.2.7. Andras On 9/24/18 6:27 AM, Yan, Zheng wrote: On Fri, Sep 21, 2018 at 5:40 AM Andras Pataki wrote: I've done some more experiments playing with client config parameters, and it seems like the client_oc_size parameter is very

[ceph-users] [ceph-ansible] create EC pools

2018-09-24 Thread Gilles Mocellin
Hello Cephers, I use ceph-ansible v3.1.5 to build a new Mimic Ceph cluster for OpenStack. I want to use erasure coding for certain pools (images, cinder backups, cinder for one additional backend, rgw data...). The examples in group_vars/all.yml.sample don't show how to specify an erasure
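Independent of the ceph-ansible variable syntax, a hedged sketch of what the equivalent manual commands look like; the profile name, pool name and PG counts below are only placeholders:

    # define an erasure-code profile, then create an EC pool that uses it
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create images 128 128 erasure ec42
    # RBD/CephFS data on an EC pool additionally needs overwrites enabled (Luminous and later)
    ceph osd pool set images allow_ec_overwrites true
    ceph osd pool application enable images rbd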

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa wrote: > > Currently building 4.18.9 on Ubuntu to try it out, also wondering if I should > plan for xenial+luminous or directly target bionic+mimic There shouldn't be any technical restrictions on the Ceph iSCSI side, so it would come down to

Re: [ceph-users] Cluster Security

2018-09-24 Thread Anthony Verevkin
It is not quite clear to me what you are trying to achieve. If you want to separate the hypervisors from Ceph, that would not give you much: the HV is a man-in-the-middle anyway, so it would be able to tap into traffic whatever you do. iSCSI won't help you here. Also you would probably need to let the

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
Currently building 4.18.9 on Ubuntu to try it out, also wondering if I should plan for xenial+luminous or directly target bionic+mimic. On Mon, Sep 24, 2018 at 18:08, Jason Dillaman wrote: > It *should* work against any recent upstream kernel (>=4.16) and > up-to-date dependencies [1]. If you

Re: [ceph-users] bluestore osd journal move

2018-09-24 Thread Vasu Kulkarni
On Mon, Sep 24, 2018 at 8:59 AM Andrei Mikhailovsky wrote: > > Hi Eugen, > > Many thanks for the links and the blog article. Indeed, the process of > changing the journal device seem far more complex than the FileStore osds. > Far more complex than it should be from an administrator point of

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
It *should* work against any recent upstream kernel (>=4.16) and up-to-date dependencies [1]. If you encounter any distro-specific issues (like the PR that Mike highlighted), we would love to get them fixed. [1] http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ On Mon, Sep
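As a quick sanity check before following that manual-install doc, something along these lines (component names are the ones listed in the doc; exact distro package names may differ):

    # the gateway needs a recent LIO/tcmu stack, hence the >= 4.16 kernel requirement
    uname -r
    # components the manual-install doc walks through building/installing:
    #   targetcli-fb, python-rtslib, tcmu-runner, ceph-iscsi-config, ceph-iscsi-cli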

Re: [ceph-users] bluestore osd journal move

2018-09-24 Thread Andrei Mikhailovsky
Hi Eugen, Many thanks for the links and the blog article. Indeed, the process of changing the journal device seems far more complex than for FileStore OSDs. Far more complex than it should be from an administrator's point of view. I guess developers and admins live on different planets and it

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
So from my understanding, as of right now it is not possible to have an iSCSI gateway outside of RHEL? On Mon, Sep 24, 2018 at 17:45, Mike Christie wrote: > On 09/24/2018 05:47 AM, Florian Florensa wrote: > > Hello there, > > > > I am still in the works of preparing a deployment with iSCSI

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Mike Christie
On 09/24/2018 05:47 AM, Florian Florensa wrote: > Hello there, > > I am still in the works of preparing a deployment with iSCSI gateways > on Ubuntu, but both of the latest Ubuntu LTS releases ship with kernel 4.15, > and I don't see support for iSCSI. > What kernel are people using for this? > -

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-24 Thread Ken Dreyer
On Thu, Sep 13, 2018 at 8:48 PM kefu chai wrote: > my question is: is it okay to drop the support of centos/rhel 7.4? so > we will solely build and test the supported Ceph releases (luminous, > mimic) on 7.5 ? CentOS itself does not support old point releases, and I don't think we should imply

Re: [ceph-users] ceph-ansible

2018-09-24 Thread Ken Dreyer
Hi Alfredo, I've packaged the latest version in Fedora, but I didn't update EPEL. I've submitted the update for EPEL now at https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-7f8d3be3e2 . solarflow99, you can test this package and report "+1" in Bodhi there. It's also in the CentOS Storage
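If it helps, the usual way to pull a pending Bodhi update for testing (assuming an EPEL 7 host here) is something like:

    # install the candidate build from epel-testing, then leave karma in Bodhi afterwards
    yum --enablerepo=epel-testing install ceph-ansible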

Re: [ceph-users] bluestore osd journal move

2018-09-24 Thread Eugen Block
Hi, I am wondering if it is possible to move the ssd journal for the bluestore osd? I would like to move it from one ssd drive to another. Yes, this question has been asked several times. Depending on your deployment there are several things to be aware of; maybe you should first read [1]
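For the archive, a hedged sketch of the rebuild approach that was the documented route on Luminous; the OSD id and device paths below are placeholders, and the OSD should be safely drained/out and the cluster healthy before doing this:

    # retire the OSD but keep its id, then recreate it with block.db on the new SSD
    systemctl stop ceph-osd@12
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdc --destroy
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1 --osd-id 12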

Re: [ceph-users] Mimic upgrade failure

2018-09-24 Thread Sage Weil
Hi Kevin, Do you have an update on the state of the cluster? I've opened a ticket http://tracker.ceph.com/issues/36163 to track the likely root cause we identified, and have a PR open at https://github.com/ceph/ceph/pull/24247 Thanks! sage On Thu, 20 Sep 2018, Sage Weil wrote: > On Thu, 20

[ceph-users] bluestore osd journal move

2018-09-24 Thread Andrei Mikhailovsky
Hello everyone, I am wondering if it is possible to move the ssd journal for the bluestore osd? I would like to move it from one ssd drive to another. Thanks

[ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
Hello there, I am still in the works of preparing a deployment with iSCSI gateways on Ubuntu, but both of the latest Ubuntu LTS releases ship with kernel 4.15, and I don't see support for iSCSI. What kernel are people using for this? - Mainline v4.16 from the Ubuntu kernel team? - Kernel from

Re: [ceph-users] ceph-fuse using excessive memory

2018-09-24 Thread Yan, Zheng
On Fri, Sep 21, 2018 at 5:40 AM Andras Pataki wrote: > > I've done some more experiments playing with client config parameters, > and it seems like the client_oc_size parameter is very correlated to > how big ceph-fuse grows. With its default value of 200MB, ceph-fuse > gets to about 22GB of
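For anyone wanting to experiment with the same knob, a sketch of the client-side override (the 100 MB value is only an example; the default is 200 MB):

    [client]
    # ceph-fuse object cacher size in bytes; default 209715200 (200 MB)
    client_oc_size = 104857600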

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread mj
On 09/24/2018 08:53 AM, Nicolas Huillard wrote: Thanks for your anecdote ;-) Could it be that I stack too many things (XFS in LVM in md-RAID in the SSD's FTL)? No, we regularly use the same compound of layers, just without the SSD. mj

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread mj
On 09/24/2018 08:46 AM, Nicolas Huillard wrote: Too bad, since this FS has a lot of very promising features. I view it as the single-host ceph-like FS, and do not see any equivalent (apart from ZFS, which will also never be included in the kernel). Agreed. It's also so much more flexible than

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread Nicolas Huillard
On Sunday, September 23, 2018 at 20:28 +0200, mj wrote: > XFS has *always* treated us nicely, and we have been using it for a VERY > long time, ever since the pre-2000 SuSE 5.2 days on pretty much all our > machines. > > We have seen only very few corruptions on XFS, and the few times we

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread Nicolas Huillard
On Sunday, September 23, 2018 at 17:49 -0700, solarflow99 wrote: > ya, sadly it looks like btrfs will never materialize as the next > filesystem of the future. Red Hat as an example even dropped it from its future, as > others probably will and have too. Too bad, since this FS has a lot of