Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-23 Thread Josh Haft
On Fri, Mar 23, 2018 at 8:49 PM, Yan, Zheng wrote: > On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft wrote: > > On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote: > >> > >> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft

Re: [ceph-users] Enable object map kernel module

2018-03-23 Thread Konstantin Shalygin
how can we deal with that? I see some comments that large images without omap may be very slow to delete. The only way for now is to use rbd-nbd or rbd-fuse. k
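
A minimal sketch of that workaround, assuming an image named rbd/test (names are placeholders): rbd-nbd and rbd-fuse go through librbd, which does support object-map, so the kernel module's feature limitation is bypassed.

    # map through librbd (userspace) instead of krbd
    rbd-nbd map rbd/test            # prints the nbd device, e.g. /dev/nbd0
    rbd-nbd unmap /dev/nbd0
    # or expose every image of a pool as a file under a mountpoint
    rbd-fuse -p rbd /mnt/rbdfuse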

Re: [ceph-users] Shell / curl test script for rgw

2018-03-23 Thread Konstantin Shalygin
On 03/24/2018 07:22 AM, Marc Roos wrote: Thanks! I got it working, although I had to change the date to "date -R -u" because I got the "RequestTimeTooSkewed" error. I also had to enable buckets=read on the account that was already able to read and write via Cyberduck; I don’t get that.

Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-23 Thread Yan, Zheng
On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft wrote: > On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote: >> >> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote: >> > Hello! >> > >> > I'm running Ceph 12.2.2 with one primary and one standby

Re: [ceph-users] How to persist configuration about enabled mgr plugins in Luminous 12.2.4

2018-03-23 Thread Gregory Farnum
I believe this popped up recently and is a container bug. It’s forcibly resetting the modules to run on every start. On Sat, Mar 24, 2018 at 5:44 AM Subhachandra Chandra wrote: > Hi, > > We used ceph-ansible to install/update our Ceph cluster config where > all the ceph

Re: [ceph-users] Shell / curl test script for rgw

2018-03-23 Thread Marc Roos
Thanks! I got it working, although I had to change the date to "date -R -u" because I got the "RequestTimeTooSkewed" error. I also had to enable buckets=read on the account that was already able to read and write via Cyberduck; I don’t get that. radosgw-admin caps add --uid='test$test1'
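
For reference, a rough sketch of the kind of signed request being tested, with hypothetical credentials, bucket and endpoint; the RFC 2822 UTC date from "date -R -u" is what avoids RequestTimeTooSkewed:

    access_key='test$test1'          # placeholder credentials and endpoint
    secret_key='SECRET'
    resource='/my-bucket/'
    date_value=$(date -R -u)
    string_to_sign="GET\n\n\n${date_value}\n${resource}"
    signature=$(printf '%b' "${string_to_sign}" | openssl sha1 -hmac "${secret_key}" -binary | base64)
    curl -s -H "Date: ${date_value}" \
         -H "Authorization: AWS ${access_key}:${signature}" \
         "http://rgw.example.com${resource}"
    # the caps change mentioned above would look roughly like:
    # radosgw-admin caps add --uid='test$test1' --caps="buckets=read"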

[ceph-users] Enable object map kernel module

2018-03-23 Thread Thiago Gonzaga
Hi All, I'm starting with ceph and faced a problem while using object-map root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format 2 --image-feature exclusive-lock root@ceph-mon-1:/home/tgonzaga# rbd feature enable test object-map root@ceph-mon-1:/home/tgonzaga# rbd list test
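
A sketch of the same sequence with the feature dependencies spelled out (image name taken from the post; object-map requires exclusive-lock, and fast-diff requires object-map):

    # declare the features at creation time
    rbd create test -s 1024 --image-format 2 \
        --image-feature exclusive-lock,object-map,fast-diff
    # or enable them afterwards and build the map for existing data
    rbd feature enable test object-map fast-diff
    rbd object-map rebuild test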

[ceph-users] How to persist configuration about enabled mgr plugins in Luminous 12.2.4

2018-03-23 Thread Subhachandra Chandra
Hi, We used ceph-ansible to install/update our Ceph cluster config where all the ceph daemons run as containers. In mgr.yml I have the following config ### # MODULES # ### # Ceph mgr modules to enable, current modules available are:
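
For comparison, modules enabled through the CLI are stored in the MgrMap and survive daemon restarts; a minimal sketch (module names are examples):

    ceph mgr module enable prometheus
    ceph mgr module enable dashboard
    ceph mgr module ls            # shows which modules are enabled vs. merely available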

Re: [ceph-users] MDS Bug/Problem

2018-03-23 Thread John Spray
On Fri, Mar 23, 2018 at 7:45 PM, Perrin, Christopher (zimkop1) wrote: > Hi, > > Last week our MDSs started failing one after another, and could not be > started anymore. After a lot of tinkering I found out that MDSs crashed after > trying to rejoin the cluster. The only

Re: [ceph-users] Erasure Coded Pools and OpenStack

2018-03-23 Thread Mike Cave
Thank you for getting back to me so quickly. Your suggestion of adding the config change in ceph.conf was a great one. That helped a lot. I didn't realize that the client would need to be updated; I thought it was a cluster-side modification only. Something else that I missed was giving
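
For anyone following along, a sketch of the kind of client-side and pool-side changes involved; pool names here are placeholders, not the poster's actual configuration:

    # cluster side: let RBD write to the erasure coded pool (Luminous and later)
    ceph osd pool set ec-volumes allow_ec_overwrites true
    ceph osd pool application enable ec-volumes rbd
    # client side, e.g. in the [client] section of ceph.conf:
    #   rbd default data pool = ec-volumes
    # image metadata then lives in the replicated pool, data in the EC pool:
    rbd create --size 10G --data-pool ec-volumes volumes/test-image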

Re: [ceph-users] Uneven pg distribution cause high fs_apply_latency on osds with more pgs

2018-03-23 Thread David Turner
Luminous addresses it with an mgr plugin that actively changes the weights of OSDs to balance the distribution. In addition to having PGs distributed well so your OSDs hold an equal amount of data, which OSDs are primary also matters. If you're running into a lot of latency on specific OSDs
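
A minimal sketch of turning on the Luminous balancer module referred to above (upmap mode assumes all clients speak Luminous; otherwise crush-compat is the fallback):

    ceph mgr module enable balancer
    ceph balancer mode upmap          # or: ceph balancer mode crush-compat
    ceph balancer on
    ceph balancer status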

Re: [ceph-users] cephalocon slides/videos

2018-03-23 Thread David Turner
A lot of videos from ceph days and such pop up on the [1] Ceph youtube channel. [1] https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw On Fri, Mar 23, 2018 at 5:28 AM Serkan Çoban wrote: > Hi, > > Where can I find slides/videos of the conference? > I already tried

Re: [ceph-users] Lost space or expected?

2018-03-23 Thread David Turner
The first thing I looked at was if you had any snapshots/clones in your pools, but that count is 0 for you. Second, I would look at seeing if you have orphaned objects from deleted RBDs. You could check that by comparing a list of the rbd 'block_name_prefix' for all of the rbds in the pool with
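
A rough sketch of that comparison, assuming a pool named "rbd"; it only lists candidate prefixes and deletes nothing:

    # prefixes belonging to existing images
    for img in $(rbd ls rbd); do
        rbd info "rbd/${img}" | awk '/block_name_prefix/ {print $2}'
    done | sort -u > known_prefixes
    # prefixes actually present as data objects in the pool
    rados -p rbd ls | grep -oE '^rbd_data\.[0-9a-f]+' | sort -u > found_prefixes
    # prefixes that have objects but no matching image (potential orphans)
    comm -13 known_prefixes found_prefixes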

Re: [ceph-users] remove big rbd image is very slow

2018-03-23 Thread David Turner
Just to note the "magic" of object-map... A 50TB RBD with object-map that is 100% in use will take the same amount of time to 'rbd rm' as a brand new 50TB RBD with no data in it and no object-map enabled. Removing that many objects just

Re: [ceph-users] Moving OSDs between hosts

2018-03-23 Thread David Turner
Just moving the OSD is indeed the right thing to do and the crush map will update when the OSDs start up on the new host. The only "gotcha" is if you do not have your journals/WAL/DBs on the same device as your data. In that case, you will need to move both devices to the new server for the OSD
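
A sketch of the sequence for an OSD whose data and WAL/DB share one device, assuming ceph-volume/LVM OSDs and the default "osd crush update on start = true" (OSD id is a placeholder):

    # on the old host
    systemctl stop ceph-osd@12
    # physically move the disk, then on the new host
    ceph-volume lvm activate --all        # or: ceph-volume lvm activate 12 <osd-fsid>
    # when the OSD starts it re-registers under the new host's CRUSH bucket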

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Nicolas Huillard
On Friday 23 March 2018 at 12:14 +0100, Ilya Dryomov wrote: > On Fri, Mar 23, 2018 at 11:48 AM,   wrote: > > The stock kernel from Debian is perfect > > Spectre/Meltdown mitigations are worthless from a Ceph point of > > view, > > and should be disabled (again, strictly

Re: [ceph-users] why we show removed snaps in ceph osd dump pool info?

2018-03-23 Thread David Turner
The removed snaps list is also in the osd map. It does get truncated over time into ranges and such, and it is definitely annoying, but it is needed for some of the internals of Ceph. I don't remember what they are, but that was the gist of the answer I got back when we were working on some
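
For reference, the field in question is visible per pool in the OSD map dump; output below is illustrative:

    ceph osd dump | grep removed_snaps
    # pool 2 'rbd' ... removed_snaps [1~3,5~2]    (stored as an interval set, not individual ids)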

Re: [ceph-users] CHOOSING THE NUMBER OF PLACEMENT GROUPS

2018-03-23 Thread David Turner
PGs per pool also has a lot to do with how much data each pool will have. If 1 pool will have 90% of the data, it should have 90% of the PGs. If it will be common for you to create and delete pools (not usually common, and probably something you can handle more simply), then you can aim to start at a
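
A worked example of that weighting, using the common target of roughly 100 PGs per OSD (all numbers are illustrative):

    # 60 OSDs, replica 3, ~100 PGs/OSD  =>  60 * 100 / 3 = 2000 PGs across all pools
    # pool holding ~90% of the data: ~1800 -> next power of two is 2048
    # remaining pools split what is left, e.g. 128 each
    ceph osd pool create big-pool 2048 2048
    ceph osd pool create small-pool 128 128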

Re: [ceph-users] Luminous and jemalloc

2018-03-23 Thread Alexandre DERUMIER
Hi, I think it's no longer a problem since the async messenger is the default. The difference between jemalloc and tcmalloc is minimal now. Regards, Alexandre - Original message - From: "Xavier Trilla" To: "ceph-users" Cc: "Arnau Marcé"

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 3:01 PM, wrote: > Ok ^^ > > For CephFS, as far as I know, quotas are not supported in kernel space. > This is not specific to luminous, though. Quota support is coming, hopefully in 4.17. Thanks, Ilya

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
Ok ^^ For CephFS, as far as I know, quotas are not supported in kernel space. This is not specific to luminous, though. On 03/23/2018 03:00 PM, Ilya Dryomov wrote: > On Fri, Mar 23, 2018 at 2:18 PM, wrote: >> On 03/23/2018 12:14 PM, Ilya Dryomov wrote: >>> luminous

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 2:18 PM, wrote: > On 03/23/2018 12:14 PM, Ilya Dryomov wrote: >> luminous cluster-wide feature bits are supported since kernel 4.13. > > ? > > # uname -a > Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1 > (2018-01-14) x86_64

Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-23 Thread Josh Haft
On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote: > > On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote: > > Hello! > > > > I'm running Ceph 12.2.2 with one primary and one standby MDS. Mounting > > CephFS via ceph-fuse (to leverage quotas), and enabled
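
For context, ACL behaviour with ceph-fuse is governed by a couple of client options; a sketch of the combination usually suggested for POSIX ACL enforcement (verify against your version's docs):

    [client]
        client acl type = posix_acl
        fuse default permissions = false
        # with fuse default permissions off, permission (and ACL) checks are done
        # by the Ceph client code instead of the generic kernel VFS check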

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
On 03/23/2018 12:14 PM, Ilya Dryomov wrote: > luminous cluster-wide feature bits are supported since kernel 4.13. ? # uname -a Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1 (2018-01-14) x86_64 GNU/Linux # rbd info truc rbd image 'truc': size 20480 MB in 5120 objects
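
If the image carries features the kernel client does not implement, the usual options are to strip them or to map through librbd; a sketch using the image name from the post:

    rbd info truc | grep features
    # drop the features krbd cannot handle (fast-diff depends on object-map, so they go together)
    rbd feature disable truc deep-flatten fast-diff object-map
    rbd map truc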

[ceph-users] Luminous and jemalloc

2018-03-23 Thread Xavier Trilla
Hi, Does anybody have information about using jemalloc with Luminous? From what I've seen on the mailing list and online, BlueStore crashes when using jemalloc. We've been running Ceph with jemalloc since Hammer, as performance with tcmalloc was terrible (we run a quite big all-SSD cluster)

Re: [ceph-users] Bluestore cluster, bad IO perf on blocksize<64k... could it be throttling ?

2018-03-23 Thread Maged Mokhtar
On 2018-03-21 19:50, Frederic BRET wrote: > Hi all, > > The context : > - Test cluster aside production one > - Fresh install on Luminous > - choice of Bluestore (coming from Filestore) > - Default config (including wpq queuing) > - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes, far more Gb at

[ceph-users] MDS Bug/Problem

2018-03-23 Thread Perrin, Christopher (zimkop1)
Hi, Last week our MDSs started failing one after another, and could not be started anymore. After a lot of tinkering I found out that the MDSs crashed after trying to rejoin the cluster. The only solution I found that let them start again was resetting the journal via cephfs-journal-tool. Now I
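
For anyone hitting the same thing, the sequence usually recommended before a plain journal reset, so that recoverable metadata is not silently discarded, looks roughly like this (filenames are placeholders):

    # back up the journal first
    cephfs-journal-tool journal export backup.bin
    # try to replay recoverable entries into the backing store
    cephfs-journal-tool event recover_dentries summary
    # only then truncate the journal
    cephfs-journal-tool journal reset
    # and, if the MDS still refuses to start, clear the session table
    cephfs-table-tool all reset session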

Re: [ceph-users] OSD crash with segfault Luminous 12.2.4

2018-03-23 Thread Dietmar Rieder
Hi, I encountered one more two days ago, and I opened a ticket: http://tracker.ceph.com/issues/23431 In our case it is more like 1 every two weeks, for now... And it is affecting different OSDs on different hosts. Dietmar On 03/23/2018 11:50 AM, Oliver Freyermuth wrote: > Hi together, > > I

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 11:48 AM, wrote: > The stock kernel from Debian is perfect > Spectre/Meltdown mitigations are worthless from a Ceph point of view, > and should be disabled (again, strictly from a Ceph point of view) > > If you need the luminous features, using the

Re: [ceph-users] Bluestore cluster, bad IO perf on blocksize<64k... could it be throttling ?

2018-03-23 Thread Ilya Dryomov
On Wed, Mar 21, 2018 at 6:50 PM, Frederic BRET wrote: > Hi all, > > The context : > - Test cluster aside production one > - Fresh install on Luminous > - choice of Bluestore (coming from Filestore) > - Default config (including wpq queuing) > - 6 nodes SAS12, 14 OSD, 2

Re: [ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-23 Thread Luis Periquito
On Fri, Mar 23, 2018 at 4:05 AM, Anthony D'Atri wrote: > FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn't work > properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830 > > > That's an OpenStack bug, nothing to do with Ceph. Nothing stops you
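
As a point of reference, limits that do not depend on the OpenStack integration can be applied directly through libvirt; a sketch with domain, device and values as placeholders:

    # throttle one disk of a running guest
    virsh blkdeviotune guest1 vda --total-iops-sec 500 --total-bytes-sec 52428800 --live
    # show the limits currently applied
    virsh blkdeviotune guest1 vda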

Re: [ceph-users] OSD crash with segfault Luminous 12.2.4

2018-03-23 Thread Oliver Freyermuth
Hi together, I notice exactly the same, also the same addresses, Luminous 12.2.4, CentOS 7. Sadly, the logs are equally unhelpful. It happens randomly on an OSD about once per 2-3 days (out of the 196 total OSDs we have). It's also not a container environment. Cheers, Oliver On 08.03.2018

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread ceph
The stock kernel from Debian is perfect. Spectre/Meltdown mitigations are worthless from a Ceph point of view, and should be disabled (again, strictly from a Ceph point of view). If you need the luminous features, using the userspace implementations is required (librbd via rbd-nbd or qemu,

[ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Nicolas Huillard
Hi all, I'm using Luminous 12.2.4 on all servers, with the Debian stock kernel. I use the kernel cephfs/rbd on the client side, and have a choice of: * stock Debian 9 kernel 4.9: LTS, Spectre/Meltdown mitigations in place, field-tested, probably old libceph inside. * backports kernel 4.14:

[ceph-users] cephalocon slides/videos

2018-03-23 Thread Serkan Çoban
Hi, Where can I find slides/videos of the conference? I already tried (1), but cannot view the videos. Serkan 1- http://www.itdks.com/eventlist/detail/1962

Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?

2018-03-23 Thread Alexandre DERUMIER
Hi, >> Did the fs have lots of mount/umount? Not too much; I have around 300 ceph-fuse clients (12.2.2 && 12.2.4) and the ceph cluster is 12.2.2. Maybe when clients reboot, but that doesn't happen too often. >> We recently found a memory leak >> bug in that area https://github.com/ceph/ceph/pull/20148
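
A quick way to see how much of that memory is accounted cache versus a possible leak, roughly (daemon name is a placeholder; mds_cache_memory_limit is the Luminous-era knob, in bytes):

    ceph daemon mds.ceph-mds-1 cache status
    ceph daemon mds.ceph-mds-1 config get mds_cache_memory_limit
    ceph daemon mds.ceph-mds-1 perf dump | grep -A6 '"mds_mem"'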