[ceph-users] Re: vfs_ceph and permissions

2019-09-11 Thread Konstantin Shalygin
On 9/12/19 2:04 AM, ceph-us...@dxps31.33mail.com wrote: Thanks both for the pointer! Even with the vfs objects on the same line I get the same result. This is the testparm output for the share (I'm logged in as SAMDOM\Administrator): [data] acl group control = Yes admin users =

[ceph-users] Re: vfs_ceph and permissions

2019-09-11 Thread ceph-users
Hi! Thanks both for the pointer! Even with the vfs objects on the same line I get the same result. This is the testparm output for the share (I'm logged in as SAMDOM\Administrator): [data] acl group control = Yes admin users = "@Domain Admins" "SAMDOM\Domain Admins"
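For reference, a minimal smb.conf sketch with the vfs_ceph module listed on a single "vfs objects" line; the share path, the acl_xattr module, and the cephx user are assumptions for illustration, not values taken from this thread:

    [data]
        path = /
        vfs objects = acl_xattr ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        acl group control = Yes
        admin users = "SAMDOM\Domain Admins"

Keeping both modules in one "vfs objects" line matters because a second "vfs objects" line would simply override the first, not add to it.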

[ceph-users] Using same instance name for rgw

2019-09-11 Thread Eric Choi
I previously posted this question to lists.ceph.com, not realizing that lists.ceph.io is its replacement. Posting it again here with some edits. --- Hi there, we have been using ceph for a few years now, and it's only now that I've noticed we have been using the same name for all RGW hosts,
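For context, a hedged ceph.conf sketch of the usual pattern, one uniquely named client section per gateway; the section names, hosts and the beast frontend are placeholders, not the poster's actual setup:

    [client.rgw.gw1]
        host = gw1
        rgw_frontends = beast port=7480

    [client.rgw.gw2]
        host = gw2
        rgw_frontends = beast port=7480

Each radosgw process is then started under its own name (client.rgw.gw1, client.rgw.gw2, ...) so the gateways no longer all register under one shared instance name.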

[ceph-users] Re: subscriptions from lists.ceph.com now on lists.ceph.io?

2019-09-11 Thread Eric Choi
I can verify that lists.ceph.com still works; I just posted a message there.

[ceph-users] Re: regurlary 'no space left on device' when deleting on cephfs

2019-09-11 Thread Kenneth Waegeman
On 11/09/2019 04:14, Yan, Zheng wrote: On Wed, Sep 11, 2019 at 6:51 AM Kenneth Waegeman wrote: We sync the file system without preserving hard links. But we take snapshots after each sync, so I guess files that were deleted but are still referenced by snapshots can also end up in the stray directories?
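One hedged way to watch this is the stray counters on the MDS admin socket; "mds.a" below is a placeholder for the active MDS:

    ceph daemon mds.a perf dump | grep -i stray

If num_strays keeps growing across syncs, files that are only kept alive by snapshots are a plausible cause, as discussed above.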

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-09-11 Thread Marco Gaiarin
> I'm not a ceph expert, but solution iii) seems decent for me, with a little overhead (a readlink and a stat for every osd start). I've tested my patch and it works as expected; I've created: https://tracker.ceph.com/issues/41777 Thanks. -- dott. Marco Gaiarin
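For anyone hitting the same problem, a hand-run version of the readlink-and-stat check mentioned above; the OSD id and paths are placeholders:

    JOURNAL=$(readlink -f /var/lib/ceph/osd/ceph-0/journal)   # resolve the symlink to the real device
    stat -c '%U:%G %n' "$JOURNAL"                             # check whether it is owned by ceph:ceph
    chown ceph:ceph "$JOURNAL"                                # only if the ownership turns out wrong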

[ceph-users] Re: ceph-volume lvm create leaves half-built OSDs lying around

2019-09-11 Thread Jan Fajerski
On Wed, Sep 11, 2019 at 11:17:47AM +0100, Matthew Vernon wrote: > Hi, we keep finding part-made OSDs (they appear not attached to any host, and down and out; but still counting towards the number of OSDs); we never saw this with ceph-disk. On investigation, this is because ceph-volume lvm
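When one of these orphaned OSDs turns up, a hedged manual cleanup looks roughly like this; the OSD id and device are placeholders, so double-check both before zapping anything:

    ceph osd purge 12 --yes-i-really-mean-it   # drop the stray id from the CRUSH map, osdmap and auth
    ceph-volume lvm zap --destroy /dev/sdX     # wipe the partially prepared LVs so the device can be reused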

[ceph-users] verify_upmap number of buckets 5 exceeds desired 4

2019-09-11 Thread Eric Dold
Hello, I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently upgraded this from four hosts. The cluster is running fine, but I get this in my logs: Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700 -1 verify_upmap number of buckets 5 exceeds desired 4
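verify_upmap checks existing pg-upmap exceptions against the CRUSH rules, and the message suggests one of them now spans more buckets than a rule expects after the expansion. Two hedged, read-only commands for narrowing that down:

    ceph osd crush rule dump              # inspect the choose/chooseleaf steps of each rule
    ceph osd dump | grep pg_upmap_items   # list the upmap exceptions the monitors are validating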

[ceph-users] Warning: 1 pool nearfull and unbalanced data distribution

2019-09-11 Thread Thomas
Hi, the output of ceph health detail gives me a warning that concerns me a little. I'll explain in a second. root@ld3955:/mnt/rbd# ceph health detail HEALTH_WARN 1 nearfull osd(s); 1 pool(s) nearfull; 4 pools have too many placement groups OSD_NEARFULL 1 nearfull osd(s)     osd.122 is near
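A few hedged, read-only commands for digging into both warnings before changing anything:

    ceph osd df tree                 # per-OSD utilization, to see how far osd.122 sits above the rest
    ceph osd pool autoscale-status   # shows which pools the autoscaler thinks have too many PGs
    ceph balancer status             # check whether the balancer (ideally in upmap mode) is active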

[ceph-users] Re: unsubscribe

2019-09-11 Thread Wesley Peng
Hi, on 2019/9/11 15:14, Gökhan Kocak wrote: ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io The signature of the message you just sent already has the info on how to leave the list. regards.

[ceph-users] unsubscribe

2019-09-11 Thread Gökhan Kocak
___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io