On 9/12/19 2:04 AM, ceph-us...@dxps31.33mail.com wrote:
Hi!
Thanks both for the pointer! Even with the vfs objects on the same line I get
the same result.
This is the testparm for the share (I'm logged in as SAMDOM\Administrator):
[data]
acl group control = Yes
admin users = "@Domain Admins" "SAMDOM\Domain Admins"
I previously posted this question to lists.ceph.com, not realizing that
lists.ceph.io is its replacement. Posting it again here with some edits.
---
Hi there, we have been using Ceph for a few years now, and it's only now
that I've noticed we have been using the same name for all RGW hosts,
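A quick way to check what names the gateways actually registered under
(a minimal sketch; I'm assuming the service map is the right place to
look for this):

  # Dump the service map; rgw daemons are listed under the names
  # they registered with.
  ceph service dump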
I can verify lists.ceph.com still works; I just posted a message there.
On 11/09/2019 04:14, Yan, Zheng wrote:
On Wed, Sep 11, 2019 at 6:51 AM Kenneth Waegeman
wrote:
We sync the file system without preserving hard links. But we take
snapshots after each sync, so I guess deleted files which are still
referenced by snapshots can also end up in the stray directories?
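If it helps to confirm that, the MDS exposes stray counters over its
admin socket (a minimal sketch; mds.a is a placeholder for your active
MDS name):

  # Count the stray dentries the MDS is currently holding.
  ceph daemon mds.a perf dump mds_cache | grep num_strays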
> I'm not a ceph expert, but solution iii) seems decent for me, with a
> little overhead (a readlink and a stat for every osd start).
I've tested my patch and it works as expected; I've created:
https://tracker.ceph.com/issues/41777
Thanks.
--
dott. Marco Gaiarin
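For scale, the per-start overhead mentioned above amounts to something
like this (a sketch under my reading of solution iii; the osd id 0 is a
placeholder):

  # Resolve the OSD's block symlink, then stat the target device.
  dev=$(readlink -f /var/lib/ceph/osd/ceph-0/block)
  stat "$dev"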
On Wed, Sep 11, 2019 at 11:17:47AM +0100, Matthew Vernon wrote:
>Hi,
>
>We keep finding part-made OSDs (they appear not attached to any host,
>and down and out; but still counting towards the number of OSDs); we
>never saw this with ceph-disk. On investigation, this is because
>ceph-volume lvm
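For anyone hitting the same thing, this is how such leftovers can be
cleared (a minimal sketch; osd id 12 is a hypothetical value read off
the tree output):

  # Unattached, down+out OSDs show up outside any host bucket.
  ceph osd tree
  # Remove a half-created OSD entirely: auth key, crush entry, osd id.
  ceph osd purge 12 --yes-i-really-mean-it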
Hello,
I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently
upgraded this from four hosts.
The cluster is running fine, but I get this in my logs:
Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700
-1 verify_upmap number of buckets 5 exceeds desired 4
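My reading (not a confirmed diagnosis) is that a pg-upmap exception
created before the expansion no longer agrees with what the crush rule
wants. A minimal sketch for finding and dropping a stale entry; the pg
id 1.0 is a placeholder:

  # List the upmap exceptions currently stored in the osdmap.
  ceph osd dump | grep upmap
  # Remove a stale exception so the balancer can rebuild a valid one.
  ceph osd rm-pg-upmap-items 1.0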
Hi,
the output of ceph health detail gives me a warning that concerns me a
little. I'll explain in a second.
root@ld3955:/mnt/rbd# ceph health detail
HEALTH_WARN 1 nearfull osd(s); 1 pool(s) nearfull; 4 pools have too many
placement groups
OSD_NEARFULL 1 nearfull osd(s)
osd.122 is near
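The snippet is cut off, but addressing both warnings usually comes down
to two knobs (a sketch with placeholder values; check utilisation
first before touching weights):

  # Show per-OSD utilisation; osd.122 should stand out.
  ceph osd df
  # Temporarily shift data off the nearfull OSD (0.95 is a placeholder).
  ceph osd reweight 122 0.95
  # Nautilus can merge PGs, so pools with too many placement groups
  # can be shrunk (pool name "mypool" and 128 are placeholders).
  ceph osd pool set mypool pg_num 128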
Hi
On 2019/9/11 15:14, Gökhan Kocak wrote:
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
The signature of the message you just sent has the info on how to leave
the list.
Regards.