> To: Davie De Smet
> Cc: Gregory Farnum; ceph-users
> Subject: Re: [ceph-users] CephFS: No space left on device
>
> I have written a tool that fixes this type of error. I'm currently testing
> it. I will push it out tomorrow.
>
> Regards,
> Yan, Zheng
On Wed, Oct 12, 2016 at 7:18 AM, Davie De Smet wrote:
> Hi Gregory,
>
> Thanks for the help! I've been looping over all trashcan files and the amount
> of strays is lowering. This is going to take quite some time as there are a
> lot of files, but so far so good. If I should encounter any further problems,
> I'll report back.
> Davie De Smet; Director Technical Operations and Customer Services, Nomadesk
> +32 9 240 10 31 (Office)
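For readers following along: the stray count Davie is watching can be read from the MDS admin socket with `ceph daemon mds.<name> perf dump`. A minimal sketch, assuming a Jewel-era counter layout (`mds_cache` / `num_strays`) and an MDS named `a` (both are assumptions, not details from the thread):

```python
import json
import subprocess

def strays_from_perf(perf: dict) -> int:
    # The stray counters live under "mds_cache" in Jewel-era perf dumps;
    # the exact counter name is an assumption here.
    return perf["mds_cache"]["num_strays"]

def num_strays(mds_name: str = "a") -> int:
    # Query the MDS admin socket and parse the JSON perf dump.
    out = subprocess.check_output(
        ["ceph", "daemon", "mds." + mds_name, "perf", "dump"])
    return strays_from_perf(json.loads(out))
```

Polling this once a minute or so shows whether a trashcan sweep like Davie's is actually draining the stray directories.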
>
> -----Original Message-----
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: Wednesday, October 12, 2016 2:11 AM
> To: Davie De Smet
> Cc: Mykola Dvornik; John Spray; ceph-users
Subject: Re: [ceph-users] CephFS: No space left on device
On Tue, Oct 11, 2016 at 12:20 AM, Davie De Smet wrote:
> Hi,
>
> We do use hardlinks a lot. The application using the cluster has a built-in
> 'trashcan' functionality based on hardlinks. Obviously, all removed files and
> hardlinks are not visible anymore on the CephFS mount itself. Can I manually …
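A read-only way to see how much really sits in the stray directories (as opposed to what the mount shows) is to list the omap keys on the stray dir objects in the metadata pool. The MDS keeps ten stray directories, inodes 0x600 through 0x609, stored as objects `600.00000000` … `609.00000000`; the pool name `metadata` and the injectable `run` hook below are assumptions for this sketch:

```python
import subprocess

# The MDS's ten stray directories are inodes 0x600..0x609, stored in the
# metadata pool as objects "600.00000000" .. "609.00000000".
STRAY_DIR_OBJECTS = ["%x.00000000" % (0x600 + i) for i in range(10)]

def count_stray_dentries(pool="metadata", run=subprocess.check_output):
    # Each omap key on a stray dir object is one stray dentry.
    # `run` is injectable so the parsing can be exercised without a cluster.
    total = 0
    for obj in STRAY_DIR_OBJECTS:
        out = run(["rados", "-p", pool, "listomapkeys", obj])
        total += sum(1 for line in out.splitlines() if line.strip())
    return total
```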
Sent: … 8:20 PM
To: Davie De Smet
Cc: Mykola Dvornik; John Spray; ceph-users
Subject: Re: [ceph-users] CephFS: No space left on device
On Mon, Oct 10, 2016 at 9:06 AM, Davie De Smet wrote:
> Hi,
>
> I don't want to hijack this topic, but the behavior described below … 2.8M and
> growing. We do use hardlinks on our system. Any tips for actions that I can
> take to resolve this?
Hmm. How many hard links do you have? If you hard linked 2.8 million
files and then deleted the original links, you can resolve this by
doing something to open (or maybe even just touch) the remaining links.
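Gregory's suggestion above, opening (or perhaps just touching) the remaining links, can be automated with a plain tree walk. This is a sketch assuming the trashcan lives under one directory subtree; whether a `stat` suffices or a real `open` is needed is exactly the uncertainty Gregory flags, so the sketch does both:

```python
import os

def visit_remaining_links(root: str) -> int:
    """Stat and briefly open every file under `root`, giving the MDS a
    chance to reintegrate the matching stray dentries.
    Returns the number of files visited."""
    visited = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                os.stat(path)           # the cheap trigger
                with open(path, "rb"):  # the stronger trigger
                    pass
            except OSError:
                continue  # file disappeared mid-walk; skip it
            visited += 1
    return visited
```

Run against the trashcan root, this is essentially the loop Davie describes later in the thread.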
Kind regards,
Davie De Smet
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mykola Dvornik
Sent: Tuesday, October 4, 2016 2:07 PM
To: John Spray
Cc: ceph-users
Subject: Re: [ceph-users] CephFS: No space left on device
Which version did you use before upgrading to 10.2.3?
>
> Regards,
> Yan, Zheng
>
> > -Mykola
> >
> > From: Yan, Zheng
> > Sent: Thursday, 6 October 2016 04:48
> > To: Mykola Dvornik
> > Cc: John Spray; ceph-users
Is there any way to repair pgs/cephfs gracefully?
-Mykola
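On the "repair pgs gracefully" half of Mykola's question: for the inconsistent PGs themselves (as opposed to the MDS metadata), Jewel provides `rados list-inconsistent-pg <pool>` and `ceph pg repair <pgid>`. A sketch with an injectable command runner (the runner hook is an assumption for testability; whether repair is appropriate for a given PG remains an operator call):

```python
import json
import subprocess

def repair_inconsistent_pgs(pool, run=subprocess.check_output):
    # `rados list-inconsistent-pg <pool>` prints a JSON list of pgids.
    pgids = json.loads(run(["rados", "list-inconsistent-pg", pool]))
    for pgid in pgids:
        # Ask the OSDs to scrub-and-repair each inconsistent PG.
        run(["ceph", "pg", "repair", pgid])
    return pgids
```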
On Wed, Oct 5, 2016 at 2:27 PM, Mykola Dvornik wrote:
Hi Zheng,
Many thanks for your reply.
> This indicates the MDS metadata is corrupted. Did you do any unusual
> operation on the cephfs? (e.g. reset journal, create new fs using
> existing metadata pool)
No, nothing has been explicitly done to the MDS. I had a few inconsistent
PGs that belonged to the (3
On Mon, Oct 3, 2016 at 5:48 AM, Mykola Dvornik wrote:
> Hi Johan,
>
> Many thanks for your reply. I will try to play with the mds tunables and
> report back to you ASAP.
>
> So far I see that the mds log contains a lot of errors of the following kind:
>
> 2016-10-02 11:58:03.002769 7f8372d54700 0 mds.0.cache.dir(100056ddecd)
> _fetched badness: got (but i already had
To the best of my knowledge nobody used hardlinks within the fs.
So I have unmounted everything to see what would happen:

[root@005-s-ragnarok ragnarok]# ceph daemon mds.fast-test session ls
[]

(truncated perf-counter column headers, apparently from `ceph daemonperf`:)
-mds-- --mds_server-- ---objecter--- -mds_cache- ---mds_log
rlat inos caps|hsr hcs hcr |wri
(Re-adding list)
The 7.5k stray dentries while idle probably indicate that clients
are holding onto references to them (unless you unmount the clients
and they don't purge, in which case you may well have found a bug).
The other way you can end up with lots of dentries sitting in stray
dirs is if you have hard links …
On Sun, Oct 2, 2016 at 11:09 AM, Mykola Dvornik wrote:
> After upgrading to 10.2.3 we frequently see messages like
From which version did you upgrade?
> 'rm: cannot remove '...': No space left on device
>
> The folders we are trying to delete contain approx. 50K files of 193 KB each.
My guess would be …
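For context on the error itself: the ENOSPC on `rm` typically comes from the MDS refusing to add another entry to an already-full stray directory, whose size is bounded by the per-dirfrag cap `mds_bal_fragment_size_max` (default 100000 in Jewel). If the strays cannot be drained fast enough, the cap can be raised as a stopgap; the value below is only an illustrative assumption, not a recommendation:

```ini
[mds]
; Illustrative value: raises the per-directory-fragment entry cap,
; which also bounds each of the ten stray directories.
mds_bal_fragment_size_max = 200000
```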