On Wed, Oct 12, 2016 at 7:18 AM, Davie De Smet wrote:
> Hi Gregory,
>
> Thanks for the help! I've been looping over all trashcan files and the amount
> of strays is lowering. This is going to take quite some time, as there are a
> lot of files, but so far so good. If I
From: Yan, Zheng
To: Davie De Smet <davie.des...@nomadesk.com>
Cc: Gregory Farnum <gfar...@redhat.com>; ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] CephFS: No space left on device

I have written a tool that fixes this type of error. I'm currently testing it.
Will push it out tomorrow.

Regards
Yan, Zheng
Cc: Mykola Dvornik <mykola.dvor...@gmail.com>; John Spray <jsp...@redhat.com>;
ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] CephFS: No space left on device

On Tue, Oct 11, 2016 at 12:20 AM, Davie De Smet <davie.des...@nomadesk.com>
wrote:
> Hi,
>
> We do use hardlinks a lot. The application using the cluster has a built-in
> 'trashcan' functionality based on hardlinks. Obviously, all removed files and
> hardlinks are no longer visible on the
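The trashcan scheme Davie describes can be sketched roughly as follows. This is a hypothetical minimal sketch, not Nomadesk's actual implementation: "deleting" a file links it into a trashcan directory and unlinks the original name, so the inode survives. On CephFS the unlinked original dentry becomes a "stray" that the MDS must keep tracking, which is how these pile up.

```python
import os

def trash_delete(path, trashcan_dir):
    """'Delete' a file by hardlinking it into the trashcan and removing
    the original name. The inode and its data survive as long as the
    trashcan link exists; on CephFS the MDS now tracks a stray dentry."""
    os.makedirs(trashcan_dir, exist_ok=True)
    trashed = os.path.join(trashcan_dir, os.path.basename(path))
    os.link(path, trashed)  # second name for the same inode
    os.unlink(path)         # original dentry goes away
    return trashed
```

Emptying the trashcan later (unlinking the last name) is what finally lets the inode be purged.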
To: Davie De Smet <davie.des...@nomadesk.com>
Cc: Mykola Dvornik <mykola.dvor...@gmail.com>; John Spray <jsp...@redhat.com>;
ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] CephFS: No space left on device

On Mon, Oct 10, 2016 at 9:06 AM, Davie De Smet <davie.des...
> hardlinks on our system. Any tips for actions that I can
> take to resolve this?

Hmm. How many hard links do you have? If you hard linked 2.8 million
files and then deleted the original links, you can resolve this by
doing something to open (or maybe even just touch?) the remaining
links (presumab
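Greg's suggestion (open or touch each surviving link so the MDS looks the inode up and can reintegrate its stray dentry) could be scripted along these lines. The trashcan root path is a placeholder, and whether a plain `stat` suffices or a real `open` is needed is exactly the open question above, so this sketch opens each file to be safe:

```python
import os

def poke_remaining_links(trashcan_root):
    """Walk the trashcan and open each surviving hardlink read-only.
    On CephFS this forces the MDS to look up the inode, giving it a
    chance to reintegrate the corresponding stray dentry."""
    poked = 0
    for dirpath, _dirnames, filenames in os.walk(trashcan_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb"):
                    pass  # opening is enough; no need to read data
                poked += 1
            except OSError:
                pass      # file vanished mid-walk; skip it
    return poked
```

For millions of files this walk is slow, which matches Davie's "this is going to take quite some time" above.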
Davie De Smet

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mykola
Dvornik
Sent: Tuesday, October 4, 2016 2:07 PM
To: John Spray <jsp...@redhat.com>
Cc: ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] CephFS: No space left on device

To my best knowledge nobody used hardlinks within fs.
> ...this type of corruption.
>
> Which version of ceph did you use before upgrading to 10.2.3?
>
> Regards
> Yan, Zheng
Is there any way to repair pgs/cephfs gracefully?
-Mykola
From: Yan, Zheng
Sent: Thursday, 6 October 2016 04:48
To: Mykola Dvornik
Cc: John Spray; ceph-users
Subject: Re: [ceph-users] CephFS: No space left on device
On Wed, Oct 5, 2016 at 2:27 PM, Mykola Dvornik <mykola.dvor...@gmail.com>
wrote:
Hi Zheng,

Many thanks for your reply.

> This indicates the MDS metadata is corrupted. Did you do any unusual
> operation on the cephfs? (e.g. reset journal, create new fs using
> existing metadata pool)

No, nothing has been explicitly done to the MDS. I had a few inconsistent
PGs that belonged to the
To my best knowledge nobody used hardlinks within fs.
So I have unmounted everything to see what would happen:
[root@005-s-ragnarok ragnarok]# ceph daemon mds.fast-test session ls
[]
[truncated perf-counter output: only the column headers survived (mds,
mds_server, objecter, mds_cache, mds_log; rlat inos caps ...)]
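The stray counts being discussed can also be read directly from the MDS perf counters rather than the live `daemonperf` view. A sketch, assuming the JSON layout of `ceph daemon mds.<name> perf dump` as seen on Jewel (10.2.x), where the stray gauges live under the `mds_cache` section; counter names can differ between releases:

```python
import json
import subprocess

def extract_strays(perf):
    """Pull the stray-dentry gauges (num_strays, num_strays_purging, ...)
    out of an already-parsed perf dump dict. Section/counter names are
    as observed on Jewel; treat them as an assumption."""
    cache = perf.get("mds_cache", {})
    return {k: v for k, v in cache.items() if k.startswith("num_strays")}

def stray_counts(mds_name):
    """Query the local MDS admin socket and return its stray gauges."""
    out = subprocess.check_output(
        ["ceph", "daemon", "mds." + mds_name, "perf", "dump"])
    return extract_strays(json.loads(out))
```

Watching these numbers fall while looping over the trashcan files (as Davie does above) confirms the reintegration is working.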
(Re-adding list)
The 7.5k stray dentries while idle is probably indicating that clients
are holding onto references to them (unless you unmount the clients
and they don't purge, in which case you may well have found a bug).
The other way you can end up with lots of dentries sitting in stray
dirs
Hi Johan,
Many thanks for your reply. I will try to play with the mds tunables and
report back to you ASAP.
So far I see that mds log contains a lot of errors of the following kind:
2016-10-02 11:58:03.002769 7f8372d54700 0 mds.0.cache.dir(100056ddecd)
_fetched badness: got (but i already
On Sun, Oct 2, 2016 at 11:09 AM, Mykola Dvornik
wrote:
> After upgrading to 10.2.3 we frequently see messages like
From which version did you upgrade?
> 'rm: cannot remove '...': No space left on device
>
> The folders we are trying to delete contain approx. 50K files
After upgrading to 10.2.3 we frequently see messages like
'rm: cannot remove '...': No space left on device
The folders we are trying to delete contain approx. 50K files of 193 KB each.
The cluster state and storage available are both OK:
cluster 98d72518-6619-4b5c-b148-9a781ef13bcb
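As a quick sanity check on those numbers (my arithmetic, not from the thread): 50K files of 193 KB each is only about 9 GiB per folder, so the ENOSPC on `rm` cannot be a raw-capacity problem on the OSDs; it is the MDS refusing the unlink.

```python
# Rough data size of one such folder; the ENOSPC is not about storage.
files = 50_000
size_kb = 193
total_gib = files * size_kb / (1024 * 1024)  # KB -> GiB
print(round(total_gib, 1))  # ~9.2 GiB per folder
```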