On Sat, Oct 15, 2016 at 1:36 AM, Heller, Chris wrote:
> Just a thought, but since a directory tree is a first-class item in cephfs,
> could the wire protocol be extended with a “recursive delete” operation,
> specifically for cases like this?
In principle yes, but the …
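For comparison, what clients have to do today is walk the tree themselves, paying a round trip per entry. Here is a minimal sketch against the libcephfs Java bindings (com.ceph.fs.CephMount), which is what cephfs-hadoop sits on; treat it as illustrative rather than the actual client code:

import com.ceph.fs.CephMount;
import com.ceph.fs.CephStat;

// Illustrative sketch, not the cephfs-hadoop code: depth-first delete with
// roughly one MDS round trip per entry (modulo client-side caching/caps).
// A wire-level "recursive delete" would collapse all of these round trips
// into a single request and let the MDS walk the tree server-side.
public class RecursiveDelete {
    static void deleteTree(CephMount mount, String dir) throws Exception {
        for (String name : mount.listdir(dir)) {   // readdir
            String path = dir + "/" + name;
            CephStat st = new CephStat();
            mount.lstat(path, st);                 // stat to classify entry
            if (st.isDir()) {
                deleteTree(mount, path);           // recurse into subdirs
            } else {
                mount.unlink(path);                // one unlink per file
            }
        }
        mount.rmdir(dir);                          // remove the now-empty dir
    }
}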
> From: Gregory Farnum
> Sent: Saturday, 15 October 2016 05:02
> To: Mykola Dvornik
> Cc: Heller, Chris; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] cephfs slow delete
>
> On Fri, Oct 14, 2016 at 6:26 PM, <mykola.dvor...@gmail.com> wrote:
>> If you …

… have size of 16EB. Those are impossible to delete as they are ‘non empty’.

-Mykola
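A size of 16EB looks like an underflowed 64-bit counter: cephfs reports a directory's recursive byte count (rbytes) as its size, so stale recursive accounting would show up exactly this way. One way to see what the MDS thinks is in such a directory is the recursive-stat virtual xattrs. A rough sketch, assuming the getxattr signature from the libcephfs Java bindings (the xattr names are the standard cephfs ones):

import com.ceph.fs.CephMount;

// Hedged sketch: read cephfs' recursive-stat virtual xattrs to see what
// the MDS thinks a directory contains. Assumes the libcephfs Java
// bindings' getxattr(path, name, buf) signature.
public class DirStats {
    static long readXattr(CephMount mount, String path, String name) throws Exception {
        byte[] buf = new byte[64];
        long len = mount.getxattr(path, name, buf);
        return Long.parseLong(new String(buf, 0, (int) len).trim());
    }

    public static void main(String[] args) throws Exception {
        CephMount mount = new CephMount("admin");
        mount.conf_read_file("/etc/ceph/ceph.conf");
        mount.mount("/");
        String dir = args[0];
        // rentries should be 0 for a truly empty directory; a huge rbytes
        // with rentries == 0 points at stale recursive accounting.
        System.out.println("rentries = " + readXattr(mount, dir, "ceph.dir.rentries"));
        System.out.println("rbytes   = " + readXattr(mount, dir, "ceph.dir.rbytes"));
        mount.unmount();
    }
}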
From: Gregory Farnum
Sent: Saturday, 15 October 2016 05:02
To: Mykola Dvornik
Cc: Heller, Chris; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs slow delete

On Fri, Oct 14, 2016 at 6:26 PM, <mykola.dvor...@gmail.com> wrote:
> If you are running 10.2.3 on your cluster, then I would strongly recommend
> NOT deleting files in parallel, as you might hit
> http://tracker.ceph.com/issues/17177

I don't think these have anything to do with each other. What …
Just a thought, but since a directory tree is a first-class item in cephfs,
could the wire protocol be extended with a “recursive delete” operation,
specifically for cases like this?

On 10/14/16, 4:16 PM, "Gregory Farnum" <gfar...@redhat.com> wrote:
On Fri, Oct 14, 2016 at 1:11 PM, Heller, Chris wrote:
> Ok. Since I’m running through the Hadoop/ceph api, there is no syscall
> boundary, so there is a simple place to improve the throughput here. Good to
> know, I’ll work on a patch…

Ah yeah, if you're in whatever they …
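For what it's worth, the shape such a patch usually takes is overlapping the unlink round trips, since the MDS can service far more requests per second than one-at-a-time latency suggests. A rough sketch of the idea (illustrative names, not the actual cephfs-hadoop code, and assuming the mount handle tolerates concurrent calls), with the caveat from elsewhere in the thread that parallel deletes on 10.2.3 can hit http://tracker.ceph.com/issues/17177:

import com.ceph.fs.CephMount;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: overlap unlink round trips so total time approaches
// (files / concurrency) * RTT instead of files * RTT. Directories still
// have to be removed bottom-up after their contents are gone. If the
// mount handle is not safe for concurrent use, serialize on it or use
// per-thread mounts instead.
public class ParallelUnlink {
    static void unlinkAll(CephMount mount, List<String> files, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> pending = new ArrayList<>();
        for (String path : files) {
            pending.add(pool.submit(() -> {
                mount.unlink(path);   // each unlink round trip now overlaps
                return null;
            }));
        }
        for (Future<?> f : pending) {
            f.get();                  // surface any unlink failure
        }
        pool.shutdown();
    }
}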
Ok. Since I’m running through the Hadoop/ceph api, there is no syscall boundary,
so there is a simple place to improve the throughput here. Good to know, I’ll
work on a patch…
On 10/14/16, 3:58 PM, "Gregory Farnum" wrote:
On Fri, Oct 14, 2016 at 11:41 AM, Heller, Chris wrote:
> Unfortunately, it was all in the unlink operation. Looks as if it took nearly
> 20 hours to remove the dir; the roundtrip is a killer there. What can be done
> to reduce RTT to the MDS? Does the client really have to sequentially delete
> directories, or can it have internal batching or parallelization?

…
Unfortunately, it was all in the unlink operation. Looks as if it took nearly
20 hours to remove the dir; the roundtrip is a killer there. What can be done to
reduce RTT to the MDS? Does the client really have to sequentially delete
directories, or can it have internal batching or parallelization?
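Some rough numbers on why this reads as pure round-trip latency, assuming 2 million files purely for illustration:

20 hours ≈ 72,000 s
2,000,000 unlinks / 72,000 s ≈ 28 unlinks/s ≈ 36 ms per unlink

That is about the cost of one synchronous client-to-MDS round trip per file, which is exactly the term that batching or pipelining the requests would hide.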
On Thu, Oct 13, 2016 at 12:44 PM, Heller, Chris wrote:
> I have a directory I’ve been trying to remove from cephfs (via
> cephfs-hadoop); the directory is a few hundred gigabytes in size and
> contains a few million files, but not in a single sub-directory. I started
> the delete yesterday at around 6:30 EST, and it’s still progressing. …
I have a directory I’ve been trying to remove from cephfs (via cephfs-hadoop);
the directory is a few hundred gigabytes in size and contains a few million
files, but not in a single sub-directory. I started the delete yesterday at
around 6:30 EST, and it’s still progressing. I can see from (ceph …