"Mikko Lammi" <mikko.la...@lmmz.net> wrote:

> Hello,
>
> As a result of one badly designed application running loose for some time,
> we now seem to have over 60 million files in one directory. The good thing
> about ZFS is that it allows this without any issues. Unfortunately, now that
> we need to get rid of them (because they eat 80% of the disk space), it is
> proving quite challenging.
>
> Traditional approaches like "find ./ -exec rm {} \;" seem to take forever
> - after running for several days, the directory size still stays the same.
> The only way I've been able to remove anything has been by running "rm -rf"
> on the problematic directory from the parent level. This shows the directory
> size decreasing by about 10,000 files/hour, but at that rate it would still
> take over eight months (roughly 250 days) to delete everything!

Do you know at what number of files it really starts to become unusably slow?
I had directories with 3 million files on UFS, and they were only a bit slower
than small directories.

BTW: "find ./ -exec rm {} \;" is definitely the wrong command as it is known 
since a long time to take forever. This is why "find ./ -exec rm {} +" was 
introduced 20 years ago.
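
To make the difference concrete, a rough sketch (the -print0/-0 pair in
the last variant is a common extension, not POSIX, so check whether your
find and xargs support it):

  # -exec ... \; forks one rm process per file, so 60 million
  # files means 60 million fork/exec cycles:
  find . -type f -exec rm {} \;

  # -exec ... + packs as many file names as fit (up to ARG_MAX)
  # into each rm invocation, cutting this to a few thousand
  # rm processes in total:
  find . -type f -exec rm {} +

  # Equivalent batching via xargs; the NUL separators keep
  # unusual file names safe:
  find . -type f -print0 | xargs -0 rm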

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       j...@cs.tu-berlin.de                (uni)  
       joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily