Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-25 Thread Jason King
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens matthew.ahr...@sun.com wrote: Michael Schuster wrote: Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-20 Thread Miles Nordin
ml == Mikko Lammi mikko.la...@lmmz.net writes: ml rm -rf to problematic directory from parent level. Running ml this command shows directory size decreasing by 10,000 ml files/hour, but this would still mean close to ten months ml (over 250 days) to delete everything!
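The 250-day estimate quoted above can be checked with quick shell arithmetic (the file count and deletion rate are the figures from the post):

```shell
# 60 million files at ~10,000 deletions/hour (figures from the post)
files=60000000
rate=10000
hours=$((files / rate))   # 6000 hours
days=$((hours / 24))      # 250 days
echo "$days days"
```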

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-19 Thread Matthew Ahrens
Michael Schuster wrote: Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Markus Kovero
Hi, while not providing complete solution, I'd suggest turning atime off so find/rm does not change access time and possibly destroying unnecessary snapshots before removing files, should be quicker. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org
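As a sketch, the two suggestions above (disable atime, destroy snapshots that pin the deleted blocks) would look roughly like this; the dataset name "tank/data" and snapshot name "oldsnap" are placeholders, not names from the thread:

```shell
# Illustrative only -- "tank/data" and "oldsnap" are hypothetical names.
zfs set atime=off tank/data          # stop rm/find from updating access times
zfs list -t snapshot -r tank/data    # review snapshots still referencing the files
zfs destroy tank/data@oldsnap        # free the space held by an unneeded snapshot
```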

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mike Gerdts
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues.

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mikko Lammi
On Tue, January 5, 2010 17:08, David Magda wrote: On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk
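The three numbered steps above, as a shell sketch (paths are the example ones from the post). Note the caveat raised later in the thread: mv across dataset boundaries is a copy-then-unlink, not a cheap rename, so step 2 still touches every file:

```shell
zfs create rpool/junk            # 1. create a scratch dataset
mv /opt/MYapp/data /rpool/junk/  # 2. crosses a filesystem boundary: copies, then unlinks
zfs destroy rpool/junk           # 3. destroying the whole dataset is fast
```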

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Dennis Clarke
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:50, Michael Schuster wrote: David Magda wrote: Normally when you do a move with-in a 'regular' file system all that's usually done is the directory pointer is shuffled around. This is not the case with ZFS data sets, even though they're on the same pool? no - mv

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 2:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
Michael Schuster michael.schus...@sun.com wrote: rm -rf would be at least as quick. Normally when you do a move with-in a 'regular' file system all that's usually done is the directory pointer is shuffled around. This is not the case with ZFS data sets, even though they're on the same

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
no - mv doesn't know about zpools, only about posix filesystems. mv doesn't care about filesystems, only about the interface provided by POSIX. There is no zfs-specific interface which allows you to move a file from one zfs to the next. Casper


Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down to 25 days, 100x would get it down to 2.5 days (assuming

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Tim Cook
On Tue, Jan 5, 2010 at 11:25 AM, Richard Elling richard.ell...@gmail.comwrote: On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Daniel Rock
Am 05.01.2010 16:22, schrieb Mikko Lammi: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the rm -rf directory method started to perform significantly faster. Now it's deleting around 4,000 files/minute (240,000/h - quite an
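At the improved rate the numbers in the post work out as follows:

```shell
# 4,000 files/minute after freeing disk space (figure from the post)
per_min=4000
per_hour=$((per_min * 60))       # 240000 files/hour
hours=$((60000000 / per_hour))   # 250 hours for 60 million files
echo "$per_hour files/hour, ~$((hours / 24)) days total"
```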

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joe Blount
On 01/ 5/10 10:01 AM, Richard Elling wrote: How are the files named? If you know something about the filename pattern, then you could create subdirs and mv large numbers of files to reduce the overall size of a single directory. Something like: mkdir .A mv A* .A mkdir .B mv B*
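The prefix-bucketing idea above can be demonstrated in a self-contained sandbox (a temporary directory stands in for the real /opt/MYapp/data, and the file names are invented):

```shell
# Demo of splitting a flat directory into per-prefix subdirectories
# so that no single directory grows huge. Sandbox paths only.
dir=$(mktemp -d)
touch "$dir"/A1 "$dir"/A2 "$dir"/B1
for p in A B; do
    mkdir "$dir/.$p"
    mv "$dir/$p"* "$dir/.$p"/
done
ls "$dir/.A"   # the A-prefixed files now live here
```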

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Paul Gress
On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of them

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Paul Gress wrote: On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Fajar A. Nugraha
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster michael.schus...@sun.com wrote: we need to get rid of them (because they eat 80% of disk space) it seems to be quite challenging. I've been following this thread. Would it be faster to do the reverse? Copy the 20% of disk, then format, then

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:52 AM, Daniel Rock wrote: Am 05.01.2010 16:22, schrieb Mikko Lammi: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the rm -rf directory method started to perform significantly faster. Now it's deleting

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:25, Richard Elling wrote: On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: It's interesting how our ability to build larger disks, and our software's ability to do things like create really large numbers of files, comes back to bite us on the ass every now