On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory.
ml == Mikko Lammi mikko.la...@lmmz.net writes:
ml rm -rf on the problematic directory from the parent level. Running
ml this command shows the directory size decreasing by 10,000
ml files/hour, but this would still mean close to ten months
ml (over 250 days) to delete everything!
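As a sanity check on the estimate above (the arithmetic is mine, not from the thread), 60 million files at 10,000 files/hour works out as:

```shell
echo $(( 60000000 / 10000 ))       # 6000 hours of deletion
echo $(( 60000000 / 10000 / 24 ))  # 250 days, matching the quoted figure
```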
Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately, now that
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
Hi, while not providing a complete solution, I'd suggest turning atime off so
find/rm does not change access times, and possibly destroying unnecessary
snapshots before removing the files; that should be quicker.
Yours
Markus Kovero
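A minimal sketch of the two suggestions above; the dataset and snapshot names are invented examples, not from the thread:

```shell
# Stop updating access times on the dataset holding the huge directory
# (dataset name is an example):
zfs set atime=off rpool/data

# See which snapshots still hold references to the doomed files,
# then destroy the ones that are no longer needed:
zfs list -t snapshot -r rpool/data
zfs destroy rpool/data@oldsnap
```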
On Tue, January 5, 2010 10:12, casper@sun.com wrote:
How about creating a new data set, moving the directory into it, and then
destroying it?
Assuming the directory in question is /opt/MYapp/data:
1. zfs create rpool/junk
2. mv /opt/MYapp/data /rpool/junk/
3. zfs destroy rpool/junk
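Annotated, the recipe above looks like this (paths and dataset names follow the example in the post); note that step 2 crosses a dataset boundary, which matters for how mv behaves:

```shell
zfs create rpool/junk            # 1. scratch dataset on the same pool
mv /opt/MYapp/data /rpool/junk/  # 2. crosses a dataset boundary, so mv
                                 #    copies and unlinks rather than renames
zfs destroy rpool/junk           # 3. destroying a whole dataset is fast
```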
On Tue, January 5, 2010 10:50, Michael Schuster wrote:
David Magda wrote:
Normally when you do a move within a 'regular' file system, all that's
usually done is that the directory pointer is shuffled around. This is not
the case with ZFS data sets, even though they're on the same pool?
no - mv doesn't know about zpools, only about posix filesystems.
Michael Schuster michael.schus...@sun.com wrote:
rm -rf would be at least as quick.
Normally when you do a move within a 'regular' file system, all that's
usually done is that the directory pointer is shuffled around. This is not
the case with ZFS data sets, even though they're on the same pool?
no - mv doesn't know about zpools, only about posix filesystems.
mv doesn't care about filesystems, only about the interface provided by
POSIX. There is no ZFS-specific interface which allows you to move a file
from one zfs to the next.
Casper
On Tue, January 5, 2010 10:01, Richard Elling wrote:
OTOH, if you can reboot you can also run the latest
b130 livecd which has faster stat().
How much faster is it? He estimated 250 days to rm -rf them; so 10x
faster would get that down to 25 days, and 100x would get it down to
2.5 days (assuming the time is dominated by stat()).
On 05.01.2010 16:22, Mikko Lammi wrote:
However, when we deleted some other files from the volume and managed to
raise free disk space from 4 GB to 10 GB, the "rm -rf directory" method
started to perform significantly faster. Now it's deleting around 4,000
files/minute (240,000/hour - quite an improvement).
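At the new rate the back-of-envelope arithmetic (mine, not from the thread) changes considerably, assuming all 60 million files were still left:

```shell
echo $(( 60000000 / 240000 ))       # 250 hours at 240,000 files/hour
echo $(( 60000000 / 240000 / 24 ))  # roughly 10 days instead of 250
```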
On 01/ 5/10 10:01 AM, Richard Elling wrote:
How are the files named? If you know something about the filename
pattern, then you could create subdirs and mv large numbers of files
to reduce the overall size of a single directory. Something like:
mkdir .A
mv A* .A
mkdir .B
mv B* .B
...and so on for each leading character.
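The bucketing idea above can be sketched as a self-contained demo; the scratch directory and file names are invented, and the real prefix list would depend on the application's naming pattern:

```shell
# Demo in a scratch directory, NOT the real 60M-file directory.
demo=$(mktemp -d)
cd "$demo"
touch A1 A2 B1 C1          # stand-ins for the real files
for p in A B C; do         # one hidden bucket per leading character;
  mkdir -p ".$p"           # hidden .X dirs don't match the X* glob,
  mv "$p"* ".$p"/          # so the loop won't try to move them
done
ls "$demo/.A"              # A1 and A2 ended up in the .A bucket
```

The point of the hidden-dot names in the original suggestion is exactly the glob trick noted in the comments: mv A* .A cannot accidentally pick up the bucket directory itself.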
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster
michael.schus...@sun.com wrote:
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
I've been following this thread. Would it be faster to do the reverse:
copy the 20% of the disk elsewhere, then reformat and copy it back?
On Tue, January 5, 2010 10:25, Richard Elling wrote:
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
It's interesting how our ability to build larger disks, and our software's
ability to do things like create really large numbers of files, comes
back to bite us on the ass every now and then.