Re: Crash when copying large files

2011-09-13 Thread Toomas Aas

On Tue, 13 Sep 2011, Chuck Swiger wrote:

> If you want a workaround to avoid the crash, consider using either
> rsync or dump/restore to copy the filesystem, rather than using tar.


Just to let everyone know, rsync worked fine. Of course there is still  
some underlying problem, because the system shouldn't panic when using  
tar, but considering that this is FreeBSD 7.3 it is probably not worth  
investigating now that 9 is almost released.
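
(For the archives: the exact invocation wasn't posted, but a minimal
local copy along these lines would do it; the -aH and --numeric-ids
flags are assumed from rsync's standard options, not confirmed from
the thread:

  rsync -aH --numeric-ids /docroot/ /mnt/

The trailing slash on /docroot/ makes rsync copy the directory's
contents into /mnt rather than creating /mnt/docroot.)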


Thanks everyone for the suggestions.

--
Toomas Aas



Re: Crash when copying large files

2011-09-13 Thread Toomas Aas

Hello Chuck!


> How big are your multi-GB files, anyway?


They range from approximately 1 to 4 GB.



> If you want a workaround to avoid the crash, consider using either
> rsync or dump/restore to copy the filesystem, rather than using tar.




Thanks for the suggestions, I'll try rsync. My initial idea was to
copy most of the files (which are fairly static) over with tar, which
takes about 1.5 hours, and then run rsync to catch any changes that
happened during that 1.5-hour window. I may as well just use rsync for
the entire process.
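
In sketch form, the all-rsync version of that plan would look
something like this (untested here; the flags are rsync's standard
-aH and --numeric-ids, plus --delete on the second pass):

  # first pass while the web server is still running
  rsync -aH --numeric-ids /docroot/ /mnt/
  # stop the web server, then a quick second pass to pick up the deltas
  rsync -aH --numeric-ids --delete /docroot/ /mnt/

The --delete on the second pass removes anything from /mnt that
disappeared from /docroot during the first pass.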


Past experience on other FreeBSD systems has taught me to avoid  
dump/restore for large filesystems, because it seems to be an order of  
magnitude slower than tar.


--
Toomas Aas



Re: Crash when copying large files

2011-09-12 Thread Gary Gatten
FTP the large files, then tar? I like the rsync idea too.

- Original Message -
From: Chuck Swiger [mailto:cswi...@mac.com]
Sent: Monday, September 12, 2011 06:42 PM
To: Toomas Aas 
Cc: questi...@freebsd.org 
Subject: Re: Crash when copying large files



Re: Crash when copying large files

2011-09-12 Thread Chuck Swiger
Hi--

On Sep 12, 2011, at 2:14 PM, Toomas Aas wrote:
> I've mounted the new FS under /mnt and use tar to transfer the files:
> 
> cd /mnt
> tar -c -v -f - -C /docroot . | tar xf -

You probably wanted the -p flag on the extract side.
The manpage recommends one of the following constructs:

 To move file hierarchies, invoke tar as
   tar -cf - -C srcdir . | tar -xpf - -C destdir
 or more traditionally
   cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
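
Applied to your paths (with /docroot as the source and /mnt as the
destination, per your post), that would be:

   tar -cf - -C /docroot . | tar -xpf - -C /mnt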

However, this isn't going to resolve the system panic'ing.
Certainly, that's not a reasonable behavior...  :-)

> It seems that these large files cause a problem. Sometimes when the process 
> reaches one of these files, the machine reboots. It doesn't create a 
> crashdump in /var/crash, which may be because the system has less swap (2 GB) 
> than RAM (8 GB). Fortunately the machine comes back up OK, except that the 
> target FS (/mnt) is corrupt and needs to be fsck'd. I've tried to re-run the 
> process three times now, and caused the machine to crash as it reaches one or 
> another large file. Any ideas what I should do to avoid the crash?

Right, a machine with 8GB of RAM isn't going to be able to dump to a 2GB swap 
area.  (Although, I seem to recall some folks working on compressed crash 
dumps, but I don't know what state that is in.)  But you can set hw.physmem in 
loader.conf to limit the RAM being used to 2GB so you can generate a crash dump 
if you wanted to debug it further.
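
Concretely, that would be something like the following in
/boot/loader.conf (a sketch; the "2G" shorthand is accepted by the
loader):

   hw.physmem="2G"

plus a dump device so savecore has somewhere to look, e.g. in
/etc/rc.conf:

   dumpdev="AUTO"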

How big are your multi-GB files, anyway?

If you want a workaround to avoid the crash, consider using either rsync or 
dump/restore to copy the filesystem, rather than using tar.

Regards,
-- 
-Chuck



Re: Crash when copying large files

2011-09-12 Thread Polytropon
On Tue, 13 Sep 2011 00:14:45 +0300, Toomas Aas wrote:
> Hello!
> 
> I'm trying to move a filesystem to a new larger RAID volume. The old  
> filesystem was using gjournal, and I have also created the new  
> filesystem with gjournal. The FS in question holds the DocumentRoot of  
> our web server, and in its depths, a couple of fairly large (several  
> gigabytes) files are lurking.
> 
> I've mounted the new FS under /mnt and use tar to transfer the files:
> 
> cd /mnt
> tar -c -v -f - -C /docroot . | tar xf -
> 
> It seems that these large files cause a problem. Sometimes when the  
> process reaches one of these files, the machine reboots. It doesn't  
> create a crashdump in /var/crash, which may be because the system has  
> less swap (2 GB) than RAM (8 GB). Fortunately the machine comes back  
> up OK, except that the target FS (/mnt) is corrupt and needs to be  
> fsck'd. I've tried to re-run the process three times now, and caused  
> the machine to crash as it reaches one or another large file. Any  
> ideas what I should do to avoid the crash?

The tar program operates on a per-file basis. In case that is what
causes the problem, try leaving this route and using the
"old-fashioned" tools dump and restore.

Make sure the source file system isn't mounted, then use:

# cd /your/target/directory
# dump -0 -f - /dev/<device> | restore -r -f -

where <device> refers to the device holding the source
file system (the one /docroot lives on).
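
With the paths from the original post that would be roughly (a
sketch; <olddevice> is a placeholder for whatever device the old
/docroot file system lives on):

# cd /mnt
# dump -0 -f - /dev/<olddevice> | restore -r -f -

Note that restore -r is meant to populate a freshly newfs'd file
system.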




-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Crash when copying large files

2011-09-12 Thread Toomas Aas

Hello!

I'm trying to move a filesystem to a new larger RAID volume. The old  
filesystem was using gjournal, and I have also created the new  
filesystem with gjournal. The FS in question holds the DocumentRoot of  
our web server, and in its depths, a couple of fairly large (several  
gigabytes) files are lurking.


I've mounted the new FS under /mnt and use tar to transfer the files:

cd /mnt
tar -c -v -f - -C /docroot . | tar xf -

It seems that these large files cause a problem. Sometimes when the  
process reaches one of these files, the machine reboots. It doesn't  
create a crashdump in /var/crash, which may be because the system has  
less swap (2 GB) than RAM (8 GB). Fortunately the machine comes back  
up OK, except that the target FS (/mnt) is corrupt and needs to be  
fsck'd. I've tried to re-run the process three times now, and caused  
the machine to crash as it reaches one or another large file. Any  
ideas what I should do to avoid the crash?


The OS version is FreeBSD 7.3 (amd64).

--
Toomas Aas
