Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-21 Thread Otto Moerbeek
On Thu, 21 Dec 2006, Matthias Bertschy wrote:

 Matthias Bertschy wrote:
  Otto Moerbeek wrote:
   Ok, I assume you no longer have the core file you generated earlier. If
   there's a bug in pax, I'd really like to fix it... I'll see if I can
   reproduce the problem on a file system with lots of links and while
   giving pax little memory.
   
   -Otto
  Unfortunately not :-(
  But even if the current move succeeds, I will make another run without
  increasing the memory in login.conf and provide you with the core dump.
  
  Thanks for your support :-)
  
  Matthias
  
 pax has been running since Monday; given its current speed it won't be done
 until the new year...
 Anyway, I'll keep you informed.

Hmmm, I would like a copy of your filesystem to diagnose this...
But that's probably not feasible.

Anyway, since you previously mentioned that dump(8) worked but
restore(8) ran out of memory, you could try running restore(8) with the
larger memory allocation you now have set up properly.
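
Something like this, with the dump file name and the new disk's mount
point as placeholders:

    ulimit -d                      # verify the new data size limit took effect
    cd /mnt/newdisk
    restore -rf /path/to/dumpfile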

-Otto



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-19 Thread Matthias Bertschy

Otto Moerbeek wrote:

Ok, I assume you no longer have the core file you generated earlier. If
there's a bug in pax, I'd really like to fix it... I'll see if I can
reproduce the problem on a file system with lots of links and while
giving pax little memory.

-Otto

Unfortunately not :-(
But even if the current move succeeds, I will make another run without
increasing the memory in login.conf and provide you with the core dump.


Thanks for your support :-)

Matthias



Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Matthias Bertschy

OpenBSD 3.7 - i386
Pentium 4 3GHz - 1GB RAM - 2GB swap

Hello list,

For the past 3 weeks, I have been working on a difficult problem: moving 
a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big 
disk, in order to free the RAID0 before rebuilding a RAID5.


The RAID0 has one partition, its size is 2112984700 512-byte blocks,
roughly 1008GB, which is close to the maximum allowed by ffs. The big
disk is 300GB.


I need to move 96GB of data which is, due to backuppc's design, full of
hardlinks!


So far, I have tried to use:
   1) dd: impossible because the partitions cannot be the same size
(and the RAID5 won't be the same size as the RAID0)
   2) pax -rw: after transferring almost 70GB, it bails out with a
Segmentation fault
   3) tar to archive: after something like 60GB, it complains with some
"file name too long" errors
   4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
with a "gtar: memory exhausted" error

   5) dump to file: successful, but
   5') restore from file: stops even before starting due to a "no
memory for entry table" error (there is still a lot of unused memory and
swap - and no ulimit)


Any help is appreciated because I really don't know what to do next.

Matthias Bertschy
Echo Technologies SA



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Tim Pushor
Have you tried using cpio in pass-through mode? I've used cpio on big
systems before with success, although admittedly not on OpenBSD...
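
Something along these lines, with the source and destination paths as
placeholders:

    cd /source && find . -print | cpio -pdum /destination

(-p is pass-through mode, -d creates missing directories, -u overwrites
existing files, -m keeps modification times; hard links among the copied
files should be preserved in pass mode. One caveat: on OpenBSD cpio is
built from the same pax code, so it may well run into the same limit
that pax did.)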


Matthias Bertschy wrote:

OpenBSD 3.7 - i386
Pentium 4 3GHz - 1GB RAM - 2GB swap

Hello list,

For the past 3 weeks, I have been working on a difficult problem: 
moving a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 
to a big disk, in order to free the RAID0 before rebuilding a RAID5.


The RAID0 has one partition, its size is 2112984700 512-byte blocks,
roughly 1008GB, which is close to the maximum allowed by ffs. The big
disk is 300GB.


I need to move 96GB of data which is, due to backuppc's design, full of
hardlinks!


So far, I have tried to use:
   1) dd: impossible because the partitions cannot be the same size
(and the RAID5 won't be the same size as the RAID0)
   2) pax -rw: after transferring almost 70GB, it bails out with a
Segmentation fault
   3) tar to archive: after something like 60GB, it complains with
some "file name too long" errors
   4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
with a "gtar: memory exhausted" error

   5) dump to file: successful, but
   5') restore from file: stops even before starting due to a "no
memory for entry table" error (there is still a lot of unused memory
and swap - and no ulimit)


Any help is appreciated because I really don't know what to do next.

Matthias Bertschy
Echo Technologies SA




Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Andreas Maus

Hi.

Just a wild guess...
Have you tried rsync?
(Although I don't know how rsync deals with _hard_ links.)

HTH,

Andreas.


On 12/15/06, Matthias Bertschy [EMAIL PROTECTED] wrote:

OpenBSD 3.7 - i386
Pentium 4 3GHz - 1GB RAM - 2GB swap

Hello list,

For the past 3 weeks, I have been working on a difficult problem: moving
a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big
disk, in order to free the RAID0 before rebuilding a RAID5.

The RAID0 has one partition, its size is 2112984700 512-byte blocks,
roughly 1008GB, which is close to the maximum allowed by ffs. The big
disk is 300GB.

I need to move 96GB of data which is, due to backuppc's design, full of
hardlinks!

So far, I have tried to use:
1) dd: impossible because the partitions cannot be the same size
(and the RAID5 won't be the same size as the RAID0)
2) pax -rw: after transferring almost 70GB, it bails out with a
Segmentation fault
3) tar to archive: after something like 60GB, it complains with some
"file name too long" errors
4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
with a "gtar: memory exhausted" error
5) dump to file: successful, but
5') restore from file: stops even before starting due to a "no
memory for entry table" error (there is still a lot of unused memory and
swap - and no ulimit)

Any help is appreciated because I really don't know what to do next.

Matthias Bertschy
Echo Technologies SA





--
Hobbes : Shouldn't we read the instructions?
Calvin : Do I look like a sissy?



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Andy Hayward

On 12/15/06, Andreas Maus [EMAIL PROTECTED] wrote:

Just a wild guess ...
Have you tried rsync?
(Although I don't know how rsync deals with _hard_ links).


rsync --archive --hard-links ...

-- ach



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Otto Moerbeek
On Fri, 15 Dec 2006, Matthias Bertschy wrote:

 OpenBSD 3.7 - i386
 Pentium 4 3GHz - 1GB RAM - 2GB swap
 
 Hello list,
 
 For the past 3 weeks, I have been working on a difficult problem: moving a
 backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big disk,
 in order to free the RAID0 before rebuilding a RAID5.
 
 The RAID0 has one partition, its size is 2112984700 512-byte blocks,
 roughly 1008GB, which is close to the maximum allowed by ffs. The big disk is
 300GB.

 I need to move 96GB of data which is, due to backuppc's design, full of
 hardlinks!
 
 So far, I have tried to use:
1) dd: impossible because the partitions cannot be the same size (and the
 RAID5 won't be the same size as the RAID0)
2) pax -rw: after transferring almost 70GB, it bails out with a
 Segmentation fault

Please get me a gdb trace! Run gdb /sbin/pax pax.core and then bt.

I want to know where the seg fault occurs.

-Otto

3) tar to archive: after something like 60GB, it complains with some "file
 name too long" errors
4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up with a
 "gtar: memory exhausted" error
5) dump to file: successful, but
5') restore from file: stops even before starting due to a "no memory for
 entry table" error (there is still a lot of unused memory and swap - and no
 ulimit)
 
 Any help is appreciated because I really don't know what to do next.
 
 Matthias Bertschy
 Echo Technologies SA



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Andreas Maus

Ahhh, I was enlightened by you and Andy Hayward ;)

If memory consumption is the problem, adding a swap file via swapon
could help.
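
Roughly like this (file location and size are only examples):

    dd if=/dev/zero of=/var/swapfile bs=1m count=2048
    chmod 600 /var/swapfile
    swapctl -a /var/swapfile

(swapctl -a does the same thing as swapon here.)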

Andreas.

On 12/15/06, Jaye Mathisen [EMAIL PROTECTED] wrote:

You might need to compile a kernel with a larger default
data segment size, make sure /tmp has enough room, or
set TMPDIR/TEMPDIR for restore.

Dump/restore should DTRT.

rsync -H will as well, but again, going back to needing lots of memory to
store all that hardlink info...

On Fri, Dec 15, 2006 at 11:04:25PM +0100, Andreas Maus wrote:
 Hi.

 Just a wild guess ...
 Have you tried rsync?
 (Although I don't know how rsync deals with _hard_ links).

 HTH,

 Andreas.


 On 12/15/06, Matthias Bertschy [EMAIL PROTECTED]
 wrote:
 OpenBSD 3.7 - i386
 Pentium 4 3GHz - 1GB RAM - 2GB swap
 
 Hello list,
 
 For the past 3 weeks, I have been working on a difficult problem: moving
 a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big
 disk, in order to free the RAID0 before rebuilding a RAID5.
 
 The RAID0 has one partition, its size is 2112984700 512-byte blocks,
 roughly 1008GB, which is close to the maximum allowed by ffs. The big
 disk is 300GB.

 I need to move 96GB of data which is, due to backuppc's design, full of
 hardlinks!
 
 So far, I have tried to use:
 1) dd: impossible because the partitions cannot be the same size
 (and the RAID5 won't be the same size as the RAID0)
 2) pax -rw: after transferring almost 70GB, it bails out with a
 Segmentation fault
 3) tar to archive: after something like 60GB, it complains with some
 "file name too long" errors
 4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
 with a "gtar: memory exhausted" error
 5) dump to file: successful, but
 5') restore from file: stops even before starting due to a "no
 memory for entry table" error (there is still a lot of unused memory and
 swap - and no ulimit)
 
 Any help is appreciated because I really don't know what to do next.
 
 Matthias Bertschy
 Echo Technologies SA
 
 


 --
 Hobbes : Shouldn't we read the instructions?
 Calvin : Do I look like a sissy?







--
Hobbes : Shouldn't we read the instructions?
Calvin : Do I look like a sissy?



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Otto Moerbeek
On Fri, 15 Dec 2006, Matthias Bertschy wrote:

5) dump to file: successful, but
5') restore from file: stops even before starting due to a "no memory for
 entry table" error (there is still a lot of unused memory and swap - and no
 ulimit)

The ulimit for memory usage is never unlimited. Look at ulimit -a and
check the data size listed. To enlarge it, change the login.conf settings
datasize-max and datasize-cur, and don't forget to re-login.
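
For example, the relevant lines in /etc/login.conf for your login class
could look like this (1024M is only an illustration; pick a value that
fits your RAM plus swap):

    :datasize-max=1024M:\
    :datasize-cur=1024M:\

If /etc/login.conf.db exists, rebuild it with "cap_mkdb /etc/login.conf",
then log in again and check the result with ulimit -d.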

The pax problem you are hitting could very well be memory-related too. 

As a workaround, you might want to try not copying the complete tree
in one go, but copying the various subdirs separately. Or would that
destroy the hardlink structure?
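
If you want to try that, a per-subdirectory loop could be as simple as
the following sketch (the destination path is a placeholder; note that
this only preserves hardlinks that stay within a single subdirectory):

    cd /source && for d in *; do
        [ -d "$d" ] && pax -rw -pe "$d" /destination/
    done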

-Otto



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Jaye Mathisen
You might need to compile a kernel with a larger default
data segment size, make sure /tmp has enough room, or
set TMPDIR/TEMPDIR for restore.
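
For the kernel route, the knob would be something like the following
line in the kernel configuration (the value is only an illustration;
check options(4) for the exact name and syntax on 3.7):

    option  MAXDSIZ="(1024*1024*1024)"    # 1GB maximum data segment

and for the temporary directory, before starting restore:

    export TMPDIR=/big/partition/tmp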

Dump/restore should DTRT.

rsync -H will as well, but again, going back to needing lots of memory to 
store all that hardlink info...

On Fri, Dec 15, 2006 at 11:04:25PM +0100, Andreas Maus wrote:
 Hi.
 
 Just a wild guess ...
 Have you tried rsync?
 (Although I don't know how rsync deals with _hard_ links).
 
 HTH,
 
 Andreas.
 
 
 On 12/15/06, Matthias Bertschy [EMAIL PROTECTED] 
 wrote:
 OpenBSD 3.7 - i386
 Pentium 4 3GHz - 1GB RAM - 2GB swap
 
 Hello list,
 
 For the past 3 weeks, I have been working on a difficult problem: moving
 a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big
 disk, in order to free the RAID0 before rebuilding a RAID5.
 
 The RAID0 has one partition, its size is 2112984700 512-byte blocks,
 roughly 1008GB, which is close to the maximum allowed by ffs. The big
 disk is 300GB.

 I need to move 96GB of data which is, due to backuppc's design, full of
 hardlinks!
 
 So far, I have tried to use:
 1) dd: impossible because the partitions cannot be the same size
 (and the RAID5 won't be the same size as the RAID0)
 2) pax -rw: after transferring almost 70GB, it bails out with a
 Segmentation fault
 3) tar to archive: after something like 60GB, it complains with some
 "file name too long" errors
 4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
 with a "gtar: memory exhausted" error
 5) dump to file: successful, but
 5') restore from file: stops even before starting due to a "no
 memory for entry table" error (there is still a lot of unused memory and
 swap - and no ulimit)
 
 Any help is appreciated because I really don't know what to do next.
 
 Matthias Bertschy
 Echo Technologies SA
 
 
 
 
 -- 
 Hobbes : Shouldn't we read the instructions?
 Calvin : Do I look like a sissy?
 
 



Re: Moving a 100GB directory tree with lots of hardlinks

2006-12-15 Thread Daniel A. Ramaley
Try something like this:

rsync -avvHR /source/. /destination

The -vv is optional, but will print a line for each file as it is being 
copied. If the copy is interrupted partway through, just run it again 
and it'll pick up where it left off. If you don't have rsync installed, 
look for it in packages or ports.
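
If you want to double-check what a second pass would still transfer
without actually copying anything, adding -n (dry run) to the same
command should show you, e.g.:

    rsync -anvHR /source/. /destination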

On Friday 15 December 2006 10:22, you wrote:
OpenBSD 3.7 - i386
Pentium 4 3GHz - 1GB RAM - 2GB swap

Hello list,

For the past 3 weeks, I have been working on a difficult problem:
 moving a backuppc (http://backuppc.sourceforge.net/) pool from a
 RAID0 to a big disk, in order to free the RAID0 before rebuilding a
 RAID5.

The RAID0 has one partition, its size is 2112984700 512-byte blocks,
 roughly 1008GB, which is close to the maximum allowed by ffs. The big
 disk is 300GB.

I need to move 96GB of data which is, due to backuppc's design, full of
hardlinks!

So far, I have tried to use:
1) dd: impossible because the partitions cannot be the same size
(and the RAID5 won't be the same size as the RAID0)
2) pax -rw: after transferring almost 70GB, it bails out with a
Segmentation fault
3) tar to archive: after something like 60GB, it complains with
 some "file name too long" errors
4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends
 up with a "gtar: memory exhausted" error
5) dump to file: successful, but
5') restore from file: stops even before starting due to a "no
memory for entry table" error (there is still a lot of unused memory
 and swap - and no ulimit)

Any help is appreciated because I really don't know what to do next.

Matthias Bertschy
Echo Technologies SA

-- 

Dan Ramaley                    Dial Center 118, Drake University
Network Programmer/Analyst     2407 Carpenter Ave
+1 515 271-4540                Des Moines IA 50311 USA