Aaron S. Joyner wrote:

Samba is perfectly capable of handling >2GB files - I use the functionality on a regular basis. On the other hand, if the underlying filesystem cannot, things can get ugly. As long as you're on a modern Linux filesystem (ext3, reiser, etc.) with a reasonably recent version of Samba (>2.0), you should not run into a 2GB file size limitation. If that's not the case, do let us know.

Aaron S. Joyner


Brian Henning wrote:

...so has the 2GB single-file-size smb 'bug' been fixed? I recall many
moments of frustration surrounding a similar project, where the smb transfer
seemed to stall (read: everything ground to a halt), and I found that the
reason was that smb puked on files larger than 2 gigabytes. I don't know
if that's still an issue, but if it is, you'll need to use a utility (or
tar's built-in ability) to segment your images into <2GB chunks. (Be aware
that, as far as my limited knowledge goes, tar's volume-splitting ability does
not include sequential naming, so you'll have to wrap it in some other
script to rename each chunk after tar finishes with it.)
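One alternative that sidesteps the renaming problem: split(1) names its chunks with sequential suffixes (aa, ab, ...) on its own. Here's a sketch of the chunk-and-reassemble idea, with a small scratch file standing in for the disk so it runs without root - all paths are illustrative, not from the original post:

```shell
# Scratch "disk image" so the example is safe to run anywhere:
dd if=/dev/urandom of=/tmp/disk.img bs=1k count=8 2>/dev/null

# split names the chunks sequentially (...part.aa, ...part.ab, ...),
# so no extra renaming script is needed:
split -b 4k /tmp/disk.img /tmp/disk.img.part.

# Reassemble in shell-glob (and therefore suffix) order and verify:
cat /tmp/disk.img.part.* > /tmp/disk.img.restored
cmp /tmp/disk.img /tmp/disk.img.restored && echo "chunks match"
```

For a real disk you'd pipe dd straight into split, e.g. `dd if=/dev/hda | split -b 1900m - /path/to/samba/share/image.part.`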


Cheers,
~Brian

----- Original Message ----- From: "Aaron S. Joyner" <[EMAIL PROTECTED]>
To: "Triangle Linux Users Group discussion list" <[EMAIL PROTECTED]>
Sent: Wednesday, June 16, 2004 7:37 AM
Subject: Re: [TriLUG] Linux Backup Strategies




dd if=/dev/hda of=/path/to/samba/share/image.dmg

To dump a full-sized image of your disk to the samba share, the above
command will do the trick. On the other hand, if you'd prefer a more
elegant solution (which only works over ftp), check out the g4u package.
If space is more of a concern than speed, you can also compress the image like so:


dd if=/dev/hda | gzip > /path/to/samba/share/image.dmg
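Restoring such a compressed image is just the reverse pipe. A sketch, with a scratch file standing in for /dev/hda so it runs without root (paths are illustrative):

```shell
# Scratch "disk" in place of /dev/hda:
dd if=/dev/urandom of=/tmp/disk bs=1k count=4 2>/dev/null

# Compress on the way out, as in the command above:
dd if=/tmp/disk 2>/dev/null | gzip > /tmp/image.dmg

# Restoring is the same pipe in reverse:
gunzip -c /tmp/image.dmg | dd of=/tmp/disk.restored 2>/dev/null
cmp /tmp/disk /tmp/disk.restored && echo "round trip ok"
```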

As an additional concern, if you'd like the image not to take up more
space than necessary, at the expense of not being able to "undelete"
anything you may have already deleted (not such a big concern, imho),
you can zero out all of the once-used space on the drive like this...


dd if=/dev/zero of=/zerofile.tmp ; rm /zerofile.tmp

If you have more than one partition, you'll need to create a zero file
like the one above for each partition. A bash script to automate that
based on /etc/fstab shouldn't be more than about 6 lines. Keep in mind
that if you have more than one disk in the computer you'll need to do it
for each disk, and that you may need to adjust the /dev/hda references
above if you're not using a single IDE drive located as the primary
master.
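A sketch of that ~6-line script - it pulls mount points from /etc/fstab and, for safety here, only echoes the commands it would run (drop the echoes to execute for real; the filesystem-type filter is an assumption, adjust it for your fstab):

```shell
# Zero out free space on each local ext2/ext3/reiserfs mount point
# listed in /etc/fstab (field 2 is the mount point, field 3 the type):
awk '$3 ~ /^(ext2|ext3|reiserfs)$/ { print $2 }' /etc/fstab |
while read mp; do
    echo dd if=/dev/zero of="$mp/zerofile.tmp"
    echo rm "$mp/zerofile.tmp"
done
```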


Personally, I wouldn't do it image-style as this suggests. I would use
tar or dump, as either creates a more compact archive and gives you the
flexibility of doing incrementals in the future. Tar or dump will also
let you easily restore individual files. As another step in the
direction of elegance, Jeremy presented on rsync backups using rsbackup
at the May meeting. It's yet another "better" solution, although
depending on your purposes it may be cumbersome over smb.
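For the incrementals, GNU tar's listed-incremental mode is one way to do it. A hedged sketch - paths and the snapshot-file name are illustrative, not from the original post:

```shell
# Start from a clean slate for the demo:
rm -rf /tmp/demo /tmp/demo.snar
mkdir -p /tmp/demo && echo one > /tmp/demo/a.txt

# Level-0 (full) backup; the snapshot file records what was dumped:
tar --listed-incremental=/tmp/demo.snar -czf /tmp/full.tar.gz -C /tmp demo

# Later runs against the same snapshot file archive only what's new
# or changed since the last backup:
echo two > /tmp/demo/b.txt
tar --listed-incremental=/tmp/demo.snar -czf /tmp/incr.tar.gz -C /tmp demo

tar -tzf /tmp/incr.tar.gz   # should list demo/ and demo/b.txt only
```

To restore, extract the full archive first, then each incremental in order.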


All of this was covered in Jeremy's and Jason's recent backup
presentation. The presentation itself should be searchable in the
archives, sometime around the 2nd Tuesday of May. Hope that's a start!

:)


Aaron S. Joyner







I thought the 2GB file size limit had to do with the 32-bit operating system.
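(Roughly - the classic 2GB ceiling comes from a 32-bit off_t in programs built without large-file support, rather than the kernel itself. A quick way to check what a given filesystem/libc combination supports, as a sketch:)

```shell
# FILESIZEBITS reports how many bits a file offset can use on the
# filesystem containing the given path; anything over 32 means
# files larger than 2GB are possible there:
getconf FILESIZEBITS /
```

On a modern Linux system this typically prints 64.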

/glen


-- Glen Ford [EMAIL PROTECTED]


--
TriLUG mailing list : http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ : http://trilug.org/faq/
TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
TriLUG PGP Keyring : http://trilug.org/~chrish/trilug.asc
