Re: backups using rsync

2013-03-09 Thread Giorgos Keramidas
On Tue, 05 Mar 2013 20:30:22 +0100, Matthias Petermann  
wrote:
> Hello,
> Zitat von Giorgos Keramidas :
>
>> If this is a UFS2 filesystem, it may be a good idea to snapshot the
>> filesystem, and then rsync-backup the snapshot instead.
>
> Last time I tried UFS2 snapshots I found out two serious limitations.
> The first is that it doesn't work when UFS journaling is used. The second
> is that taking a snapshot on a large filesystem can cause parts of the
> system to freeze for many minutes, up to hours, when accessing files
> that are part of the snapshot, depending on the size of the filesystem.
> That's why I could not use it on my server with > 1TB UFS2.
>
> Did this improve in the last year? (I guess my experience is from the
> time around 9.0 release).

Hi Matthias,

Unfortunately I don't know if snapshots of such large filesystems are
faster now.  I've only used UFS2 snapshots here, on filesystems roughly
ten times smaller.

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: backups using rsync

2013-03-06 Thread David Brodbeck
On Mon, Mar 4, 2013 at 1:37 PM, CyberLeo Kitsana wrote:

> You can use dump(8) to dump a SU-journaled filesystem; you just cannot
> create a snapshot. This implies that dump(8) will be run against the
> live and possibly changing filesystem, which can lead to issues with the
> consistency of the contents of files thus dumped; but not necessarily
> with the consistency of the dump itself. Any tool that backs up a live
> filesystem, such as rsync or tar, will have these issues.
>

Note that this is mainly a problem for things like databases, where the
contents of multiple files, or different portions of the same file, have to
be in sync.  For example, take your typical MySQL database table.  You have
the actual data, in a .MYD file, and the indexes, in a .MYI file.  If your
rsync backup hits while a table is being updated, it might get the .MYD
file before an update, and the .MYI file after, leaving the table and index
inconsistent.  Or it might catch the .MYD file *partway* through an update,
giving a file that's internally inconsistent.  This is likely to give very
unexpected results if you load the backup back into the database.

Note that even if you take a filesystem snapshot, if you don't halt
database updates while you take it, you can still end up with inconsistent
files.  Snapshots are mostly useful for limiting the downtime in these
kinds of scenarios -- instead of taking the DB offline for the whole backup
window, you just down it long enough to take the snapshot.

In the absence of snapshots, the easiest way is to use whatever backup
tools the database offers to make sure a consistent copy exists to be
backed up.  For example, before you run the backup, run mysqlhotcopy or
mysqldump to write-lock the database, make consistent backup copies of all
the files, then unlock it.  That way, even if the backup of the active
database is inconsistent, the copies that were backed up along with it are
guaranteed to be consistent.

Anything database-like can have this problem; another common example is a
Subversion FSFS repository.  Backing it up without running "svnadmin
hotcopy" first is asking for corrupt commits when you do a restore.
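Putting David's advice together as a sketch (database name, repository path, and backup destinations below are placeholders, not anything from the thread):

```sh
# Run before the filesystem-level backup.  mysqldump write-locks
# MyISAM tables for the duration of the dump, so the resulting .sql
# file is consistent even if the raw .MYD/.MYI files that rsync
# later picks up are not.
mysqldump --all-databases > /var/backups/mysql-all.sql

# Likewise for a Subversion FSFS repository: hotcopy first, and let
# the backup pick up the copy instead of the live repository.
svnadmin hotcopy /srv/svn/myrepo /var/backups/myrepo-hotcopy
```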


Re: backups using rsync

2013-03-05 Thread Matthias Petermann

Hello,

Zitat von Giorgos Keramidas :


If this is a UFS2 filesystem, it may be a good idea to snapshot the
filesystem, and then rsync-backup the snapshot instead.


Last time I tried UFS2 snapshots I found out two serious limitations.
The first is that it doesn't work when UFS journaling is used. The second
is that taking a snapshot on a large filesystem can cause parts of the
system to freeze for many minutes, up to hours, when accessing files
that are part of the snapshot, depending on the size of the filesystem.
That's why I could not use it on my server with > 1TB UFS2.

Did this improve in the last year? (I guess my experience is from the
time around 9.0 release).

Kind regards,
Matthias


--
Matthias Petermann 



Re: backups using rsync

2013-03-05 Thread Giorgos Keramidas
On 2013-03-04 03:35, "Ronald F. Guilmette"  wrote:
> As a result of this past Black Friday weekend, I now enjoy a true
> abundance of disk space, for the first time in my life.
> 
> I wanna make a full backup, on a weekly basis, of my main system's
> shiny new 1TB drive onto another 1TB drive that I also picked up cheap
> back on Black Friday.
> 
> I've been planning to set this up for some long time now, but I've
> only gotten 'round to working on it now.
> 
> Now, unfortunately, I have just been bitten by the evil... and
> apparently widely known (except to me)... ``You can't use dump(8) to
> dump a journaled filesystem with soft updates'' bug-a-boo.
>
> Sigh.  The best laid plans of mice and men...
> 
> I _had_ planned on using dump/restore and making backups from live mounted
> filesystems while the system was running.  But I really don't want to have
> to take the system down to single-user mode every week for a few hours while
> I'm making my disk-to-disk backup.  So now I'm looking at doing the backups
> using rsync.

Yes, this should be possible...

One thing that can bite you when using rsync to traverse & copy large
filesystems is that the filesystem may still be changing beneath rsync
*as it's doing* the copy.

If this is a UFS2 filesystem, it may be a good idea to snapshot the
filesystem, and then rsync-backup the snapshot instead.
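The snapshot-then-rsync idea can be sketched like this (a hedged outline only: the mount point, snapshot file, md unit, and backup destination are all examples, and the commands need root on a UFS2 filesystem without SUJ):

```sh
# take a snapshot of /usr
mount -u -o snapshot /usr/.snap/backup /usr

# attach the snapshot read-only as a memory disk and mount it
mdconfig -a -t vnode -o readonly -f /usr/.snap/backup -u 4
mount -r /dev/md4 /mnt

# rsync from the frozen image instead of the live filesystem
rsync -aHAXS /mnt/ /backup/usr/

# clean up
umount /mnt
mdconfig -d -u 4
rm -f /usr/.snap/backup
```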



Re: backups using rsync

2013-03-05 Thread Polytropon
On Mon, 04 Mar 2013 12:19:09 -0800, Ronald F. Guilmette wrote:
> 
> In message <20130304125634.8450cfaf.free...@edvax.de>, 
> Polytropon  wrote:
> 
> >On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote:
> >> Now, unfortunately, I have just been bitten by the evil... and apparently
> >> widely known (except to me)... ``You can't use dump(8) to dump a journaled
> >> filesystem with soft updates'' bug-a-boo.
> >
> >There are other tools you can use, for example tar or cpdup
> >or rsync, as you've mentioned in the subject.
> 
> tar I already knew about, but I think you will agree that it has lots of
> limitations that make it entirely inappropriate for mirroring an entire
> system.

That's true. If your purpose is "backup of data files",
tar is a good tool, especially for cross-platform use.
But if you need to deal with "exceptional" things like
extended permissions, ACL, sparse files and such, you
will quickly see its limits. On the other hand, it can
be used for multi-volume savesets, but this is not your
intention.



> This cpdup thing is entirely new to me.  Thanks for mentioning it!  I really
> never heard of it before, but I just now installed it from ports, and I'm
> perusing the man page. 

It's somewhat comparable to rsync and can also do
things like "only add" (so you won't lose any files:
if they are removed in the source, they will be kept
in the backup). It also has limitations that rsync
does not.



> It looks very promising.  Too bad it doesn't
> properly handle sparse files, but oh well.  That's just a very minor nit.
> (Does it properly handle everything else that rsync claims to be able to
> properly handle, e.g. ACLs, file attributes, etc., etc.?)

That's something you should check with an "example
dataset" you back up, restore, and compare. I've been
using it for "normal files" successfully.



> >The same problems that apply when dumping live systems can
> >bite you using rsync,
> 
> What problems are we talking about, in particular?

The problems I'm referring to are the kind of _possible_
trouble you can get into when backing up files that
keep changing. The ability to make a snapshot prior
to starting the backup is a great help here (if you
don't have the chance to unmount the partitions you're
backing up). I can't imagine _how_ programs will react
if they start reading a file, prepare sufficient space
in some kind of TOC, then continue reading while the
file grows... or if a file is removed while it is still
being read... If you minimize writing activity to the
(still) _live_ data you're dealing with, that is a
benefit.




> I am guessing that if I use rsync, then I *won't* encounter this rather
> annoying issue/problem relating to UFS filesystems that have both soft
> updates and journaling enabled, correct?
> 
> >but support for this "on file system
> >level" seems to be better in rsync than what dump does "on
> >block level".
> 
> What exactly did you mean by "this" ?

As mentioned above: unexpected and unpredictable results,
and strange kinds of inconsistency, whether they appear
during the backup or later on restore.



> >> If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
> >> when trying to make my backups, and if I do whatever additional fiddling
> >> is necessary to insure that I separately copy over the MBR and boot loader
> >> also to my backup drive, then is there any reason that, in the event of
> >> a sudden meteor shower that takes out my primary disk drive while leaving
> >> my backup drive intact, I can't just unplug my old primary drive, plug in
> my (rsync-created) backup drive, reboot and be back in the saddle again,
> >> almost immediately, and with -zero- problems?
> >
> >You would have to make sure _many_ things are consistent
> >on the backup disk.
> 
> Well, this is what I am getting at.  This is/was the whole point of my post
> and my question.  I want to know:  What is that set of things, exactly?

The backup disk (or failover disk, as I said) needs to be
initialized properly prior to the first backup run: Make
sure it's bootable. Depending on how you handle identification
of the disk (by device name, by label or UFSID) and how
you're going to boot from it (by selecting the failover
disk in some post-BIOS/POST dialog or by swapping cables
or bays), you should check it actually starts booting.



> >Regarding terminology, that would make the disk a failover disk
> 
> OK.  Thank you.  I will henceforth use that terminology.

Just a suggestion from how you described you will be
using the 

Re: backups using rsync

2013-03-04 Thread Warren Block

On Mon, 4 Mar 2013, Ronald F. Guilmette wrote:


So, um, I was reading about this last night, but I was sleepy and my eyes
glazed over... Please remind me, what is the exact procedire for turning
off the journaling?   I boot to single user mode (from a live cd?) and
then what?  Is it tunefs with some special option?


Just boot in single user mode so all the filesystems are unmounted or 
mounted readonly.  Then use 'tunefs -j disable /dev/...'.  It will also 
mention the name of the journal file, which can be deleted.
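Putting that together (the device name below is an example; tunefs prints the actual journal file name, normally .sujournal in the filesystem root):

```sh
# from single-user mode, with the filesystem unmounted or mounted read-only
tunefs -j disable /dev/ada0p2

# after remounting read-write, delete the journal file tunefs named
rm /.sujournal
```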



Use the latest net/rsync port, and enable the FLAGS option.  I use these
options, copying each filesystem individually:

-axHAXS --delete --fileflags --force-change


Hummm... I guess that I have some non-current rsync installed.  In the man
page I have there is no mention of any "--force-change" option.  What does
it do?


"affect user/system immutable files/dirs".  Probably only included in 
the man page when the port is built with the FLAGS option set.


An additional note: the script that runs my rsync backup also modifies 
the mirrored /etc/fstab to use the appropriate labels for the backup 
filesystems.
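A minimal sketch of that fstab rewrite, using a throwaway temp directory and made-up label names (the real script and labels from the post are not shown in the thread):

```shell
# build a sample mirrored fstab in a temp dir (labels are hypothetical)
BACKUP_ROOT=$(mktemp -d)
mkdir -p "$BACKUP_ROOT/etc"
printf '%s\n' \
    '/dev/ufs/mainroot /    ufs rw 1 1' \
    '/dev/ufs/mainvar  /var ufs rw 2 2' > "$BACKUP_ROOT/etc/fstab"

# point the mirrored fstab at the backup disk's own labels, so the
# failover disk mounts itself rather than the dead primary
sed 's|/dev/ufs/main|/dev/ufs/back|g' "$BACKUP_ROOT/etc/fstab" \
    > "$BACKUP_ROOT/etc/fstab.new"
mv "$BACKUP_ROOT/etc/fstab.new" "$BACKUP_ROOT/etc/fstab"
cat "$BACKUP_ROOT/etc/fstab"
```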



Re: backups using rsync

2013-03-04 Thread Lawrence K. Chen, P.Eng.


- Original Message -
> On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote:
> > Now, unfortunately, I have just been bitten by the evil... and
> > apparently
> > widely known (except to me)... ``You can't use dump(8) to dump a
> > journaled
> > filesystem with soft updates'' bug-a-boo.
> 
> There are other tools you can use, for example tar or cpdup
> or rsync, as you've mentioned in the subject.
> 
> 

Or, if you want to be ambitious, you could install something like
'sysutils/backuppc' (one of its methods is rsync; it's what I use for all
the systems I back up with it: Windows, Linux, Mac OS X).

And then you could get more than just the weekly rsync to it... though it
could probably be made to only do fulls every week. But you could
potentially then restore from an older full.

I do system fulls of my other systems to it... can't do a bare-metal
restore, but it can get me back up and running faster. For example, I
recently had hard drive failures in a couple of FreeBSD systems. I did a
fresh install, and at first I restored /home and /usr/local (and some
other dirs, like /var/db/pkg & /var/db/ports)... and then other dirs and
files as I found things missing. Had to rebuild a handful of ports after
that, and then things were good.

The second system didn't go as well, because it had been silently
corrupting things for a long time before... but I still did the same kind
of restore at first, and ended up rebuilding all the ports to get things
good again.

Not sure, if I lost the system disk, whether I could recover from a local
backuppc... but I have my old backuppc system backing up most of my
current system (mainly omitting the backuppc pool; I think my backup
storage requirements would grow exponentially if I didn't... my main
backuppc pool is currently 6300G out of a 7300G zpool). But I've suffered
bit rot on the old backuppc pool in the past... when it was a RAID 1+0
array... probably worse now that it's a 2.7TB volume without RAID (the
only volume on that system that isn't mirrored). Though I wonder whether
to try ZFS on Linux again, or replace it with FreeBSD.

I was faced with something like this on my Windows box... where
eventually I ended up writing off restoring from the local backup (a
commercial Time Machine-like product)... the mistake was using a Windows
fake-RAID5 external array as my backup drive, and losing the system due
to problems in the fake RAID. I did briefly put together a CentOS live CD
that could access the array, but the drives I copied the data to promptly
failed on me shortly after I had broken the array and turned them into a
raidz pool. Someday I need to get back to going through the disk image of
the failed system drive and recovering as much as possible from it. The
box that was my Windows desktop is now my FreeBSD desktop...

Lawrence


Re: backups using rsync

2013-03-04 Thread CyberLeo Kitsana
On 03/04/2013 05:35 AM, Ronald F. Guilmette wrote:
> As a result of this past Black Friday weekend, I now enjoy a true abundance
> of disk space, for the first time in my life.
> 
> I wanna make a full backup, on a weekly basis, of my main system's shiny
> new 1TB drive onto another 1TB drive that I also picked up cheap back on
> Black Friday.
> 
> I've been planning to set this up for some long time now, but I've only
> gotten 'round to working on it now.
> 
> Now, unfortunately, I have just been bitten by the evil... and apparently
> widely known (except to me)... ``You can't use dump(8) to dump a journaled
> filesystem with soft updates'' bug-a-boo.

You can use dump(8) to dump a SU-journaled filesystem; you just cannot
create a snapshot. This implies that dump(8) will be run against the
live and possibly changing filesystem, which can lead to issues with the
consistency of the contents of files thus dumped; but not necessarily
with the consistency of the dump itself. Any tool that backs up a live
filesystem, such as rsync or tar, will have these issues.

> Sigh.  The best laid plans of mice and men...
> 
> I _had_ planned on using dump/restore and making backups from live mounted
> filesystems while the system was running.  But I really don't want to have
> to take the system down to single-user mode every week for a few hours while
> I'm making my disk-to-disk backup.  So now I'm looking at doing the backups
> using rsync.

I've used rsync to back up Linux and FreeBSD machines daily for years,
and I've never had a problem with the backups nor subsequent
restorations. Especially for restorations of the laptop that ate SSDs.

Having a decent snapshot capability on the backup target filesystem can
help a lot if you want to maintain multiple sparse backup revisions;
otherwise, you're stuck using creative scripting around rsync's
--link-dest option.
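The --link-dest trick can be demonstrated in miniature (throwaway temp directories; this is an illustration of the technique, not CyberLeo's script, and it skips itself if rsync isn't installed):

```shell
command -v rsync >/dev/null 2>&1 || exit 0  # demo needs rsync present
WORK=$(mktemp -d)
mkdir -p "$WORK/src"
echo 'unchanged content' > "$WORK/src/file.txt"

# first full copy
rsync -a "$WORK/src/" "$WORK/backup.0/"

# next revision: files unchanged since backup.0 become hard links into
# it, so each revision looks complete but costs almost no extra space
rsync -a --link-dest="$WORK/backup.0" "$WORK/src/" "$WORK/backup.1/"
```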

> I see that rsync can nowadays properly cope with all sorts of oddities,
> like fer instance device files, hard-linked files, ACLs, file attributes,
> and all sorts of other unusual but important filesystem thingies.  That's
> good news, but I still have to ask the obvious question:
> 
> If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
> when trying to make my backups, and if I do whatever additional fiddling
> is necessary to insure that I separately copy over the MBR and boot loader
> also to my backup drive, then is there any reason that, in the event of
> a sudden meteor shower that takes out my primary disk drive while leaving
> my backup drive intact, I can't just unplug my old primary drive, plug in
> my (rsync-created) backup drive, reboot and be back in the saddle again,
> almost immediately, and with -zero- problems?

There will /always/ be problems. The best you can do is become familiar
with the tools and procedures so you can tackle them when they happen.

My suggestion for something that you can use as a warm standby is to
create it as a warm standby: go through the entire installation
procedure again for the backup drive, and then use rsync or suchlike to
periodically synchronize the second filesystem with the first. When you
update the boot code on one, do so on the other.

Be extremely careful if you decide to do this with both disks attached
to the same machine: if you use geom labels (gpt, ufs, glabel, et alia)
or dynamically numbered storage devices, you can easily run into a
situation where a reboot with both devices attached suddenly starts
using your backup instead without you realizing it, or flips back and forth.

> P.S.  My apologies if I've already asked this exact same question here
> before.  I'm getting a sense of deja vu... or else a feeling that I am
> often running around in circles, chasing my own tail.
> 
> P.P.S.  Before anyone asks, no I really _do not_ want to just use RAID
> as my one and only backup strategy.  RAID is swell if your only problem
> is hardware failures.  As far as I know however it will not save your
> bacon in the event of a fumble fingers "rm -rf *" moment.  Only frequent
> and routine actual backups can do that.

-- 
Fuzzy love,
-CyberLeo
Furry Peace! - http://www.fur.com/peace/


Re: backups using rsync

2013-03-04 Thread Ronald F. Guilmette

In message , 
Warren Block  wrote:

>Until SUJ has been deemed 100%, I avoid it and suggest others do also. 
>It can be disabled on an existing filesystem from single user mode.

hehe

Silly me!  What do *I* know?  I just go about my business and try not to
create too much trouble for myself.  To be honest and truthful I have to
say that this journaling stuff entirely snuck up on me.  I confess...
I wasn't paying attention (to the world of FreeBSD innovations) and thus,
when I moved myself recently to 9.x (from 8.3) I did so without even
having been aware that the new filesystems that I was creating during
my clean/fresh install of 9.1 had journaling turned on by default.
(As the saying goes, I didn't get the memo.)  Not that I mind, really.
It sounds like a great concept and a great feature and I was happy to
have it right up until the moment that "dump -L" told me to go pound
sand. :-(

So, um, I was reading about this last night, but I was sleepy and my eyes
glazed over... Please remind me, what is the exact procedure for turning
off the journaling?   I boot to single user mode (from a live cd?) and
then what?  Is it tunefs with some special option?

>> If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
>> when trying to make my backups, and if I do whatever additional fiddling
>> is necessary to insure that I separately copy over the MBR and boot loader
>> also to my backup drive, then is there any reason that, in the event of
>> a sudden meteor shower that takes out my primary disk drive while leaving
>> my backup drive intact, I can't just unplug my old primary drive, plug in
>> my (rsync-created) backup drive, reboot and be back in the saddle again,
>> almost immediately, and with -zero- problems?
>
>It works.  I use this to "slow mirror" SSDs to a hard disk, avoiding the 
>speed penalty of combining an SSD with a hard disk in RAID1.

Great!  Thanks Warren.

>Use the latest net/rsync port, and enable the FLAGS option.  I use these 
>options, copying each filesystem individually:
>
>-axHAXS --delete --fileflags --force-change

Hummm... I guess that I have some non-current rsync installed.  In the man
page I have there is no mention of any "--force-change" option.  What does
it do?

>Yes, the partitions and bootcode must be set up beforehand.  After that, 
>it works.

Good to know.  Thanks again Warren.

>Like any disk redundancy scheme, test it before an emergency.

Naw.  I like to live dangerously. :-)


Regards,
rfg


Re: backups using rsync

2013-03-04 Thread Ronald F. Guilmette

In message <20130304125634.8450cfaf.free...@edvax.de>, 
Polytropon  wrote:

>On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote:
>> Now, unfortunately, I have just been bitten by the evil... and apparently
>> widely known (except to me)... ``You can't use dump(8) to dump a journaled
>> filesystem with soft updates'' bug-a-boo.
>
>There are other tools you can use, for example tar or cpdup
>or rsync, as you've mentioned in the subject.

tar I already knew about, but I think you will agree that it has lots of
limitations that make it entirely inappropriate for mirroring an entire
system.

This cpdup thing is entirely new to me.  Thanks for mentioning it!  I really
never heard of it before, but I just now installed it from ports, and I'm
perusing the man page.  It looks very promising.  Too bad it doesn't
properly handle sparse files, but oh well.  That's just a very minor nit.
(Does it properly handle everything else that rsync claims to be able to
properly handle, e.g. ACLs, file attributes, etc., etc.?)

>The same problems that apply when dumping live systems can
>bite you using rsync,

What problems are we talking about, in particular?

I am guessing that if I use rsync, then I *won't* encounter this rather
annoying issue/problem relating to UFS filesystems that have both soft
updates and journaling enabled, correct?

>but support for this "on file system
>level" seems to be better in rsync than what dump does "on
>block level".

What exactly did you mean by "this" ?

>> If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
>> when trying to make my backups, and if I do whatever additional fiddling
>> is necessary to insure that I separately copy over the MBR and boot loader
>> also to my backup drive, then is there any reason that, in the event of
>> a sudden meteor shower that takes out my primary disk drive while leaving
>> my backup drive intact, I can't just unplug my old primary drive, plug in
>> my (rsync-created) backup drive, reboot and be back in the saddle again,
>> almost immediately, and with -zero- problems?
>
>You would have to make sure _many_ things are consistent
>on the backup disk.

Well, this is what I am getting at.  This is/was the whole point of my post
and my question.  I want to know:  What is that set of things, exactly?

>Regarding terminology, that would make the disk a failover disk

OK.  Thank you.  I will henceforth use that terminology.

>The disk would need to have an initialized file system and
>a working boot mechanism, both things rsync does not deal with

Check and check.  I implicitly understood the former, and I explicitly
mentioned the latter in my original post in this thread.

But is there anything else, other than those two things (which, just as
you say, are both clearly outside of the scope of what rsync does)?
Anything else I need to do or worry about in order to be able to use
rsync to create & maintain a full-blown fully-working system failover
drive?

If so, I'd much rather learn about it now... you know... as opposed
to learning about it if and when I actually have to _use_ my failover
drive.


Regards,
rfg


Re: backups using rsync

2013-03-04 Thread Warren Block

On Mon, 4 Mar 2013, Ronald F. Guilmette wrote:


Now, unfortunately, I have just been bitten by the evil... and apparently
widely known (except to me)... ``You can't use dump(8) to dump a journaled
filesystem with soft updates'' bug-a-boo.


Until SUJ has been deemed 100%, I avoid it and suggest others do also. 
It can be disabled on an existing filesystem from single user mode.



If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
when trying to make my backups, and if I do whatever additional fiddling
is necessary to insure that I separately copy over the MBR and boot loader
also to my backup drive, then is there any reason that, in the event of
a sudden meteor shower that takes out my primary disk drive while leaving
my backup drive intact, I can't just unplug my old primary drive, plug in
my (rsync-created) backup drive, reboot and be back in the saddle again,
almost immediately, and with -zero- problems?


It works.  I use this to "slow mirror" SSDs to a hard disk, avoiding the 
speed penalty of combining an SSD with a hard disk in RAID1.


Use the latest net/rsync port, and enable the FLAGS option.  I use these 
options, copying each filesystem individually:


-axHAXS --delete --fileflags --force-change

--delete removes files present on the copy that are not on the original. 
Some people may want to leave those.


--exclude= is used on certain filesystems to skip directories that are 
full of easily recreated data that changes often, like /usr/obj.


Yes, the partitions and bootcode must be set up beforehand.  After that, 
it works.  Like any disk redundancy scheme, test it before an emergency.
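As a sketch, a backup script built around those options might loop over the filesystems (the mount-point list and /mnt/backup prefix are assumptions, and --fileflags/--force-change exist only when net/rsync is built with the FLAGS option):

```sh
for fs in / /var /usr; do
    rsync -axHAXS --delete --fileflags --force-change \
        --exclude=/usr/obj \
        "$fs/" "/mnt/backup$fs/"
done
```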



P.P.S.  Before anyone asks, no I really _do not_ want to just use RAID
as my one and only backup strategy.  RAID is swell if your only problem
is hardware failures.  As far as I know however it will not save your
bacon in the event of a fumble fingers "rm -rf *" moment.  Only frequent
and routine actual backups can do that.


Yes, RAID is not a backup.  Another suggestion I've been making often: 
use sysutils/rsnapshot to make an accessible history of files.  The 
archives go on another partition on the mirror drive, which likely has 
more space than the original.  rsnapshot uses rsync with hard links to 
make an archive that lets you easily get to old versions of files that 
have changed in the last few hours/days/weeks/months.
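An rsnapshot setup along those lines is mostly configuration. A fragment, with paths and retention counts as examples only (note that fields in rsnapshot.conf must be separated by tabs, and older versions spell "retain" as "interval"):

```
# /usr/local/etc/rsnapshot.conf (TAB-separated fields)
snapshot_root	/backup/snapshots/
retain	hourly	6
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```

cron then invokes "rsnapshot hourly", "rsnapshot daily", and so on, and each snapshot directory presents a complete, browsable copy of the tree.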



Re: backups using rsync

2013-03-04 Thread Polytropon
On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote:
> Now, unfortunately, I have just been bitten by the evil... and apparently
> widely known (except to me)... ``You can't use dump(8) to dump a journaled
> filesystem with soft updates'' bug-a-boo.

There are other tools you can use, for example tar or cpdup
or rsync, as you've mentioned in the subject.



> I _had_ planned on using dump/restore and making backups from live mounted
> filesystems while the system was running.  But I really don't want to have
> to take the system down to single-user mode every week for a few hours while
> I'm making my disk-to-disk backup.  So now I'm looking at doing the backups
> using rsync.

The same problems that apply when dumping live systems can
bite you using rsync, but support for this "on file system
level" seems to be better in rsync than what dump does "on
block level".



> If I use all of the following rsync options...  -a,-H,-A, -X, and -S 
> when trying to make my backups, and if I do whatever additional fiddling
> is necessary to insure that I separately copy over the MBR and boot loader
> also to my backup drive, then is there any reason that, in the event of
> a sudden meteor shower that takes out my primary disk drive while leaving
> my backup drive intact, I can't just unplug my old primary drive, plug in
> my (rsync-created) backup drive, reboot and be back in the sadddle again,
> almost immediately, and with -zero- problems?

You would have to make sure _many_ things are consistent
on the backup disk.

Regarding terminology, that would make the disk a failover
disk, even if the act of making it the actual "work disk"
is something you do manually.

The disk would need to have an initialized file system and
a working boot mechanism, both things rsync does not deal
with, if I remember correctly. But as soon as you have
initialized the disk for the first time and made sure it
works (by testing the result of your first rsync run), it
should work with any subsequent change of data you transfer
to that disk.



> P.P.S.  Before anyone asks, no I really _do not_ want to just use RAID
> as my one and only backup strategy. 

RAID _is_ **NO** backup. It's for redundancy and performance.
If something is erased or corrupted, it's on all disks. And
all the disks run permanently. A backup disk only runs twice:
when backing something up, or when restoring. In your case,
restoring means that the disk is put into operation in its
role as a failover disk.



> RAID is swell if your only problem
> is hardware failures. 

Still, hardware failures can corrupt data on all participating
disks.



> As far as I know however it will not save your
> bacon in the event of a fumble fingers "rm -rf *" moment.  Only frequent
> and routine actual backups can do that.

Correct. It's important to learn that lesson _before_ it is
actually needed. :-)




-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


backups using rsync

2013-03-04 Thread Ronald F. Guilmette


As a result of this past Black Friday weekend, I now enjoy a true abundance
of disk space, for the first time in my life.

I wanna make a full backup, on a weekly basis, of my main system's shiny
new 1TB drive onto another 1TB drive that I also picked up cheap back on
Black Friday.

I've been planning to set this up for quite some time, but
I've only now gotten 'round to working on it.

Now, unfortunately, I have just been bitten by the evil... and apparently
widely known (except to me)... ``You can't use dump(8) to dump a journaled
filesystem with soft updates'' bug-a-boo.

Sigh.  The best laid plans of mice and men...

I _had_ planned on using dump/restore and making backups from live mounted
filesystems while the system was running.  But I really don't want to have
to take the system down to single-user mode every week for a few hours while
I'm making my disk-to-disk backup.  So now I'm looking at doing the backups
using rsync.

I see that rsync can nowadays properly cope with all sorts of oddities,
like fer instance device files, hard-linked files, ACLs, file attributes,
and all sorts of other unusual but important filesystem thingies.  That's
good news, but I still have to ask the obvious question:

If I use all of the following rsync options...  -a, -H, -A, -X, and -S...
when trying to make my backups, and if I do whatever additional fiddling
is necessary to ensure that I separately copy over the MBR and boot loader
also to my backup drive, then is there any reason that, in the event of
a sudden meteor shower that takes out my primary disk drive while leaving
my backup drive intact, I can't just unplug my old primary drive, plug in
my (rsync-created) backup drive, reboot, and be back in the saddle again,
almost immediately, and with -zero- problems?
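As a sketch of what that weekly job could look like (hedged: the /backup
mount point and the ada0/ada1 device names are hypothetical, and -x and
--delete are additions of mine to keep the copy on one filesystem and to
mirror deletions; the commands are echoed rather than executed here):

```shell
# Hedged sketch of a weekly full-disk mirror with rsync; /backup,
# ada0 and ada1 are hypothetical names. Commands are echoed, not run.
RSYNC_OPTS="-aHAXSx --delete"
BACKUP_CMD="rsync $RSYNC_OPTS / /backup/"
# dd here copies only the first sector (MBR plus partition table);
# the FreeBSD bootstrap beyond it may still need gpart bootcode
# or bsdlabel -B on the target disk.
MBR_CMD="dd if=/dev/ada0 of=/dev/ada1 bs=512 count=1"
echo "$BACKUP_CMD"
echo "$MBR_CMD"
```

Drop the echoes to actually run it; the point is that rsync alone does
not touch the boot blocks, so they need a separate step.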


Regards,
rfg


P.S.  My apologies if I've already asked this exact same question here
before.  I'm getting a sense of deja vu... or else a feeling that I am
often running around in circles, chasing my own tail.

P.P.S.  Before anyone asks, no I really _do not_ want to just use RAID
as my one and only backup strategy.  RAID is swell if your only problem
is hardware failures.  As far as I know however it will not save your
bacon in the event of a fumble fingers "rm -rf *" moment.  Only frequent
and routine actual backups can do that.
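One way frequent backups survive an "rm -rf *" moment is to keep rotated
generations that share unchanged files via hard links (rsync's
--link-dest option works this way). The principle can be sketched with
plain cp -l; note the replacement must create a new inode (via mv), or
the snapshot's copy would change too:

```shell
# Sketch: hard-link snapshots. Unchanged files cost no extra space;
# replacing a file with a new inode leaves the old snapshot intact.
mkdir -p live snap.1
printf 'v1\n' > live/doc.txt
cp -l live/doc.txt snap.1/doc.txt    # hard link into the snapshot
printf 'v2\n' > doc.tmp && mv doc.tmp live/doc.txt   # new inode
cat snap.1/doc.txt                   # the snapshot still holds v1
```

This is why rsync deletes and recreates changed files instead of
overwriting them in place when --link-dest snapshots are involved.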


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread Lowell Gilbert
krad  writes:

> Just another silly thought try the tar j flag rather than the z flag, as
> you might have got your compression algorithms confused. Try the xz one as
> well just in case

The system tar (based on libarchive) will figure all of this out for
you, regardless of which flag you give it.
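A quick way to see that auto-detection in action (assuming a bsdtar or
GNU tar that sniffs the compression format on extraction, which both
have done for a long time):

```shell
# Create a gzip-compressed archive, then extract it without -z:
printf 'hello\n' > note.txt
tar -czf archive.tgz note.txt
rm note.txt
tar -xf archive.tgz      # no -z/-j/-J needed; the format is detected
cat note.txt             # -> hello
```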


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread krad
Just another silly thought try the tar j flag rather than the z flag, as
you might have got your compression algorithms confused. Try the xz one as
well just in case
On Feb 14, 2012 3:37 PM, "Mike Kelly"  wrote:

> >
> > I don't have the script anymore. It is among the files lost, but it was
> > pretty
> > much straightforward, making use of:
> > tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
> >
> > After creating the backups I just cp(1)ed them to an msdosfs formatted
> > usb stick and got them onto 8.2 this way, so the famous ascii/binary
> > trap shouldn't be
> > an issue here.
> >
> > Just a thought... how large were the tar.gz files? Are you maybe hitting
> on a file size limit and the .tar.gz files are getting truncated? Not sure
> what the limit is for msdosfs.
>
> --
> Mike Kelly


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread C. P. Ghost
On Tue, Feb 14, 2012 at 2:56 AM, _  wrote:
> Trying to recover these files on 8.2, I found that some of the archives -
> unfortunately those with
> the files that are dear to me - are corrupted.

Do you have MD5, SHA256 etc... checksums of the
.tar.gz files somewhere? Do they still match, or do
they differ now?

(If they match, you have a software problem with tar
or gzip; try reading the files under Linux (Knoppix?)
just to be sure. If they don't match, either the media
is corrupt (very likely), or something's wrong on the
code path that reads your backup device (a lot less
likely))
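The workflow cpghost suggests might look like the following sketch
(using the coreutils sha256sum; on FreeBSD the tool is sha256 with
slightly different flags, and the archive names here are placeholders):

```shell
# Record checksums at backup time and verify them before relying on
# the media; any corruption makes the check fail for that file.
printf 'backup one\n' > backup1.tar.gz    # stand-ins for real archives
printf 'backup two\n' > backup2.tar.gz
sha256sum backup1.tar.gz backup2.tar.gz > backups.sha256
sha256sum -c backups.sha256               # OK while the data is intact
printf 'bitrot\n' >> backup2.tar.gz       # simulate media corruption
sha256sum -c backups.sha256 || echo "corruption detected"
```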

-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread Waitman Gobble
On Feb 14, 2012 7:37 AM, "Mike Kelly"  wrote:
>
> >
> > I don't have the script anymore. It is among the files lost, but it was
> > pretty
> > much straightforward, making use of:
> > tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
> >
> > After creating the backups I just cp(1)ed them to an msdosfs formatted
> > usb stick and got them onto 8.2 this way, so the famous ascii/binary
> > trap shouldn't be
> > an issue here.
> >
> > Just a thought... how large were the tar.gz files? Are you maybe hitting
> on a file size limit and the .tar.gz files are getting truncated? Not sure
> what the limit is for msdosfs.
>
> --
> Mike Kelly

Or perhaps you pulled the drive before unmounting, with pending
writes. Just a thought.

Waitman Gobble
San Jose California USA


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread Mike Kelly
>
> I don't have the script anymore. It is among the files lost, but it was
> pretty
> much straightforward, making use of:
> tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
>
> After creating the backups I just cp(1)ed them to an msdosfs formatted
> usb stick and got them onto 8.2 this way, so the famous ascii/binary
> trap shouldn't be
> an issue here.
>
> Just a thought... how large were the tar.gz files? Are you maybe hitting
on a file size limit and the .tar.gz files are getting truncated? Not sure
what the limit is for msdosfs.

--
Mike Kelly


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-14 Thread Lowell Gilbert
"Pegasus Mc Cleaft"  writes:

>> It recreates something, but the most important files, which reside in
>> subfolders of the given tar.gz archives are gone, i.e. the subfolders
>> are empty.
>> The gunzip strategy you mentioned yields the same as a regular tar -xvf
>> file.tar.gz.
>> 
>> Pegasus, I have yet to try the pax(1) approach. I will let you know
>> about how that went.
>
> Hum.. I'm not sure if pax will be able to help in this case. From the
> looks of it, somehow the compressed data got corrupted - I don't think pax
> will be able to deal with this any better than tar did. 

Probably correct; if the right data isn't there, no tool is going to
be able to recover it. Data compression makes this more fragile (i.e.,
you lose the rest of the archive, as opposed to only the files in
which the data corruption occurs).

> I wonder if there was a change in gzip (like maybe libarchive) between the
> two versions of BSD that might be causing the problem. If I were attacking
> the problem, I might try booting up off a 7.x bootcd and see if I can gzip
> --test the archive from the usb stick. 

It's easy enough to try, but it seems awfully unlikely to help; lots of
us have .tar.gz files going back a couple of decades, and if there were
ever new implementations that couldn't understand the old ones, some old
hand would have noticed by now.

Media errors happen, and preparing for them involves noticing them
before you try to use the data, as well as being able to recover when
they occur.  The user seems to have knowingly kept only one copy of
the valuable data, which makes "backup" a rather unusual use of the term...

--Lowell


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread _
2012/2/14, Adam Vande More :
> On Mon, Feb 13, 2012 at 7:56 PM, _  wrote:
>
>> Before making the move from 7.0 to 8.2, I ran a little script that did a
>> backup of selected files
>> and folders.
>>
>
> I think it's IT tip #2 "You don't have a backup unless it's tested".  #1 is
> "Make a backup".

If I am not mistaken, I did test my backups and they worked fine.
After all, one of the four files that I have unpacks with no problems
so I don't see where things could have gone wrong.

> You could try archivers/gzrecover

After gzrecover and cpio, the process stops at the same point where
the tar(1) command stops. It simply doesn't make it beyond the
boundary where the file is corrupted.
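That matches how gzip streams fail: everything after the damaged point
is lost, but the prefix usually still decompresses. A reproducible
sketch of salvaging the leading files from a truncated .tar.gz
(truncation simulated here with head):

```shell
# Build a two-file archive, truncate it, and salvage whatever still
# decompresses before the damage.
head -c 1024   /dev/urandom > first.bin
head -c 102400 /dev/urandom > second.bin
tar -czf good.tar.gz first.bin second.bin
head -c 51200 good.tar.gz > broken.tar.gz   # keep only the first 50 KB
mkdir -p salvage
gunzip -c broken.tar.gz 2>/dev/null | tar -xf - -C salvage 2>/dev/null || true
ls salvage   # first.bin recovered intact; second.bin is at best partial
```

Files stored past the truncation point are unrecoverable, which is the
behavior described above.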

> Good luck,
>
> --
> Adam Vande More
>


Here is what pax(1) gave me:

# pax -rzf su12292011.tar.gz
pax: Invalid header, starting valid header search.
gzip: data stream error
pax: End of archive volume 1 reached

ATTENTION! pax archive volume change required.
Ready for archive volume: 2
Input archive name or "." to quit pax.
Archive name > .
Quitting pax!
#

Nothing new here. It seems like pax simply invokes gzip(1) internally.


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread Adam Vande More
On Mon, Feb 13, 2012 at 7:56 PM, _  wrote:

> Before making the move from 7.0 to 8.2, I ran a little script that did a
> backup of selected files
> and folders.
>

I think it's IT tip #2 "You don't have a backup unless it's tested".  #1 is
"Make a backup".

You could try archivers/gzrecover

Good luck,

-- 
Adam Vande More


RE: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread Pegasus Mc Cleaft


> -Original Message-



> tar: Damaged tar archive
> tar: Retrying...
> tar: gzip decompression failed
> tar: Error exit delayed from previous errors.
> # gzip --test sr12292011.tar.gz
> gzip: data stream error
> gzip: sr12292011.tar.gz: uncompress failed # gunzip < sr12292011.tar.gz
> > archive.partial.tar
> gunzip: data stream error
> 
> It recreates something, but the most important files, which reside in
> subfolders of the given tar.gz archives are gone, i.e. the subfolders
> are empty.
> The gunzip strategy you mentioned yields the same as a regular tar -xvf
> file.tar.gz.
> 
> Pegasus, I have yet to try the pax(1) approach. I will let you know
> about how that went.

Hum.. I'm not sure if pax will be able to help in this case. From the
looks of it, somehow the compressed data got corrupted - I don't think pax
will be able to deal with this any better than tar did. 

I wonder if there was a change in gzip (like maybe libarchive) between the
two versions of BSD that might be causing the problem. If I were attacking
the problem, I might try booting up off a 7.x bootcd and see if I can gzip
--test the archive from the usb stick. 

Peg




Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread _
2012/2/14, APseudoUtopia :
> On Mon, Feb 13, 2012 at 8:56 PM, _  wrote:
>> Hi,
>>
>> Before making the move from 7.0 to 8.2, I ran a little script that did a
>> backup of selected files
>> and folders.
>>
>> Trying to recover these files on 8.2, I found that some of the archives -
>> unfortunately those with
>> the files that are dear to me - are corrupted.
>>
>> In other words, I just wanted to ask if there's anyone on here, who knows
>> of a good repair
>> utility for corrupted tar.gz archives?
>>
>> Thanks
>>
>> pancakeking79
>
> Hrm... What command/script did you run to create the archive? How did
> you transfer it over to the new system? What command are you using to
> attempt to extract it, and what error is it giving?
>
> You can try:
> gunzip < archive.tar.gz > archive.partial.tar
> Which may or may not give you some of the files in the
> archive.partial.tar file.
>
> What does gzip --test archive.tar.gz give?
>

I don't have the script anymore. It is among the files lost, but it was pretty
much straightforward, making use of:
tar -czf backupfile.tar.gz folders/ of/ my/ choice/.

After creating the backups I just cp(1)ed them to an msdosfs formatted
usb stick and got them onto 8.2 this way, so the famous ascii/binary
trap shouldn't be
an issue here.

Here are some of the outputs I get:

# ls
setcd12292011.tar.gz    sr12292011.tar.gz    su12292011.tar.gz
# tar -xvf sr12292011.tar.gz
x root/
[snipped]
tar: Error exit delayed from previous errors.
# tar -xvf su12292011.tar.gz
x usr/home/user/
[snipped]
tar: Damaged tar archive
tar: Retrying...
tar: Damaged tar archive
tar: Retrying...
tar: Damaged tar archive
tar: Retrying...
tar: Damaged tar archive
tar: Retrying...
tar: Damaged tar archive
tar: Retrying...
tar: gzip decompression failed
tar: Error exit delayed from previous errors.
# gzip --test sr12292011.tar.gz
gzip: data stream error
gzip: sr12292011.tar.gz: uncompress failed
# gunzip < sr12292011.tar.gz > archive.partial.tar
gunzip: data stream error

It recreates something, but the most important files, which reside in
subfolders of the given tar.gz archives are gone, i.e. the subfolders
are empty.
The gunzip strategy you mentioned yields the same as a regular tar
-xvf file.tar.gz.

Pegasus, I have yet to try the pax(1) approach. I will let you know
about how that went.


Re: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread APseudoUtopia
On Mon, Feb 13, 2012 at 8:56 PM, _  wrote:
> Hi,
>
> Before making the move from 7.0 to 8.2, I ran a little script that did a
> backup of selected files
> and folders.
>
> Trying to recover these files on 8.2, I found that some of the archives -
> unfortunately those with
> the files that are dear to me - are corrupted.
>
> In other words, I just wanted to ask if there's anyone on here, who knows
> of a good repair
> utility for corrupted tar.gz archives?
>
> Thanks
>
> pancakeking79

Hrm... What command/script did you run to create the archive? How did
you transfer it over to the new system? What command are you using to
attempt to extract it, and what error is it giving?

You can try:
gunzip < archive.tar.gz > archive.partial.tar
Which may or may not give you some of the files in the
archive.partial.tar file.

What does gzip --test archive.tar.gz give?
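For reference, this is what gzip --test reports on an intact versus a
truncated file (the truncation is simulated with head):

```shell
# An intact gzip stream passes --test; a truncated one fails it.
printf 'some data\n' | gzip > ok.gz
gzip --test ok.gz && echo "ok.gz: intact"
head -c 15 ok.gz > broken.gz          # chop off the tail of the stream
gzip --test broken.gz 2>/dev/null || echo "broken.gz: corrupt"
```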


RE: corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread Pegasus Mc Cleaft
Hi, 

It would depend, I think, on how the file is corrupted.  Is it the
compressed data that is corrupted or the uncompressed tar stream?  You might
want to try the pax(1) utility to see if it is able to push through the
errors (if it's in the tar stream). 

I was able to recover data from a corrupted cpio file that I created
(I was using huge file lengths and didn't realize that cpio had a file size
limit). 

Peg

> -Original Message-
> From: owner-freebsd-questi...@freebsd.org [mailto:owner-freebsd-
> questi...@freebsd.org] On Behalf Of _
> Sent: 14 February 2012 01:57
> To: freebsd-questions@freebsd.org
> Subject: corrupted tar.gz archive - I lost my backups :)/:(
> 
> Hi,
> 
> Before making the move from 7.0 to 8.2, I ran a little script that did a
> backup of selected files and folders.
> 
> Trying to recover these files on 8.2, I found that some of the archives
> - unfortunately those with the files that are dear to me - are
> corrupted.
> 
> In other words, I just wanted to ask if there's anyone on here, who
> knows of a good repair utility for corrupted tar.gz archives?
> 
> Thanks
> 
> pancakeking79



corrupted tar.gz archive - I lost my backups :)/:(

2012-02-13 Thread _
Hi,

Before making the move from 7.0 to 8.2, I ran a little script that did a
backup of selected files
and folders.

Trying to recover these files on 8.2, I found that some of the archives -
unfortunately those with
the files that are dear to me - are corrupted.

In other words, I just wanted to ask if there's anyone on here, who knows
of a good repair
utility for corrupted tar.gz archives?

Thanks

pancakeking79


Re: backups & cloning

2009-09-30 Thread Polytropon
About the dd method:

On Wed, 30 Sep 2009 11:30:58 -0400, Jerry McAllister  wrote:
> It can be used, but it is not a good way to do it.

For regular backups or even for cloning, it's not very
performant, I agree. I'm mostly using this method for
forensic purposes, when I need a copy of a media (a
whole disk, one slice or a particular partition) to toy
around with, so I don't mess up the original data.



> That is because it copies sector by sector and the new 
> disk/filesystem may not match the old exactly. 

That's a known problem. Another problem is time complexity.
The dd program does copy everything - even the unused disk
blocks (which don't need to be copied). This makes this
process often last very long.
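The effect is easy to demonstrate with a sparse file standing in for a
mostly-empty disk: dd reads and writes every block, so the copy ends up
fully allocated even though the source occupies almost nothing:

```shell
# A 10 MB sparse file: large apparent size, almost no blocks allocated.
truncate -s 10M sparse.img
dd if=sparse.img of=copy.img bs=1M 2>/dev/null
# The copy has the same size but is fully written out (~10240 KB used).
du -k sparse.img copy.img
```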



> Besides
> when it is newly written on a file by file basis, it can
> be more efficiently laid out and accommodate any changes in
> size and sector addressing.  dd cannot do that.

That's true. This is where tools like cpdup and rsync come
to mind (for creating backups or clones).



-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-30 Thread Jerry McAllister
On Wed, Sep 30, 2009 at 05:08:05AM +0200, Polytropon wrote:

> Forgot to mention this:
> 
> 
> On Tue, 29 Sep 2009 22:23:00 -0400, PJ  wrote:
> > 1. will the s1a slice dump the entire system, that is, the a, d, e, f
> > and g slices or is it partitions?
> 
> The ad0s1 slice (containing the a, d, e, f and g partitions) can
> be copied 1:1 with dd. By using dump + restore, the partitions
> need to be copied after another. In each case, the entire system
> will be copied. For this purpose, even the long lasting
> 
>   # dd if=/dev/ad0 of=/dev/da0 bs=1m
>   # dd if=/dev/ad0 of=/dev/da0 bs=512 count=1
> 
> method can be used.
> 

It can be used, but it is not a good way to do it.
That is because it copies sector by sector and the new 
disk/filesystem may not match the old exactly.  Besides
when it is newly written on a file by file basis, it can
be more efficiently laid out and accommodate any changes in
size and sector addressing.  dd cannot do that.

jerry


> -- 
> Polytropon
> Magdeburg, Germany
> Happy FreeBSD user since 4.0
> Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-30 Thread Jerry McAllister
On Tue, Sep 29, 2009 at 10:48:30PM -0400, PJ wrote:

> Polytropon wrote:
> > On Tue, 29 Sep 2009 21:26:19 -0400, PJ  wrote:
> >   
> >> But what does that mean? But ad2s1a has just been newfs'd - so how can
> >> it be dumped if its been formatted?
> >> 
> > When you're working on this low level, triple-check all your
> > commands. Failure to do so can cause data loss. In the example
> > you presented, ad1 was the source disk, ad2 the target disk.
> > You DON'T want to newfs your source disk.
> >   
> >> And what exactly does stdout mean?
> >> 
> >
> > This refers to the standard output. In most cases, this is the
> > terminal, the screen, such as
> >
> > # cat /etc/fstab
> >
> > will write the /etc/fstab to stdout. If you redirect it, for
> > example by using > or |, you can make stdout a file, or the
> > input - stdin - for another program.
> >
> > This is how the dump | restore process works: It leaves out
> > the "use the tape" or "use the file", but instead directs the
> > output of dump - the dump itself - to the restore program as
> > input to be restored.
> >   
> >> What is dump doing? outputting what to where exactly?
> >> 
> > The dump program is outputting a dump of the specified partition
> > to the standard output, which in this case is directly trans-
> > mitted to the restore program, which "picks it up" and processes
> > it = restores it.
> >
> >> I don't see it or
> >> should I say, understand this at all.
> >
> > Have a look at the command line again, simplified:
> >
> > # dump -0 -f - /dev/ad0s1a | restore -r -f -
> >
> > Run the dump program, do a full backup of the 1st partition of
> > the 1st slice of the 1st disk, write this dump to the standard
> > output, pipe this output to the restore program, do a full
> > restore, read the dump to be restored from standard input.
> >   
> >> and then the restore is from what
> >> to where?
> >
> > The restore program gets the dump to be restored from the standard
> > input - remember, that's the output of the dump program - and
> > writes it to the current working directory. That's the reason
> > why you should always check with
> >
> > # pwd
> >
> > in which directory you're currently located, because that will
> > be the place where the restored data will appear.
> >   
> >> "write error 10 blocks into volume 1
> >> do you want to restart:"
> >> 
> >
> > Could you present the command you're actually using, especially
> > with where you issued it from?
> >   
> Duh... I think I see where this is leading... I'm pretty sure it was
> issued from / which makes it redundant, right? I should have issued it
> from somewhere else, like from home, usr or whatever but not from / as
> that is what I was trying to dump :-[

No, that is not a problem.  You can be in any directory when you run
the dump command; but for the restore to work, you have to be in the
receiving filesystem/directory.

I just noticed that I had missed that you were newfs-ing the wrong
partition. That was the one you wanted to read from, and newfs would
wipe out everything on it.  If you do the newfs - a good idea - it has
to be on the new filesystem you will be writing to.

jerry



> >   
> >> The first time I tried with -L the error was 20 blocks...
> >> Both the slices for dump from and to are same size (2gb) and certainly
> >> not full by a long shot (if I recall correctly, only about 14% is used)
> >> 
> >
> > I'm not sure where you put the dump file. "Write error" seems
> > to indicate one of the following problems:
> > a) The snapshot cannot be created.
> > b) The dump file cannot be created.
> >
> >
> >
> >   
> >> And what's this about a snapshot? AFAIK, I'm not making a snapshot;
> >> anyway, there is no long pause except for the dumb look on my face upon
> >> seeing these messages.
> >> 
> >
> > Check "man dump" and search for the -L option. The dump program,
> > in order to obtain a dump from a file system that's currently in
> > use, will need to make a snapshot because it cannot handle data
> > that is changing. So it will dump the data with the state of the
> > snapshot, allowing the file system to be altered afterwards.
> >
> >
> >
> >   
> >> As it is, I am currently erasing the brand new 500gb disk on which I
> >> want to restore.
> >> 
> >
> > Excellent.
> >
> >
> >
> >   
> >> Things started out really bad... don't understand what is going on.
> >> 
> >
> > Polite question: Have you read the manpages and the section in the
> > Handbook?
> >   
> Yes... but my brain can't handle it all so quickly... and being as
> impatient as I am, I tend to miss things on the run... it usually comes
> to me sooner or later... unfortunately, it's more often later than
> sooner... I've been reading the stuff in the man pages, and getting more
> confused by googling... Actually, I've been trying to get things
> straightened out for at least 3 days already.
> >
> >
> >   
> >> I
> >> installed a minimal 7.2, boot

Re: backups & cloning

2009-09-30 Thread Jerry McAllister
On Tue, Sep 29, 2009 at 07:44:38PM -0400, PJ wrote:

> I am getting more and more confused with all the info regarding backing
> up and cloning or moving systems from disk to disk or computer to computer.
> I would like to do 2 things:
> 1. clone several instances of 7.2 from and existing installation
> 2. set up a backup script to back up changes either every night or once
> a week
> 
> There are numerous solutions out there; but they are mostly confusing,
> erroneous, or non-functional.
> To start, could someone please explail to the the following, which I
> found here: http://forums.freebsd.org/showthread.php?t=185

This page is essentially correct.   But, it covers several situations.
You need to decide which situation you are working on.

Are  you trying to make a backup of your system in case something
fails or are you trying to make a clone to boot in another system?

As for the restore, are you trying to use it to create a disk to
move to another machine to boot with or to recover a failed disk
on the same machine or just have a bootable disk handy if your
current one fails?  Each is different.

If you are just making a dump in case of a disk failure, then
just dump to a file on some removable media (USB drive, Tape, 
across the net, etc) and forget about doing the restore for now.
You do that if the disk fails and you have acquired a new disk
and prepared it for service including slicing and partitioning 
and putting an MBR on it and a boot sector.  Then you use the
fixit boot to restore those backups.

If you are making a clone drive to move to another system
then you have to slice and partition the new drive and then
do the piped dump-restores you indicate below.

If you are making a disk to switch to in case of a failure, you
start by making a slice and partitioned drive and do the dump-restores.
But, then you keep it current using rsync.   Note that in this case, you
only do the dump-restore once.  The rsync does all the updating.  
Alternatively you might use some of the mirroring software to make a 
mirror drive that is [almost] always an exact copy.  That is a completely 
different process.

If you are making a disk to move to another machine then you probably
do not want the -u switch on the dump command.That is meant for
making a series of full and change dumps as backups. 

> 
> You can move system from disk to disk on fly with
> Code:
> 
> $ newfs -U /dev/ad2s1a
> $ mount /dev/ad2s1a /target
> $ cd /target
> $ dump -0Lauf - /dev/ad1s1a  | restore -rf -
> 
> you can do the same using sudo
> Code:
> 
> $ sudo echo
> $ sudo dump -0Lauf - /dev/ad1s1a  | sudo restore -rf -
> 
> This may be clear to someone; it certainly is not to me.
> As I understand it, newfs will (re)format the slice.
> Ok. But what is standard out in the above example? The dump is from
> where to where?
> Could someone clarify all this for me?

The only thing the sudo does is make you root.
If you are already root, you don't need it - as in the first example
you give.   In this particular case, it is probably better to just
be root and run this as root.   That wouldn't always be the case in
every sudo situation.

The dump command as you give it reads the /dev/ad1s1a file system
and sends it to standard out.   That is what the  '-f -'  part of
the command tells it.   The restore command reads from standard in
and restores the data sent to it from the dump via the pipe which
is the '|'.  The pipe takes whatever is in the standard out from
where it is coming and puts it in the standard in where it is going.
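The same stdout-to-stdin plumbing can be tried with tar, which makes
the dump | restore idea tangible: the left side writes an archive to
standard out, the pipe hands it over, and the right side unpacks it
from standard in:

```shell
# Writer archives src to stdout; reader unpacks stdin into dst.
mkdir -p src dst
printf 'hello\n' > src/file.txt
tar -cf - -C src . | tar -xf - -C dst
cat dst/file.txt    # -> hello
```

The `-f -` on both sides plays the same role as in the dump and
restore commands: "use standard out/in instead of a file".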

> So far, I have been unable to dump the / slice, not even with the -L
> option. I am trying to dump the whole system (all the slices)except swap
> to a usb (sata2 500gb disk) and then restore to another computer with
> 7.2 minimal installation.

> Slices ad2s1d,e,f and g dump ok to usb. a does not - errors ("should use
> -L when dumping live filesystems").

So, what are the errors.

> Do you have to newfs each slice before restoring?  But if you are
> restoring on a running 7.2 system, don't you have to restore to another
> disk than the one the system is on?

You have asked two unrelated questions.   First:
No, but it is a convenient way to make sure there is a clean
receiving place.  Actually, I don't bother doing the restore.
I just write a dump file and leave it there in case I need to
restore from it later.   So my dump command would look something like:

   dump -0Laf /target/ad1s1adump /dev/ad1s1a

So the file ad1s1adump would contain the dump.  You might add
in some characters that identify the date of the dump in the name
of the file.

Also, you can mount the source filesystem and dump using the
mount name.   So, if /dev/ad1s1a is normally mounted as /work,
then it would work - and be mnemonic to do:

  dump -0Laf /target/workdum

Re: backups & cloning

2009-09-30 Thread Warren Block

On Wed, 30 Sep 2009, Polytropon wrote:


On Tue, 29 Sep 2009 21:49:01 -0600 (MDT), Warren Block  
wrote:

So usually I back up /, /var, and /usr to files
on a USB disk or sshfs.  Then I switch to the new target system, booting
it with a FreeBSD disk and doing a minimal install.  That makes sure the
MBR is installed, gives me a chance to set all the filesystem sizes, and
newfses them.


Similar here. In most cases, the FreeBSD live system is completely
sufficient: run sysinstall, slice, boot loader, partitions, drop
to shell; mount USB stick, restore from files located there.


Then I restore from the dump files created earlier, over the running
system.  First /usr, then /var, then /.  On reboot, it's a clone.


This means you bring up the minimal (installed) system first, then
do the restore? Why not do it right after the basic steps of
preparation right from the install CD?


Probably mostly inertia, but I also like that it makes certain 
everything has been done to make a complete bootable system.  Seems like 
when I do it manually, CRS syndrome kicks in and I forget a step which 
ends up taking more time.


-Warren Block * Rapid City, South Dakota USA
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: backups & cloning

2009-09-30 Thread Giorgos Keramidas
On Tue, 29 Sep 2009 22:23:00 -0400, PJ  wrote:
> Polytropon wrote:
>> Assuming nobody uses tape drives anymore, you need to specify
>> another file, which is the standard output in this case, which
>> may not be obvious, but it is if we reorder the command line:
>>
>> # dump -0 -L -a -u -f - /dev/ad1s1a | restore -r -f -

> 1. will the s1a slice dump the entire system, that is, the a, d, e, f
> and g slices or is it partitions?

No, dump will backup a single partition (or filesystem) specified by the
options you pass, i.e.:

*   Dump only the ad0s1a partition to standard output:

dump -0 -L -a -u -f - /dev/ad0s1a

*   Dump only the ad0s1d partition to standard output:

dump -0 -L -a -u -f - /dev/ad0s1d

You will have to run multiple `dump' instances to backup more than one
partition.  For example, a short script that I run daily to save dumps
at level 2 for all my laptop's UFS filesystems includes code that is
equivalent to the following set of commands:

TODAY=$( date -u '+%Y-%m-%d' )

dump -2 -L -a -u -f - /dev/ad0s1a > kobe.2.${TODAY}.ad0s1a
dump -2 -L -a -u -f - /dev/ad0s1d > kobe.2.${TODAY}.ad0s1d
dump -2 -L -a -u -f - /dev/ad0s1e > kobe.2.${TODAY}.ad0s1e
dump -2 -L -a -u -f - /dev/ad0s3d > kobe.2.${TODAY}.ad0s3d

Each partition is dumped to a separate output file, so I can restore
them separately.
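The repeated commands above can equally be written as a loop.  This sketch only prints the commands (`echo` stands in for running them, since real dumps need root and the real devices); the hostname and partition list are taken from the example:

```shell
# Sketch: emit one dump command per partition, as in the script above.
TODAY=$(date -u '+%Y-%m-%d')
COUNT=0
for part in ad0s1a ad0s1d ad0s1e ad0s3d; do
    echo "dump -2 -L -a -u -f - /dev/${part} > kobe.2.${TODAY}.${part}"
    COUNT=$((COUNT + 1))
done
```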

>>> I am trying to dump the whole system (all the slices)except swap
>>> to a usb (sata2 500gb disk) and then restore to another computer with
>>> 7.2 minimal installation.
>>
>> I think that's not possible because dump operates on file system
>> level, which means on partitions, not on slices.
>
> I've been very confused with the slices/partitions.  I meant above, to
> dump the whole slice - but I guess that it has to be done with the
> partitions.

You cannot dump a full slice (what other operating systems call a "BIOS
partition") in a single dump run.  Use multiple dump commands like the
ones shown above.

>>> Slices ad2s1d,e,f and g dump ok to usb. a does not - errors ("should use
> -L when dumping live filesystems")
>
> and when I do dump -0Laf  /dev /ad1s1a  /dev/da0s1a
> the errors are
> "write error 10 blocks into volume 1
> do you want to restart:"

This is not a correct invocation of dump.  You can't pass multiple
partitions in one instance of dump, like /dev, /ad1s1a and /dev/da0s1a.

If the order of dump options confuses you, reorder them the same way
Polytropon did, so that the `-f OUTPUT' option stands out a bit more:

dump -0 -a -L -f - /ad1s1a

A break-down of these options is now easier to understand:

* This will save a level 0 dump (the -0 option) of partition
  /ad1s1a.

* The dump will not be split into multiple tape `archives' (the -a
  option).

* Before trying to save the current state of the input partition,
  dump will create a `snapshot' so that a consistent state of all
  files in the partition will be saved in the dumped archive (the -L
  option).

* The backup archive will be sent to standard output (the '-'
  argument of the -f option).

The special '-' value for the output file of the -f option may be a bit
confusing, but it is useful if you are going to immediately pipe dump's
output to the restore(8) program.  Otherwise, if you are just going to
save the dump archive to another disk, you could have used:

dump -0 -a -L -f dumpfile /ad1s1a

The dump utility would then save a dump archive to `dumpfile' instead of
writing everything to its standard output.

>> To illustrate a dump and restore process that involves several
>> partitions, just let me add this example:
>>
>> Stage 1: Initialize slice and partitions
>> # fdisk -I -B ad1
>> # bsdlabel -w -B ad1s1
>> # bsdlabel -e ad1s1
>> a: 512M * 4.2BSD 0 0 0
>> b: 1024M * swap
>> c: * * unused <--- don't change
>> e: 2G * 4.2BSD 0 0 0
>> f: 2G * 4.2BSD 0 0 0
>> g: 10G * 4.2BSD 0 0 0
>> h: * * 4.2BSD 0 0 0
>> ^KX (means: save & exit)
>> # newfs /dev/ad1s1a
>> # newfs -U /dev/ad1s1e
>> # newfs -U /dev/ad1s1f
>> # newfs -U /dev/ad1s1g
>> # newfs -U /dev/ad1s1h
>>
>> Stage 2: Go into SUM and prepare the source partitions
>
> why into SUM? I'm really the only user and I usually stay as root
> if SUM, shouldn't the # below be $?

Because when you are running in multi-user mode there may be services
and other background processes changing the files of the source
partitions while you are dumping them.

By going into single-user mode, you ensure that there is only one active
process in your system: the root shell you are using to back up and
restore the files.



Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 21:49:01 -0600 (MDT), Warren Block  
wrote:
> So usually I back up /, /var, and /usr to files 
> on a USB disk or sshfs.  Then I switch to the new target system, booting 
> it with a FreeBSD disk and doing a minimal install.  That makes sure the 
> MBR is installed, gives me a chance to set all the filesystem sizes, and 
> newfses them.

Similar here. In most cases, the FreeBSD live system is completely
sufficient: run sysinstall, slice, boot loader, partitions, drop
to shell; mount USB stick, restore from files located there.

For automated cloning, there are good examples around that let
you boot from DVD or USB stick / USB hard disk and automatically
prepare the source disk, then restore from the files. This is a
common method especially via SSH, so local media is needed only
for booting and perhaps for preparation.



> Then I restore from the dump files created earlier, over the running 
> system.  First /usr, then /var, then /.  On reboot, it's a clone.

This means you bring up the minimal (installed) system first, then
do the restore? Why not do it right after the basic steps of
preparation right from the install CD?


-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-29 Thread Warren Block

On Wed, 30 Sep 2009, Polytropon wrote:


On Tue, 29 Sep 2009 21:37:50 -0600 (MDT), Warren Block  
wrote:

Why make it harder than it needs to be?  Call it / or /var or /usr
instead of /dev/ad0s1whatever.  dump will handle it.


This works without problems as long as it is running from the
system to be copied. In case you use a live system, it doesn't
know anything about the associations between devices and the
mountpoints; this information is, as far as I know, obtained
via /etc/fstab. This is important to know especially if the
source and target disk have different layouts and concepts,
e. g. /dev/ad0s1d = /var -> /dev/da0s1e = /var (different
partition names for same subtree).


Yes, you're right.  I only realized that after sending... so I just sent 
an additional message.


-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Warren Block

On Tue, 29 Sep 2009, Warren Block wrote:


On Wed, 30 Sep 2009, Polytropon wrote:


On Tue, 29 Sep 2009 22:48:30 -0400, PJ  wrote:

Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
from somewhere else, like from home, usr or whatever but not from / as
that is what I was trying to dump :-[


The working directory only matters to the restore command.
The dump command just cares for the partition name. In order
to find out what partition corresponds with which subtree,
check /etc/fstab or run the

# mount
/dev/ad0s1a on / (ufs, local)
/dev/ad0s1d on /tmp (ufs, local, soft-updates)
/dev/ad0s1e on /var (ufs, local, soft-updates)
/dev/ad0s1f on /usr (ufs, local, soft-updates)
/dev/ad0s1g on /export/home (ufs, local, soft-updates)

command, as in the example above.


Why make it harder than it needs to be?  Call it / or /var or /usr instead of 
/dev/ad0s1whatever.  dump will handle it.  It's built for that.  If it's a 
live filesystem, add -L.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/backup-basics.html#AEN25814


Just to add a possibly more relevant example from the FAQ:

http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/disks.html#NEW-HUGE-DISK

That example has the user connect the new disk to the old system.  That 
works, but I've always felt it's too easy to get the disks mixed up and 
write to the wrong one.  So usually I back up /, /var, and /usr to files 
on a USB disk or sshfs.  Then I switch to the new target system, booting 
it with a FreeBSD disk and doing a minimal install.  That makes sure the 
MBR is installed, gives me a chance to set all the filesystem sizes, and 
newfses them.


Then I restore from the dump files created earlier, over the running 
system.  First /usr, then /var, then /.  On reboot, it's a clone.


-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 21:37:50 -0600 (MDT), Warren Block  
wrote:
> Why make it harder than it needs to be?  Call it / or /var or /usr 
> instead of /dev/ad0s1whatever.  dump will handle it. 

This works without problems as long as it is running from the
system to be copied. In case you use a live system, it doesn't
know anything about the associations between devices and the
mountpoints; this information is, as far as I know, obtained
via /etc/fstab. This is important to know especially if the
source and target disk have different layouts and concepts,
e. g. /dev/ad0s1d = /var -> /dev/da0s1e = /var (different
partition names for same subtree).




-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-29 Thread Warren Block

On Wed, 30 Sep 2009, Polytropon wrote:


On Tue, 29 Sep 2009 22:48:30 -0400, PJ  wrote:

Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
from somewhere else, like from home, usr or whatever but not from / as
that is what I was trying to dump :-[


The working directory only matters to the restore command.
The dump command just cares for the partition name. In order
to find out what partition corresponds with which subtree,
check /etc/fstab or run the

# mount
/dev/ad0s1a on / (ufs, local)
/dev/ad0s1d on /tmp (ufs, local, soft-updates)
/dev/ad0s1e on /var (ufs, local, soft-updates)
/dev/ad0s1f on /usr (ufs, local, soft-updates)
/dev/ad0s1g on /export/home (ufs, local, soft-updates)

command, as in the example above.


Why make it harder than it needs to be?  Call it / or /var or /usr 
instead of /dev/ad0s1whatever.  dump will handle it.  It's built for 
that.  If it's a live filesystem, add -L.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/backup-basics.html#AEN25814

-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Warren Block

On Wed, 30 Sep 2009, Polytropon wrote:


Forgot to mention this:


On Tue, 29 Sep 2009 22:23:00 -0400, PJ  wrote:

1. will the s1a slice dump the entire system, that is, the a, d, e, f
and g slices or is it partitions?


The ad0s1 slice (containing the a, d, e, f and g partitions) can
be copied 1:1 with dd. By using dump + restore, the partitions
need to be copied one after another. In each case, the entire system
will be copied. For this purpose, even the long lasting

# dd if=/dev/ad0 of=/dev/da0 bs=1m


This copies everything on the disk, including sectors not used by a 
filesystem.  So it usually takes a while.



# dd if=/dev/ad0 of=/dev/da0 bs=512 count=1


Not necessary, the first block was already copied, well, first.

-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Warren Block

On Tue, 29 Sep 2009, PJ wrote:


$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a  | restore -rf -


dump is reading /dev/ad1s1a and using stdout for output.
restore is writing to the current directory (/target) and is reading
from stdin.



But what does that mean? But ad2s1a has just been newfs'd


No.  Exact details are extremely important here.  ad2 is the target, 
dump is reading ad1.


And what exactly does stdout mean?  What is dump doing? Outputting
what to where exactly? I don't see it, or should I say, understand this
at all. And then the restore is from what to where?


The man page system is there to help you with this.  man dump and man 
restore show examples.  man stdout will help explain that.  Trying to do 
advanced operations without understanding these basics is going to be 
difficult, frustrating, and ultimately dangerous to your data.



A long pause while the system makes a snapshot is normal.

And what's this about a snapshot? AFAIK, I'm not making a snapshot;


But you are.  That's what the -L option to dump means, as described in 
the man page.


-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 22:48:30 -0400, PJ  wrote:
> Duh I think I see where this is leading... I'm pretty sure it was
> issued from / which makes it redundant, right? I should have issued it
> from somewhere else, like from home, usr or whatever but not from / as
> that is what I was trying to dump :-[

The working directory only matters to the restore command.
The dump command just cares for the partition name. In order
to find out what partition corresponds with which subtree,
check /etc/fstab or run the

# mount
/dev/ad0s1a on / (ufs, local)
/dev/ad0s1d on /tmp (ufs, local, soft-updates)
/dev/ad0s1e on /var (ufs, local, soft-updates)
/dev/ad0s1f on /usr (ufs, local, soft-updates)
/dev/ad0s1g on /export/home (ufs, local, soft-updates)

command, as in the example above.
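The device-to-mountpoint mapping can also be pulled apart with plain shell.  This sketch parses sample mount(8)-style lines (the two shown above) rather than querying a live system:

```shell
# Sketch: extract "mountpoint device" pairs from mount(8)-style output.
MOUNT_OUTPUT='/dev/ad0s1a on / (ufs, local)
/dev/ad0s1f on /usr (ufs, local, soft-updates)'
MAP=$(echo "$MOUNT_OUTPUT" | while read dev _on mnt _rest; do
    echo "${mnt} ${dev}"
done)
echo "$MAP"
```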



> Yes... but my brain can't handle it all so quickly... and being as
> impatient as I am, I tend to miss things on the run... it usually comes
> to me sooner or later... unfortunately, it's more often later than
> sooner...

As long as it doesn't damage your data, it's no real problem.



> I've been reading the stuff in the man pages, and getting more
> confused by googling...

FreeBSD has by far the best documentation among the operating
systems I've come across. The manpages give a good overview, and the
handbook illustrates many daily procedures with examples.



> Actually, I've been trying to get things
> straightened out for at least 3 days already.

Maybe this "pattern" can help you understand the "strange
piping dump into restore" command:

# cd targetdir
# dump -0 -L -a -u -f - sourcepartition | restore -r -f -

It's not that complicated, but you have to be SURE about certain
things.



> Well, that's why I'm really checking my new disk... but it could be the
> motherboard... I've always suspected it had something of a glitch in it
> ever since I got it... I don't think just a slower cpu should give it so
> many problems... a twin computer has the same hardware except for the
> cpu and it gives far less problems - only MS related.

You should consider checking some basic stuff, such as running
a memtest CD or building world + kernel (just for testing
purposes, load generating, and CPU utilization; GENERIC kernel
will be fine).



> Something about a boot sector - this is not the first time I have seen
> this identical error but on much older hdd's, though still satas.
> This does make me think that these problems are of hardware origin -
> motherboard or sata connectors - I find they are rather Disneyesque
> (Mickey Mouse) or just plain flimsy.

In this case, you should install the smartctl program by running

# pkg_add -r smartmontools

or installing them via ports by running

# cd /usr/ports/sysutils/smartmontools
# make install clean

Then run the

# smartctl -a /dev/da0

command to check the disk. Refer to

# man smartctl

for other options that can help to identify possible hardware
errors.



> Time to hit the sack... another day of computer frustration coming up...

Doesn't have to be.



> I'm under pressure to lear Flash and have to set up a reliable server to
> test a site I am designing and setting up. Have to do it myself... can't
> afford about anything today. :-(

You're learning things this way, and that's what makes "our" service
so expensive - because "we" know so much. :-)



-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-29 Thread Polytropon
Forgot to mention this:


On Tue, 29 Sep 2009 22:23:00 -0400, PJ  wrote:
> 1. will the s1a slice dump the entire system, that is, the a, d, e, f
> and g slices or is it partitions?

The ad0s1 slice (containing the a, d, e, f and g partitions) can
be copied 1:1 with dd. By using dump + restore, the partitions
need to be copied one after another. In each case, the entire system
will be copied. For this purpose, even the long lasting

# dd if=/dev/ad0 of=/dev/da0 bs=1m
# dd if=/dev/ad0 of=/dev/da0 bs=512 count=1

method can be used.
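Since dd works on raw bytes, the same mechanism can be demonstrated safely on a small file instead of a disk device (the file names here are illustrative, not from the original thread):

```shell
# Sketch: dd copies bytes verbatim; a temp file stands in for /dev/ad0.
printf 'mbr+slices+partitions' > /tmp/srcdisk.img
dd if=/tmp/srcdisk.img of=/tmp/dstdisk.img bs=512 2>/dev/null
if cmp -s /tmp/srcdisk.img /tmp/dstdisk.img; then
    RESULT=identical
else
    RESULT=different
fi
echo "$RESULT"
```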



-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 22:23:00 -0400, PJ  wrote:
> I feel a bit stupid, as usual, my carelessness led me to miss the
> difference between ad1 and ad2... dumb, dumb, dumb.

As long as you realize it BEFORE any writing operation, it's
no problem. Keep in mind that the numbering of ad*, as well
as of da* (which your USB disk will probably show up as)
depends on the position of the drive on the ATA controller,
e. g.   primary master   -> /dev/ad0
primary slave-> /dev/ad1
secondary master -> /dev/ad2
secondary slave  -> /dev/ad3
...
This numbering scheme even applies if you're using
the master channels only, so in this example you would have
the ad0 and ad2 drives, no matter whether ad1 is present or not.



> But... 2 questions:
> 1. will the s1a slice dump the entire system, that is, the a, d, e, f
> and g slices or is it partitions?

As far as I remember, you cannot dump slices. You can dump
partitions only, or, in other words, the dump program does
operate on file systems (and those are represented by one
partition per file system).

If you want to duplicate slice-wise, dd should be used, such
as the following example:

# dd if=/dev/ad0s1 of=/mnt/usb/slice1.dd bs=1m

If you want to duplicate partition-wise, you need to dump and
restore each partition, just as my verbose example showed.



> I've been very confused with the slices/partitions.

The term slice refers to what MICROS~1 calls "DOS primary
partitions" in the widest sense. Due to DOS limitations,
PCs only support 4 slices per disk.

The term partition refers to a sub-area inside a FreeBSD
slice. Partitions can be used to separate functional
subtrees partition-wise. Of course, I often see settings
where there's only one partition on the slice, that's
a common thing.



> I meant above, to dump the whole slice - but I guess that it has to be
> done with the partitions.

Partitions: use dump + restore
Slices: use dd
Whole disks: use dd
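The rule of thumb above can be written down as a tiny lookup; this is only an illustration (the function name is invented for the sketch):

```shell
# Sketch: map the kind of object being copied to the suggested tool,
# following the partition/slice/disk rule of thumb above.
tool_for() {
    case "$1" in
        partition)  echo "dump + restore" ;;
        slice|disk) echo "dd" ;;
        *)          echo "unknown" ;;
    esac
}
echo "partition: $(tool_for partition)"
echo "slice:     $(tool_for slice)"
```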



> and when I do dump -0Laf  /dev /ad1s1a  /dev/da0s1a
> the errors are
> "write error 10 blocks into volume 1
> do you want to restart:"

Okay, everything is clear now. Just "interpret" the dump
command:

dump full snapshot autosize output=/dev /ad1s1a

First of all, /ad1s1a does not exist. Then, /dev/da0s1a is
ignored. The command doesn't make sense. The syntax of dump
can be simplified as follows:

dump (other options) -f outputfile inputdevice

Note that outputfile can be - (the standard output) which
can then be redirected somewhere else.

At this position, I need to ask you: What are you trying to do?
a) I want to dump ad1s1a as it is onto the disk that
   is da0.
b) I want to dump ad1s1a as a file on the disk that is
   da0.

Let's take a) first. I assume you have prepared the disk da0
as shown in my earlier example, or you simply have used sysinstall
to create a slice on the disk and partitions inside the slice that
are big enough to hold the data you want to transfer to them.

Check their existence:

# ll /dev/ad0* /dev/da0*

The listing should give you similar slices and partitions both
for the source and the target disk.

First you mount the target disk and change the working directory
to it:

# mount /dev/da0s1a /mnt
# cd /mnt

Now you dump from your ad0 disk to where you currently are:

# dump -0 -L -a -u -f - /dev/ad0s1a | restore -r -f -

Proceed with the other partitions. Mount them, change into
their mountpoints that are now relative to /mnt (e. g. /mnt/var,
/mnt/home) and repeat this command, substituting the source
/dev/ad0s1[defg]. Finally, change back to / and umount 
everything successively, sync, done.
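The per-partition procedure just described can be sketched as a loop.  Here `echo` replaces the real commands (which need root, real devices, and ideally single-user mode), and the partition/mountpoint pairs are only an example layout:

```shell
# Sketch: one mount + dump|restore round per partition, as described
# above; echo prints what would be run instead of running it.
LINES=0
for pair in a:/mnt e:/mnt/var f:/mnt/usr g:/mnt/home; do
    part=${pair%%:*}
    mnt=${pair#*:}
    echo "mount /dev/da0s1${part} ${mnt}"
    echo "cd ${mnt} && dump -0 -L -a -u -f - /dev/ad0s1${part} | restore -r -f -"
    LINES=$((LINES + 2))
done
```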

For case b) it's much easier. When you want to create data
files, you don't need to slice / partition your USB disk, just
newfs and mount it:

# newfs /dev/da0
# mount /dev/da0 /mnt

Now you can create all the data files for the different partitions:

# dump -0 -L -a -u -f /mnt/root.dump /dev/ad0s1a
# dump -0 -L -a -u -f /mnt/tmp.dump /dev/ad0s1d
# dump -0 -L -a -u -f /mnt/var.dump /dev/ad0s1e
# dump -0 -L -a -u -f /mnt/usr.dump /dev/ad0s1f
# dump -0 -L -a -u -f /mnt/home.dump /dev/ad0s1g



> The first time I tried with -L the error was 20 blocks...
> Both the slices for dump from and to are same size (2gb) and certainly
> not full by a long shot (if I recall correctly, only about 14% is used)

As far as I see, the command line just was wrong.


> why into SUM?

The idea behind doing dump / restore in SUM is - in addition to
unmounted partitions - to ensure that no write access disturbs
the reading process from the partitions. Of course, it's possible
to use -L and stay in MUM.



> I'm really the only user and I usually stay as root

It's valid to perform dump / restore as root.



> if SUM, shouldn't the # below be $?

No. The # indicates root permissions in any shell. The $ indicates
non-root access.

Re: backups & cloning

2009-09-29 Thread PJ
Polytropon wrote:
> On Tue, 29 Sep 2009 21:26:19 -0400, PJ  wrote:
>   
>> But what does that mean? But ad2s1a has just been newfs'd - so how can
>> it be dumped if its been formatted?
>> 
>
> When you're working on this low level, triple-check all your
> commands. Failure to do so can cause data loss. In the example
> you presented, ad1 was the source disk, ad2 the target disk.
> You DON'T want to newfs your source disk.
>
>   
>> And what exactly does stdout mean?
>> 
>
> This refers to the standard output. In most cases, this is the
> terminal, the screen, such as
>
>   # cat /etc/fstab
>
> will write the /etc/fstab to stdout. If you redirect it, for
> example by using > or |, you can make stdout a file, or the
> input - stdin - for another program.
>
> This is how the dump | restore process works: It leaves out
> the "use the tape" or "use the file", but instead directs the
> output of dump - the dump itself - to the restore program as
> input to be restored.
>
>
>
>   
>> What is dump doing? outputting what to where exactly?
>> 
>
> The dump program is outputting a dump of the specified partition
> to the standard output, which in this case is directly trans-
> mitted to the restore program, which "picks it up" and processes
> it = restores it.
>
>
>
>   
>> I don't see it or
>> should I say, understand this at all.
>> 
>
> Have a look at the command line again, simplified:
>
>   # dump -0 -f - /dev/ad0s1a | restore -r -f -
>
> Run the dump program, do a full backup of the 1st partition of
> the 1st slice of the 1st disk, write this dump to the standard
> output, pipe this output to the restore program, do a full
> restore, read the dump to be restored from standard input.
>
>
>
>   
>> and then the restore is from what
>> to where?
>> 
>
> The restore program gets the dump to be restored from the standard
> input - remember, that's the output of the dump program - and
> writes it to the current working directory. That's the reason
> why you should always check with
>
>   # pwd
>
> in which directory you're currently located, because that will
> be the place where the restored data will appear.
>
>
>
>   
>> "write error 10 blocks into volume 1
>> do you want to restart:"
>> 
>
> Could you present the command you're actually using, especially
> with where you issued it from?
>   
Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
from somewhere else, like from home, usr or whatever but not from / as
that is what I was trying to dump :-[
>
>
>   
>> The first time I tried with -L the error was 20 blocks...
>> Both the slices for dump from and to are same size (2gb) and certainly
>> not full by a long shot (if I recall correctly, only about 14% is used)
>> 
>
> I'm not sure where you put the dump file. "Write error" seems
> to indicate one of the following problems:
>   a) The snapshot cannot be created.
>   b) The dump file cannot be created.
>
>
>
>   
>> And what's this about a snapshot? AFAIK, I'm not making a snapshot;
>> anyway, there is no long pause except for the dumb look on my face upon
>> seeing these messages.
>> 
>
> Check "man dump" and search for the -L option. The dump program,
> in order to obtain a dump from a file system that's currently in
> use, will need to make a snapshot because it cannot handle data
> that is changing. So it will dump the data with the state of the
> snapshot, allowing the file system to be altered afterwards.
>
>
>
>   
>> As it is, I am currently erasing the brand new 500gb disk on which I
>> want to restore.
>> 
>
> Excellent.
>
>
>
>   
>> Things started out really bad... don't understand what is going on.
>> 
>
> Polite question: Have you read the manpages and the section in the
> Handbook?
>   
Yes... but my brain can't handle it all so quickly... and being as
impatient as I am, I tend to miss things on the run... it usually comes
to me sooner or later... unfortunately, it's more often later than
sooner... I've been reading the stuff in the man pages, and getting more
confused by googling... Actually, I've been trying to get things
straightened out for at least 3 days already.
>
>
>   
>> I
>> installed a minimal 7.2, booted up and turned to another computer to do
>> some serious work. About 2 hours and 49 minutes later I notice messages
>> on the 7.2 about a page fault or something like that and then the system
>> reboots.
>> 
>
> This often indicates a hardware problem...
>   
Well, that's why I'm really checking my new disk... but it could be the
motherboard... I've always suspected it had something of a glitch in it
ever since I got it... I don't think just a slower cpu should give it so
many problems... a twin computer has the same hardware except for the
cpu and it gives far less problems - only MS related.
>
>
>   
>> Obviously with errors... but then I reboot again and it comes
>> up... I t

Re: backups & cloning

2009-09-29 Thread PJ
Olivier Nicole wrote:
 $ newfs -U /dev/ad2s1a
 $ mount /dev/ad2s1a /target
 $ cd /target
 $ dump -0Lauf - /dev/ad1s1a  | restore -rf -
 
>>> [...]
>>>   
>> But what does that mean? But ad2s1a has just been newfs'd - so how can
>> 
>
> That's ad*1*s1a that has just been formatted, not ad2...
>
> Best,
>
> Olivier
>
>   
Thanks for that.  It took me a while to see that.


Re: backups & cloning

2009-09-29 Thread PJ
You are a Master among masters... extraordinary understanding of the
genre and very, very clear explanations...
I guess my filter between the brain and the computer is a bit foggy... :-(
I really appreciate your explanations.
But I still have a couple of small questions below...


Polytropon wrote:
> On Tue, 29 Sep 2009 19:44:38 -0400, PJ  wrote:
>> This may be clear to someone; it certainly is not to me.
>> As I understand it, newfs will (re)format the slice.
>
> No. The newfs program does create a new file system. In
> other terminology, this can be called a formatting process.
> Note that NOT a slice, but a PARTITION is subject to this
> process. So
>
> # newfs -U /dev/ad2s1a
>
> does format the first partition (a) of the first slice (s1)
> of the third disk (ad2).
>
>
>
>> Ok, but what is standard out in the above example? The dump is from
>> where to where?
>
> According to the command
>
> # dump -0Lauf - /dev/ad1s1a | restore -rf -
>
> you need to understand that the main purpose of dump is to
> dump unmounted (!) file systems to the system's tape drive.
> Assuming nobody uses tape drives anymore, you need to specify
> another file, which is the standard output in this case, which
> may not be obvious, but it is if we reorder the command line:
>
> # dump -0 -L -a -u -f - /dev/ad1s1a | restore -r -f -
>
> You can see that -f - specifies - to be the file to back up to.
> The backup comes from /dev/ad1s1a.
>
> The restore program, on the other side of the | pipe, does
> usually read from the system's tape drive. But in this case,
> it reads from standard input as the -f - command line option
> indicates. It restores the data to where the working directory
> at the moment is.
>
> Here's an example (ad1 is source disk, ad2 is target disk):
>
> # newfs -U /dev/ad2s1a
> # mount /dev/ad2s1a /mnt
> # cd /mnt
> # dump -0Lauf - /dev/ad1s1a | restore -rf -
>
>> Could someone clarify all this for me?
>
> Hopefully hereby done. :-)
I feel a bit stupid, as usual, my carelessness led me to miss the
difference between ad1 and ad2... dumb, dumb, dumb.
Ok, so I see that this works if you have two different drives on the
same machine...
But... 2 questions:
1. will the s1a slice dump the entire system, that is, the a, d, e, f
and g slices or is it partitions?
>
>> So far, I have been unable to dump the / slice, not even with the -L
>> option.
>
> Always keep in mind: Use dump only on unmounted partitions.
>
>> I am trying to dump the whole system (all the slices)except swap
>> to a usb (sata2 500gb disk) and then restore to another computer with
>> 7.2 minimal installation.
>
> I think that's not possible because dump operates on file system
> level, which means on partitions, not on slices.
I've been very confused with the slices/partitions.
I meant above, to dump the whole slice - but I guess that it has to be
done with the partitions.
>> Slices ad2s1d,e,f and g dump ok to usb. a does not - errors ("should use
>> -L when dumping live filesystems)
and when I do dump -0Laf /dev/ad1s1a /dev/da0s1a
the errors are
"write error 10 blocks into volume 1
do you want to restart:"
The first time I tried with -L the error was 20 blocks...
Both the slices for dump from and to are the same size (2 GB) and certainly
not full by a long shot (if I recall correctly, only about 14% is used)

>
> Keep an eye on terminology, you're swapping them here: The
> devices ad2s1[defg] are partitions, not slices. The corresponding
> slice that holds them is ad2s1.
Sorry; now it's getting clearer.
>
> Anyway, if you can, don't dump mounted file systems. Go into
> single user mode, mount / as ro, and run dump + restore. If you
> can, use a live system from CD, DVD or USB, which makes things
> easier.
>
>> Do you have to newfs each slice before restoring?
>
> Partitions. You don't have to newfs them once they are formatted.
> It's just the usual way to ensure they are free of any data.
>
>> But if you are
>> restoring on a running 7.2 system, don't you have to restore to another
>> disk than the one the system is on?
>
> I don't understand this question right... if you're using a running
> system for dump + restore - which is the system you want to be the
> source system, then do it in minimal condition. SUM is the most
> convenient way to do that, with all partitions unmounted, and
> only / in read-only mode so you can access the dump and restore
> binaries.
>
>> I am beginning to think that you have to have a system running and dumpt
>> to another disk on that system and then remove that disk and install in
>> another box and boot from that?
>> Am I getting close?
>
> Again, I'm not sure I understood you correctly. If you've done
> the dump + restore correctly, you always end up with a bootable
> system, so you can boot it in another box. Dumping and restoring
> just requires a running system, no matter if it is the source
> system itself or a live system from CD, DVD or USB. (I prefer
> tools like FreeSBIE for such tasks, but the FreeBSD live system

Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 21:26:19 -0400, PJ  wrote:
> But what does that mean? But ad2s1a has just been newfs'd - so how can
> it be dumped if it's been formatted?

When you're working on this low level, triple-check all your
commands. Failure to do so can cause data loss. In the example
you presented, ad1 was the source disk, ad2 the target disk.
You DON'T want to newfs your source disk.

> And what exactly does stdout mean?

This refers to the standard output. In most cases, this is the
terminal, the screen, such as

# cat /etc/fstab

will write the /etc/fstab to stdout. If you redirect it, for
example by using > or |, you can make stdout a file, or the
input - stdin - for another program.

This is how the dump | restore process works: It leaves out
the "use the tape" or "use the file", but instead directs the
output of dump - the dump itself - to the restore program as
input to be restored.
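
As a concrete illustration (device name and file path here are only
examples, not taken from the thread), the same dump can either be
redirected into an ordinary file or piped straight into another program:

# dump -0uf - /dev/ad0s1d > /backup/var.dump
# dump -0uf - /dev/ad0s1d | restore -rf -

In the first line stdout is redirected with > into a file; in the
second it becomes the stdin of restore through the | pipe.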



> What is dump doing? outputting what to where exactly?

The dump program is outputting a dump of the specified partition
to the standard output, which in this case is directly transmitted
to the restore program, which "picks it up" and processes it =
restores it.



> I don't see it or
> should I say, understand this at all.

Have a look at the command line again, simplified:

# dump -0 -f - /dev/ad0s1a | restore -r -f -

Run the dump program, do a full backup of the 1st partition of
the 1st slice of the 1st disk, write this dump to the standard
output, pipe this output to the restore program, do a full
restore, read the dump to be restored from standard input.



> and then the restore is from what
> to where?

The restore program gets the dump to be restored from the standard
input - remember, that's the output of the dump program - and
writes it to the current working directory. That's the reason
why you should always check with

# pwd

in which directory you're currently located, because that will
be the place where the restored data will appear.
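
A short session sketch, assuming /dev/ad2s1a is a freshly prepared
target partition and /backup/root.dump an existing dump file (both
names are illustrative):

# mount /dev/ad2s1a /mnt
# cd /mnt
# pwd
/mnt
# restore -rf /backup/root.dump

The restored tree lands under /mnt, because that is the working
directory at the moment restore runs.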



> "write error 10 blocks into volume 1
> do you want to restart:"

Could you present the command you're actually using, especially
with where you issued it from?



> The first time I tried with -L the error was 20 blocks...
> Both the slices for dump from and to are the same size (2 GB) and certainly
> not full by a long shot (if I recall correctly, only about 14% is used)

I'm not sure where you put the dump file. "Write error" seems
to indicate one of the following problems:
a) The snapshot cannot be created.
b) The dump file cannot be created.



> And what's this about a snapshot? AFAIK, I'm not making a snapshot;
> anyway, there is no long pause except for the dumb look on my face upon
> seeing these messages.

Check "man dump" and search for the -L option. The dump program,
in order to obtain a dump from a file system that's currently in
use, will need to make a snapshot because it cannot handle data
that is changing. So it will dump the data with the state of the
snapshot, allowing the file system to be altered afterwards.
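
For example, dumping the mounted root file system into a file might
look like this (device and target path are assumptions):

# dump -0Lauf /backup/root.dump /dev/ad0s1a

With -L, dump first creates a snapshot (under the file system's .snap
directory) and then reads from that frozen image, so files changing
during the run cannot corrupt the dump.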



> As it is, I am currently erasing the brand new 500gb disk on which I
> want to restore.

Excellent.



> Things started out really bad... I don't understand what is going on.

Polite question: Have you read the manpages and the section in the
Handbook?



> I
> installed a minimal 7.2, booted up and turned to another computer to do
> some serious work. About 2 hours and 49 minutes later I notice messages
> on the 7.2 about a page fault or something like that and then the system
> reboots.

This often indicates a hardware problem...



> Obviously with errors... but then I reboot again and it comes
> up... I tried some copying from another disk and ended up with the disk
> all screwed up...

How that?



> yet the Seagate SeaTools for DOS doesn't find any
> errors on it;

There's smartmontools (program: smartctl) for FreeBSD in the ports.
It can check various errors of modern hard disks.



> Partition Magic found an error but couldn't fix it, so now
> I'm wiping the whole thing and will try to reinstall tomorrow. Doesn't
> make sense.

What error was this?





-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 19:09:51 -0600 (MDT), Warren Block  
wrote:
> On Wed, 30 Sep 2009, Polytropon wrote:
> >> So far, I have been unable to dump the / slice, not even with the -L
> >> option.
> >
> > Always keep in mind: Use dump only on unmounted partitions.
> 
> That is unnecessary.  The -L option is there just for dumping mounted 
> filesystems.

You're right, but -L does require a certain time to create the
snapshot. Of course that's not problematic when you need to do
this only once. In other situations, especially when you're able
to boot from something other than the system you want to clone,
using the "pure" unmounted partitions is more convenient. This
is only my very individual opinion. :-)


-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: backups & cloning

2009-09-29 Thread Olivier Nicole
> >> $ newfs -U /dev/ad2s1a
> >> $ mount /dev/ad2s1a /target
> >> $ cd /target
> >> $ dump -0Lauf - /dev/ad1s1a  | restore -rf -
> >[...]
> But what does that mean? But ad2s1a has just been newfs'd - so how can

It's ad*1*s1a that is being dumped, not ad2... ad*2*s1a is the freshly newfs'd target.

Best,

Olivier


Re: backups & cloning

2009-09-29 Thread PJ
Warren Block wrote:
> On Tue, 29 Sep 2009, PJ wrote:
>
>> I am getting more and more confused with all the info regarding backing
>> up and cloning or moving systems from disk to disk or computer to
>> computer.
>> I would like to do 2 things:
1. clone several instances of 7.2 from an existing installation
>> 2. set up a backup script to back up changes either every night or once
>> a week
>>
>> There are numerous solutions out there; but they are mostly confusing,
>> erroneous or non functional.
To start, could someone please explain the following to me, which I
found here: http://forums.freebsd.org/showthread.php?t=185
>>
You can move a system from disk to disk on the fly with
>> Code:
>>
>> $ newfs -U /dev/ad2s1a
>> $ mount /dev/ad2s1a /target
>> $ cd /target
>> $ dump -0Lauf - /dev/ad1s1a  | restore -rf -
>
>> This may be clear to someone; it certainly is not to me.
>> As I understand it, newfs will (re)format the slice.
>> Ok,  But what is standard out in the above example.  The dump is from
>> where to where?
>
> dump is reading /dev/ad1s1a and using stdout for output.
> restore is writing to the current directory (/target) and is reading
> from stdin.
But what does that mean? But ad2s1a has just been newfs'd - so how can
it be dumped if it's been formatted? And what exactly does stdout mean?
What is dump doing? Outputting what to where exactly? I don't see it,
or should I say, understand this at all. And then the restore is from
what to where?
>
>> Could someone clarify all this for me?
>> So far, I have been unable to dump the / slice, not even with the -L
>> option.
>
> It's hard to help without knowing the exact commands you are using and
> the errors they are producing.  Help us to help you by posting them.
>
>> I am trying to dump the whole system (all the slices)except swap
>> to a usb (sata2 500gb disk) and then restore to another computer with
>> 7.2 minimal installation.
>
> A minimal install makes it easier.  You don't need to copy /tmp, either.
>
>> Slices ad2s1d,e,f and g dump ok to usb. a does not - errors ("should use
>> -L when dumping live filesystems)
>
> Right.  So what happens when you use -L? 
"write error 10 blocks into volume 1
do you want to restart:"
The first time I tried with -L the error was 20 blocks...
Both the slices for dump from and to are the same size (2 GB) and certainly
not full by a long shot (if I recall correctly, only about 14% is used)

> A long pause while the system makes a snapshot is normal.
And what's this about a snapshot? AFAIK, I'm not making a snapshot;
anyway, there is no long pause except for the dumb look on my face upon
seeing these messages.
As it is, I am currently erasing the brand new 500gb disk on which I
want to restore.
Things started out really bad... I don't understand what is going on. I
installed a minimal 7.2, booted up and turned to another computer to do
some serious work. About 2 hours and 49 minutes later I notice messages
on the 7.2 about a page fault or something like that and then the system
reboots. Obviously with errors... but then I reboot again and it comes
up... I tried some copying from another disk and ended up with the disk
all screwed up... yet the Seagate SeaTools for DOS doesn't find any
errors on it; Partition Magic found an error but couldn't fix it, so now
I'm wiping the whole thing and will try to reinstall tomorrow. Doesn't
make sense.

>
>> Do you have to newfs each slice before restoring?
>
> The first time.  But your minimal install already did that for you.
>
>> But if you are restoring on a running 7.2 system, don't you have to
>> restore to another disk than the one the system is on?
>
> Nope.  You can overwrite the running system.  I restore in /usr, /var,
> and then / order.  Then reboot and you are running the new clone.
>
>> I am beginning to think that you have to have a system running and
>> dumpt to another disk on that system and then remove that disk and
>> install in another box and boot from that? Am I getting close? I know
>> it's a lot to ask, but then, I know you guys are capable...  :-)
>
> It's usually best to limit messages to a single question.
Sure, I agree... but when things are really complicated... I, at least,
don't know how to separate them when they are quite interdependent.
Thanks for responding.


Re: backups & cloning

2009-09-29 Thread Warren Block

On Wed, 30 Sep 2009, Polytropon wrote:

So far, I have been unable to dump the / slice, not even with the -L
option.


Always keep in mind: Use dump only on unmounted partitions.


That is unnecessary.  The -L option is there just for dumping mounted 
filesystems.


-Warren Block * Rapid City, South Dakota USA


Re: backups & cloning

2009-09-29 Thread Polytropon
On Tue, 29 Sep 2009 19:44:38 -0400, PJ  wrote:
> This may be clear to someone; it certainly is not to me.
> As I understand it, newfs will (re)format the slice.

No. The newfs program does create a new file system. In
other terminology, this can be called a formatting process.
Note that NOT a slice, but a PARTITION is subject to this
process. So

# newfs -U /dev/ad2s1a

does format the first partition (a) of the first slice (s1)
of the third disk (ad2).



> Ok,  But what is standard out in the above example.  The dump is from
> where to where?

According to the command

# dump -0Lauf - /dev/ad1s1a | restore -rf -

you need to understand that the main purpose of dump is to
dump unmounted (!) file systems to the system's tape drive.
Assuming nobody uses tape drives anymore, you need to specify
another file, which is the standard output in this case, which
may not be obvious, but it is if we reorder the command line:

# dump -0 -L -a -u -f - /dev/ad1s1a | restore -r -f -

You can see that -f - specifies - to be the file to backup to.
The backup comes from /dev/ad1s1a.

The restore program, on the other side of the | pipe, does
usually read from the system's tape drive. But in this case,
it reads from standard input as the -f - command line option
indicates. It restores the data to where the working directory
at the moment is.

Here's an example (ad1 is source disk, ad2 is target disk):

# newfs -U /dev/ad2s1a
# mount /dev/ad2s1a /mnt
# cd /mnt
# dump -0Lauf - /dev/ad1s1a | restore -rf -
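
One caveat worth adding: restore copies the file system contents only;
the MBR boot code and the boot blocks live outside the partition. If
the target disk has never been set up as a boot disk, something along
these lines (FreeBSD 7.x-era commands, assuming the ad2/ad2s1 layout
from the example) installs them:

# fdisk -B ad2
# bsdlabel -B ad2s1

After that the cloned disk should be bootable on its own.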



> Could someone clarify all this for me?

Hopefully hereby done. :-)



> So far, I have been unable to dump the / slice, not even with the -L
> option.

Always keep in mind: Use dump only on unmounted partitions.



> I am trying to dump the whole system (all the slices)except swap
> to a usb (sata2 500gb disk) and then restore to another computer with
> 7.2 minimal installation.

I think that's not possible because dump operates on file system
level, which means on partitions, not on slices.



> Slices ad2s1d,e,f and g dump ok to usb. a does not - errors ("should use
> -L when dumping live filesystems)

Keep an eye on terminology, you're swapping them here: The
devices ad2s1[defg] are partitions, not slices. The corresponding
slice that holds them is ad2s1.

Anyway, if you can, don't dump mounted file systems. Go into
single user mode, mount / as ro, and run dump + restore. If you
can, use a live system from CD, DVD or USB, which makes things
easier.



> Do you have to newfs each slice before restoring? 

Partitions. You don't have to newfs them once they are formatted.
It's just the usual way to ensure they are free of any data.



> But if you are
> restoring on a running 7.2 system, don't you have to restore to another
> disk than the one the system is on?

I'm not sure I understand this question correctly... if you're using a
running system for dump + restore - that is, the system you want to be
the source system - then do it in minimal condition. Single user mode
(SUM) is the most convenient way to do that, with all partitions
unmounted, and only / in read-only mode so you can access the dump and
restore binaries.



> I am beginning to think that you have to have a system running and dumpt
> to another disk on that system and then remove that disk and install in
> another box and boot from that?
> Am I getting close?

Again, I'm not sure I understood you correctly. If you've done
the dump + restore correctly, you always end up with a bootable
system, so you can boot it in another box. Dumping and restoring
just requires a running system, no matter if it is the source
system itself or a live system from CD, DVD or USB. (I prefer
tools like FreeSBIE for such tasks, but the FreeBSD live system
CD is fine, too.)

As far as I now understand, you don't want to clone from the source
disk to target disk, but use a USB "transfer disk"; in this case,
you first need to clone onto this disk, and then use it in the
other computers to fill their disks with the copy you made from
your "master system".

As a sidenote, it's worth mentioning that this can be achieved
in an easier way: You create a minimal bootable FreeBSD on this
USB disk, in case your computers can boot from it; if not, use
a live system to boot the computers. Then, format the USB disk
with only one file system (e. g. /dev/da0) and put dump files
onto it, so they are available then as /mnt/root.dump, tmp.dump,
var.dump, usr.dump and home.dump. Instead of using -f - for the
dump and restore program, use those file names.
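
Sketched out, that transfer-disk variant could look like this (device
names, mount points and the set of partitions are assumptions, not
taken from the thread):

# mount /dev/da0 /mnt
# dump -0Lauf /mnt/root.dump /dev/ad0s1a
# dump -0Lauf /mnt/var.dump  /dev/ad0s1d
# dump -0Lauf /mnt/usr.dump  /dev/ad0s1f

and later, on each machine to be cloned, after newfs'ing and mounting
a target partition:

# cd /target && restore -rf /mnt/root.dump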



> I know it's a lot to ask, but then, I know you guys are capable...  :-)

If you still have questions, try to ask them as precisely as
possible.

I may add that this list is the most friendly and intelligent
community to ask, so you're definitely at the right place here.



To illustrate a dump and restore process that involves several
partitions, just let me add this example:

Stage 1: Initialize slice and partitions
 

Re: backups & cloning

2009-09-29 Thread Warren Block

On Tue, 29 Sep 2009, PJ wrote:


I am getting more and more confused with all the info regarding backing
up and cloning or moving systems from disk to disk or computer to computer.
I would like to do 2 things:
1. clone several instances of 7.2 from an existing installation
2. set up a backup script to back up changes either every night or once
a week

There are numerous solutions out there; but they are mostly confusing,
erroneous or non functional.
To start, could someone please explain the following to me, which I
found here: http://forums.freebsd.org/showthread.php?t=185

You can move a system from disk to disk on the fly with
Code:

$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a  | restore -rf -



This may be clear to someone; it certainly is not to me.
As I understand it, newfs will (re)format the slice.
Ok,  But what is standard out in the above example.  The dump is from
where to where?


dump is reading /dev/ad1s1a and using stdout for output.
restore is writing to the current directory (/target) and is reading 
from stdin.



Could someone clarify all this for me?
So far, I have been unable to dump the / slice, not even with the -L
option.


It's hard to help without knowing the exact commands you are using and 
the errors they are producing.  Help us to help you by posting them.



I am trying to dump the whole system (all the slices) except swap
to a usb (sata2 500gb disk) and then restore to another computer with
7.2 minimal installation.


A minimal install makes it easier.  You don't need to copy /tmp, either.


Slices ad2s1d, e, f and g dump OK to USB; a does not - errors ("should use
-L when dumping live filesystems")


Right.  So what happens when you use -L?  A long pause while the system 
makes a snapshot is normal.



Do you have to newfs each slice before restoring?


The first time.  But your minimal install already did that for you.

But if you are restoring on a running 7.2 system, don't you have to 
restore to another disk than the one the system is on?


Nope.  You can overwrite the running system.  I restore in /usr, /var, 
and then / order.  Then reboot and you are running the new clone.


I am beginning to think that you have to have a system running and 
dump to another disk on that system and then remove that disk and 
install in another box and boot from that? Am I getting close? I know 
it's a lot to ask, but then, I know you guys are capable...  :-)


It's usually best to limit messages to a single question.

-Warren Block * Rapid City, South Dakota USA


backups & cloning

2009-09-29 Thread PJ
I am getting more and more confused with all the info regarding backing
up and cloning or moving systems from disk to disk or computer to computer.
I would like to do 2 things:
1. clone several instances of 7.2 from an existing installation
2. set up a backup script to back up changes either every night or once
a week

There are numerous solutions out there; but they are mostly confusing,
erroneous or non functional.
To start, could someone please explain the following to me, which I
found here: http://forums.freebsd.org/showthread.php?t=185

You can move a system from disk to disk on the fly with
Code:

$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a  | restore -rf -

you can do the same using sudo
Code:

$ sudo echo
$ sudo dump -0Lauf - /dev/ad1s1a  | sudo restore -rf -

This may be clear to someone; it certainly is not to me.
As I understand it, newfs will (re)format the slice.
Ok,  But what is standard out in the above example.  The dump is from
where to where?
Could someone clarify all this for me?
So far, I have been unable to dump the / slice, not even with the -L
option. I am trying to dump the whole system (all the slices) except swap
to a usb (sata2 500gb disk) and then restore to another computer with
7.2 minimal installation.
Slices ad2s1d, e, f and g dump OK to USB; a does not - errors ("should use
-L when dumping live filesystems")
Do you have to newfs each slice before restoring?  But if you are
restoring on a running 7.2 system, don't you have to restore to another
disk than the one the system is on?
I am beginning to think that you have to have a system running and dump
to another disk on that system and then remove that disk and install in
another box and boot from that?
Am I getting close?
I know it's a lot to ask, but then, I know you guys are capable...  :-)


Re: Using rsync for versioned backups without --backup

2009-05-25 Thread Roland Smith
On Sun, May 24, 2009 at 11:39:57PM -0700, Kelly Jones wrote:
> I want to use rsync to backup a large file (say 1G) that changes a
> little each day (say 1M), but I also want the ability to re-create
> older versions of this file.
> 
> I could use --backup, but that would create a 1G file each day, even
> though I only "really" need the 1M that's changed.
> 
> How do I tell rsync: "while updating, also store the changes you'd
> need to convert today's backup into yesterday's backup"?

I don't think rsync can do that. Essentially it is a file copying tool.

You could use diff if it is a text-only file, or xdelta if it is a
binary file. But both would require you to keep at least two subsequent
versions of the file so a diff can be generated.

You'd need to do something like this every day:

  diff -u foo-yesterday foo >diff-20090525
  # save the diff somewhere
  rm foo-yesterday
  cp foo foo-yesterday

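To go the other way - recreating yesterday's version from today's file
plus the saved diff - the patch can be applied in reverse with
patch -R. A self-contained sketch (tiny stand-in files instead of the
1G original; file names follow the example above):

```shell
#!/bin/sh
# Stand-ins for yesterday's and today's versions of the file.
printf 'line1\nline2\n' > foo-yesterday
printf 'line1\nline2\nline3\n' > foo

# The daily step from above: save a forward diff.
diff -u foo-yesterday foo > diff-20090525 || true  # diff exits 1 when files differ

# Restore: copy today's file and apply the saved diff in reverse.
cp foo foo-restored
patch -R foo-restored < diff-20090525

cmp foo-restored foo-yesterday && echo "restore OK"
```

Applying several saved diffs in reverse, newest first, walks the file
back day by day.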

Another possibility is to control the file with a revision control
system. If the file is plain text, rcs(1) will work. If it is a binary
file use a system like devel/git that handles binary files well. 

Say that your file is called bar. Since git tracks directory contents,
best put it in a separate directory, and put that under git control:

  mkdir ~/foo
  mv bar ~/foo/
  cd ~/foo
  git init
  git add bar
  git commit -a -m "Initial commit"

So now you can start changing the file. Next day you see if it has
changed, and if so, check in the changes:

  cd ~/foo
  git status
  git commit -m "Changes 2009-05-25" bar

If you now make a backup of ~/foo/.git, you can always restore every
checkin you did of bar.
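
Getting an older checkin back out is just as easy; here is a
self-contained sketch (a temporary directory stands in for ~/foo, and
the file name bar follows the example above):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

# Recreate the setup from the example: a repository tracking one file.
git init -q .
git config user.email you@example.com
git config user.name you
printf 'v1\n' > bar
git add bar
git commit -q -m "Initial commit"
printf 'v2\n' > bar
git commit -q -m "Changes 2009-05-25" bar

# Write yesterday's checkin to a separate file, leaving the working copy alone.
git show HEAD~1:bar > bar-yesterday
cat bar-yesterday    # v1
```

git checkout HEAD~1 -- bar would instead roll the working copy itself
back to that version.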

Roland
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Using rsync for versioned backups without --backup

2009-05-24 Thread Kelly Jones
I want to use rsync to backup a large file (say 1G) that changes a
little each day (say 1M), but I also want the ability to re-create
older versions of this file.

I could use --backup, but that would create a 1G file each day, even
though I only "really" need the 1M that's changed.

How do I tell rsync: "while updating, also store the changes you'd
need to convert today's backup into yesterday's backup"?

I realize I could use diff or something, but since rsync has to
calculate minimal changes anyway, it'd be nice to store them.

I thought the --itemize-changes option might do this, but no.

-- 
We're just a Bunch Of Regular Guys, a collective group that's trying
to understand and assimilate technology. We feel that resistance to
new ideas and technology is unwise and ultimately futile.


Re: Remote backups using ssh and dump

2008-04-07 Thread Mel
On Friday 04 April 2008 19:59:27 Paul Schmehl wrote:

> Has anyone done this?

/usr/local/etc/periodic/daily/201.backup-disks:

#!/bin/sh
# vim: ts=4 sw=4 noet ai
#
# The following tunables are supported:
# daily_backup_disks_enable (bool):
#   Turn backup on/off.
#   Default: Off
# daily_backup_disks_mounts (list):
#   The mount points to backup daily.
#   Default: / /var /home /usr
# daily_backup_disks_compress_local (bool):
#   Compress the backup on the local machine.
#   Default: Compress on receiving machine.
# daily_backup_disks_compressor (program path):
#   The compress program to use.
#   Default: /usr/bin/bzip2
# daily_backup_disks_ext (string):
#   The extension to use for the filename.
#   Default: bz2
# daily_backup_disks_islocal (bool):
#   If this is a backup to a local disk.
#   Default: NO
# daily_backup_disks_host (hostname):
#   Hostname to backup to.
#   Default: backup
# daily_backup_disks_machid (string):
#   Machine id of this machine, used as prefix on backup files.
#   Default: `hostname -s`
# daily_backup_disks_transporter (program path):
#   Program used to transport the backup. Arguments must be compatible with
#   ssh(1).
#   Default: /usr/bin/ssh
# daily_backup_disks_backuproot (path):
#   Path to the directory where backups are stored on the receiving machine.
#   Default: /backup

if [ -r /etc/defaults/periodic.conf ]
then
. /etc/defaults/periodic.conf
source_periodic_confs
fi
# Set defaults
daily_backup_disks_compress_local=${daily_backup_disks_compress_local:-"NO"}
daily_backup_disks_mounts=${daily_backup_disks_mounts:-"/ /var /home /usr"}
daily_backup_disks_compressor=${daily_backup_disks_compressor:-"/usr/bin/bzip2"}
daily_backup_disks_ext=${daily_backup_disks_ext:="bz2"}
daily_backup_disks_islocal=${daily_backup_disks_islocal:="NO"}
daily_backup_disks_host=${daily_backup_disks_host:="backup"}
daily_backup_disks_machid=${daily_backup_disks_machid:=`hostname -s`}
daily_backup_disks_transporter=${daily_backup_disks_transporter:="/usr/bin/ssh"}
daily_backup_disks_backuproot=${daily_backup_disks_backuproot:="/backup"}

case "$daily_backup_disks_enable" in
[Yy][Ee][Ss])
  level=$(/bin/date +%w)
  date=$(/bin/date +%Y%m%d)
  rc=0
  echo ''
  for disk in ${daily_backup_disks_mounts}; do
dname=${disk#"/"}
test -z "$dname" && dname=root
dname=$(echo $dname | /usr/bin/tr '/' '_')
fname="${daily_backup_disks_machid}.${dname}.${level}"
fname="${fname}.${date}.dump.${daily_backup_disks_ext}"
fpath="${daily_backup_disks_backuproot}/${fname}"
if test ${daily_backup_disks_islocal} = "NO"; then
  echo -n "Processing ${disk} to ${daily_backup_disks_host}:"
  echo "${daily_backup_disks_backuproot}/${fname}"
  if test ${daily_backup_disks_compress_local} = "NO"; then
# dd is necessary, because bzip2 cannot "compress STDIN to
# named file"
/sbin/dump -${level}uaLf - ${disk} | \
  ${daily_backup_disks_transporter} \
  ${daily_backup_disks_host} \
  "${daily_backup_disks_compressor} -c - | /bin/dd of=${fpath}"
  else
# dd is now necessary, because scp cannot copy STDIN.
/sbin/dump -${level}uaLf - ${disk} | \
  ${daily_backup_disks_compressor} -c - | \
  ${daily_backup_disks_transporter} \
  ${daily_backup_disks_host} "/bin/dd of=${fpath}"
  fi
else
  echo "Processing ${disk} to ${fpath}"
  /sbin/dump -${level}uaLf - ${disk} | \
${daily_backup_disks_compressor} -c - >${fpath}
fi
rc=$(($rc + $?))
  done
  ;;
*)  rc=0;;
esac

exit $rc

-- 
Mel

Problem with today's modular software: they start with the modules
and never get to the software part.


Re: Remote backups using ssh and dump

2008-04-05 Thread Vince

Paul Schmehl wrote:

Has anyone done this?

I'm presently using rsync over ssh, but I think dump would be better if 
it will work.  I've been reading the man page, but I'm wondering if 
anyone is doing this successfully and would like to share their cmdline.


We do this for ~100 Linux (CentOS) systems at work. We looked at a 
variety of other systems (rsync, rdiff-backup, tar, dar) and settled on 
dump because of the ease of bare-metal restore, support of file system 
attributes, ease of compression, etc. It all runs via the crontab of the 
backup user on the backup server, and uses a passphraseless ssh key to a 
backup user on each server who has passwordless sudo on dump. If you 
like I can dig out the script, although it will probably need modifying 
for FreeBSD.
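
A minimal sketch of that arrangement as seen from the backup server
(host name, key path and file name are assumptions, not Vince's actual
script):

$ ssh -i ~/.ssh/backup_key backup@web01 \
    "sudo /sbin/dump -0uaLf - /" > web01-root-0.dump

The per-host backup user only needs passwordless sudo for the dump
binary, which keeps the exposure of the passphraseless key small.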



Vince


Re: Remote backups using ssh and dump

2008-04-04 Thread mike
On Fri, 04 Apr 2008 12:59:27 -0500, in sentex.lists.freebsd.questions
you wrote:

>Has anyone done this?

Hi,
Yes, we use something like the following



#!/bin/sh


if [ -z "$1" ] ; then
echo ""
echo "Usage: $0 <dump-level>"
echo "   See 'man dump' for more information."
echo ""
exit 255
fi

BACKUP_LEVEL=0
BACKUP_LEVEL=$1


/sbin/dump -C24 -${BACKUP_LEVEL}anuf - / | /usr/bin/gzip -7 |
/usr/bin/ssh -2 -c 3des [EMAIL PROTECTED] dd
of=/path/to/dump/dump-myfile-root-l${BACKUP_LEVEL}.gz


---Mike


Re: Fwd: Remote backups using ssh and dump

2008-04-04 Thread Jerry McAllister
On Fri, Apr 04, 2008 at 05:00:01PM -0400, John Almberg wrote:

> >
> >Little did I know, when I posted this question, that I would  
> >receive such a wealth of information.  I'm deeply appreciative of  
> >the community's willingness to share information and thank each and  
> >every one of your for your contributions.
> >
> >Now I have some reading to do.  :-)
> >
> >
> 
> I think there is a difference between what dump does and what tar/ 
> rsync do... I like the idea of doing a bit level backup, rather than  
> a file level backup.
> 
> If you've never done a dump, try it locally, and then try restore,  
> particularly interactive restore (restore -i). It's pretty cool and I  
> don't think tar or rsync have anything like it.

Although some of the aspects of using dump/restore are a little clunky,
it is still superior to any other method of backing up whole file systems.
One of its weaknesses is that it will only back up a file system and
not a subset of one such as one directory tree.  You can, though, restore
individual files and directory trees easily.

What dump gets you is a system that knows how to handle every type
of file condition correctly.   None of the others quite do that.

Its other weakness is that it is filesystem/OS specific.   Generally,
you cannot take a dump on one OS and restore it under a different one;
for example, you cannot dump SunOS and restore on FreeBSD.

It will work over networks, though that can be slow and it doesn't
recover well from network errors/failures.

jerry

> 
> -- John
> 


Fwd: Remote backups using ssh and dump

2008-04-04 Thread John Almberg


Little did I know, when I posted this question, that I would  
receive such a wealth of information.  I'm deeply appreciative of  
the community's willingness to share information and thank each and  
every one of you for your contributions.


Now I have some reading to do.  :-)




I think there is a difference between what dump does and what tar/ 
rsync do... I like the idea of doing a bit level backup, rather than  
a file level backup.


If you've never done a dump, try it locally, and then try restore,  
particularly interactive restore (restore -i). It's pretty cool and I  
don't think tar or rsync have anything like it.


-- John



Re: Remote backups using ssh and dump

2008-04-04 Thread Paul Schmehl
--On Friday, April 04, 2008 22:21:52 +0200 Peter Boosten <[EMAIL PROTECTED]> 
wrote:





Paul Schmehl wrote:

Has anyone done this?



Little did I know, when I posted this question, that I would receive such a 
wealth of information.  I'm deeply appreciative of the community's willingness 
to share information and thank each and every one of you for your 
contributions.


Now I have some reading to do.  :-)

--
Paul Schmehl ([EMAIL PROTECTED])
Senior Information Security Analyst
The University of Texas at Dallas
http://www.utdallas.edu/ir/security/



Re: Remote backups using ssh and dump

2008-04-04 Thread Peter Boosten



Paul Schmehl wrote:

Has anyone done this?

I'm presently using rsync over ssh, but I think dump would be better if 
it will work.  I've been reading the man page, but I'm wondering if 
anyone is doing this successfully and would like to share their cmdline.




I did this once: http://www.boosten.org/content/view/50/50/

But nowadays I prefer dirvish. That really works like charm.

Peter
--
http://www.boosten.org


Re: Remote backups using ssh and dump

2008-04-04 Thread dex
On Fri, Apr 4, 2008 at 1:59 PM, Paul Schmehl <[EMAIL PROTECTED]> wrote:

> Has anyone done this?
>  I'm presently using rsync over ssh, but I think dump would be better if
> it will work.  I've been reading the man page, but I'm wondering if anyone
> is doing this successfully and would like to share their cmdline.
>

This is an older restore document I wrote but at the beginning you can see
how the backup is done:

http://lists.freebsd.org/pipermail/freebsd-doc/2007-February/012190.html

The backup method writes everything to a file, so it only works for
filesystems of smaller sizes (<50G) depending on your backup and restore
windows.


Re: Remote backups using ssh and dump

2008-04-04 Thread Stuart Mackie
>Has anyone done this?

>I'm presently using rsync over ssh, but I think dump would be better if it 
>will 
>work.  I've been reading the man page, but I'm wondering if anyone is doing 
>this successfully and would like to share their cmdline.

Hi,

[ from gopher://sdf-eu.org/00/users/mackie/Unix-Notes/_sdf-user-stuff.txt ]

Backup home dir (from sdf)
--
$ ssh [EMAIL PROTECTED] 'tar cvf - html/* | gzip - > html.tgz'
$ ssh [EMAIL PROTECTED] 'tar cvf - gopher/* | gzip - > gopher.tgz'
$ rsync -avz -e ssh [EMAIL PROTECTED]:/arpa/m/mackie /home/mackie/SDF

Backup home dir (to sdf)

(dump)
# dump -0f - /home | ssh -o 'EscapeChar none' [EMAIL PROTECTED] "cat > home.fs"

(restore)
# cd /home && rcp [EMAIL PROTECTED]:home.fs home.fs && cat home.fs | restore 
-rf -


HTH,

Stuart
-- 


Re: Remote backups using ssh and dump

2008-04-04 Thread Warren Block

On Fri, 4 Apr 2008, Paul Schmehl wrote:


Has anyone done this?

I'm presently using rsync over ssh, but I think dump would be better if it 
will work.  I've been reading the man page, but I'm wondering if anyone is 
doing this successfully and would like to share their cmdline.


There's an example in the Handbook:

http://www.freebsd.org/doc/en/books/handbook/backup-basics.html

-Warren Block * Rapid City, South Dakota USA


Re: Remote backups using ssh and dump

2008-04-04 Thread David Robillard
> Has anyone done this?
>
> I'm presently using rsync over ssh, but I think dump would be better if it 
> will
> work.  I've been reading the man page, but I'm wondering if anyone is doing
> this successfully and would like to share their cmdline.

Hi Paul,

We're not using dump over ssh, but I was curious to know why you'd
prefer dump over rsync.

We're using rsync and it's been good to us. So, I'd like to share with
you our backup strategy. Just in case it can help you or anyone
running various UNIX flavors. We use FreeBSD, RedHat Enterprise Linux,
Ubuntu Linux and IBM AIX in this setup.

This is a disk to disk to tape scenario.

All clients are configured with a user called "backup" with a UID of
zero (so that he can read everything). Its shell is set to rssh, which
in turn is configured to allow only rsync for the backup user. We limit
who can connect to each client via sshd_config's AllowUsers directive.
Each client has the central backup server's dedicated ssh key installed
in ~backup/.ssh/authorized_keys, edited to have from="backup.domain.com"
in it to restrict which machine can use this key.
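As a sketch, such an entry in ~backup/.ssh/authorized_keys might look like the
following; only the from= restriction is taken from the setup described above,
the other options and the key material are illustrative:

```
# Hypothetical authorized_keys entry. from= limits which host may use
# the key; no-pty/no-port-forwarding further restrict what it can do.
from="backup.domain.com",no-pty,no-port-forwarding ssh-rsa AAAAB3Nza...rest-of-key backup@backup.domain.com
```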

The central FreeBSD backup server has ssh access to every clients and
has rsnapshot installed. We have an rsnapshot configuration for each
client. Each backup run is scheduled via the server's crontab. Backup
data is stored on the server's encrypted backup volume. The nice thing
about rsnapshot is that it uses hard links to save disk space. On the
first run for a new client it takes the entire data set, but each
subsequent run only takes the changes. The backup data is kept
online, so you can actually browse it live and use scp/tar/rsync to
perform a restore, be it a single file or the entire file system.
Using rsnapshot enables us to save a week's worth of data for all our
100+ machines without using more than 300 GB of disk space on the
backup server (lots of machines, but not much data; we're quite lucky
:)

Each day, the backup data is passed with dd into OpenPGP before being
sent to tape with tar. This way our tapes are encrypted and impossible
to read without the appropriate password. That password is kept on an
encrypted file. We can therefore send our tapes off site with any
company knowing our data is safe.  All the admins keep a detailed
howto and the important encrypted password files on a USB stick in
case the data center fails and we loose our wiki and the file server.

If anyone is interested in the exact configuration of this backup
setup, we have it all in a wiki, so it's easy to share it.
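For anyone curious, a minimal per-client rsnapshot configuration along the
lines described above might look roughly like this (hostname, paths, and key
file are made up; in a real rsnapshot.conf the fields must be separated by
actual TAB characters, and older rsnapshot versions spell "retain" as
"interval"):

```
# Hypothetical rsnapshot.conf fragment -- fields must be TAB-separated.
snapshot_root   /backup/snapshots/
retain          daily   7
ssh_args        -i /home/backup/.ssh/backup_key
backup          backup@client1.example.com:/etc/    client1/
backup          backup@client1.example.com:/home/   client1/
```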

Hope that can help anyone,

Cheers,

David
-- 
David Robillard
UNIX systems administrator & Oracle DBA
CISSP, RHCE & Sun Certified Security Administrator
Montreal: +1 514 966 0122


Re: Remote backups using ssh and dump

2008-04-04 Thread Andrew Cid
Hey,

> I'm presently using rsync over ssh, but I think dump would be better if it 
> will work.  I've been reading the man page, but I'm wondering if anyone is 
> doing this successfully and would like to share their cmdline.

Are you doing backups to disk?  I find rsync combined with hard links to
be quite efficient and easy to manage/restore.  Check out Mike Rubel's
article [1] if you haven't seen it already; he explains it well.  I
find Rsnapshot [2] quite useful/easy for small installations.  There is
also Dirvish [3], which might be worth looking at.  Both use the same
concept.

Hope this helps,


Andrew. 

[1] http://www.mikerubel.org/computers/rsync_snapshots/
[2] http://rsnapshot.org/
[3] http://www.dirvish.org/
-- 
[EMAIL PROTECTED]


Re: Remote backups using ssh and dump

2008-04-04 Thread John Webster
We use the following in a script to backup our servers.

/bin/ssh -q -o 'BatchMode yes' -l'/sbin/dump -h 0 -0uf - /home \
| /usr/bin/gzip --fast' 2> /path/to/logs//home_full.dump.log
> /backups/_home_full.dump.gz


--On April 4, 2008 12:59:27 PM -0500 Paul Schmehl <[EMAIL PROTECTED]> wrote:

> Has anyone done this?
> 
> I'm presently using rsync over ssh, but I think dump would be better if it 
> will work.  I've been reading the man page, but I'm wondering if anyone is 
> doing this successfully and would like to share their cmdline.
> 
> -- 
> Paul Schmehl ([EMAIL PROTECTED])
> Senior Information Security Analyst
> The University of Texas at Dallas
> http://www.utdallas.edu/ir/security/
> 







Re: Remote backups using ssh and dump

2008-04-04 Thread John Almberg

On Apr 4, 2008, at 1:59 PM, Paul Schmehl wrote:


Has anyone done this?

I'm presently using rsync over ssh, but I think dump would be  
better if it will work.  I've been reading the man page, but I'm  
wondering if anyone is doing this successfully and would like to  
share their cmdline.




I do, but I'm not sure I'm doing it the optimal way. I'd love some  
feedback that might improve my simple script.


Basically, I back up each partition separately. I don't think it is  
possible to just dump from the root, although if it is possible, I'd  
like to know how.


This script assumes root can log into the backup server without a  
password. It 'rotates' the backups by including the day of the week  
in the file name; this gives me 7 days of complete backups.
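The same seven-slot naming can be computed in plain sh; this is only a sketch
of the naming scheme, reusing the /backup/ON- prefix from the script:

```shell
#!/bin/sh
# Sketch of day-of-week rotation naming: embedding the weekday in the
# file name yields seven rolling generations before overwrite.
DOW=$(date +%w)                 # 0 = Sunday .. 6 = Saturday
PREFIX="/backup/ON-${DOW}-"
echo "would write: ${PREFIX}root.gz"
```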


I also take a snapshot of the home directory, in case I need to fetch  
one file from backup. These dumps are really intended for  
catastrophic failure (which, knock on wood, I've never actually needed.)


BTW, the primary and secondary servers both have dual nic cards. The  
backup server is directly connected to the primary server, using a  
crossover cable, so the nightly gigabyte transfer doesn't clog the  
office lan switch.


-- John

#!/usr/bin/perl
my $day_of_week = (localtime)[6];
my $file_prefix = '/backup/ON-'.$day_of_week.'-';
system('dump -0Laun -f - /tmp       | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'tmp.gz\'');
system('dump -0Laun -f - /          | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'root.gz\'');
system('dump -0Laun -f - /usr       | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'usr.gz\'');
system('dump -0Laun -f - /usr/local | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'usr-local.gz\'');
system('dump -0Laun -f - /var       | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'var.gz\'');
system('dump -0Laun -f - /home      | gzip -2 | ssh [EMAIL PROTECTED] \'dd of='.$file_prefix.'home.gz\'');





Remote backups using ssh and dump

2008-04-04 Thread Paul Schmehl

Has anyone done this?

I'm presently using rsync over ssh, but I think dump would be better if it will 
work.  I've been reading the man page, but I'm wondering if anyone is doing 
this successfully and would like to share their cmdline.


--
Paul Schmehl ([EMAIL PROTECTED])
Senior Information Security Analyst
The University of Texas at Dallas
http://www.utdallas.edu/ir/security/



Re: FreeBSD USB disks - booting and backups

2007-08-23 Thread Wojciech Puchar


i'm doing this with my notebook.


Great.  What kind of drive?  And have you actually
had to do a restore?

some used 80GB 3.5" drive (Seagate) + a no-name USB-IDE adapter (truly 
no-name, nothing written on it). The latter cost $6 new, including the 
disk power supply.


works very well.


i don't make any partitions on it, just


dd if=/dev/zero of=/dev/da0 bs=1m count=1

to clear things up

newfs -m 0 -O1 -i 16384 -b 4096 -f 512 -U /dev/da0

options for maximum space, not performance, as I back up a 120GB 
notebook drive.



then to make a copy i do:

mount -o noatime /dev/da0 /root/copy
cd /root/copy
rsync -avrlHpogDtS --delete --force --exclude-from=/root/copy.exclude / .
umount /root/copy




my copy.exclude file looks like this (change to your needs):

/OLD
/root/copy/*
/dev/*
/usr/ports
/proc/*
swap
/tmp/*
/var/tmp/*
/usr/compat/linux/proc/*
/usr/obj





The /OLD files are on the copy drive, not the master, just to be able to 
keep many generations done with cp -lpR.



after copying first time you have to

bsdlabel -B da0

WARNING: when booting from the copy, go to single user mode and fix 
fstab to have /dev/da0 as root.




Other remarks: keep the copy plugged in only when copying, then store it 
in a safe place :)



Re: FreeBSD USB disks - booting and backups

2007-08-23 Thread Tijl Coosemans
On Thursday 23 August 2007 18:31:05 Patrick Baldwin wrote:
> I'm thinking of backing up my FreeBSD 6.2 webmail server by
> installing FreeBSD onto the USB, and then dumping the whole
> filesystem onto the USB.  That way, in the event of a drive failure,
> I can boot off the USB drive, and then just restore everything onto
> the webmail server.
> 
> Has anyone else done this?  I haven't found any mention via Google,
> which has me concerned that there might be a good reason no one's
> done this that I haven't thought of.  One issue I ran into thus far
> has been the 500 GB Western Digital MyBook USB drive I tried first
> makes my system crash when I plug it in.  I can get another USB drive
> and repurpose the one I've got right now, but before I put any more
> resources into this idea, I thought I'd bounce it off some experts.
> 
> Any suggestions, links, etc. welcomed.  Particularly for large
> capacity USB drives that won't crash my system.

I use it for a different purpose than you, but I've installed FreeBSD
onto a 120Gb Western Digital Passport (2.5") USB drive. It was just
like installing normally and works like a charm.

That USB drive isn't supposed to crash your system by the way. Have you
filed a PR or something?


Re: FreeBSD USB disks - booting and backups

2007-08-23 Thread Patrick Baldwin

Wojciech Puchar wrote:


I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the 
USB. That way, in the event of a drive failure, I can boot off the

USB drive, and then just restore everything onto the webmail server.



good idea. man rsync :)


I was thinking about just dump, as I'm more familiar with it, but
I'll check out rsync if you think it's better for this purpose.



Has anyone else done this?  I haven't found any mention via Google,



i'm doing this with my notebook.


Great.  What kind of drive?  And have you actually
had to do a restore?


--
Patrick Baldwin
Systems Administrator
Studsvik Scandpower, Inc.
1087 Beacon St.
Newton, MA 02459
1-617-965-7455



Re: FreeBSD USB disks - booting and backups

2007-08-23 Thread Wojciech Puchar

I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the USB. 
That way, in the event of a drive failure, I can boot off the

USB drive, and then just restore everything onto the webmail server.


good idea. man rsync :)



Has anyone else done this?  I haven't found any mention via Google,


i'm doing this with my notebook.


FreeBSD USB disks - booting and backups

2007-08-23 Thread Patrick Baldwin

I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the 
USB.  That way, in the event of a drive failure, I can boot off the

USB drive, and then just restore everything onto the webmail server.

Has anyone else done this?  I haven't found any mention via Google,
which has me concerned that there might be a good reason no one's
done this that I haven't thought of.  One issue I ran into
thus far has been the 500 GB Western Digital MyBook USB drive
I tried first makes my system crash when I plug it in.  I can get
another USB drive and repurpose the one I've got right now, but
before I put any more resources into this idea, I thought I'd
bounce it off some experts.

Any suggestions, links, etc. welcomed.  Particularly for large
capacity USB drives that won't crash my system.

Regards,


--
Patrick Baldwin
Systems Administrator
Studsvik Scandpower, Inc.
1087 Beacon St.
Newton, MA 02459
1-617-965-7455



Re: cleanly reading compressed backups

2006-10-27 Thread Dan Nelson
In the last episode (Oct 27), Jim Stapleton said:
> I have several disk images, and I'd like to grab files off of them,
> but I'm not sure how.
> 
> I made these images by booting up a linux boot CD (it seemed easier
> than a BSD cd at the time, and the results should be the same), and
> make a backup as such:
> 
> dd if=/dev/sda | bzip2 -z9 | split [forgot the args, basically 1GB
> files that are bsd-backup-(date)-??]
> 
> anyway, without uncompressing them back to disk (it's the same
> slice/partitions as I have now), what's the easiest way to get read
> access to these contents of the files in these backups?

It would be extremely difficult to allow access to arbitrary files from
a backup made like that, without dd'ing the decompressed image to
another disk.  Theoretically a bzip2-compressed file can be randomly
accessed because the dictionary is reset every 900k bytes of
uncompressed data.  You would need to write a geom module that
prescanned the images to determine where the reset points were in the
compressed file, then when read requests come in, decompress the 900k
block containing the region of interest and return the requested block. 
You would then run mdconfig to create device nodes out of your split
files, join them with geom_concat, let your geom_bzip2 module
decompress the resulting joined device, and finally mount the
decompressed device node.  Accessing the resulting filesystem would be
slow, but it would work (in theory).

If you had booted a FreeBSD cd instead (disk 1 of the install CD set is
a livecd) and run a "dump -af - /dev/da0 | bzip2 | split" pipe, you
could have done easy restores with a "cat | bunzip2 | restore -ivf -"
pipe.  Dump's file format includes the file listing at the beginning,
so restore can present you with a file listing first, let you pick the
ones you want, then zip through the rest of the dump file sequentially
to restore the files.

-- 
Dan Nelson
[EMAIL PROTECTED]
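The split pipeline Dan describes can be exercised end-to-end with ordinary
files. The sketch below uses tar as a stand-in for dump(8) so it runs without
root or a raw device, and every path is made up:

```shell
#!/bin/sh
# Round trip of the "stream | compress | split" backup idea, with tar
# standing in for dump(8).
set -e
WORK=/tmp/splitdemo
rm -rf "$WORK"
mkdir -p "$WORK/data" "$WORK/pieces" "$WORK/restore"
echo "important payload" > "$WORK/data/file.txt"

# Back up: archive the tree, compress, split into fixed-size pieces
# (1 KB here; the original thread used ~1 GB pieces).
( cd "$WORK" && tar cf - data | bzip2 -9 | split -b 1024 - pieces/bk- )

# Restore: concatenate the pieces in order and reverse the pipe.
( cd "$WORK/restore" && cat "$WORK"/pieces/bk-* | bunzip2 | tar xf - )
```

The key point carries over: cat'ing the split pieces back together in shell
glob order reconstructs the compressed stream exactly.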


cleanly reading compressed backups

2006-10-27 Thread Jim Stapleton

I have several disk images, and I'd like to grab files off of them,
but I'm not sure how.

I made these images by booting up a Linux boot CD (it seemed easier
than a BSD CD at the time, and the results should be the same), and
made a backup like this:

dd if=/dev/sda | bzip2 -z9 | split [forgot the args, basically 1GB
files that are bsd-backup-(date)-??]

anyway, without uncompressing them back to disk (it's the same
slices/partitions as I have now), what's the easiest way to get read
access to the contents of the files in these backups?

Thanks,
-Jim Stapleton


Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-07 Thread perikillo

On 10/6/06, Kris Kennaway <[EMAIL PROTECTED]> wrote:

On Fri, Oct 06, 2006 at 10:08:27AM -0700, perikillo wrote:

> change the scheduler to the old SCHED_4BSD and maxuser from 10 to 32
> like chuck told me.

These are probably what fixed it.

I guess you've learned a Lesson: when you choose to use code marked as
"experimental", a) don't be surprised when it goes wrong, and b) the
first thing you should do to try and fix it is to stop using the
experimental code :-)

Kris






 Yes, this is the last time that I will use *experimental code*.
Everything looks to be back to normal.

  My local backups have already finished without any problems; right
now it is pulling the remote servers' backups, and they are running
fine.

   Thanks, people, for all your help.

Greetings!!!


Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-06 Thread Kris Kennaway
On Fri, Oct 06, 2006 at 10:08:27AM -0700, perikillo wrote:

> change the scheduler to the old SCHED_4BSD and maxuser from 10 to 32
> like chuck told me.

These are probably what fixed it.

I guess you've learned a Lesson: when you choose to use code marked as
"experimental", a) don't be surprised when it goes wrong, and b) the
first thing you should do to try and fix it is to stop using the
experimental code :-)

Kris





Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-06 Thread perikillo

On 10/4/06, backyard <[EMAIL PROTECTED]> wrote:



--- Chuck Swiger <[EMAIL PROTECTED]> wrote:

> On Oct 4, 2006, at 10:32 AM, perikillo wrote:
> > My kernel file is this:
> >
> > machine  i386
> > cpu   I686_CPU
>
> You should also list "cpu  I586_CPU", otherwise you
> will not include
> some optimizations intended for Pentium or higher
> processors.
 >

are you sure about this??? This statement seems to
contradict the Handbook, which says "it is best to use
only the CPU you have". I would think I686_CPU would
cause the build to know it is higher than a Pentium and
thus use those optimizations. But if this is true...


-brian








Hi people.

  Today I received completed FULL backups from all my local clients,
without any message saying:

vr0: watchdog timeout

  I made some changes to the kernel, Bacula, and the machine:

Machine:
  Disabled the internal NIC (VIA) and installed a Linksys card, which
uses the same vr0 driver.

Kernel:

Changed the scheduler to the old SCHED_4BSD and maxusers from 10 to 32,
like Chuck told me.

Disabled AHC_ALLOW_MEMIO; this is the first time I have used this
option.

Enabled IPFILTER to set up the firewall; I was thinking that maybe I
have been attacked or something like that. I must check this.

Removed some SCSI drivers.

Built the kernel, installed it, and rebooted.

Bacula:

I set the Heartbeat Interval variable in the client and the storage
daemon to 1 minute, because there is no formula for the best value.

  Today my backups were completed successfully, with no error message.
I have been working with this server these past days, testing, changing
things here and there, until today. I don't know if it was the NIC or
some kernel option, but it is not easy to test because this is a
production server.

  I checked my firewall logs, but there is nothing that gives any clue
that I have been attacked. Good :-)

  I'm testing the backups right now. Today I will do another FULL
backup of all my local servers, and I will pull the backups from the
servers that we have in another building to see if the system is now
stable.

  I will let you know, people. Thanks for your help.


Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-04 Thread backyard


--- Chuck Swiger <[EMAIL PROTECTED]> wrote:

> On Oct 4, 2006, at 10:32 AM, perikillo wrote:
> > My kernel file is this:
> >
> > machine  i386
> > cpu   I686_CPU
> 
> You should also list "cpu  I586_CPU", otherwise you
> will not include  
> some optimizations intended for Pentium or higher
> processors.
> 

are you sure about this??? This statement seems to
contradict the Handbook, which says "it is best to use
only the CPU you have". I would think I686_CPU would
cause the build to know it is higher than a Pentium and
thus use those optimizations. But if this is true...


-brian





Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-04 Thread Chuck Swiger

On Oct 4, 2006, at 10:32 AM, perikillo wrote:

My kernel file is this:

machine  i386
cpu   I686_CPU


You should also list "cpu  I586_CPU", otherwise you will not include  
some optimizations intended for Pentium or higher processors.



ident BACULA
maxusers   10


Unless you've got extremely low RAM in the machine, you should either  
increase this to 32 or so, or let it autoconfigure itself.



# To statically compile in device wiring instead of /boot/device.hints
#hints"GENERIC.hints"# Default places to look for
devices.

makeoptions   DEBUG=-g # Build kernel with gdb(1) debug  
symbols


options   SCHED_ULE# ULE scheduler
#options  SCHED_4BSD   # 4BSD scheduler


And you should switch to using SCHED_4BSD instead of SCHED_ULE until  
the bugs are worked out of the ULE scheduler.


--
-Chuck



Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-04 Thread perikillo

On 10/4/06, Bill Moran <[EMAIL PROTECTED]> wrote:


In response to perikillo <[EMAIL PROTECTED]>:

[snip]

> Right now my first backup again crash
>
> xl0: watchdog timeout
>
> Right now i change the cable from on port to another and see what
happends.
>
> Guy, please someone has something to tell me, this is critical for me.
>
> This is my second NIC.

Don't know if this is related or not, but it may be:

http://lists.freebsd.org/pipermail/freebsd-stable/2006-September/028792.html

--
Bill Moran
Collaborative Fusion Inc.



Hi people.

  Today my full backups completed successfully, but my NIC again showed
me the same failure:

Oct  3 23:36:54 bacula kernel: xl0: watchdog timeout
Oct  3 23:36:54 bacula kernel: xl0: no carrier - transceiver cable problem?
Oct  3 23:36:54 bacula kernel: xl0: link state changed to DOWN
Oct  3 23:36:56 bacula kernel: xl0: link state changed to UP
Oct  4 00:39:14 bacula kernel: xl0: watchdog timeout
Oct  4 00:39:14 bacula kernel: xl0: no carrier - transceiver cable problem?
Oct  4 00:39:14 bacula kernel: xl0: link state changed to DOWN
Oct  4 00:39:16 bacula kernel: xl0: link state changed to UP
Oct  4 01:41:39 bacula kernel: xl0: watchdog timeout
Oct  4 01:41:39 bacula kernel: xl0: no carrier - transceiver cable problem?
Oct  4 01:41:39 bacula kernel: xl0: link state changed to DOWN
Oct  4 01:41:42 bacula kernel: xl0: link state changed to UP
Oct  4 08:12:45 bacula login: ROOT LOGIN (root) ON ttyv0
Oct  4 08:15:50 bacula kernel: xl0: link state changed to DOWN
Oct  4 08:20:07 bacula login: ROOT LOGIN (root) ON ttyv1
Oct  4 08:27:34 bacula kernel: xl0: link state changed to UP
Oct  4 08:27:38 bacula kernel: xl0: link state changed to DOWN
Oct  4 08:27:40 bacula kernel: xl0: link state changed to UP
Oct  4 08:31:53 bacula su: ubacula to root on /dev/ttyp0

I checked the switch and viewed the port where this server is connected,
but I don't see anything wrong there:

                         Received                           Transmitted
------------------------------------------------------------------------
Packets:             53411791       Packets:              93628031
Multicasts:                 0       Multicasts:              37550
Broadcasts:                19       Broadcasts:              36157
Total Octets:      3644260033       Total Octets:        737446293
Lost Packets:               0       Lost Packets:                0
Packets 64 bytes:    16678016       Packets 64 bytes:       959175
   65-127 bytes:     36733094          65-127 bytes:        384773
   128-255 bytes:         384          128-255 bytes:       114963
   256-511 bytes:          70          256-511 bytes:       304495
   512-1023 bytes:         60          512-1023 bytes:     2472655
   1024-1518 bytes:       167          1024-1518 bytes:   89391970
FCS Errors:                 0       Collisions:                  0
Undersized Packets:         0       Single Collisions:           0
Oversized Packets:          0       Multiple Collisions:         0
Filtered Packets:          83       Excessive Collisions:        0
Flooded Packets:            0       Deferred Packets:            0
Frame Errors:               0       Late Collisions:             0

My kernel file is this:

machine		i386
cpu		I686_CPU
ident		BACULA
maxusers	10

# To statically compile in device wiring instead of /boot/device.hints
#hints		"GENERIC.hints"		# Default places to look for devices.

makeoptions	DEBUG=-g		# Build kernel with gdb(1) debug symbols

options 	SCHED_ULE		# ULE scheduler
#options 	SCHED_4BSD		# 4BSD scheduler
options 	PREEMPTION		# Enable kernel thread preemption
options 	INET			# InterNETworking
#options 	INET6			# IPv6 communications protocols
options 	FFS			# Berkeley Fast Filesystem
options 	SOFTUPDATES		# Enable FFS soft updates support
options 	UFS_ACL			# Support for access control lists
options 	UFS_DIRHASH		# Improve performance on big directories
options 	MD_ROOT			# MD is a potential root device
options 	MSDOSFS			# MSDOS Filesystem
options 	CD9660			# ISO 9660 Filesystem
options 	PROCFS			# Process filesystem (requires PSEUDOFS)
options 	PSEUDOFS		# Pseudo-filesystem framework
options 	GEOM_GPT		# GUID Partition Tables.
options 	COMPAT_43		# Compatible with BSD 4.3 [KEEP THIS!]
#options 	COMPAT_FREEBSD4		# Compatible with FreeBSD4
options 	COMPAT_FREEBSD5		# Compatible with FreeBSD5
options 	SCSI_DELAY=5000		# Delay (in ms) before probing SCSI
options 	KTRACE			# ktrace(1) sup

Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-04 Thread Bill Moran
In response to perikillo <[EMAIL PROTECTED]>:

[snip]

> Right now my first backup crashed again
> 
> xl0: watchdog timeout
> 
> Right now I changed the cable from one port to another to see what happens.
> 
> Guys, please, if someone has something to tell me: this is critical for me.
> 
> This is my second NIC.

Don't know if this is related or not, but it may be:
http://lists.freebsd.org/pipermail/freebsd-stable/2006-September/028792.html

-- 
Bill Moran
Collaborative Fusion Inc.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-04 Thread perikillo

On 10/3/06, Christopher Swingler <[EMAIL PROTECTED]> wrote:



On Oct 3, 2006, at 7:55 PM, perikillo wrote:

> On 10/3/06, perikillo <[EMAIL PROTECTED]> wrote:
>>
>>   Hi people i have read a some mails about this problem, 
>>
>> Greetings.
>>
>>
>
>
> Wow
>

The only time I have ever had watchdog timeouts is when I've had a
bad cable or a bad port.  If it's doing it on two entirely different
NICs, then it is most CERTAINLY a bad cable or a bad port on the
switch end.  You seem to have addressed the most expensive issue
first (bad card), which is kind of backwards, but whatever.  You've
switched ports, that's good too, now switch cables.



Hi people, thanks for your answers; right now I am googling around to see how
to handle this problem.

 I'm home right now; here I have one Intel NIC (fxp driver) with 2 ports on
it. This is my firewall machine, but tomorrow I will take it to my work. I
don't have access to my server right now, but I will give you the info you
requested ASAP.

  I have another Linksys NIC; some posts say that those 2 NICs are really
good on FreeBSD.

 Another thing I will try (I don't know if it works): disable the drivers
in the kernel, just use the modules, and see what happens.

 But first I need to see how my backups finish, and I will let you know,
guys. Thanks for your time.


Re: vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-03 Thread perikillo

On 10/3/06, perikillo <[EMAIL PROTECTED]> wrote:


  Hi people, I have read some mails about this problem; it looks like everyone
was running some 5.X branch. I have been using FreeBSD 6.1 for some months;
yesterday I ran the buildworld process, and right now my box runs
FreeBSD 6.1-p10.

  This box runs bacula server with this NIC:

vr0:  port 0xe400-0xe4ff mem
0xee022000-0xee0220ff at device 18.0 on pci0
vr0: Reserved 0x100 bytes for rid 0x10 type 4 at 0xe400
miibus0:  on vr0
vr0: bpf attached
vr0: Ethernet address: 00:01:6c:2c:09:90
vr0: [MPSAFE]

  This NIC is integrated on the motherboard; I used this box with
FreeBSD 5.4-pX for almost a year running Bacula 1.38.5 without a problem.

  One full backup takes almost 140 GB of data.

Last week I lost one Full Backup job from one of my biggest servers, running
RH9 with approx. 80 GB of data; Bacula backed up just 35 GB and marked the job ->Error

26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Fatal error: Network
error with FD during Backup: ERR=Operation timed out
26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Fatal error: No Job
status returned from FD.
26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Error: Bacula 
1.38.11(28Jun06): 26-Sep-2006 00:28:48

FD termination status:  Error
SD termination status:  Error
Termination:*** Backup Error ***

  I have no problems with the client; it is running our ERP software, nothing
to note there.

In my FreeBSD console this appears:

vr0: watchdog timeout

  I reset the server, and all the Differential backups had been working
well. I did the buildworld yesterday, got my Bacula server ready to do a
full backup of all my clients, and whoops...

I lost 2 clients jobs:

Client 1:

02-Oct 18:30 bacula-dir: Start Backup JobId 176, Job=
PDC.2006-10-02_18.30.00
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Fatal error: Network
error with FD during Backup: ERR=Operation timed out
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Fatal error: No Job
status returned from FD.
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Error: Bacula 
1.38.11(28Jun06): 02-Oct-2006 20:40:11
  JobId:  176
  Job:PDC.2006-10-02_18.30.00
  Backup Level:   Full
  Client:   "PDC" Windows NT 4.0,MVS,NT 4.0.1381
  FileSet:"PDC-FS" 2006-08-21 18:04:12
  Pool:   "FullTape"
  Storage:"LTO-1"
  Scheduled time: 02-Oct-2006 18:30:00
  Start time: 02-Oct-2006 18:30:06
  End time:   02-Oct-2006 20:40:11
  Elapsed time:   2 hours 10 mins 5 secs
  Priority:   11
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): FullTape-0004
  Volume Session Id:  2
  Volume Session Time:1159832414
  Last Volume Bytes:  38,857,830,949 ( 38.85 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***

Client 2

02-Oct 21:30 bacula-dir: Start Backup JobId 178, Job=
MBXBDCB.2006-10-02_21.30.00
02-Oct 21:31 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:37 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:44 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:51 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:58 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 22:04 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 22:10 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Fatal error: bnet.c:859
Unable to connect to File daemon on 192.168.2.9:9102 . ERR=Host is down
02-Oct 22:10 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Error: Bacula 
1.38.11(28Jun06): 02-Oct-2006 22:10:03
  JobId:  178
  Job:MBXBDCB.2006-10-02_21.30.00
  Backup Level:   Full
  Client: "MBXBDCB" i686-pc-linux-gnu,redhat,9
  FileSet:"MBXBDCB-FS" 2006-08-21 23:00:02
  Pool:   "FullTape"
  Storage:"LTO-1"
  Scheduled time: 02-Oct-2006 21:30:00
  Start time: 02-Oct-2006 21:30:02
  End time:   02-Oct-2006 22:10:03
  Elapsed time:   

vr0: watchdog timeout FreeBSD 6.1-p10 Crashing my backups

2006-10-03 Thread perikillo

 Hi people, I have read some mails about this problem; it looks like everyone
was running some 5.X branch. I have been using FreeBSD 6.1 for some months;
yesterday I ran the buildworld process, and right now my box runs
FreeBSD 6.1-p10.

 This box runs bacula server with this NIC:

vr0:  port 0xe400-0xe4ff mem
0xee022000-0xee0220ff at device 18.0 on pci0
vr0: Reserved 0x100 bytes for rid 0x10 type 4 at 0xe400
miibus0:  on vr0
vr0: bpf attached
vr0: Ethernet address: 00:01:6c:2c:09:90
vr0: [MPSAFE]

 This NIC is integrated on the motherboard; I used this box with
FreeBSD 5.4-pX for almost a year running Bacula 1.38.5 without a problem.

 One full backup takes almost 140 GB of data.

Last week I lost one Full Backup job from one of my biggest servers, running
RH9 with approx. 80 GB of data; Bacula backed up just 35 GB and marked the job ->Error

26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Fatal error: Network
error with FD during Backup: ERR=Operation timed out
26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Fatal error: No Job
status returned from FD.
26-Sep 00:28 bacula-dir: MBXBDCB.2006-09-25_21.30.00 Error: Bacula
1.38.11(28Jun06): 26-Sep-2006 00:28:48

FD termination status:  Error
SD termination status:  Error
Termination:*** Backup Error ***

 I have no problems with the client; it is running our ERP software, nothing
to note there.

In my FreeBSD console this appears:

vr0: watchdog timeout

 I reset the server, and all the Differential backups had been working
well. I did the buildworld yesterday, got my Bacula server ready to do a
full backup of all my clients, and whoops...

I lost 2 clients jobs:

Client 1:

02-Oct 18:30 bacula-dir: Start Backup JobId 176, Job=PDC.2006-10-02_18.30.00
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Fatal error: Network error
with FD during Backup: ERR=Operation timed out
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Fatal error: No Job status
returned from FD.
02-Oct 20:40 bacula-dir: PDC.2006-10-02_18.30.00 Error: Bacula
1.38.11(28Jun06): 02-Oct-2006 20:40:11
 JobId:  176
 Job:PDC.2006-10-02_18.30.00
 Backup Level:   Full
 Client:   "PDC" Windows NT 4.0,MVS,NT 4.0.1381
 FileSet:"PDC-FS" 2006-08-21 18:04:12
 Pool:   "FullTape"
 Storage:"LTO-1"
 Scheduled time: 02-Oct-2006 18:30:00
 Start time: 02-Oct-2006 18:30:06
 End time:   02-Oct-2006 20:40:11
 Elapsed time:   2 hours 10 mins 5 secs
 Priority:   11
 FD Files Written:   0
 SD Files Written:   0
 FD Bytes Written:   0 (0 B)
 SD Bytes Written:   0 (0 B)
 Rate:   0.0 KB/s
 Software Compression:   None
 Volume name(s): FullTape-0004
 Volume Session Id:  2
 Volume Session Time:1159832414
 Last Volume Bytes:  38,857,830,949 (38.85 GB)
 Non-fatal FD errors:0
 SD Errors:  0
 FD termination status:  Error
 SD termination status:  Error
 Termination:*** Backup Error ***

Client 2

02-Oct 21:30 bacula-dir: Start Backup JobId 178, Job=
MBXBDCB.2006-10-02_21.30.00
02-Oct 21:31 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:37 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:44 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:51 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 21:58 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 22:04 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Warning: bnet.c:853
Could not connect to File daemon on 192.168.2.9:9102. ERR=Host is down
Retrying ...
02-Oct 22:10 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Fatal error: bnet.c:859
Unable to connect to File daemon on 192.168.2.9:9102. ERR=Host is down
02-Oct 22:10 bacula-dir: MBXBDCB.2006-10-02_21.30.00 Error: Bacula
1.38.11(28Jun06): 02-Oct-2006 22:10:03
 JobId:  178
 Job:MBXBDCB.2006-10-02_21.30.00
 Backup Level:   Full
 Client: "MBXBDCB" i686-pc-linux-gnu,redhat,9
 FileSet:"MBXBDCB-FS" 2006-08-21 23:00:02
 Pool:   "FullTape"
 Storage:"LTO-1"
 Scheduled time: 02-Oct-2006 21:30:00
 Start time: 02-Oct-2006 21:30:02
 End time:   02-Oct-2006 22:10:03
 Elapsed time:   40 mins 1 sec
 Priority:   13
 FD Files Written:   0
 SD Files Written:   0
 FD Bytes Writte

Re: /home is symlinked to /usr/home - question about backups

2006-03-20 Thread Alex Zbyslaw

Pat Maddox wrote:


However if I run rsync -avz to back up my
server, it creates something like this:

/backup/march/19/home -> /usr/home

So if I were to go to /backup/march/19 and rm -rf * wouldn't it go and
delete everything in /usr/home?  

Should add:  In your shell, alias rm to "rm -i", which will ask you about 
deleting anything and everything.  For an rm -r, once you are *sure* 
that you are deleting the right thing, you can ^C, pull back your 
command line and edit it to say "/bin/rm ...".  If you are sure you are 
deleting the right thing, and if you always edit the command line, then 
you should never(*) delete something you didn't want to.


(*) Of course, there will still be times when you are not paying enough 
attention and still manage to delete something you didn't intend to, but 
those times should be greatly reduced :-)


--Alex



Re: /home is symlinked to /usr/home - question about backups

2006-03-20 Thread Alex Zbyslaw

Pat Maddox wrote:


I got a dedicated server a while ago, and it came with /home symlinked
to /usr/home.  I'm not entirely sure why, to tell you the truth, but
it's never posed a problem.  However if I run rsync -avz to back up my
server, it creates something like this:

/backup/march/19/home -> /usr/home

So if I were to go to /backup/march/19 and rm -rf * wouldn't it go and
delete everything in /usr/home?  That's obviously not my intended
result.  I've read all the symlink options in man rsync but honestly
am not sure what it is that I need to do.  Ideally I'd like to have 
symlinks reference the relative file, so something like
/backup/march/19/home -> /backup/march/19/usr/home

That way I don't lose all my stuff if I remove the file from the backup.
Right now I'm just ignoring /home when I rsync, but it makes me kind
of worried that if I ever back up without ignoring /home and then
delete my backup I might lose my live data... I could really use some
info.

 


You could always make some dummy directories and symlinks and try it :-)

But, no, it won't delete the real thing *unless you put a / on the 
end*.  If you don't put the trailing slash then the symlink is deleted.  
If you put the trailing slash, then the symlink is dereferenced and the 
contents recursively deleted.


If you did what you wanted with rsync, you wouldn't correctly recover 
symlinks.


The rm man page says:

The rm utility removes symbolic links, not the files referenced by the
links.

--Alex



/home is symlinked to /usr/home - question about backups

2006-03-19 Thread Pat Maddox
I got a dedicated server a while ago, and it came with /home symlinked
to /usr/home.  I'm not entirely sure why, to tell you the truth, but
it's never posed a problem.  However if I run rsync -avz to back up my
server, it creates something like this:

/backup/march/19/home -> /usr/home

So if I were to go to /backup/march/19 and rm -rf * wouldn't it go and
delete everything in /usr/home?  That's obviously not my intended
result.  I've read all the symlink options in man rsync but honestly
am not sure what it is that I need to do.  Ideally I'd like to have
symlinks reference the relative file, so something like
/backup/march/19/home -> /backup/march/19/usr/home

That way I don't lose all my stuff if I remove the file from the backup.
Right now I'm just ignoring /home when I rsync, but it makes me kind
of worried that if I ever back up without ignoring /home and then
delete my backup I might lose my live data... I could really use some
info.

Pat


Re: Help on Tape Backups / Disc Space

2006-02-08 Thread Lowell Gilbert
"Graham Bentley" <[EMAIL PROTECTED]> writes:

> >  Do you even know for sure that your backup was running at the 
> > time that the filesystem full messages were generated?
> 
> Unfortunately not - the times are different, so these could be two
> unrelated issues.

That's exactly the point.  

> > Maybe.  At some point you filled /data up.  You don't have enough
> > information here to indicate why.
> 
> If you could indicate which information I would need, would that help?

Your disk space usage.

> > You will probably want to keep closer track to understand your 
> > usage patterns better.
> 
> Any tips on how to do that ?

Look at your disk space usage more than once per day.
There are even ports to help you do that.


Re: Help on Tape Backups / Disc Space

2006-02-07 Thread Graham Bentley
Thanks for the reply Lowell.

> Are you just guessing here, or do you have a reason to think this is
> happening?

Not sure what you mean? I cut and pasted the logs, so clearly
there is something happening, i.e. a reason for those messages.

> A quick look at flexbackup makes me think that it uses
> virtual memory, not file space, to buffer data for spooling.

Yes, I think you are right ! Here is a cut from the conf file :-

# Buffering program - to help streaming
$buffer = 'buffer'; # one of false/buffer/mbuffer
$buffer_megs = '10'; # buffer memory size (in megabytes)
$buffer_fill_pct = '75'; # start writing when buffer this percent full

>  Do you even know for sure that your backup was running at the 
> time that the filesystem full messages were generated?

Unfortunately not - the times are different, so these could be two
unrelated issues.

> Maybe.  At some point you filled /data up.  You don't have enough
> information here to indicate why.

If you could indicate which information I would need, would that help?

> You will probably want to keep closer track to understand your 
> usage patterns better.

Any tips on how to do that ?










Re: Help on Tape Backups / Disc Space

2006-02-07 Thread Lowell Gilbert
"Graham Bentley" <[EMAIL PROTECTED]> writes:

> Can any one comment on the below ;
> 
> candle# df -H
> Filesystem SizeUsed   Avail Capacity  Mounted on
> /dev/ad0s1a260M 37M202M15%/
> devfs  1.0k1.0k  0B   100%/dev
> /dev/ar0s1d116G 96G 11G90%/data
> /dev/ad0s1e260M 29k239M 0%/tmp
> /dev/ad0s1f 37G934M 33G 3%/usr
> /dev/ad0s1d260M5.4M234M 2%/var
> 
> /data is at 90% capacity - it's a mount of two discs
> in a RAID. The rest of the OS install is on a single disc.
> 
> I am using flexbackup, a Perl backup script from the ports,
> to do backups of /data; however, the log shows that it
> halts almost immediately.
> 
> I also notice this at the end of dmesg.today
> 
> pid 90729 (smbd), uid 65534 inumber 2119778 on /data: filesystem full
> pid 90729 (smbd), uid 65534 inumber 2921022 on /data: filesystem full
> pid 90728 (smbd), uid 65534 inumber 1931544 on /data: filesystem full
> pid 90728 (smbd), uid 65534 inumber 1931545 on /data: filesystem full
> pid 90753 (smbd), uid 65534 inumber 1931545 on /data: filesystem full
> 
> I was wondering if flexbackup was trying to use /data also for
> temp spooling of the backup job? /usr is virtually unused so it
> seems to make more sense to use that.

Are you just guessing here, or do you have a reason to think this is
happening?  A quick look at flexbackup makes me think that it uses
virtual memory, not file space, to buffer data for spooling.  Do you
even know for sure that your backup was running at the time that the
filesystem full messages were generated?

> /data was up to 98% but we removed a lot of stuff from it. Are we still too
> close to full capacity?

Maybe.  At some point you filled /data up.  You don't have enough
information here to indicate why.  You will probably want to keep 
closer track to understand your usage patterns better.


Help on Tape Backups / Disc Space

2006-02-06 Thread Graham Bentley
Hi All,

Can any one comment on the below ;

candle# df -H
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/ad0s1a260M 37M202M15%/
devfs  1.0k1.0k  0B   100%/dev
/dev/ar0s1d116G 96G 11G90%/data
/dev/ad0s1e260M 29k239M 0%/tmp
/dev/ad0s1f 37G934M 33G 3%/usr
/dev/ad0s1d260M5.4M234M 2%/var

/data is at 90% capacity - it's a mount of two discs
in a RAID. The rest of the OS install is on a single disc.

I am using flexbackup, a Perl backup script from the ports,
to do backups of /data; however, the log shows that it
halts almost immediately.

I also notice this at the end of dmesg.today

pid 90729 (smbd), uid 65534 inumber 2119778 on /data: filesystem full
pid 90729 (smbd), uid 65534 inumber 2921022 on /data: filesystem full
pid 90728 (smbd), uid 65534 inumber 1931544 on /data: filesystem full
pid 90728 (smbd), uid 65534 inumber 1931545 on /data: filesystem full
pid 90753 (smbd), uid 65534 inumber 1931545 on /data: filesystem full

I was wondering if flexbackup was trying to use /data also for
temp spooling of the backup job? /usr is virtually unused so it
seems to make more sense to use that.

/data was up to 98% but we removed a lot of stuff from it. Are we still too
close to full capacity?

Thanks!

ad0: 38162MB  [77536/16/63] at ata0-master
UDMA100
acd0: CDROM  at ata1-master UDMA33
ad4: 114473MB  [232581/16/63] at ata2-master
UDMA100
ad6: 114473MB  [232581/16/63] at ata3-master
UDMA100
ar0: 114473MB  [14593/255/63] status: READY subdisks:
disk0 READY on ad4 at ata2-master
disk1 READY on ad6 at ata3-master



Re: mysql backups (was Re: Remote backups, reading from and writing to the same file)

2006-01-15 Thread Greg 'groggy' Lehey
On Sunday, 15 January 2006 at  4:28:15 +0100, Hans Nieser wrote:
> N.J. Thomas wrote:
>> * Hans Nieser <[EMAIL PROTECTED]> [2006-01-13 00:25:14 +0100]:
>>> Among the things being backed up are my mysql database tables. This
>>> made me wonder whether the backup could possibly get borked when mysql
>>> writes to any of the mysql tables while tar is reading from them.
>>
>> Yes. While MySQL is writing to the database, it will put the files
>> on the disk in an inconsistent state. If you happen to copy those files
>> while they are in that state, MySQL will see a corrupted database.
>
> Thanks for the replies all. I think for the short term I will simply
> lock/shutdown my MySQL server (it is a home-server after all), in the long
> term I think I will look into snapshotting. I've also been thinking about
> just doing an SQL dump with mysqldump right before the backup, that will
> still copy along the tables which may be in an inconsistent state, but
> also the sql dump.

We (MySQL) are currently working on an online backup solution which
will address the issues you mention.  We don't have a firm date yet,
but it should be this year.  If you or anybody else has feature
requests, please let me know (preferably [EMAIL PROTECTED], but this
address will work too).

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: mysql backups (was Re: Remote backups, reading from and writing to the same file)

2006-01-14 Thread Hans Nieser

N.J. Thomas wrote:

* Hans Nieser <[EMAIL PROTECTED]> [2006-01-13 00:25:14 +0100]:

Among the things being backed up are my mysql database tables. This
made me wonder whether the backup could possibly get borked when mysql
writes to any of the mysql tables while tar is reading from them.


Yes. While MySQL is writing to the database, it will put the files
on the disk in an inconsistent state. If you happen to copy those files
while they are in that state, MySQL will see a corrupted database.



Thanks for the replies all. I think for the short term I will simply 
lock/shutdown my MySQL server (it is a home-server after all), in the long 
term I think I will look into snapshotting. I've also been thinking about 
just doing an SQL dump with mysqldump right before the backup, that will 
still copy along the tables which may be in an inconsistent state, but 
also the sql dump.



mysql backups (was Re: Remote backups, reading from and writing to the same file)

2006-01-13 Thread N.J. Thomas
* Hans Nieser <[EMAIL PROTECTED]> [2006-01-13 00:25:14 +0100]:
> Among the things being backed up are my mysql database tables. This
> made me wonder whether the backup could possibly get borked when mysql
> writes to any of the mysql tables while tar is reading from them.

Yes. While MySQL is writing to the database, it will put the files
on the disk in an inconsistent state. If you happen to copy those files
while they are in that state, MySQL will see a corrupted database.

> Do I really have to use MySQL's tools to do a proper SQL dump or stop
> MySQL (and any other services that may write to files included in my
> backup) before doing a backup? Do any of the more involved
> remote-backup solutions have ways of working around this? Or is it
> simply not possible to write to a file while it is being read?

Here are some methods that people use that I am aware of:

- Turn off the MySQL db the entire time you are backing up. No new
  software/hardware needed, but you incur db downtime.

- Use replication: have a slave that is a copy of the master,
  whenever you want to back up, break the replication for a little
  while, copy the slave, and then resume the replication. No
  downtime, but you will need another box for this, so you have the
  cost of new hardware.

- Use OS snapshotting. On Linux systems with LVM, it is possible
  to take an exact "snapshot" of the filesystem at any point in time
  without too much disk usage (assuming the lifetime that the
  snapshot exists is relatively short).

  So what you do in this case is write a script that tells MySQL to
  write lock the entire database and flush the cache, this takes a
  second or two and will bring the db files on disk to a consistent
  state. You then take a snapshot of the filesystem, and immediately
  resume MySQL when you have done that. Now, you just backup off of
  the snapshot, destroying it when you are done.

  No new hardware, but you will need a snapshot capable filesystem
  and write the script to do this. I'm not sure exactly what
  snapshotting features FreeBSD has...perhaps someone else could
  fill in this information. Also, you will have a short period of
  downtime during which the MySQL db is write locked. This may or
  may not be acceptable for you.

hth,
Thomas

-- 
N.J. Thomas
[EMAIL PROTECTED]
Etiamsi occiderit me, in ipso sperabo


RE: Remote backups, reading from and writing to the same file

2006-01-12 Thread Pietralla, Siegfried P
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hans Nieser
> Sent: Friday, 13 January 2006 10:25 AM
> To: freebsd-questions@freebsd.org
> Subject: Remote backups, reading from and writing to the same file
> 
> Hi list,
> 
> For a while I have been doing remote backups from my little server at home
> (which hosts some personal websites and also serves as my testing
> webserver) by tarring everything I wanted to be backed up and piping it to
> another machine on my network with nc(1), for example:
> 
> On the receiving machine: nc -l 1 > backup-`date +%Y-%m-%d`.tar.gz
> 
> On my server: tar -c -z --exclude /mnt* -f - / | nc -w 5 -o aphax 1
> 
> (Some excludes for tar(1) are left out for simplicity's sake)
> 
> Among the things being backed up are my mysql database tables. This made
> me wonder whether the backup could possibly get borked when mysql writes
> to any of the mysql tables while tar is reading from them.
> 
> Do I really have to use MySQL's tools to do a proper SQL dump or stop
> MySQL (and any other services that may write to files included in my
> backup) before doing a backup? Do any of the more involved remote-backup
> solutions have ways of working around this? Or is it simply not possible
> to write to a file while it is being read?


hi hans,


just some points to note in a general unix / db way ( not freebsd or
mysql specific ) :


tar ( and unix in general ) doesn't care if you're writing while you're
reading, so the tar will 'work' - though I believe tar may get confused
if you create new files while tar is running. 


just copying a 'live' db file will generally not give you a recoverable
backup. e.g. with 'oracle' you need to put files into backup mode before
copying them which lets oracle maintain extra recovery information. with
'ingres' you use the ingres backup command which records before images
along with the database files ( and incidentally prevents table creation
( i.e. new files ) while it backs up the db - usually with tar! ). so
you really need to find out what 'hot' backup is supported by your db
and run accordingly. or just shut down your db's before running your
backups. a common way to manage database backups ( if you have the space
) is to use normal db backup methods to backup to local disk, then use
the remote backup to backup the db backup ( and exclude the live db
files since they're probably not usable anyway ).



the number one rule for ALL backup regimes is - TEST YOUR RECOVERY
METHOD - preferably regularly. a real recovery is not the time to find
out what the shortcomings in your backup methodology are.



regards,
siegfried.


Re: Remote backups, reading from and writing to the same file

2006-01-12 Thread Philip Hallstrom
For a while I have been doing remote backups from my little server at home 
(which hosts some personal websites and also serves as my testing webserver) 
by tarring everything I wanted to be backed up and piping it to another 
machine on my network with nc(1), for example:


On the receiving machine: nc -l 1 > backup-`date +%Y-%m-%d`.tar.gz

On my server: tar -c -z --exclude /mnt* -f - / | nc -w 5 -o aphax 1

(Some excludes for tar(1) are left out for simplicity's sake)

Among the things being backed up are my mysql database tables. This made me 
wonder wether the backup could possibly get borked when mysql writes to any 
of the mysql tables while tar is reading from them.


Do I really have to use MySQL's tools to do a proper SQL dump or stop MySQL 
(and any other services that may write to files included in my backup) before 
doing a backup? Do any of the more involved remote-backup solutions have ways 
of working around this? Or is it simply not possible to write to a file while 
it is being read?


The short answer is yes.  The medium answer is I would if I were you :-)

The long answer (at least to the extent I know it) is...

You might be able to take a snapshot of the filesystem mysql's files are 
on and back that up, since the snapshot would at least be internally 
consistent.  But everything I've read about backing up a database suggests 
that a proper database-level dump is the way to go.
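On FreeBSD with UFS2, the snapshot route might be sketched roughly like this (mount points, paths, and the remote host are assumptions):

```shell
# sketch only - freeze the filesystem, mount the frozen image read-only,
# and copy from that instead of the live files
snapshot_backup() {
    mksnap_ffs /usr /usr/.snap/backup
    md=$(mdconfig -a -t vnode -o readonly -f /usr/.snap/backup)
    mount -o ro "/dev/$md" /mnt
    rsync -a /mnt/db/mysql/ backuphost:/backups/mysql/
    umount /mnt
    mdconfig -d -u "$md"
    rm /usr/.snap/backup
}
```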


If you really don't want to do that you might also be able to use one of 
the various LOCK commands in Mysql to block all writes until you've copied 
them over.
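A sketch of the LOCK approach (the data directory path is an assumption; the point is that the mysql client's SYSTEM command runs the copy while the same session still holds the read lock):

```shell
# sketch only - hold FLUSH TABLES WITH READ LOCK for the duration of the copy
lock_and_copy() {
    mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
SYSTEM tar -czf /tmp/mysql-files.tar.gz /var/db/mysql;
UNLOCK TABLES;
EOF
}
```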


But really a mysqldump ... | gzip > file should result in a very very 
small file.  And you could pipe that over the network (or even start 
mysqldump on your backup machine) if you didn't want the temp file issue.
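Running the dump from the backup machine over ssh might look like this (host name and dump options are assumptions), which avoids any temp file on the server:

```shell
# sketch only - pull a compressed dump straight over the network
pull_dump() {
    ssh aphax "mysqldump --all-databases | gzip" \
        > "backup-$(date +%Y-%m-%d).sql.gz"
}
```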


You might also consider rsync.  That would only copy files that have 
changed.  Might be handy if bandwidth is an issue.  You can set it up to 
keep backup copies of files that have changed as well.  And it can run 
over ssh.
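A hedged sketch of such an rsync setup (host and paths are assumptions; --backup-dir is relative to the destination, so prior versions of changed files land under it):

```shell
# sketch only - copy changed files over ssh, keeping prior versions
rsync_backup() {
    rsync -avz -e ssh \
        --backup --backup-dir="old-$(date +%Y-%m-%d)" \
        --exclude '/mnt' \
        / backuphost:/backups/aphax/
}
```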


-philip


Remote backups, reading from and writing to the same file

2006-01-12 Thread Hans Nieser

Hi list,

For a while I have been doing remote backups from my little server at home 
(which hosts some personal websites and also serves as my testing 
webserver) by tarring everything I wanted to be backed up and piping it to 
another machine on my network with nc(1), for example:


On the receiving machine: nc -l 1 > backup-`date +%Y-%m-%d`.tar.gz

On my server: tar -c -z --exclude /mnt* -f - / | nc -w 5 -o aphax 1

(Some excludes for tar(1) are left out for simplicity's sake)

Among the things being backed up are my mysql database tables. This made 
me wonder whether the backup could possibly get borked when mysql writes to 
any of the mysql tables while tar is reading from them.


Do I really have to use MySQL's tools to do a proper SQL dump or stop 
MySQL (and any other services that may write to files included in my 
backup) before doing a backup? Do any of the more involved remote-backup 
solutions have ways of working around this? Or is it simply not possible 
to write to a file while it is being read?


