On Tue, 05 Mar 2013 20:30:22 +0100, Matthias Petermann matth...@d2ux.org
wrote:
Hello,
Zitat von Giorgos Keramidas keram...@ceid.upatras.gr:
If this is a UFS2 filesystem, it may be a good idea to snapshot the
filesystem, and then rsync-backup the snapshot instead.
Last time I tried UFS2
On Mon, Mar 4, 2013 at 1:37 PM, CyberLeo Kitsana cyber...@cyberleo.net wrote:
You can use dump(8) to dump a SU-journaled filesystem; you just cannot
create a snapshot. This implies that dump(8) will be run against the
live and possibly changing filesystem, which can lead to consistency
issues, whether they appear during backup or later on restore.
If I use all of the following rsync options... -a, -H, -A, -X, and -S
when trying to make my backups, and if I do whatever additional fiddling
is necessary to ensure that I separately copy over the MBR and boot loader
also to my backup
dump(8) to
dump a journaled filesystem with soft updates'' bug-a-boo.
Sigh. The best laid plans of mice and men...
I _had_ planned on using dump/restore and making backups from live mounted
filesystems while the system was running. But I really don't want to have
to take the system down
Hello,
Zitat von Giorgos Keramidas keram...@ceid.upatras.gr:
If this is a UFS2 filesystem, it may be a good idea to snapshot the
filesystem, and then rsync-backup the snapshot instead.
Last time I tried UFS2 snapshots I found out two serious limitations.
The first is that it doesn't work when UFS
. The best laid plans of mice and men...
I _had_ planned on using dump/restore and making backups from live mounted
filesystems while the system was running. But I really don't want to have
to take the system down to single-user mode every week for a few hours while
I'm making my disk-to-disk backup
, for example tar or cpdup
or rsync, as you've mentioned in the subject.
I _had_ planned on using dump/restore and making backups from live mounted
filesystems while the system was running. But I really don't want to have
to take the system down to single-user mode every week for a few hours while
others do also.
It can be disabled on an existing filesystem from single user mode.
If I use all of the following rsync options... -a, -H, -A, -X, and -S
when trying to make my backups, and if I do whatever additional fiddling
is necessary to ensure that I separately copy over the MBR
by this?
If I use all of the following rsync options... -a, -H, -A, -X, and -S
when trying to make my backups, and if I do whatever additional fiddling
is necessary to ensure that I separately copy over the MBR and boot loader
also to my backup drive, then is there any reason that, in the event
? Is it tunefs with some special option?
If I use all of the following rsync options... -a, -H, -A, -X, and -S
when trying to make my backups, and if I do whatever additional fiddling
is necessary to ensure that I separately copy over the MBR and boot loader
also to my backup drive
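The "separately copy over the MBR" step can be sketched with dd. This is a minimal illustration, not the poster's actual procedure: the device name /dev/ad0 in the comment is hypothetical, so a scratch file stands in for the disk here so the commands can run anywhere.

```shell
# A file-level rsync copy does not carry the MBR or boot blocks, so they
# are saved separately. On a real system this would be something like:
#   dd if=/dev/ad0 of=/backup/mbr.bin bs=512 count=1
# Demo below uses a scratch file as the "disk".
disk=/tmp/fake_disk.img
dd if=/dev/zero of="$disk" bs=1k count=64 2>/dev/null   # stand-in disk
printf 'BOOTCODE' | dd of="$disk" conv=notrunc 2>/dev/null

# Save the first 512 bytes (MBR + partition table) to a file:
dd if="$disk" of=/tmp/mbr.bin bs=512 count=1 2>/dev/null
wc -c /tmp/mbr.bin
```

Restoring after a failure is the same dd with `if` and `of` swapped, pointed at the replacement disk.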
thus dumped; but not necessarily
with the consistency of the dump itself. Any tool that backs up a live
filesystem, such as rsync or tar, will have these issues.
Sigh. The best laid plans of mice and men...
I _had_ planned on using dump/restore and making backups from live mounted
- Original Message -
On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote:
Now, unfortunately, I have just been bitten by the evil... and
apparently
widely known (except to me)... ``You can't use dump(8) to dump a
journaled
filesystem with soft updates'' bug-a-boo.
On Mon, 4 Mar 2013, Ronald F. Guilmette wrote:
So, um, I was reading about this last night, but I was sleepy and my eyes
glazed over... Please remind me, what is the exact procedure for turning
off the journaling? I boot to single user mode (from a live cd?) and
then what? Is it tunefs with
Pegasus Mc Cleaft k...@mthelicon.com writes:
It recreates something, but the most important files, which reside in
subfolders of the given tar.gz archives are gone, i.e. the subfolders
are empty.
The gunzip strategy you mentioned yields the same as a regular tar -xvf
file.tar.gz.
Pegasus,
I don't have the script anymore. It is among the files lost, but it was
pretty much straightforward, making use of:
tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
After creating the backups I just cp(1)ed them to an msdosfs-formatted
usb stick and got them onto 8.2 this way, so
On Feb 14, 2012 7:37 AM, Mike Kelly mdke...@ualr.edu wrote:
I don't have the script anymore. It is among the files lost, but it was
pretty much straightforward, making use of:
tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
After creating the backups I just cp(1)ed them
On Tue, Feb 14, 2012 at 2:56 AM, _ pancakekin...@gmail.com wrote:
Trying to recover these files on 8.2, I found that some of the archives -
unfortunately those with
the files that are dear to me - are corrupted.
Do you have MD5, SHA256, etc. checksums of the
.tar.gz files somewhere? Do they
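Recording checksums at backup time is what makes this question answerable later. A minimal sketch (file names are made up for the demo; FreeBSD's native tool is sha256(1), while GNU systems ship sha256sum(1)):

```shell
# Create a backup and record its checksum while the originals still exist.
mkdir -p /tmp/ck_demo && cd /tmp/ck_demo
echo "precious data" > file.txt
tar -czf backup.tar.gz file.txt
sha256sum backup.tar.gz > backup.tar.gz.sha256

# Later, before trusting or pruning anything, verify the archive:
sha256sum -c backup.tar.gz.sha256
```

A mismatch here, before the source files are deleted, is recoverable; discovering it after an OS upgrade is not.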
, but it was
pretty much straightforward, making use of:
tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
After creating the backups I just cp(1)ed them to an msdosfs-formatted
usb stick and got them onto 8.2 this way, so the famous ascii/binary
trap shouldn't be an issue here.
Just
krad kra...@gmail.com writes:
Just another silly thought: try the tar -j flag rather than the -z flag, as
you might have got your compression algorithms confused. Try the xz one as
well, just in case
The system tar (based on libarchive) will figure all of this out for
you, regardless of which
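The auto-detection point is easy to demonstrate: libarchive-based tar on FreeBSD (and GNU tar elsewhere) inspects the file when reading, so -z/-j/-J only matter at creation time.

```shell
# Create a gzip-compressed archive, then extract it with plain -xf:
mkdir -p /tmp/tar_demo && cd /tmp/tar_demo
echo hello > data.txt
tar -czf archive.tar.gz data.txt   # explicitly gzip on create

rm data.txt
tar -xf archive.tar.gz            # no -z: compression auto-detected on read
cat data.txt                      # prints: hello
```

So if `tar -xvf file.tar.gz` fails, the problem is the data itself, not a missing decompression flag.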
Hi,
Before making the move from 7.0 to 8.2, I ran a little script that did a
backup of selected files
and folders.
Trying to recover these files on 8.2, I found that some of the archives -
unfortunately those with
the files that are dear to me - are corrupted.
In other words, I just wanted to
To: freebsd-questions@freebsd.org
Subject: corrupted tar.gz archive - I lost my backups :)/:(
Hi,
Before making the move from 7.0 to 8.2, I ran a little script that did a
backup of selected files and folders.
Trying to recover these files on 8.2, I found that some of the archives
On Mon, Feb 13, 2012 at 8:56 PM, _ pancakekin...@gmail.com wrote:
Hi,
Before making the move from 7.0 to 8.2, I ran a little script that did a
backup of selected files
and folders.
Trying to recover these files on 8.2, I found that some of the archives -
unfortunately those with
the files
does gzip --test archive.tar.gz give?
I don't have the script anymore. It is among the files lost, but it was
pretty much straightforward, making use of:
tar -czf backupfile.tar.gz folders/ of/ my/ choice/.
After creating the backups I just cp(1)ed them to an msdosfs-formatted
usb stick and got
-Original Message-
SNIP
tar: Damaged tar archive
tar: Retrying...
tar: gzip decompression failed
tar: Error exit delayed from previous errors.
# gzip --test sr12292011.tar.gz
gzip: data stream error
gzip: sr12292011.tar.gz: uncompress failed
# gunzip sr12292011.tar.gz
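The "data stream error" above is exactly what `gzip -t` exists to catch: it checks the compressed stream and trailing CRC without extracting anything. A small demonstration with a deliberately corrupted copy (file names made up for the demo):

```shell
mkdir -p /tmp/gz_demo && cd /tmp/gz_demo
echo "important" > doc.txt
tar -czf good.tar.gz doc.txt
gzip -t good.tar.gz && echo "good.tar.gz: stream intact"

# Clobber four bytes inside the compressed data (the gzip header is
# only 10 bytes, so offset 20 lands in the deflate stream):
cp good.tar.gz bad.tar.gz
printf '\377\377\377\377' | dd of=bad.tar.gz bs=1 seek=20 conv=notrunc 2>/dev/null
gzip -t bad.tar.gz 2>/dev/null || echo "bad.tar.gz: corruption detected"
```

Running `gzip -t` on every archive right after writing it to the USB stick would have caught this while the originals still existed.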
On Mon, Feb 13, 2012 at 7:56 PM, _ pancakekin...@gmail.com wrote:
Before making the move from 7.0 to 8.2, I ran a little script that did a
backup of selected files
and folders.
I think it's IT tip #2: You don't have a backup unless it's tested. #1 is:
Make a backup.
You could try
tested. #1 is
Make a backup.
If I am not mistaken, I did test my backups and they worked fine.
After all, one of the four files that I have unpacks with no problems
so I don't see where things could have gone wrong.
You could try archivers/gzrecover
After gzrecover and cpio, the process stops
On Tue, 29 Sep 2009 22:23:00 -0400, PJ af.gour...@videotron.ca wrote:
Polytropon wrote:
Assuming nobody uses tape drives anymore, you need to specify
another file, which is the standard output in this case, which
may not be obvious, but it is if we reorder the command line:
# dump -0 -L - a
On Wed, 30 Sep 2009, Polytropon wrote:
On Tue, 29 Sep 2009 21:49:01 -0600 (MDT), Warren Block wbl...@wonkity.com
wrote:
So usually I back up /, /var, and /usr to files
on a USB disk or sshfs. Then I switch to the new target system, booting
it with a FreeBSD disk and doing a minimal install.
boot to restore those backups.
If you are making a clone drive to move to another system
then you have to slice and partition the new drive and then
do the piped dump-restores you indicate below.
If you are making a disk to switch to in case of a failure, you
start by making a slice and partitioned
On Tue, Sep 29, 2009 at 10:48:30PM -0400, PJ wrote:
Polytropon wrote:
On Tue, 29 Sep 2009 21:26:19 -0400, PJ af.gour...@videotron.ca wrote:
But what does that mean? But ad2s1a has just been newfs'd - so how can
it be dumped if it's been formatted?
When you're working on this
On Wed, Sep 30, 2009 at 05:08:05AM +0200, Polytropon wrote:
Forgot to mention this:
On Tue, 29 Sep 2009 22:23:00 -0400, PJ af.gour...@videotron.ca wrote:
1. will the s1a slice dump the entire system, that is, the a, d, e, f
and g slices or is it partitions?
The ad0s1 slice
About the dd method:
On Wed, 30 Sep 2009 11:30:58 -0400, Jerry McAllister jerr...@msu.edu wrote:
It can be used, but it is not a good way to do it.
For regular backups or even for cloning, it's not very
performant, I agree. I'm mostly using this method for
forensic purposes, when I need a copy
I am getting more and more confused with all the info regarding backing
up and cloning or moving systems from disk to disk or computer to computer.
I would like to do 2 things:
1. clone several instances of 7.2 from an existing installation
2. set up a backup script to back up changes either
On Tue, 29 Sep 2009, PJ wrote:
I am getting more and more confused with all the info regarding backing
up and cloning or moving systems from disk to disk or computer to computer.
I would like to do 2 things:
1. clone several instances of 7.2 from an existing installation
2. set up a backup
On Tue, 29 Sep 2009 19:44:38 -0400, PJ af.gour...@videotron.ca wrote:
This may be clear to someone; it certainly is not to me.
As I understand it, newfs will (re)format the slice.
No. The newfs program does create a new file system. In
other terminology, this can be called a formatting process.
On Wed, 30 Sep 2009, Polytropon wrote:
So far, I have been unable to dump the / slice, not even with the -L
option.
Always keep in mind: Use dump only on unmounted partitions.
That is unnecessary. The -L option is there just for dumping mounted
filesystems.
-Warren Block * Rapid City,
Warren Block wrote:
On Tue, 29 Sep 2009, PJ wrote:
I am getting more and more confused with all the info regarding backing
up and cloning or moving systems from disk to disk or computer to
computer.
I would like to do 2 things:
1. clone several instances of 7.2 from an existing
$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a | restore -rf -
[...]
But what does that mean? But ad2s1a has just been newfs'd - so how can
That's ad*1*s1a that has just been formatted, not ad2...
Best,
Olivier
On Tue, 29 Sep 2009 19:09:51 -0600 (MDT), Warren Block wbl...@wonkity.com
wrote:
On Wed, 30 Sep 2009, Polytropon wrote:
So far, I have been unable to dump the / slice, not even with the -L
option.
Always keep in mind: Use dump only on unmounted partitions.
That is unnecessary. The -L
On Tue, 29 Sep 2009 21:26:19 -0400, PJ af.gour...@videotron.ca wrote:
But what does that mean? But ad2s1a has just been newfs'd - so how can
it be dumped if it's been formatted?
When you're working on this low level, triple-check all your
commands. Failure to do so can cause data loss. In the
You are a Master among masters... extraordinary understanding of the
genre and very, very clear explanations...
I guess my filter between the brain and the computer is a bit foggy... :-(
I really appreciate your explanations.
But I still have a couple of small questions below...
Polytropon wrote:
Olivier Nicole wrote:
$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a | restore -rf -
[...]
But what does that mean? But ad2s1a has just been newfs'd - so how can
Thats ad*1*s1a that has just been formatted, not
Polytropon wrote:
On Tue, 29 Sep 2009 21:26:19 -0400, PJ af.gour...@videotron.ca wrote:
But what does that mean? But ad2s1a has just been newfs'd - so how can
When you're working on this low level, triple-check all your
commands. Failure to do so
On Tue, 29 Sep 2009 22:23:00 -0400, PJ af.gour...@videotron.ca wrote:
I feel a bit stupid, as usual, my carelessness led me to miss the
difference between ad1 and ad2... dumb, dumb, dumb.
As long as you realize it BEFORE any writing operation, it's
no problem. Keep in mind that the numbering of
Forgot to mention this:
On Tue, 29 Sep 2009 22:23:00 -0400, PJ af.gour...@videotron.ca wrote:
1. will the s1a slice dump the entire system, that is, the a, d, e, f
and g slices or is it partitions?
The ad0s1 slice (containing the a, d, e, f and g partitions) can
be copied 1:1 with dd. By
On Tue, 29 Sep 2009 22:48:30 -0400, PJ af.gour...@videotron.ca wrote:
Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
from somewhere else, like from home, usr or whatever but not from / as
that is what
On Tue, 29 Sep 2009, PJ wrote:
$ newfs -U /dev/ad2s1a
$ mount /dev/ad2s1a /target
$ cd /target
$ dump -0Lauf - /dev/ad1s1a | restore -rf -
dump is reading /dev/ad1s1a and using stdout for output.
restore is writing to the current directory (/target) and is reading
from stdin.
But what
On Wed, 30 Sep 2009, Polytropon wrote:
Forgot to mention this:
On Tue, 29 Sep 2009 22:23:00 -0400, PJ af.gour...@videotron.ca wrote:
1. will the s1a slice dump the entire system, that is, the a, d, e, f
and g slices or is it partitions?
The ad0s1 slice (containing the a, d, e, f and g
On Wed, 30 Sep 2009, Polytropon wrote:
On Tue, 29 Sep 2009 22:48:30 -0400, PJ af.gour...@videotron.ca wrote:
Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
from somewhere else, like from home, usr or
On Tue, 29 Sep 2009 21:37:50 -0600 (MDT), Warren Block wbl...@wonkity.com
wrote:
Why make it harder than it needs to be? Call it / or /var or /usr
instead of /dev/ad0s1whatever. dump will handle it.
This works without problems as long as it is running from the
system to be copied. In case
On Tue, 29 Sep 2009, Warren Block wrote:
On Wed, 30 Sep 2009, Polytropon wrote:
On Tue, 29 Sep 2009 22:48:30 -0400, PJ af.gour...@videotron.ca wrote:
Duh I think I see where this is leading... I'm pretty sure it was
issued from / which makes it redundant, right? I should have issued it
On Wed, 30 Sep 2009, Polytropon wrote:
On Tue, 29 Sep 2009 21:37:50 -0600 (MDT), Warren Block wbl...@wonkity.com
wrote:
Why make it harder than it needs to be? Call it / or /var or /usr
instead of /dev/ad0s1whatever. dump will handle it.
This works without problems as long as it is
On Tue, 29 Sep 2009 21:49:01 -0600 (MDT), Warren Block wbl...@wonkity.com
wrote:
So usually I back up /, /var, and /usr to files
on a USB disk or sshfs. Then I switch to the new target system, booting
it with a FreeBSD disk and doing a minimal install. That makes sure the
MBR is
I want to use rsync to backup a large file (say 1G) that changes a
little each day (say 1M), but I also want the ability to re-create
older versions of this file.
I could use --backup, but that would create a 1G file each day, even
though I only really need the 1M that's changed.
How do I tell
On Sun, May 24, 2009 at 11:39:57PM -0700, Kelly Jones wrote:
I want to use rsync to backup a large file (say 1G) that changes a
little each day (say 1M), but I also want the ability to re-create
older versions of this file.
I could use --backup, but that would create a 1G file each day, even
# daily_backup_disks_backuproot (path):
# Path to the directory where backups are stored on the receiving machine.
# Default: /backup
if [ -r /etc/defaults/periodic.conf ]
then
. /etc/defaults/periodic.conf
source_periodic_confs
fi
# Set defaults
daily_backup_disks_compress_local
Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if
it will work. I've been reading the man page, but I'm wondering if
anyone is doing this successfully and would like to share their cmdline.
We do this for ~100 linux (centos)
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it will
work. I've been reading the man page, but I'm wondering if anyone is doing
this successfully and would like to share their cmdline.
--
Paul Schmehl ([EMAIL PROTECTED])
Senior Information
root can log into the backup server without a
password. It 'rotates' the backups by including the day of the week
in the file name, this gives me 7 days of complete backups.
I also take a snapshot of the home directory, in case I need to fetch
one file from backup. These dumps are really
We use the following in a script to backup our servers.
/bin/ssh -q -o 'BatchMode yes' -l user host '/sbin/dump -h 0 -0uf - /home \
| /usr/bin/gzip --fast' 2> /path/to/logs/host/home_full.dump.log \
> /backups/host_home_full.dump.gz
--On April 4, 2008 12:59:27 PM -0500 Paul Schmehl [EMAIL
Hey,
I'm presently using rsync over ssh, but I think dump would be better if it
will work. I've been reading the man page, but I'm wondering if anyone is
doing this successfully and would like to share their cmdline.
Are doing backups to disk? I find rsync combined with hard links
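The "rsync combined with hard links" idea mentioned here keeps a series of dated snapshot directories in which unchanged files are hard links into the previous snapshot, so each extra snapshot costs only the changed files. rsync's --link-dest automates this; the underlying mechanism can be shown with cp alone (a minimal sketch using the GNU coreutils `cp -al` spelling; paths are made up):

```shell
mkdir -p /tmp/snap_demo/day1 && cd /tmp/snap_demo
echo "stable" > day1/unchanged.txt
echo "v1"     > day1/changing.txt

cp -al day1 day2              # snapshot: every file is a hard link
rm day2/changing.txt          # break the link before rewriting
echo "v2" > day2/changing.txt

ls -i day1/unchanged.txt day2/unchanged.txt   # same inode: one copy on disk
```

Deleting day1 later frees only the blocks no other snapshot still links to, which is what makes seven rotating "full" backups affordable.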
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will
work. I've been reading the man page, but I'm wondering if anyone is doing
this successfully and would like to share their cmdline.
Hi Paul,
We're not using dump over ssh but I was
On Fri, 4 Apr 2008, Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will work. I've been reading the man page, but I'm wondering if anyone is
doing this successfully and would like to share their cmdline.
There's an
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will
work. I've been reading the man page, but I'm wondering if anyone is doing
this successfully and would like to share their cmdline.
Hi,
[ from
On Fri, Apr 4, 2008 at 1:59 PM, Paul Schmehl [EMAIL PROTECTED] wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if
it will work. I've been reading the man page, but I'm wondering if anyone
is doing this successfully and would like to share
Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if
it will work. I've been reading the man page, but I'm wondering if
anyone is doing this successfully and would like to share their cmdline.
I did this once:
--On Friday, April 04, 2008 22:21:52 +0200 Peter Boosten [EMAIL PROTECTED]
wrote:
Paul Schmehl wrote:
Has anyone done this?
Little did I know, when I posted this question, that I would receive such a
wealth of information. I'm deeply appreciative of the community's willingness
to
Little did I know, when I posted this question, that I would
receive such a wealth of information. I'm deeply appreciative of
the community's willingness to share information and thank each and
every one of you for your contributions.
Now I have some reading to do. :-)
I think
On Fri, Apr 04, 2008 at 05:00:01PM -0400, John Almberg wrote:
Little did I know, when I posted this question, that I would
receive such a wealth of information. I'm deeply appreciative of
the community's willingness to share information and thank each and
every one of you for your
On Fri, 04 Apr 2008 12:59:27 -0500, in sentex.lists.freebsd.questions
you wrote:
Has anyone done this?
Hi,
Yes, we use something like the following
#!/bin/sh
if [ -z "$1" ] ; then
echo
echo "Usage: $0 backup level"
echo "       See 'man dump' for more"
i'm doing this with my notebook.
Great. What kind of drive? And have you actually
had to do a restore?
some used 80GB 3.5 drive (Seagate) + noname USB-IDE adapter (true noname,
nothing written on it). the latter cost $6 new, including disk power
supply.
works very well.
i don't make
I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the
USB. That way, in the event of a drive failure, I can boot off the
USB drive, and then just restore everything onto the webmail server.
Has anyone else
I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the USB.
That way, in the event of a drive failure, I can boot off the
USB drive, and then just restore everything onto the webmail server.
good idea. man
Wojciech Puchar wrote:
I'm thinking of backing up my FreeBSD 6.2 webmail server by installing
FreeBSD onto the USB, and then dumping the whole filesystem onto the
USB. That way, in the event of a drive failure, I can boot off the
USB drive, and then just restore everything onto the webmail
On Thursday 23 August 2007 18:31:05 Patrick Baldwin wrote:
I'm thinking of backing up my FreeBSD 6.2 webmail server by
installing FreeBSD onto the USB, and then dumping the whole
filesystem onto the USB. That way, in the event of a drive failure,
I can boot off the USB drive, and then just
the args, basically 1GB
files that are bsd-backup-(date)-??]
anyway, without uncompressing them back to disk (it's the same
slice/partitions as I have now), what's the easiest way to get read
access to these contents of the files in these backups?
Thanks,
-Jim Stapleton
in these backups?
It would be extremely difficult to allow access to arbitrary files from
a backup made like that, without dd'ing the decompressed image to
another disk. Theoretically a bzip2-compressed file can be randomly
accessed because the dictionary is reset every 900k bytes of
uncompressed data. You
code marked as
experimental, a) don't be surprised when it goes wrong, and b) the
first thing you should do to try and fix it is to stop using the
experimental code :-)
Kris
Yes, this is the last time that I will use *experimental code*. It
looks like everything is back to normal.
My local backups
. But if this is true...
-brian
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
Hi people.
Today I received completed FULL backups
On Fri, Oct 06, 2006 at 10:08:27AM -0700, perikillo wrote:
change the scheduler to the old SCHED_4BSD and maxuser from 10 to 32
like chuck told me.
These are probably what fixed it.
I guess you've learned a lesson: when you choose to use code marked as
experimental, a) don't be surprised when
know if it works, disable the drivers
from the kernel and just use the modules and see what happens :-?
But I need to see first how my backups finish, and will let you know,
guys. Thanks for your time.
In response to perikillo [EMAIL PROTECTED]:
[snip]
Right now my first backup crashed again:
xl0: watchdog timeout
Right now I changed the cable from one port to another to see what happens.
Guys, please, someone tell me something; this is critical for me.
This is my second NIC.
, this is critical for me.
This is my second NIC.
Don't know if this is related or not, but it may be:
http://lists.freebsd.org/pipermail/freebsd-stable/2006-September/028792.html
--
Bill Moran
Collaborative Fusion Inc.
Hi people.
Today my full backups completed successfully, but my NIC again
On Oct 4, 2006, at 10:32 AM, perikillo wrote:
My kernel file is this:
machine i386
cpu I686_CPU
You should also list cpu I586_CPU, otherwise you will not include
some optimizations intended for Pentium or higher processors.
ident BACULA
maxusers 10
--- Chuck Swiger [EMAIL PROTECTED] wrote:
On Oct 4, 2006, at 10:32 AM, perikillo wrote:
My kernel file is this:
machine i386
cpu I686_CPU
You should also list cpu I586_CPU, otherwise you
will not include
some optimizations intended for Pentium or higher
the Differential backups have been working
well. I did the buildworld yesterday and got my bacula server ready to do a
full backup for all my clients and whoops...
I lost 2 clients' jobs:
Client 1:
02-Oct 18:30 bacula-dir: Start Backup JobId 176, Job=PDC.2006-10-02_18.30.00
02-Oct 20:40 bacula-dir: PDC.2006-10
: watchdog timeout
I reset the server, and all the Differential backups have been working
well. I did the buildworld yesterday and got my bacula server ready to do a
full backup for all my clients and whoops...
I lost 2 clients' jobs:
Client 1:
02-Oct 18:30 bacula-dir: Start Backup JobId 176, Job=
PDC
Pat Maddox wrote:
I got a dedicated server a while ago, and it came with /home symlinked
to /usr/home. I'm not entirely sure why, to tell you the truth, but
it's never posed a problem. However if I run rsync -avz to back up my
server, it creates something like this:
/backup/march/19/home -
Pat Maddox wrote:
However if I run rsync -avz to back up my
server, it creates something like this:
/backup/march/19/home -> /usr/home
So if I were to go to /backup/march/19 and rm -rf * wouldn't it go and
delete everything in /usr/home?
Should add: In your shell, alias rm to rm -i which
I got a dedicated server a while ago, and it came with /home symlinked
to /usr/home. I'm not entirely sure why, to tell you the truth, but
it's never posed a problem. However if I run rsync -avz to back up my
server, it creates something like this:
/backup/march/19/home -> /usr/home
So if I
Graham Bentley [EMAIL PROTECTED] writes:
Do you even know for sure that your backup was running at the
time that the filesystem full messages were generated?
Unfortunately not - the times are different, so these could be two
unrelated issues.
That's exactly the point.
Maybe. At some
flexbackup, a perl backup script in the ports
to do backups of /data however the log shows that it
halts almost immediatly.
I also notice this at the end of dmesg.today
pid 90729 (smbd), uid 65534 inumber 2119778 on /data: filesystem full
pid 90729 (smbd), uid 65534 inumber 2921022 on /data
Thanks for the reply Lowell.
Are you just guessing here, or do you have a reason to think this is
happening?
Not sure what you mean? I cut and paste the logs so clearly
there is something happening i.e. a reason for those messages?
A quick look at flexbackup makes me think that it uses
backups of /data however the log shows that it
halts almost immediately.
I also notice this at the end of dmesg.today
pid 90729 (smbd), uid 65534 inumber 2119778 on /data: filesystem full
pid 90729 (smbd), uid 65534 inumber 2921022 on /data: filesystem full
pid 90728 (smbd), uid 65534 inumber 1931544
On Sunday, 15 January 2006 at 4:28:15 +0100, Hans Nieser wrote:
N.J. Thomas wrote:
* Hans Nieser [EMAIL PROTECTED] [2006-01-13 00:25:14 +0100]:
Among the things being backed up are my mysql database tables. This
made me wonder whether the backup could possibly get borked when mysql
writes to
N.J. Thomas wrote:
* Hans Nieser [EMAIL PROTECTED] [2006-01-13 00:25:14 +0100]:
Among the things being backed up are my mysql database tables. This
made me wonder whether the backup could possibly get borked when mysql
writes to any of the mysql tables while tar is reading from them.
Yes.
* Hans Nieser [EMAIL PROTECTED] [2006-01-13 00:25:14 +0100]:
Among the things being backed up are my mysql database tables. This
made me wonder whether the backup could possibly get borked when mysql
writes to any of the mysql tables while tar is reading from them.
Yes. While MySQL is writing
Hi list,
For a while I have been doing remote backups from my little server at home
(which hosts some personal websites and also serves as my testing
webserver) by tarring everything I wanted to be backed up and piping it to
another machine on my network with nc(1), for example
For a while I have been doing remote backups from my little server at home
(which hosts some personal websites and also serves as my testing webserver)
by tarring everything I wanted to be backed up and piping it to another
machine on my network with nc(1), for example:
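The example command itself was cut off in the archive; a typical tar-over-nc(1) pair looks roughly like the commented lines below (the port number and host name are made up, and nc flag syntax varies between netcat flavors). The runnable part shows the same tar stream minus the network leg:

```shell
# Hypothetical two-host version of the poster's setup:
#   receiver$ nc -l 7000 > backup.tar.gz
#   sender$   tar -czf - /etc /home | nc receiver 7000
#
# Locally, tar writing its archive to stdout works the same way:
mkdir -p /tmp/pipe_demo/src && cd /tmp/pipe_demo
echo "site data" > src/index.html
tar -czf - src > backup.tar.gz      # "-f -" sends the archive to stdout

mkdir -p restore && tar -xzf backup.tar.gz -C restore
cat restore/src/index.html          # prints: site data
```

Whatever sits on the right of the pipe (nc, ssh, a file) never sees anything but the byte stream, which is why the consistency concerns discussed in this thread apply identically.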
On the receiving
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hans Nieser
Sent: Friday, 13 January 2006 10:25 AM
To: freebsd-questions@freebsd.org
Subject: Remote backups, reading from and writing to the same file
Hi list,
For a while I have been doing remote