On Sat, 10 Mar 2007 12:46:04 +0100
"Mikael Kermorgant" <[EMAIL PROTECTED]> wrote:
> How about mirroring using RAID1? (You'd probably have to buy
> a third 200 GB disk.)
> This way, you achieve data synchronisation easily, always have a
> local copy from which to run restores, and you cycle between
How about mirroring using RAID1? (You'd probably have to buy a
third 200 GB disk.)
This way, you achieve data synchronisation easily, always have a local
copy from which to run restores, and you cycle between 2 disks to keep
an offsite copy.
Regards,
--
Mikael Kermorgant
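For the rotation mechanics, here is a sketch of swapping the off-site half of the mirror. The device names (/dev/md0, /dev/sdc1, /dev/sdd1) are hypothetical, and the function only prints the mdadm commands so the plan can be reviewed before piping it to sh:

```shell
#!/bin/sh
# Sketch: rotate one half of a RAID1 mirror off-site.
# Device names are hypothetical; the function echoes the mdadm
# commands rather than running them -- pipe the output to sh to go live.
rotate_disks() {
    outgoing=$1   # disk headed off-site
    incoming=$2   # disk just brought back on-site

    echo mdadm /dev/md0 --fail "$outgoing"
    echo mdadm /dev/md0 --remove "$outgoing"
    echo mdadm /dev/md0 --add "$incoming"   # md resyncs it in the background
}

rotate_disks /dev/sdc1 /dev/sdd1
```

Note that after `--add`, the array resynchronises in the background; the returning disk should not be pulled again until /proc/mdstat shows the resync finished.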
I wrote:
>> Pre-flight script:
>>
>> umount /var/lib/bacula/volumes
>> mount --bind /media/usbdisk/volumes /var/lib/bacula/volumes
>>
>> Post-flight script:
>>
>> umount /var/lib/bacula/volumes
>> rsync -a /media/usbdisk/volumes/ /on-site/bacula/volumes/
>> mount --bind /on-sit
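Fleshed out a little, that pre/post pair might look like the sketch below. The paths come from the quoted snippet; `set -e` is my addition, and the final bind mount in postflight is my assumption about what the truncated last line does (re-binding the on-site copy):

```shell
#!/bin/sh
# Sketch of the pre/post-flight pair quoted above. The last bind mount
# in postflight is an assumption: it restores the on-site copy that
# preflight unmounted, so restores keep working between runs.
set -e  # abort on the first failed command

preflight() {
    # Point Bacula's volume directory at the USB disk before the job runs.
    umount /var/lib/bacula/volumes
    mount --bind /media/usbdisk/volumes /var/lib/bacula/volumes
}

postflight() {
    # Detach the USB disk, mirror it to the on-site copy, re-bind the
    # on-site copy into place.
    umount /var/lib/bacula/volumes
    rsync -a /media/usbdisk/volumes/ /on-site/bacula/volumes/
    mount --bind /on-site/bacula/volumes /var/lib/bacula/volumes
}
```

Bacula can invoke these from the `RunBeforeJob` / `RunAfterJob` directives of the Job resource.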
Good questions.
The backups run concurrently, so the only waiting is on the max-jobs
parameter. But, yes, more than one job is written to the tape at the
same time. This also keeps the data rate up and the tape drive busy, so
even if one client slows down momentarily the drive won't have to stop.
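Interleaving like this depends on the storage daemon accepting several jobs at once. A hedged sketch of the relevant directive (the resource name is hypothetical):

```
# bacula-sd.conf (fragment) -- the Name is hypothetical
Storage {
  Name = backup-sd
  # Allow several jobs to write simultaneously; their blocks are
  # interleaved on the volume, which keeps the tape drive streaming.
  Maximum Concurrent Jobs = 10
}
```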
On Wed, 7 Mar 2007 13:06:15 -0800 (PST)
Kel Raywood <[EMAIL PROTECTED]> wrote:
> A few weeks ago there was a short thread on a similar theme. See
> http://article.gmane.org/gmane.comp.sysutils.backup.bacula.general/31927
> and other posts in the thread.
Hmmm... I think I had seen this one, but a
On Wed, 07 Mar 2007 10:31:27 -0600
Brad Larson <[EMAIL PROTECTED]> wrote:
> I currently have my "on-site" bacula performing backups to a large
> LVM partition (no tapes). Some extra stuff gets copied to that
> partition (i.e. bacula db dump, etc.) and then gets rsync'd through an
> ssh connection over
On Wed, 07 Mar 2007 08:36:44 -0700
Don MacArthur <[EMAIL PROTECTED]> wrote:
> Nick,
>
> I use a similar approach with one significant difference and a few
> small ones.
>
> I have periodic (daily and weekly) pools and write (in parallel
> jobs for all the clients) all the backups to one volume
A few weeks ago there was a short thread on a similar theme. See
http://article.gmane.org/gmane.comp.sysutils.backup.bacula.general/31927
and other posts in the thread.
In that thread, the on-site disk-volumes were written first and then
copied to tape for off-site protection.
However, your requ
I currently have my "on-site" bacula performing backups to a large LVM
partition (no tapes). Some extra stuff gets copied to that partition (i.e.
bacula db dump, etc.) and then gets rsync'd through an ssh connection
over the internet to an "off-site" LVM. One server has a monthly "full"
backup th
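A minimal sketch of that sync step, under assumed paths, host name, and key file (all hypothetical). The script assembles and prints the rsync command so the plan can be checked first; drop the echo to run it for real:

```shell
#!/bin/sh
# Sketch of the on-site -> off-site volume sync. Source path, remote
# host, and SSH key file are hypothetical stand-ins.
SRC=/var/lib/bacula/volumes/                          # trailing slash: copy contents
DEST=backup@offsite.example.com:/srv/bacula/volumes/
RSYNC_OPTS="-az --delete"   # archive mode, compress over the wire,
                            # --delete keeps the off-site copy an exact mirror

# Printed rather than executed so the command can be reviewed first.
echo rsync $RSYNC_OPTS -e "ssh -i /root/.ssh/offsite_rsync" "$SRC" "$DEST"
```

Running it from cron shortly after the nightly jobs finish keeps the off-site copy at most a day behind.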
I apologize if I was ambiguous, it was unintentional.
I use the schedule to run a single job for each client, all starting at
10 PM, with a limit of 10 concurrent jobs.
So, about 20 jobs are initiated, and 10 run while the rest wait. When
one of the running jobs terminates, a waiting job starts.
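In Bacula configuration terms, that cap might be sketched as follows (resource and schedule names are hypothetical):

```
# bacula-dir.conf (fragments) -- names are hypothetical
Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 10   # 10 jobs run; the rest queue
}

Schedule {
  Name = "NightlyAt10"
  Run = Full sun-sat at 22:00    # every client's job fires at 10 PM
}
```

Each client's Job resource then references `Schedule = "NightlyAt10"`, and the Director's limit decides how many of the simultaneously scheduled jobs actually run at once.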
On 7 Mar 2007 at 8:36, Don MacArthur wrote:
> Nick,
>
> I use a similar approach with one significant difference and a few small
> ones.
>
> I have periodic (daily and weekly) pools and write (in parallel
> jobs for all the clients) all the backups to one volume to save disk
> space and reduce
Nick,
I use a similar approach with one significant difference and a few small
ones.
I have periodic (daily and weekly) pools and write (in parallel
jobs for all the clients) all the backups to one volume to save disk
space and reduce the management complexity. This media goes off site.
Nick,
For what it might be worth to you, here's how I do this with a couple
of my clients.
Part I
Full install on a Windows 2003 NAS/Server unit with full jobs on two
of the workstations and 'sniping' production-only data sitting on the
NAS/Server unit.
Two external HDDs that rotate M, W, F.
G'day guys,
Just trying to design a backup solution using Bacula for a small
company I work for and would appreciate some help with a few issues.
This email may be rather long, so I'd certainly appreciate anyone taking
the time to read it, let alone offer any insight they may have!
The main problem I