Re: [Bacula-users] fat32 incrementals backup too much

2011-09-27 Thread Geert Stappers
On 20110926 at 20:49, scar wrote:
 Martin Simmons @ 09/26/2011 11:10 AM:
  Any particular types of file?
  
  Check the ctimes (ls -lc).
 
 most of the files are homemade audio recordings... mp3, wav, and flac.
 
 the ctimes don't show any modification either.

I'm not sure if fat32 really has ctime;
it could be faked for compatibility reasons.


In the original post it is said that the fat32 disk is an external device
and not always mounted. Here is a quick check:
 * Note the timestamp of the directory that will be the mount point
 * Mount the fat32 disk
 * Note the timestamp of the directory that is the mount point
 * Unmount the disk, take it somewhere else and write there
 * Take the disk back and mount it
 * Check the timestamp of the mount point directory
 * Compare it with the previous notes
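
As a sketch, the check above could look like this on the shell (the device
name and mount point here are assumptions; adjust to your setup, and note
it needs root and the actual disk):

```shell
# Hypothetical device and mount point; adjust to your setup.
MP=/mnt/usbdisk
DEV=/dev/sdb1

stat -c '%y %z %n' "$MP"    # mtime/ctime of the mount point, before mounting
mount "$DEV" "$MP"
stat -c '%y %z %n' "$MP"    # same directory, now with the fat32 disk mounted
umount "$MP"
# ...take the disk elsewhere, write to it, bring it back...
mount "$DEV" "$MP"
stat -c '%y %z %n' "$MP"    # compare with the earlier values
```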

My guess is that the timestamps are artificially changed at mount time.


Hope this helps

Cheers
Geert Stappers
--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Problem with Bacula over IPSec

2011-09-27 Thread Kevin Keane
Hi,

I recently made several changes to my network, and ever since my bacula backups 
to the affected server error out after exactly 15 minutes. I don't know exactly 
which change is the culprit, but I suspect it is somehow IPSec-related, and 
would appreciate some help troubleshooting the problem.

I have servers in two locations: my main office with the bacula director and 
SD, and several clients, and two remote servers in a data center 2000 miles 
away. These servers only run the bacula-fd.

The director used to use SSH to connect to both remote servers, and this worked 
reliably.

Recently, I implemented IPSec to one of the two remote servers. Director, 
remote FD and SD can now connect directly to each other, without any other 
tunnel (logically, IPSec is transparent). At the same time, I also switched my 
Internet connection from Cable modem to DSL. The second server still uses the 
SSH tunnel to do the backups.

Unfortunately, using IPSec, the backups seem to fail after 15 minutes; the 
connection from FD to SD seems to get severed. The backups to the second 
server, using SSH, work without a problem.

The director produces this log output:
26-Sep 19:31 my-dir JobId 10686: Start Backup JobId 10686, 
Job=remoteserver.2011-09-26_19.05.00_06
26-Sep 19:31 my-dir JobId 10686: Created new Volume 
"remoteserver_20110926193150_Differential.bacula" in catalog.
26-Sep 19:31 my-dir JobId 10686: Using Device "SATADisk1"
26-Sep 19:31 Disk1 JobId 10686: Labeled new Volume 
"remoteserver_20110926193150_Differential.bacula" on device "SATADisk1" (/misc/BACKUP1).
26-Sep 19:31 Disk1 JobId 10686: Wrote label to prelabeled Volume 
"remoteserver_20110926193150_Differential.bacula" on device "SATADisk1" (/misc/BACKUP1)
26-Sep 19:31 my-dir JobId 10686: Max Volume jobs=1 exceeded. Marking Volume 
"remoteserver_20110926193150_Differential.bacula" as Used.
26-Sep 19:46 my-dir JobId 10686: Fatal error: Network error with FD during 
Backup: ERR=Connection reset by peer
26-Sep 19:46 Disk1 JobId 10686: JobId=10686 
Job="remoteserver.2011-09-26_19.05.00_06" marked to be canceled.
26-Sep 19:46 Disk1 JobId 10686: Error: bsock.c:548 Read expected 65536 got 1392 
from client:xxx.xxx.xxx.xxx:366432

The FD produces this error message:

Sep 27 02:46:58 remoteserver bacula-fd: bsock.c:393 Write error sending 270 
bytes to client36387: ERR=Connection reset by peer
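
One detail in those traces may be worth noting: the SD expected 65536 bytes
and got 1392, which is close to a typical TCP MSS once DSL (PPPoE) and IPsec
overhead are subtracted from a 1500-byte Ethernet MTU. A hedged
back-of-envelope, using typical per-layer allowances rather than anything
measured on this link:

```shell
# Back-of-envelope only: typical overhead figures, not measured values.
ETH_MTU=1500    # Ethernet MTU
PPPOE=8         # PPPoE header, common on DSL links
ESP=60          # rough IPsec ESP + outer IP header allowance (cipher-dependent)
IP_TCP=40       # inner IPv4 + TCP headers
echo $((ETH_MTU - PPPOE - ESP - IP_TCP))   # prints 1392
```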




Re: [Bacula-users] Problem with Bacula over IPSec

2011-09-27 Thread Alexandre Chapellon

  
  
My two cents: it's probably something related to TCP MSS.
I remember I had problems like that when I deployed my strongswan setup
(not sure if it was with bacula or some other app).
If so, this is completely unrelated to bacula; try playing with the
iptables TCPMSS target in the appropriate chain/table.

Regards.
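
For reference, MSS clamping with iptables usually looks like the following
(a sketch only; the right chain and table depend on where the IPsec gateway
sits, and this assumes the box doing IPsec also forwards the traffic):

```shell
# Clamp the TCP MSS to the discovered path MTU on forwarded SYN packets.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu

# Or pin an explicit value instead (the number is an example, not a recommendation):
# iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
#          -j TCPMSS --set-mss 1360
```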


On 27/09/2011 10:20, Kevin Keane wrote:




-- 
Alexandre Chapellon
Open source systems and networks engineering.
Follow me on twitter: @alxgomz



Re: [Bacula-users] ld: warning: symbol `plugin_list' has differing sizes:

2011-09-27 Thread Kim Culhan
On Mon, September 26, 2011 10:29 am, Martin Simmons wrote:
 On Fri, 23 Sep 2011 17:16:31 -0400, Kim Culhan said:
 As noted in a previous  message, with a correct libtool it now produces
 those warnings but
 not when compiling with --disable-libtool.

 OK.

 However, I would expect ld to give a warning when linking bacula-dir
 without --disable-libtool too, because it also defines plugin_list globally.

Yes, ld gives the 'plugin_list' warning without --disable-libtool; with
--disable-libtool it does not give those warnings and produces working
bacula files.

With --disable-libtool there are some warnings of the general type:

Compiling crypto.c
crypto.c, line 567: Warning (Anachronism): Formal argument cb of type
extern C int(*)(char*,int,int,void*) in call to
PEM_read_bio_PrivateKey(bio_st*, evp_pkey_st**, extern C
int(*)(char*,int,int,void*), void*) is being passed
int(*)(char*,int,int,void*)

-kim


[Bacula-users] What does this debug code mean?

2011-09-27 Thread JJ
Let's try another Q:

I ran /usr/sbin/bacula-fd -d100 -c /etc/bacula/bacula-fd.conf on the fd
host and received this output when connecting from the director...


bacula-fd: filed_conf.c:438-0 Inserting director res: bacula-director-mon
bacula-director-fd: jcr.c:132-0 read_last_jobs seek to 188
bacula-director-fd: jcr.c:139-0 Read num_items=10
bacula-director-fd: pythonlib.c:113-0 No script dir. prog=FDStartUp
root@bacula-director:/etc/bacula# bacula-director-fd: filed.c:225-0
filed: listening on port 9102
bacula-director-fd: bnet_server.c:96-0 Addresses
host[ipv4:192.168.1.18:9102]
bacula-director-fd: bnet.c:667-0 who=client host=192.168.1.17 port=36387
bacula-director-fd: jcr.c:603-0 OnEntry JobStatus=bacula-director-fd:
jcr.c:623-0 OnExit JobStatus=C set=C
bacula-director-fd: find.c:81-0 init_find_files ff=80a2968
bacula-director-fd: job.c:233-0 dird: Hello Director bacula10-dir calling
bacula-director-fd: job.c:249-0 Executing Hello command.
bacula-director-fd: job.c:252-0 Quit command loop. Canceled=0
bacula-director-fd: pythonlib.c:237-0 No startup module.
bacula-director-fd: job.c:337-0 Calling term_find_files
bacula-director-fd: job.c:340-0 Done with term_find_files
bacula-director-fd: job.c:342-0 Done with free_jcr



JJ
Support Engineer
Cirrhus9
10320 Meadow Glen Way East
Unit 1B
Escondido, CA  92026

Office  - (760) 297-2148 ext 3
Fax - (760) 466-7207
Home- (330) 788-1876
Skype:  - my-kungfu
gpg fingerprint:- C839 9940 16F0 F5ED 694A  6BA4 2F83 F0A7 86F2 8105

On 09/26/2011 11:24 AM, JJ wrote:
 bacula-fd: filed_conf.c:438-0 Inserting director res: bacula-director-mon
 bacula-director-fd: jcr.c:132-0 read_last_jobs seek to 188
 bacula-director-fd: jcr.c:139-0 Read num_items=10
 bacula-director-fd: pythonlib.c:113-0 No script dir. prog=FDStartUp
 root@bacula-director:/etc/bacula# bacula-director-fd: filed.c:225-0
 filed: listening on port 9102
 bacula-director-fd: bnet_server.c:96-0 Addresses
 host[ipv4:192.168.1.18:9102]
 bacula-director-fd: bnet.c:667-0 who=client host=192.168.1.17 port=36387
 bacula-director-fd: jcr.c:603-0 OnEntry JobStatus=bacula-director-fd:
 jcr.c:623-0 OnExit JobStatus=C set=C
 bacula-director-fd: find.c:81-0 init_find_files ff=80a2968
 bacula-director-fd: job.c:233-0dird: Hello Director bacula10-dir calling
 bacula-director-fd: job.c:249-0 Executing Hello command.
 bacula-director-fd: job.c:252-0 Quit command loop. Canceled=0
 bacula-director-fd: pythonlib.c:237-0 No startup module.
 bacula-director-fd: job.c:337-0 Calling term_find_files
 bacula-director-fd: job.c:340-0 Done with term_find_files
 bacula-director-fd: job.c:342-0 Done with free_jcr



Re: [Bacula-users] What does this debug code mean?

2011-09-27 Thread John Drescher
On Tue, Sep 27, 2011 at 9:25 AM, JJ jjo...@cirrhus9.com wrote:
 Let's try another Q:

 /usr/sbin/bacula-fd -d100 -c
 /etc/bacula/bacula-fd.conf on the fd host and received this output when
 connecting from the director...


 bacula-fd: filed_conf.c:438-0 Inserting director res: bacula-director-mon
 bacula-director-fd: jcr.c:132-0 read_last_jobs seek to 188
 bacula-director-fd: jcr.c:139-0 Read num_items=10
 bacula-director-fd: pythonlib.c:113-0 No script dir. prog=FDStartUp
 root@bacula-director:/etc/bacula# bacula-director-fd: filed.c:225-0
 filed: listening on port 9102
 bacula-director-fd: bnet_server.c:96-0 Addresses
 host[ipv4:192.168.1.18:9102]
 bacula-director-fd: bnet.c:667-0 who=client host=192.168.1.17 port=36387
 bacula-director-fd: jcr.c:603-0 OnEntry JobStatus=bacula-director-fd:
 jcr.c:623-0 OnExit JobStatus=C set=C
 bacula-director-fd: find.c:81-0 init_find_files ff=80a2968
 bacula-director-fd: job.c:233-0 dird: Hello Director bacula10-dir calling
 bacula-director-fd: job.c:249-0 Executing Hello command.
 bacula-director-fd: job.c:252-0 Quit command loop. Canceled=0
 bacula-director-fd: pythonlib.c:237-0 No startup module.
 bacula-director-fd: job.c:337-0 Calling term_find_files
 bacula-director-fd: job.c:340-0 Done with term_find_files
 bacula-director-fd: job.c:342-0 Done with free_jcr


Looks like everything is fine, although I am not an expert at reading these.

John



Re: [Bacula-users] Problems doing concurrent jobs, and having lousy performance

2011-09-27 Thread Boudewijn Ector

 Do you have Maximum Concurrent Jobs set in the Director and storage
 sections in bacula-dir.conf?

I just added it to the storage section; it seems to have been removed
somehow.
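
For anyone following along, the directives in question look roughly like
this in bacula-dir.conf (resource names and values are examples, not
recommendations; the matching Device in bacula-sd.conf may need raising too):

```
Director {
  Name = mydir-dir
  ...
  Maximum Concurrent Jobs = 10
}

Storage {
  Name = File
  ...
  Maximum Concurrent Jobs = 10   # must be raised here as well, not just in Director
}
```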
 Can someone please explain to me why bacula still is not able to run
 concurrent Jobs? Do I have to create a storage for each client (for
 instance)? And what's the reason for having to do so?

 Only 1 volume and thus pool can be loaded in a storage device at a
 time so if you have several pools that you want to run backups on you
 need more than 1 storage device. For disk based backups, I highly
 recommend using the bacula virtual autochanger.

 http://sourceforge.net/projects/vchanger/

 This will greatly simplify the setup of multiple pools, devices and
 concurrency. Just send all jobs to the virtual autochanger resource
 and let bacula handle the devices.

Is there any configuration required for doing so? The autochanger seems 
(to me) to be fairly complex.

 Software compression is a very heavy CPU usage process on the FD and
 will certainly slow down your backups.

When having a look at the FD hosts, bacula-fd doesn't really pop up when 
running 'top', nor does the system's load increase much (these machines 
are also quite oversized for their purpose). But that's a later concern; 
I'd be very happy if I could get concurrent jobs to work first.


Boudewijn




[Bacula-users] Bacula disk to tape config

2011-09-27 Thread René Moser
Hi

We are currently using a proprietary backup solution, and we are
evaluating bacula to replace it.

We have some 100 hosts to back up. The current work flow is like this:

1. The backup server backs up host files during working hours to a disk
volume on the backup server.

2. During the night, the disk volume is written to tape, verified, and the
disk volume is purged.

The catalogue afterwards knows that the files have to be restored from
tape and no longer exist on the disk volume.

So during the working day, the tape drive is accessible and we are able to
restore files from the tapes.

How can this be done with bacula?

Thanks for any hints.

Regards 
René











Re: [Bacula-users] Problems doing concurrent jobs, and having lousy performance

2011-09-27 Thread John Drescher
On Tue, Sep 27, 2011 at 10:30 AM, Boudewijn Ector
boudew...@boudewijnector.nl wrote:

 Do you have Maximum Concurrent Jobs set in the Director and storage
 sections in bacula-dir.conf?

 I just added it to the storage section; it seems to have been removed
 somehow.
 Can someone please explain to me why bacula still is not able to run
 concurrent Jobs? Do I have to create a storage for each client (for
 instance)? And what's the reason for having to do so?

 Only 1 volume and thus pool can be loaded in a storage device at a
 time so if you have several pools that you want to run backups on you
 need more than 1 storage device. For disk based backups, I highly
 recommend using the bacula virtual autochanger.

 http://sourceforge.net/projects/vchanger/

 This will greatly simplify the setup of multiple pools, devices and
 concurrency. Just send all jobs to the virtual autochanger resource
 and let bacula handle the devices.

 Is there any configuration required for doing so? the autochanger seems
 (to me) to be fairly complex.

Yes, it has a small configuration file, plus probably 10 to 20
additional configuration lines in your bacula-sd.conf.
For me it took about 30 minutes to read the vchanger documentation and
implement it, then I just let it do its work. This, in my opinion, is much
easier than creating a separate storage device for each pool or
juggling storage devices to obtain concurrency.


 Software compression is a very heavy CPU usage process on the FD and
 will certainly slow down your backups.

 When having a look at the FD hosts, bacula-fd doesn't really pop up when
 running 'top', nor does the system's load increase a lot (these machines
 are also quite over dimensioned for their purpose). But that's of later
 concern, I'd be very very happy if I would be able to get concurrent
 jobs to work first.


Then you need to do some debugging to track down the bottleneck. It can
also be the database or a number of other things. Also, are you
talking about Full backups? I ask because Incrementals
and Differentials usually spend more time looking for files than actually
backing up data, so they tend to have low rates; however, since they
only back up a fraction of the Full backup, they still finish much
faster.

John



Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread Alan Brown
René Moser wrote:
 Hi
 
 We are currently using a proprietary backup solution,

Which proprietary backup system?

 and we evaluation
 bacula to replace it. 
 
 We have some 100 hosts to backup up. The current work flow is like: 

Are these hosts PCs? What do they do? How much changes on them each night?






Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread Gavin McCullagh
On Tue, 27 Sep 2011, René Moser wrote:

 We are currently using a proprietary backup solution, and we evaluation
 bacula to replace it. 
 
 We have some 100 hosts to backup up. The current work flow is like: 
 
 1. backup server backups host files over working time to a disk volume
 on backup server.
 
 2. During night, the disk volume is written to tape, verified, and the
 disk volume will be purged.
 
 The catalogue knows afterwards that the files have to be restored from
 tapes and no longer exists on disk volume.
 
 So during working day, the tape drive is accessible and we are able to
 restore files form the tapes.
 
 How can does be done with bacula? 

It sounds like what you're looking for could be implemented as:

1. A normal Full/Differential/Incremental job to a disk-based storage
   device to happen during the day.
2. A Migrate job to tape from disk to happen during the night.

It is important to note in the above that at present the disk-based storage
device and the tape device must be on the same Storage Daemon for the
Migrate to work.  This should achieve more or less what you're looking for,
I think (though I must say that I haven't used migrate jobs personally).
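
A minimal sketch of such a Migrate setup (resource names here are
hypothetical, and a real Migrate job needs further directives; see the
Migration chapter of the manual):

```
Pool {
  Name = DiskPool
  Storage = FileStorage
  Next Pool = TapePool          # migrated data lands in this pool
  ...
}

Job {
  Name = NightlyMigrate
  Type = Migrate
  Pool = DiskPool               # source pool to migrate from
  Selection Type = Volume       # or Job, PoolTime, ...
  Selection Pattern = ".*"
  Schedule = NightlySchedule
  ...
}
```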

To be honest though, the above work-flow raises a lot of questions for me:

1. You're doing backups of your live servers _during_ working hours.  This
   is the opposite of what most people ordinarily do.  Does this not create
   load on the already busy servers?  Are the backups you get always in a
   consistent state?

2. It is apparently not possible to back up all 100 hosts straight to
   tape, so you write them to disk first.  Why is that?  It sounds like
   you're manually doing Spooling, which Bacula has transparent support
   for.  You could, in Bacula, set up a single job which copies the backup
   data to disk first and then, as that continues, spools it out to tape.
   This would mean the two jobs happening at the same time, so you'd
   have to do it at night -- or else the tape drives would not be available
   during the day for restores.

3. Do you use Incremental backups at present or full backups each day?



Gavin








Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread René Moser
On Tue, 2011-09-27 at 16:41 +0100, Alan Brown wrote: 
  We are currently using a proprietary backup solution,
 
 Which proprietary backup system?

BRU Backup Server

  We have some 100 hosts to backup up. The current work flow is like: 
 
 Are these hosts PCs? What do they do? How much changes on them each night?

Linux Servers (like Mailserver, Virtual Private Servers, Web servers,
FTP Servers, etc.)

We do differential backups. It really depends on the server how many
files change; the range is from a few MB to 2-3 GB of changes.

We do not back up every host every night, just a few per night, each on
one day of the week.




Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread René Moser
On Tue, 2011-09-27 at 16:51 +0100, Gavin McCullagh wrote:

 It sounds like what you're looking for could be implemented as:
 
 1. A normal Full/Differential/Incremental job to a disk-based storage
device to happen during the day.
 2. A Migrate job to tape from disk to happen during the night.

Yep,
http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html
is what I was looking for. 

 To be honest though, the above work-flow raises a lot of questions for me:
 
 1. You're doing backups of your live servers _during_ working hours.  This
is the opposite of what most people ordinarily do.  Does this not create
load on the already busy servers?  Are the backups you get always in a
consistent state?

Okay, it is not _really_ during the day; it is more like backups over
night to disk, and the backup to tape should be finished in the morning.
But this is just a detail. We have not had really (big) problems with
consistency yet. But as you say, these are live systems, and if we hit a
bad timing, that is always a problem.

 
 2. It is apparently not possible to backup all 100 hosts straight to
tape, so you write them to disk first.  Why is that?  It sounds like
you're manually doing Spooling, which Bacula has transparent support
for.  You could, in Bacula, set up a single job which copied the backup
data to disk first and then as that continues, spool them out to tape.
This would need mean the two jobs happening at the same time, so you'd
have to do it at night -- or else the tape drives would not be available
during the day for restores.

This may be a limitation of the current backup software in use. It may
be faster this way (we are using a tape changer).

We also have had trouble with some tape drives, and this way (backing up
first to disk) was an advantage: the backup was already on disk, and we
just had to fix the tape changer and back up the disk volume to tape.

I am really open to doing it the bacula way if there is a better
workflow.

 
 3. Do you use Incremental backups at present or full backups each day?

Actually mostly differentials.






Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread René Moser
On Tue, 2011-09-27 at 16:51 +0100, Gavin McCullagh wrote: 
 2. It is apparently not possible to backup all 100 hosts straight to
tape, so you write them to disk first.  Why is that?  It sounds like
you're manually doing Spooling, which Bacula has transparent support
for.  You could, in Bacula, set up a single job which copied the backup
data to disk first and then as that continues, spool them out to tape.

Do you have an example, how the spooling like you described can be
implemented?




Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread Gavin McCullagh
On Tue, 27 Sep 2011, René Moser wrote:

 On Tue, 2011-09-27 at 16:51 +0100, Gavin McCullagh wrote: 
  2. It is apparently not possible to backup all 100 hosts straight to
 tape, so you write them to disk first.  Why is that?  It sounds like
 you're manually doing Spooling, which Bacula has transparent support
 for.  You could, in Bacula, set up a single job which copied the backup
 data to disk first and then as that continues, spool them out to tape.
 
 Do you have an example, how the spooling like you described can be
 implemented?

We're only starting to use tapes ourselves now, so this is still somewhat
theoretical for me.  However, this is where the documentation is, and I'm
pretty sure the feature is used by a good number of people who use tapes;
it has been in Bacula for a long time:

http://www.bacula.org/5.0.x-manuals/en/main/main/Data_Spooling.html
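
The directives boil down to something like this (a sketch; names, paths,
and sizes are examples):

```
# bacula-dir.conf
Job {
  Name = BackupSomeHost
  ...
  Spool Data = yes              # spool to disk first, then despool to tape
}

# bacula-sd.conf
Device {
  Name = LTO3Drive
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G
}
```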

Gavin




Re: [Bacula-users] Bacula disk to tape config

2011-09-27 Thread Gavin McCullagh
Hi,

On Tue, 27 Sep 2011, René Moser wrote:

 Okay, it is not _really_ during day it is more like backups over night
 to disk and backup to tape should be finished in the morning. But this
 is just a detail. We did not have (yet) really (big) problems about
 consistency. But as you say these systems are live systems and if a we
 hit a bad timing, that is always a problem.

I'm not sure how your timings work out, but spooling might give you a
shorter overall backup window -- so the tapes would free up quicker, which
might allow you to run the entire process outside of work hours.  It's
difficult to tell from here though.

 This may be a limitation of the current backup software in use. It may
 be faster this way (we are using a tape changer). 
 
 We also have had troubles with some tape drives and this way (backup
 first to disk) was an advantage so the backup was already on disk and we
 just had to fix the tape changer and backup the disk volume to tape.

Not having extensively used tapes and spooling, I can't say if this will be
workable here.  You'd need to test.

 I am really open do to it the bacula way if there is a better
 workflow. 

I guess you'll need to do some testing and see what works best for you.  It
sounds somewhat promising anyway.

  3. Do you use Incremental backups at present or full backups each day?
 
 Actually mostly differentials.

Might incrementals be quicker in terms of limiting load on the live server?

Another feature you might consider is the ability to run Virtual Full
backups to consolidate an old full backup and subsequent incremental or
differential backups into a new full backup.  We do this using disk-based
backups, but I'm not sure it will be so practical on tapes.  You need to
have two devices too, one to read from and one to write to which you may
not have.  

Gavin





[Bacula-users] Need Some Help with Backup Strategy

2011-09-27 Thread Palmer, David W.
Hello,

I need some input on my backup strategy. Here is my setup:

1 - LTO3 24-slot autochanger with one drive

1 - 6.5 TB Openfiler SAN

1 - 17 TB Openfiler SAN

30 servers with around 8 TB of data.

I am trying to figure out a backup schedule that will allow all the data to be 
backed up during the available backup windows and also allow for long-term 
retention. Here is my current idea:

Pools:

Monthly Pool
- Retention: 2 months
- Storage: the 17 TB SAN

Daily Pool
- Retention: 1 month
- Storage: the 6.5 TB SAN

Monthly Archive Pool
- Retention: 1 year
- Storage: autochanger

Yearly Archive Pool
- Retention: 7 years
- Storage: autochanger



Schedule:

1st Saturday of the month: monthly full backups are made to the Monthly Pool

Daily incrementals occur every day after the monthly full backup, to the 
Daily Pool

3rd Sunday: the previous monthly backup is copied to the Monthly Archive

Yearly Archive is run at the end of December
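If it helps, the nightly part of that cycle can be sketched as a Bacula
Schedule resource. Pool names follow the list above; the times and the
week-range syntax are assumptions to check against your Bacula version:

```conf
Schedule {
  Name = "MonthlyCycle"
  # Monthly full on the first Saturday, to the monthly disk pool.
  Run = Level=Full Pool=Monthly 1st sat at 20:05
  # Incrementals every other night, to the daily disk pool.
  Run = Level=Incremental Pool=Daily sun-fri at 20:05
  Run = Level=Incremental Pool=Daily 2nd-5th sat at 20:05
}
```

The 3rd-Sunday step to the Monthly Archive would be a separate job of
Type = Copy selecting the previous full, rather than part of this
schedule.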


Does this seem like a good plan? Should I be running weekly fulls, or does the 
incremental disk-based backup (which is located in a different building than 
the server room) suffice?

Thanks,

David




Re: [Bacula-users] fat32 incrementals backup too much

2011-09-27 Thread scar
Thanks.  I'm just running another full backup now, so I'll look into the
timestamps next time I need to do a backup.

Too bad such a crummy filesystem is the most compatible between various
operating systems...






Re: [Bacula-users] fat32 incrementals backup too much

2011-09-27 Thread John Drescher
On Tue, Sep 27, 2011 at 2:37 PM, scar s...@drigon.com wrote:
 Thanks.  I'm just running another full backup now so i'll look into the
 timestamps next time i need to do a backup.

 too bad such a crumby filesystem is the most compatible between various
 operating systems...


With the ntfs-3g driver for Linux and Mac, NTFS can be an option on
Linux, Mac, and Windows.

John



[Bacula-users] SQL queries for documentation

2011-09-27 Thread Joris Heinrich
Hello,

My current task is documenting our Bacula installation.

Are there any useful SQL commands like this:

select a client and show me its jobdefs, schedule, and fileset,

or

select a schedule and show me all associated jobs and clients?
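The catalog can answer the client-to-job-to-fileset part directly, but
note that JobDefs and Schedules live only in bacula-dir.conf, not in the
database, so for those you would use "show job" in bconsole. Below is a
sketch of such a query run against a minimal SQLite mock of the catalog:
the column names follow the standard Bacula catalog schema (Client, Job,
FileSet tables), but the sample rows are invented for illustration.

```python
import sqlite3

# Minimal mock of three Bacula catalog tables (real catalogs are
# MySQL/PostgreSQL; column names match the standard schema).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Client  (ClientId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE FileSet (FileSetId INTEGER PRIMARY KEY, FileSet TEXT);
CREATE TABLE Job     (JobId INTEGER PRIMARY KEY, Name TEXT, Level TEXT,
                      StartTime TEXT, ClientId INTEGER, FileSetId INTEGER);
INSERT INTO Client  VALUES (1, 'web01-fd');
INSERT INTO FileSet VALUES (1, 'Full Set');
INSERT INTO Job VALUES (1, 'BackupWeb01', 'F', '2011-09-03 01:05:00', 1, 1);
INSERT INTO Job VALUES (2, 'BackupWeb01', 'I', '2011-09-26 01:05:00', 1, 1);
""")

# One row per (client, job, fileset, level), with the most recent run.
query = """
SELECT c.Name, j.Name, f.FileSet, j.Level, MAX(j.StartTime)
FROM Job j
JOIN Client  c ON c.ClientId  = j.ClientId
JOIN FileSet f ON f.FileSetId = j.FileSetId
GROUP BY c.Name, j.Name, f.FileSet, j.Level
ORDER BY c.Name, j.Name, j.Level
"""
for row in db.execute(query):
    print(row)
```

On a real catalog the SELECT itself can be pasted into the `query.sql`
file bconsole uses, or run directly in psql/mysql.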

Thanks for your help

Greetings


JHN



