Re: [Bacula-users] dir and sd hang

2009-06-25 Thread Silver Salonen
On Wednesday 24 June 2009 12:58:21 Attila Fülöp wrote:
 Silver Salonen wrote:
  On Tuesday 16 June 2009 15:08:29 Attila Fülöp wrote:
  Silver Salonen wrote:
  Hi.
 
  I use Bacula 3.0.0 on FreeBSD-6.3. The problem I have is that DIR and SD
  tend to hang often, and it seems one causes the other, because they mostly
  do it together. Sometimes it happens between jobs, sometimes before all
  the jobs. The only way to unlock the processes is to kill them with -9
  and start again. After that, backups usually work for a few days and then
  the processes hang again.
 
 
 Silver,
 
 not sure if you have already seen this:
 http://security.freebsd.org/advisories/FreeBSD-EN-09:04.fork.asc
 Maybe this patch will fix your problems with FreeBSD 7.0.
 
 Attila

I'll try upgrading my 6.2 to 7-STABLE some time. I'll post about the results.

-- 
Silver



[Bacula-users] concurrent backups to one volume?

2009-06-25 Thread Andreas Schuldei
hi!

reading the documentation i understand that you should have several
volumes for concurrent backups, on different devices/directories. (i
work on disk for now.)

However some people here on the list seem to be doing well with
concurrent backups to only one volume. is that actually true or am i
misunderstanding something?

if so, how do i do that? i would like a recipe for that. what bacula
version is required for that?

/andreas



[Bacula-users] Slightly off topic: Using the mailslot on a Quantum Superloader 3.

2009-06-25 Thread Michel Meyers

Hello,

Forgive me for the O/T post, but I know several people here are using
Quantum Superloaders and have wondered how the mailslot could be used,
given that it cannot be addressed via mtx. I found this on the web:

http://www.symantec.com/connect/forums/ejecting-tapes-library-after-job-completion-quantum-superloader-3

I removed the 'Windows-ness' and this is what it boils down to:

Refreshing the commands page:

  wget -O /dev/null -q --http-user=guest --http-password=guest "http://192.168.1.10/commands.html"

Moving a tape from slot 4 to the mailslot:

  wget -O /dev/null -q --http-user=guest --http-password=guest "http://192.168.1.10/move.cgi?from=4&to=18"

(wait for the user to take out the tape after doing this)

Moving a tape from the mailslot back to slot 4:

  wget -O /dev/null -q --http-user=guest --http-password=guest "http://192.168.1.10/move.cgi?from=18&to=4"

(The user will have to put the tape in and confirm it on the display.)

This can be used to get the status page (no auth required):

  wget -q "http://192.168.1.10/status.html"

The resulting status.html can be checked for the current autoloader and
drive status (look for "Idle" to make sure they're not busy before doing
any operations).


I plan to use this in combination with 'mtx status' and a few SQL
queries to automatically remove Volstatus = 'Full' tapes and insert
Recycled ones. (Query SQL to find volume names of 'Full' volumes, cross
reference with mtx status to find related slot numbers, eject them to
mailslot (one by one), find 'Recycled' volumes to insert, locate free
slots and move tape from mailslot there, ...)
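
(For anyone wanting to script this: a rough shell sketch of the eject step described above. The host, credentials and mailslot number (18) come from the examples; the function names and polling interval are made up.)

#!/bin/sh
# Sketch: move one tape to the mailslot once the Superloader reports Idle.
LOADER=192.168.1.10
AUTH="--http-user=guest --http-password=guest"

wait_for_idle() {
    # Poll status.html (no auth required) until the loader reports Idle.
    until wget -q -O - "http://$LOADER/status.html" | grep -q Idle; do
        sleep 10
    done
}

eject_slot() {  # usage: eject_slot <slot-number>
    wait_for_idle
    wget -O /dev/null -q $AUTH "http://$LOADER/move.cgi?from=$1&to=18"
}

eject_slot 4    # then wait for the user to take the tape out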

The whole thing isn't set in stone yet and might fall by the wayside.
If I do complete it, I'll share the resulting scripts here and on the
Bacula wiki. (Not sure what language I'll be using yet, as I'm actually
no good at coding.)

Greetings,
   Michel




[Bacula-users] Network send error to SD. ERR=Broken Pipe

2009-06-25 Thread Cesare Montresor

Hi guys, does anyone know how to resolve this issue?

Considerations:
- Started and restarted all components, many times :)
- Added "heartbeat interval = 1" to FD and SD (see the sketch below)
- Tested the connection using telnet; it works...
- Unix permissions at the SD are OK.
- The job always dies near this point:
   ~9 min (8.57, 9.03, 9.03)
   ~11,000 files (11,110, 11,115, 11,113)
   ~880 MB (882, 880, 882)
- dir and fd are v1.38 - Debian 4 stable
- sd is v1.38 on CentOS
- this backup system worked well for 1 year
- the architecture:
   is designed for disaster recovery.
   there are only 2 servers involved: repository and jane.
   both servers have an fd and an sd; they are located in 2 different
   buildings on different subnets.

   jane-fd sends its backup to repository-sd.
   repository-fd sends its backup to jane-sd.
   the director is located on repository.
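
(For reference, the heartbeat setting mentioned above normally lives in the FileDaemon resource of bacula-fd.conf and the Storage resource of bacula-sd.conf. A minimal sketch with illustrative values; other required directives omitted:)

# bacula-fd.conf, on each client
FileDaemon {
  Name = repository-fd
  Heartbeat Interval = 60   # send a keep-alive every 60 seconds; 0 disables
}

# bacula-sd.conf, on each storage daemon
Storage {
  Name = repository-sd
  Heartbeat Interval = 60
}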


All advice is welcome.

Thanks,
Cesare

25-giu 11:28 repository-dir: No prior Full backup Job record found.
25-giu 11:28 repository-dir: No prior or suitable Full backup found. Doing FULL 
backup.
25-giu 11:28 repository-dir: Start Backup JobId 2248, 
Job=repository.base.2009-06-25_11.28.07
25-giu 11:09 storage.jane: Volume repository.base-0005 previously written, 
moving to end of data.
25-giu 11:37 repository-fd: repository.base.2009-06-25_11.28.07 Fatal error: 
backup.c:500 Network send error to SD. ERR=Pipe rotta
25-giu 11:37 repository-dir: repository.base.2009-06-25_11.28.07 Error: Bacula 
1.38.11 (28Jun06): 25-giu-2009 11:37:40
 JobId:  2248
 Job:repository.base.2009-06-25_11.28.07
 Backup Level:   Full (upgraded from Incremental)
 Client: repository i486-pc-linux-gnu,debian,4.0
 FileSet:repository.base 2008-03-21 17:35:23
 Pool:   repository.base
 Storage:storage.jane
 Scheduled time: 25-giu-2009 11:28:05
 Start time: 25-giu-2009 11:28:43
 End time:   25-giu-2009 11:37:40
 Elapsed time:   8 mins 57 secs
 Priority:   10
 FD Files Written:   11,113
 SD Files Written:   0
 FD Bytes Written:   882,597,993 (882.5 MB)
 SD Bytes Written:   0 (0 B)
 Rate:   1643,6 KB/s
 Software Compression:   53,9 %
 Volume name(s): 
 Volume Session Id:  1

 Volume Session Time:1245920820
 Last Volume Bytes:  1 (1 B)
 Non-fatal FD errors:0
 SD Errors:  0
 FD termination status:  Error
 SD termination status:  Error
 Termination:*** Backup Error ***




Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Thursday 25 June 2009 13:42:30 Andreas Schuldei wrote:
 hi!
 
 reading the documentation i understand that you should have several
 volumes for concurrent backups, on different devices/directories. (i
 work on disk for now.)
 
 However some people here on the list seem to be doing well with
 concurrent backups to only one volume. is that actually true or am i
 misunderstanding something?
 
 if so, how do i do that? i would like a recipe for that. what bacula
 version is required for that?
 
 /andreas

Hello.

As one device supports only one job, you have to create separate devices for 
each job you want to be able to run concurrently.

-- 
Silver



Re: [Bacula-users] Network send error to SD. ERR=Broken Pipe

2009-06-25 Thread Stephan Heine - [ Genetic Interactive ]
Hi Cesare,

I ran across this issue in 2005, with very similar results.
At that stage I was working with Nvidia onboard NICs on the Windows FD.
The problem turned out to be driver related.

Even if you are not running Nvidia hardware, investigate and update the
NIC driver.

Kind Regards
Stephan

-Original Message-
From: Cesare Montresor [mailto:c.montre...@netspin.it] 
Sent: 25 June 2009 01:46 PM
To: Bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Network send error to SD. ERR=Broken Pipe

snip




Re: [Bacula-users] Network send error to SD. ERR=Broken Pipe

2009-06-25 Thread Josh Fisher

Cesare Montresor wrote:
 Hi guys, does anyone know how to resolve this issue?

 Considerations:
 - Started and restarted all components, many times :)
 - Added "heartbeat interval = 1" to FD and SD
 - Tested the connection using telnet; it works...
 - Unix permissions at the SD are OK.
 - The job always dies near this point:
    ~9 min (8.57, 9.03, 9.03)
    ~11,000 files (11,110, 11,115, 11,113)
    ~880 MB (882, 880, 882)
 - dir and fd are v1.38 - Debian 4 stable
 - sd is v1.38 on CentOS
 - this backup system worked well for 1 year

If this setup has worked for a year and nothing has been changed 
recently, then suspect a network hardware problem.

 snip



Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread Uwe Schuerkamp
On Thu, Jun 25, 2009 at 12:42:30PM +0200, Andreas Schuldei wrote:
 hi!
 
 reading the documentation i understand that you should have several
 volumes for concurrent backups, on different devices/directories. (i
 work on disk for now.)
 
 However some people here on the list seem to be doing well with
 concurrent backups to only one volume. is that actually true or am i
 misunderstanding something?
 
 if so, how do i do that? i would like a recipe for that. what bacula
 version is required for that?
 

Hallo Andreas,

We're running two concurrent backups to one disk-based volume using
Bacula 2.2.8. The volume grows to a maximum size of 400 GB (so we can
fit two of them onto an LTO-4 for off-site storage), then the oldest
one is recycled and reused.

So far, we haven't had any trouble restoring data with this setup. All
I did in order to enable 2 concurrent jobs was to add the line

Maximum Concurrent Jobs = 2

to the Director definition.
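
(In bacula-dir.conf terms, the setup described above boils down to something like this sketch; the resource names are invented and other required directives are omitted:)

Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 2    # let two jobs run side by side
}

Pool {
  Name = DiskPool
  Pool Type = Backup
  Maximum Volume Bytes = 400G    # cap volumes so two fit on an LTO-4
  Recycle = yes
  Recycle Oldest Volume = yes    # reuse the oldest full volume first
}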

HTH,

Uwe 


-- 
uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax:-69 72
Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr
NIONEX ist ein Unternehmen der DirectGroup Germany www.directgroupgermany.de



Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread terryc
Silver Salonen wrote:

 As one device supports only one job, you have to create separate devices for 
 each job you want to be able to run concurrently.

That isn't how I understand it. I am working on having multiple clients
feeding files into a single tape drive at the same time and expect that
chunks of each job will be interleaved along the tape.

It should be the same for disk files, if that is how you configure it
(say, one file for each night's jobs).



[Bacula-users] Fatal error: Job canceled because max start delay time exceeded

2009-06-25 Thread Reynier Pérez Mira
Hi everyone:
This morning I checked my email and found that Bacula had sent me 139
emails with this error: "Fatal error: Job canceled because max start
delay time exceeded".

What does this mean? How can I fix it?
Regards,
-- 
Ing. Reynier Pérez Mira



Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread Attila Fülöp
Andreas Schuldei wrote:
 hi!
 
 reading the documentation i understand that you should have several
 volumes for concurrent backups, on different devices/directories. (i
 work on disk for now.)
 
 However some people here on the list seem to be doing well with
 concurrent backups to only one volume. is that actually true or am i
 misunderstanding something?

This is true; we use such a setup with tape-based volumes. The point is
that you should use spooling to disk in such a case to avoid interleaving
of the jobs on tape, which is reported to prolong restore times. Since our
clients cannot saturate our LTO drive we would need spooling anyhow, and
we never tried without.

Not sure if spooling would also be needed for disk-based volumes though.

 if so, how do i do that? i would like a recipe for that. 

Please search the list; this was discussed several times. It involves
adding Maximum Concurrent Jobs directives in several places - see the
sketch below.

 what bacula version is required for that?

Any not too ancient Bacula version should do. We are using 2.2.8.
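
(A minimal sketch of the places usually involved; the directive names are real, the resource names are invented and unrelated required directives are omitted:)

# bacula-dir.conf
Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 10
}
Storage {
  Name = LTO-Storage
  Maximum Concurrent Jobs = 10   # jobs allowed to use this storage at once
}
Job {
  Name = client1-job
  Spool Data = yes               # spool to disk, then despool one job at a time
}

# bacula-sd.conf
Storage {
  Name = backup-sd
  Maximum Concurrent Jobs = 20
}
Device {
  Name = LTO-Drive
  Media Type = LTO
  Archive Device = /dev/nst0
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 200G
}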

Attila


 /andreas
 




[Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread John Drescher
On Thu, Jun 25, 2009 at 10:23 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Jun 25, 2009 at 9:34 AM, terryc ter...@woa.com.au wrote:
 Silver Salonen wrote:

 As one device supports only one job, you have to create separate devices for
 each job you want to be able to run concurrently.

 That isn't how I understand it. I am working on having multiple clients
 feeding files into a single tape drive at the same time and expect that
 chunks of each job will be interleaved along the tape.

 It should be the same for disk files, if that is how you configure it
 (say, one file for each night's jobs).


 As long as the pool is the same, more than 1 job can concurrently write
 to the same volume. I have been doing that for years with tape and
 disk. However, if you do want more than 1 pool, then with disks it's best
 to have multiple storage devices.

I should have said: for more than 1 pool to operate concurrently, with
disk it's best to have multiple storage devices.

John M. Drescher



-- 
John M. Drescher



Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread John Drescher
On Thu, Jun 25, 2009 at 9:34 AM, terryc ter...@woa.com.au wrote:
 Silver Salonen wrote:

 As one device supports only one job, you have to create separate devices for
 each job you want to be able to run concurrently.

 That isn't how I understand it. I am working on having multiple clients
 feeding files into a single tape drive at the same time and expect that
 chunks of each job will be interleaved along the tape.

 It should be the same for disk files, if that is how you configure it
 (say, one file for each night's jobs).


As long as the pool is the same, more than 1 job can concurrently write
to the same volume. I have been doing that for years with tape and
disk. However, if you do want more than 1 pool, then with disks it's best
to have multiple storage devices - see the sketch below.
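
(A sketch of the multiple-devices-on-one-disk idea: both Device resources point at the same directory, so each pool can have "its own" device. The names are invented; the directives are the stock File-device ones:)

# bacula-sd.conf
Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /backup/disk
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
Device {
  Name = FileStorage2
  Media Type = File
  Archive Device = /backup/disk   # same directory as FileStorage1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

Each pool that should run concurrently then gets its own Storage entry in bacula-dir.conf pointing at one of these devices.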

John



[Bacula-users] Antwort: Fatal error: Job canceled because max start delay time exceeded

2009-06-25 Thread C . Keschnat
Reynier Pérez Mira rper...@uci.cu wrote on 25.06.2009 15:51:46:

 Hi everyone:
 This morning I checked my email and found that Bacula had sent me 139
 emails with this error: "Fatal error: Job canceled because max start
 delay time exceeded".

 What does this mean? How can I fix it?
 Regards,
 --
 Ing. Reynier Pérez Mira

See Max Start Delay in the Job Resource. I guess one of your other Jobs
took too long or hung.

From the documentation:

Max Start Delay = time
The time specifies the maximum delay between the scheduled time and the
actual start time for the Job. For example, a job can be scheduled to run
at 1:00am, but because other jobs are running, it may wait to run. If the
delay is set to 3600 (one hour) and the job has not begun to run by
2:00am, the job will be canceled. This can be useful, for example, to
prevent jobs from running during daytime hours. The default is 0, which
indicates no limit.
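
(A minimal sketch of where the directive sits; the names are invented and other required Job directives are omitted:)

Job {
  Name = client1-backup
  Max Start Delay = 4 hours   # cancel if still queued 4h after schedule; 0 = no limit
}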


Re: [Bacula-users] Antwort: Fatal error: Job canceled because max start delay time exceeded

2009-06-25 Thread Reynier Pérez Mira
c.kesch...@internet-mit-iq.de wrote:
 See Max Start Delay in the Job Resource. I guess one of your other
 Jobs took too long or hung.
 From the documentation:
 *Max Start Delay = time*
 The time specifies the maximum delay between the scheduled time and the
 actual start time for the Job. For example, a job can be scheduled to
 run at 1:00am, but because other jobs are running, it may wait to run.
 If the delay is set to 3600 (one hour) and the job has not begun to run
 by 2:00am, the job will be canceled. This can be useful, for example, to
 prevent jobs from running during daytime hours. The default is 0, which
 indicates no limit.

Thanks for your reply, but this confuses me a bit. I have, for now, 40
clients and 40 Jobs, one for each client. Every job starts at 2:00 AM. My
Director config is as follows:

Director {
   Name = serverbacula-dir
   Description = Bacula Director Centro de Datos UCI
   DIRport = 9101
   DirAddress = 10.128.50.11
   QueryFile = /etc/bacula/query.sql
   WorkingDirectory = /var/bacula/working
   PidDirectory = /var/run
   Maximum Concurrent Jobs = 20
   Password = some_password
   Messages = Daemon
   FD Connect Timeout = 5 min
   SD Connect Timeout = 5 min
}

So if I understood the Director resource and how Bacula works, this
configuration allows 20 Jobs to run at the same time. Would a good
configuration for the Jobs then be to set Max Start Delay to 2 hours? Give
me a hint here because I'm lost.

Regards,
-- 
Ing. Reynier Pérez Mira



[Bacula-users] Antwort: Re: Antwort: Fatal error: Job canceled because max start delay time exceeded

2009-06-25 Thread C . Keschnat
Reynier Pérez Mira rper...@uci.cu wrote on 25.06.2009 16:41:52:

 snip

Oh, you have 40 clients/jobs - I misread before. Well, there are surely
some jobs that don't finish within the two hours. Even if 15 jobs finish
and 5 don't, there are then still 5 jobs waiting to execute.


[Bacula-users] Antwort: Re: Antwort: Fatal error: Job canceled because max start delay time exceeded

2009-06-25 Thread C . Keschnat
Reynier Pérez Mira rper...@uci.cu wrote on 25.06.2009 16:41:52:

 snip

Do any jobs have a higher/lower priority? If so, the other jobs will wait
until those finish. (You can somehow configure Bacula to run jobs with
different priorities at the same time, IIRC, but I would have to look that
up in the docs.)
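
(The directive being half-remembered here is probably Allow Mixed Priority, available in recent Bacula versions; a minimal sketch, names invented:)

Job {
  Name = important-job
  Priority = 5                 # lower number = higher priority; default is 10
  Allow Mixed Priority = yes   # may start even while jobs of another priority run
}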


Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Thursday 25 June 2009 17:24:37 John Drescher wrote:
 On Thu, Jun 25, 2009 at 10:23 AM, John Drescher dresche...@gmail.com wrote:
  On Thu, Jun 25, 2009 at 9:34 AM, terryc ter...@woa.com.au wrote:
  Silver Salonen wrote:
 
  As one device supports only one job, you have to create separate
  devices for each job you want to be able to run concurrently.
 
  That isn't how I understand it. I am working on having multiple clients
  feeding files into a single tape drive at the same time and expect that
  chunks of each job will be interleaved along the tape.
 
  It should be the same for disk files, if that is how you configure it
  (say, one file for each night's jobs).
 
 
  As long as the pool is the same more than 1 job can concurrently write
  to the same volume. I have been doing that for years with tape and
  disk. However if you do want more than 1 pool then with disks its best
  to have multiple storage devices.
 
 I should have said more than 1 pool to operate concurrently with disk.
 
 John M. Drescher

Yes, in this case we have to ask ourselves what pools are - to my mind,
pools are collections of backup files plus policies about how to overwrite
these files. E.g. if we want to do an ordinary Grandfather-Father-Son
rotation, we create 3 different pools for every job - for full,
differential and incremental backups (see the sketch below). And as we
define them, we have to define separate devices for them too.
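
(A compact sketch of that three-pool layout; the retention times are only illustrative:)

Pool {
  Name = Full-Pool
  Pool Type = Backup
  Volume Retention = 6 months
}
Pool {
  Name = Diff-Pool
  Pool Type = Backup
  Volume Retention = 1 month
}
Pool {
  Name = Inc-Pool
  Pool Type = Backup
  Volume Retention = 1 week
}
Job {
  Name = client1
  Pool = Full-Pool
  Full Backup Pool = Full-Pool           # pool picked per backup level
  Differential Backup Pool = Diff-Pool
  Incremental Backup Pool = Inc-Pool
}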

Using only one pool for the whole backup disk doesn't make sense to me,
because managing these backups would be extremely limited, wouldn't it?

PS. The limit of being able to write only one job to one disk-based device
has been a bizarre limit that just complicates the configuration. I still
don't understand why we have this limit in disk-based backups (the claim
that Bacula "uses disks as tapes" is just as bizarre).

-- 
Silver



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Thursday 25 June 2009 18:10:35 John Drescher wrote:
  PS. The limit of being able to write only one job to one disk-based
  device has been a bizarre limit that just complicates the configuration.
  I still don't understand why we have this limit in disk-based backups
  (the claim that Bacula uses disks as tapes is just as bizarre).
 
 There is no such limit. If you want more than one pool to write
 concurrently have more than 1 storage device. With disks you can have
 as many as you want. They can all point to the same physical storage
 location.

I meant the configuration limit - that I can't configure one device to accept 
multiple jobs concurrently.

-- 
Silver



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Christian Gaul
Silver Salonen schrieb:
 PS. The limit of being able to write only one job to one disk-based device
 has been a bizarre limit that just complicates the configuration. I still
 don't understand why we have this limit in disk-based backups (the claim
 Bacula uses disks as tapes is just as bizarre).
I seem to be writing to disk-based volumes just fine with multiple (5+)
concurrent jobs.

Maybe I am misunderstanding something, but the problem isn't N jobs to 1
(disk-based) volume, it is 1-N jobs to M (>1) volumes at the same time,
which doesn't work.

Also, one pool can only use one volume at one time for one job, and one
job can only use one SD at one time, so that might limit you too.

Concurrent jobs (except for me currently, and only to tape) work quite
nicely. I personally opted to use spooling, in order not to have jobs
interleaved on tape.

But speaking of configuration complications: for me it complicates things
that valid jobs (for comparing with Fulls, Diffs and Incrementals) are
generated out of {Client,Fileset} and there is no way to tell it to use
{Client,Fileset,Storage} or {Client,Fileset,Pool}. For creating offsite
backups I basically keep 2 identical filesets for every class of client,
because I want/need two unrelated backups on 2 different media types
(no, copy jobs won't do because they are 2 different SDs).

Anyway, I am rambling, but multiple concurrent jobs to one disk volume
work fine, as long as your jobs use the same pool and are allowed to use
the same volume. If you try to use different pools (-> different
volumes) for different jobs, then yes, they will wait till the SD can
mount a volume in the second pool.

-- 
Christian Gaul
otop AG
D-55116 Mainz
Rheinstraße 105-107
Fon: 06131.5763.310
Fax: 06131.5763.500
E-Mail: christian.g...@otop.de
Internet: www.otop.de

Vorsitzender des Aufsichtsrats: Christof Glasmacher
Vorstand: Dirk Flug
Registergericht: Amtsgericht Mainz
Handelsregister: HRB 7647 




Re: [Bacula-users] estimate and actual job differ

2009-06-25 Thread Silver Salonen
On Friday 05 June 2009 16:36:16 Silver Salonen wrote:
 On Thursday 04 June 2009 10:53:03 Christian Gaul wrote:
  Silver Salonen schrieb:
   On Thursday 04 June 2009 10:34:36 Christian Gaul wrote:
 
   Silver Salonen schrieb:
    Hi.

    I'm trying to run an incremental job of a restored fileset (having
    mtimeonly=yes). When I check its estimate, it correctly shows only new
    files that have been created/modified since restoration. But when I run
    the actual job, all the files are included in the backup.

    The server is 3.0.0 on FreeBSD, the client is 3.0.1 on Windows XP.

    May it be because all the folders' (but not files') mtime is the date
    of restoration? And I wonder what the latter is caused by?
  
 
 
    Since you modified the fileset to add mtimeonly=yes, did you also add
    Ignore FileSet Changes=yes?

    If not, your next backup will default to a Full because the fileset
    doesn't match the one your last Full was made with.

   Yes, I also have Ignore FileSet Changes=yes, sorry I didn't mention it.
   And if I didn't have it, estimate would show a full too, wouldn't it?
  
 
  I don't know if estimate honors that or just takes what you give it. I
  personally only use estimate when making new filesets (to see if my
  excludes work correctly). And since using estimate with LVM snapshots
  doesn't work anyway, because they are not mounted for an estimate job,
  most of my filesets would show 0 files anyway. Sorry I can't help with
  that.
 
 I suppose estimate does honor the option (Ignore FileSet Changes=yes),
 because the problem is not in the estimate job's files, but rather in the
 actual job's files - the actual job just wants to back up all the files,
 but it should not.

 It seems that the actual job (or the client of the job) doesn't honor
 mtimeonly=yes - it worked OK with my FreeBSD client when I tested a
 similar situation.
 
 -- 
 Silver

I submitted a bug report for that:
http://bugs.bacula.org:80/view.php?id=1318
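
(For context, both directives discussed in this thread live in the FileSet resource; a minimal sketch, with a hypothetical path:)

FileSet {
  Name = restored-set
  Ignore FileSet Changes = yes   # keep backups incremental despite FileSet edits
  Include {
    Options {
      mtimeonly = yes            # judge changed-ness by mtime alone
    }
    File = "C:/Data"
  }
}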

--
Silver



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Thursday 25 June 2009 18:28:28 Christian Gaul wrote:
 Silver Salonen schrieb:
  PS. The limit of being able to write only one job to one disk-based
  device has been a bizarre limit that just complicates the configuration.
  I still don't understand why we have this limit in disk-based backups
  (the claim Bacula uses disks as tapes is just as bizarre).

 snip

 Anyway, I am rambling, but multiple concurrent jobs to one disk volume
 work fine, as long as your jobs use the same pool and are allowed to use
 the same volume. If you try to use different pools (-> different
 volumes) for different jobs, then yes, they will wait till the SD can
 mount a volume in the second pool.

I meant the limitation from the configuration point of view - you cannot
configure a device to accept multiple jobs concurrently. If you want to be
able to actually do it, you have to hack the configuration - to present
one actual device as several devices.

-- 
Silver



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Christian Gaul
Silver Salonen schrieb:
 On Thursday 25 June 2009 18:28:28 Christian Gaul wrote:

 snip

 I meant the limitation from the configuration point of view - you cannot
 configure a device to accept multiple jobs concurrently. If you want to
 be able to actually do it, you have to hack the configuration - to
 present one actual device as several devices.

   
I think I understand what you mean, but you actually can accept multiple
jobs on the same device - just not for different pools. But you are
right: since these are disk volumes, one file-based SD could act like a
virtual tape changer with unlimited slots and drives. I guess you could
probably create a virtual changer with, say, 20 drives, but you are
right, that is kind of hack-ish.

If I find time I might experiment with the current possibilities of
virtual libraries and see if I can work something out that doesn't look
like a hack.

-- 
Christian Gaul
otop AG
D-55116 Mainz
Rheinstraße 105-107
Fon: 06131.5763.310
Fax: 06131.5763.500
E-Mail: christian.g...@otop.de
Internet: www.otop.de

Vorsitzender des Aufsichtsrats: Christof Glasmacher
Vorstand: Dirk Flug
Registergericht: Amtsgericht Mainz
Handelsregister: HRB 7647 




Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread John Drescher
 There is no such limit. If you want more than one pool to write
 concurrently have more than 1 storage device. With disks you can have
 as many as you want. They can all point to the same physical storage
 location.

 I meant the configuration limit - that I can't configure one device to accept
 multiple jobs concurrently.


I do this every single day at home. 5 jobs concurrently write to the
same exact volume.

-- 
John M. Drescher



Re: [Bacula-users] Problems restoring an Exchange plugin backup

2009-06-25 Thread Berend Dekens
Update: I think I found a restore setup that actually restores the
backup. The backup gets cut short when the DB is activated in Exchange
and this also crashes the Bacula FD on the target machine.

The log ends with "Error: HrESERestoreComplete failed with error
0xc7ff1004 - Unknown error.", at which point the FD crashes.

When I try to mount the store (which is in fact 8GB in size, the same as 
it was before the backup - making me think the data should be there), I 
get an error and the event logs in Windows show this:

Information Store (5212) Recovery Storage Group: Attempted to attach
database 'C:\Program Files\Exchsrvr\Recovery Storage Group\Mailbox Store
(AXMAIL).edb' but it is a database restored from a backup set on which
hard recovery was not started or did not complete successfully.

So what is still going wrong here?

The complete transcript is down below (bconsole):
Connecting to Director axnet:9101
1000 OK: axnet-dir Version: 3.0.1 (30 April 2009)
Enter a period to cancel a command.
*restore
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
  1: List last 20 Jobs run
  2: List Jobs where a given File is saved
  3: Enter list of comma separated JobIds to select
  4: Enter SQL list command
  5: Select the most recent backup for a client
  6: Select backup for a client before a specified time
  7: Enter a list of files to restore
  8: Enter a list of files to restore before a specified time
  9: Find the JobIds of the most recent backup for a client
 10: Find the JobIds for a backup for a client before a specified time
 11: Enter a list of directories to restore for found JobIds
 12: Cancel
Select item:  (1-12): 5
Defined Clients:
  ...
  4: axmail-fd
  ...
 10: axemail-fd
Select the Client (1-10): 4
The defined FileSet resources are:
  1: AXMAIL Full Data Set
  2: Exchange
Select FileSet resource (1-2): 2
+-------+-------+----------+---------------+---------------------+-------------------------------+
| JobId | Level | JobFiles | JobBytes      | StartTime           | VolumeName                    |
+-------+-------+----------+---------------+---------------------+-------------------------------+
|    90 | F     |       13 | 5,313,968,371 | 2009-06-24 15:36:10 | Deventer_Exchange_Backup_0013 |
|    90 | F     |       13 | 5,313,968,371 | 2009-06-24 15:36:10 | Deventer_Exchange_Backup_0014 |
|    91 | I     |        5 |     2,671,174 | 2009-06-24 17:28:25 | Deventer_Exchange_Backup_0014 |
|    92 | I     |        5 |       233,882 | 2009-06-24 18:00:01 | Deventer_Exchange_Backup_0014 |
|   118 | I     |       17 |    40,099,025 | 2009-06-25 18:00:02 | Deventer_Exchange_Backup_0014 |
+-------+-------+----------+---------------+---------------------+-------------------------------+
You have selected the following JobIds: 90,91,92,118


Building directory tree for JobId(s) 90,91,92,118 ...
24 files inserted into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added, unless
you used the all keyword on the command line.
Enter done to leave this mode.

cwd is: /
$ mark *
29 files marked.
$ cd @EXCHANGE/Microsoft Information Store/First Storage Group
cwd is: /@EXCHANGE/Microsoft Information Store/First Storage Group/
$ unmark Public*
4 files unmarked.
$ lsmark
*C:\Program Files\Exchsrvr\mdbdata\E0002FC5.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FC6.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FC7.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FC8.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FC9.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCA.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCB.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCC.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCD.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCE.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FCF.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD0.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD1.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD2.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD3.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD4.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD5.log
*C:\Program Files\Exchsrvr\mdbdata\E0002FD6.log
*Mailbox Store (AXMAIL)/
*C:\Program Files\Exchsrvr\mdbdata\priv1.edb
*C:\Program Files\Exchsrvr\mdbdata\priv1.stm
*DatabaseBackupInfo
$ done
Bootstrap records written to /var/bacula/axnet-dir.restore.17.bsr

The job will require the following
   Volume(s)                  Storage(s)    SD Device(s)
========================================================
   Deventer_Exchange_Backup_  File          FileStorage

Re: [Bacula-users] Restoring large directory does not work

2009-06-25 Thread Martin Simmons
 On Wed, 24 Jun 2009 13:59:26 -0700, mehma sarja said:
 
 Thanks for all your help you guys. I am impressed with the level of
 expertise here!
 
  Error accessing memory address 0x7fbff000: Bad address.
   #0  0x0040c043 in add_findex ()
 
  The function add_findex is interesting, but I think like your bacula-dir
  was
 
  Try the following gdb commands (I assume you are running 64-bit FreeBSD):
 
  break *add_findex
  commands
  printf "arguments: %x %x %x\n", $rdi, $rsi, $rdx
  end
  continue
 
  When it stops, enter the continue command again and time how long it takes
  before it stops again.
 
  Do this a few times and post the results (including the arguments:
  output).
 
 
 Yes, it is FreeBSD 64 bit. The continue command comes right back with these
 arguments:
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe00b
 arguments: 1b17068 a0 5fe00b
 arguments: 1b17068 a0 5fe00b
 arguments: 1b17068 a0 5fe00b
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe039
 arguments: 1b17068 a0 5fe039
 arguments: 1b17068 a0 5fe039
 arguments: 1b17068 a0 5fe039
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe055
 arguments: 1b17068 a0 5fe055
 arguments: 1b17068 a0 5fe055
 arguments: 1b17068 a0 5fe055
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe060
 arguments: 1b17068 a0 5fe060
 arguments: 1b17068 a0 5fe060
 arguments: 1b17068 a0 5fe060
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe071
 arguments: 1b17068 a0 5fe071
 arguments: 1b17068 a0 5fe071
 arguments: 1b17068 a0 5fe071
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe079
 arguments: 1b17068 a0 5fe079
 arguments: 1b17068 a0 5fe079
 arguments: 1b17068 a0 5fe079
 (gdb) continue
 Continuing.
 
 Breakpoint 1, 0x0040bfc0 in add_findex ()
 arguments: 1b17068 a0 5fe0ac
 arguments: 1b17068 a0 5fe0ac
 arguments: 1b17068 a0 5fe0ac
 arguments: 1b17068 a0 5fe0ac

OK, this shows why it is slow.  The algorithm in add_findex is only efficient
when called with consecutive index values (the third number printed).

The code for "restore all" in 2.4.4 doesn't do that, so it can take a very
long time to complete. This was fixed in a later version, so I think the
best solution is to upgrade Bacula.

__Martin



[Bacula-users] Overriding Job and/or File Retention Periods?

2009-06-25 Thread teksupptom

Thanks for the suggestions Dirk. I was afraid that it would come down to 
scripting; being a temp Student admin I'm trying to keep things as 
straightforward as possible for the next person who has to deal with it.

I was hoping I had missed something and that the Job resource could also set 
the job retention period.

Thank You :)
Tom






Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread user100
  On 25.06.2009 14:10, Silver Salonen wrote:
 On Thursday 25 June 2009 13:42:30 Andreas Schuldei wrote:
 hi!

 reading the documentation i understand that you should have several
 volumes for concurrent backups, on different devices/directories. (i
 work on disk for now.)

 However some people here on the list seem to be doing well with
 concurrent backups to only one volume. is that actually true or am i
 misunderstanding something?

 if so, how do i do that? i would like a recipe for that. what bacula
 version is required for that?

 /andreas
 Hello.

 As one device supports only one job, you have to create separate devices for
 each job you want to be able to run concurrently.


I'm not sure if you are talking about the same thing, but how should it
be possible to get interleaved volume blocks (where Bacula must sort and
restores take longer) if you never write to the same device (with or
without spooling)? A quote from the current Bacula manual:

The Volume format becomes more complicated with multiple simultaneous 
jobs, consequently, restores may take longer if Bacula must sort through 
interleaved volume blocks from multiple simultaneous jobs. This can be 
avoided by having each simultaneous job write to a different volume or 
by using data spooling, which will first spool the data to disk 
simultaneously, then write one spool file at a time to the volume thus 
avoiding excessive interleaving of the different job blocks.


Prior to 3.x I ran concurrent jobs with just one device (a tape drive in
an autoloader) but huge disk space for spooling, and it worked well.
However, I (and I'm not the only one) have run into trouble with
concurrent jobs on 3.x now (files mismatch). I'm sure I could avoid that
by using different backup devices, but I have just one tape drive built
into the loader, so I would rather not do that.


Greetings,
user100



[Bacula-users] Backup space quotas

2009-06-25 Thread Lee Huffman
Hello,

I'm trying to determine if there's a way to define space quotas on a
per-host basis in Bacula. I figured there might be a way to do it by
limiting the size of volumes and the number of volumes within a pool, and
assigning each host its own pool. I read in an old thread that this goes
against the design principles of Bacula, but I could not find anything
definitive.

Any direct assistance or ideas to explore would be greatly appreciated. 
Thank you!



Re: [Bacula-users] Backup space quotas

2009-06-25 Thread John Drescher
 I'm trying to determine if there's a way to define space quotas on a per
 host basis in Bacula. I figured there might be a way to do it by
 limiting the size of volumes, number of volumes within a pool, and
 assigning each host its own pool.

That sounds fine. I would do exactly that if I had this requirement.
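
(A sketch of such a per-host quota pool - here a hypothetical cap of 10 volumes x 5 GB, i.e. roughly 50 GB for this host:)

Pool {
  Name = host1-pool
  Pool Type = Backup
  LabelFormat = host1-
  Maximum Volumes = 10          # at most 10 volumes in this pool
  Maximum Volume Bytes = 5G     # each volume capped at 5 GB
  Recycle = yes
  AutoPrune = yes
}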

John



Re: [Bacula-users] Problems restoring an Exchange plugin backup

2009-06-25 Thread Berend Dekens
Another update: I found out that the database is in fact in a "Dirty
Shutdown" state (eseutil.exe told me that) - hence it won't mount.

I found a discussion about this here:
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg30912.html

But it seems there was no solution to the problem. Am I right to conclude
that the database is in fact copied while active and is as such in the
dirty state? Shouldn't the plugin handle this?

Regards,
Berend Dekens

Op 06/25/09 19:37, Berend Dekens schreef:
 Update: I think I found a restore setup that actually restores the
 backup. The backup gets cut short when the DB is activated in Exchange
 and this also crashes the Bacula FD on the target machine.

 The log ends with "Error: HrESERestoreComplete failed with error
 0xc7ff1004 - Unknown error.", at which point the FD crashes.

 When I try to mount the store (which is in fact 8GB in size, the same as
 it was before the backup - making me think the data should be there), I
 get an error and the event logs in Windows show this:

 Information Store (5212) Recovery Storage Group: Attempted to attach
 database 'C:\Program Files\Exchsrvr\Recovery Storage Group\Mailbox Store
 (AXMAIL).edb' but it is a database restored from a backup set on which
 hard recovery was not started or did not complete successfully.

 So what is still going wrong here?
snip



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread terryc
Silver Salonen wrote:
  Eg. if we want to do ordinary Grandfather-Father-Son rotation, we 
 create 3 different pools for every job - for full, differential and 
 incremental backups.

That is not how I understand the GFS system, although it is a possibility.
I understand it as Full, plus (incremental OR differential).

So important clients (like the secretary's machine) receive a full backup
each week and a differential (all changed files since the full backup)
nightly, so that in the case of recovery it would just be a process
involving two tapes/disks for a full recovery.

OTOH, I might do an incremental (all changed files since the last backup,
full or diff or inc) on something with a humongous amount of file changes
and non-core/non-critical files, simply to keep the backup window small.
The trade-off is that every tape/disk since the full backup would need to
be processed for a full client recovery.

GFS comes from having multiple complete BACKUPS, i.e. dated versions. 
This makes it a real backup system.


 Using only one pool with the whole backup-disk doesn't make sense to me, 
 because managing these backups would be extremely limited, wouldn't it?

If you want to run your backups like that, then use a RAID array. You
only have a point in the rare occasions where you have small backups and
monstrous drives. The critical point about a real backup system is that
it is not just a file copy, but a secure, protected file copy that cannot
be degraded. Writing a whole series of jobs to one drive that sits in the
system full time is not a proper backup system.




[Bacula-users] Migration Job purging destination volumes!!

2009-06-25 Thread Robert LeBlanc
I've set up a migration job to migrate jobs from one set of tape volumes
to disk volumes. I've configured the destination pool to use each volume
once and to have a retention period of 2 months. For some reason, when the
migration job completes and gets to the next queued migration job, it
marks the destination volume as purged and then overwrites its contents
with the next job. The jobs from the source pool are being marked as
purged, so the data is going straight into the bit bucket!

I'm using version 2.4.4 from Debian, and here are the pool and job
portions of my conf files.

Pool {
  Name = 454FLX
  Pool Type = Backup
  AutoPrune = yes
  Storage = Neo8000-LTO4
  VolumeRetention = 3 years
  Recycle = yes
  Next Pool = DD-454FLX
}

Pool {
  Name = DD-454FLX
  Pool Type = Backup
  LabelFormat = 454FLX-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-454FLX
  Volume Retention = 2 months
  Use Volume Once = yes
}

Job {
  Name = Migrate_454FLX
  Type = Migrate
  Level = Full
  Client = 454datarig-fd
  FileSet = FULL Windows
  Messages = Standard
  Pool = 454FLX
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = .*L4
}

Here is a piece of the output that confirms my bit-bucket suspicion:

25-Jun 17:50 babacula-dir JobId 37438: Start Migration JobId 37438,
Job=Migrate_454FLX.2009-06-25_15.15.54.27
25-Jun 17:50 babacula-dir JobId 37438: There are no more Jobs associated
with Volume 454FLX-0169. Marking it purged.
25-Jun 17:50 babacula-dir JobId 37438: All records pruned from Volume
454FLX-0169; marking it Purged
25-Jun 17:50 babacula-dir JobId 37438: Recycled volume 454FLX-0169
25-Jun 17:50 babacula-dir JobId 37438: Using Device DD-454FLX
25-Jun 17:50 lsbacsd0-sd JobId 37438: Ready to read from volume 02L4
on device Drive-2 (/dev/tape/drive2).
25-Jun 17:50 lsbacsd0-sd JobId 37438: Recycled volume 454FLX-0169 on
device DD-454FLX (/backup/pools/454FLX), all previous data lost.
25-Jun 17:51 babacula-dir JobId 37438: Volume used once. Marking Volume
454FLX-0169 as Used.
25-Jun 17:50 lsbacsd0-sd JobId 37438: Forward spacing Volume 02L4 to
file:block 418:0.

So, two questions: 1. What am I doing wrong? 2. Is there an easy way to
unpurge the jobs on the tape, since they have not been recycled, or do I
have to run bscan on them?

Thanks,

Robert LeBlanc
Life Sciences & Undergraduate Education Computer Support
Brigham Young University


Re: [Bacula-users] Problems restoring an Exchange plugin backup

2009-06-25 Thread James Harper
 
 Another update: I found out that the database is in fact in a "Dirty
 Shutdown" state (eseutil.exe told me that) - hence it won't mount.

 I found a discussion about this here:
 http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg30912.html

 But it seems there was no solution to the problem. Am I right to
 conclude that the database is in fact copied while active and is as
 such in the dirty state? Shouldn't the plugin handle this?

How urgent is it to get this fixed?

As far as I can tell you are doing everything right... I'll look up
those error messages.

James



Re: [Bacula-users] concurrent backups to one volume?

2009-06-25 Thread terryc
Attila Fülöp wrote:
 Andreas Schuldei wrote:
 hi!

 reading the documentation i understand that you should have several
 volumes for concurrent backups, on different devices/directories. (i
 work on disk for now.)

 However some people here on the list seem to be doing well with
 concurrent backups to only one volume. is that actually true or am i
 misunderstanding something?

Yes. The hard part is getting it configured in ALL the needed places -
anything up to four places in a couple of files (DIR & SD) and in the
client as well.

The line is

maximum concurrent jobs = N   where N = the number you want.

Note: there are some default settings in the SD & FD that allow for ten(?)
and only require a DIR change to work.

Caution: do not run any other job at the same time as catalog dumps
(full catalog).
 
 This is true, we use such a setup with tape based volumes. The point is
 that you should use spooling to disk in such a case to avoid interleaving
 of the jobs on tape. 

AFAIUI, if the job is smaller than the spool size, the jobs will not be
interleaved. If a job is larger than the spool size, then its segments
will be interleaved with other running jobs.



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Thursday 25 June 2009 19:41:10 John Drescher wrote:
  There is no such limit. If you want more than one pool to write
  concurrently have more than 1 storage device. With disks you can have
  as many as you want. They can all point to the same physical storage
  location.
 
  I meant the configuration limit - that I can't configure one device to 
accept
  multiple jobs concurrently.
 
 
 I do this every single day at home. 5 jobs concurrently write to the
 same exact volume.

My original claim was made in the context of disk-based backups (i.e.
multiple pools, as I explained in the same message). Using the same exact
volume (or pool) for a disk-based backup system is quite a big limitation,
to my mind.

-- 
Silver



Re: [Bacula-users] Fwd: concurrent backups to one volume?

2009-06-25 Thread Silver Salonen
On Friday 26 June 2009 02:07:58 terryc wrote:
 Silver Salonen wrote:
   Eg. if we want to do ordinary Grandfather-Father-Son rotation, we 
  create 3 different pools for every job - for full, differential and 
  incremental backups.
 
 That is not how I understand GFS system, although it is a possibility. I 
 understand it as Full, plus (incremental OR differential).
 
 So important clients (like secretary's machine) receive a full backup 
 each week and a differential (all changed files since full backup) 
 nightly so that in the need for recovery, it would just be a process 
 involving two tape/disk(?) for a full recovery.
 
 OTOH, I might do an incremental (all changed files since the last backup,
 full or diff or inc) on something with a humongous amount of
 file changes and non-core/non-critical files, simply to keep the backup
 window small. The trade-off is that every tape/disk since the full
 backup would need to be processed for a full client recovery.
 
 GFS comes from having multiple complete BACKUPS, i.e. dated versions. 
 This makes it a real backup system.

OK, yes - you may do it this way too, but the point in this context was
that we need multiple pools. In my case I need one pool for full backups,
one for differentials and one for incrementals. In your case you need 2
pools: one for fulls and one for differentials.

  Using only one pool with the whole backup-disk doesn't make sense to me, 
  because managing these backups would be extremely limited, wouldn't it?
 
 If you want to run your backups like that, then use a RAID array. You
 only have a point in the rare occasions where you have small backups
 and monstrous drives. The critical point about a real backup system is
 that it is not just a file copy, but a secure, protected file copy that
 cannot be degraded. Writing a whole series of jobs to one drive that
 sits in the system full time is not a proper backup system.

Well, I AM using RAID arrays everywhere in my backup systems, and as
they're so much more cost-effective than tapes, we just hope we can detect
any soon-to-fail storage soon enough :)

--
Silver
