[Bacula-users] Run a script after a manual Restoration

2009-08-12 Thread M. Sébastien LELIEVRE
Greetings,

We are now running Bacula-3.0.2 compiled from sources on an Ubuntu 8.04
LTS server distribution.

Our aim is to restore very specific files of a backup on another client
every week.

In order to do this, we use a shell script that specifies all the options,
files and directories we need to mark, and pipes them into bconsole.

Our main problem here is that we cannot run an After Job script.

We have understood from the Bacula documentation that we cannot schedule
a Bacula restoration without restoring all the data from the specified
backup. That is why we turned to a scripting solution.

Can we specify a Client Run After Job option from the bconsole during
a manual restoration?

Can we schedule a restoration job in Bacula that will only restore
specified files?

Here is the script we are currently using: http://pastebin.com/m3c126467
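For illustration, a minimal sketch of that piping approach (the client,
path and file names below are invented placeholders; the real script is at
the pastebin link). The generated command list would normally be piped
straight into bconsole:

```shell
#!/bin/sh
# Build the bconsole command sequence for a scripted, partial restore.
# All names here (clients, paths, files) are placeholders.
build_restore_cmds() {
    cat <<'EOF'
restore client=source-fd restoreclient=target-fd where=/tmp/restore select current
cd /etc/myapp
mark config.xml
done
yes
EOF
}

# In production:  build_restore_cmds | bconsole -n
# For this sketch, just print what would be sent:
build_restore_cmds
```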

Best Regards from France,
-- 
M. Sébastien LELIÈVRE

Systems & Database Engineer

AZ Network
40, rue Ampère
61000 ALENÇON (ORNE)
FRANCE

Tel. : + 33 (0) 233 320 616
Port. : + 33 (0) 673 457 243

Poste : 120
e-mail : sebastien.lelie...@aznetwork.fr


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Anyone written any handy queries (query.sql)???

2009-08-12 Thread Russell Howe
John Lockard wrote, sometime around 11/08/09 19:39:
 I have modified my query.sql to include some queries that
 I use frequently and I thought maybe someone else would
 find them useful additions.  Also, I was wondering if anyone
 had queries which they find useful and would like to share.

Some of mine may only work on PostgreSQL - I dunno..


:List the most recent full backups
SELECT DISTINCT jobname, endtime, volumename FROM (
  SELECT lastfullbackups.JobId, JobName, ScheduledTime, StartTime, EndTime, volumename
  FROM (
    SELECT jobid AS JobId,
           job.name AS JobName,
           job.schedtime AS ScheduledTime,
           job.starttime AS StartTime,
           job.endtime AS EndTime
    FROM (
      SELECT name AS Job, max(endtime) AS EndTime
      FROM job
      INNER JOIN jobinfo ON job.name = JI_jobname
      WHERE type = 'B'
        AND level = 'F'
        AND jobstatus = 'T'
        AND ji_old = false
      GROUP BY name
    ) lastfullbackups
    LEFT OUTER JOIN job
           ON lastfullbackups.Job = job.name
          AND lastfullbackups.EndTime = job.endtime
  ) lastfullbackups
  LEFT OUTER JOIN jobmedia ON lastfullbackups.jobid = jobmedia.jobid
  LEFT OUTER JOIN media ON jobmedia.mediaid = media.mediaid
) foo;


:List last 20 Full Backups for a named job
*Enter Job name:
SELECT DISTINCT Job.JobId, Client.Name AS Client, Job.StartTime, JobFiles, JobBytes,
       VolumeName
  FROM Client
       INNER JOIN Job ON Client.ClientId = Job.ClientId
       INNER JOIN JobMedia ON Job.JobId = JobMedia.JobId
       INNER JOIN Media ON JobMedia.MediaId = Media.MediaId
  WHERE Job.Name = '%1'
    AND Level = 'F' AND JobStatus = 'T'
  ORDER BY Job.StartTime DESC LIMIT 20;


Also, I wanted to know which tapes needed to be offsite when I was 
running a full + differential combination. To do this, I created a table 
listing all the jobs and whether they should be included, as I have 
quite a few old jobs in the catalog which I don't really care about:


CREATE TABLE jobinfo (
ji_primary SERIAL,
ji_old  boolean DEFAULT FALSE,
ji_jobname  varchar(45)
);
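To illustrate (hypothetical statements, assuming the standard Bacula
catalog schema), the table can be seeded from the catalog's job table and
the uninteresting jobs flagged afterwards:

```
-- Seed jobinfo with every distinct job name in the catalog.
INSERT INTO jobinfo (ji_jobname)
SELECT DISTINCT name FROM job;

-- Flag jobs the offsite queries should ignore
-- ('oldhost-%' is an invented naming pattern).
UPDATE jobinfo SET ji_old = TRUE WHERE ji_jobname LIKE 'oldhost-%';
```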


List the last completed full backup for each job, and the tape it is on:

CREATE VIEW lastfullbackuptapes AS
  SELECT DISTINCT foo2.jobname, foo2.endtime, foo2.volumename
  FROM (
    SELECT lastfullbackups.jobid, jobname, scheduledtime, starttime, endtime, volumename
    FROM (
      SELECT jobid, lastfullbackups.job AS jobname,
             job.schedtime AS scheduledtime, job.starttime, job.endtime
      FROM (
        SELECT ji_jobname AS job, max(endtime) AS endtime
        FROM jobinfo
        LEFT JOIN (
          SELECT job.name, job.endtime
          FROM job
          WHERE job.type = 'B'::bpchar AND job.level = 'F' AND job.jobstatus = 'T'
        ) fulljobs ON jobinfo.ji_jobname = fulljobs.name
        WHERE ji_old = false
        GROUP BY ji_jobname
      ) lastfullbackups
      LEFT JOIN job ON lastfullbackups.job = job.name AND lastfullbackups.endtime = job.endtime
    ) lastfullbackups
    LEFT JOIN jobmedia ON lastfullbackups.jobid = jobmedia.jobid
    LEFT JOIN media ON jobmedia.mediaid = media.mediaid
  ) foo2
  ORDER BY foo2.jobname, foo2.endtime, foo2.volumename;


List all completed differential backups since the last full backup for 
each job and the tape they are on (uses the above view):

CREATE VIEW lastdiffbackuptapes AS
  SELECT DISTINCT res.name, res.endtime, media.volumename
  FROM lastfullbackuptapes
  JOIN (
    SELECT job.jobid, job.job, job.name, job.type, job.level,
           job.clientid, job.jobstatus, job.schedtime, job.starttime, job.endtime,
           job.jobtdate, job.volsessionid, job.volsessiontime, job.jobfiles,
           job.jobbytes, job.joberrors, job.jobmissingfiles, job.poolid,
           job.filesetid, job.purgedfiles, job.hasbase
    FROM job
    WHERE job.type = 'B' AND job.jobstatus = 'T' AND job.level = 'D'
  ) res ON lastfullbackuptapes.jobname = res.name
       AND lastfullbackuptapes.endtime < res.starttime
  LEFT JOIN jobmedia ON res.jobid = jobmedia.jobid
  LEFT JOIN media ON jobmedia.mediaid = media.mediaid
  ORDER BY res.name, res.endtime, media.volumename;



List all tapes which should be offsite in order to maintain the most 
up-to-date complete backup (for each job, the most recent full and all 
subsequent differentials):

  SELECT DISTINCT res.volumename
    FROM ( SELECT lastfullbackuptapes.volumename
             FROM lastfullbackuptapes
           UNION
           SELECT lastdiffbackuptapes.volumename
             FROM lastdiffbackuptapes) res
   ORDER BY res.volumename;


-- 
Russell Howe, IT Manager. rh...@bmtmarinerisk.com
BMT Marine & Offshore Surveys Ltd.


[Bacula-users] ERROR in dircmd.c:155 Connection request failed.

2009-08-12 Thread JanJaap Scholing

Hi List,
In the log file of our storage daemon I see a lot of these error messages:

12-Aug 13:32 bacula-sd: ERROR in dircmd.c:155 Connection request failed.
12-Aug 13:32 bacula-sd: ERROR in dircmd.c:155 Connection request failed.

Despite these error messages, the backups seem to work.

What does this error message mean?

Regards,
Jan Jaap


Re: [Bacula-users] Bad Hello command from Director at client

2009-08-12 Thread Marek Simon
What are your director and file daemon versions?
Marek

David Touzeau napsal(a):
 Dear
 
 I have compiled the latest Bacula version with the following configure options 
 
 ./configure --config-cache --prefix=/usr --sysconfdir=/etc/bacula
 --with-scriptdir=/etc/bacula/scripts --sharedstatedir=/var/lib/bacula
 --localstatedir=/var/lib/bacula --with-pid-dir=/var/run/bacula
 --with-smtp-host=localhost --with-working-dir=/var/lib/bacula
 --with-subsys-dir=/var/lock
 --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info
 --enable-smartalloc --with-tcp-wrappers
 --with-libiconv-prefix=/usr/include --with-readline=yes
 --with-libintl-prefix=/usr/include --without-x
 --with-readline=yes --with-mysql --without-postgresql --without-sqlite3
 --enable-bwx-console --without-sqlite 
 
 when I'm connecting through bconsole:
 
 r...@pc-dtouzeau:~# bconsole -d 500
 bconsole: lex.c:186-0 Open config file: /etc/bacula/bconsole.conf
 bconsole: lex.c:186-0 Open config file: /etc/bacula/bconsole.conf
 Connexion au Director PC-DTOUZEAU:9103
 bconsole: bsock.c:221-0 Current host[ipv4:127.0.0.1:9103] All
 host[ipv4:127.0.0.1:9103] host[ipv4:127.0.1.1:65535]
 host[ipv4:127.0.0.1:65535] 
 bconsole: bsock.c:155-0 who=Director daemon host=PC-DTOUZEAU port=9103
 bconsole: cram-md5.c:133-0 cram-get received: authenticate.c:81 Bad
 Hello command from Director at client: Hello *UserAgent* calling
 
 bconsole: cram-md5.c:138-0 Cannot scan challenge: authenticate.c:81 Bad
 Hello command from Director at client: Hello *UserAgent* calling
 
 Director authorization problem.
 Most likely the passwords do not agree.
 If you are using TLS, there may have been a certificate validation error
 during the TLS handshake.
 Please see
 http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00376
  for help.
 
 What's wrong?
 
 I have read the documentation page, but nothing appears wrong in my
 configuration 
 
 console config
 --
 
 Director {
   Name = PC-DTOUZEAU-dir
   DIRport = 9103
   address = PC-DTOUZEAU
   Password = jflcVFlBiDrO0C3lkPQFXwfRSxE9QnkTDoO6uz8dRVbu
 }
 
 
  director config (bacula-dir.conf)
 --
 
 Director {  # define myself
 Name = PC-DTOUZEAU-dir
 DIRport = 9103  # where we listen for UA connections
 QueryFile = /etc/bacula/scripts/query.sql
 WorkingDirectory = /var/lib/bacula
 PidDirectory = /var/run/bacula
 Maximum Concurrent Jobs = 1
 Password = jflcVFlBiDrO0C3lkPQFXwfRSxE9QnkTDoO6uz8dRVbu # Console
 password
 Messages = Daemon
 }
 
 Client {
   Name = PC-DTOUZEAU-fd
   Address = PC-DTOUZEAU
   FDPort = 9102
   Catalog = MyCatalog
   Password = jflcVFlBiDrO0C3lkPQFXwfRSxE9QnkTDoO6uz8dRVbu  #
 password for FileDaemon
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
 }
 
 File directives (bacula-fd.conf)
 --
 Director {
   Name = PC-DTOUZEAU-dir
   Password = jflcVFlBiDrO0C3lkPQFXwfRSxE9QnkTDoO6uz8dRVbu
 }
 
 
 Storage directives (bacula-sd.conf)
 --
 
 Storage { # definition of myself
   Name = PC-DTOUZEAU-sd
   SDPort = 9103
   WorkingDirectory = /var/lib/bacula
   Pid Directory = /var/run/bacula
   Maximum Concurrent Jobs = 20
 }
 
 Director {
   Name = PC-DTOUZEAU-dir
   Password = jflcVFlBiDrO0C3lkPQFXwfRSxE9QnkTDoO6uz8dRVbu 
 }
 
 
 
 
 




[Bacula-users] Timeout (?) problems with some Full backups

2009-08-12 Thread Nick Lock
Hello list!

Sorry to trouble you with what's probably a simple problem, but I'm now
looking at the very real possibility of wiping all our backups clean and
starting from scratch if I can't fix it... :(

I'm having problems with some Full backups, which run for between 1 and
2 hours, appearing to time out after the data transfer from the FD to
the SD. The error message (shown below) shows that the data transfer
completes, often in about 1hr30min, and then Bacula does nothing until
the job has been running for 2 hours at which point it gives an FD
error.

Other Full backups (which don't take as long) run correctly, and for
most of the time Inc and Diff backups also run correctly. However, a
small % of backups will fail at random, also with FD errors but at
random times-elapsed during the job... this I have been ascribing to
network fluctuations! The difference is that re-running these random
failures will succeed, whilst this particular Full failure doesn't! ;)

I've already tried setting a heartbeat interval of 20 minutes in the
FD/SD and DIR conf files (thinking that the FD - Dir connection was
timing out) but this doesn't change anything.

In the time between the data transfer finishing and the timeout,
Postgres has an open connection with a COPY batch FROM STDIN
transaction in progress, which at the timeout produces errors in the
Postgres log that I have also shown below.

I'm happy to post portions of the conf files if needed, but they're huge
and might well lead to tl;dr!

Any suggestions as to how I can troubleshoot this further would be most
appreciated!

Nick Lock.


-
12-Aug 14:18 exa-bacula-dir JobId 5514: Start Backup JobId 5514,
Job=backup_scavenger.2009-08-12_14.18.06.04
12-Aug 14:18 exa-bacula-dir JobId 5514: There are no more Jobs
associated with Volume scavenger-full-1250. Marking it purged.
12-Aug 14:18 exa-bacula-dir JobId 5514: All records pruned from Volume
scavenger-full-1250; marking it Purged
12-Aug 14:18 exa-bacula-dir JobId 5514: Recycled volume
scavenger-full-1250
12-Aug 14:18 exa-bacula-dir JobId 5514: Using Device
FileStorageScavenger
12-Aug 14:18 exa-bacula-sd JobId 5514: Recycled volume
scavenger-full-1250 on device
FileStorageScavenger (/srv/bacula/volume/web-scavenger), all previous
data lost.
12-Aug 14:18 exa-bacula-dir JobId 5514: Max Volume jobs exceeded.
Marking Volume scavenger-full-1250 as Used.
12-Aug 15:49 exa-bacula-sd JobId 5514: Job write elapsed time =
01:31:41, Transfer rate = 401.4 K bytes/second
12-Aug 16:18 exa-bacula-dir JobId 5514: Fatal error: Network error with
FD during Backup: ERR=Connection reset by peer
12-Aug 16:18 exa-bacula-dir JobId 5514: Fatal error: No Job status
returned from FD.
12-Aug 16:18 exa-bacula-dir JobId 5514: Error: Bacula exa-bacula-dir
2.4.4 (28Dec08): 12-Aug-2009 16:18:09
  Build OS:   x86_64-pc-linux-gnu debian lenny/sid
  JobId:  5514
  Job:backup_scavenger.2009-08-12_14.18.06.04
  Backup Level:   Full
  Client: scavenger 2.4.4 (28Dec08)
i486-pc-linux-gnu,debian,5.0
  FileSet:full-scavenger 2009-04-16 15:58:05
  Pool:   scavenger-full (From Job FullPool override)
  Storage:FileScavenger (From Job resource)
  Scheduled time: 12-Aug-2009 14:18:03
  Start time: 12-Aug-2009 14:18:09
  End time:   12-Aug-2009 16:18:09
  Elapsed time:   2 hours 
  Priority:   10
  FD Files Written:   0
  SD Files Written:   81,883
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   2,208,578,175 (2.208 GB)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Storage Encryption: no
  Volume name(s): scavenger-full-1250
  Volume Session Id:  5
  Volume Session Time:1250080970
  Last Volume Bytes:  2,212,857,316 (2.212 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  OK
  Termination:*** Backup Error ***

-
Postgres Log:

2009-08-12 16:18:09 BST ERROR:  unexpected message type 0x58 during COPY
from stdin
2009-08-12 16:18:09 BST CONTEXT:  COPY batch, line 81884: 
2009-08-12 16:18:09 BST STATEMENT:  COPY batch FROM STDIN
2009-08-12 16:18:09 BST LOG:  could not send data to client: Broken pipe
2009-08-12 16:18:09 BST LOG:  could not receive data from client:
Connection reset by peer
2009-08-12 16:18:09 BST LOG:  unexpected EOF on client connection




Re: [Bacula-users] Timeout (?) problems with some Full backups

2009-08-12 Thread John Drescher
 Sorry to trouble you with what's probably a simple problem, but I'm now
 looking at the very real possibility of wiping all our backups clean and
 starting from scratch if I can't fix it... :(

It's highly doubtful that this method will help.

 I'm having problems with some Full backups, which run for between 1 and
 2 hours, appearing to time out after the data transfer from the FD to
 the SD. The error message (shown below) shows that the data transfer
 completes, often in about 1hr30min, and then Bacula does nothing until
 the job has been running for 2 hours at which point it gives an FD
 error.

 Other Full backups (which don't take as long) run correctly, and for
 most of the time Inc and Diff backups also run correctly. However, a
 small % of backups will fail at random, also with FD errors but at
 random times-elapsed during the job... this I have been ascribing to
 network fluctuations! The difference is that re-running these random
 failures will succeed, whilst this particular Full failure doesn't! ;)

 I've already tried setting a heartbeat interval of 20 minutes in the
 FD/SD and DIR conf files (thinking that the FD - Dir connection was
 timing out) but this doesn't change anything.

 In the time between the data transfer finishing and the timeout,
 Postgres has an open connection with a COPY batch FROM STDIN
 transaction in progress, which at the timeout produces errors in the
 Postgres log that I have also shown below.

 I'm happy to post portions of the conf files if needed, but they're huge
 and might well lead to tl;dr!

 Any suggestions as to how I can troubleshoot this further would be most
 appreciated!


I would make the heartbeat interval much shorter.
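For reference, a sketch of what a shorter heartbeat looks like (resource
names below are placeholders; Heartbeat Interval takes seconds and is set
in each daemon's configuration):

```
# bacula-fd.conf
FileDaemon {
  Name = client-fd
  Heartbeat Interval = 60   # send a keepalive every 60 seconds
}

# bacula-sd.conf
Storage {
  Name = server-sd
  Heartbeat Interval = 60
}

# bacula-dir.conf
Director {
  Name = server-dir
  Heartbeat Interval = 60
}
```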

Also, I am interested in why the backup rate is this slow. Slow network?

John



Re: [Bacula-users] Run a script after a manual Restoration

2009-08-12 Thread Arno Lehmann
Hello,

12.08.2009 11:31, M. Sébastien LELIEVRE wrote:
 Greetings,
 
 We are now running Bacula-3.0.2 compiled from sources on an Ubuntu 8.04
 LTS server distribution.
 
 Our aim is to restore very specific files of a backup on another client
 every week.
 
 In order to do this, we use a shell script that specifies every options,
 files and directories we need to mark and pipe them in the bconsole.
 
 Our main problem here is that we cannot run an After Job script.

I'll trust you on this - I never tried it :-)

 We have understood from the Bacula Documentation that we cannot schedule
 a Bacula Restoration without restoring all the data from the Backup
 specified. That is why we turn ourselves to the scripting solution.
 
 Can we specify a Client Run After Job option from the bconsole during
 a manual restoration?

No, but...

 Can we schedule a restoration job in Bacula that will only restore
 specified files?
 
 Here is the script we are currently using : http://pastebin.com/m3c126467

... you could either use an Admin job with two RunScripts - one to 
initiate the actual restore (which could simply be the script you 
already have) and the other the script you want to add - or just 
append your new after-job actions to the script you use now.
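For concreteness, a sketch of the Admin-job approach in bacula-dir.conf
(all resource names, the schedule, and the script paths here are
hypothetical; the two commands stand in for the existing bconsole-piping
script and the new post-processing script):

```
# Sketch only - names, schedule and paths are invented.
Job {
  Name = "ScriptedRestore"
  Type = Admin
  JobDefs = "DefaultJob"      # Admin jobs still need Client/FileSet/etc. defined
  Schedule = "WeeklyRestoreCycle"
  RunScript {
    RunsWhen = Before
    RunsOnClient = No         # run on the Director host
    Command = "/etc/bacula/scripts/do-restore.sh"    # pipes commands into bconsole
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Command = "/etc/bacula/scripts/post-restore.sh"  # the desired after-job step
  }
}
```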

Cheers,

Arno

 Best Regards from France,

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



[Bacula-users] WXP SP3 fd restore restores wrong file attributes, Ubuntu dir sd

2009-08-12 Thread takis peppas
Hello everybody,

I've compiled Bacula on my Ubuntu server and installed the fd from the
binaries on my WXP SP3 laptop.

I take a backup of
c:\Documents and Settings\takis
and restore to
c:\tmp\bacula
where I've created the tree up to c:\tmp\bacula\c manually.

The restore reports an error, so I checked the original and restored
files with attrib, and this is what I get (o lines are the original,
r lines the restored):
oA  C:\Documents and Settings\takis\My
Documents\lifebookA26391-K225-Z120-en.pdf
rA  C:\tmp\bacula\c\Documents and Settings\takis\My
Documents\lifebookA26391-K225-Z120-en.pdf
o R C:\Documents and Settings\takis\My Documents\My Music
r   S   C:\tmp\bacula\c\Documents and Settings\takis\My Documents\My Music
oA  C:\Documents and Settings\takis\UserData\index.dat
rA  C:\tmp\bacula\c\Documents and Settings\takis\UserData\index.dat
oA  C:\Documents and Settings\takis\.bash_history
rA  C:\tmp\bacula\c\Documents and Settings\takis\.bash_history
oHR C:\Documents and Settings\takis\Application Data
r   SH  C:\tmp\bacula\c\Documents and Settings\takis\Application Data
o   SH  C:\Documents and Settings\takis\Cookies
rHR C:\tmp\bacula\c\Documents and Settings\takis\Cookies
o   C:\Documents and Settings\takis\Desktop
r   S R C:\tmp\bacula\c\Documents and Settings\takis\Desktop
o R C:\Documents and Settings\takis\Favorites
r   S   C:\tmp\bacula\c\Documents and Settings\takis\Favorites
o   SH  C:\Documents and Settings\takis\IECompatCache
rHR C:\tmp\bacula\c\Documents and Settings\takis\IECompatCache
o   SH  C:\Documents and Settings\takis\IETldCache
rHR C:\tmp\bacula\c\Documents and Settings\takis\IETldCache
o   C:\Documents and Settings\takis\InstalledSW
r   S R C:\tmp\bacula\c\Documents and Settings\takis\InstalledSW
oH  C:\Documents and Settings\takis\Local Settings
r   SHR C:\tmp\bacula\c\Documents and Settings\takis\Local Settings
o R C:\Documents and Settings\takis\My Documents
r   S   C:\tmp\bacula\c\Documents and Settings\takis\My Documents
oH  C:\Documents and Settings\takis\NetHood
r   SHR C:\tmp\bacula\c\Documents and Settings\takis\NetHood
oA   H  C:\Documents and Settings\takis\NTUSER.DAT
rA   H  C:\tmp\bacula\c\Documents and Settings\takis\NTUSER.DAT
oA   H  C:\Documents and Settings\takis\ntuser.dat.LOG
rA   H  C:\tmp\bacula\c\Documents and Settings\takis\ntuser.dat.LOG
o   SH  C:\Documents and Settings\takis\ntuser.ini
rA  SH  C:\tmp\bacula\c\Documents and Settings\takis\ntuser.ini
oH  C:\Documents and Settings\takis\PrintHood
r   SHR C:\tmp\bacula\c\Documents and Settings\takis\PrintHood
o   SH  C:\Documents and Settings\takis\PrivacIE
rHR C:\tmp\bacula\c\Documents and Settings\takis\PrivacIE
oHR C:\Documents and Settings\takis\Recent
r   SH  C:\tmp\bacula\c\Documents and Settings\takis\Recent
oHR C:\Documents and Settings\takis\SendTo
r   SH  C:\tmp\bacula\c\Documents and Settings\takis\SendTo
o R C:\Documents and Settings\takis\Start Menu
r   S   C:\tmp\bacula\c\Documents and Settings\takis\Start Menu
oH  C:\Documents and Settings\takis\Templates
r   SHR C:\tmp\bacula\c\Documents and Settings\takis\Templates
o   SH  C:\Documents and Settings\takis\UserData
rHR C:\tmp\bacula\c\Documents and Settings\takis\UserData
oA  C:\Documents and Settings\takis\_viminfo
rA  C:\tmp\bacula\c\Documents and Settings\takis\_viminfo
o   C:\Documents and Settings\takis
r   S R C:\tmp\bacula\c\Documents and Settings\takis
o   C:\Documents and Settings
r   C:\tmp\bacula\c\Documents and Settings
o   C:\tmp\bacula\c
o   C:\tmp\bacula
o   C:\tmp
o   SH  C:\partition\System Volume Information
r   SH  C:\tmp\bacula\c\partition\System Volume Information
o   C:\partition
r   S R C:\tmp\bacula\c\partition



Restore report from bacula:
From: (Bacula) r...@localhost
To: r...@localhost
Date: Tue, 11 Aug 2009 19:04:36 +0300 (EEST)
Subject: Bacula: Restore Error of lifebook-fd Full
11-Aug 19:03 covey-dir JobId 10: Start Restore Job
LBRestoreFiles.2009-08-11_19.03.38_16
11-Aug 19:03 covey-dir JobId 10: Using Device FileStorage
11-Aug 19:03 covey-sd JobId 10: Ready to read from volume LB00 on
device FileStorage (/tmp).
11-Aug 19:03 covey-sd JobId 10: Forward spacing Volume LB00 to
file:block 0:185.
11-Aug 19:04 lifebook-fd JobId 10: Error:
/home/kern/bacula/k/src/filed/restore.c:1000 Write error on
c:/tmp/bacula/c/Documents and Settings/takis/Cookies/: Access is
denied.

11-Aug 19:04 covey-dir JobId 10: Error: Bacula covey-dir 3.0.2
(18Jul09): 11-Aug-2009 19:04:36
 Build OS:   i686-pc-linux-gnu ubuntu 9.04
 JobId:  10
 Job:LBRestoreFiles.2009-08-11_19.03.38_16
 Restore Client: lifebook-fd
 Start 

[Bacula-users] Seeking advice re: offline archives

2009-08-12 Thread Ian Levesque
Hi -

I'm curious about using Bacula in a slightly different way than it was  
apparently intended. We have large directories of data that need to  
periodically be moved offline to tape. I've got a Scalar 100 with two  
LTO2 drives. I wouldn't be doing incremental/differential backups, or  
scheduled backups in any way. At first, I was using a shell script  
that wrapped around tar. Since my archives often need to span tapes  
and sometimes many tapes, it was becoming very difficult to manage. I  
think a solution like Bacula is *mostly* right for my needs, but my  
question is whether anyone on list has an installation similar to mine  
and can offer some management advice. How do you configure your jobs  
and schedules when it's just a bunch of one-off archives?

Best,
Ian



Re: [Bacula-users] Timeout (?) problems with some Full backups

2009-08-12 Thread John Lockard
While the job is running, keep an eye on the system which houses
your MySQL database and make sure that it isn't filling up a
partition with temp data.  I was running into a similar problem
and needed to move my mysql_tmpdir (definable in /etc/my.cnf)
to another location.
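A minimal sketch of that change, assuming a roomier filesystem mounted at
/data (the path is an example only):

```
# /etc/my.cnf
[mysqld]
tmpdir = /data/mysql-tmp   # must exist and be writable by the mysqld user
```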

-John

On Wed, Aug 12, 2009 at 05:00:30PM +0100, Nick Lock wrote:
 Hello list!
 
 Sorry to trouble you with what's probably a simple problem, but I'm now
 looking at the very real possibility of wiping all our backups clean and
 starting from scratch if I can't fix it... :(
 
 I'm having problems with some Full backups, which run for between 1 and
 2 hours, appearing to time out after the data transfer from the FD to
 the SD. The error message (shown below) shows that the data transfer
 completes, often in about 1hr30min, and then Bacula does nothing until
 the job has been running for 2 hours at which point it gives an FD
 error.
 
 Other Full backups (which don't take as long) run correctly, and for
 most of the time Inc and Diff backups also run correctly. However, a
 small % of backups will fail at random, also with FD errors but at
 random times-elapsed during the job... this I have been ascribing to
 network fluctuations! The difference is that re-running these random
 failures will succeed, whilst this particular Full failure doesn't! ;)
 
 I've already tried setting a heartbeat interval of 20 minutes in the
 FD/SD and DIR conf files (thinking that the FD - Dir connection was
 timing out) but this doesn't change anything.
 
 In the time between the data transfer finishing and the timeout,
 Postgres has an open connection with a COPY batch FROM STDIN
 transaction in progress, which at the timeout produces errors in the
 Postgres log that I have also shown below.
 
 I'm happy to post portions of the conf files if needed, but they're huge
 and might well lead to tl;dr!
 
 Any suggestions as to how I can troubleshoot this further would be most
 appreciated!
 
 Nick Lock.
 
 

[Bacula-users] bacula 1.38 file daemon compatibility with latest 3.0 Bacula Server Solution

2009-08-12 Thread Michael Halfhill
Has anyone confirmed whether there are any complications or issues with
remote Bacula 1.38 file daemon clients working with a Bacula 3.0 server
solution?


Michael G. Halfhill
East Kentucky Network, LLC.
Appalachian Wireless
Information Technology
mhalfh...@ekn.com
(606) 477-2355 ext 144
(606) 791-9421 cell








[Bacula-users] Advice needed on Linux backup strategy to LTO-4 tape

2009-08-12 Thread rorycl

I'm going to cross-post this text on the Amanda and Bacula lists.
Apologies in advance if you see this twice.

Our company is about to provide centralised backups for several pools of
backup data of between 1 and 15TB in size. Each pool changes daily but
backups to tape will only occur once a month for each pool.

The backup tape format is to be LTO4 and we have a second-hand Dell
PowerVault 124T 16-tape autoloader to work with currently. Backup from a
pool may be taken off a Linux LVM (or hopefully soon a BTRFS) snapshot
ensuring that the source data does not change during the backup process.
We have the possibility of pre-preparing backup or compressed images if
this is advisable.

An important aspect of the system is that the tapes should be readable
for 12 years, by other parties if necessary. From this point of view we
like the idea of providing a CD with each tape set of the software
needed to extract the contents, together with a listing of the enclosed
files in a UTF8 text file. We will be required to audit each backup set
by successfully extracting files from tape.

We are very familiar with working on the command-line in Linux,
Postgresql and Python.

As we have not run backup to tape on Linux before I would be very
grateful to receive advice on what approach members of this list would
take to meeting the above requirements.

Many thanks,
Rory

+--
|This was sent by r...@campbell-lange.net via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[Bacula-users] Fateh Gouasmi/FR/BULL is out of the office.

2009-08-12 Thread Fateh . Gouasmi
I will be out of the office from 13/08/2009 to 28/08/2009.

I will reply to your message as soon as I return.











Re: [Bacula-users] Advice needed on Linux backup strategy to LTO-4 tape

2009-08-12 Thread Arno Lehmann
Hello,

13.08.2009 00:19, rorycl wrote:
> I'm going to cross-post this text on the Amanda and Bacula lists.
> Apologies in advance if you see this twice.

Great, let's start a cross-list flame war ;-)

Welcome here anyway ;-)

> Our company is about to provide centralised backups for several pools of
> backup data of between 1 and 15TB in size. Each pool changes daily but
> backups to tape will only occur once a month for each pool.

Meaning that the monthly backups can get rather big, even if you don't 
do full backups each time... or are you planning to do only fulls? 
Then I hope you're not forced to use a slow and unstable WAN link...

> The backup tape format is to be LTO-4 and we currently have a second-hand
> Dell PowerVault 124T 16-tape autoloader to work with. Backup from a
> pool may be taken off a Linux LVM (or hopefully soon a BTRFS) snapshot
> ensuring that the source data does not change during the backup process.

A good start.

> We have the possibility of pre-preparing backup or compressed images if
> this is advisable.

That's not needed with Bacula - creating and destroying the LVM 
snapshots can be integrated into the jobs (as well as ignoring the 
directory you mount the snapshot to, ensuring only the original path 
names are stored).
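A sketch of such a setup (resource names, script paths, and the snapshot mount point are assumptions, not from the thread — the scripts would be small wrappers around `lvcreate --snapshot` / `lvremove` plus mount/umount):

```
Job {
  Name = "MonthlyPoolBackup"
  Type = Backup
  Client = pool-client-fd
  FileSet = "PoolSnapshotFS"
  # (Storage, Pool, Schedule etc. omitted)
  # hypothetical helper scripts, run on the client around the job
  ClientRunBeforeJob = "/usr/local/sbin/snap-create.sh"
  ClientRunAfterJob  = "/usr/local/sbin/snap-remove.sh"
}

FileSet {
  Name = "PoolSnapshotFS"
  Include {
    Options { signature = SHA1 }
    # back up the mounted snapshot, not the live filesystem
    File = /mnt/pool-snap
  }
}
```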

> An important aspect of the system is that the tapes should be readable
> for 12 years,

So I hope you've got the proper storage conditions.

> by other parties if necessary.

No Problem.

> From this point of view we
> like the idea of providing a CD with each tape set of the software
> needed to extract the contents,

Simple with Bacula - you just need the bls and bextract programs. I 
would advise distributing those as source files - in 12 years, you 
will have difficulty finding a system where today's executables will 
be of any use.

Also consider that finding the needed hardware in 12 years might be 
difficult, so you'll have to set up a routine to verify that the 
hardware and software needed to access the data remain available.

Bacula's tape format is, obviously, open source, so it's not a problem 
per se to ensure you can read it, but just to feel comfortable I would 
create a rescue medium each year which runs Bacula (or at least bls / 
bextract) on the then-current hardware, and has the needed drivers to 
access whatever drives are then available to read your tapes.

> together with a listing of the enclosed
> files in a UTF8 text file.

Creating that is just a simple SQL query, which actually already is 
distributed with Bacula - have a look at the query command in bconsole:

 *que
 Automatically selected Catalog: BaculaCat
 Using Catalog BaculaCat
 Available queries:
...
 12: List Files for a selected JobId
...
 Choose a query (1-16): 12
 Enter JobId: 23456
 ++-+
 | Path   | Name|
 ++-+
 | /dev/  | |
 | /proc/ | |
 | /root/ | .asterisk_history   |
 | /root/ | .bash_history   |
 | /sys/  | |
 | /tmp/  | |

You can create that list automatically by a script which is run after 
each job.
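A minimal sketch of such a post-job script (script name, output paths, and the hook wiring are assumptions, not from the thread; it uses the plain `list files jobid=` console command rather than the query menu):

```shell
#!/bin/sh
# Hypothetical post-job helper -- wire it into the Job resource with
# something like:  RunAfterJob = "/etc/bacula/job-listing.sh %i"
# (%i expands to the JobId). It writes the job's file listing to a
# UTF-8 text file that can later be burned onto the add-on CD.

listing_cmds() {
  # Console commands that print the Path/Name table for one job
  printf 'list files jobid=%s\nquit\n' "$1"
}

jobid="${1:-0}"
outdir="${LISTING_DIR:-/var/lib/bacula/listings}"

# Only attempt the console call where bconsole is actually installed
if command -v bconsole >/dev/null 2>&1; then
  mkdir -p "$outdir"
  listing_cmds "$jobid" | bconsole -c /etc/bacula/bconsole.conf \
    > "$outdir/job-$jobid-files.txt"
fi
```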

> We will be required to audit each backup set
> by successfully extracting files from tape.

No problems expected.

> We are very familiar with working on the command-line in Linux,
> Postgresql and Python.

I don't think you will need much Python for your tasks :-)

Creating a mini Linux distribution to read your tapes (and ensuring 
it stays useful) wouldn't be too much of a problem, either.

The biggest part of your work, when using Bacula, will be to set up 
and maintain the proper procedures and auditing, which is mostly 
writing text :-)

> As we have not run backup to tape on Linux before I would be very
> grateful to receive advice on what approach members of this list would
> take to meeting the above requirements.

Actually, I would start with a rather plain Bacula setup. Just make 
sure your tapes are never automatically reused, and ensure the files 
are pruned from the jobs in time, so your catalog doesn't grow 
excessively. Set up, test, and automate a build environment for your 
add-on media which has the tools and file list included.
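One way the never-reuse and pruning behaviour might look in the Director config (all resource names and values are illustrative, not from the thread):

```
Pool {
  Name = Archive
  Pool Type = Backup
  Recycle = no                 # never reuse these tapes automatically
  AutoPrune = no               # keep volume records for the full window
  Volume Retention = 12 years
}

Client {
  Name = pool-client-fd
  Address = pool-client.example.com
  Password = "secret"
  Catalog = BaculaCat
  AutoPrune = yes
  File Retention = 2 months    # prune per-file records so the catalog stays small
  Job Retention = 12 years     # keep job records for the audit window
}
```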

Finally, glue it all together to make the add-on media creation fully 
automatic... and then starts the real fun - testing, maintaining, 
auditing.

Actually, I don't think that what you need is much of a challenge for 
Bacula and an experienced sysadmin :-)

Cheers,

Arno

> Many thanks,
> Rory

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de


[Bacula-users] disabling 'accurate' for a restore

2009-08-12 Thread James Harper
I do a full backup once a week, and then incremental backups for the
rest of the week.

MSSQL is configured to back up transaction logs every day and keep the
backup for 3 days (by which time it would have been backed up). It's
Thursday today and I want to restore the most recent full backup
(Saturday night) and all the subsequent transaction logs, but of course
because accurate backup is on I'm going to have to do multiple restores
at various points - restore as at Monday night to get Saturday + Sunday
logfiles, then restore at Wednesday night to get Monday + Tuesday
logfiles, etc. Restoring as at last night's backup will only give me the
logfiles that existed at the time.

Is there a way to tell Bacula that you want to restore the backups as if
accurate was not enabled at the time?

Thanks

James



[Bacula-users] bacula 3.0.2 fileset file-list on the fly ?

2009-08-12 Thread Clark Hartness
I am attempting to send a list of files to Bacula from the Client using 
the syntax described in the manual:

Any file-list item preceded by a less-than sign (<) will be taken to be 
a file. This file will be read on the Director's machine at the time the 
Job starts, and the data will be assumed to be a list of directories or 
files, one per line, to be included. The names should start in column 1 
and should not be quoted even if they contain spaces. This feature 
allows you to modify the external file and change what will be saved 
without stopping and restarting Bacula as would be necessary if using 
the @ modifier noted above. For example:

Include {
  Options { signature = SHA1 }
  File = </home/files/local-filelist
}

If you precede the less-than sign (<) with a backslash as in \<, the 
file-list will be read on the Client machine instead of on the 
Director's machine. Please note that if the filename is given within 
quotes, you will need to use two slashes.

Include {
  Options { signature = SHA1 }
  File = \\</home/xxx/filelist-on-client
}



If I use the \\< syntax I cannot estimate or run the job to create the backup.

I get errors like:

12-Aug 19:29 ws27-fd JobId 297:  Could not stat File = 
/mnt/fog006/ddrswa/jckh/dlr_leopard_dev_12/dfb/doc/001-10043-dfb-BROS-starB.doc:
 ERR=No such file or directory
12-Aug 19:29 ws27-fd JobId 297:  Could not stat File = 
/mnt/fog006/ddrswa/jckh/dlr_leopard_dev_12/dfb/doc/001-10043-dfb-BROS-starB.pdf:
 ERR=No such file or directory
12-Aug 19:29 ws27-fd JobId 297:  Could not stat File = 
/mnt/fog006/ddrswa/jckh/dlr_leopard_dev_12/dfb/doc/001-10043-dfb-BROS-starC.doc:
 ERR=No such file or directory



# List of files to be backed up - 19 lines
FileSet {
  Name = Test_FS_ws27
  Include {
    Options {
      compression=GZIP
      signature = MD5
    }
    # Backup Specific Files From a WorkSpace on FOG
    # This could be created by a script that would be
    # RunBeforeJob to place a list of files that are not under
    # Sync or ICM Control
    File = \\</net/ws27/disk2/ddrsbu/bacula/Test_Job_ws27.txt
  }

#
  Exclude {
    # Exclude SnapShot Directories
    File = /.snapshot
  }
}



*estimate
The defined Job resources are:
 1: Job_Test_FileList
 2: Default_Job_WS27
 3: Default_Job_nfs5
 4: BackupCatalog_WS27
 5: RestoreFiles_Default_Job_WS27
 6: Camgian_nfs5_icmwa
Select Job resource (1-6): 1
Using Catalog MyCatalog
Connecting to Client ws27-fd at ws27.ms.camgian.com:9102
2000 OK estimate files=0 bytes=0
*

If I use the @ syntax and restart bacula-dir, I can estimate or run the 
job to create the backup:

# List of files to be backed up - 19 lines
FileSet {
  Name = Test_FS_ws27
  Include {
Options {
compression=GZIP
signature = MD5
}
# Backup Specific Files From a WorkSpace on FOG
# This could be created by a script that would be
# RunBeforeJob to place a list of files that are not under
# Sync or ICM Control
File = @/net/ws27/disk2/ddrsbu/bacula/Test_Job_ws27.txt
  }

#
  Exclude {
# Exclude SnapShot Directories
File = /.snapshot
  }
}

*estimate
The defined Job resources are:
 1: Job_Test_FileList
 2: Default_Job_WS27
 3: Default_Job_nfs5
 4: BackupCatalog_WS27
 5: RestoreFiles_Default_Job_WS27
 6: Camgian_nfs5_icmwa
Select Job resource (1-6): 1
Using Catalog MyCatalog
Connecting to Client ws27-fd at ws27.ms.camgian.com:9102
2000 OK estimate files=49 bytes=28,406,715
*quit
[r...@ws27 bacula]#








[Bacula-users] Bacula asking for a new tape volume in a file volume pool

2009-08-12 Thread Mark Walkom
I am seeing this error a lot in messages;
13-Aug 15:19 benjy-sd JobId 31: Job BenjyFull.2009-08-12_21.05.00.57
waiting. Cannot find any appendable volumes.
Please use the label  command to create a new Volume for:
Storage:  LTO2 (/dev/nst0)
Pool: Incremental
Media type:   LTO2

Here are our pools and volumes;
*list volumes
Pool: Default
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      17 | WEEK1_001  | Used      |       1 | 39,160,267,776 |       75 |      950,400 |       1 |    0 |         0 | LTO2      | 2009-08-05 12:12:27 |
|      18 | WEEK1_002  | Used      |       1 | 12,403,464,192 |       35 |      950,400 |       1 |    0 |         0 | LTO2      | 2009-08-07 10:51:41 |
|      19 | WEEK1_003  | Append    |       1 |              0 |       10 |      950,400 |       1 |    0 |         0 | LTO2      | 0000-00-00 00:00:00 |
|      20 | WEEK1_004  | Append    |       1 |              0 |       22 |      950,400 |       1 |    0 |         0 | LTO2      | 0000-00-00 00:00:00 |
|      21 | WEEK1_005  | Append    |       1 |              0 |        0 |      950,400 |       1 |    0 |         0 | LTO2      | 0000-00-00 00:00:00 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
Pool: Incremental
+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName       | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       8 | Incremental_0001 | Full      |       1 | 19,999,944,755 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 21:51:22 |
|       9 | Incremental_0002 | Full      |       1 | 19,999,945,596 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 22:21:46 |
|      10 | Incremental_0003 | Full      |       1 | 19,999,945,761 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 22:47:28 |
|      11 | Incremental_0004 | Full      |       1 | 19,999,945,373 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 23:24:09 |
|      12 | Incremental_0005 | Full      |       1 | 19,999,945,880 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 23:47:44 |
|      13 | Incremental_0006 | Used      |       1 |  4,314,839,214 |        1 |      950,400 |       1 |    0 |         0 | File      | 2009-08-04 23:52:51 |
|      14 | Incremental_0007 | Full      |       1 | 19,999,945,856 |        4 |      950,400 |       1 |    0 |         0 | File      | 2009-08-07 10:33:09 |
|      15 | Incremental_0008 | Used      |       1 |  7,595,827,276 |        1 |      950,400 |       1 |    0 |         0 | File      | 2009-08-07 10:42:37 |
|      16 | Incremental_0009 | Used      |       1 |    896,336,707 |        0 |      950,400 |       1 |    0 |         0 | File      | 2009-08-07 21:13:16 |
+---------+------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+


I am a bit unsure why it is asking for a tape volume when the Incremental pool contains file volumes?
This is a backport install on Sarge;
*ver
benjy-dir Version: 2.4.4 (28 December 2008) i486-pc-linux-gnu debian 4.0


Thanks,
Mark