Re: [Bacula-users] one full backup of a job always fails

2009-09-16 Thread Silver Salonen
On Monday 14 September 2009 16:08:19 Alan Brown wrote: On Mon, 14 Sep 2009, Silver Salonen wrote: Hello. I've got a problem in Bacula 3.0.2 - one backup of a server always fails with error: JobId 14638: Fatal error: sql_create.c:806 Fill Filename table Query failed: INSERT INTO

Re: [Bacula-users] Device FileStorage with MediaType File requested by DIR not found in SD Device resources.

2009-09-16 Thread Roger Meier
Martin Simmons wrote: What does status storage=File print, in particular the Device status part? The Device status part of status storage=File prints out the following:
Device status:
Device File (/data/backup) is not open.
Device LTO-2 (/dev/nst0) is not open.
Device LTO-3 (/dev/nst1)

Re: [Bacula-users] Repeated job failures

2009-09-16 Thread Richard Mortimer
Hi, On 15/09/2009 16:54, Alan Brown wrote: 2.4.4 (not able to update yet) I keep getting errors like this on update slots and incremental updates of the bacula-dir server Subject: Bacula: Admin Fatal Error of UpdateSlots.2009-09-13_12.30.43 Full 14-Sep 18:24 msslay-dir JobId 104670:

Re: [Bacula-users] Repeated job failures

2009-09-16 Thread Alan Brown
On Wed, 16 Sep 2009, Richard Mortimer wrote: The immediate failure is because the number of affected rows is zero. I'm not sure what the underlying cause could be (apart from the fact that JobId 104670 is not in the Job table; pruning or a bug?) More likely a bug than pruning. It's being run

Re: [Bacula-users] one full backup of a job always fails

2009-09-16 Thread Alan Brown
On Wed, 16 Sep 2009, Silver Salonen wrote: Yes, I did see this by googling, but I really doubted the server was out of disk space - there are several gigabytes free for MySQL DB and /tmp ain't full either. So I just couldn't believe it :) As you've discovered: It's surprising how big the temp
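The resolution of this thread (MySQL running out of temporary space while filling the Filename table) can be checked ahead of time. A minimal sketch, assuming MySQL's tmpdir is /tmp (the common default; confirm with SHOW VARIABLES LIKE 'tmpdir') and an arbitrary example threshold of 2 GB:

```shell
# Sketch: warn when the filesystem holding MySQL's tmpdir is low on space.
# MYSQL_TMPDIR and the 2 GB threshold are assumptions; adjust to your setup.
MYSQL_TMPDIR=${MYSQL_TMPDIR:-/tmp}
FREE_KB=$(df -Pk "$MYSQL_TMPDIR" | awk 'NR==2 {print $4}')
THRESHOLD_KB=$((2 * 1024 * 1024))   # 2 GB expressed in KB
if [ "$FREE_KB" -lt "$THRESHOLD_KB" ]; then
  echo "WARNING: only ${FREE_KB} KB free in $MYSQL_TMPDIR"
fi
```

Large catalogs can need several gigabytes of temp space during a full backup's attribute insert, so the threshold should scale with the size of the File table.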

Re: [Bacula-users] scheduling problem

2009-09-16 Thread Roy Kidder
I stand corrected. I finally got through the full backup, swapping volumes as soon as I could once they were requested but the following 2 nights bacula scheduled another full backup (when it should have been incremental). If anyone has suggestions, I'd appreciate hearing them. Thanks in

[Bacula-users] Fileset manipulation by job

2009-09-16 Thread Joseph L. Casale
Is it possible to manipulate the fileset based on the job? For example, a RunScript parameter has a %l to pass the Job Level on; can the fileset somehow be manipulated like this as well? Thanks, jlc
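There is no direct level substitution inside a FileSet, but the %l RunScript substitution asked about above can drive one indirectly. A minimal sketch, with hypothetical job, script, and list-file names; the less-than file-list syntax is the FileSet feature where a name preceded by < is taken as a file containing the list of files:

```conf
# Sketch (names and paths are hypothetical): a Before script receives the
# job level via %l and rewrites a file list that the FileSet reads when
# the job runs.
Job {
  Name = "varying-fileset-job"
  JobDefs = "DefaultJob"
  FileSet = "DynamicSet"
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "/usr/local/bin/gen-filelist.sh %l /etc/bacula/filelist.txt"
  }
}

FileSet {
  Name = "DynamicSet"
  Include {
    Options { signature = MD5 }
    # '<' means: read the list of files to back up from this file.
    File = "</etc/bacula/filelist.txt"
  }
}
```

Note that changing a FileSet's contents normally triggers a new Full backup on the next run, so a scheme like this needs care with level handling.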

Re: [Bacula-users] Creating a Volume on a tape times out

2009-09-16 Thread Watson, Joe
With lsscsi I never find a device for /dev/nst0, which is why I went with /dev/sg0. Is it possible Bacula is expecting the latter and resolves it on its own somehow? This is the output from lsscsi -g: [4:0:4:0] tape HP Ultrium 4-SCSI W24W /dev/st0 /dev/sg0 [4:0:4:1] mediumx HP 1x8

Re: [Bacula-users] Creating a Volume on a tape times out

2009-09-16 Thread Mark Nienberg
Watson, Joe wrote: With lsscsi I never find a device for /dev/nst0, which is why I went with /dev/sg0. Is it possible Bacula is expecting the latter and resolves it on its own somehow? This is the output from lsscsi -g: [4:0:4:0] tape HP Ultrium 4-SCSI W24W /dev/st0 /dev/sg0
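The nst0-versus-sg0 confusion in this thread usually comes down to which resource each node belongs in: Bacula writes to the tape drive through the non-rewinding st node (Archive Device), while the generic SCSI (sg) node of the changer is only used by the mtx-changer script (Changer Device). A minimal sketch of the storage daemon side; /dev/nst0 follows from the lsscsi output above, and /dev/sg1 for the mediumx changer at [4:0:4:1] is an assumption to verify with lsscsi -g:

```conf
# Sketch: drive vs. changer device nodes. Paths other than /dev/nst0 are
# assumptions for illustration; check them against lsscsi -g.
Autochanger {
  Name = "HP-1x8"
  Device = "Ultrium-4"
  Changer Device = /dev/sg1      # sg node of the mediumx (changer), not the drive
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}

Device {
  Name = "Ultrium-4"
  Media Type = LTO-4
  Archive Device = /dev/nst0     # non-rewinding node paired with /dev/st0
  AutoChanger = yes
  AutomaticMount = yes
  AlwaysOpen = yes
}
```

If /dev/nst0 is genuinely absent, the st kernel module is usually not loaded; lsscsi only lists /dev/st0 and the nst node is created alongside it by the driver.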

[Bacula-users] quantum tape problems

2009-09-16 Thread Ulrich Leodolter
Hi, I have a Quantum LTO-4 drive connected via an LSI SAS HBA (LSISAS1068E).
Device {
  Name = LTO-4-Drive
  Description = QUANTUM LTO-4 Tape Drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;

Re: [Bacula-users] Creating a Volume on a tape times out

2009-09-16 Thread Watson, Joe
3995 Bad autochanger unload slot 2, drive 0: ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 2...
mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx:

Re: [Bacula-users] Fileset manipulation by job

2009-09-16 Thread Joseph L. Casale
a RunScript parameter has a %l to pass the Job Level on Hey, that sounds interesting, I did not even know that; can you point me to the online documentation where I could find more about that? Hannes, the area in the docs is under the Director Config Job Resource:

Re: [Bacula-users] Fileset manipulation by job

2009-09-16 Thread Hannes Gruber
Juche Joseph, a RunScript parameter has a %l to pass the Job Level on Hey, that sounds interesting, I did not even know that; can you point me to the online documentation where I could find more about that? Hannes

[Bacula-users] Reports of saved files after a backup job

2009-09-16 Thread Jose Perez
Hi people: Is it possible to send an email to some user containing the list of all files saved by a backup? The Messages resource only mentions files that were not saved. Could someone point me to a possible solution for this? Thanks
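The Messages resource indeed only covers files that were not saved; the saved-file list lives in the catalog. A minimal sketch, with hypothetical job name and address, assuming bconsole and a working local mail command on the director host: a RunScript After pulls the list (%i expands to the JobId) and mails it. The nested quoting in Command is fragile and may need adjusting for your shell.

```conf
# Sketch (hypothetical names): after the job finishes, mail the catalog's
# list of files saved by that JobId.
Job {
  Name = "backup-with-file-report"
  JobDefs = "DefaultJob"
  RunScript {
    RunsWhen = After
    RunsOnClient = no
    Command = "/bin/sh -c \"echo 'list files jobid=%i' | bconsole | mail -s 'Files saved by job %i' user@example.com\""
  }
}
```

For very large jobs the list can run to millions of lines, so writing it to a file and mailing a summary may be kinder to the mail system.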

[Bacula-users] Restore only the contents of the last job

2009-09-16 Thread Jose Perez
Hi all: I'm running Bacula 3.0.2 with a policy of one Full backup per month and Incremental backups on the remaining days. I intend to offer my end users a kind of archive access like this: 1. Through Samba, publish the contents of user backups. 2. Create a directory named with the current date
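The dated-directory step of the plan above can be scripted. A minimal sketch, with a hypothetical client name and archive root; the exact bconsole restore modifiers should be checked against the console documentation before relying on this:

```shell
# Sketch: make a directory named after today's date, then ask bconsole to
# restore the most recent backup of a (hypothetical) client into it, so
# Samba can export the result.
ARCHIVE_ROOT=${ARCHIVE_ROOT:-/srv/archive}
DEST="$ARCHIVE_ROOT/$(date +%F)"
mkdir -p "$DEST"
# Guarded so the sketch is harmless where bconsole is not installed.
if command -v bconsole >/dev/null 2>&1; then
  echo "restore client=fileserver-fd current all done yes where=$DEST" | bconsole
fi
```

Restoring every night doubles the storage the backups occupy, so pruning old dated directories (or restoring only selected paths) is worth building in from the start.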

Re: [Bacula-users] upgrade bacula from 1.38.11 to 2.4.4 brings Fatal error: Error getting Volume info: 1990 Invalid Catalog Request:

2009-09-16 Thread Bruno Friedmann
Hi, with all your information, it seems you got the right package. It could be two things: a permissions problem between the director and MySQL, or trouble with file permissions, since Bacula now runs the director and SD as non-root. Just check that /var/lib/bacula and the directory where you store backups are owned by the
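The ownership check suggested above can be done in one pass. A minimal sketch, using the common default catalog working directory and an assumed volume path (/srv/backup is a placeholder for wherever volumes are stored); both should normally belong to the non-root bacula user after the upgrade:

```shell
# Sketch: report owner/group of the catalog working area and the volume
# directory. /srv/backup is an assumption; substitute your storage path.
for d in /var/lib/bacula /srv/backup; do
  if [ -d "$d" ]; then
    ls -ld "$d"
  else
    echo "missing: $d"
  fi
done
```

If either directory shows root ownership, a chown -R bacula:bacula on it (while the daemons are stopped) is the usual fix after a root-to-bacula packaging change like 1.38 to 2.4.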