[Bacula-users] FD command not found:

2009-11-02 Thread Pascal Clermont
I have added a new machine to be backed up with Bacula and I cannot get a full
backup; I always get these weird errors.
The current setup is all CentOS 5.3 x86_64, and every other host besides this one
has been working flawlessly for over 6 months.

DIR Version: 3.0.1 (30 April 2009)

Client-FD Version: 3.0.1 (30 April 2009)


The error output:

JobId 2744: Fatal error: fd_cmds.c:181 FD command not found: 
wMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxZjUwMDczNWEw
MTRmMzYwMDAwMjQ3ZjAxMDEwMTAxMDEwMTAxMDEwMTAxMDENCjE4MmEzNzAwMzUwMTAxMDEw
MTAxMDEwMTAxMDEwMTAxZjcwMDMzMDEwMTAxMDEwMTAxNWE5YTY4NTQ1NzliODYxNjAxMDEw
MTNiOGEwMWE3ODcwMTAxMDE0ZWI5MmQwMTc4ZjhlYzAxMDEwMTAxMDEwMTAxMDEwMTAxNGZi
NGNhDQo1MTAxMDEwMTAxMDExYmM3N2UyMzRhYjAwMzFlMDFlNzAwOTFiODAwYzc4NTAwNTYw 
snip
DkwYzE4MzcyYjAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwZWI5Y2Ux
NjAxMGU1YjY1MDEzMg0KMDFiZDRmNGY3NWUwZDQ4MWIwZWFkMTM3MmIwMTAxMDFhMTAwYjNh
MDAxMDEwMTAxY2Y3MWUwZTBlMDgxODAwMTAxMDEwMTAxMDEwMTAxOTI3Y2NhYzQ0YTQ0ZTAw
YjAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDE4N227
JobId 2744: Fatal error: fd_cmds.c:172 Command error with FD, hanging up.

Here is its configuration in the bacula-dir config file:
Job {
  Name = Vicenza Daily
  Client = vicenza.dmz.lexum.pri
  FileSet = Full Set Vicenza
  JobDefs = Daily
  Schedule = Daily
  Write Bootstrap = /var/lib/bacula/vicenza-daily.bsr
}
FileSet {
  Name = Full Set Vicenza
  Include {
    Options {
      signature = MD5
    }
    File = /etc
    File = /opt/zimbra/backup
  }
  Exclude {
  }
}
Client {
  Name = vicenza.dmz.lexum.pri
  Address = vicenza.dmz.lexum.pri
  FDPort = 9102
  Maximum Concurrent Jobs = 10
  Catalog = MyCatalog
  Password = somepassword
  File Retention = 6 months   
  Job Retention = 6 months
  AutoPrune = yes
}

Here is the FD configuration file:
Director {
  Name = catanzaro.lan.lexum.pri-dir
  Password = somepassword
}

Director {
  Name = bacula-mon
  Password = somepassword
  Monitor = yes
}
Messages {
  Name = Standard
  director = catanzaro.lan.lexum.pri-dir = all, !skipped, !restored
}

Is there a way to make sense of why the client would be trying to execute
any FD commands?

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] FD command not found:

2009-10-28 Thread Pascal Clermont
I have added a new machine to be backed up with Bacula and I cannot get a full
backup; I always get these weird errors.
The current setup is all CentOS 5.3 x86_64, and every other host besides this one
has been working flawlessly for over 6 months.

DIR Version: 3.0.1 (30 April 2009)

Client-FD Version: 3.0.1 (30 April 2009)


The error output:

JobId 2744: Fatal error: fd_cmds.c:181 FD command not found: 
wMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxZjUwMDczNWEw
MTRmMzYwMDAwMjQ3ZjAxMDEwMTAxMDEwMTAxMDEwMTAxMDENCjE4MmEzNzAwMzUwMTAxMDEw
MTAxMDEwMTAxMDEwMTAxZjcwMDMzMDEwMTAxMDEwMTAxNWE5YTY4NTQ1NzliODYxNjAxMDEw
MTNiOGEwMWE3ODcwMTAxMDE0ZWI5MmQwMTc4ZjhlYzAxMDEwMTAxMDEwMTAxMDEwMTAxNGZi
NGNhDQo1MTAxMDEwMTAxMDExYmM3N2UyMzRhYjAwMzFlMDFlNzAwOTFiODAwYzc4NTAwNTYw 
snip
DkwYzE4MzcyYjAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwZWI5Y2Ux
NjAxMGU1YjY1MDEzMg0KMDFiZDRmNGY3NWUwZDQ4MWIwZWFkMTM3MmIwMTAxMDFhMTAwYjNh
MDAxMDEwMTAxY2Y3MWUwZTBlMDgxODAwMTAxMDEwMTAxMDEwMTAxOTI3Y2NhYzQ0YTQ0ZTAw
YjAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDE4N227
JobId 2744: Fatal error: fd_cmds.c:172 Command error with FD, hanging up.

Here is its configuration in the bacula-dir config file:
Job {
  Name = Vicenza Daily
  Client = vicenza.dmz.lexum.pri
  FileSet = Full Set Vicenza
  JobDefs = Daily
  Schedule = Daily
  Write Bootstrap = /var/lib/bacula/vicenza-daily.bsr
}
FileSet {
  Name = Full Set Vicenza
  Include {
    Options {
      signature = MD5
    }
    File = /etc
    File = /opt/zimbra/backup
  }
  Exclude {
  }
}
Client {
  Name = vicenza.dmz.lexum.pri
  Address = vicenza.dmz.lexum.pri
  FDPort = 9102
  Maximum Concurrent Jobs = 10
  Catalog = MyCatalog
  Password = somepassword
  File Retention = 6 months   
  Job Retention = 6 months
  AutoPrune = yes
}

Here is the FD configuration file:
Director {
  Name = catanzaro.lan.lexum.pri-dir
  Password = somepassword
}

Director {
  Name = bacula-mon
  Password = somepassword
  Monitor = yes
}
Messages {
  Name = Standard
  director = catanzaro.lan.lexum.pri-dir = all, !skipped, !restored
}

Is there a way to make sense of why the client would be trying to execute
any FD commands?



Re: [Bacula-users] FD command not found:

2009-10-28 Thread Pascal Clermont

- Original Message -
From: Sean M Clark smcl...@tamu.edu
To: bacula-users@lists.sourceforge.net bacula-users@lists.sourceforge.net
Sent: Wednesday, October 28, 2009 11:11:57 AM
Subject: Re: [Bacula-users] FD command not found:

Pascal Clermont wrote:
 I have added a new machine to be backed up with Bacula and I cannot
 get a full backup; I always get these weird errors. The current setup
 is all CentOS 5.3 x86_64 and every other host besides this one has
 been working flawlessly for over 6 months.
 
 DIR Version: Version: 3.0.1 (30 April 2009)
 
 Client-FD Version: Version: 3.0.1 (30 April 2009)
 
 
 the error output:
 
 JobId 2744: Fatal error: fd_cmds.c:181 FD command not found:
 wMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxZjUwMDczNWEw
[...]
 Would there be a way to make any sense on why the client would try to
 execute any FD commands?

I would assume the FD commands are commands from the Director to the
FD on the client.

I've been noticing similar errors on failed jobs lately in 3.0.x. I
think the FD command errors may just be a side-effect of a job being
interrupted for some reason, rather than the cause of the problem (I
would swear I've even seen these errors once or twice as a result of
intentionally cancelling jobs).

How far into the backup does it get before it dies?
27-Oct 20:05 catanzaro.lan.lexum.pri-dir JobId 2744: Start Backup
27-Oct 22:05 catanzaro.lan.lexum.pri-sd JobId 2744: Fatal error:

It always dies 2 hours in, on the nose, and as for what you said about
cancelling jobs, I have just noticed these lines prior to the error:

27-Oct 22:05 catanzaro.lan.lexum.pri-dir JobId 2744: Fatal error: Network error 
with FD during Backup: ERR=Connection reset by peer
27-Oct 22:05 catanzaro.lan.lexum.pri-sd JobId 2744: Job 
Vicenza_Daily.2009-10-27_20.05.01_16 marked to be canceled.


It seems like something is cutting the connection after 2 hours. Weird that the
issue is only with this host.
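If a firewall or NAT device between the Director/SD and this client drops idle control connections after a fixed interval, Bacula's Heartbeat Interval directive can keep them alive. A minimal sketch of the client side, assuming a hypothetical FD resource name (the poster's bacula-fd.conf FileDaemon resource is not shown):

```conf
# bacula-fd.conf on the client -- Heartbeat Interval is in seconds and
# makes the FD send periodic keepalives on otherwise-idle connections.
FileDaemon {
  Name = vicenza.dmz.lexum.pri-fd      # hypothetical name
  FDport = 9102
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Heartbeat Interval = 60
}
```

The same directive also exists on the Storage daemon side, so it may need to be set on both ends to cover both directions of the connection.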



Re: [Bacula-users] backup speed is slow

2009-04-09 Thread Pascal Clermont
Dan Langille wrote:
 Pascal Clermont wrote:
 Dan Langille wrote:
 Pascal Clermont wrote:
 Hi,
 I was reading the archives and stumbled upon [Bacula-users]
 Improving Backup speed thread which caught my eye since I am having
 issues.
 We back up over 6 TB of small files during a full backup, usually started on
 Friday night; these are far from being finished on Monday. The
 transfer on the wire is usually finished within 24 hours.
 My bottleneck is the Dir inserting attributes into my database. I only
 have 2.5 GB of RAM on that machine and intend to upgrade by the end
 of the month. (so far we intend to upgrade to 8 GB of RAM)

 Something in the thread caught my eye :

 
 On Thu, 15 Jan 2009 13:31:54 +0100, Bruno Friedmann br...@ioda-net.ch
 wrote:
 I added the following indexes:

 CREATE INDEX File_JobId_idx ON File(JobId);
 CREATE INDEX File_PathId_idx ON File(PathId);
 CREATE INDEX File_FilenameId_idx ON File(FilenameId);
 CREATE INDEX Path_PathId_idx ON Path(PathId);
 CREATE INDEX Job_FileSetId_idx ON Job(FileSetId);
 CREATE INDEX Job_ClientId_idx ON Job(ClientId);
 CREATE INDEX File_FilenameId_PathId_idx ON File(FilenameId,PathId);
 
 
 It remains to be seen whether the added indexing will impact
 insert/update
 performance, but I'll take a small performance hit if it means faster
 restores and sanity checks. 

 As I am no database expert, would I benefit from creating these
 indexes in order to speed up attribute insertion?
 My database is already 137 GB in size with only about 3 months of
 jobs. I've run dbcheck a few times (which takes ~48 hours to
 complete) but do not feel an improvement once that is done.


 Any tips/comments concerning improving database insertion would be
 greatly appreciated.
 I don't see any mention of MySQL or PostgreSQL.  Which are you using?
 I am using PostgreSQL 8
 
 8.1?  8.3?  There are significant differences in performance there IIRC.
 
 The following query will tell: select version()
 
PostgreSQL 8.3.6 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20071124 (Red Hat 4.1.2-42)
This was an RPM install; these are the only default settings changed in
postgresql.conf:
shared_buffers = 140MB
work_mem = 132MB
maintenance_work_mem = 240MB
max_fsm_pages = 204800
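For attribute-insert-heavy workloads, a few other PostgreSQL 8.3 settings are commonly tuned as well; the values below are illustrative assumptions for a machine of this size, not tested recommendations:

```conf
# postgresql.conf -- a hedged bulk-insert tuning sketch for 8.3
shared_buffers = 512MB        # give more of the RAM to the page cache
checkpoint_segments = 16      # fewer, larger checkpoints during big loads
wal_buffers = 8MB             # larger WAL buffer for heavy insert bursts
synchronous_commit = off      # trade a small crash window for insert speed
```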

This is with Bacula Version: 2.4.3 (10 October 2008)

Pascal S. Clermont

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup speed is slow

2009-04-08 Thread Pascal Clermont
Dan Langille wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Pascal Clermont wrote:
 Hi,
 I was reading the archives and stumbled upon [Bacula-users] Improving 
 Backup speed thread which caught my eye since I am having issues.
 We back up over 6 TB of small files during a full backup, usually started on Friday
 night; these are far from being finished on Monday. The transfer on the wire
 is usually finished within 24 hours.
 My bottleneck is the Dir inserting attributes into my database. I only have
 2.5 GB of RAM on that machine and intend to upgrade by the end of the month.
 (so far we intend to upgrade to 8 GB of RAM)

 Something in the thread caught my eye :

 
 On Thu, 15 Jan 2009 13:31:54 +0100, Bruno Friedmann br...@ioda-net.ch
 wrote:
 I added the following indexes:

 CREATE INDEX File_JobId_idx ON File(JobId);
 CREATE INDEX File_PathId_idx ON File(PathId);
 CREATE INDEX File_FilenameId_idx ON File(FilenameId);
 CREATE INDEX Path_PathId_idx ON Path(PathId);
 CREATE INDEX Job_FileSetId_idx ON Job(FileSetId);
 CREATE INDEX Job_ClientId_idx ON Job(ClientId);
 CREATE INDEX File_FilenameId_PathId_idx ON File(FilenameId,PathId);
 
 
 It remains to be seen whether the added indexing will impact insert/update
 performance, but I'll take a small performance hit if it means faster
 restores and sanity checks. 

 As I am no database expert, would I benefit from creating these indexes in
 order to speed up attribute insertion?
 My database is already 137 GB in size with only about 3 months of jobs.
 I've run dbcheck a few times (which takes ~48 hours to complete) but
 do not feel an improvement once that is done.


 Any tips/comments concerning improving database insertion would be greatly 
 appreciated.
 
 I don't see any mention of MySQL or PostgreSQL.  Which are you using?
I am using PostgreSQL 8.



[Bacula-users] backup speed is slow

2009-04-07 Thread Pascal Clermont
Hi,
I was reading the archives and stumbled upon the [Bacula-users] Improving Backup
speed thread, which caught my eye since I am having issues.
We back up over 6 TB of small files during a full backup, usually started on Friday
night; these are far from being finished on Monday. The transfer on the wire is
usually finished within 24 hours.
My bottleneck is the Dir inserting attributes into my database. I only have 2.5
GB of RAM on that machine and intend to upgrade by the end of the month. (so
far we intend to upgrade to 8 GB of RAM)

Something in the thread caught my eye :


On Thu, 15 Jan 2009 13:31:54 +0100, Bruno Friedmann br...@ioda-net.ch
wrote:
I added the following indexes:

CREATE INDEX File_JobId_idx ON File(JobId);
CREATE INDEX File_PathId_idx ON File(PathId);
CREATE INDEX File_FilenameId_idx ON File(FilenameId);
CREATE INDEX Path_PathId_idx ON Path(PathId);
CREATE INDEX Job_FileSetId_idx ON Job(FileSetId);
CREATE INDEX Job_ClientId_idx ON Job(ClientId);
CREATE INDEX File_FilenameId_PathId_idx ON File(FilenameId,PathId);


It remains to be seen whether the added indexing will impact insert/update
performance, but I'll take a small performance hit if it means faster
restores and sanity checks. 

As I am no database expert, would I benefit from creating these indexes in
order to speed up attribute insertion?
My database is already 137 GB in size with only about 3 months of jobs. I've
run dbcheck a few times (which takes ~48 hours to complete) but do not
feel an improvement once that is done.
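Before creating the suggested indexes, it can help to see which ones the catalog already has; a sketch against the stock Bacula PostgreSQL schema (table names are lowercase there; verify against your own catalog):

```sql
-- List existing indexes on the hot catalog tables.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('file', 'path', 'filename', 'job');
```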


Any tips/comments concerning improving database insertion would be greatly 
appreciated.

Regards,

Pascal S. Clermont



Re: [Bacula-users] What is the bacula user's home directory?

2009-01-16 Thread Pascal Clermont
Martin Simmons wrote:
 On Wed, 14 Jan 2009 05:15:01 -0800, Kevin Keane said:
 In my configuration, the bacula-dir and bacula-sd both run as user 
 bacula rather than root, as recommended.

 Now I am trying to run a Run Before script that includes wget to a 
 password-protected site. The user name and password are in a .netrc 
 file. It doesn't work - wget does not use the user name and password 
 stored from the .netrc file. Apparently, the Run Before Job script is 
 using root's environment and home directory even when running as user 
 bacula. Consequently, .netrc is unreadable to the script.

 Is there a way to work around this without putting the user name and 
 password into the script itself?
 
 If wget is using $HOME to find the home directory, then you'll need to set
 that in the run before script to make it work.
 
 __Martin
I don't know if you got your answer, but vipw displays
bacula:x:405:6:Bacula:/var/lib/bacula:/sbin/nologin. As you can see, on
my setup it is /var/lib/bacula; vipw is your friend.

--
This SF.net email is sponsored by:
SourceForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
Is it possible to have Bacula close a tape once the last job has run
(BackupCatalog in my case)?

Currently it leaves tapes in Append status, and then Bacula wants to
reuse the tape instead of taking a new one in the library.

Pascal



Re: [Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
Timo Neuvonen wrote:
 Is it possible to have Bacula close a tape once the last job has run
 (BackupCatalog in my case)?

 Currently it leaves tapes in Append status, and then Bacula wants to
 reuse the tape instead of taking a new one in the library.

 
 Easiest way (before knowing what you mean with last job) might be defining 
 for how long time Bacula may use the same tape, starting from first write 
 (after recycling). For example, if your jobs (from first to last) take a 
 maximum of 3 hours, you could set max volume use duration (check the correct 
 syntax from manual) to, say, 6 to 12 hours. While this time has expired 
 (next day?), Bacula will take another tape from the library.

This is what I use, but on the next Bacula run, Bacula kept wanting to
load the tape before taking into consideration that the tape was past
its usage expiration time.

 --
 TiN 
 
 
 





Re: [Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
John Drescher wrote:
 On Tue, Jan 13, 2009 at 10:41 AM, Pascal Clermont
 pascal-l...@lexum.umontreal.ca wrote:
 Is it possible to have bacula close a tape once last job is run
 (backupcatalog in my case)?

 currently it leaves tapes on a APPEND status, then bacula wants to
 reuse the tape instead of taking a new one in the library.

 Look for
 
 use volume once

I have several jobs executed during the night (~20), and with my
understanding of this value, every job during the same night would be
spawned to a different tape. "This is most useful when the Media is a
file and you want a new file for each backup that is done." -Bacula
documentation

 
 in the documentation.
 
 John





Re: [Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
John Drescher wrote:
 On Tue, Jan 13, 2009 at 10:57 AM, Timo Neuvonen timo-n...@tee-en.net wrote:
 Is it possible to have bacula close a tape once last job is run
 (backupcatalog in my case)?

 currently it leaves tapes on a APPEND status, then bacula wants to
 reuse the tape instead of taking a new one in the library.

 Easiest way (before knowing what you mean with last job) might be defining
 for how long time Bacula may use the same tape, starting from first write
 (after recycling). For example, if your jobs (from first to last) take a
 maximum of 3 hours, you could set max volume use duration (check the correct
 syntax from manual) to, say, 6 to 12 hours. While this time has expired
 (next day?), Bacula will take another tape from the library.

 That sounds better than my answer. Since mine will only allow 1 job run...
 
 John
This is what I use, but on the next Bacula run, Bacula kept wanting to
load the tape before taking into consideration that the tape was past
its usage expiration time.

The big issue is the fact that the tape is no longer in the library,
since it has gone offsite for safekeeping. So Bacula ends up dying until I
restart it and manually mark the tape as full.
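For reference, the manual workaround can be done from bconsole roughly like this (the volume name is just an example):

```text
# In bconsole: mark the volume Full so Bacula stops selecting it.
update volume=RC5890L3 volstatus=Full
```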



Re: [Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
John Drescher wrote:
 This is what I use, but on the next Bacula run, Bacula kept wanting to
 load the tape before taking into consideration that the tape was past
 its usage expiration time.

 The big issue is the fact that the tape is no longer in the library,
 since it has gone offsite for safekeeping. So Bacula ends up dying until I
 restart it and manually mark the tape as full.

 
 This is not normal behavior. How long have you allowed the volume usage?
 
 John

Pool {
  Name = Weekly
  Pool Type = Backup
  Storage = LTO
  Recycle = yes
  AutoPrune = yes
  Volume Use Duration = 72h
  Volume Retention = 1 months
}

The LTO library is only used on a weekly/monthly basis. The backup process
usually takes 24 to 48 hours.

If I remove the tape and do an update slots scan or update slots in
bconsole, the media is still flagged inchanger=1. This is an issue
as well, but I can live with that, since tapes are back in the library
before the Volume Retention expires.

Pascal



Re: [Bacula-users] closing tape upon completion.

2009-01-13 Thread Pascal Clermont
John Drescher wrote:
 If I remove the tape and do an update slots scan or update slots in
 bconsole, the media is still flagged inchanger=1. This is an issue
 as well, but I can live with that, since tapes are back in the library
 before the Volume Retention expires.

 Are you using bacula 2.4.X and not some ancient version like 1.38?
 
 John

Version: 2.4.3 (10 October 2008)



[Bacula-users] Autochanger and Volume status in catalog

2009-01-05 Thread Pascal Clermont
I have a weird issue that I cannot seem to figure out.

I have some backups that failed due to Bacula wanting to load a tape
that is not currently in the autochanger.

I have tried several things such as update slots, update slots scan,
and update volume to change the inchanger flag.

Here is the error message :
05-Jan 14:05 vicenza.lan.lexum.pri-sd JobId 884: 3307 Issuing
autochanger unload slot 1, drive 0 command.
05-Jan 14:06 vicenza.lan.lexum.pri-sd JobId 884: 3301 Issuing
autochanger loaded? drive 0 command.
05-Jan 14:06 vicenza.lan.lexum.pri-sd JobId 884: 3302 Autochanger
loaded? drive 0, result: nothing loaded.
05-Jan 14:06 vicenza.lan.lexum.pri-sd JobId 884: 3304 Issuing
autochanger load slot 11, drive 0 command.
05-Jan 14:11 vicenza.lan.lexum.pri-sd JobId 884: Fatal error: 3992 Bad
autochanger load slot 11, drive 0: ERR=Child died from signal 15:
Termination.
Results=source Element Address 1011 is Empty
Program killed by Bacula watchdog (timeout)

05-Jan 14:11 cagliari.lan.lexum.pri JobId 884: Fatal error: job.c:1817
Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data


Here is the output of list volumes:
Pool: Weekly
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| mediaid | volumename | volstatus | enabled | volbytes        | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       2 | RC5881L3   | Full      |       1 | 557,917,516,800 |      558 |    2,592,000 |       1 |    2 |         1 | DDS-4     | 2008-11-16 02:54:05 |
|       3 | RC5882L3   | Error     |       1 | 298,272,844,800 |      299 |    2,592,000 |       1 |    3 |         1 | DDS-4     | 2008-11-17 11:26:24 |
|       4 | RC5883L3   | Full      |       1 |       2,709,504 |        0 |    2,592,000 |       1 |    4 |         1 | DDS-4     | 2008-11-22 21:16:54 |
|       5 | RC5880L3   | Full      |       1 | 569,828,689,920 |      578 |    2,592,000 |       1 |    1 |         1 | DDS-4     | 2008-11-15 21:59:49 |
|       6 | RC5884L3   | Full      |       1 | 578,136,093,696 |      579 |    2,592,000 |       1 |    5 |         1 | DDS-4     | 2008-12-14 15:44:57 |
|       7 | RC5885L3   | Full      |       1 | 729,953,215,488 |      730 |    2,592,000 |       1 |    6 |         1 | DDS-4     | 2008-12-15 09:14:35 |
|       8 | RC5886L3   | Full      |       1 | 645,604,549,632 |      646 |    2,592,000 |       1 |    7 |         1 | DDS-4     | 2008-12-21 18:57:23 |
|       9 | RC5887L3   | Full      |       1 | 694,786,627,584 |      695 |    2,592,000 |       1 |    8 |         1 | DDS-4     | 2008-12-22 12:39:17 |
|      10 | RC5890L3   | Append    |       1 | 413,782,935,552 |      414 |    2,592,000 |       1 |   11 |         1 | DDS-4     | 2008-12-23 15:03:14 |
|      11 | RC5891L3   | Append    |       1 |          64,512 |        0 |    2,592,000 |       1 |   12 |         1 | DDS-4     |                     |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

Version: vicenza.lan.lexum.pri-dir 2.4.3 (10 October 2008),
i686-redhat-linux-gnu, Red Hat Enterprise release.

My bacula-sd config file:

Autochanger {
  Name = Autochanger
  Device = LTO
  Changer Command = /usr/lib/bacula/mtx-changer %c %o %S /dev/st0 %d
  Changer Device = /dev/sg1
}

Device {
  Name = LTO
  Drive Index = 0
  Media Type = DDS-4
  Archive Device = /dev/st0
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  Spool Directory = /bacula/tapes/SPOOLDIR
}

My question is: what can be done so that Bacula knows which tapes are in
the autochanger? When this happens, no other backups will work until I
restart the SD daemon (such as a backup to another pool), and if I run a
backup from that pool it still errors out since the volume is not there.
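One way to cross-check what the changer hardware itself reports, independent of the catalog, is to query it with mtx against the Changer Device from the config above (a sketch; assumes the mtx package is installed and the device is not busy):

```text
# Ask the autochanger for its slot inventory and compare it with what
# "list volumes" in bconsole claims is in the changer.
mtx -f /dev/sg1 status
```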

regards,

Pascal




[Bacula-users] Promoting manually level of jobs.

2008-11-28 Thread Pascal Clermont
I have a daily backup at the incremental level with a 7-day retention
period, and this works fine.

I have one share that is very big; it would be a lot better if it could
run its full on Fridays. Is it possible for me to promote the level to
Full on the fly in bconsole?

Or am I obliged to create a custom schedule for that specific share?
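In bconsole the level can be overridden per run without touching the schedule; a sketch (the job name is hypothetical):

```text
# bconsole: run the job once at Full level, overriding the schedule's level.
run job=BigShare-Daily level=Full yes
```

The trailing yes skips the confirmation prompt.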


Pascal S. Clermont

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Improving Backup speed

2008-11-18 Thread Pascal Clermont
Dan Langille wrote:
 On Nov 17, 2008, at 4:03 PM, Pascal Clermont wrote:
 
 Hi,
 Currently spending a lot of time on improving the speeds of this  
 network.
 After several tests I have realized that inserting attributes into the
 database was taking quite a lot of time. In order to improve this I
 have searched your forums and seen that using batch mode would increase
 this action by 10 times. The guide states:
  One way is to see if the PostgreSQL library that Bacula will be  
 linked
 against references pthreads. This can be done with a command such as:
   nm /usr/lib/libpq.a | grep pthread_mutex_lock .

 The file /usr/lib/libpq.a does not exist, I am using postgreSQL 8.3.5
 and did not install from source, but from the package manager that  
 comes
 with the OS, YUM.
 
 The problem with operating systems is they each think theirs is the
 right file layout.
 
 Experiment a little, and you'll find it.  Mine, on FreeBSD, is here:
 
 $  nm /usr/local/lib/libpq.a | grep pthread_mutex_lock
   U pthread_mutex_lock
   U pthread_mutex_lock
 
 

 Would someone know how I can find out if thread safety is on without
 breaking everything?
 And if it is on, will the Batch Insert code from Bacula work anyway,
 since the file it links against, /usr/lib/libpq.a, does not exist?
 
 I'm sure you have libpq.a
 
 Just keep in mind that location varies.
 


 Pascal S. Clermont

 -
 This SF.Net email is sponsored by the Moblin Your Move Developer's  
 challenge
 Build the coolest Linux based applications with Moblin SDK  win  
 great prizes
 Grand prize is a trip for two to an Open Source event anywhere in  
 the world
 http://moblin-contest.org/redirect.php?banner_id=100url=/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 
OK, here are the steps I took that made me believe that I do not have libpq.a:

[EMAIL PROTECTED] bacula]# find / -iname libpq.a
[EMAIL PROTECTED] bacula]#


[EMAIL PROTECTED] bacula]# updatedb && locate libpq.a
[EMAIL PROTECTED] bacula]#


[EMAIL PROTECTED] bacula]# whereis libpq.a
libpq:


[EMAIL PROTECTED] bacula]# rpm -ql postgresql-server
/etc/pam.d/postgresql
/etc/rc.d/init.d/postgresql
/etc/sysconfig/pgsql
/usr/bin/initdb
/usr/bin/ipcclean
/usr/bin/pg_controldata
/usr/bin/pg_ctl
/usr/bin/pg_resetxlog
/usr/bin/postgres
/usr/bin/postmaster
/usr/lib/pgsql
/usr/lib/pgsql/ascii_and_mic.so
/usr/lib/pgsql/cyrillic_and_mic.so
/usr/lib/pgsql/dict_int.so
/usr/lib/pgsql/dict_snowball.so
/usr/lib/pgsql/dict_xsyn.so
/usr/lib/pgsql/euc_cn_and_mic.so
/usr/lib/pgsql/euc_jis_2004_and_shift_jis_2004.so
/usr/lib/pgsql/euc_jp_and_sjis.so
/usr/lib/pgsql/euc_kr_and_mic.so
/usr/lib/pgsql/euc_tw_and_big5.so
/usr/lib/pgsql/latin2_and_win1250.so
/usr/lib/pgsql/latin_and_mic.so
/usr/lib/pgsql/plpgsql.so
/usr/lib/pgsql/test_parser.so
/usr/lib/pgsql/tsearch2.so
/usr/lib/pgsql/utf8_and_ascii.so
/usr/lib/pgsql/utf8_and_big5.so
/usr/lib/pgsql/utf8_and_cyrillic.so
/usr/lib/pgsql/utf8_and_euc_cn.so
/usr/lib/pgsql/utf8_and_euc_jis_2004.so
/usr/lib/pgsql/utf8_and_euc_jp.so
/usr/lib/pgsql/utf8_and_euc_kr.so
/usr/lib/pgsql/utf8_and_euc_tw.so
/usr/lib/pgsql/utf8_and_gb18030.so
/usr/lib/pgsql/utf8_and_gbk.so
/usr/lib/pgsql/utf8_and_iso8859.so
/usr/lib/pgsql/utf8_and_iso8859_1.so
/usr/lib/pgsql/utf8_and_johab.so
/usr/lib/pgsql/utf8_and_shift_jis_2004.so
/usr/lib/pgsql/utf8_and_sjis.so
/usr/lib/pgsql/utf8_and_uhc.so
/usr/lib/pgsql/utf8_and_win.so
/usr/share/locale/af/LC_MESSAGES/postgres.mo
/usr/share/locale/cs/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/cs/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/cs/LC_MESSAGES/postgres.mo
/usr/share/locale/de/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/de/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/de/LC_MESSAGES/postgres.mo
/usr/share/locale/es/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/es/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/es/LC_MESSAGES/postgres.mo
/usr/share/locale/fa/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/fr/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/fr/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/fr/LC_MESSAGES/postgres.mo
/usr/share/locale/hr/LC_MESSAGES/postgres.mo
/usr/share/locale/hu/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/hu/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/hu/LC_MESSAGES/postgres.mo
/usr/share/locale/it/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/it/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/it/LC_MESSAGES/postgres.mo
/usr/share/locale/ko/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/ko/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/ko/LC_MESSAGES/postgres.mo
/usr/share/locale/nb/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/nb/LC_MESSAGES/pg_resetxlog.mo
/usr/share/locale/nb/LC_MESSAGES/postgres.mo
/usr/share/locale/nl/LC_MESSAGES/postgres.mo
/usr/share/locale/pl/LC_MESSAGES/pg_controldata.mo
/usr/share/locale/pl

[Bacula-users] Improving Backup speed

2008-11-17 Thread Pascal Clermont
Hi,
I am currently spending a lot of time improving backup speeds on this network.
After several tests I have realized that inserting file attributes into the
database was taking quite a lot of time. Searching the forums, I found that
using batch mode can speed this step up by roughly 10 times. The guide states:
 One way is to see if the PostgreSQL library that Bacula will be linked
against references pthreads. This can be done with a command such as:
   nm /usr/lib/libpq.a | grep pthread_mutex_lock

The file /usr/lib/libpq.a does not exist; I am using PostgreSQL 8.3.5 and did
not install it from source but from the package manager that comes with the
OS (yum).

Would someone know how I can find out whether thread safety is enabled
without breaking anything? And if it is enabled, will Bacula's batch-insert
code work anyway, given that the file it links against, /usr/lib/libpq.a,
does not exist?
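For what it's worth, a packaged PostgreSQL usually ships only the shared library, which can be inspected the same way as the static one. A minimal sketch (the path /usr/lib/libpq.so.5 is an assumption for 32-bit CentOS 5; adjust to wherever your libpq lives):

```shell
# Sketch: check whether a PostgreSQL client library references pthreads.
# nm -D lists the dynamic symbol table of a shared object, so it works
# on libpq.so.* even when no static libpq.a was installed.
check_pthreads() {
    if nm -D "$1" 2>/dev/null | grep -q pthread_mutex_lock; then
        echo "$1: references pthreads"
    else
        echo "$1: no pthread reference found"
    fi
}

# Assumed path for an RPM-based 32-bit install; adjust as needed.
check_pthreads /usr/lib/libpq.so.5
```

If the postgresql-devel package is installed, `pg_config --configure` also shows whether the build was configured with --enable-thread-safety.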


Pascal S. Clermont

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Postgresql 8 - database encoding

2008-10-16 Thread Pascal Clermont
Hi,

We have been testing the Bacula solution for a few weeks now. We are going to
implement it in our production environment, since we are very happy with all
Bacula has to offer.

Before going forward with the implementation, I must ask: will the database
cause any issues in an environment with mixed latin1 and UTF-8 encodings if
we use the SQL_ASCII encoding?

I know the documentation points toward what I am asking, but I need
confirmation from the community.
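For reference, the encoding each database was created with can be checked with a query like the following (a sketch, run via psql; pg_encoding_to_char() is a PostgreSQL built-in):

```sql
-- List each database and its encoding, to confirm whether the
-- Bacula catalog really is SQL_ASCII.
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
ORDER BY datname;
```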


Regards,

Pascal S. Clermont



[Bacula-users] bweb + postgresql 8 ; can't get jobid logs

2008-10-06 Thread Pascal Clermont
Hi,
I am currently testing Bacula in order to replace our current backup
solution. I have been using bweb, since I like how things are displayed.
My current issue is that bweb cannot display the details of any job that
has run; it displays this: Can't get log for jobid 18.

Here is what tail of /var/log/httpd/error_log outputs when I try to see
the details:

[Mon Oct 06 06:20:37 2008] [error] [client 192.168.4.201] DBD::Pg::db selectrow_hashref failed: ERROR:  function group_concat(text) does not exist, referer: http://192.168.4.236/cgi-bin/bweb/bweb.pl?action=job_zoom;jobid=18
[Mon Oct 06 06:20:37 2008] [error] [client 192.168.4.201] HINT:  No function matches the given name and argument types. You may need to add explicit type casts. at /usr/lib/perl5/site_perl/5.8.8/Bweb.pm line 1396., referer: http://192.168.4.236/cgi-bin/bweb/bweb.pl?action=job_zoom;jobid=18
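For context, group_concat() is a MySQL built-in that PostgreSQL does not provide, so bweb's SQL fails until an equivalent aggregate is defined in the catalog database. bweb ships its own definition in its install SQL, which should be preferred; a minimal sketch of such an aggregate (function name here is illustrative, not necessarily what bweb ships) looks like:

```sql
-- Minimal group_concat replacement for PostgreSQL (illustrative).
-- The state function appends each value, comma-separated.
CREATE OR REPLACE FUNCTION concat_comma(text, text) RETURNS text AS $$
    SELECT CASE WHEN $1 = '' THEN $2 ELSE $1 || ',' || $2 END
$$ LANGUAGE SQL IMMUTABLE;

-- Old-style CREATE AGGREGATE syntax, accepted by all 8.x releases.
CREATE AGGREGATE group_concat (
    BASETYPE = text,
    SFUNC    = concat_comma,
    STYPE    = text,
    INITCOND = ''
);
```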

here is my bacula-dir.conf configuration:

#
# Default Bacula Director Configuration file
#
#  The only thing that MUST be changed is to add one or more
#   file or directory names in the Include directive of the
#   FileSet resource.
#
#  For Bacula release 2.4.2 (26 July 2008) -- redhat
#
#  You might also want to change the default email address
#   from root to your address.  See the mail and operator
#   directives in the Messages resource.
#

Director {# define myself
   Name = dhcpclt-236-dir
   DIRport = 9101# where we listen for UA connections
   QueryFile = /home/bacula/conf/query.sql
   WorkingDirectory = /home/bacula/working
   PidDirectory = /home/bacula/bin/working
   Maximum Concurrent Jobs = 1
   Password = cPAShD32jEb9i1+abtA8zTJdKdnONY347/yADyK9Aw7j  # Console password
   Messages = Daemon
}

JobDefs {
   Name = DefaultJob
   Type = Backup
   Level = Incremental
   Client =  pisa.lan.lexum.pri
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = File
   Messages = Standard
   Pool = Default
   Priority = 10
}


#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /home/bacula/vtapes
Job {
   Name = PISA
   Client = pisa.lan.lexum.pri
   JobDefs = DefaultJob
   Write Bootstrap = /home/bacula/working/pisa.bsr
}

#Job {
#  Name = Client2
#  Client = dhcpclt-2362-fd
#  JobDefs = DefaultJob
#  Write Bootstrap = /home/bacula/working/Client2.bsr
#}

# Backup the catalog database (after the nightly save)
Job {
   Name = BackupCatalog
   JobDefs = DefaultJob
   Level = Full
   FileSet=Catalog
   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   # WARNING!!! Passing the password via the command line is insecure.
   # see comments in make_catalog_backup for details.
   # Arguments to make_catalog_backup are:
   #  make_catalog_backup database-name user-name password host
   RunBeforeJob = /home/bacula/conf/make_catalog_backup bacula bacula
   # This deletes the copy of the catalog
   RunAfterJob  = /home/bacula/conf/delete_catalog_backup
   Write Bootstrap = /home/bacula/working/BackupCatalog.bsr
   Priority = 11   # run after main backup
}

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
   Name = RestoreFiles
   Type = Restore
   Client=pisa.lan.lexum.pri
   FileSet=Full Set
   Storage = File
   Pool = Default
   Messages = Standard
   Where = /home/bacula/vtapes/bacula-restores
}


# List of files to be backed up
FileSet {
   Name = Full Set
   Include {
 Options {
   signature = MD5
 }
#
#  Put your list of files here, preceded by 'File =', one per line
#or include an external list with:
#
#File = file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitons such as /usr or /home
#you will probably want to add them too.
#
#  By default this is defined to point to the Bacula build
#directory to give a reasonable FileSet to backup to
#disk storage during initial testing.
#
 File = /var/lib/pgsql/backups
   }

#
# If you backup the root directory, the following two excluded
#   files can be useful
#
   Exclude {
 File = /proc
 File = /tmp
 File = /.journal
 File = /.fsck
 File = /dev
 File = /sys
 File = /boot
 File = /misc
 File = /var/lib/nfs/rpc_pipefs
 File = /selinux
 File = /home/bacula/vtapes
 File = /net
   }
}

#
# When to do the backups, full backup on first sunday of the month,
#  differential (i.e. incremental since full) every other sunday,
#  and incremental backups other days
Schedule {
   Name = WeeklyCycle
   Run = Full 1st sun at 23:05
   Run = Differential 2nd-5th sun at 23:05
   Run = Incremental mon-sat at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
   Name = WeeklyCycleAfterBackup
   Run = Full sun-sat at 23:10
}

# This is the backup of the catalog
FileSet {
   Name = Catalog
   Include {