Re: [Bacula-users] Bacula backup speed

2015-12-10 Thread Christian Manal
On 10.12.2015 01:06, Lewis, Dave wrote:
> Does anyone know what’s causing the OS backups to be so slow and what I
> can do to speed them up?

Hi,

the problem might be the number of files; that is, writing all the file
metadata to the catalog could very well be your bottleneck.

Try enabling attribute spooling, so all the metadata is collected and
committed to the DB in one go instead of file by file.
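
If it helps, the Job resource change might look something like this
(SpoolAttributes is the directive name per the Bacula manual; the job
name and the rest of the resource are just illustrative):

```
Job {
  Name = "os-backup"        # illustrative
  # ... your existing directives ...
  SpoolAttributes = yes     # collect file metadata and insert it into
                            # the catalog in one batch at job end
}
```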


Regards,
Christian Manal

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Jobs fail with duplicate key value violates unique constraint after upgrade to 7.0.5

2015-07-06 Thread Christian Manal
On 05.07.2015 22:15, Kern Sibbald wrote:
 On 05.07.2015 18:50, Christian Manal wrote:
 ...
 or 9.x you have to export them then reimport them.
 I dumped it in Postgres' custom format on the old host, with pg_dump
 -Fc, and then loaded it with pg_restore on the new host. But I just
 realized that I might've messed up creating the new database. I used the
 SQL_ASCII encoding, but didn't set LC_COLLATE and LC_CTYPE to C. Could
 that be the problem?
 
 I don't know, because I am not an expert on PostgreSQL.  In any case,
 now that it is created and you have used it, it might be hard to go
 back.  I am sorry, but at this point, I am out of ideas.

Thanks for your input anyway, Kern. I appreciate it.


I just dumped my current db and restored it into a new database with the
correct settings. Now I can find duplicates. In fact, there are 12
million duplicate rows in the Filename table alone. Seems like I found
my problem... now the question is what I can do about it.

Probably just gonna purge all file records and start fresh on that.
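
If purging everything turns out to be too drastic, a de-duplication
sketch along these lines might work instead (standard Bacula PostgreSQL
schema assumed; untested, and File rows pointing at the deleted
FilenameIds would have to be remapped first, so try it on a copy):

```sql
-- Keep the lowest FilenameId for each duplicate Name, delete the rest.
DELETE FROM Filename f
USING Filename keep
WHERE keep.Name = f.Name
  AND keep.FilenameId < f.FilenameId;
```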


Regards,
Christian Manal

--
Don't Limit Your Business. Reach for the Cloud.
GigeNET's Cloud Solutions provide you with the tools and support that
you need to offload your IT needs and focus on growing your business.
Configured For All Businesses. Start Your Cloud Today.
https://www.gigenetcloud.com/


[Bacula-users] Jobs fail with duplicate key value violates unique constraint after upgrade to 7.0.5

2015-07-05 Thread Christian Manal
Hi list,

I recently upgraded my Bacula director and storage to 7.0.5 from 5.2.x
and now some jobs fail with the following error:

Fatal error: sql_create.c:851 Fill Filename table Query failed: INSERT
INTO Filename (Name) SELECT a.Name FROM (SELECT DISTINCT Name FROM
batch) as a WHERE NOT EXISTS (SELECT Name FROM Filename WHERE Name =
a.Name): ERR=ERROR:  duplicate key value violates unique constraint
filename_name_idx

Platform is Solaris 10, database PostgreSQL 9.3.

I tried reindexing the Filename table, which fails with a similar error,
yet when I query the table for duplicate entries, I can't find any.


Any help with this issue would be appreciated.


Regards,
Christian Manal



Re: [Bacula-users] Jobs fail with duplicate key value violates unique constraint after upgrade to 7.0.5

2015-07-05 Thread Christian Manal
Hi Kern,

 It appears to me that for some reason you have duplicate filename
 entries in the Filename table.  Perhaps you have a filename of NULL and
 a filename of '' (the empty string), which perhaps PostgreSQL treats as equal in certain
 operations (e.g. during an import).
 
 The Bacula SQL statement is supposed to filter out duplicate names.  You
 might list the Filename table and take a look if there are two entries
 that seem to be identical (it seems like you already tried this). 

I tried querying for the values that were indicated as duplicates by the
error messages. I either got only one entry back or none at all. I also
tried looking for duplicates in general with something along these
lines, with no result:

select name,filenameid from filename group by name,filenameid having
count(name) > 1;
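
(In hindsight, a query like that groups by both name and filenameid, so
two rows sharing a name but having different ids would each form their
own group and never show up. Grouping on the name alone, roughly like
this, should catch them:)

```sql
-- Rows that share a Name but differ in FilenameId are counted together.
SELECT name, count(*) AS copies
FROM filename
GROUP BY name
HAVING count(*) > 1;
```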


 Another possibility for correcting it is to run the dbcheck program.  I
 think it has an option to check for duplicate filenames, but I am not
 100% certain since I have not used it in a long time.

Running that now, but I kinda doubt it'll turn anything up if manual
querying doesn't find duplicate rows.


 Did you recently change the backend from MySQL to PostgreSQL or make
 some other such modification or did your OS crash or PostgreSQL crash? 
 Normally, the table should never get messed up as it appears to be.

The last big thing I did with the catalog was moving it to another host
and upgrading the Postgres version from 8.3 to 9.3 in the process, but
everything worked fine until the upgrade to Bacula 7.0.5 a few days ago.


 Since reindexing the Filename table fails, most likely you really have
 a PostgreSQL table corruption problem that probably can only be fixed
 by some PostgreSQL repair command.

I figured as much when the reindexing failed, but since this only
started after the upgrade, just fixing the DB now would seem to be
treating the symptoms when I'd rather find and fix the cause.


Regards,
Christian Manal





Re: [Bacula-users] Jobs fail with duplicate key value violates unique constraint after upgrade to 7.0.5

2015-07-05 Thread Christian Manal
Am 05.07.2015 um 18:16 schrieb Kern Sibbald:
 On 05.07.2015 15:37, Christian Manal wrote:
 Hi Kern,

 It appears to me that for some reason you have duplicate filename
 entries in the Filename table.  Perhaps you have a filename of NULL and
 a filename of '' (the empty string), which perhaps PostgreSQL treats as equal in certain
 operations (e.g. during an import).

 The Bacula SQL statement is supposed to filter out duplicate names.  You
 might list the Filename table and take a look if there are two entries
 that seem to be identical (it seems like you already tried this). 
 I tried querying for the values that were indicated as duplicates by the
 error messages. I either got only one entry back or none at all. I also
 tried looking for duplicates in general with something along these
 lines, with no result:

 select name,filenameid from filename group by name,filenameid having
 count(name) > 1;


 Another possibility for correcting it is to run the dbcheck program.  I
 think it has an option to check for duplicate filenames, but I am not
 100% certain since I have not used it in a long time.
 Running that now, but I kinda doubt it'll turn anything up if manual
 querying doesn't find duplicate rows.


 Did you recently change the backend from MySQL to PostgreSQL or make
 some other such modification or did your OS crash or PostgreSQL crash? 
 Normally, the table should never get messed up as it appears to be.
 The last big thing I did with the catalog was moving it to another host
 and upgrading the Postgres version from 8.3 to 9.3 in the process, but
 everything worked fine until the upgrade to Bacula 7.0.5 a few days ago.
 
 When you did that PostgreSQL upgrade, did you export the database in
 .sql format before the upgrade then reload it back after the upgrade?
 
 If not, that is probably where things went wrong.  With MySQL, you just
 upgrade the program and all works.  With earlier PostgreSQL especially
 before 8.4 or 9.x you have to export them then reimport them.

I dumped it in Postgres' custom format on the old host, with pg_dump
-Fc, and then loaded it with pg_restore on the new host. But I just
realized that I might've messed up creating the new database. I used the
SQL_ASCII encoding, but didn't set LC_COLLATE and LC_CTYPE to C. Could
that be the problem?
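
(For reference, what I probably should have done when creating the new
database; a sketch, with the database name assumed to be bacula:)

```sql
-- template0 is required when overriding encoding/locale settings.
CREATE DATABASE bacula
  TEMPLATE template0
  ENCODING 'SQL_ASCII'
  LC_COLLATE 'C'
  LC_CTYPE 'C';
```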


Regards,
Christian Manal


 Since reindexing the Filename table fails, most likely you really have
 a PostgreSQL table corruption problem that probably can only be fixed
 by some PostgreSQL repair command.
 I figured as much when the reindexing failed, but since this only
 started after the upgrade, just fixing the DB now would seem to be
 treating the symptoms when I'd rather find and fix the cause.


 Regards,
 Christian Manal






Re: [Bacula-users] Backup of many small files

2013-11-04 Thread Christian Manal
On 04.11.2013 10:45, Willi Fehler wrote:
 Hi Bacula-Users,
 
 we want to backup our central nas-server.
 
 disk-usage: 885G
 ca. 10 million small files
 
 In the past the former IT colleagues tried to use Bacula, but the backup
 crashed. Does anybody have experience with many small files and Bacula,
 or know a good alternative? I think if we try it again, we have to disable
 compression.
 
 The central nas server is a DRBD-Cluster(Primary/Secondary) on Debian
 Squeeze with LVM/ext3 running on a Dell PowerEdge R710.
 
 For any feedback, I thank you in advance.
 
 Regards - Willi
 
 
 

Hi Willi,

I have a file set with roughly 12 million files with a size of just over
1 TB. The only problem I had with that job (in terms of number of files)
was when I tried to enable accurate backups for it, which crashed
something (director/database/file daemon), because it needed a boatload
of RAM that I didn't have.


Regards,
Christian Manal

--
Android is increasing in popularity, but the open development platform that
developers love is also attractive to malware creators. Download this white
paper to learn more about secure code signing practices that can help keep
Android apps secure.
http://pubads.g.doubleclick.net/gampad/clk?id=65839951&iu=/4140/ostg.clktrk


Re: [Bacula-users] Backup of many small files

2013-11-04 Thread Christian Manal
 1. How long does the backup take?

The last full run took 1 day 1 hour 33 mins 25 secs, incrementals and
differentials take around 4 to 6 hours.


 2. Do you use compression?

Yes, but not Bacula's. The backups go to a ZFS pool and LTO tapes, which
do their own compression.


 3. Do you use incremental backups?

Yes, as well as differentials. My cycle is monthly, with a full run on
the first weekend of each month, differentials on the remaining weekends
and incrementals on all other days.


Regards,
Christian Manal






Re: [Bacula-users] Question about Catalog Backup

2013-03-20 Thread Christian Manal
Am 20.03.2013 20:28, schrieb Sergio Belkin:
 Hi,
 
 I've read an example in the documentation as follows:
 
 
   RunBeforeJob = /home/kern/bacula/bin/make_catalog_backup
   RunAfterJob  = /home/kern/bacula/bin/delete_catalog_backup
 
 I don't understand: it performs a backup and then deletes it? Am I
 missing something?
 
 Thanks in advance!

Hi,

make_catalog_backup dumps the database into a text file before the actual
backup runs, then the file is backed up and delete_catalog_backup cleans
up behind it by deleting the dump.


Regards,
Christian Manal

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar


Re: [Bacula-users] recommended exclude list for Windows 2008 clients

2012-12-20 Thread Christian Manal
Am 20.12.2012 19:14, schrieb Martin Simmons:
 On Thu, 20 Dec 2012 18:34:27 +0100, Tilman Schmidt said:

 The WinSxS folder is a real hog, growing relentlessly, and
 apparently just needed for uninstallations and reinstallations,
 but I'm a bit unsure whether it's really a good idea to exclude
 it.
 
 WinSxS is not just needed for uninstallations and reinstallations!  It
 contains dlls that are used by almost every program, e.g. msvcrt and comctl32,
 so it is just as vital as the System32 directory.
 
 __Martin
 

WinSxS is also filled with hard links, so the actual size is more or
less zero. If Bacula picks that up correctly, it won't use up noticeable
space in the backup.


Regards,
Christian Manal

--
LogMeIn Rescue: Anywhere, Anytime Remote support for IT. Free Trial
Remotely access PCs and mobile devices and provide instant support
Improve your efficiency, and focus on delivering more value-add services
Discover what IT Professionals Know. Rescue delivers
http://p.sf.net/sfu/logmein_12329d2d


Re: [Bacula-users] Running complex commands or scripts remotely on a client

2012-09-06 Thread Christian Manal
On 06.09.2012 08:17, Alex Lucas wrote:
 On 06/09/12 12:10, ganiuszka wrote:
 W dniu 06.09.2012 03:23, Alex Lucas pisze:
 On 05/09/12 19:24, Christian Manal wrote:
 On 05.09.2012 12:43, Alex Lucas wrote:
 Dears,

 Is there a way to run a complex command or even a script on a client?

 So far I have tried two ways and failed:
 1) when a command (e.g. in ClientRunBeforeJob) has something like
 echo test > /tmp/test.out, bacula runs it on the client with echo as
 the command and the rest as the argument, i.e. there is no
 /tmp/test.out on the client
 2) when I run test.sh which is in the PATH on the bacula director it
 fails, I guess because there is no identical script on the client.

 Any suggestions?
 Hi,

 all the Run-statements execute what is defined directly, without a
 shell. So output redirection and stuff won't work unless you do
 something like this:

 ClientRunBeforeJob = /bin/bash -c 'echo foo > /tmp/foo.out'
  Thank you, this does it. One related question: since I have a few
  commands to run, is there a way to split the commands across
  several lines (for readability)?

 e.g.

 ClientRunBeforeJob = /bin/bash -c 'command one
command two'

 doesn't seem to work.

 Hi,

  It works for me. Did you try using a semicolon character to separate
  the individual commands?

 Example:

  ClientRunBeforeJob = /bin/bash -c 'echo aaa > /tmp/foo1.out; echo bbb
  > /tmp/foo2.out'
 Hi gani,
 It works with ; like you mentioned above but I was asking about
 splitting the command across multiple lines in the configuration file --
 when there are many commands it would make it easier to read.

Have you tried escaping the linebreaks with a backslash? Though, if you
have multiple commands to run, I'd rather go with a separate script,
instead of putting them into the config file.
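
A minimal sketch of such a script (everything here is a placeholder;
the job would then reference it with something like
ClientRunBeforeJob = "/usr/local/sbin/prejob.sh"):

```shell
#!/bin/bash
# Hypothetical prejob.sh bundling several pre-job commands.
# 'set -e' makes the script abort on the first failing step, so the
# backup job sees a non-zero exit status.
set -e
step_one="placeholder for a real command, e.g. a DB dump"
step_two="placeholder for a real command, e.g. a snapshot mount"
echo "$step_one"
echo "$step_two"
```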


Regards,
Christian Manal

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/


Re: [Bacula-users] Running complex commands or scripts remotely on a client

2012-09-05 Thread Christian Manal
On 05.09.2012 12:43, Alex Lucas wrote:
 Dears,
 
 Is there a way to run a complex command or even a script on a client?
 
 So far I have tried two ways and failed:
 1) when a command (e.g. in ClientRunBeforeJob) has something like
 echo test > /tmp/test.out, bacula runs it on the client with echo as
 the command and the rest as the argument, i.e. there is no /tmp/test.out on
 the client
 2) when I run test.sh which is in the PATH on the bacula director it
 fails, I guess because there is no identical script on the client.
 
 Any suggestions?

Hi,

all the Run-statements execute what is defined directly, without a
shell. So output redirection and stuff won't work unless you do
something like this:

   ClientRunBeforeJob = /bin/bash -c 'echo foo > /tmp/foo.out'

If you want to call a script, you'd have to define the absolute path or
a relative one from the executing daemon's working directory.


Regards,
Christian Manal



Re: [Bacula-users] cancel more than one job

2012-05-29 Thread Christian Manal
On 29.05.2012 14:58, Robert Kromoser wrote:
 Hi everybody.
 
  
 
 When I change my Bacula configuration, I sometimes have the problem
 that a "list jobs" shows me 30 or more jobs with status running, but I
 know they don't run, because the job date is some days ago. With
 "cancel" I can only cancel one job after the other.
 
 Is there a way to cancel more than one job at a time?
 
 E.g. cancel jobid=1500-1580
 or
 cancel jobid=1601,1605,1670,1678
 
 BR Robert

Hi,

I don't think so. The quick and dirty workaround would be something like
this:

   for jobid in {1500..1580}; do
      echo "cancel jobid=$jobid"
   done | bconsole


Regards,
Christian Manal



Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-16 Thread Christian Manal
On 16.04.2012 12:09, Hugo Letemplier wrote:
 Hello
 
 I use Bacula 5.0.3
 
 On a few Linux servers I have database dumps that run every night at a
 specified time.
 For synchronisation reasons between databases these backups are run via
 crontab and not directly from bacula.
 
 I need bacula to save these database dumps every morning:
 - The filesystem is a read-only LVM snapshot of a virtual machine (the
 backup is run on the physical host and not on the virtual machine)
 - The snapshot is generated and mounted in a Run Before Job script
 
 The rotation scheme that deletes old dumps on the backed-up server is
 not the same as the one configured on the bacula servers.
 
 I need bacula to:
 - Run a full
 - Save only the dumps that haven't already been backed up.
 
 I must have a full:
 - If I do increments, I will need to keep the full, and this is not
 what I want; if the full is deleted it will create a new one
 - Moreover, a DB dump has no dependency on previous dumps
 
 I can't select only the dump of the day:
 - If the bacula job doesn't work one day, the next one must back up the
 missed db dumps that were not backed up during the failed job
 
 I can't use a predefined list of files in the fileset because the
 estimate seems to be done before the Run Before Job script that
 generates the snapshot, so it doesn't validate the include path.
 File = "\\|bash -c \"find ……\"" won't work because it's run before my
 snapshot creation
 
 I think that leaves the options sections of the fileset, but I didn't
 find anything that fits
 
 In fact I want to run a full that saves only the files changed since
 the last successful backup, without using the incremental method,
 because that would generate a full that will be deleted, leaving me a
 useless FULL - INC dependency
 
 Have you got an idea ?
 
 Thanks

Hi,

if I understand you right, you want Bacula's virtual backup. You can run
your usual Full and Incremental jobs and then consolidate them into a
new Full backup. See

http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137
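
Triggering the consolidation from bconsole looks roughly like this (the
job name is a placeholder; note that per the linked feature description
the pools involved need a Next Pool configured so the virtual full has
somewhere to be written):

```
run job=BackupClient1 level=VirtualFull
```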


Regards,
Christian Manal


--
For Developers, A Lot Can Happen In A Second.
Boundary is the first to Know...and Tell You.
Monitor Your Applications in Ultra-Fine Resolution. Try it FREE!
http://p.sf.net/sfu/Boundary-d2dvs2


Re: [Bacula-users] Problem with job restore

2012-04-13 Thread Christian Manal
Am 13.04.2012 16:37, schrieb Carlo Filippetto:
 Hi,
 I have cloned a server to use the clone as disaster recovery.
 I want to use bacula to restore the backup of the production server into
 his clone, but I have this error:
 
 /13-apr 16:21 bacula-dir JobId 9645: Fatal error: Cannot restore
 without a bootstrap file.
 You probably ran a restore job directly. All restore jobs must
 be run using the restore command.
 /
 
 
 I tried to read some documentation but I don't understand what I have to
 do for this problem.
 
 Thanks

Hi,

as the message says, if you want to run a restore job manually (i.e. via
the run command in bconsole), you need a bootstrap file. The common way
to restore files is via the restore command.
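
For your clone scenario, something along these lines in bconsole might
do it (the client names are placeholders); the restore command then
creates the bootstrap file itself:

```
restore client=production-fd restoreclient=clone-fd select current all done
```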


Regards,
Christian Manal



Re: [Bacula-users] Charset in Bacula mails

2012-03-06 Thread Christian Manal
Am 06.03.2012 15:48, schrieb Tilman Schmidt:
 March is the only month whose German name contains a non-ASCII
 character (ä). So only during this month, Bacula's log messages
 on servers with a de locale contain 8-bit characters, like in:
 
 02-Mär 22:05 backup-dir JobId 987: Start Backup JobId 987,
 Job=backup.2012-03-02_22.05.00_30
 
 Unfortunately Bacula sends its mail messages without MIME headers,
 so non-ASCII characters are strictly speaking illegal and their
 interpretation is left to chance. For example, my Thunderbird
 renders the above as:
 
 02-MÀr 22:05 backup-dir JobId 987: Start Backup JobId 987,
 Job=backup.2012-03-02_22.05.00_30
 
 Not a big problem, but ugly nevertheless.
 Any ideas how to fix that?
 
 Thanks,
 Tilman

Hi,

according to the bsmtp(1) manpage, it supports UTF-8. You just need to
add the -8 option to your mailcommand(s) in bacula-dir.conf.
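
For example (path and addresses are placeholders; only the -8 flag is
the relevant part):

```
mailcommand = "/usr/sbin/bsmtp -8 -h mail.example.com -f \"Bacula <%r>\" -s \"Bacula: %t %e of %c %l\" %r"
```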


Regards,
Christian Manal

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d


Re: [Bacula-users] Testing Bacula configuration files

2012-02-01 Thread Christian Manal
Am 01.02.2012 14:28, schrieb Joe Nyland:
 
 Hello,
 
 I'm trying to set up verify jobs for my Bacula system.
 
 Before releasing the configuration to the production server, I would
 like to simply verify the configuration file, to make sure I've not
 messed some of the configuration up somewhere.
 
 From the manual, I have run:
 sudo bacula-dir -t -c bacula-dir.conf
 
 Nothing was returned from the command, so I presume it's ok. Just to
 make sure, I changed a few of the resources in the file, so that they
 were definitely invalid. I then re-ran the command above, but still
 nothing was returned...
 
 Does anyone know why this would happen? I've not been able to find any
 bugs logged on bugs.bacula.org along these lines, so I presume I'm doing
 something wrong!
 
 Kind regards,
 
 Joe Nyland


Hi,

if you only give a filename, without a path (even a relative one),
bacula-dir will look for it in the default configuration directory. For
example from my director:

  $ pwd
  /tmp
  $ truss bacula-dir -t -c bacula-dir.conf 2>&1 | grep bacula-dir.conf
  open("/etc/bacula/bacula-dir.conf", O_RDONLY)   = 3
  $
  $ truss bacula-dir -t -c ./bacula-dir.conf 2>&1 | grep bacula-dir.conf
  open("./bacula-dir.conf", O_RDONLY)             = 3

Assuming the file you want to test is in your working directory, this
might be your problem.


Regards,
Christian Manal



Re: [Bacula-users] Trouble with Accurate flag

2012-01-19 Thread Christian Manal
Am 19.01.2012 00:12, schrieb Peter:
 
 A couple of weeks ago I decided to add the 
   Accurate = yes
 flag to my backup jobs. I thought it shouldn't do any harm and
 might make restores more precise.
 
 This worked fine until yesterday. Yesterday, all of a sudden,
 my incremental backups decided to backup ALL EXISTING FILES
 (and not only the changed ones)! At least they tried to - they
 ran out of storage, as this was not planned. Luckily, that way I
 noticed the problem.
 
 I did check the usual possibilities and didn't find a problem -
 the timestamps of my files hadn't been tampered with, the fileset
 description hadn't been modified, and there was no failed full backup
 in the database which could have triggered a new full backup. 
 Also, the joblog entry looked okay so far:
 
   Backup Level:   Incremental, since=2012-01-16 09:12:26
   FileSet:SysBack 2008-02-10 06:46:30
   Scheduled time: 17-Jan-2012 09:12:00
 
 I am doing daily Incrementals and this here correctly shows the time
 from Monday morning to Tuesday morning.
 I restarted client and server and retried, but still it insisted on
 saving EVERY file.
 
 Then I removed the Accurate flag, and instantly the backup worked
 as it should!
 
 Then I figured - I had run OffSite backups during the night from
 Monday to Tuesday!
 
 My backup scheme is as follows: daily Incremental, monthly Full.
 And occasionally I run additional Full backups onto tape, to be 
 stored in a remote location.
 
 These OffSite backups are using the same Client and Fileset, but
 different Job. 
 And obviously they do not write Catalog (fileinfo), because these 
 tapes aren't intended for single-file-restore.
 
 Obviously these full backups, which have no catalog info, are
 used as the information source for the Accurate flag! And with
 no catalog info, it looked like there were no files at all contained 
 in the previous backup, and therefore EVERY file would be saved in 
 the Incremental, disregarding timestamps.
 
 With that understood, the solution was simple: I added another
 Client stanza with a different name for each machine, and let the OffSite
 backups run under that different Client name. I updated the old OffSite
 jobs in the database to the new Client id, and now everything seems
 to behave fine again.
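
For illustration, the extra Client stanza is nothing special (the names and address here are made up); it just gives the OffSite job a Client name of its own, so Accurate lookups no longer match the catalog-less full backups:

```
# bacula-dir.conf -- hypothetical second Client resource for the same machine
Client {
  Name = myhost-offsite-fd
  Address = myhost.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "same secret as the regular myhost-fd resource"
}
```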
 
 Then I checked the manual. The manual says:
In accurate mode, the File daemon knows exactly which files were
present after the last backup.
 But it does not say how "the last backup" is defined.
 
 Whereas the Incremental Feature clearly defines what is the last
 backup:
all files specified in the FileSet that have changed since the
last successful backup of the same Job using the same FileSet
and Client, will be backed up
 
 So, obviously the Accurate flag uses a different information source
 (disregarding the Job name) than the Incremental backup itself.
 
 Operating Version here is 
   Client:  Disp 5.0.3 (04Aug10) i386-portbld-freebsd7.4,freebsd,7.4-STABLE
 
 Maybe that's already fixed in newer versions?
 
 But at least with this version, this means you cannot rely on the
 Accurate flag when running different Jobs on the same Client and
 Fileset; it might lose data.
 
 
 PMc


Hi,

I had a similar problem a while back. The thing with accurate backups
is that only the client name and the fileset are used to identify
previous jobs. Here's the thread:

http://sourceforge.net/mailarchive/forum.php?thread_name=4DD38789.2010007%40informatik.uni-bremen.de&forum_name=bacula-users


Regards,
Christian Manal



Re: [Bacula-users] bconsole with json

2011-12-14 Thread Christian Manal
On 14.12.2011 08:57, Geert Stappers wrote:
 A few weeks ago I had the idea to clone bconsole into a JSON bconsole.
 It would mostly be a rewrite of the output functions, so that
 the output would be in JSON (see http://en.wikipedia.org/wiki/Json
 and RFC 4627 for details).
 That way it would be easier to give bconsole a web interface.
 
 I abandoned the idea as I had no clue how to add session handling,
 so that a web request would be answered on the right connection.
 
 Now I'm just mentioning it in the hope of sparking someone
 with more need for a web interface than me.

Hi,

if you want web access to a bconsole, you could use shellinabox as a
sort of wrapper for that:

   http://code.google.com/p/shellinabox/


Regards,
Christian Manal



Re: [Bacula-users] Client side functions

2011-11-04 Thread Christian Manal
On 04.11.2011 17:14, Christopher Geegan wrote:
 For example, it would be beneficial in some cases to be able to initiate
 a restore from the client.

Hi,

why don't you just install bconsole on your clients for that?


Regards,
Christian Manal



Re: [Bacula-users] FileSet Question (related to snapshots)

2011-10-31 Thread Christian Manal
On 29.10.2011 01:12, Blake Dunlap wrote:
 Greetings,
 
 Minor question, figured I'd try the users list first in case you guys could
 help. I have the following directory structure on a server that I back up,
 but there's a minor twist:
 
 FileSet {
   Name = filemonster-fs
   Include {
 *snip*
 
 File = /etc
 File = /usr/local/sbin
 File = /snapshot/webdata
   }
 *snip*
 
 }
 
 
 That snapshot directory (/snapshot/webdata) is actually a mounted snapshot
 of /data. Ideally, I would like bacula to store this as the actual path, and
 not the path it gets backed up from. It would greatly simplify restores
 among other things.
 
 I know there is the option of stripping X pieces of the path from a fileset,
 but it is fileset-wide to my knowledge. Is the best practice to just add a
 snapshot dir to the beginning and keep the same path structure, and have a
 separate FileSet for each such item?
 
 The reason I ask is I am also considering adding a PathReplace directive (or
 something similar) to facilitate the above, and I want to judge input first,
 and see if there is a better design option.
 
 
 -Blake

Hi,

I have something similar. I take most of my backups from ZFS snapshots,
which means I have '.zfs/snapshot/backup/' in the middle of all my
paths. To restore from that, I use 'RegexWhere' in my restore job
definition to snip that part out.
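
For example (a sketch; the exact pattern depends on your snapshot layout), the restore job carries a sed-style expression:

```
# bacula-dir.conf -- hypothetical restore job fragment; RegexWhere
# removes the snapshot component from the restored paths
Job {
  Name = "RestoreFiles"
  Type = Restore
  # ... other restore-job directives omitted ...
  RegexWhere = "!/.zfs/snapshot/backup!!"
}
```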


Regards,
Christian Manal



Re: [Bacula-users] Fwd: Freebsd snapshots and complaints Will not descend

2011-10-11 Thread Christian Manal
On 10.10.2011 21:11, Troy Kocher wrote:
 
 On 10,Oct 2011, at 1:12 PM, Martin Simmons wrote:
 
 On Mon, 10 Oct 2011 11:51:14 -0500, Troy Kocher said:


 08-Oct 23:57 kfoobarb-sd JobId 2858: Job write elapsed time = 14:45:49, 
 Transfer rate = 2.702 M Bytes/second 

 Are you running an automounter for home directories?  That could explain both
 the Will not descend messages and also why the warnings vary over time.

 __Martin


 
 I'm not running an automounter.  And as I mentioned this error is 
 intermittent.  I run this job incremental daily without complaint, I get this 
 issue on the differential weekly run.  Regarding the time warning, I 
 corrected this once by forcing an ntp on the fd client.  I think my ntp must 
 not be running properly over there.
 
 Beginning to feel like it's something with the snapshot (/mnt/foobar) not 
 responding as a normal file system under load, and telling bacula-fd access 
 is delayed/denied/?, then bacula understands the delay as device unreachable?
 
 Troy


Hi,

Bacula won't recurse into other filesystems unless you explicitly tell
it to. Look at the onefs option of the FileSet resource:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#8566
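
A minimal sketch (names and paths are placeholders):

```
FileSet {
  Name = example-fs
  Include {
    Options {
      onefs = no    # descend into other mounted filesystems, e.g. snapshots
    }
    File = /mnt/foobar
  }
}
```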


Regards,
Christian Manal



Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Christian Manal
On 11.10.2011 14:04, Jarrod Holder wrote:
 Bacula version 5.0.3
  
 In BAT, when trying to restore a directory (roughly 31,000 files in 560 sub 
 folders), the Filling Database Table step takes an extremely long time to 
 complete (about an hour or so).
  
 I've been looking around for a way to speed this up.  Found a post on here 
 that referred to an article that basically said PostgreSQL was the way to go 
 as far as speed 
 (http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog).
   So I converted from MySQL to PostgreSQL using the conversion procedure in 
 the Bacula documentation.  We are now on PostgreSQL, but the speed seems just 
 as slow (if not slower).  Is there anything else that can be done to speed 
 this process up?
  
 I've also tried running the DB under MySQL with MyISAM and InnoDB tables. 
 Both had the same slow performance here.  With MySQL, I also tried using the 
 my-large.cnf and my-huge.cnf files.  Neither helped.
  
 Server load is very low during this process (0.06).  BAT process is at about 
 3% cpu and 1.6% memory.  Postgres service is about 1%cpu, 0.6% memory.  Drive 
 array is pretty quiet also.
  
 Any help would be greatly appreciated.  If any extra info is needed, I will 
 gladly provide it.


Hi,

what OS are you running on? Did you build Bacula from the tarball? I had
a similar problem on Solaris 10, with the stock Postgres 8.3. Bacula's
'configure' didn't detect that Postgres was thread-safe, so it omitted
--enable-batch-insert.

Without batch-insert, a full backup of my biggest fileset took roughly
24 hours. The backup of the data itself was (and still is) only 4 to 5
hours, the rest was despooling attributes into the database (I only
noticed this when I enabled attribute spooling).

With batch insert (I had to hack around in the 'configure' script a
little), the time for attribute despooling shrank to maybe 20
_minutes_. It helps *a lot*.


Regards,
Christian Manal



Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread Christian Manal
On 07.10.2011 19:43, John Drescher wrote:
 On Fri, Oct 7, 2011 at 12:51 PM, Jeff Shanholtz jeffs...@shanholtz.com 
 wrote:
 I appreciate that, but either you misunderstood what I'm trying to do or I
 just can't seem to make sense of the search results I'm getting as they
 apply to my issue. I did see one web page that decodes the base64 string
 from a member of this mailing list, but that operates on a single base64
 string, not on a whole job (and even if it did, I don't know how to get
 bacula to tell me the base64 strings).

 I want to either get a full list of files from a job complete with file
 sizes so I can sort on the file sizes, or query for files greater than a
 certain size. I also probably should have mentioned that I'm stuck on Bacula
 v3.03 because it runs on a windows server.

 Could you be a little more specific on what kind of answer I'm looking for
 in the google results?

 
 I believe you need to write a query that for every file it decodes the
 base64 strings. I remember this discussion although it has been a long
 time so I do not remember the details. I would normally try to track
 this down and help you out however I am swamped so for now this is all
 I can do..
 
 John

You are correct. There is a field called 'lstat' in the 'File' table
that contains base64-encoded file attributes. The file size is somewhere
in there. I think the function in the Bacula source that decodes that
base64 string is called 'decode_stat' (I don't know where it sits exactly;
grep should help).
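
As a sketch of what decode_stat does for the size field (assumptions, from memory of the Bacula source: the fields are space-separated integers, each encoded big-endian 6 bits per character with the standard base64 alphabet, and st_size is the 8th field; verify against decode_stat before relying on this):

```python
# Hypothetical decoder for Bacula's 'lstat' catalog field.
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def from_base64(s):
    """Decode one of Bacula's base64-encoded integers (6 bits per char)."""
    val = 0
    for c in s:
        val = (val << 6) + B64.index(c)
    return val

def lstat_size(lstat):
    """Extract st_size, assumed to be field 8 (index 7) of the lstat string."""
    return from_base64(lstat.split()[7])

print(lstat_size("P P A A A A A BA A A A A A A A A"))  # 64
```

Feeding this the lstat column from a SELECT against the File table should let you sort or filter by size outside of Bacula.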


Regards,
Christian Manal



Re: [Bacula-users] Bacula Catalog Backup to tape

2011-08-27 Thread Christian Manal
On 27.08.2011 17:15, John Drescher wrote:
 On Sat, Aug 27, 2011 at 11:13 AM, John Drescher dresche...@gmail.com wrote:
 On Sat, Aug 27, 2011 at 4:06 AM, frank_sg
 bacula-fo...@backupcentral.com wrote:
 Hi,

 I do my catalog backups to tape. I have an autoloader (Dell TL2000) with an 
 LTO4 drive. I have created a pool Catalog containing two tapes. Since my 
 catalog is not that big  8)  all the catalog backups have been written to 
 the first tape in the pool. But I would like to switch the tape for every 
 catalog backup - so the first backup goes to the first tape, the second to the 
 second tape, the third to the first tape again, and so on. How is this possible with Bacula?


 Use volume once. With a 1 day retention period.
 
 If you want to put more than 1 catalog on a tape use a different pool
 for each of the tapes and use the schedule resource to schedule both
 pools every other day.
 
 John

A way to do it with both tapes in one pool would be a RunAfterJob
script which changes the active tape to status 'Used' and the
inactive tape to 'Append'.

Something along these lines:

   #!/bin/bash

   POOL=catalog

   ACTIVE_TAPE=$(echo "list media pool=$POOL" | bconsole | grep Append | \
       awk '{print $4}')
   INACTIVE_TAPE=$(echo "list media pool=$POOL" | bconsole | grep Used | \
       awk '{print $4}')

   cat <<EOF | bconsole
update volume=$ACTIVE_TAPE volstatus=Used
update volume=$INACTIVE_TAPE volstatus=Append
EOF
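
Hooked into the catalog backup job with something like this (the script path is just an example):

```
# bacula-dir.conf -- in the catalog backup Job resource
RunScript {
  RunsWhen = After
  RunsOnClient = No
  Command = "/usr/local/etc/bacula/swap-catalog-tape.sh"
}
```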


Regards,
Christian Manal



Re: [Bacula-users] Script Client Run and

2011-08-24 Thread Christian Manal
On 24.08.2011 15:09, Yuri Timofeev wrote:
 The problem is in the '>' symbol.
 File /tmp/test.log is not created.
 Only the first part of the command, 'ls -la', runs.
 The '> /tmp/test.log' part does not work.

Hi,

from the documentation for RunScript [1]:

   In addition, the command string is parsed then fed to the OS, which
   means that the path will be searched to execute your specified
   command, but there is no shell interpretation, as a consequence, if
   you invoke complicated commands or want any shell features such as
   redirection or piping, you must call a shell script and do it inside
   that script.
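
A minimal wrapper for the example above (the script name and location are just examples) would look like this:

```shell
#!/bin/sh
# wrapper.sh -- hypothetical helper script; Bacula's RunScript executes
# this one command, and the shell inside the script performs the
# redirection that Bacula itself won't do.
ls -la /tmp > /tmp/test.log 2>&1
```

The RunScript Command then simply points at the wrapper script instead of the raw command line.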


Regards,
Christian Manal


[1]
http://bacula.org/5.1.x-manuals/en/main/main/Configuring_Director.html#7481



Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-07-07 Thread Christian Manal
On 18.05.2011 14:22, Christian Manal wrote:
 On 18.05.2011 13:08, Graham Keeling wrote:
 On Wed, May 18, 2011 at 01:02:08PM +0200, Christian Manal wrote:
 On 18.05.2011 12:26, Graham Keeling wrote:
 On Wed, May 18, 2011 at 11:54:18AM +0200, Christian Manal wrote:
 On 18.05.2011 11:13, Graham Keeling wrote:
 If times don't explain it, take a look at this bacula code from
 src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
 the jobids from the database. You should be able to construct very 
 similar
 queries and run them by hand to see what the database says.
 Or add some debug to get the exact sql queries being used.

/* First, find the last good Full backup for this job/client/fileset 
 */
snip

 Thank you. The problem seems to be that the query doesn't account for
 the job name it is supposed to do, just the client and fileset. I have
 two jobs with the same fileset for each client. One backs up to local
 storage with a full/diff/incr cycle and a rather long retention period,
 the other does monthly full backups to another building for DR and gets
 immediately purged.

 I enabled accurate for the onsite job but the query returns the last
 full run of the offsite job. When I add AND Name = 'JobName' to the
 query it gets the right jobid.

 I think this qualifies for a bug, doesn't it?

 I agree with you, but...
 I have just remembered coming across this before. The thread starts here:
 http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04050.html

 Kern:
 Bacula does not support this option.

 Me:
 It does appear to be *trying* to support it, as some parts of the code 
 that
 figure out dependent jobs take note of the job name, though others do not.

 Kern:
 I wouldn't exactly say that it is trying to support it, but rather that 
 since 
 the program is so complicated, and I try not to restrict it too much, 
 there 
 are places where it can seem to work, but it is just not designed to do so 
 (at least at the moment), and thus it will not work.  It isn't that I 
 don't 
 want it to work, but there is only so much that the developers can do in 
 the 
 time we have.

 Unfortunately, what you are trying to do is simply not possible in the way you 
 are trying to do it with the current code.

 Great... so I have to create two identical filesets to get this to work?

 Or add AND Name = 'JobName', as was your idea. Maybe it works fine.
 
 Well, going by the thread you linked I just noticed that there is also
 the issue of Bacula using the wrong jobs to create the restore filetree.
 I'd rather not find out what else is affected by this and just do what
 works. I also lack the c(++) skills to look further into this, I'm afraid.
 
 Regards,
 Christian Manal
 
 

 If this kind of setup is not supported, it would be nice if I'd get at
 least a warning by './bacula-dir -t' or something.

 Thanks for the help, though, I'll fix my config.


 Regards,
 Christian Manal





 Regards,
 Christian Manal


Hi again list,

I have now made separate filesets for the offsite backups and waited for the
wrong jobs to cycle out, to get around this issue. But when I enable
'accurate' on the affected jobs now, I still get the same error:

   Fatal error: Cannot find previous jobids.
   Fatal error: Network error with FD during Backup: ERR=Interrupted
system call

Running the queries from src/cats/sql_get.c by hand returns the right
job ids. What else can I do to debug this problem?


Regards,
Christian Manal




Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-29 Thread Christian Manal
On 28.06.2011 18:40, Steve Costaras wrote:
 
 
 How would the various parts communicate if you're running multiple
 instances on different ports?   I would think just creating multiple
 jobs would create multiple socket streams and do the same thing.

I should have gotten another coffee before writing that mail. Of course
you are right. Splitting the job would be sufficient. No need to run
multiple FDs.


Regards,
Christian Manal


 
 
 
 On 2011-06-28 02:09, Christian Manal wrote:
   - File daemon is single threaded, so it is limiting backup performance.
 Is there a way to start more than one stream at the same time for
 a single machine backup? Right now I have all the file systems for a
 single client in the same file set.

   - Tied in with the above, accurate backups cut into performance even
 more when doing all the md5/sha1 calcs. Splitting this, perhaps together
 with the above, across multiple threads would really help.

   - How to stream a single job to multiple tape drives. Couldn't
 figure this out so that only one tape drive is being used.

   - Spooling to disk first and then to tape is a killer. If multiple
 streams could happen at once, this might mitigate it, or some type of
 continuous spooling. How do others do this?

 Hi,

 I haven't tried it, but shouldn't it be possible to run multiple instances
 of the FD on different ports? You could split the fileset into multiple
 jobs, which could then run concurrently on multiple FDs.


 Regards,
 Christian Manal





Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-06-28 Thread Christian Manal
  - File daemon is single threaded, so it is limiting backup performance. Is there 
 a way to start more than one stream at the same time for a single machine 
 backup? Right now I have all the file systems for a single client in the same 
 file set.
 
  - Tied in with the above, accurate backups cut into performance even more when 
 doing all the md5/sha1 calcs. Splitting this, perhaps together with the above, 
 across multiple threads would really help.
 
  - How to stream a single job to multiple tape drives. Couldn't figure this 
 out so that only one tape drive is being used.
 
  - Spooling to disk first and then to tape is a killer. If multiple streams could 
 happen at once, this might mitigate it, or some type of continuous spooling. How 
 do others do this?


Hi,

I haven't tried it, but shouldn't it be possible to run multiple instances
of the FD on different ports? You could split the fileset into multiple
jobs, which could then run concurrently on multiple FDs.
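
An untested sketch of what that might look like (names, port and paths are invented):

```
# second-fd.conf -- hypothetical second FD instance on the same host
FileDaemon {
  Name = bigserver2-fd
  FDport = 9112                        # the default instance keeps 9102
  WorkingDirectory = /var/bacula/working2
  Pid Directory = /var/run
}

# bacula-dir.conf -- Client resource pointing at the second instance
Client {
  Name = bigserver2-fd
  Address = bigserver.example.com
  FDPort = 9112
  Catalog = MyCatalog
  Password = "secret for the second instance"
}
```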


Regards,
Christian Manal



Re: [Bacula-users] Fileset: How exclude all except...

2011-06-16 Thread Christian Manal
On 16.06.2011 18:12, Stuart McGraw wrote:
 I am having some difficulty specifying a fileset.
 
 I want to exclude all dot files in home directories 
 (/home/*/.*), *except* the directories /home/*/.backup/.
 
 Any hints on how to do this?
 

Hi,

I asked something similar a while back. Look here:

   http://sourceforge.net/mailarchive/message.php?msg_id=27098562
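
For the record, a sketch of the approach from that thread (untested here; it assumes Bacula applies the first matching Options block, so the keep rule has to come before the exclude rule):

```
FileSet {
  Name = home-fs
  Include {
    Options {
      wilddir = "/home/*/.backup"   # keep these directories
    }
    Options {
      wild = "/home/*/.*"           # exclude all other dot files
      exclude = yes
    }
    File = /home
  }
}
```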


Regards,
Christian Manal



Re: [Bacula-users] Understanding Encryption

2011-06-06 Thread Christian Manal
On 27.05.2011 17:02, Tim Gustafson wrote:
 one master key for each client wouldn't make that much sense, since you
 could just keep the client keys in a safe place. I have one master key for
 everything. But I don't keep the private key on the director. I have it
 on a pen drive and (to be extra sure) printed out in a safe on site and
 on an encrypted pen drive that I always carry with me.
 
 So, the master key is a second key that can be used to decrypt the backup, 
 then.  The people whose servers I'm backing up might not want me to have 
 access to their data, so those users would have to manage their own master 
 keys, correct?

Sorry for the late response, I was out of office until now.

That would be correct.


Regards,
Christian Manal



Re: [Bacula-users] Understanding Encryption

2011-05-26 Thread Christian Manal
On 26.05.2011 17:24, Tim Gustafson wrote:
 Hi there,
 
 I was just looking at the following documentation page:
 
 http://www.bacula.org/en/dev-manual/main/main/Data_Encryption.html
 
 That page contains information about generating a master key and then also 
 a set of client keys.  However, the page is not clear whether you're 
 supposed to use the same master key for all your clients, or if you should 
 have a different master key for each client.  Should I be sharing the 
 master.cert file with each client and keeping the master.key file on my 
 bacula-dir server, or does each client need its own master.cert and 
 master.key file?
 

Hi,

one master key for each client wouldn't make that much sense, since you
could just keep the client keys in a safe place.

I have one master key for everything. But I don't keep the private key
on the director. I have it on a pen drive and (to be extra sure) printed
out in a safe on site and on an encrypted pen drive that I always carry
with me.
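
In bacula-fd.conf terms that looks roughly like this (paths are examples); each client has its own keypair, plus only the *public* part of the shared master key:

```
# bacula-fd.conf -- data encryption directives (paths are examples)
PKI Signatures = Yes
PKI Encryption = Yes
PKI Keypair    = "/etc/bacula/client.pem"    # this client's cert + key
PKI Master Key = "/etc/bacula/master.cert"   # public cert only; keep the
                                             # private master.key offline
```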


Regards,
Christian Manal



[Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Christian Manal
Hi list,

I have a problem regarding accurate backups. When I set 'Accurate = yes'
for any given job in my setup, the next run fails with the following
error(s):

   Fatal error: Cannot find previous jobids.
   Fatal error: Network error with FD during Backup: ERR=Interrupted
system call

The strange thing is, contrary to everything google came up with for
these messages, that the catalog seems to be in order. At least I can
build a filetree for the most recent backups of all my clients in both
bconsole and bat and restore files without a problem.

Does anyone have an idea what could be going on here? My Bacula version
is 5.0.3 with a Postgres 8.3 catalog on Solaris 10. Any pointers would
be appreciated.


Regards,
Christian Manal



Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Christian Manal
On 18.05.2011 11:13, Graham Keeling wrote:
 If times don't explain it, take a look at this bacula code from
 src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
 the jobids from the database. You should be able to construct very similar
 queries and run them by hand to see what the database says.
 Or add some debug to get the exact sql queries being used.
 
/* First, find the last good Full backup for this job/client/fileset */
snip

Thank you. The problem seems to be that the query doesn't account for
the job name as it is supposed to, only the client and fileset. I have
two jobs with the same fileset for each client. One backs up to local
storage with a full/diff/incr cycle and a rather long retention period,
the other does monthly full backups to another building for DR and gets
immediately purged.

I enabled accurate for the onsite job but the query returns the last
full run of the offsite job. When I add AND Name = 'JobName' to the
query it gets the right jobid.
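
For reference, the fixed-up query looked roughly like this (paraphrased from the 5.0.x src/cats/sql_get.c; the placeholder IDs, status values and exact column list may differ in your version):

```sql
-- Last good Full backup for a client/fileset; the stock query lacks
-- the Name condition, which is what this thread is about
SELECT JobId
  FROM Job
 WHERE ClientId = 1            -- placeholder
   AND FileSetId = 2           -- placeholder
   AND Level = 'F'
   AND JobStatus = 'T'
   AND Name = 'OnsiteJob'      -- the added condition
 ORDER BY StartTime DESC
 LIMIT 1;
```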

I think this qualifies for a bug, doesn't it?


Regards,
Christian Manal



Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Christian Manal
On 18.05.2011 12:26, Graham Keeling wrote:
 On Wed, May 18, 2011 at 11:54:18AM +0200, Christian Manal wrote:
 On 18.05.2011 11:13, Graham Keeling wrote:
 If times don't explain it, take a look at this bacula code from
 src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
 the jobids from the database. You should be able to construct very similar
 queries and run them by hand to see what the database says.
 Or add some debug to get the exact sql queries being used.

/* First, find the last good Full backup for this job/client/fileset */
snip

 Thank you. The problem seems to be that the query doesn't account for
 the job name it is supposed to do, just the client and fileset. I have
 two jobs with the same fileset for each client. One backs up to local
 storage with a full/diff/incr cycle and a rather long retention period,
 the other does monthly full backups to another building for DR and gets
 immediately purged.

 I enabled accurate for the onsite job but the query returns the last
 full run of the offsite job. When I add AND Name = 'JobName' to the
 query it gets the right jobid.

 I think this qualifies for a bug, doesn't it?
 
 I agree with you, but...
 I have just remembered coming across this before. The thread starts here:
 http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04050.html
 
 Kern:
 Bacula does not support this option.
 
 Me:
 It does appear to be *trying* to support it, as some parts of the code that
 figure out dependent jobs take note of the job name, though others do not.
 
 Kern:
 I wouldn't exactly say that it is trying to support it, but rather that 
 since 
 the program is so complicated, and I try not to restrict it too much, there 
 are places where it can seem to work, but it is just not designed to do so 
 (at least at the moment), and thus it will not work.  It isn't that I don't 
 want it to work, but there is only so much that the developers can do in the 
 time we have.
 
 Unfortunately, what you are trying to do is simply not possible in the way you 
 are trying to do it with the current code.

Great... so I have to create two identical filesets to get this to work?
If this kind of setup is not supported, it would be nice to get at
least a warning from './bacula-dir -t' or something.

Thanks for the help, though, I'll fix my config.


Regards,
Christian Manal


 
 
 
 Regards,
 Christian Manal

 --
 What Every C/C++ and Fortran developer Should Know!
 Read this article and learn how Intel has extended the reach of its 
 next-generation tools to help Windows* and Linux* C/C++ and Fortran 
 developers boost performance applications - including clusters. 
 http://p.sf.net/sfu/intel-dev2devmay
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 
 
 




Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Christian Manal
On 18.05.2011 13:08, Graham Keeling wrote:
 On Wed, May 18, 2011 at 01:02:08PM +0200, Christian Manal wrote:
 On 18.05.2011 12:26, Graham Keeling wrote:
 On Wed, May 18, 2011 at 11:54:18AM +0200, Christian Manal wrote:
 On 18.05.2011 11:13, Graham Keeling wrote:
 If times don't explain it, take a look at this bacula code from
 src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
 the jobids from the database. You should be able to construct very similar
 queries and run them by hand to see what the database says.
 Or add some debug to get the exact sql queries being used.

/* First, find the last good Full backup for this job/client/fileset */
snip

 Thank you. The problem seems to be that the query doesn't account for
 the job name it is supposed to do, just the client and fileset. I have
 two jobs with the same fileset for each client. One backs up to local
 storage with a full/diff/incr cycle and a rather long retention period,
 the other does monthly full backups to another building for DR and gets
 immediately purged.

 I enabled accurate for the onsite job but the query returns the last
 full run of the offsite job. When I add AND Name = 'JobName' to the
 query it gets the right jobid.

 I think this qualifies for a bug, doesn't it?

 I agree with you, but...
 I have just remembered coming across this before. The thread starts here:
 http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04050.html

 Kern:
 Bacula does not support this option.

 Me:
 It does appear to be *trying* to support it, as some parts of the code that
 figure out dependent jobs take note of the job name, though others do not.

 Kern:
 I wouldn't exactly say that it is trying to support it, but rather that
 since the program is so complicated, and I try not to restrict it too much,
 there are places where it can seem to work, but it is just not designed to
 do so (at least at the moment), and thus it will not work.  It isn't that I
 don't want it to work, but there is only so much that the developers can do
 in the time we have.

 Unfortunately, what you are trying to do is simply not possible in the way
 you are trying to do it with the current code.

 Great... so I have to create two identical filesets to get this to work?
 
 Or add AND Name = 'JobName', as was your idea. Maybe it works fine.

Well, going by the thread you linked I just noticed that there is also
the issue of Bacula using the wrong jobs to create the restore filetree.
I'd rather not find out what else is affected by this and just do what
works. I also lack the c(++) skills to look further into this, I'm afraid.
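
For anyone who lands here with the same problem: the change discussed above
can be sketched as a catalog query. This is a hedged sketch, not the exact
code from src/cats/sql_get.c; the table and column names follow the 5.0.x
catalog schema, and the numeric ids are placeholders:

```sql
-- Find the last good Full backup for a specific *job name*, not just
-- client + fileset. The "AND Name = ..." condition is the one the
-- stock query was missing in this thread.
SELECT JobId
  FROM Job
 WHERE JobStatus = 'T'        -- job terminated successfully
   AND Level     = 'F'        -- Full backup
   AND ClientId  = 1          -- placeholder: your client id
   AND FileSetId = 1          -- placeholder: your fileset id
   AND Name      = 'JobName'  -- the missing job-name filter
 ORDER BY StartTime DESC
 LIMIT 1;
```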

Regards,
Christian Manal


 
 If this kind of setup is not supported, it would be nice if I'd get at
 least a warning by './bacula-dir -t' or something.

 Thanks for the help, though, I'll fix my config.


 Regards,
 Christian Manal





 Regards,
 Christian Manal






 
 

Re: [Bacula-users] How does Bacula back-up files?

2011-05-13 Thread Christian Manal
On 13.05.2011 14:32, obviously wrote:
 Hello,
 
 I have a question I can't solve...
 
 This is the situation:
 
 I create a file with: dd if=/dev/urandom of=test.bin bs=10M count=300
 This gives me a file of 3GB.
 I check its MD5 with md5sum test.bin
 
 I clear my cache with echo 3 > /proc/sys/vm/drop_caches.
 
 I check my cache with free -m.
 
 I start a backup with Bacula of only 1 file, namely test.bin
 
 Again, I flush the cache, and while the backup job is running I remove the
 test.bin file on the server.
 
 And Bacula doesn't react at all; it keeps backing up the file like it is
 still there.
 
 The backup finishes with no warnings, even though the file was removed
 during the backup.
 
 I restore the test.bin file from tape and check its md5, and strangely
 the md5sum is the same...
 
 So my question: how does Bacula do this? Because I remove the file during
 the backup and flush the cache frequently...
 
 I hope you guys understand my question; my English is really bad :) excuse me...
 

Hi,

deleting a file usually just unlinks it. You don't see it in the
filesystem anymore, but the data is still physically on disk, and the
space it consumes isn't released until the last file handle referring
to it (in this case Bacula's) is closed. So it's nothing
Bacula-specific.
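
The unlink behaviour is easy to demonstrate outside of Bacula. The sketch
below stands in for the backup process by holding an ordinary file
descriptor open:

```shell
# A deleted file stays readable through any descriptor that is still open;
# its blocks are only freed when the last handle goes away.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"      # open a read handle, as a running backup would
rm "$tmp"           # unlink: the name is gone, the data is not
cat <&3             # prints: still here
exec 3<&-           # closing the last handle finally frees the space
```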


Regards,
Christian Manal

--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Removing purged volumes form database

2011-05-12 Thread Christian Manal
On 12.05.2011 13:14, Richard Marnau wrote:
 Hi,
  
 for some reason old backups are still in the database, even if they are 
 purged.
  
 | 635 | Vol0635 | Purged | 1 | 928,733,695 | 0 | 7,776,000 | 0 | 0 | 0 | File | 2010-10-11 23:47:09 |
 | 636 | Vol0636 | Purged | 1 |     562,976 | 0 | 7,776,000 | 0 | 0 | 0 | File | 2010-10-11 23:47:53 |
 | 637 | Vol0637 | Purged | 1 |   1,240,464 | 0 | 7,776,000 | 0 | 0 | 0 | File | 2010-10-11 23:48:54 |
 | 638 | Vol0638 | Purged | 1 | 106,572,432 | 0 | 7,776,000 | 0 | 0 | 0 | File | 2010-10-11 23:52:54 |
  
 Pool configuration:
  
 --
 Pool {
   Name = File
   Pool Type = Backup
   Recycle = no                # do not reuse volumes
   AutoPrune = yes             # prune expired volumes
   Action On Purge = Truncate  # truncate the volume data on purge
   Volume Retention = 90 days
   File Retention = 90 days
   Job Retention = 90 days
   Maximum Volume Bytes = 20G  # limit volume size to something reasonable
   Maximum Volumes = 900       # limit number of volumes in pool
 }
  
 I've tried to update the volumes, the pool and the statistics as suggested by 
 the documentation, but the volumes are still in the list.
 What do I miss ?
  
 Cheers,
 
 Richard

Hi,

purging only removes the jobs and files associated with that volume from
the database. To get rid of the volume itself, you have to delete it
(echo "delete volume=<volume name>" | bconsole).
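
If there are many purged volumes, deleting them one by one gets tedious. A
loop like the following might help; the parsing assumes the pipe-separated
"list volumes" table shown above, which can vary between bconsole versions,
so verify the field layout on your installation first:

```shell
# Hedged sketch: delete every volume that "list volumes" reports as Purged.
# In the pipe-separated table, field 3 is the volume name, field 4 the status.
purged_volumes() {
    awk -F'|' '$4 ~ /Purged/ { gsub(/ /, "", $3); print $3 }'
}

# Uncomment to run for real (requires a configured bconsole):
# echo "list volumes" | bconsole | purged_volumes | while read -r vol; do
#     echo "delete volume=$vol yes" | bconsole
# done
```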


Regards,
Christian Manal



Re: [Bacula-users] Bacula deciding to ignore files, again. Help please

2011-04-29 Thread Christian Manal
On 29.04.2011 14:31, dobbin wrote:
 Right, I used a little tool to update the file modified attribute to the 
 current time and then ran the backup. Now it's backing up the massive file.
 
 So bacula must be looking at this and deciding that it's already backed it up 
 because the modified attribute is older than the previous backup.
 But it doesn't actually check or compare against what it's already backed up.
 
 Is there any way around this?
 

Hi,

you could use accurate backups:

http://bacula.org/5.0.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION0081


Regards,
Christian Manal

--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] FileSet -- default options?

2011-04-19 Thread Christian Manal
On 19.04.2011 15:30, hymie! wrote:
 Can I set default options for FileSets the way I can for Jobs?  Or,
 can I somehow include an external file into my bacula-dir.conf file
 at a specific point?

Hi,

I don't think there is something like JobDefs for FileSets, but you can
include external files anywhere in your configuration by putting an @ in
front of the path.

Example:

   FileSet {
      Name = Windows-C
      Include {
         @/path/to/options.conf
         File = C:/
      }
   }
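
For illustration, a hypothetical /path/to/options.conf pulled in by the @
directive could carry the shared Options block (the option values here are
just examples, not anything from the original poster's setup):

```
# /path/to/options.conf -- shared fileset options (example values)
Options {
   Signature = MD5
   Compression = GZIP
}
```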


Regards,
Christian Manal

--
Benefiting from Server Virtualization: Beyond Initial Workload 
Consolidation -- Increasing the use of server virtualization is a top
priority. Virtualization can reduce costs, simplify management, and improve 
application availability and disaster protection. Learn more about boosting 
the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] speed up my backups

2011-04-19 Thread Christian Manal
On 19.04.2011 15:37, hymie! wrote:
 
 So one of my machines has a few zillion tiny little files.
 
 My full backup took 44 hours.  I can deal with that if I have to.
 My incremental backup has been running for 10 hours now.
 Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
 Files Examined=14,675,372
 
 I know that bacula has to look at each file to determine if it's
 changed.  And I am investigating whether or not we actually need to
 keep all of these little files, or if we can zip them up into archives.
 
 In the meantime,  I'm just wondering if there is some why to speed up my
 backups.  For example,
 signature=sizeonly
 or
 signature=stupid
 or some other undocumented and unrecommended (but needed) way to
 speed up the verification of file changes (or lack thereof).
 
 Thanks.
 
 --hymie!http://lactose.homelinux.net/~hymiehy...@lactose.homelinux.net

Hi,

do you use accurate backups? If not, the signature isn't used anyway.
Regular incrementals and differentials are done by timestamp. If you do,
you can specify the metadata that is used by accurate jobs to compare
files in the fileset:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#8553


Regards,
Christian Manal



Re: [Bacula-users] force backup of unchanged file in incremental backup

2011-04-14 Thread Christian Manal
On 14.04.2011 08:33, James Harper wrote:
 The last modified datestamp on MSSQL database files doesn't get
 changed unless the actual file dimensions change (eg it 'grows') or when
 the file is closed. This means that an incremental backup won't
 necessarily back up the database files unless they have changed.
 Accurate won't catch this either as the metadata it uses will be
 identical.
 
 Is there a way to force the backup of specific unchanged files during an
 incremental or differential backup? Eg:
 
 Option {
   File = C:/database/mydb.mdf
   Always Back Up = Yes
 }
 
 Thanks
 
 James

Hi,

does the file change at all? If so, you can just adjust the metadata
that Accurate uses to compare files so that it includes a checksum. That
is done in the fileset:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#8553
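
Concretely, the comparison fields are chosen with the Accurate option inside
the fileset's Options block. The letters below are my reading of the 5.0.x
documentation linked above, so double-check them against your manual:

```
Options {
   # Compare size (s) and MD5 checksum (5) instead of the default
   # mtime/ctime check, so a file whose timestamps never move is
   # still re-backed-up when its contents change.
   Accurate = s5
   Signature = MD5   # a signature must be computed for '5' to work
}
```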


Regards,
Christian Manal



Re: [Bacula-users] Backing up ZFS with ACLs?

2011-04-05 Thread Christian Manal
On 05.04.2011 21:24, Roy Sigurd Karlsbakk wrote:
 Hi all
 
 How is Bacula suited to backing up ZFS with its ACLs? We're using ZFS for 
 Windows and Unix storage, and the Windows ACLs (that is, ZFS ACLs) can be 
 rather diverse.
 
 Vennlige hilsener / Best regards
 
 roy

Hi,

ZFS ACLs have been fully supported for quite some time. Just put
"aclsupport = yes" in the Options block of your fileset(s) and you are
good to go.

For reference:

http://bacula.org/5.0.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION008244000

http://bacula.org/5.0.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION0083

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#ACLSupport
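
In fileset terms that is a one-liner. The sketch below also shows
xattrsupport, which is my addition (ZFS extended attributes often travel
with the ACLs), not something from the original question:

```
Options {
   aclsupport = yes     # store and restore ZFS/NFSv4 ACLs
   xattrsupport = yes   # assumption: extended attributes too (Bacula 5.0+)
}
```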


Regards,
Christian Manal

--
Xperia(TM) PLAY
It's a major breakthrough. An authentic gaming
smartphone on the nation's most reliable network.
And it wants your games.
http://p.sf.net/sfu/verizon-sfdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] My Documents

2011-03-30 Thread Christian Manal
On 30.03.2011 23:05, Paul Fontenot wrote:
 I am attempting to backup only the My Documents directory on my
 Windows machines and I'm not having any luck. Here is my FileSet
 directive
 
 FileSet {
 Name = My Documents
 Enable VSS = yes
 Include {
 File = C:/Documents and Settings/*/My Documents
 }
 }
 
 I get this error message: "Could not stat C:\Documents and
 Settings\*\My Documents: ERR=The filename, directory name, or volume
 label syntax is incorrect"
 
 I imagine this is fairly simple to do and I am only overlooking the obvious.
 

Hi,

look for the WildDir option in the FileSet docs.
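
A fileset built around WildDir might look like the sketch below. The two
Options blocks (match first, then exclude everything else) follow the usual
pattern from the FileSet documentation, but this is untested here, so try it
with the estimate command before relying on it:

```
FileSet {
   Name = "My Documents"
   Enable VSS = yes
   Include {
      Options {
         # match the per-user My Documents directories...
         WildDir = "C:/Documents and Settings/*/My Documents"
         # ...and everything inside them
         WildFile = "C:/Documents and Settings/*/My Documents/*"
      }
      Options {
         RegexDir = ".*"   # exclude every directory not matched above
         Exclude = yes
      }
      File = "C:/Documents and Settings"
   }
}
```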


Regards,
Christian Manal

--
Create and publish websites with WebMatrix
Use the most popular FREE web apps or write code yourself; 
WebMatrix provides all the features you need to develop and 
publish your website. http://p.sf.net/sfu/ms-webmatrix-sf
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Resource Fileset.

2011-03-22 Thread Christian Manal
On 22.03.2011 18:06, Angelo Braga wrote:
 Hello Friends,
 
 I have a server with the folder /opt2; inside this folder I have many
 files, some of them with the .gz extension. In my FileSet config, how
 can I tell the Bacula Director to get only the files with the .gz
 extension?
 
 Remember that I also have to back up other folders on the same server.
 
 Thanks,
 
 Angelo
 

Hi,

this is showcased in the docs:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00188
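
Along the lines of that documentation section, a fileset restricted to .gz
files under /opt2 could be sketched like this (hedged: check the wildcard
behaviour with the estimate command before trusting it):

```
FileSet {
   Name = "Opt2-Gz"
   Include {
      Options {
         WildFile = "/opt2/*.gz"   # keep only .gz files
      }
      Options {
         RegexFile = ".*"          # drop every other file
         Exclude = yes
      }
      File = /opt2
   }
}
```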


Regards,
Christian Manal

--
Enable your software for Intel(R) Active Management Technology to meet the
growing manageability and security demands of your customers. Businesses
are taking advantage of Intel(R) vPro (TM) technology - will your software 
be a part of the solution? Download the Intel(R) Manageability Checker 
today! http://p.sf.net/sfu/intel-dev2devmar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Accurate backup and memory usage

2011-03-19 Thread Christian Manal
On 18.03.2011 21:37, Martin Simmons wrote:
 On Fri, 18 Mar 2011 20:47:03 +0100, Christian Manal said:

 On 18.03.2011 19:26, Martin Simmons wrote:
 On Fri, 18 Mar 2011 13:36:36 +0100, Christian Manal said:

 On 18.03.2011 13:03, Martin Simmons wrote:
 On Fri, 18 Mar 2011 11:37:33 +0100, Christian Manal said:

 On 18.03.2011 10:40, Christian Manal wrote:
 On 16.03.2011 09:14, Christian Manal wrote:
 On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, 
 so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.


 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.

 Looks good. 

 Maybe I spoke too soon. Last night my director crashed with a segfault,
 after switching to libumem. Leading to that was an unusually long
 running job (the accurate one) which, going by the size, looked like it
 was doing a full instead of incremental for some reason.

 I have some output from mdb and pstack attached.

 And going by dbx, the dir went kaboom in Jmsg().
 ...
 =[1] Jmsg(0xbefe5be0, 0x1, 0x0, 0x0, 0xfee8e25e, 0xf6caddb0), at 
 0xfee6a580 
   [2] j_msg(0x80c360e, 0x154, 0xbefe5be0, 0x1, 0x0, 0x0), at 0xfee6a7ad 
   [3] start_storage_daemon_message_thread(0xbefe5be0, 0x80bc7f5, 
 0xfdc7f960, 0x0, 0x80bc798, 0xfde8fe6c), at 0x80834bc 
   [4] do_backup(0xbefe5be0, 0x4, 0x0, 0xfdf91200, 0xfeea26e4, 
 0xfdf91200), at 0x80658b0 
   [5] _ZL10job_threadPv(0xbefe5be0, 0x1, 0xfe7c0dc7, 0xfe8422cc, 
 0xfe8422c0, 0xfdf91200), at 0x807a96e 
   [6] jobq_server(0x80e5080), at 0x807d127 
   [7] _thr_setup(0xfdf91200), at 0xfe7c7e66 
   [8] _lwp_start(0xfee8e708, 0x0, 0x0, 0xfde8ea00, 0x7, 0x0), at 
 0xfe7c8150 

 It looks like it ran out of memory (the segfault is deliberate, due to 
 failure
 to create a thread in start_storage_daemon_message_thread).

 That's strange. I'm monitoring that box with Nagios + pnp4nagios.
 Neither did Nagios report unusually high memory usage nor do I see a
 spike on the pnp4nagios graphs for memory and swap.


 Did it write any info to the Bacula log?  It should say Cannot create 
 message
 thread: followed by the error message.

 The logfile just cleanly ends after the last finished job. But it seems
 to be in the coredump:

 core:msgchan.c:340 Cannot create message thread: Resource temporarily
 unavailable

 Resource temporarily unavailable occurs when Solaris can't allocate the
 stack for a new thread, so memory pressure is a likely reason.  It may be
 invisible to Nagios if the memory is just reserved rather than being in use
 (something that malloc implementations will do differently).


 Hm... but this didn't happen until I switched the director to libumem, and
 the server runs several other services, none of which blew up from lack of
 memory. So it looks like it has something to do with dir+umem, doesn't it?
 
 Yes, but changing the memory allocator can have far-reaching consequences.
 How large was the core dump?
 

1.8G


 I think I may set up a test environment, when I have time, to take a
 closer look at this issue.
 
 You could try running pmap to see how the memory layout changes while it is
 doing the backup.
 
 Also, building Bacula as a 64-bit program might solve it (if you can get all
 of the dependent libraries in 64-bit format).
 

That's a good pointer. I will try that.


Regards,
Christian Manal
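
For reference, the LD_PRELOAD experiment quoted earlier in this thread
amounts to an init-script fragment along these lines. The library and daemon
paths are assumptions taken from the core dump in this thread; adjust them
for your system:

```shell
# Hedged sketch: start a Bacula daemon on Solaris with libumem's allocator
# preloaded in place of the default malloc.
LD_PRELOAD=/usr/lib/libumem.so.1
export LD_PRELOAD
echo "starting with LD_PRELOAD=$LD_PRELOAD"
# exec /services/bacula/sbin/bacula-dir -v -c /services/bacula/etc/bacula-dir.conf
```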

--
Colocation vs. Managed Hosting
A question and answer guide to determining the best fit
for your organization - today and in the future.
http://p.sf.net/sfu/internap-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Accurate backup and memory usage

2011-03-19 Thread Christian Manal
On 19.03.2011 01:18, Gary R. Schmidt wrote:
 And don't rely on nagios or the like to spot the sort of transient 
 memory spike that can be caused by bacula-fd, you need to crank it up to 
 look every few seconds.

Well, the Bacula logs that are in the coredump but didn't make it into
the logfile show the

core:msgchan.c:340 Cannot create message thread: Resource temporarily
unavailable

message several times over a rather long timeframe (I don't have a
number right now, since I'm at home; will check further on Monday). So I
would at least expect something that looks abnormal on the graphs.


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-18 Thread Christian Manal
On 16.03.2011 09:14, Christian Manal wrote:
 On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.


 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.
 
 Looks good. 

Maybe I spoke too soon. Last night my director crashed with a segfault,
after switching to libumem. Leading to that was an unusually long
running job (the accurate one) which, going by the size, looked like it
was doing a full instead of incremental for some reason.

I have some output from mdb and pstack attached.


Regards,
Christian Manal
Loading modules: [ libumem.so.1 libc.so.1 ld.so.1 ]
 ::status
debugging core file of bacula-dir (32-bit) from erwin
file: /services/bacula/sbin/bacula-dir
initial argv: /services/bacula/sbin/bacula-dir -v -c 
/services/bacula/etc/bacula-dir.conf
threading model: multi-threaded
status: process terminated by SIGSEGV (Segmentation Fault)
 ::stack
libbac-5.0.3.so`_Z4JmsgP3JCRixPKcz+0x4c9(befe5be0, 1, 0, 0, fee8e25e, f6caddb0)
libbac-5.0.3.so`_Z5j_msgPKciP3JCRixS0_z+0x146(80c360e, 154, befe5be0, 1, 0, 0)
_Z35start_storage_daemon_message_threadP3JCR+0xf9(befe5be0, 80bc7f5, fdc7f960, 
0, 80bc798, fde8fe6c)
_Z9do_backupP3JCR+0x260(befe5be0, 4, 0, fdf91200, feea26e4, fdf91200)
_ZL10job_threadPv+0x3de(befe5be0, 1, fe7c0dc7, fe8422cc, fe8422c0, fdf91200)
jobq_server+0x39c(80e5080)
libc.so.1`_thr_setup+0x4e(fdf91200)
libc.so.1`_lwp_start(fdf91200, 0, 0, fde8fff8, fe7c8150, fdf91200)
core 'core' of 9530:/services/bacula/sbin/bacula-dir -v -c 
/services/bacula/etc/bacula-dir
-  lwp# 1 / thread# 1  
 fe7c81ab __lwp_park (feea33cc, feea33b4, 8047c50) + b
 fe7c29d6 cond_wait_queue (feea33cc, feea33b4, 8047c50) + 5e
 fe7c2d53 cond_wait_common (feea33cc, feea33b4, 8047c50) + 1db
 fe7c2f85 _cond_timedwait (feea33cc, feea33b4, 8047ce8) + 51
 fe7c2ff0 cond_timedwait (feea33cc, feea33b4, 8047ce8) + 24
 fe7c302c pthread_cond_timedwait (feea33cc, feea33b4, 8047ce8, 4d82a788, 0, 0) 
+ 1e
 fee515af _Z11bmicrosleepii (3c, 0, 8047e3c, 8047dc0, feea26e4, 8047ecb) + e6
 0808b6e5 _Z17wait_for_next_jobPc (0, 154, befdf620, 1, 1, fef52320) + 18a
 080611c5 main (805dfa0, 0, 8047e14) + 782
 0805dfa0 _start   (4, 8047ea4, 8047ec5, 8047ec8, 8047ecb, 0) + 80
-  lwp# 2 / thread# 2  
 fe7cabc7 __pollsys (fe09e2b0, 1, 0, 0) + 7
 fe7746b2 pselect  (6, fe09eda4, fe840370, fe840370, 0, 0) + 18e
 fe7749a8 select   (6, fe09eda4, 0, 0, 0, fe09ef78) + 82
 fee53525 _Z18bnet_thread_serverP5dlistiP9workq_tagPFPvS3_E (fe21af40, 14, 
80e5180, 80ab333, fe7c67e1, fe83e000) + 5c9
 080ab645 connect_thread (fe21af40) + 49
 fe7c7e66 _thr_setup (fdf90200) + 4e
 fe7c8150 _lwp_start (fdf90200, 0, 0, fe09eff8, fe7c8150, fdf90200)
-  lwp# 3 / thread# 3  
 fe7c81ab __lwp_park (feea39f8, feea39e0, fdf8ef10) + b
 fe7c29d6 cond_wait_queue (feea39f8, feea39e0, fdf8ef10) + 5e
 fe7c2d53 cond_wait_common (feea39f8, feea39e0, fdf8ef10) + 1db
 fe7c2f85 _cond_timedwait (feea39f8, feea39e0, fdf8efb0) + 51
 fe7c2ff0 cond_timedwait (feea39f8, feea39e0, fdf8efb0) + 24
 fe7c302c pthread_cond_timedwait (feea39f8, feea39e0, fdf8efb0, fe843f4c, 
100, 0) + 1e
 fee82972 watchdog_thread (0) + 2d8
 fe7c7e66 _thr_setup (fdf90a00) + 4e
 fe7c8150 _lwp_start (fdf90a00, 0, 0, fdf8eff8, fe7c8150, fdf90a00)
-  lwp# 6 / thread# 6  
 fe7c81ab __lwp_park (fef9f750, fef9f770, fdd90f98) + b
 fe7c29d6 cond_wait_queue (fef9f750, fef9f770, fdd90f98) + 5e
 fe7c2d53 cond_wait_common (fef9f750, fef9f770, fdd90f98) + 1db
 fe7c2f85 _cond_timedwait (fef9f750, fef9f770, fdd90fd0) + 51
 fef767df umem_update_thread (0) + 17b
 fe7c7e66 _thr_setup (fdf91a00) + 4e
 fe7c8150 _lwp_start (fdf91a00, 0, 0, fdd90ff8, fe7c8150, fdf91a00)
-  lwp# 161 / thread# 161  
 fe7c81ab __lwp_park (befea704, 80e3b60, fdb6fd30) + b
 fe7c29d6 cond_wait_queue (befea704, 80e3b60, fdb6fd30) + 5e
 fe7c2d53 cond_wait_common (befea704, 80e3b60, fdb6fd30) + 1db
 fe7c2f85 _cond_timedwait (befea704, 80e3b60, fdb6fd9c) + 51
 fe7c2ff0 cond_timedwait (befea704, 80e3b60, fdb6fd9c) + 24
 fe7c302c pthread_cond_timedwait (befea704, 80e3b60, fdb6fd9c, 0, fe755cde, 
ea26e4) + 1e
 08082fc6 _Z35wait_for_storage_daemon_terminationP3JCR (befea4a0, fffc, 
befea778, fdb6fe18, fdb6fe08, fdb6fe00) + 90
 08064e74 _Z24wait_for_job_terminationP3JCRi (befea4a0, 0

Re: [Bacula-users] Accurate backup and memory usage

2011-03-18 Thread Christian Manal
On 18.03.2011 10:40, Christian Manal wrote:
 On 16.03.2011 09:14, Christian Manal wrote:
 On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.


 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.

 Looks good. 
 
 Maybe I spoke too soon. Last night my director crashed with a segfault,
 after switching to libumem. Leading to that was an unusually long
 running job (the accurate one) which, going by the size, looked like it
 was doing a full instead of incremental for some reason.
 
 I have some output from mdb and pstack attached.

And going by dbx, the dir went kaboom in Jmsg().

Regards,
Christian Manal
Reading bacula-dir
core file header read successfully
Reading ld.so.1
Reading libumem.so.1
Reading libintl.so.8.1.1
Reading libbacfind-5.0.3.so
Reading libbacsql-5.0.3.so
Reading libbacpy-5.0.3.so
Reading libbaccfg-5.0.3.so
Reading libbac-5.0.3.so
Reading libz.so.1.2.5
Reading libstdc++.so.6.0.10
Reading libpython2.4.so.1.0
Reading libpq.so.5.1
Reading libpthread.so.1
Reading libnsl.so.1
Reading libsocket.so.1
Reading libxnet.so.1
Reading libresolv.so.2
Reading librt.so.1
Reading libssl.so.0.9.8
Reading libcrypto.so.0.9.8
Reading libm.so.2
Reading libgcc_s.so.1
Reading libc.so.1
Reading libiconv.so.2.5.0
Reading libm.so.1
Reading libdl.so.1
Reading libssl.so.0.9.7
Reading libcrypto.so.0.9.7
Reading libgss.so.1
Reading libaio.so.1
Reading libmd.so.1
Reading libcrypto.so.0.9.8
Reading libcmd.so.1
Reading libcrypto_extra.so.0.9.7
t@489 (l@489) terminated by signal SEGV (no mapping at the fault address)
0xfee6a580: Jmsg+0x04c9:movb $0x,(%eax)
(dbx) where  
current thread: t@489
=[1] Jmsg(0xbefe5be0, 0x1, 0x0, 0x0, 0xfee8e25e, 0xf6caddb0), at 0xfee6a580 
  [2] j_msg(0x80c360e, 0x154, 0xbefe5be0, 0x1, 0x0, 0x0), at 0xfee6a7ad 
  [3] start_storage_daemon_message_thread(0xbefe5be0, 0x80bc7f5, 0xfdc7f960, 
0x0, 0x80bc798, 0xfde8fe6c), at 0x80834bc 
  [4] do_backup(0xbefe5be0, 0x4, 0x0, 0xfdf91200, 0xfeea26e4, 0xfdf91200), at 
0x80658b0 
  [5] _ZL10job_threadPv(0xbefe5be0, 0x1, 0xfe7c0dc7, 0xfe8422cc, 0xfe8422c0, 
0xfdf91200), at 0x807a96e 
  [6] jobq_server(0x80e5080), at 0x807d127 
  [7] _thr_setup(0xfdf91200), at 0xfe7c7e66 
  [8] _lwp_start(0xfee8e708, 0x0, 0x0, 0xfde8ea00, 0x7, 0x0), at 0xfe7c8150 
(dbx) threads
  t@1  a  l@1   ?()   sleep on 0xfeea33cc  in  __lwp_park() 
  t@2  a  l@2   connect_thread()   LWP suspended in  __pollsys() 
  t@3  a  l@3   watchdog_thread()   sleep on 0xfeea39f8  in  __lwp_park() 
  t@6  b  l@6   umem_update_thread()   sleep on (unknown) in  __lwp_park() 
t@161  a l@161   jobq_server()   sleep on 0xbefea704  in  __lwp_park() 
t@171  a l@171   jobq_server()   sleep on 0xfe165e44  in  __lwp_park() 
t@203  a l@203   msg_thread()   LWP suspended in  __lwp_park() 
t@222  a l@222   jobq_server()   LWP suspended in  _waitid() 
o  t@489  a l@489   jobq_server()   signal SIGSEGV in  Jmsg() 
t@490  a l@490   jobq_server()   LWP suspended in  _read() 
t@491  a l@491   jobq_server()   sleep on 0xfe133828  in  __lwp_park() 
t@492  a l@492   msg_thread()   sleep on 0xfe133828  in  __lwp_park() 
t@494  a l@494   jobq_server()   LWP suspended in  _read() 
t@495  a l@495   jobq_server()   LWP suspended in  _read() 
t@496  a l@496   jobq_server()   LWP suspended in  _read() 
t@497  a l@497   jobq_server()   LWP suspended in  _read() 
t@589  a l@589   msg_thread()   LWP suspended in  _read() 
t@590  a l@590   msg_thread()   LWP suspended in  _read() 
--
Colocation vs. Managed Hosting
A question and answer guide to determining the best fit
for your organization - today and in the future.
http://p.sf.net/sfu/internap-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Accurate backup and memory usage

2011-03-18 Thread Christian Manal
On 18.03.2011 13:03, Martin Simmons wrote:
 On Fri, 18 Mar 2011 11:37:33 +0100, Christian Manal said:

 On 18.03.2011 10:40, Christian Manal wrote:
 On 16.03.2011 09:14, Christian Manal wrote:
 On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.


 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.

 Looks good. 

 Maybe I spoke too soon. Last night my director crashed with a segfault,
 after switching to libumem. Leading to that was an unusually long
 running job (the accurate one) which, going by the size, looked like it
 was doing a full instead of incremental for some reason.

 I have some output from mdb and pstack attached.

 And going by dbx, the dir went kaboom in Jmsg().
 ...
 =[1] Jmsg(0xbefe5be0, 0x1, 0x0, 0x0, 0xfee8e25e, 0xf6caddb0), at 0xfee6a580 
   [2] j_msg(0x80c360e, 0x154, 0xbefe5be0, 0x1, 0x0, 0x0), at 0xfee6a7ad 
   [3] start_storage_daemon_message_thread(0xbefe5be0, 0x80bc7f5, 0xfdc7f960, 
 0x0, 0x80bc798, 0xfde8fe6c), at 0x80834bc 
   [4] do_backup(0xbefe5be0, 0x4, 0x0, 0xfdf91200, 0xfeea26e4, 0xfdf91200), 
 at 0x80658b0 
   [5] _ZL10job_threadPv(0xbefe5be0, 0x1, 0xfe7c0dc7, 0xfe8422cc, 0xfe8422c0, 
 0xfdf91200), at 0x807a96e 
   [6] jobq_server(0x80e5080), at 0x807d127 
   [7] _thr_setup(0xfdf91200), at 0xfe7c7e66 
   [8] _lwp_start(0xfee8e708, 0x0, 0x0, 0xfde8ea00, 0x7, 0x0), at 0xfe7c8150 
 
 It looks like it ran out of memory (the segfault is deliberate, due to failure
 to create a thread in start_storage_daemon_message_thread).

That's strange. I'm monitoring that box with Nagios + pnp4nagios.
Neither did Nagios report unusually high memory usage nor do I see a
spike on the pnp4nagios graphs for memory and swap.


 Did it write any info to the Bacula log?  It should say Cannot create message
 thread: followed by the error message.

The logfile just cleanly ends after the last finished job. But it seems
to be in the coredump:

core:msgchan.c:340 Cannot create message thread: Resource temporarily
unavailable


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-18 Thread Christian Manal
On 18.03.2011 19:26, Martin Simmons wrote:
 On Fri, 18 Mar 2011 13:36:36 +0100, Christian Manal said:

 On 18.03.2011 13:03, Martin Simmons wrote:
 On Fri, 18 Mar 2011 11:37:33 +0100, Christian Manal said:

 On 18.03.2011 10:40, Christian Manal wrote:
 On 16.03.2011 09:14, Christian Manal wrote:
 On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.


 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.

 Looks good. 

 Maybe I spoke too soon. Last night my director crashed with a segfault,
 after switching to libumem. Leading to that was an unusually long
 running job (the accurate one) which, going by the size, looked like it
 was doing a full instead of incremental for some reason.

 I have some output from mdb and pstack attached.

 And going by dbx, the dir went kaboom in Jmsg().
 ...
 =[1] Jmsg(0xbefe5be0, 0x1, 0x0, 0x0, 0xfee8e25e, 0xf6caddb0), at 
 0xfee6a580 
   [2] j_msg(0x80c360e, 0x154, 0xbefe5be0, 0x1, 0x0, 0x0), at 0xfee6a7ad 
   [3] start_storage_daemon_message_thread(0xbefe5be0, 0x80bc7f5, 
 0xfdc7f960, 0x0, 0x80bc798, 0xfde8fe6c), at 0x80834bc 
   [4] do_backup(0xbefe5be0, 0x4, 0x0, 0xfdf91200, 0xfeea26e4, 0xfdf91200), 
 at 0x80658b0 
   [5] _ZL10job_threadPv(0xbefe5be0, 0x1, 0xfe7c0dc7, 0xfe8422cc, 
 0xfe8422c0, 0xfdf91200), at 0x807a96e 
   [6] jobq_server(0x80e5080), at 0x807d127 
   [7] _thr_setup(0xfdf91200), at 0xfe7c7e66 
   [8] _lwp_start(0xfee8e708, 0x0, 0x0, 0xfde8ea00, 0x7, 0x0), at 
 0xfe7c8150 

 It looks like it ran out of memory (the segfault is deliberate, due to 
 failure
 to create a thread in start_storage_daemon_message_thread).

 That's strange. I'm monitoring that box with Nagios + pnp4nagios.
 Neither did Nagios report unusually high memory usage nor do I see a
 spike on the pnp4nagios graphs for memory and swap.


 Did it write any info to the Bacula log?  It should say Cannot create 
 message
 thread: followed by the error message.

 The logfile just cleanly ends after the last finished job. But it seems
 to be in the coredump:

 core:msgchan.c:340 Cannot create message thread: Resource temporarily
 unavailable
 
 Resource temporarily unavailable occurs when Solaris can't allocate the
 stack for a new thread, so memory pressure is a likely reason.  It may be
 invisible to Nagios if the memory is just reserved rather than being in use
 (something that malloc implementations will do differently).
 

Hm.. but this didn't happen until I switched the director to libumem, and
the server runs several other services, none of which ran out of memory.
So it looks like it has something to do with dir+umem, doesn't it?

I think I may set up a test environment, when I have time, to take a
closer look at this issue.


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-16 Thread Christian Manal
On 15.03.2011 19:12, Christian Manal wrote:
 On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:

 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.

 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.

 
 Actually, I already did that. Modified the startup script for the
 affected fd (don't want the director crashing if things go wrong) and
 restarted. I will report the results tomorrow.

Looks good. Backups went fine; the director is at 2G of RAM again, but
the fd that uses libumem is down to a measly 11M :)

For everyone who wants to try this, I added the following lines at the
top of bacula-ctl-fd to make this work:

   LD_PRELOAD=libumem.so
   export LD_PRELOAD
   UMEM_OPTIONS=backend=mmap
   export UMEM_OPTIONS


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-15 Thread Christian Manal
On 15.03.2011 08:51, Ralf Gross wrote:
 Christian Manal wrote:
 Thanks for the pointer. From Solaris 10 malloc(3C):

  [...] After free()
  is executed, this space is made available for further  allo-
  cation  by  the application, though not returned to the sys-
  tem. Memory is returned to the system only upon  termination
  of  the  application. [...]

 So I have to restart Bacula after everything is done to get the memory
 back. That kinda sucks and is clearly not Bacula's fault.
 
 
 you may want to take a look at 2 recent bug reports:
 
 0001686: Memory Bacula-fd is not released after completed backup
 0001690: memory leak in bacula-fd (with accurate=yes)   
 
 http://bugs.bacula.org/
 
 Ralf

Thanks, but they don't seem to be relevant in this case. Both bugs are
reported for Linux and seem to be related to Linux' memory management.
Since I'm using Solaris I don't think this applies to me.

Also, after several accurate jobs running without restarting Bacula, the
total memory usage of the director and fd didn't go up anymore, so I
presume it comes down to the behavior of Solaris' free(), as described
in the above quoted manpage.


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-15 Thread Christian Manal
On 15.03.2011 17:49, Kjetil Torgrim Homme wrote:
 Christian Manal moen...@informatik.uni-bremen.de writes:
 
 Also, after several accurate jobs running without restarting Bacula,
 the total memory usage of the director and fd didn't go up anymore, so
 I presume it comes down to the behavior of Solaris' free(), as
 described in the above quoted manpage.
 
 libumem may work better -- just set LD_PRELOAD, you don't have to
 recompile.  I'd appreciate it if you report back if you try it.
 

Actually, I already did that. Modified the startup script for the
affected fd (don't want the director crashing if things go wrong) and
restarted. I will report the results tomorrow.

Regards,
Christian Manal



[Bacula-users] Accurate backup and memory usage

2011-03-14 Thread Christian Manal
Hi list,

following up this

   http://sourceforge.net/mailarchive/message.php?msg_id=27098562

I temporarily set Accurate = yes for the job in question (currently
still backing up everything), to see what impact this has on
performance. The incremental after that didn't take much longer than
usual, but the memory usage of the director and file-daemon skyrocketed.

That itself is expected and mentioned in the docs. What I didn't expect
was that the memory consumption didn't go down after the job was done.
Both, director and fd, were hogging up over 2G of RAM each until I
restarted them. There were no jobs running, no database activity, nothing.

That can't be the intended behavior, right? Does anyone have an idea
what could be going on and/or how to resolve this?

I'm running Bacula 5.0.3 with Postgres 8.3 on Solaris 10. The director
and fd in question are running on the same box, if that's of any importance.


Regards,
Christian Manal



Re: [Bacula-users] Accurate backup and memory usage

2011-03-14 Thread Christian Manal
On 14.03.2011 13:27, Martin Simmons wrote:
 On Mon, 14 Mar 2011 09:20:39 +0100, Christian Manal said:

 Hi list,

 following up this

http://sourceforge.net/mailarchive/message.php?msg_id=27098562

 I temporarily set Accurate = yes for the job in question (currently
 still backing up everything), to see what impact this has on
 performance. The incremental after that didn't take much longer than
 usual, but the memory usage of the director and file-daemon skyrocketed.

 That itself is expected and mentioned in the docs. What I didn't expect
 was that the memory consumption didn't go down after the job was done.
 Both, director and fd, were hogging up over 2G of RAM each until I
 restarted them. There were no jobs running, no database activity, nothing.

 That can't be the intended behavior, right? Does anyone have an idea
 what could be going on and/or how to resolve this?

 I'm running Bacula 5.0.3 with Postgres 8.3 on Solaris 10. The director
 and fd in question are running on the same box, if that's of any importance.
 
 What happens when you run a second job?
 
 If it remains at 2G (rather than jumping to 4G) then my guess is that the OS's
 free isn't releasing memory back to the system.  It probably won't hog RAM
 forever, just swap space.
 

Thanks for the pointer. From Solaris 10 malloc(3C):

 [...] After free()
 is executed, this space is made available for further  allo-
 cation  by  the application, though not returned to the sys-
 tem. Memory is returned to the system only upon  termination
 of  the  application. [...]

So I have to restart Bacula after everything is done to get the memory
back. That kinda sucks and is clearly not Bacula's fault.


Regards,
Christian Manal



Re: [Bacula-users] Include Dir Containing?

2011-03-03 Thread Christian Manal
On 03.03.2011 17:20, Bob Hetzel wrote:
 
 From: Christian Manal moen...@informatik.uni-bremen.de
 Subject: 
 To: bacula-users@lists.sourceforge.net
 Message-ID: 4d6cb79b.3070...@informatik.uni-bremen.de
 Content-Type: text/plain; charset=ISO-8859-1

 On 26.02.2011 04:52, Dan Langille wrote:
 On 2/25/2011 5:49 AM, Christian Manal wrote:
 On 22.02.2011 14:06, Christian Manal wrote:
 On 22.02.2011 13:45, Marc Schiffbauer wrote:
 * Christian Manal wrote on 22.02.11 at 12:43:
 On 22.02.2011 12:26, Phil Stracchino wrote:
 On 02/22/11 06:07, Christian Manal wrote:
 That's right, but not what I need. I want to include the 
 directory
 containing the specific file, not the file itself like it is 
 shown in
 the examples.

 To give an example: By default, nothing under

 /export/home

 will be backed up. Now a user foo creates a file .backmeup 
 in his
 home directory or a subdirectory of it. For example

 /export/home/foo/important/.backmeup

 The next backup should then include the directory

 /export/home/foo/important

 There is not, to my knowledge, any built-in functionality to do 
 this at
 this time.  You'd have to use a script to generate the fileset 
 on demand.


 I feared as much. Thanks for the replies anyway.

 Christian,

 before bacula had the Exclude Dir Containing feature I used
 something very similar to your example, like this:

 File = \\|sh -c 'for D in /home; do find $D -xdev -name 
 .BACULA_NO_BACKUP \
  -type f -printf \%h\\n\; done | tee 
 /root/bacula_excluded_dirs.log'

 That worked very well over years.

 So I think something like that for include should work well for you.

 File = \\|sh -c 'for D in /home; do find $D -xdev -name 
 .BACULA_BACKUP \
  -type f -printf \%h\\n\; done | tee 
 /root/bacula_included_dirs.log'


 Thanks. That saves me the work of putting a working script together
 myself :-)

 Hi again,

 this turned out to be not viable for my setup. Running that find command
 in a shell on my fileserver takes about 9 hours for a ZFS dataset of
 about 620 GiB with far more than 10 million files. And having a 9 hour
 overhead in my backups just to create the fileset isn't acceptable.

 So I'm thinking about solving this another way, by letting the users
 create their own filesets by putting relative paths into a dotfile in
 the root of their homedir.

 To give an example: User foo puts a file '.backuprc' in his homedir
 containing the following:

 important/stuff
 other/important/stuff
 mostly/unimportant/stuff/importantfile.txt
 mostly/unimportant/stuff/importantfile2.txt

 which would be collected by something like this:
 (a first test run with '-name .bashrc' took only about 10 minutes)

 find /export/home -maxdepth 2 -type f -name .backuprc \
   -exec sh -c '/path/to/sanitize-paths.pl {}  {}' \;

 where 'sanitize-paths.pl' filters empty lines and comments, appends the
 relative path to the absolute path of the homedir, makes sure the users
 don't pull any shenanigans with this (like putting '../../../' as a
 path) and also informs them when they have invalid lines in their list.

 The output would look like this:

 /export/home/foo/important/stuff
 /export/home/foo/other/important/stuff
 /export/home/foo/mostly/unimportant/stuff/importantfile.txt
 /export/home/foo/mostly/unimportant/stuff/importantfile2.txt


 But now I'm concerned about the potential size of the fileset. I have
 over 3000 homedirs in that filesystem, which could result in some
 ten-thousand lines and more. Will Bacula handle that or do I have to
 expect performance issues or even crashes?

 Try it.  Find out.  Sorry, but I really don't know.  It is easily tested
 though.  Without involving your users.  Create a test case.

 I just did that. Made a test setup on a VirtualBox. I created 3000 home
 directories and filled them randomly with some dirs and files (around
 140,000 files total). The script-generated fileset had about 18,000
 lines. I noticed no real problems during my test-runs.


 Regards,
 Christian Manal



 
 Here's one issue you're going to run into if you do it by including 
 rather than excluding:
 First, either way you're going to have to use ignore fileset changes = 
 yes.  If you don't do that you'll find that you get a full backup every 
 single time.  Usually that's not what is wanted.

I am aware of that.


 Now suppose you do a full backup on March 1.  On March 10, a user then 
 decides some old files that weren't being backed up before should be backed 
 up.  The files are all timestamped prior to March 1.  The end result is 
 that they won't get backed up until the next full backup.  If you do full 
 backups frequently you might not care much about it but you should be aware 
 of it.

Does it really work that way? I would have thought Bacula actually
notices if the fileset changes and backs up new files and directories
regardless of their time of last change.

If that's not the case, I have

Re: [Bacula-users] Include Dir Containing?

2011-03-01 Thread Christian Manal
On 26.02.2011 04:52, Dan Langille wrote:
 On 2/25/2011 5:49 AM, Christian Manal wrote:
 On 22.02.2011 14:06, Christian Manal wrote:
 On 22.02.2011 13:45, Marc Schiffbauer wrote:
 * Christian Manal wrote on 22.02.11 at 12:43:
 On 22.02.2011 12:26, Phil Stracchino wrote:
 On 02/22/11 06:07, Christian Manal wrote:
 That's right, but not what I need. I want to include the directory
 containing the specific file, not the file itself like it is shown in
 the examples.

 To give an example: By default, nothing under

 /export/home

 will be backed up. Now a user foo creates a file .backmeup in his
 home directory or a subdirectory of it. For example

 /export/home/foo/important/.backmeup

 The next backup should then include the directory

 /export/home/foo/important

 There is not, to my knowledge, any built-in functionality to do this at
 this time.  You'd have to use a script to generate the fileset on demand.


 I feared as much. Thanks for the replies anyway.

 Christian,

 before bacula had the Exclude Dir Containing feature I used
 something very similar to your example, like this:

 File = \\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_NO_BACKUP 
 \
  -type f -printf \%h\\n\; done | tee 
 /root/bacula_excluded_dirs.log'

 That worked very well over years.

 So I think something like that for include should work well for you.

 File = \\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_BACKUP \
  -type f -printf \%h\\n\; done | tee 
 /root/bacula_included_dirs.log'


 Thanks. That saves me the work of putting a working script together
 myself :-)

 Hi again,

 this turned out to be not viable for my setup. Running that find command
 in a shell on my fileserver takes about 9 hours for a ZFS dataset of
 about 620 GiB with far more than 10 million files. And having a 9 hour
 overhead in my backups just to create the fileset isn't acceptable.

 So I'm thinking about solving this another way, by letting the users
 create their own filesets by putting relative paths into a dotfile in
 the root of their homedir.

 To give an example: User foo puts a file '.backuprc' in his homedir
 containing the following:

 important/stuff
 other/important/stuff
 mostly/unimportant/stuff/importantfile.txt
 mostly/unimportant/stuff/importantfile2.txt

 which would be collected by something like this:
 (a first test run with '-name .bashrc' took only about 10 minutes)

 find /export/home -maxdepth 2 -type f -name .backuprc \
   -exec sh -c '/path/to/sanitize-paths.pl {}  {}' \;

 where 'sanitize-paths.pl' filters empty lines and comments, appends the
 relative path to the absolute path of the homedir, makes sure the users
 don't pull any shenanigans with this (like putting '../../../' as a
 path) and also informs them when they have invalid lines in their list.

 The output would look like this:

 /export/home/foo/important/stuff
 /export/home/foo/other/important/stuff
 /export/home/foo/mostly/unimportant/stuff/importantfile.txt
 /export/home/foo/mostly/unimportant/stuff/importantfile2.txt


 But now I'm concerned about the potential size of the fileset. I have
 over 3000 homedirs in that filesystem, which could result in some
 ten-thousand lines and more. Will Bacula handle that or do I have to
 expect performance issues or even crashes?
 
 Try it.  Find out.  Sorry, but I really don't know.  It is easily tested 
 though.  Without involving your users.  Create a test case.
 

I just did that. Made a test setup on a VirtualBox. I created 3000 home
directories and filled them randomly with some dirs and files (around
140,000 files total). The script-generated fileset had about 18,000
lines. I noticed no real problems during my test-runs.
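A test tree like the one described (thousands of home dirs, each with random files and a .backuprc) can be generated with a short script; this is a hypothetical sketch, not the setup actually used in the thread, and all names and counts are illustrative:

```python
import os
import random

def make_test_homes(root, n_homes=3000, max_files=50):
    """Create n_homes home directories under root, each with a random
    number of files and a .backuprc listing a few of them (relative paths)."""
    random.seed(42)  # reproducible layout
    total_files = 0
    for i in range(n_homes):
        home = os.path.join(root, "user%04d" % i)
        sub = os.path.join(home, "stuff")
        os.makedirs(sub)  # creates the home dir as well
        picks = []
        for j in range(random.randint(1, max_files)):
            rel = os.path.join("stuff", "file%03d.txt" % j)
            with open(os.path.join(home, rel), "w") as f:
                f.write("data\n")
            total_files += 1
            if j < 3:
                picks.append(rel)  # first few files go into .backuprc
        with open(os.path.join(home, ".backuprc"), "w") as f:
            f.write("\n".join(picks) + "\n")
    return total_files
```

Running the script-driven fileset generation against such a tree makes it easy to repeat the scaling test without involving real users.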


Regards,
Christian Manal

--
Free Software Download: Index, Search & Analyze Logs and other IT data in 
Real-Time with Splunk. Collect, index and harness all the fast moving IT data 
generated by your applications, servers and devices whether physical, virtual
or in the cloud. Deliver compliance at lower cost and gain new business 
insights. http://p.sf.net/sfu/splunk-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Include Dir Containing?

2011-02-25 Thread Christian Manal
On 22.02.2011 14:06, Christian Manal wrote:
 On 22.02.2011 13:45, Marc Schiffbauer wrote:
 * Christian Manal wrote on 22.02.11 at 12:43:
 On 22.02.2011 12:26, Phil Stracchino wrote:
 On 02/22/11 06:07, Christian Manal wrote:
 That's right, but not what I need. I want to include the directory
 containing the specific file, not the file itself like it is shown in
 the examples.

 To give an example: By default, nothing under

/export/home

 will be backed up. Now a user foo creates a file .backmeup in his
 home directory or a subdirectory of it. For example

/export/home/foo/important/.backmeup

 The next backup should then include the directory

/export/home/foo/important

 There is not, to my knowledge, any built-in functionality to do this at
 this time.  You'd have to use a script to generate the fileset on demand.


 I feared as much. Thanks for the replies anyway.

 Christian,

 before bacula had the Exclude Dir Containing feature I used
 something very similar to your example, like this:

 File = \\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_NO_BACKUP \
 -type f -printf \%h\\n\; done | tee 
 /root/bacula_excluded_dirs.log'

 That worked very well over years.

 So I think something like that for include should work well for you.

 File = \\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_BACKUP \
 -type f -printf \%h\\n\; done | tee 
 /root/bacula_included_dirs.log'

 
 Thanks. That saves me the work of putting a working script together
 myself :-)

Hi again,

this turned out not to be viable for my setup. Running that find command
in a shell on my fileserver takes about 9 hours for a ZFS dataset of
about 620 GiB with far more than 10 million files. And having a 9 hour
overhead in my backups just to create the fileset isn't acceptable.

So I'm thinking about solving this another way, by letting the users
create their own filesets by putting relative paths into a dotfile in
the root of their homedir.

To give an example: User foo puts a file '.backuprc' in his homedir
containing the following:

   important/stuff
   other/important/stuff
   mostly/unimportant/stuff/importantfile.txt
   mostly/unimportant/stuff/importantfile2.txt

which would be collected by something like this:
(a first test run with '-name .bashrc' took only about 10 minutes)

   find /export/home -maxdepth 2 -type f -name .backuprc \
 -exec sh -c '/path/to/sanitize-paths.pl {}  {}' \;

where 'sanitize-paths.pl' filters empty lines and comments, appends the
relative path to the absolute path of the homedir, makes sure the users
don't pull any shenanigans with this (like putting '../../../' as a
path) and also informs them when they have invalid lines in their list.

The output would look like this:

   /export/home/foo/important/stuff
   /export/home/foo/other/important/stuff
   /export/home/foo/mostly/unimportant/stuff/importantfile.txt
   /export/home/foo/mostly/unimportant/stuff/importantfile2.txt
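The sanitizing step described above could be sketched roughly as follows. This is a hypothetical Python reimplementation of the behavior attributed to sanitize-paths.pl (the actual Perl script is not shown in the thread); the function name and the exact rejection rules are assumptions:

```python
import posixpath

def sanitize_paths(homedir, lines):
    """Turn relative paths from a user's .backuprc into absolute paths
    under that user's home directory, rejecting escapes like '../'."""
    accepted, rejected = [], []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        # Resolve '.' and '..' components, then verify we stayed inside home.
        candidate = posixpath.normpath(posixpath.join(homedir, line))
        if candidate == homedir or not candidate.startswith(homedir + "/"):
            rejected.append(line)  # shenanigans: path escapes the homedir
        else:
            accepted.append(candidate)
    return accepted, rejected
```

The rejected list could then be reported back to the user, while the accepted paths are emitted for the fileset.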


But now I'm concerned about the potential size of the fileset. I have
over 3000 homedirs in that filesystem, which could result in some
ten-thousand lines and more. Will Bacula handle that or do I have to
expect performance issues or even crashes?


Regards,
Christian Manal



[Bacula-users] Include Dir Containing?

2011-02-22 Thread Christian Manal
Hi list,

I know there is an option for filesets to exclude directories containing
a specific file (Exclude Dir Containing). Now I was wondering if there
is an easy way to do it the other way around, i.e. only include directories
containing a specific file? I don't think this can be done with
Regex(|dir|file) or Wild(|dir|file).
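For reference, the existing exclude feature is an Options-level directive in the FileSet resource; a minimal sketch (the marker filename .nobackup is made up here for illustration):

```
FileSet {
  Name = "HomeDirs-SkipMarked"
  Include {
    Options {
      signature = MD5
      Exclude Dir Containing = .nobackup
    }
    File = /export/home
  }
}
```

The question below is whether an include-side counterpart of this directive exists.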

The background is that I'd like to let the users in my environment
decide what should be backed up in their home directory and what shouldn't.

Of course I could do something like this:

   FileSet {
      Name = HomeDirs
      Ignore FileSet Changes = yes
      Include {
         Options {
            signature = ...
            ...
         }
         File = "\\|find /export/home -type f -name '.backmeup' \
                 -exec dirname {} \;"
      }
   }

But I'd rather get it done with what Bacula offers, if possible. Any
help would be appreciated.


Regards,
Christian Manal



Re: [Bacula-users] Include Dir Containing?

2011-02-22 Thread Christian Manal
On 22.02.2011 11:33, Jeremy Maes wrote:
 On 22/02/2011 11:07, Christian Manal wrote:
 Hi list,

 I know there is an option for filesets to exclude directories containing
 a specific file (Exclude Dir Containing). Now I was wondering if there
 is an easy way to do it the other way around, i.e. only include directories
 containing a specific file? I don't think this can be done with
 Regex(|dir|file) or Wild(|dir|file).

 http://www.bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00188
 
 In those fileset examples should be at least one example of how to 
 include only specific (types of) files.
 

That's right, but not what I need. I want to include the directory
containing the specific file, not the file itself like it is shown in
the examples.

To give an example: By default, nothing under

   /export/home

will be backed up. Now a user foo creates a file .backmeup in his
home directory or a subdirectory of it. For example

   /export/home/foo/important/.backmeup

The next backup should then include the directory

   /export/home/foo/important


Regards,
Christian Manal



Re: [Bacula-users] Include Dir Containing?

2011-02-22 Thread Christian Manal
On 22.02.2011 12:26, Phil Stracchino wrote:
 On 02/22/11 06:07, Christian Manal wrote:
 That's right, but not what I need. I want to include the directory
 containing the specific file, not the file itself like it is shown in
 the examples.

 To give an example: By default, nothing under

/export/home

 will be backed up. Now a user foo creates a file .backmeup in his
 home directory or a subdirectory of it. For example

/export/home/foo/important/.backmeup

 The next backup should then include the directory

/export/home/foo/important
 
 There is not, to my knowledge, any built-in functionality to do this at
 this time.  You'd have to use a script to generate the fileset on demand.
 

I feared as much. Thanks for the replies anyway.


Regards,
Christian Manal




Re: [Bacula-users] Include Dir Containing?

2011-02-22 Thread Christian Manal
On 22.02.2011 13:45, Marc Schiffbauer wrote:
 * Christian Manal wrote on 22.02.11 at 12:43:
 On 22.02.2011 12:26, Phil Stracchino wrote:
 On 02/22/11 06:07, Christian Manal wrote:
 That's right, but not what I need. I want to include the directory
 containing the specific file, not the file itself like it is shown in
 the examples.

 To give an example: By default, nothing under

/export/home

 will be backed up. Now a user foo creates a file .backmeup in his
 home directory or a subdirectory of it. For example

/export/home/foo/important/.backmeup

 The next backup should then include the directory

/export/home/foo/important

 There is not, to my knowledge, any built-in functionality to do this at
 this time.  You'd have to use a script to generate the fileset on demand.


 I feared as much. Thanks for the replies anyway.
 
 Christian,
 
 before Bacula had the Exclude Dir Containing feature, I used
 something very similar to your example, like this:
 
 File = "\\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_NO_BACKUP \
 -type f -printf \"%h\\n\"; done | tee /root/bacula_excluded_dirs.log'"
 
 That worked very well over years.
 
 So I think something like that for include should work well for you.
 
 File = "\\|sh -c 'for D in /home; do find $D -xdev -name .BACULA_BACKUP \
 -type f -printf \"%h\\n\"; done | tee /root/bacula_included_dirs.log'"
 

Thanks. That saves me the work of putting a working script together
myself :-)


Regards,
Christian Manal



Re: [Bacula-users] job type = 'c' vs 'C'

2011-02-21 Thread Christian Manal
On 21.02.2011 15:52, Dan Langille wrote:
 The values for the type field in the job table are defined here:
 
 http://www.bacula.org/5.0.x-manuals/en/developers/developers/Database_Tables.html
 
 Look for 'The Job Type (or simply Type) can have one of the following 
 values: '
 
 However, in my job table, I'm seeing two values not listed:
 
 bacula=# select distinct(type) from job order by 1;
   type
 --
   B
   C
   M
   R
   V
   c
   g
 (7 rows)
 
 bacula=#
 
 
 Specifically, 'c' and 'g'.
 
 Any ideas about that?
 
 For what it's worth, I have just one job of type 'g'.
 
 And all my jobs of type 'c' appear to be Copy jobs.

Hi,

I think the Wiki has a list that is a bit more up to date:

http://wiki.bacula.org/doku.php?id=faq#what_do_all_those_job_status_codes_mean


Regards,
Christian Manal



Re: [Bacula-users] full and diff backups to different storages

2011-02-03 Thread Christian Manal
On 03.02.2011 12:04, Ronny Seffner wrote:
 Hi list,
 
 I have a machine containing 2 different devices which I'd like to use for
 backup: an RDX drive for full backups and an LTO drive for the differentials.

 So I defined:
 Jobs : full_Job and diff_Job
 FileSets : only one for both jobs together, called data_FileSet
 Schedules : data_full_Schedule and data_diff_Schedule
 Storages : LTO_Storage and RDX_Storage

 Now I believed starting a full_Job to RDX_Storage would allow running a
 diff_Job to LTO_Storage next time, backing up only the difference. But my
 diff_Job also makes one full backup per schedule rotation.

 How is it possible to get both backups together, so that diff_Jobs are
 based on the last full_Job?


Hi,

you can't split full and diff backups into different jobs. When Bacula
makes a differential backup, it looks for a full backup of the same job
to base it on.

Just use the (Full|Differential|Incremental) Backup Pool options in a
*single* job definition to tell Bacula to put each level into a
different pool, then set the storage to use in the pool definitions.
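To sketch what that could look like (resource names are hypothetical,
adapted from your setup):

   Job {
     Name = "data_Job"
     Type = Backup
     Client = myhost-fd
     FileSet = "data_FileSet"
     Schedule = "data_Schedule"
     Pool = RDX_Pool                       # default pool
     Full Backup Pool = RDX_Pool           # Fulls go to the RDX drive
     Differential Backup Pool = LTO_Pool   # Diffs and Incrementals go to LTO
     Incremental Backup Pool = LTO_Pool
     Messages = Standard
   }

   Pool {
     Name = RDX_Pool
     Pool Type = Backup
     Storage = RDX_Storage
   }

   Pool {
     Name = LTO_Pool
     Pool Type = Backup
     Storage = LTO_Storage
   }

Since both levels now run from the same job, the differentials will be
based on the last full of that job, regardless of which storage it
went to.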


Regards,
Christian Manal



Re: [Bacula-users] Windows Backup Junction Point / Filesystem Question

2011-01-31 Thread Christian Manal
On 31.01.2011 13:33, Randy Katz wrote:
 Hi,
 
 I am seeing this error:
 
 31-Jan 03:42 win1-fd JobId 157:  
 C:/inetpub/vhosts/Servers/3/localuser/dirrec is a junction point or a 
 different filesystem. Will not descend from C:/inetpub into it.
 
 I have many of these in C:/inetpub/vhosts/Servers/3/localuser/
 
 How can I get these backed up, the file statement is currently:
 
 File = C:/inetpub
 
 Do I need to do this for each directory?
 
 File = C:/inetpub/vhosts/Servers/3/localuser/dirrec
 
 Please advise, thanks.
 


Hi,

look at the onefs option for the FileSet resource:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00187


Regards,
Christian Manal



Re: [Bacula-users] Can I use Bacula to backup...

2011-01-29 Thread Christian Manal
Hi,

On 29.01.2011 20:02, BlackAdderDK wrote:
 ... my servers, under the following conditions:
 
 I have 26 servers (SLES Linux), distributed across several weak WAN Links... 
 At each location there's a NAS box attached to the server as an NFS mount
 point. What I'm trying to achieve is as follows:
 
  - Installing an central backup server (bacula?)

That would be the Bacula director.


  - installing an agent/client on the servers behind WAN-links

Bacula file daemon.


  - Creating, and managing backup jobs on the central server - and the backup 
 media is supposed to be the NFS-mountpoint.

You would have to designate one server on each site as a Bacula storage
daemon with the local NAS as its storage. Backup to file is no problem
at all.


  - ... and of course - the data must not travel across the WAN-links

If you back up to a storage daemon on site, the only data going to the
central point (director) would be the catalog data (i.e. the index of
what was backed up from/to where and so on) going into a database.
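As a rough sketch (all names, addresses and paths here are hypothetical),
the per-site storage daemon would get a File device on the NFS mount:

   # bacula-sd.conf on the site server
   Device {
     Name = NASStorage
     Media Type = File
     Archive Device = /mnt/nas/bacula   # NFS mount point of the local NAS
     LabelMedia = yes
     Random Access = yes
     AutomaticMount = yes
     RemovableMedia = no
     AlwaysOpen = no
   }

and the central director would reference it with a Storage resource:

   Storage {
     Name = site1-sd
     Address = site1.example.com        # must be reachable by the clients, too
     SDPort = 9103
     Password = "sd-password"
     Device = NASStorage
     Media Type = File
   }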


Regards,
Christian Manal



Re: [Bacula-users] List all files from date

2011-01-27 Thread Christian Manal
On 27.01.2011 21:47, Tom Sommer wrote:
 Hi,
 
 A few questions concerning restores:
 
 - I'm looking for a way to print all files for a client, for a certain
 date (or just before, like when restoring), a simple list of all files
 (not just for a certain job, but a snapshot of all the files on a client,
 as they looked before a certain date)
 - Or.. Is it possible to simply restore all *.ini files on a client
 (research says no, hence the above question)
 
 Thanks a ton :)
 
 // Tom Sommer

Hi,

I remembered something like this was asked a while back and looked at
the archives. Maybe this will help you:

http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg44149.html


Regards,
Christian Manal



Re: [Bacula-users] Schedule / Restore Questions

2011-01-25 Thread Christian Manal
Hi,

 Schedule Question:
 
 I know how to schedule a full backup then incremental and/or 
 differentials within a month, say full backup day 1 or first sunday
 of the month and then diffs on sun 2-5 and incres on mon-sat, etc...
 
 How would one schedule a full backup every 2 months, diffs on sundays 
 2-5 and incrementals on all the other days?

I think you have two options here. The first one is to make your
schedule look something like this:

  Schedule {
Name = EveryOtherMonth

Run = Full on 1st sun jan at 00:00
Run = Differential on 1st sun feb at 00:00
Run = Full on 1st sun mar at 00:00
Run = Differential on 1st sun apr at 00:00
Run = Full on 1st sun may at 00:00
Run = Differential on 1st sun jun at 00:00
Run = Full on 1st sun jul at 00:00
Run = Differential on 1st sun aug at 00:00
Run = Full on 1st sun sep at 00:00
Run = Differential on 1st sun oct at 00:00
Run = Full on 1st sun nov at 00:00
Run = Differential on 1st sun dec at 00:00

Run = Differential on 2nd-5th sun at 00:00
Run = Incremental on mon-sat at 00:00
  }

The other would be to make use of the Max Full Interval statement for
the Job resource and omit Full runs in your schedule.
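A sketch of that second option (job name and referenced resources are
hypothetical):

   Job {
     Name = "data-backup"
     Type = Backup
     Client = myhost-fd
     FileSet = "Full Set"
     Schedule = "WeeklyCycle"       # contains only Differential/Incremental runs
     Pool = Default
     Max Full Interval = 2 months   # forces a Full once the last one is older
     Messages = Standard
   }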


 Restore Question:
 
 If one desires a restore and there is data in incrementals, 
 differentials and a full at the beginning. In the case of the incrementals
 and differentials where data is selected does it restore the latest 
 version of the Jobs that are selected or?
 
 ex:
 
 /etc/resolv.conf backed up in Full then changed since, JobID 1
 /etc/resolv.conf backed up in Diffs, changed again. Last Diff JobID 26
 /etc/resolv.conf backed up in Incre, JobID 37
 
 so if I input JobID's 1, 26, 37 and ask to restore /etc/resolv.conf will 
 it restore the
 latest version (from JobID 37)?

Yes, though it may depend on the order of the JobIDs if you input them
by hand. I don't know that for sure.


Regards,
Christian Manal



Re: [Bacula-users] Avoid Incremental/Differential backups while Full are running

2011-01-24 Thread Christian Manal
On 25.01.2011 03:30, Rodrigo Renie Braga wrote:
 Hello everyone.
 
 I have two file servers that *each* take up to 30 hours to run a Full
 Backup on the first Sunday of the month. But I also run an Incremental backup
 of these servers every day, and since I only let 1 job run at a time, the
 Incremental Backups on the first Monday of the month may wait a loong time
 to start to execute.
 
 My question is: is there a way to tell Bacula to just ignore an
 Incremental/Differential backup when its Full backup is running? I just
 don't want to see several Incremental jobs waiting to be executed...
 
 Thanks!


Hi,

take a look at Allow Duplicate Jobs and what follows:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00183
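A sketch of how that could look in the job definition (names are
hypothetical; check the exact cancel semantics against the docs before
relying on it):

   Job {
     Name = "fileserver-backup"
     Type = Backup
     Client = fileserver-fd
     FileSet = "Full Set"
     Schedule = "MonthlyCycle"
     Pool = Default
     Allow Duplicate Jobs = no
     Cancel Queued Duplicates = yes   # drop a queued Incremental while the Full runs
     Messages = Standard
   }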


Regards,
Christian Manal



Re: [Bacula-users] Excluding subversion folders

2011-01-10 Thread Christian Manal
On 10.01.2011 15:04, Guy wrote:
 Hi all,
 
 I would like to exclude any folder on a client that is under subversion.  All 
 Directories which are maintained by subversion have a .svn directory 
 structure under them.
 
 Can any clever people create a FileSet exclude which will skip any directory 
 which contains a .svn folder?
 
 Cheers,
 ---Guy


Hi,

there is a fileset option called ExcludeDirContaining (look at the
docs for more info), which basically excludes all directories and their
children that contain a certain file. I don't know if that also works
with directory names, though.

But if it doesn't, you could always run something like

   find / -type d -name .svn -exec touch {}/../.excludeme \;

as ClientRunBeforeJob.
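For instance (FileSet name hypothetical), combined with the touch
command above:

   FileSet {
     Name = "NoSvn"
     Include {
       Options {
         signature = MD5
         Exclude Dir Containing = .excludeme
       }
       File = /
     }
   }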


Regards,
Christian Manal



Re: [Bacula-users] Will not descend from / into /dev

2010-11-01 Thread Christian Manal
On 01.11.2010 12:19, eliassal wrote:
 Hi, I am running a job to back up my whole system in order to prepare a full
 backup for disaster recovery.
 In the job log, I see the following
 
 Using Device FileStorage
 2010-11-01 10:49:46 bacula-sd Labeled new Volume DRLocal0016 on device 
 FileStorage (/media/Backup).
   Wrote label to prelabeled Volume 
 DRLocal0016 on device FileStorage (/media/Backup)
 2010-11-01 10:49:46 bacula-fd /mnt/publiciomega1 is a different 
 filesystem. Will not descend from / into /mnt/publiciomega1
 2010-11-01 10:49:52 bacula-fd  /proc is a different filesystem. Will not 
 descend from / into /proc
 2010-11-01 10:54:32 bacula-fd  Could not stat /home/salam/.gvfs: 
 ERR=Permission denied
 2010-11-01 10:57:04 bacula-fd  /var/lib/nfs/rpc_pipefs is a different 
 filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
 2010-11-01 11:06:03 bacula-fd  /dev is a different filesystem. Will not 
 descend from / into /dev
 /media/Backup is a 
 different filesystem. Will not descend from / into /media/Backup
 
 I have both partitions in ext4 so I really can not understand why I am 
 getting this message and what is its impact on my backup.
 
 Thanks in advance
 


Hi,

have a look at the 'onefs' keyword in the FileSet resource:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00187


Bacula will not descend into other filesystems unless you explicitly tell
it to, or add the filesystem(s) to your FileSet.
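A minimal FileSet sketch (name hypothetical) that crosses mount points;
the fstype filter is optional, but keeps pseudo-filesystems like /proc
and /dev out:

   FileSet {
     Name = "Everything"
     Include {
       Options {
         signature = MD5
         onefs = no          # descend into mounted filesystems
         fstype = ext4       # only cross into ext4 mounts
       }
       File = /
     }
   }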


Regards,
Christian Manal



Re: [Bacula-users] Off site backup and restore

2010-10-28 Thread Christian Manal
On 28.10.2010 15:53, Kleber Leal wrote:
 Hi all,
 
 I am trying to deploy a backup with off-site tapes.
 First, I have a disk backup for daily backup and restore. It is working
 fine. Backup and restore jobs are working properly.
 Now I have to create a pool of tapes to maintain an off-site copy of data.
 I created a new pool and put all tapes for off site backup on it. I also
 created a Virtualfull job. This job is working perfectly, but when I need to
 restore files, my catalog is trying to restore from my off site pool and
 this is not expected for me, because these tapes are not on site.
 
 How can I tell my catalog to never restore from the off-site tape pool,
 unless it is really needed?
 
 Kleber Leal
 


Hi,

you may have another problem on your hands, if you use a VirtualFull
job. The virtual full backup will be seen as the latest real full
backup for the respective job. Any differential and incremental jobs
following are done in relation to it and not your on-site full backup.

I use copy jobs to get my backups from disk to tape, so the tapes are
only used for restore, after the jobs on disk have been pruned.


Regards,
Christian Manal



Re: [Bacula-users] backup upgraded to FULL for no apparent reason

2010-09-14 Thread Christian Manal
On 14.09.2010 08:36, Silver Salonen wrote:
 Hello.
 
 I use Bacula 5.0.2 on FreeBSD 8.0. I have a huge archive-type backup that has 
 no full backup in its schedule:
 
 Schedule {
 Name = Archive
 Run = Level=Differential 1st fri at 23:05
 Run = Level=Incremental 2nd-5th fri at 23:05
 Run = Level=Incremental sat-thu at 23:05
 }
 
 The job builds its fileset dynamically with /usr/bin/find /path/to/backup 
 -type f -mtime +90d -print - it takes a while, but does its job. From 
 time-to-time the job just gets upgraded to full.. like today:
 
 13-Sep 23:05 velvet-dir JobId 9135: 13-Sep 23:05 velvet-dir JobId 9135: No 
 prior or suitable Full backup found in catalog. Doing FULL backup.
 13-Sep 23:05 velvet-dir JobId 9135: Start Backup JobId 9135, 
 Job=userdata-archive.2010-09-13_23.05.01_05
 
 Any ideas why this happens?
 
 Is there a way to continue with incremental backups once such a full backup 
 has been cancelled (eg. modifying Bacula DB manually)? It would really hurt 
 to start from scratch with this archive.
 


Hi,

did you set Ignore FileSet Changes = yes in your FileSet resource? If
not, the job level will be upgraded to Full whenever the FileSet changes.
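For reference, the directive sits at the FileSet level, e.g. (FileSet
name hypothetical, file list taken from your description):

   FileSet {
     Name = "archive-files"
     Ignore FileSet Changes = yes   # don't upgrade to Full when the generated list changes
     Include {
       Options { signature = MD5 }
       File = "\\|/usr/bin/find /path/to/backup -type f -mtime +90d -print"
     }
   }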


Regards,
Christian Manal



Re: [Bacula-users] backup upgraded to FULL for no apparent reason

2010-09-14 Thread Christian Manal
On 14.09.2010 09:30, Silver Salonen wrote:
 On Tuesday 14 September 2010 10:13:49 Christian Manal wrote:
 On 14.09.2010 08:36, Silver Salonen wrote:
 Hello.

 I use Bacula 5.0.2 on FreeBSD 8.0. I have a huge archive-type backup that 
 has no full backup in its schedule:

 Schedule {
 Name = Archive
 Run = Level=Differential 1st fri at 23:05
 Run = Level=Incremental 2nd-5th fri at 23:05
 Run = Level=Incremental sat-thu at 23:05
 }

 The job builds its fileset dynamically with /usr/bin/find /path/to/backup 
 -type f -mtime +90d -print - it takes a while, but does its job. From 
 time-to-time the job just gets upgraded to full.. like today:

 13-Sep 23:05 velvet-dir JobId 9135: 13-Sep 23:05 velvet-dir JobId 9135: No 
 prior or suitable Full backup found in catalog. Doing FULL backup.
 13-Sep 23:05 velvet-dir JobId 9135: Start Backup JobId 9135, 
 Job=userdata-archive.2010-09-13_23.05.01_05

 Any ideas why this happens?

 Is there a way to continue with incremental backups once such a full backup 
 has been cancelled (eg. modifying Bacula DB manually)? It would really hurt 
 to start from scratch with this archive.



 Hi,

 did you set Ignore FileSet Changes = yes in your FileSet's
 Options-Section? If not, the job level will be elevated to full, if
 the FileSet changes.


 Regards,
 Christian Manal
 
 Yes I did. Otherwise the job would be upgraded to full every time :)
 

Well, it could have been that the dynamically generated FileSet only
changes now and then.


 This upgrade happened now after 35 days, ie the full backup was done 35 days 
 ago, on 10th of August. Meanwhile only incrementals/differentials were done.
 

What about your retention periods? Maybe the job or volume of the last
full backup got pruned.


Regards,
Christian Manal



Re: [Bacula-users] backup upgraded to FULL for no apparent reason

2010-09-14 Thread Christian Manal
On 14.09.2010 09:53, Silver Salonen wrote:
 On Tuesday 14 September 2010 10:41:38 Christian Manal wrote:
 On 14.09.2010 09:30, Silver Salonen wrote:
 On Tuesday 14 September 2010 10:13:49 Christian Manal wrote:
 On 14.09.2010 08:36, Silver Salonen wrote:
 Hello.

 I use Bacula 5.0.2 on FreeBSD 8.0. I have a huge archive-type backup that 
 has no full backup in its schedule:

 Schedule {
 Name = Archive
 Run = Level=Differential 1st fri at 23:05
 Run = Level=Incremental 2nd-5th fri at 23:05
 Run = Level=Incremental sat-thu at 23:05
 }

 The job builds its fileset dynamically with /usr/bin/find 
 /path/to/backup -type f -mtime +90d -print - it takes a while, but does 
 its job. From time-to-time the job just gets upgraded to full.. like 
 today:

 13-Sep 23:05 velvet-dir JobId 9135: 13-Sep 23:05 velvet-dir JobId 9135: 
 No prior or suitable Full backup found in catalog. Doing FULL backup.
 13-Sep 23:05 velvet-dir JobId 9135: Start Backup JobId 9135, 
 Job=userdata-archive.2010-09-13_23.05.01_05

 Any ideas why this happens?

 Is there a way to continue with incremental backups once such a full 
 backup has been cancelled (eg. modifying Bacula DB manually)? It would 
 really hurt to start from scratch with this archive.



 Hi,

 did you set Ignore FileSet Changes = yes in your FileSet's
 Options-Section? If not, the job level will be elevated to full, if
 the FileSet changes.


 Regards,
 Christian Manal

 Yes I did. Otherwise the job would be upgraded to full every time :)


 Well, it could have been, that the dynamically generated FileSet only
 changes now and then.
 
 The dataset is big enough to contain at least a modified file for every day :)
 
 This upgrade happened now after 35 days, ie the full backup was done 35 
 days ago, on 10th of August. Meanwhile only incrementals/differentials were 
 done.


 What about your retention periods? Maybe the job or volume of the last
 full backup got pruned.
 
 Retention period of the full volume is 10 years. The client's file and job
 retention periods are 6 months. So I don't think the configuration of any of
 these parameters could have caused that.
 
 --
 Silver
 


Then I'm out of ideas. Sorry.

Regards,
Christian Manal



Re: [Bacula-users] bat ABORTING after director's hostname change

2010-09-13 Thread Christian Manal
On 13.09.2010 13:02, xunil321 wrote:
 
 Dear all,
 we are running Bacula 5.0.3 with MySQL under SLES 11 and, after some
 testing, changed the hostname of the system where the director is installed
 from 'test-pc' to 'backup-server' via the /etc/hosts file.
 Of course we also adapted the names in the dir, fd and sd .conf files.
 Starting BAT now we get this message
 
  bat ABORTING due to ERROR in console/console.cpp:155
  Failed to connect to 'test-pc' populate lists
 
 In what files do we have forgotten to substitute 'test-pc' by
 'backup-server'?
 Note:
 After adding 'test-pc' as an alias to the /etc/hosts file, everything runs
 fine again.
 
 Many thx for any hint!
 Rainer
  
 
 


Hi,

did you update your bconsole.conf/bat.conf? That's where BAT gets the
address of the director from.
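For reference, that's the Director resource in bat.conf (director name
and password here are hypothetical):

   # bat.conf
   Director {
     Name = backup-server-dir
     DIRport = 9101
     Address = backup-server       # the new hostname
     Password = "console-password"
   }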


Regards,
Christian Manal
