Re: [Bacula-users] backup order in 2 devices

2013-09-08 Thread Adrian Reyer
On Sat, Sep 07, 2013 at 01:24:59PM -0400, Tony Peña wrote:
 I got tapelibrary and a space disk to backup my files
 and I have a mix way to save the data, because have this:
 Today need do a full backup
 TapeLibrary - Full
 StorageDisk - Incremental
 Next Day:
 TapeLibrary - Incremental
 StorageDisk - Incremental

I second Rudolph here and suggest you look at Migration and Copy jobs.
I make sure I have enough disk space and use Copy jobs rather than
Migration jobs. That way the files stay on disk and I don't need to go
and change tapes to restore recently lost data. It looks something
like this:
Job Retention: 13 months
Tape Pool Storage Retention: 13 months
Disk Pool Storage Retention: 40 days

After the daily backup, all jobs are copied to the tape pool.
If the client systems are very busy or behind slow links, I use
VirtualFull instead of Full backups; this is also why I use a 40-day
retention: I usually do monthly full backups, weekly differentials and
daily incrementals, so with 40 days I can be quite sure all the data
needed for a VirtualFull is still available on disk.
If you try alternating targets for jobs with the same client name, you
will run into trouble while restoring. Incrementals are based on the
most recent backup, no matter where it went, tape or disk. In other
words, if you lose either the disk or the tape, or they have different
retention times, you will not be able to restore your data. Use
migration/copy jobs instead to distribute your media.
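The copy-instead-of-alternate setup can be sketched roughly as follows. This is a sketch, not a complete configuration: pool names, storage names and retention values are illustrative, and most required directives are elided.

```
Pool {
  Name = Disk-Pool                # daily backups land here first
  Pool Type = Backup
  Storage = DiskStorage
  Volume Retention = 40 days
  Next Pool = Tape-Pool           # target of the copy jobs
}
Pool {
  Name = Tape-Pool
  Pool Type = Backup
  Storage = TapeLibrary
  Volume Retention = 13 months
}
Job {
  Name = "copy-to-tape"
  Type = Copy
  Selection Type = PoolUncopiedJobs   # copy every job not yet copied out of Disk-Pool
  Pool = Disk-Pool
  # Client/FileSet/Messages are still required syntactically for a Copy job
}
```

A restore then normally reads from disk; the tape copies only come into play once the 40-day disk retention has expired.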

For the practical approach: you can only do copy/migration/VirtualFull
from one pool to another, namely the one named in the 'Next Pool'
statement. However, if you need to target the same pool, you can exploit
the behaviour described above (the most recent backup is used as the
base) by creating a fake pool that has your disk pool as its Next Pool.
As long as you make sure you have enough storage devices to read from
and write to the same pool, you will be fine. I use vchanger for this.
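A minimal sketch of that fake-pool trick (names are placeholders): the VirtualFull is scheduled into a pool that no job ever writes to directly, and whose Next Pool points back at the real disk pool, so the job reads from and writes into the same physical pool.

```
Pool {
  Name = Disk-Pool
  ...
}
Pool {
  Name = VFull-Dummy              # never written to directly
  Next Pool = Disk-Pool           # VirtualFull output lands back in Disk-Pool
  ...
}
# then run the consolidation as:
#   run job=YourJob level=VirtualFull pool=VFull-Dummy
```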

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart

--
Learn the latest--Visual Studio 2012, SharePoint 2013, SQL 2012, more!
Discover the easy way to master current and previous Microsoft technologies
and advance your career. Get an incredible 1,500+ hours of step-by-step
tutorial videos with LearnDevNow. Subscribe today and save!
http://pubads.g.doubleclick.net/gampad/clk?id=58041391&iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Help: Cannot find any appendable volumes

2013-09-08 Thread Adrian Reyer
On Sun, Sep 08, 2013 at 08:27:32AM +0200, Luca Bertoncello wrote:
 Well, now I tried to reinitialize the disks (delete all files, initmag,
 label barcodes), but it didn't help. I always get this error.
 I searched Google for this error and I found some pages saying to
 reinitialize the disks. And so I made, but it doesn't run...
 Can someone help me and say what I have to do?

'delete all files' is which one:
- deleting the files from disk
- deleting the volumes from bacula
- pruning the volumes
The first one won't help, and the latter is a rather drastic approach.
You should check your retention times, your maximum volume sizes and
your backup sizes. The retention time only starts counting down once the
volume is marked Full/Used. In most configurations, volumes that are out
of retention are only pruned after some job on that pool has finished
successfully.
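To see where each volume stands in that cycle, the bconsole volume commands can help. A sketch, with pool and volume names as placeholders:

```
*list volumes pool=File-Pool            # check VolStatus and retention per volume
*update volume=Vol0001 volstatus=Used   # mark a volume Used so retention starts counting
*prune volume=Vol0001 yes               # prune it once retention has actually expired
```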

Regards,
Adrian




Re: [Bacula-users] Including unmodified GPL version of Bacula inside a closed source commercial product

2013-08-27 Thread Adrian Reyer
On Tue, Aug 27, 2013 at 05:09:53PM +0200, Nuno Brito wrote:
 What is your opinion?

If you need legal advice, ask a lawyer.

 Does the described usage counts as a GPL compliant usage of the GPL 
 Bacula within a closed-source system?

Making cron call Bacula within your closed system doesn't violate the
GPL.
You may use Bacula in closed-source systems as well as in open-source
systems. If you distribute Bacula within your product, you have to
fulfill the license requirements; in this case that should mean
providing the Bacula source the very same way, along with the Bacula
license. If you need to modify Bacula for additional features, you have
to publish the source for those modifications just as well.
If your program links directly against Bacula code, your program falls
under the GPLv2 as well, and you will have to publish its source the
same way you publish your program.
A good example is the various WLAN routers out there: many of them ship
the source of the GPL-covered parts on CD or as a download to fulfill
the license of those specific parts.
Apart from that, whatever your program does, consider making it open
source. There are many working business models that don't require a
closed-source license, and if your software includes a backup system
like Bacula it is probably not exactly zero-maintenance.

Regards,
Adrian



Re: [Bacula-users] virtual fullbackup questions, diffs over wan/vpn; time difference between full/ diff backup for virtual full backup

2013-05-08 Thread Adrian Reyer
On Wed, May 08, 2013 at 12:00:43PM +0200, Stefan Fuhrmann wrote:
 We take the storage to datacenter, doing full backups
 after that we take the storage to the firm and doing only diff- backups over 
 wan/vpn to firm- storage
 doing virtual full backups then.
 Is taht a running way?

Should work.

 Another question: How old can the diff-backup be doing virtual full 
 backups? ex. full backup is half a year old and we are having hundrets of 
 diff-backups. 
 Is it possible/a good idea doing virtual full backups with that great time 
 difference between full- and diff- backups?

Well, once you do a VirtualFull, you have a new Full to base the next
one on.
The rule is always: most recent Full + most recent Diff + all
Incrementals since the last Diff. If you carry your system to the data
center, do a Full there, take it back, run weekly differentials for 3
years and only then a VirtualFull, you need your 3-year-old Full backup
plus the most recent differential.
However, as a Diff transfers all data changed since the last Full, you
will very likely have to transfer quite a lot of data in a single Diff
after some time. I suggest doing VirtualFulls more often, and perhaps
even skipping Diffs entirely in favour of Incrementals only.
The usual reason to do Differential backups is to save tape changes
while restoring. Depending on the available space and the technology you
store your backups on, e.g. monthly VirtualFull + daily Incrementals
might be a good choice.
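Such a scheme could be expressed as a single schedule, roughly like this (names and times are illustrative):

```
Schedule {
  Name = "MonthlyVFullCycle"
  # consolidation happens on the backup server, no client/WAN traffic:
  Run = Level=VirtualFull 1st sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}
```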

Regards,
Adrian



Re: [Bacula-users] Bacula job replication to other machine

2013-05-02 Thread Adrian Reyer
On Thu, May 02, 2013 at 01:56:14PM +0200, Iban Cabrillo wrote:
   We are running the bacula 5.2.6+dfsg-8 on debian. All my backups are on
 tape (we are using spool directory to increase the tape performance).
  The most suggested option, looking on the web, is to use rsync to
 replicate the volume. I would like to know if there is a way to use
 the spool directory to replicate the jobs on the fly.
   Or maybe there is a better way to do this?

Unfortunately Bacula can't write the same job to 2 media in one go.
I solved a similar scenario for myself by doing:
- backup to disk
- a Copy job to copy the disk backups to tape pool 1
- a Copy job to copy the disk backups to tape pool 2

In my setup, the job has a lifetime of 13 months, the disk pools 35
days, one tape pool 13 months, the other 2 months.
If you can afford the disk space for one set of backups, but not for as
many days, you can run a copy job plus a migrate job instead of 2 copy
jobs, or implement some other scheme to recover the disk space.
I use SQL to select the jobs I want to copy.
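The SQL-driven selection looks roughly like this; pool name, job name and the query itself are illustrative placeholders, and the exact query depends on your catalog schema and naming:

```
Job {
  Name = "copy-disk-to-tape1"
  Type = Copy
  Selection Type = SQLQuery
  # hypothetical query: all terminated backup jobs currently in the disk pool
  Selection Pattern = "SELECT Job.JobId FROM Job, Pool WHERE Pool.PoolId = Job.PoolId AND Pool.Name = 'Disk-Pool' AND Job.Type = 'B' AND Job.JobStatus = 'T'"
  Pool = Disk-Pool    # source pool; its Next Pool decides the target tape pool
}
```

For the second tape pool you would run a second such job with a different source/Next Pool pairing.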

Regards,
Adrian



Re: [Bacula-users] multi-homed SD single-homed FDs

2013-05-02 Thread Adrian Reyer
On Thu, May 02, 2013 at 02:21:15PM -0500, Adam Thompson wrote:
 I'm trying to setup a multi-homed DIR + SD to service two different VLANs 
 that are firewalled from each other. 

They are not firewalled from each other anymore once you add a
dual-homed host.
Why not back up from a third VLAN that is firewalled against the other
two to accept only the necessary backup traffic?

Regards,
Adrian



Re: [Bacula-users] Postgres vs SQLite

2013-05-01 Thread Adrian Reyer
On Wed, May 01, 2013 at 08:40:51AM -0700, Tim Gustafson wrote:
 I've used MySQL in the past, and Bacula is just apparently not
 optimized for it (or vice-versa, I'm not sure which).  We run a fairly
 beefy MySQL server and we have hundreds of apps and web sites that all
 use that server and all of them work extremely well but when we used
 it for Bacula, the query that it used to build a list of files to
 restore took *ages* - in some cases more than 24 hours, and in some
 cases it never finished at all - for our data set.  When we switched

I like having one database server for several/almost all applications;
however, I always run a separate one for Bacula. The Bacula workload is
different from most other database applications, as it is mainly writes.
I'd personally go with Postgres.

Regards,
Adrian



Re: [Bacula-users] very slow virtualfull job

2013-04-29 Thread Adrian Reyer
On Mon, Apr 29, 2013 at 08:33:42AM +, James Harper wrote:
 When loading the data into postgresql absolutely crawled along (~50kb/second 
 disk write speed with 100% iowait) I knew I had a problem.
 Something, somewhere has changed in my system that absolutely kills tiny sync 
 writes. Or alternatively, something has changed in my system that makes mysql 
 do tiny sync writes.

What do you expect from sync writes where Bacula is concerned?
I don't use sync writes there, as I am quite sure they won't give me a
benefit: as soon as the DB server dies the job has failed anyway, you
can't add the remaining attributes in a reasonable way later, and both
SD and Director refuse to work without a database server.
I suggest turning sync writes off.
This is valid for Bacula, not for some random other database
application.
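For a Postgres catalog, the least invasive way to drop the per-insert fsync is asynchronous commit rather than disabling fsync entirely. A sketch, assuming the server is dedicated to the Bacula catalog:

```
# postgresql.conf -- on a cluster that serves only the bacula catalog
synchronous_commit = off   # commits are flushed in batches; a crash loses at
                           # most a fraction of a second of attribute inserts,
                           # i.e. part of a job that would have failed anyway
```

`fsync = off` would go further, but it risks catalog corruption on a crash rather than just a few lost rows.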

 I'm working on trying to track down wtf is going on there, but in the 
 meantime I have set innodb_flush_log_at_trx_commit=0 which means it won't run 
 an fsync after each tiny little write but will instead wait for around a 
 second then flush everything. This means I stand to lose 1 second of database 
 commit in the event of a crash, but I also probably lose the whole backup job 
 anyway so I don't see it as a loss.

Exactly.

Regards,
Adrian



Re: [Bacula-users] very slow virtualfull job

2013-04-26 Thread Adrian Reyer
On Thu, Apr 25, 2013 at 11:41:37PM +, James Harper wrote:
 What could have gone wrong with my mysql to make this happen? I've tried 
 rebooting it.

You are very likely using MySQL with MyISAM tables, which is a very bad
combination for Bacula. It will be better with InnoDB tables and MySQL
tuned correctly for this many inserts. However, Postgres can be tuned
the same way and has the additional benefit of being able to use parts
of indices. As every insert has to update both the tables and the
indices, you end up with far fewer writes with Postgres.
I went the route MySQL 4GB MyISAM -> 12GB MyISAM -> 12GB InnoDB ->
Postgres 12GB myself.

 How difficult is it to convert an existing installation over to postgresql? 
 I've been meaning to do this for a while and it may be faster than trying to 
 resolve the issue...

It is not that hard. If I remember right, the easiest way is something
like this:
1. create a new Postgres bacula DB
2. dump the table contents from MySQL
3. modify the dump to suit Postgres
4. insert the dump into Postgres

Steps 2-4 I did in one line, issuing something like
  mysqldump ... | sed ... | psql
The main thing I had to do with 'sed' was replacing the different zero
timestamps: you get 0000-00-00 00:00:00 from MySQL, while Postgres
expects 1970-01-01 00:00:00 instead, if I remember correctly.
I think I used
http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/
back then as a hint, but it needed further adjustments, e.g. to the
sequences.
http://www.bacula.org/manuals/en/catalog/catalog/Installi_Configur_PostgreS.html
is the base for the above post.
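The timestamp rewrite can be tested on its own; the full pipeline would then be something like `mysqldump --compatible=postgresql ... bacula | <this filter> | psql bacula` (exact dump flags vary by version, so treat the pipeline as a sketch):

```shell
# replace MySQL's zero timestamps with the epoch, as Postgres rejects 0000-00-00
echo "INSERT INTO Media VALUES ('0000-00-00 00:00:00');" \
  | sed "s/0000-00-00 00:00:00/1970-01-01 00:00:00/g"
# -> INSERT INTO Media VALUES ('1970-01-01 00:00:00');
```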

Regards,
Adrian



Re: [Bacula-users] Files of an incomplete backup

2013-04-13 Thread Adrian Reyer
On Sat, Apr 13, 2013 at 12:45:50PM -0430, LDC - Gustavo El Khoury wrote:
 That's not really an option... The other day I purged the tape where the 
 incomplete job was, with purge jobs volume, and I lost an incremental backup 
 I had, that was stored in that tape. Your suggestion will only work if the 
 given tape would ONLY contain the incomplete backup, which is hardly the case.

As has already been said, purging tapes is all or nothing.
However, there are 'Migrate' jobs: you can migrate the good jobs
somewhere else.
'Somewhere else' needs its own pool and storage device. If you only have
one tape drive, you can go via a temporary disk storage pool.
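A sketch of such a Migration job (names are placeholders; the source pool's Next Pool must name the temporary disk pool, from where a second migration can move the jobs back onto a fresh tape):

```
Job {
  Name = "rescue-good-jobs"
  Type = Migrate
  Selection Type = Job                 # or SQLQuery for an explicit jobid list
  Selection Pattern = "backup-client1" # regex on job names to migrate
  Pool = Tape-Pool                     # source; Next Pool -> Scratch-Disk-Pool
}
```

Once the good jobs have been migrated off, the tape can be purged without losing them.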

Regards,
Adrian



Re: [Bacula-users] Strange issue with backup size

2013-04-07 Thread Adrian Reyer
On Sun, Apr 07, 2013 at 09:03:34PM +0200, Radosław Korzeniewski wrote:
 I think it is not possible to properly handle encrypted sparse data blocks
 without compromising security. The main data block size is 64kB long, so
 encrypted block should be more than 64kB long. Now, if we have a sparse
 block then its size is tens of bytes instead of 64kB, so encrypted block
 will be at the tens of bytes too not 64kB. So, if we have an encryption
 stream with a number of 64kB blocks (block boundary information is
 available on volume) and suddenly we will got a short block then for sure
 it will be a sparse block (I'm sure sparse block has its own stream
 number), then we can predict content. It is not good for security if we can
 predict original content. Think about it.

I am no mathematician, but I don't really see how sparse blocks
compromise security in any real way. All an attacker learns is that a
file claiming to be 10G is only 10M; if that really compromised the
encryption of the actual content, I'd regard the algorithm used as
genuinely broken.

Regards,
Adrian



Re: [Bacula-users] Strange issue with backup size

2013-04-05 Thread Adrian Reyer
Hi Alberto,

On Thu, Apr 04, 2013 at 09:03:51AM +0200, Alberto Caporro wrote:
 I backed up our mail server, which is hosted in a virtual machine
 with two virtual disks sized at 20G and 200G respectively. I'm only
 backing up relevant files (/etc, /root and /opt) and measured on the
 server itself, the total size amounts to slightly more than 56G.

Please rethink that. Unless you have taken further steps, if your box
crashes you won't even know which packages were installed, so your /etc
will be mostly useless. And even if you do know which packages you had
installed, you would need to reconfigure them with debconf, as the
information from /var is missing.
I strongly recommend always backing up the whole server. The few hundred
MB of system data saved is never worth the time you have to spend
resolving all these configuration issues.

Regards,
Adrian



Re: [Bacula-users] Strange issue with backup size

2013-04-05 Thread Adrian Reyer
Hi Alberto,

On Fri, Apr 05, 2013 at 12:58:22PM +0200, Alberto Caporro wrote:
 Hi Adrian, thanks for your advice, I was actually already thinking of 
 switching to a full backup strategy; unfortunately that would not solve 
 tho size issue, which at this point is becoming quite puzzling :-)

Sorry, I had read your original issue as already solved.
Perhaps the 'estimate' command is helpful:
estimate job=YOURJOB level=Full listing
should give you a file listing of what would actually be backed up;
perhaps that sheds some light.
Other possibilities depend on your mail server software, storage format
and filesystem. How do the sizes compare between:
- df
- du -s dir
- tar cf - dir | wc
- tar cSf - dir | wc
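The `tar cf`/`tar cSf` comparison matters because sparse files report a much larger apparent size than the blocks they actually occupy, and a non-sparse-aware backup reads them in full. A quick illustration (the path is hypothetical):

```shell
# create a 100 MB file with no allocated data blocks
truncate -s 100M /tmp/sparse_demo.img
stat -c 'apparent=%s bytes, blocks=%b' /tmp/sparse_demo.img
# apparent size is 104857600 bytes, while almost no blocks are allocated
rm /tmp/sparse_demo.img
```

If mailbox or database files are sparse, `du` and `tar cSf` will be close to the real usage, while `tar cf` matches the inflated backup size.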

Regards,
Adrian



Re: [Bacula-users] Bacula Host check problem

2013-02-27 Thread Adrian Reyer
On Tue, Feb 26, 2013 at 01:46:55AM -0800, goorooj wrote:
 ok, with your informations and after a search i found out what's gone wrong: 
 i am on a debian system and have to enable /bin/bash for bacula user to su.
 http://wiki.bacula.org/doku.php?id=sshtunnel


su - bacula -s /bin/sh
would have been enough for a temporary shell as user 'bacula'. The '-'
is important, as it gives you the actual environment of the bacula user,
e.g. $HOME.
For the ssh call itself, you might consider
  ssh -t
to allocate a pseudo-terminal, if the job you want to run expects one.

Regards,
Adrian



Re: [Bacula-users] VirtualFull and NextPool definitions, good practice to create a loop ?

2013-02-24 Thread Adrian Reyer
Hi Christian,

On Fri, Feb 22, 2013 at 09:23:05AM +0100, Masopust, Christian wrote:
 I've now defined the necessary two pools but am unsure now if it is a good 
 practice to
 create a loop in these pools:
 I googled a lot and found some concerns that these configuration can cause a 
 deadlock, is that right?

If you only have one or two drives, you will quite quickly run into a
deadlock for sure. You need to make sure enough drives are available for
at least 1 job writing the VirtualFull and 1 job reading the last
virtual full/differential/incremental backups. If you do backup-to-disk,
I suggest a vchanger with many drives.

 If not using that loop I found that the Virtual Full backups alternate 
 between the two pools but the
 differential and incremental are always done to the first pool.  What I would 
 like is that all diff and incr
 backups are done to that pool that has the most recent full backup and I 
 think that will be possible
 by defining the loop above, right?

Not really with that loop alone, and if I understand it correctly, not
directly with plain Bacula config at all. You would need two alternating
schedules for the same job that select the right pool. Bacula defines
target pools, but it doesn't actually check the pool when it collects
the data for a VirtualFull: it just takes whatever it needs for the
complete virtual full, regardless of which pools it resides in.

 So how do you configure your Virtual Full?  What is the best practice for 
 it?

An alternative is to just have your backup pool for everyday backups
and a dummy VirtualFull pool for the VirtualFull run, with your normal
backup pool as its 'Next Pool'; see
http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084
for a similar question, referenced from
http://article.gmane.org/gmane.comp.bacula.user/69268

E.g.:
Pool {
  Name = Pool1
  ...
}
Pool {
  Name = Pool2
  ...
  Next Pool = Pool1
}
Job {
  Name = Job1
  Pool = Pool1
  ...
}
Schedule {
  Name = Schedule1
  Run = Level=VirtualFull Pool=Pool2 1st sat at 16:05
  Run = Level=Differential 2nd-5th sat at 20:05
  Run = Level=Incremental  sun-fri at 21:05
}

 And... how likely is it that a deadlock can/will occur?

Without enough drives: 100%.
The minimum number for 'enough' is 2; if other jobs run at the same
time, you need more. 'Spool Data' might help, but technically it doesn't
sound especially bright to copy things from disk via disk to disk
instead of from disk to disk. The same goes for tape: Tape-Disk-Tape
instead of Tape-Tape, although with tapes it might have a benefit in
terms of tape wear.

Regards,
Adrian



Re: [Bacula-users] Corrupted catalog on bad drive - please help!

2013-02-11 Thread Adrian Reyer
On Mon, Feb 11, 2013 at 04:09:52PM -0500, Michael Stauffer _g wrote:
 This seems to be a disk error, but I ran 'xfs_repair' anyway but it didn't
 fix anything, not surprising.

You could try a badblock scan as well.

 Catalog backups:
 There's a script running on the machine that backs up the catalog nightly to
 another disk using /usr/lib/bacula/make_catalog_backup and I have several
 days worth of these. This disc is backed up by bacula too. HOWEVER, looking
 at catalog.txt log (in the same dir as these backups, it looks like a log
 from make_catalog_backup ), it shows this error's been happening since Nov
 2012, so my recent catalog dumps aren't going to be good.

You could always try deleting the offending line from the dump.

  what's next?
 After replacing the bad disk:
 Can i reconstruct the catalog just from the backup data itself? It's on
 tapes, fwiw. How do I go about doing this?

You can do that with bscan; it will take quite some time, though.

 If I can get one of the old catalog dumps off one of the backup tapes, can I
 recontruct from that even though there have been backups since then?

If you find an old dump and restore it, you should be fine by just
running bscan over all tapes that were used more recently than the dump
you restored.
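A bscan invocation for that would look roughly like the following; device, volume name and config path are placeholders, and it is worth double-checking the flags against your version's bscan manpage before running:

```
# re-create catalog job/file records (-s) and media entries (-m) from one tape
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Vol0001 /dev/nst0
```

Run it once per volume written after the restored dump, oldest first.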

 Do I resolve this with Bacula 3.0.1 before upgrading to the latest? Seems to
 make sense.

I'd resolve it with 3.0.1, as that avoids any catalog-upgrade hassle
that might otherwise kick in.

Be aware that I only started with Bacula 5.0 and have never used 3.0
myself.

Regards,
Adrian



Re: [Bacula-users] full and incremental backups

2013-02-02 Thread Adrian Reyer
Hi,

On Fri, Feb 01, 2013 at 09:56:21AM +0200, Süleyman Kuran wrote:
 Now the problem; full and incremental backups are not bound to
 pools, bacula just picks up the last full backup to compare,
 regardless of the pool, when preparing for the incremental. When I
 remove the cartridge out of the office, there is a chance that the
 corresponding full backups are still located in the library, and my
 monthly backup is useless without a recent full backup.
 How can I make sure my monthly backup consists of a full backup and
 incrementals derived from the full *on the same media*?

What about a copy job, with the jobids selected by SQL? You should be
perfectly able to create a valid and complete backup that way. The
drawback is that you need either more than 1 tape drive or sufficient
disk storage to buffer it in between.

Regards,
Adrian



Re: [Bacula-users] Backup ESXi Virtual Machines with SCP

2013-02-02 Thread Adrian Reyer
On Fri, Feb 01, 2013 at 06:45:43AM -0500, Michael D. Wood wrote:
 If its useful to you or you can fit it to your needs, here it is.
 http://www.itsecuritypros.org/backup-esxi-virtual-machines-with-scp/

I actually don't have an active ESXi setup at the moment, but use other
solutions, so maybe I don't understand the ESXi way of doing things.
However, I have not seen a step there that shuts down the guests or
otherwise prevents the image from being changed during the copy. So this
is suitable as a basic disaster-recovery image, combined with an
additional file-level backup on the virtual machine itself, right?

Regards,
Adrian


Re: [Bacula-users] Tutorial idea: supply everyone with a bacula-dir bacula-sd

2013-01-29 Thread Adrian Reyer
Hi Dan,

On Tue, Jan 29, 2013 at 11:23:17AM -0500, Dan Langille wrote:
 Bring a computer pre-installed with N running instances of bacula-dir 
 and bacula-sd, each with their own Catalog.
 This server would run one instance of PostgreSQL, containing one 
 Catalog for each Attendee.
 Each Attendee's home directory would contain bacula-dir.conf and 
 bacula-sd.conf.
 Each will need sudo so they can stop and start THEIR instance of the 
 daemons.
 Each instance will need to listen on a different set of ports.

All my Bacula installations run bacula-dir and bacula-sd in individual
Linux-VServers (http://linux-vserver.org/), basically because I use them
everywhere anyway. They have their own IP addresses and are accessible
via them. A user can have root inside to do everything I'd imagine is
needed for a Bacula tutorial, but they can't change IPs or add iptables
rules.
From the host's view, each vserver is just a directory tree; you can
e.g. tar them up, stash them away and extract them again later for the
next attempt. In the special case of Linux-VServer there is an extra
feature called 'unification': binaries can be hardlinked between the
different VServers, with copy-on-write link breaking added. That saves
space on disk (not important), but because they are the same inode in
all VServers, the Linux shared memory management loads the static parts
only once for all of them, and this saves quite a bit of RAM if you have
many instances.
Similar things are done on Linux by OpenVZ or LXC; on Solaris and the
various BSD variants the equivalent is likely called a jail or
container.

Regards,
Adrian


Re: [Bacula-users] Tutorial idea: supply everyone with a bacula-dir bacula-sd

2013-01-29 Thread Adrian Reyer
On Tue, Jan 29, 2013 at 04:46:25PM -0500, Dan Langille wrote:
 I was also thinking about jails for this.  That might actually be easier.
 Why? Every jail is the same.
 But I'd install only one instance of PostgreSQL.

This is basically what I'd do as well. More PostgreSQL instances just
need more memory to be usable at all.

Regards,
Adrian


Re: [Bacula-users] Set more then 3 pools in a backup job

2013-01-28 Thread Adrian Reyer
On Mon, Jan 28, 2013 at 04:21:24PM +0100, stefano scotti wrote:
 There are 4 different rotation rules and schedules... not three.
 How am i supposed to solve this?
 With my own scripts, i would define 4 pools, and a job type for each pool:
   Level 0 Pool  Type:Full
   Level 1 Pool  Type:Differential
   Level 2 Pool  Type:Incremental
   Level 3 Pool  Type:Incremental

Be careful here about what is based on what:
a Differential has all changes since the last Full;
an Incremental has all changes since the last run of any of
Full/Differential/Incremental.
If you run (days numbered 1-7, over four weeks)
1234567123456712345671234567
FIIIIIIDIIIIIIDIIIIIIDIIIIII
and delete the first 2 differentials, your 1st group of 6 Is and your
last group of 6 Is would still be usable, but the 2nd and 3rd groups of
6 Is are useless as they have no base to work on.
Now if you extend this with your 3-hourly incrementals (i), you might
imagine a week looks like this:
DiiiiiiiIiiiiiiiIiiiiiiiIiiiiiiiIiiiiiiiIiiiiiiiIiiiiiii
But it won't work like this: if you delete any of the i, you won't be
able to completely restore to any following I or i point in time. In
effect it behaves like
DIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
because every incremental, daily or 3-hourly, is based on the run
immediately before it.
I think there is no easy way out if this is about used backup space. One
way is to replace the daily I with D and then actually run the 3-hourly
i as I. Depending on the amount of changed data, you might also convert
the weekly D into F.
You can try 2 different job sets instead, but you will still have more
full backups than you'd like; for the second set you only need to keep
the latest F, the latest D and all I since that D.
Set1: FIIDIID___I___
Set2: FxDIIIDIII

In Set1 the '_' are just spacers, as here we need more fields for the
times when the 3-hourly backups run. The 'x' marks positions that are
already deleted in the second set.
You can alter the Pool of a job with the run directive. Check
http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00145

 In my opinion it is wrong to bind the concept of job type and the concept
 of pool, they are 2 different things.
 I use a pool to define a group of  volumes and their rotation rules, i
 specify a job type to define if those volumes will contain an
 incremental,differential or full backup.

I think it can be used like that.

Regards,
Adrian


Re: [Bacula-users] Spreading full backup load on 1st, 8th, 15th, 22nd strategy

2013-01-26 Thread Adrian Reyer
Hi,

On Thu, Jan 24, 2013 at 04:50:24AM -0800, g18c wrote:
 I am using Bacula to connect to servers remotely (in a different DC) and 
 backup.
 Due to speed constraints and not bogging the office network down during the 
 day, I don't see it as feasible to perform a full backup of our 4 servers at 
 the same time, so instead we can split out the full backups so each week one 
 server has a full backup, in rotation.

Have you ever considered VirtualFull backups? They should reduce the
network load of your remote location a lot.
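For illustration, a VirtualFull is started like any other job, just with a different level (the job name here is a placeholder); Bacula then consolidates the existing full, differential and incremental jobs on the storage daemon into a new full, without reading from the client again:

```
* run job=remote-server1 level=VirtualFull yes
```

Note that this needs a 'Next Pool' configured on the source pool and a storage setup that can read and write at the same time.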

 Retention
 Given any particular date, we want to keep a full backup, plus the last full 
 backup before that. All incremental from now to the most recent full backup 
 should be kept.

Unfortunately Bacula has no way to keep backup runs active based on
successful full runs; with a standard configuration you can specify
retention only as fixed time intervals. You could use SQL to select the
jobs that are no longer needed and delete them manually. However,
personally I am much in favour of simply deciding on a retention time
and keeping all Full/Differential/Incremental backups for that same
retention time.
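A manual SQL selection of that kind could start from a catalog query like this sketch (table and column names are from the standard Bacula catalog schema; the cut-off date is only an example), run via bconsole's 'query' or directly in the database client:

```sql
-- List successful jobs older than a cut-off, oldest first;
-- review the list, then remove with bconsole: delete jobid=...
SELECT JobId, Name, Level, JobStatus, EndTime
FROM Job
WHERE JobStatus = 'T'
  AND EndTime < '2013-01-01 00:00:00'
ORDER BY Name, EndTime;
```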

 Full and Incremental in seperate pools
 I have two storage pools, one for incremental and one for full, how can I 
 tell Bacula to use pool-full if a incremental snapshot cannot find a previous 
 full, and how do I put full on storage1 and incremental on storage2 (which is 
 the same storage daemon, but different drives)?

Assign the drives to the pools, use different media types.
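A minimal sketch of that binding, with made-up names; the key point is that each device advertises a distinct Media Type, so a pool whose volumes carry that media type can only ever be mounted on the matching drive:

```
# bacula-sd.conf (fragment)
Device {
  Name = FullDrive
  Media Type = File-Full
  Archive Device = /backup/full
  Device Type = File
}
Device {
  Name = IncDrive
  Media Type = File-Inc
  Archive Device = /backup/inc
  Device Type = File
}
```

For the fallback question: a Job resource can additionally set 'Full Backup Pool = pool-full' and 'Incremental Backup Pool = pool-inc'; when Bacula upgrades an incremental to a full because no previous full is found, it then automatically writes to the full pool.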

Regards,
Adrian


Re: [Bacula-users] A couple of related pool questions

2013-01-26 Thread Adrian Reyer
On Fri, Jan 25, 2013 at 10:07:37AM -0500, Alan McKay wrote:
 I wiped all the labels on my tapes and did label barcodes and
 everything seems to have gone fine and they all got added to the
 default pool.

You could do
label pool=Default storage=xy slots=1-5,7-16
or the like to not label a tape in slot 6 at all.

 Also there is the matter of the other tape, which is in slot 8.  I am
 not sure why it did not get added.   Now I'd like to manually add it
 to the pool but when I use the add command it gives me some odd
 options which don't seem to make sense.   What the heck does Enter
 Base Volume Name mean?  I want it to keep the name it has, I don't
 want a new name.
 *add
 You probably don't want to be using this command since it
 creates database records without labeling the Volumes.
 You probably want to use the label command.

Indeed, you most definitely don't want to use this with a new tape. That
saves me finding an answer to the actual question ;)

On Fri, Jan 25, 2013 at 10:28:31AM -0500, Alan McKay wrote:
 OK, this is odd.   I just tried the label command for that one slot
 and got an error
 Connecting to Storage daemon SL24 at 127.0.0.1:9103 ...
 Sending label command for Volume 06L4 Slot 8 ...
 3307 Issuing autochanger unload slot 1, drive 0 command.
 3304 Issuing autochanger load slot 8, drive 0 command.
 3305 Autochanger load slot 8, drive 0, status is OK.
 3910 Unable to open device SL24 (/dev/nst0): ERR=dev.c:506 Unable
 to open device SL24 (/dev/nst0): ERR=Read-only file system
 Label command failed for Volume 06L4.

Best guess: you (accidentally) activated the write protection on the
tape. Take it out and check.
If this is not the issue, try writing to it while bacula-sd is stopped,
e.g.
mtx load 8
mt -f /dev/nst0 rewind (just in case)
dd if=/dev/zero of=/dev/nst0 bs=1024 count=100
- If there is an error, it is quite likely to be a bad tape.
mt -f /dev/nst0 rewind
mt -f /dev/nst0 eof
then start bacula-sd and try labeling again.

Regards,
Adrian


Re: [Bacula-users] How to troubleshoot a slow client, when there is no obvious answer

2013-01-22 Thread Adrian Reyer
On Tue, Jan 22, 2013 at 01:37:43PM -0600, dweimer wrote:
 If you have checked disk I/O, CPU, memory, network, on both the client 
 and the server, all seem great, both from a statistics look, showing all 

What about many small files, possibly very many in a single directory? I
have seen this have a huge impact on backup performance, especially in
busy directories.

Regards,
Adrian


Re: [Bacula-users] Adhoc Job and waiting it out ;)

2013-01-18 Thread Adrian Reyer
On Fri, Jan 18, 2013 at 04:00:47PM +0200, Florian Heigl wrote:
 ssh client-to-backup -n bconsole run a backup for this box and keep
 sitting until the backup is completed

Something like this should do the trick (sending both commands to
bconsole on the remote host):
printf 'run job=XY yes\nwait job=XY\n' | ssh client-to-backup bconsole

 How do you automate your off-schedule backups?

I admit I have never tested this, I just remembered having seen the
'wait' command.

MfG,
Adrian Reyer


Re: [Bacula-users] windows enterprise is not working, x32/x64.

2013-01-16 Thread Adrian Reyer
On Tue, Jan 15, 2013 at 11:02:33AM -0500, Dan Langille wrote:
  If I misgot this and future windows fd-development ceased to exist, the
  code about to be removed, I'd see it as a serious problem as it damages
  the credibility of bacula as an available system.
 Umm, where did you get that idea from?  Wild speculation, without foundation 
 from what I understand.
 There has been no mention of removing support for Windows clients.

This is why I wrote 'If I misgot this', as my very understanding is that
the Windows code stays where it is and gets patches as they are
developed; it is just that nobody from 'the community' provides
precompiled binaries, just like nobody provides precompiled Debian
binaries for current Bacula. But if I like to, I can provide Windows or
Debian binaries myself. And if I find errors in the Windows code, patch
them and stick to the coding style and FLA, I'd expect the fixes to be
included in the community edition source.

 I think the project has no business supplying binaries.  I believe that is 
 the responsibility  of each
 project (e.g. FreeBSD, NetBSD, etc).

Indeed, that is how I understand it.

Regards,
Adrian


Re: [Bacula-users] Tape Library and Cleaning Requests?

2013-01-16 Thread Adrian Reyer
Hi Frank,

On Wed, Jan 16, 2013 at 10:30:13AM +0100, f.staed...@dafuer.de wrote:
 Right now I'm able to clean the drive after the job has finished - but
 is there any possibility that bacula itself handles the cleaning requests?

No, Bacula does not know when cleaning is needed; all you can do within
Bacula is define which label denotes a cleaning tape, to prevent Bacula
from trying to mount it.
As there is currently no way to suspend and resume a job within Bacula,
there is no way to do some cleaning in between, either.
What you could do, if you have some way to detect when cleaning is
needed, is patch the mtx-changer script to load the cleaning tape
whenever a tape change is requested and cleaning is due.
Possibly 'mt status' gives the needed info.
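A sketch of such a detection helper, assuming the drive's status can be read with `mt -f /dev/nst0 status`. Whether a cleaning flag (often printed as "CLN") appears at all depends on the drive and the st driver, so treat this as an assumption to verify against your hardware, not a guaranteed interface:

```shell
#!/bin/sh
# Hypothetical helper for mtx-changer: reads status text on stdin and
# succeeds if the cleaning-requested flag is present.
needs_cleaning() {
    grep -q 'CLN'
}

# In mtx-changer you would use something like:
#   if mt -f "$device" status | needs_cleaning; then load_cleaning_tape; fi
# Demo with canned status output instead of real hardware:
if printf 'ONLINE IM_REP_EN CLN\n' | needs_cleaning; then
    echo "cleaning requested"
fi
```

The demo prints "cleaning requested" because the canned status line contains the flag; against a real drive you would pipe the live `mt status` output instead.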

Regards,
Adrian


Re: [Bacula-users] windows enterprise is not working, x32/x64.

2013-01-14 Thread Adrian Reyer
On Sun, Jan 13, 2013 at 06:00:18PM -0500, Bryan Harris wrote:
  Bacula community edition will continue, unix, linux, windows products?
 I think he means Will Windows be supported?, or Will Windows continue?, 
 or something along those lines.  Here is my understanding, feel free to 
 correct me if I'm wrong:

As I understand the mentioned page, there won't be precompiled Windows
community binaries anymore. Windows as a client will still be supported,
but you have to compile it yourself.
As Windows users and administrators are not as used to compiling things
as Linux/Unix users are, Bacula Systems offers precompiled Enterprise
versions. Alternatively, someone else could simply step up and offer
precompiled Windows binaries.

If I misgot this and future Windows FD development has in fact ceased,
with the code about to be removed, I'd see that as a serious problem, as
it damages the credibility of Bacula as an available system. Bacula
Systems is not a solution to this, as it is not free software, and if
tomorrow Bacula Systems decided to support e.g. only Android as its
single platform, there would be no source to continue with. Don't
misunderstand me: I really like Bacula Systems providing the Enterprise
Windows binaries. I'd prefer them to provide the community binaries,
though, and while they are at it, perhaps 'certified' community binaries
for the major Linux distributions.

Regards,
Adrian


Re: [Bacula-users] Copy + Migrate to different ppols - was: Tape-to-tape copy job?

2012-11-11 Thread Adrian Reyer
Hi Wolfgang,

On Sun, Nov 11, 2012 at 03:05:12PM +0100, Wolfgang Denk wrote:
 OK, the task is:
 1) Backup some jobs to pool DISK
 2) Copy these jobs from pool DISK to pool ARCHIVE
 3) Migrate the same jobs from pool DISK to pool TAPE

I have the very same setup.
My solution had been to write a job that rewrites the NextPool
statement and reloads Bacula; I can give you the script.
However, there seems to be a better solution, which someone posted on
this list a few months ago: use a dummy pool with the correct NextPool
statement and select the jobs to be copied by e.g. jobid (I use an SQL
select there); they will still be copied, even if they are in some other
pool. The original post is at
http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084
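Spelled out, the dummy-pool approach could look like this sketch (resource names are placeholders): the Copy job nominally runs against the dummy pool, the SQL selection pulls in jobids regardless of their actual pool, and everything is written to the dummy pool's Next Pool:

```
Pool {
  Name = CopySelector          # never holds volumes itself
  Pool Type = Backup
  Next Pool = ARCHIVE          # the real destination
  Storage = DiskStorage
}
Job {
  Name = "CopyToArchive"
  Type = Copy
  Pool = CopySelector
  Selection Type = SQLQuery
  Selection Pattern = "SELECT Job.JobId FROM Job WHERE ..."  # your jobid selection
  Client = dummy-client        # required by the parser, not used
  FileSet = "DummySet"         # likewise
  Messages = Standard
}
```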

Regards,
Adrian


Re: [Bacula-users] Managing disk space for Bacula Backups

2012-11-05 Thread Adrian Reyer
On Mon, Nov 05, 2012 at 01:45:25PM +0100, Gumprich, Sebastian wrote:
 I see no downsides with this, except that I don't have backups that are
 older than 14 days, but that's not needed.
 The problem with recycling oldest volumes is, when Bacula uses the
 oldest volumes, it would use 5 incremental, small volumes, when the
 large full-backup-volumes are still there, taking up disk space.
 Any workaround to this, except for manually searching for old volumes
 and deleting them?

Erm, if you delete the Full that the Incrementals are based on, you can
throw away the Incrementals just as well in almost all cases. So if you
e.g. do one Full every week, the rest incrementals, and radically delete
everything older than 14 days, you can restore 1-2 weeks back, depending
on the date of failure. In that case an official retention period of 1
week is much more honest, together with recycling the oldest volumes.

Regards,
Adrian


Re: [Bacula-users] tape drive on 2nd server

2012-06-06 Thread Adrian Reyer
On Tue, May 29, 2012 at 02:19:01PM +, Bertrand, Guy wrote:
 Do you have any details on how you set up your iSCSI pass-through?  Any URL 
 you can share?

I use iSCSI instead of NFS; it feels quite a bit faster and enables me
to span a volume group across local and remote disks.
In your case it might be a better solution to import the tape drive via
iSCSI, as I assume you want to do some backup-to-disk-to-tape staging.
The Proxmox project has a guide for this; at a quick glance it does not
look limited to KVM/Windows/Proxmox:
http://pve.proxmox.com/wiki/Tape_Drives#Using_Tape_Drives_as_iSCSI_target_.28for_Windows_KVM_guests.29

Regards,
Adrian


Re: [Bacula-users] virtual backup

2012-05-03 Thread Adrian Reyer
On Thu, May 03, 2012 at 09:15:26AM +0400, Anton Gorlov wrote:
 In other words, you use the Full and exactly one Diff to get a
 VirtualFull of the very time of the Diff.
 how to specify what one diff to apply?

It uses the latest diff and the latest full available.

Regards,
Adrian


Re: [Bacula-users] virtual backup

2012-05-02 Thread Adrian Reyer
On Wed, May 02, 2012 at 10:56:12AM +0400, Anton Gorlov wrote:
 I have 1  full backup and 10 diff backup's.
 can virtual backup aggregate 1 full and 3 or 5 diff's to new full?

Diff = Full + all changes since that Full.
In other words, VirtualFull uses the Full and exactly one Diff to create
a full backup as of the time of that Diff.

Regards,
Adrian


Re: [Bacula-users] Finding (and killing) Phanton job IDs

2012-04-28 Thread Adrian Reyer
On Fri, Apr 27, 2012 at 06:13:42PM -0700, Karyn Stump wrote:
 We have a working bacula (5.0.3) instance with a MySQL backend. A few
 weeks ago we had a power outage that caused the server to shut down and
 restart abruptly. The jobs are running successfully but now there is a
 complete set of phantom jobs running in parallel that fail. 

Your problem shouldn't be able to exist, at least not if the DB is
fine. I checked with my Bacula MySQL instance and it says

CREATE TABLE `Job` (
  `JobId` int(10) unsigned NOT NULL AUTO_INCREMENT,
...

so JobId is a single auto-increment sequence. If you still get 2
different sequences, I'd assume the database has issues due to the
crash. Have you tried 'mysqlrepair' aka 'mysqlcheck -r', and possibly
mysqlanalyze and mysqloptimize while you are at it?

If these find no errors, you can simply dump the database and load it
again; this should surface inconsistencies as well.

Another possibility would be not having 2 bacula-directors, but 2
mysqld instances, due to some strange startup error or some sort of
redundant setup with both instances running on the same IP.

Hope that helps.

Regards,
Adrian


Re: [Bacula-users] mtx and Overland Neo8000e

2012-04-25 Thread Adrian Reyer
On Wed, Apr 25, 2012 at 08:08:43PM +0200, Jesper Krogh wrote:
 Although related, this is not a bacula issue. Trying to get
 Bacula working with an Overland Neo8000e changer I have
 got mtx status/load/unload to work and mt reports
 the tapes as expected. The changer is connected using SAS.

I am in the process of installing Bacula on a host with a Neo4000e
attached. The load times are similar to those of e.g. a Quantum
SuperLoader3.

 But the time it takes for status/load/unload is about 2 minutes per action
 where 1 minute and 45 seconds goes by doing nothing at all.

2 minutes sounds reasonable; you can see things moving through the front
window. What do you mean by 'nothing at all': just from the software
point of view, or also if you look inside the hardware (if that is
possible with the 8000e)?

However, mtx, Bacula and the library itself have quite different
opinions on which way to number the slots. No big deal mostly but for
the 'mail magazine'. I have not yet further investigated this.

Regards,
Adrian


Re: [Bacula-users] New User with a question

2012-04-21 Thread Adrian Reyer
On Wed, Apr 18, 2012 at 11:12:12AM -0700, Sean Roe wrote:
 I am a new bacula user and I am having a bit of a quandary.  I want to use
 my openfiler setup as a storage device to back up my existing servers.  I
 am running two servers that are synced via corosync, drbd and pacemaker.
  If I understand it correctly (I am a new openfiler user too) I need to have
 the bacula-sd daemon controlled by pacemaker, is that correct?  If so has
 anyone done a setup like this?  I have spent the last week or so looking
 around the web and haven't found a whole lot of info on this.

What about just making bacula-sd listen on 0.0.0.0 and using a cluster
IP as the target, so the director always reaches the currently active
node?
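In configuration terms that could look like this sketch (the cluster address 192.0.2.10 and the resource names are placeholders for your own):

```
# bacula-sd.conf on both nodes: bind to all addresses, so the daemon
# answers on whichever node currently holds the cluster IP
Storage {
  Name = openfiler-sd
  SDAddress = 0.0.0.0
  SDPort = 9103
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
}

# bacula-dir.conf: point the Storage resource at the floating IP
Storage {
  Name = openfiler-storage
  Address = 192.0.2.10
  SDPort = 9103
  Device = FileStorage
  Media Type = File
}
```

With DRBD underneath, the volume files follow the active node, so the director never needs to know which physical server is serving them.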

Regards,
Adrian


Re: [Bacula-users] Multi client multy storaje multy job configuration

2012-03-26 Thread Adrian Reyer
Hi Anton,

On Sun, Mar 25, 2012 at 02:54:34PM +0400, Anton Nikiforov wrote:
 Yes, you are right about Bacula dealing with files/tapes.
 But my situation is different. I have 4 storages with one device each,
 and my problem is that Bacula does not want to back up different clients
 to different storages concurrently if the clients have fewer than 3 jobs
 allowed.
 I have no problem with the storage. I have tested configurations with
 spool and two drives per storage and it works fine.
 But Bacula still tries to start concurrent jobs from one client.
 What I need is Bacula backing up multiple clients at the same time,
 not multiple jobs from the same client at the same time.

I think I got you right; let me summarize your issue in my own words:
- you have N clients
- you have 4 different SDs (in 4 different locations)
- each SD has 1 pool
- each SD should backup each client as fast as possible
- a single client should only run 1 single backup at a time
- spooling is not enabled
- if you reduce the 'Max Concurrent Jobs' on the FD to 1-2 the backups
  block each other as e.g. 1 job occupies the SD and waits for the FD
  while another occupies the FD and waits for the SD

Obviously you actually want to block the FD, so you need to unblock the
SD. You can achieve this by adding resources to the SD; the needed
resources are actually drives with media. You can get these by either
- using vchanger with several drives
- using different pools for each client per SD that use different
  drives

Regards,
Adrian


Re: [Bacula-users] Multi client multy storaje multy job configuration

2012-03-24 Thread Adrian Reyer
On Sat, Mar 24, 2012 at 12:08:36PM +0400, Anton Nikiforov wrote:
 (all 4 storages have different adresses and run on different machines)
 Each client should be backed up on each storage. So i have jobs for that
 like this:

May I ask why? Just for distributing backup data?

 When I decrease the number of concurrent jobs on the client to 1 or 2, I
 reach a situation where all jobs are waiting on max Client jobs and
 some of them are waiting on Storage storage1 (or storage2, or
 storage3 or storage4). And the server hangs forever, waiting for jobs to finish.

What about reducing the number of jobs on the client, raising concurrent
jobs on the storage and enabling spooling?

Another possibility is to add 'locking' to the client with before and
after jobs. Something like this (typed from memory, not tested):
before-job:
#!/bin/bash
MYPID=$$
LOCKFILE=/var/lib/bacula/already-running
while [ ! -e "${LOCKFILE}" ] || [ "$MYPID" != "$(cat "${LOCKFILE}")" ]; do
  while [ -e "${LOCKFILE}" ]; do
    sleep 10
  done
  echo "$MYPID" > "${LOCKFILE}"
done

after-job:
#!/bin/bash
LOCKFILE=/var/lib/bacula/already-running
rm -f ${LOCKFILE}

However, this only prevents more than one job from running on the
client; it won't prevent the storage from waiting for some time unless
you activate spooling.
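A race-free variant of the same idea, sketched under the same
assumptions (bash as RunBeforeJob interpreter; the lock path is
illustrative and overridable via $LOCKFILE), uses the shell's noclobber
option so that creating the lock file is a single atomic test-and-set:

```shell
#!/bin/bash
# RunBeforeJob sketch: claim the lock atomically, waiting while another
# job holds it. The matching after-job still just removes the lock file.
LOCKFILE="${LOCKFILE:-/tmp/bacula-already-running}"
set -C                # noclobber: '>' fails if the file already exists
until echo $$ > "${LOCKFILE}" 2>/dev/null; do
  sleep 10            # another job holds the lock; poll until it is removed
done
set +C
```

Because the existence check and the creation are one operation, two jobs
starting at the same moment can no longer both believe they own the
lock.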

Regards,
Adrian


Re: [Bacula-users] Multi client multy storaje multy job configuration

2012-03-24 Thread Adrian Reyer
Hi Anton,

On Sat, Mar 24, 2012 at 11:35:38PM +0400, Anton Nikiforov wrote:
 I will try spooling, thanks for the suggestion, but I cannot understand
 why spooling would help with my problem. The storage is not tape or DVD,
 it is HDD, so there should be no problems with or without spooling.
 My problem is the maximum number of jobs on the client.

The 'problem' is Bacula: it uses every storage type the same way. As
long as you only have 1 drive for that storage, only 1 job will run at a
time. The 2 common solutions to this are
- use a virtual changer (vchanger) and just have more drives; I failed
  to set that up myself in the context of virtual full backups
- use 1 pool per client, if you want to in the same directory and with a
  common 'Recycle Pool'. It works great that way for me.

Regards,
Adrian


Re: [Bacula-users] Scaling Bacula

2012-03-10 Thread Adrian Reyer
Hi Matt,

On Fri, Mar 09, 2012 at 04:11:17PM +, Matthew Macdonald-Wallace (lists) 
wrote:
  From previous email threads on this list, I've come to the conclusion 
 that the primary bottleneck is the back-end database, not Bacula 
 itself.

This is my experience as well, though bacula-sd likes to have quite
some CPU and possibly a lot of RAM as well, depending on your jobs.

 We have taken the design decision to place both the Director and the 
 MySQL instance on the same server - we figure that if the database or 
 the bacula-daemons are down, we can't backup either way.  This server 
 has 48G RAM and 10K SAS Disks so there is some flexibility surrounding 
 how it is configured.

I suggest you plan to run your Director and *PostgreSQL* instance on the
same server. There are a few indices in Bacula's database layout that
contain other indices. MySQL has to maintain separate indices for those;
Postgres can use a single index. This results in far fewer writes, and
compared to most other applications that use databases, Bacula mainly
writes.
I moved from 4GB + MyISAM to 16GB + MyISAM to 16GB + InnoDB, and now I
am happy with 8GB + Postgres. I changed the database/backend each time
backups no longer finished in time.

bacula=# select count(*) from path;
 count
--------
 855516

bacula=# select count(*) from filename;
  count
---------
 4772037

bacula=# select count(*) from file;
   count
-----------
 260460301

bacula=# select count(*) from jobmedia;
 count
-------
 35267

I plan to keep the file lists in the database for 13 months. The
problems started in month 3, and I switched database backends every
month until I reached the current setup, which has now been running for
6 months.

 second HW RAID-1 array (possibly even RAID-0 if it gives us more 
 performance!) - from there I would concentrate on MySQL tuning as 
 opposed to anything else.

Make sure you have cache RAM on your RAID controller and a battery
backup unit installed. MySQL and Postgres like to write synchronously;
with BBU + cache, a write is complete as soon as the controller has the
data, with no need to wait for the disks. I doubt RAID0 would gain you
much, if anything.

Regards,
Adrian


Re: [Bacula-users] Scaling Bacula

2012-03-10 Thread Adrian Reyer
On Sat, Mar 10, 2012 at 10:35:49PM +0100, Adrian Reyer wrote:
 same server. There are a few inices in Baculas database layout that
 contain other inices. MySQL has to do seperate indices for those,

That is 'indices'; the other typos are somewhat easy to understand.

Regards,
Adrian


Re: [Bacula-users] Concurrent Jobs only for Backup Jobs

2012-03-08 Thread Adrian Reyer
Hi Markus,

On Thu, Mar 08, 2012 at 09:09:22AM +0100, Markus Kress wrote:
 defined sequence. In other words: only the backup and verify jobs should
 run concurrently, the admin jobs in a defined sequence before and after all
 other jobs.
 admin job 1
 admin job 2
 admin job 3
 backup job client 1, backup job client 2 ...
 verify job client 1, verify job client 2 ... (VolumeToCatalog)
 admin job 4
 admin job 5
 admin job 6

I solve similar things with priorities and a job that schedules the
admin jobs. The core statements are:
  Allow Mixed Priority = no
  Maximum Concurrent Jobs = 100

All backup jobs run at e.g. priority 50; then you can run the admin
jobs beforehand with priorities between 1 and 40, and those afterwards
with priorities between 60 and 100. The verify jobs you can run e.g. at
priority 55, all at once in a schedule or scheduled by the individual
job as a post-job, depending on your load scenario.
The numbers are made up; I use priorities 5-20 myself and an admin job
ScheduleCopyMigrate (priority 15) that schedules 3 admin jobs:
CopyArch (prio 5), MigrateTape (prio 7) and CopyTape (prio 8). The
ScheduleCopyMigrate job is timed after the backups (prio 10) start.
That way all backups are done first; then CopyArch runs, which in turn
schedules a ton of copy jobs at priority 5; when they are done,
MigrateTape schedules a few migration jobs at prio 7; after they are
done, CopyTape schedules more copy jobs at priority 8.
That way I make sure all scheduled jobs are done before the next normal
backups start.
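Sketched as bacula-dir.conf fragments (job names and priority values are
the made-up examples from above, and the resources are abbreviated, not
a complete configuration):

```
JobDefs {
  Name = "backup-defaults"
  Priority = 50                 # all normal backups share one priority
  Allow Mixed Priority = no     # jobs of different priorities never run together
  Maximum Concurrent Jobs = 100
}

Job {
  Name = "admin-before"
  Type = Admin
  Priority = 10                 # dispatched before the priority-50 backups
}

Job {
  Name = "admin-after"
  Type = Admin
  Priority = 60                 # runs only after all backups have finished
}
```

With Allow Mixed Priority = no, the priority numbers alone enforce the
before/backup/after sequence, while the high concurrency limit lets all
jobs of the same priority run in parallel.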

Yes, I am aware this has nothing to do with admin connections.

Regards,
Adrian


Re: [Bacula-users] Large backup to tape?

2012-03-08 Thread Adrian Reyer
On Thu, Mar 08, 2012 at 09:38:33AM -0800, Erich Weiler wrote:
 We have our 200TB in one directory.  From there we have about 10,000 
 subdirectories that each have two files in it, ranging in size between 
 50GB and 300GB (an estimate).  All of those 10,000 directories adds to 
 up about 200TB.  It will grow to 3 or so petabytes in size over the next 
 few years.

As has already been said, on most filesystems it is not a good idea
to have 10k items and up in a single directory. You might want to hash
them, e.g. on directory names:
ABC789 -> A/B/C/7/8/9
Depending on your directory structure this might already be a good
preselection for filesets.
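The hashing idea above can be sketched in shell (illustrative only: it
prints the nested path, while a real migration would also mkdir -p and
move the data):

```shell
#!/bin/bash
# Map a flat directory name like ABC789 to a nested path A/B/C/7/8/9,
# so that no single directory ends up holding tens of thousands of entries.
hash_path() {
  local name="$1" out="" i
  for (( i = 0; i < ${#name}; i++ )); do
    out="${out}${name:i:1}/"        # append one character per path level
  done
  printf '%s\n' "${out%/}"          # strip the trailing slash
}

hash_path ABC789   # prints A/B/C/7/8/9
```

A two- or three-level split is usually enough in practice; hashing on
every character, as here, just mirrors the example from the text.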

The reason for my mail is another point, though:
depending on your data, you might want to make sure it all has the same
timestamp. You could do this by using LVM and snapshots: by mounting and
backing up the snapshots, you make sure no files are modified while
backing up.

Regards,
Adrian


Re: [Bacula-users] LTO media type mixup

2012-03-05 Thread Adrian Reyer
On Mon, Mar 05, 2012 at 11:00:06AM +0100, Tilman Schmidt wrote:
 Makes sense. In that case, wouldn't it be better if the default
 bacula-sd.conf distributed with Bacula would just say
 Media Type = LTO for all the LTO devices? That would reduce
 the temptation to change that field when changing drives.

Well, 'Media Type' is misleading; it is more of a 'Media Group'. Every
medium in the same group can be requested on every SD that supports that
'Media Group'. It doesn't actually have anything to do with the medium's
capabilities or size.

 Another question: Is there a way to fix the mix-up I've created?
 update volume doesn't let me change the media type. Can I just
 run the MySQL query
   update Media set MediaType = 'LTO-2' where MediaType = 'LTO-1'
 or is there more to it?

It should work; it has worked elsewhere.

Regards,
Adrian


Re: [Bacula-users] Override Next Pool

2012-03-02 Thread Adrian Reyer
Hi Tim,

On Thu, Mar 01, 2012 at 02:36:28PM -0800, Tim Krieger wrote:
 All our routine backups are done to disk to keep our backup window small
 Our data is rolled from disk to tape(long term archive) with a migration job 
 weekly(file pool recycled after two weeks)
 I have been asked to add an additional offsite backup to this setup and was 
 thinking of just running a copy job to usb disks.  The snag I have run into 
 is that the copy job just wants to send things to the tape archive as that is 
 the next pool as defined in the file storage pool resources.
 Any ideas?  Can I specify next pool in the run command somehow?

I have the very same setup; I solved it with a wrapper job that changes
the 'Next Pool' statement. If you want, you can have the script.
But recently Jan Lentfer asked basically the same thing in 'Virtual Full
- Set NextPool for the virtual job only'; Martin Simmons linked to
http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084
and I like that approach much more, as it doesn't need a bacula-dir
reload. I would do it that way if I had to do it again.

Regards,
Adrian


Re: [Bacula-users] SD crashes

2012-02-13 Thread Adrian Reyer
Hi Joe,

On Mon, Feb 13, 2012 at 07:21:03AM +, Joe Nyland wrote:
 I hope someone would be able to offer any suggestions of why I am seeing the 
 following behaviour in my current Bacula setup:
 Since the tail end of last week, I have been having issues with my MySQL 
 backups in Bacula, where they would randomly appear to 'crash', normally when 
 performing a copy of a backup to another pool - but I'm not sure yet if this 
 is the trigger.

With Bacula 5.0.3 I had frequent crashes on copy jobs as I ran out of
memory. The SD box has only 4GB RAM; now that I have added 8GB of swap,
it seems to run fine.

 NOTE: bconsole appears to crash here - no further output is produced, and 
 bconsole does not respond to any key presses. I have to Ctrl + C to exit out 
 from bconsole. Furthermore, the only way I can clear our the failed jobs from 
 the 'Running jobs queue' is to exit from bconsole, issue 'sudo service 
 bacula-sd stop' twice, then restart the SD and restart bacula-director.

Here the bacula-sd crashes and disappears from the process list.

I have another issue I have not been able to track down so far. The tape
changer seems to claim it has 0 slots now and then, and bacula-sd really
dislikes that. It seems mostly to happen when tapes are moving and some
'mtx status'-like command is issued. If this happens, I need to stop
bacula-sd; it takes some time to unmount the tape (bacula-sd is in 'D'
state in 'ps'), and only afterwards can it be started again, after which
all is fine. 'update slots' without a restart won't help, even when
'mtx status' gives correct output again. Perhaps this is comparable to
your having to issue 'sudo service bacula-sd stop' twice.

Regards,
Adrian


Re: [Bacula-users] mysql authorization fail (Could not open Catalog) on a clean 5.2.5 install. what did i miss in my config?

2012-02-12 Thread Adrian Reyer
On Sat, Feb 11, 2012 at 06:56:54PM -0800, p50...@150ml.com wrote:
   bacula-dir -d -v -t -c bacula-dir.conf
   bacula-dir: dird.c:954 Could not open Catalog MyCatalog,
   database bacula.
   bacula-dir: dird.c:959 mysql.c:203 Unable to connect to MySQL
   server.
   Database=bacula User=bacula
   MySQL connect failed either server not running or your
   authorization is incorrect.
   11-Feb 18:39 bacula-dir ERROR TERMINATION
   Please correct configuration file: bacula-dir.conf

Perhaps there are different assumptions about where your mysql.sock
resides? It doesn't exactly claim 'Permission denied', 'Authorization
failed' or similar; it might as well be unable to access the socket due
to permissions or location.
You could try connecting via TCP/IP to verify; just add the
DB Address = localhost;
statement to your catalog definition.

Catalog {
  Name = MyCatalog
  dbname = bacula; dbuser = bacula; DB Address = localhost; dbpassword = DBPass
}

Regards,
Adrian


Re: [Bacula-users] Fatal error: sql_create.c:894 Fill File table Query failed

2012-02-10 Thread Adrian Reyer
Hi Maria,

On Wed, Feb 08, 2012 at 11:46:24AM -0800, Maria McKinley wrote:
 05-Feb 22:25 billie-dir JobId 3931: Fatal error: sql_create.c:894 Fill 
 File table Query failed: INSERT INTO File (FileIndex, JobId, PathId, 
 FilenameId, LStat, MD5)SELECT batch.FileIndex, batch.JobId, Path.PathId, 
 Filename.FilenameId,batch.LStat, batch.MD5 FROM batch JOIN Path ON 
 (batch.Path = Path.Path) JOIN Filename ON (batch.Name = Filename.Name): 
 ERR=Table './bacula/File' is marked as crashed and last (automatic?) 
 repair failed

Have you tried 'mysqlrepair bacula File' or 'mysqlrepair -A'?

Regards,
Adrian


Re: [Bacula-users] Copy Jobs - Am I doing something wrong?

2012-01-30 Thread Adrian Reyer
On Mon, Jan 30, 2012 at 12:18:30PM +, Joe Nyland wrote:
 When run manually this morning, I was given a little more information:
 run job=FileServer1_Copy pool=FileServer1_Full storage=FileServer1_Full
 30-Jan 06:55 FileServer1-dir JobId 0: Fatal error: No Next Pool specification 
 found in Pool Default.
 30-Jan 06:55 FileServer1-dir JobId 0: Fatal error: No Next Pool specification 
 found in Pool Default.

AFAIK you need a Next Pool statement for copy/migration jobs.
Perhaps some of your backups went into the Default pool instead of the
one you intended them for?

Regards,
Adrian


Re: [Bacula-users] Schedule virtual servers so their not running concurrently

2012-01-30 Thread Adrian Reyer
On Mon, Jan 30, 2012 at 11:20:15AM +, keith wrote:
 We have about 80 virtual servers running on 20 Physical servers that 
 need backed up.

I tend to just ignore that problem, as my virtual servers have quite
different data on them and the network is the limiting factor here, not
the disk I/O. Nonetheless, a few thoughts...

 Our current plan is to have just one Schedule that will start at 1am; 
 then we will give each virtual server a unique priority. We plan to give 
 the first job a priority of 5, then the next 10, and so on. This seems 
 messy, but I can't see any other way to stop or limit the number of 
 virtual servers on the same physical server running at the same time.

What about scheduling the next job on each physical server as a post-job
of the previous one?
You could add the information about which virtual server resides on
which physical one to an extra table in the Bacula database you have
anyway, then just write a generic job that selects the next job to be
run, e.g. a table:
e.g. a table
# Be aware: pseudo SQL code, won't work as-is, just to illustrate the idea
CREATE TABLE my_schedule_classes (
  id INT AUTO_INCREMENT PRIMARY KEY,
  physical VARCHAR(255),
  virtual VARCHAR(255)
);

SELECT virtual FROM my_schedule_classes
WHERE id > (SELECT id FROM my_schedule_classes
            WHERE virtual='%my-current-job')
  AND physical = (SELECT physical FROM my_schedule_classes
                  WHERE virtual='%my-current-job')
ORDER BY id ASC LIMIT 1;
gives the job name.

Bacula has some variable for '%my-current-job'.
If you add the physical servers jobs to the table, you can just start
the schedule by scheduling the physical servers job. Instead of a
schedule for the virtual servers you need to add them to the table.

Regards,
Adrian


Re: [Bacula-users] Need some help with copy jobs

2012-01-16 Thread Adrian Reyer
On Mon, Jan 16, 2012 at 01:23:37PM +, Joe Nyland wrote:
 However, I am still unable to see what this query is doing :-(. If I run it 
 against my catalogue DB, I just get a long list of JobIds, which don't look 
 right for my SVN_Full pool. I have a feeling I may be over-complicating 
 things by using a SQL query?

Well, this particular SQL selects all jobs of any pool that is not the
'ARCH' pool. In your case, as you only wish to copy a specific pool, it
would be the following, still with 'ARCH' as the target pool:

select a.JobId from Job AS a JOIN Pool AS b
ON a.PoolId=b.PoolId AND b.Name='SVN_Full' AND b.Name!='TAPE'
WHERE a.Type='B' AND a.JobID NOT IN
(SELECT PriorJobId FROM Job AS c JOIN Pool AS d
ON c.PoolId=d.PoolId AND d.Name='ARCH')
AND a.EndTime > (now()-INTERVAL '10 DAY') ORDER BY a.StartTime ASC;

 PoolUncopiedJobs
 This selection, which copies all jobs from a pool to another pool
 that were not copied before, is available only for copy jobs.
 What are the pro's/con's with this option, over a SQL query, such as the one 
 kindly suggested by Adrian Reyer a few days ago?

I have not used this option because of my special setup: I have to
copy each original job twice, which is only possible with SQL. For you
it might just work.

Regards,
Adrian


Re: [Bacula-users] Disk backup strategy advice / help

2012-01-16 Thread Adrian Reyer
Hi Sebastian,

On Mon, Jan 16, 2012 at 11:38:50AM +0100, Sebastien Douche wrote:
  I think bacula is not the ideal tool for running additional offsite
  backups. And very likely rsync is not a good way if you use bacula.
 I rsync data, catalog and bsr files to external disks, and I would like
 to know why it's not a good solution.

Well, as you copy the matching catalog as well, it should be just fine.
It all depends on what you want your backup to be for. In my case I want
to be safe from losing single/some backup media while the backup server
is still fine, as I run redundant servers anyway. You obviously plan for
a complete backup-system breakdown, at the expense of a harder time
restoring single lost media. On the other hand, if I experience a
complete backup-server outage, I have to bscan the offsite tapes.
You get the benefits of both ways by e.g.
- a copy job for offsite media; if a medium fails, the copy becomes
  active once you delete the failed original
- SQL-server replication offsite, or alternatively dump/restore
- having all configuration files ready offsite. As I run
  'Linux-VServers', I would just rsync the backup server itself.
Regards,
Adrian


Re: [Bacula-users] Need some help with copy jobs

2012-01-15 Thread Adrian Reyer
On Sat, Jan 14, 2012 at 11:46:21PM +, Joe Nyland wrote:
 I have a copy job as follows:
   Selection Type = SQLQuery
   Selection Pattern = SELECT DISTINCT Job.JobId,Job.StartTime FROM Job,Pool
   WHERE Pool.Name = 'SVN_Full'
   AND Pool.PoolId = Job.PoolId
   AND Job.Type = 'B'
   AND Job.JobStatus = 'T'
   AND Job.JobId
   NOT IN (SELECT PriorJobId FROM Job WHERE TYPE = 'C' AND Job.JobStatus = 
 'T' AND PriorJobId != 0)  ORDER BY Job.StartTime;

I do something similar and it works for me. However, in my special case
I do additional messing with the pools: I actually back up to disk, then
copy it to one tape TAPE and one tape ARCH; this is only for the 'ARCH'
job. The 'ARCH' pool is actually just an offsite version and in no way
an archive. Only backups within the last 10 days are considered,
otherwise you would start copying old expired, purged jobs again after
you have deleted them from the db.
- The INTERVAL should be less than your retention time if you delete
  purged jobs now and then.
As my statement looks pretty much the same as yours, I'd check the
'NOT IN' part, especially the 'JobStatus'.

Selection Pattern = select a.JobId from Job AS a JOIN Pool AS b
  ON a.PoolId=b.PoolId AND  b.Name!='ARCH'
  AND b.Name!='TAPE'
  WHERE a.Type='B' AND a.JobID NOT IN
(SELECT PriorJobId FROM Job AS c JOIN Pool AS d
 ON c.PoolId=d.PoolId AND d.Name='ARCH')
  AND a.EndTime > (now()-INTERVAL '10 DAY') ORDER BY a.StartTime ASC;

PostgreSQL syntax here, but I *think* I had the very same statement a
few months ago when I was still on MySQL.

Regards,
Adrian


Re: [Bacula-users] Disk backup strategy advice / help

2012-01-04 Thread Adrian Reyer
On Wed, Jan 04, 2012 at 12:16:55PM +, keith wrote:
 460M  Dec 29 23:19 Full-0001
 26.3G Dec 30 23:52 Full-0003
 702M  Dec 24 23:05 Inc-0001
 10.0G Dec 30 01:54 Inc-0002
 2.3G  Dec 31 00:06 Inc-0004
 3.1G  Dec 31 00:56 Inc-0005
 611M  Dec 31 00:56 Inc-0006

Is the 10G incremental mostly additional, changed or moved data? Are
these compressed backups?

 Now that the backups seems to be working I need to figure out how to 
 implement an offsite strategy, I want to use a combination of removable 
 disks and rsync to do this.

I think bacula is not the ideal tool for running additional offsite
backups, and rsync is very likely not a good fit if you use bacula.

I have 3 possibilities in mind:

1. If you are not talking about Windows clients, I'd consider using
rsync (e.g. via rsnapshot) to run the complete offsite backup
independently of bacula. Run one rsync/rsnapshot job per client; the
'new' client will simply run longer, independent of the others except
for the shared bandwidth. With rsnapshot you only need one full backup
per client: changed files lead to new full backup sets, but only the
difference has to be transferred. We do that at several locations and
wrote a wrapper around rsnapshot (which is itself a wrapper around
rsync); Debian packages are available at
deb http://ftp.lihas.de/debian stable main
package rsnapshot-backup.
If you add some file deduplication tool on top, you get away with far
less disk space.
+ only changes need to be transferred
+ initial backup can easily be transferred on external media to save bandwidth
- no bacula, no bacula indexes
- no backup of windows clients / anything that doesn't have rsync
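The hardlink rotation rsnapshot relies on can be demonstrated with plain coreutils; a minimal sketch using temporary paths only, not an actual rsnapshot invocation:

```shell
#!/bin/sh
# Demo of rsnapshot-style hardlink rotation: unchanged files share disk
# blocks between snapshots, only changed files are stored again.
set -e
d=$(mktemp -d)
mkdir -p "$d/live"
echo v1 > "$d/live/file"

cp -a  "$d/live"    "$d/daily.0"   # initial full copy of the client data
cp -al "$d/daily.0" "$d/daily.1"   # rotation: hardlink copy, near-zero space

echo v2 > "$d/live/file"           # the file changes on the client
rm "$d/daily.0/file"               # unlink first so daily.1 keeps v1
cp -a "$d/live/file" "$d/daily.0/file"

cat "$d/daily.0/file" "$d/daily.1/file"   # -> v2, then v1
```

rsnapshot does essentially this via cp -al plus rsync, so each daily.N directory looks like a full backup while costing only the delta.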

2. Alternatively, use the normal bacula backup plus a copy job. As copy
jobs only work within the same bacula-sd, you could e.g. NFS-mount an
external server and store the target pools there. The copy pool writes
to local disks on individual mountpoints; move those volumes to the
remote location and replace them with links to the remote NFS share.
+ works with all clients
- regularly transporting volumes offsite is required
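A sketch of the pool wiring for such a setup (resource names, retention times and storage names are invented examples, not from the original configuration):

```
# bacula-dir.conf fragment
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = LocalFileStorage
  Next Pool = OffsitePool        # target of the copy job
  Volume Retention = 5 weeks
}
Pool {
  Name = OffsitePool
  Pool Type = Backup
  Storage = NFSFileStorage       # device whose Archive Device is the NFS mount
  Volume Retention = 13 months
}
```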

3. Run a completely separate job instance to the remote site using a
bacula-sd installed there. Use virtual full backups to create the fulls
from the full/diff/inc backups. Initially, a full backup has to cross
the remote connection.
+ works with all clients
0 initial full might be expensive in bandwidth

Currently I use 1. and 2. myself. With 3. I ran into trouble selecting
the correct pools in my environment, and with virtual full in general
when a tape changer with a single drive is involved.

 If I add a new server to be backed up to Bacula midweek it does a full 
 backup in the INC pool. This might be a big backup and screw-up my Rsync 
 job.
 Does this seem like a good idea and goes anyone know how keep Full 
 backups out of the INC or DIFF pool

Just run a manual initial full backup on the new client; I assume
clients don't appear magically in your backup setup anyway.

Regards,
Adrian


Re: [Bacula-users] Storage Resouce address scripting?

2012-01-03 Thread Adrian Reyer
On Fri, Dec 30, 2011 at 01:23:42PM +0100, René Moser wrote:
 This means, from the bacula-fds point of view, the storage daemon has
 several different addresses depending on the clients network.

Ok.

 So, this means I need to define as much storage resources to the same
 device as networks exists, which is kind of inconvenient.

Why? Won't using 0.0.0.0 as the listen address work?

 I tried to script the storage resource address like
 Storage {
 ...
 Address = @|sh /etc/bacula/scripts/get-backup-network %n
 ...
 }

The config is parsed at startup time, so no reasonable %n will be there.

 or define the address on client side:
 Storage {
 ...
 Address = \\/etc/bacula/storage.conf
 ...
 }

You should be able to use a hostname there, either an FQDN or just a
plain hostname.
If this hostname has to point to a different IP on each server, you
could
a) use some small DNS server (e.g. dnsmasq) to provide the correct
entry for that subnet, overriding the default hostname
b) if the servers in different networks have their own search paths,
just add the bacula-sd host as an unqualified name and resolve it via
DNS
c) just place an appropriate entry in /etc/hosts
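For variant c), one line per client (or per subnet view of the SD) is enough; the addresses and the hostname here are placeholders:

```
# /etc/hosts on a client in the 10.1.0.0/24 network
10.1.0.5    bacula-sd

# /etc/hosts on a client in the 10.2.0.0/24 network
10.2.0.5    bacula-sd
```

Every bacula-fd then resolves the same name 'bacula-sd' to whichever address is reachable from its own network.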

Regards,
Adrian


Re: [Bacula-users] Bacula and PostgreSQL in UTF8

2011-12-30 Thread Adrian Reyer
On Mon, Dec 26, 2011 at 09:54:58AM -0430, reynie...@gmail.com wrote:
 psql: FATAL: role root does not exist.

By default your database user is the unix/linux user you are connecting
as; since you are seemingly working as 'root', that one is used.
Postgres comes with the user 'postgres' preconfigured and connectable
from the unix user 'postgres'. You could e.g. connect by becoming user
postgres first:
  su postgres
  psql
I'd advise you to create a dedicated user for your bacula database with
'createuser', and a database owned by that user with 'createdb'; the
'-E' (encoding) and '-l' (locale) parameters should be helpful.
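The commands might look like this (role and database names are examples; SQL_ASCII is the encoding the Bacula documentation traditionally suggests, and template0 is needed when the encoding differs from the template database):

```
su - postgres
createuser -P bacula                          # -P prompts for a password
createdb -O bacula -E SQL_ASCII -T template0 bacula
```

Afterwards Bacula's own table-creation script can be run as that database user.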

 So my question here is: it's posibble to use this PgSQL to store Bacula
 Catalog? Then here I have two problems: one for the role root which not
 exists in my PgSQL and the other for the encoding then which are the steps
 I must follow in order to get this work?

My catalog is in postgres; however, as I migrated from MySQL, I never
used the original initdb scripts for postgres and had a completely
different set of troubles during the migration.

Regards,
Adrian


Re: [Bacula-users] Bacula Compatibility with QM-SL3-LTO5S/16 Tapeloader

2011-11-20 Thread Adrian Reyer
On Wed, Nov 16, 2011 at 06:05:18PM +0100,  Wankmüller David 
david.wankmuel...@isko-engineers.de wrote:
 our old tape loader has now definetly his last tape written and dont wanne do
 his work anymore so we are about to by a new one. As we playing around with
 bacula for a few months we want to migrate it as our primary backup solution.
 Today we received an offer on a "QM-SL3-LTO5S/16" (also called
 quantum 16 slot SuperLoader) connected via SAS (SFF-8088).
 Does anyone have some experience of using bacula with this tapeloader and 
 could
 provide me some informations wether it works, device definition, mtx-script 
 and
 so on?

I bought 3 of them, for myself and my customers, as they are reasonably
priced and came with 3-year support contracts by default.
However, one of the trays fell down and broke. The superloader
firmware refuses to do anything without all trays plugged in. As this
was a user error, the service contract didn't kick in and it took
Quantum a month to deliver a new tray. A month without backups, just
because they decided to refuse operation with only one tray plugged in.
Keep in mind it has a mailslot, so you would not need a tray at all for
some sort of emergency service.
No idea if others are better, but this was a real show stopper for me.
Apart from that: the changer is somewhat slow, but it works, and the
loader is recognized by mtx, mt and bacula without any special
configuration.

Regards,
Adrian


Re: [Bacula-users] backup on the schedule specific clients but not all clients

2011-10-20 Thread Adrian Reyer
On Mon, Oct 17, 2011 at 11:57:46AM -0400, Kiryl Hakhovich wrote:
 - weekly on sunday - full backup and these tapes go off site  (this is 
 in rotation)
 - daily we run incremental
 problem at hand, if user want to restore something from last week, i 
 need the tapes that went off site back, so i can restore data.

I have a similar setup here. I solved it by backing up to disk, then
running a copy job from disk to tape. 3TB is not that many disks, as
2TB disks are cheap. It may well be worth the hassle, and you get
faster everyday restores as well:
- 1 pool for tapes, called TapePool here, with a long retention time
- at least 1 pool on disk (I use 1 pool per client), NextPool = TapePool;
  retention time as short as you need it to be to fit on your disks,
  I do 5 weeks
- the Full/Differential/Incremental jobs target the disk pool
- after the backup jobs have finished, run a copy job to copy the
  backups to TapePool; I do it via SQL selection.

Job {
   Name = CopyTape
   JobDefs = CopyDefaults
   Schedule = None
   Spool Data = Yes
   Spool Attributes = Yes
   Priority = 8
   Pool = DiskPool
   Selection Type = SQLQuery
   Selection Pattern = SELECT a.JobId FROM Job AS a
     JOIN Pool AS b ON a.PoolId=b.PoolId AND b.Name!='TapePool'
     WHERE a.Type='B' AND a.JobId NOT IN
       (SELECT PriorJobId FROM Job AS c
        JOIN Pool AS d ON c.PoolId=d.PoolId AND d.Name='TapePool')
     ORDER BY a.StartTime ASC;
}

 thus, i was hoping that i can do a full backup weekly (these tapes go 
 off site) and one full backup once every 3 months or so and then run 
 incremental daily.

I do monthly full, weekly differential and daily incremental backups,
except on database servers, where it is monthly full and daily
differential.

Regards,
Adrian


Re: [Bacula-users] Backing up lvm snapshots?

2011-09-20 Thread Adrian Reyer
On Sun, Sep 18, 2011 at 05:47:34PM +0200, Tobias Schenk wrote:
 I try use bacula 5.0.3 on suse linux to backup lvm snapshots.
 I cannot simply mount /dev/dm-6 to somewhere because the contents is a 
 partitioned raw device of a kvm instance.

You could check the path with something like 'ls -l'.
On the other hand, you could use 'kpartx' to make the snapshot's
partitions actually mountable, provided you 'speak' the filesystem
used. The benefit is that an incremental backup then only saves the
changed files instead of the whole image, even if only a single byte
changed.
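A hedged sketch of that workflow; the VG/LV names and the mountpoint are invented, and it needs root plus the LVM and device-mapper tools:

```
# snapshot the guest's LV, map its partitions, mount one read-only
lvcreate -s -n vm1-snap -L 5G /dev/vg0/vm1
kpartx -av /dev/vg0/vm1-snap        # creates /dev/mapper/vg0-vm1--snap1, ...
mount -o ro /dev/mapper/vg0-vm1--snap1 /mnt/vm1-backup
# ... point the bacula FileSet at /mnt/vm1-backup and run the job ...
umount /mnt/vm1-backup
kpartx -dv /dev/vg0/vm1-snap
lvremove -f /dev/vg0/vm1-snap
```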
In my installations I had always been able to just run a bacula-fd
inside the KVM guest, though.

Regards,
Adrian


Re: [Bacula-users] Full/Incremental on Copy Jobs? Why?

2011-09-15 Thread Adrian Reyer
On Thu, Sep 15, 2011 at 10:45:33AM -0300, Rodrigo Renie Braga wrote:
 I'd like to know if the Level option on a Copy Job makes any difference at
 all for the job. Since my Copy Job looks at JobID to copy (using an SQL
 Statement), it won't know that that JobID was Full or Incremental, right?

I think the level only matters for jobs that talk to a bacula-fd, as
they need to work out what to actually back up. For Copy/Migrate jobs
only the JobId is important.
Your Copy/Migrate control job will carry that level and typically
reports 0 files in 0 bytes; you can just delete it. The resulting copy
of a job keeps the original job's level.

 In this Copy Job I'm selecting all previous Full Backups JobID using the SQL
 Statement, but I could very well change it to select all the previous
 Incremental Backups, hence the Level = Full makes no difference, right?

Indeed, you select via SQL and only get a bunch of JobIds; no further
information is required for the job.

Regards,
Adrian


Re: [Bacula-users] Questions about spooling

2011-09-01 Thread Adrian Reyer
On Thu, Sep 01, 2011 at 02:36:08AM -0700, frank_sg wrote:
 To get a bigger spool fs, there might be some options:
 1) What is better: bigger spool fs or faster spool fs? So first option: 3,6 
 TB RAID0 with 12 SAS disks direct attatched vs second option: 2 or 3 120 GB 
 SSDs? 
 2) Does it make any sense to have a spool fs much bigger than the tape size? 
 (LTO4 - 800GB, with compression up to 1,6 TB - so does it make sense to use a 
 fs > 1,6 TB?) 
 3) Specially with the SSDs - will I run in problems because of MTBF? Is 
 anybody using SSDs for spool fs?
 I have an autoloader with an SAS-LTO4 drive and I would like to get the drive 
 to steam as fast as possible.

LTO-4 is what I have here as well. I see data rates between 45MB/s and
95MB/s; there is no point in SAS or SSD, recent SATA is enough to feed
that, as the disks operate basically in streaming mode.
We run 30-40 concurrent jobs into the same 1TB spool directory,
residing on a RAID-5 of 4 recent 2TB SATA2 disks. The limits are the
tape drive and the network, not the disks.
As for the right size of the spool: I make it big enough that a job
never has to wait for the tape to continue spooling, as there are a few
very big clients and a few attached via low bandwidth, but I limit a
single job's spool to 800GB.

Regards,
Adrian


Re: [Bacula-users] Questions about spooling

2011-09-01 Thread Adrian Reyer
On Thu, Sep 01, 2011 at 11:06:47AM -0400, mark.berg...@uphs.upenn.edu wrote:
   [A], while it should be close to [B]. The reason for
   the decrease in performance is that bacula stops all
   spooling as soon as it starts de-spooling.
 In an ideal configuration, there could be multiple spool directories defined,
 and bacula would open a new spool file in the next directory as soon as it
 begins despooling.

We run 1 spool directory and several concurrent jobs spool into it.
While one is despooled, the others continue spooling. However, if you
run out of spool space, spooling is stopped on all jobs until a
complete despool is done.
Best practice IMHO with big disks:
- SpoolSize large enough to hold most of your backups completely
- Spool Directory large enough to hold several jobs

In my case most backups are below 50GB, incrementals often only 100MB.
I use a 1TB spool directory and a SpoolSize of 800GB, large enough for
all but my biggest backup job. It works fine here.
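In bacula-sd.conf these limits live in the Device resource; a sketch mirroring the values above (the device name and paths are examples):

```
Device {
  Name = LTO4-Drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 1000 GB      # shared ceiling for the device
  Maximum Job Spool Size = 800 GB   # per-job cap, as described above
}
```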

Regards,
Adrian


Re: [Bacula-users] Perfomance improvement moving from mysql 5.0 to 5.1 or 5.5?

2011-08-28 Thread Adrian Reyer
On Sun, Aug 28, 2011 at 03:22:01PM +0200, Maria Arrea wrote:
  We are using bacula 5.0.3 with mysql 5.0.77 on RHEL 5.7 x64. We are backing 
 tens millons of file and we are staring to see performance problems with 
 mysql. Our server has 6 GB of ram and 1 quad core Intel Xeon E5520 @2.27 Ghz.

I went from
1. 4GB MySQL 5.1 MyISAM via
2. 16GB MySQL MyISAM via
3. 16GB MySQL InnoDB to
4. 8GB PostgreSQL 9.0

9M files in last full backup, 2TB data, 35 clients.
See Message-ID: 20110811154047.ga27...@r2d2.s.lihas.de
Subject: Re: [Bacula-users] Performance with many files
2011-08-11 and 2011-07-06 for the background.

Postgres works so far, but I am only in month 4 of 13 now. A notable
difference is the time needed for spooling jobs to one of my 1GB-sized
File volumes: at the beginning it was 3x 1GB/minute with MySQL, but
performance steadily decreased and in the end it was down to 1GB per
3 minutes with otherwise unchanged disk parameters (iSCSI volume).
With Postgres I have been running at 2-3x 1GB/minute for a month now.
The limit is the iSCSI part, which I will get rid of soon.

Regards,
Adrian


Re: [Bacula-users] Ignore fileset changes

2011-08-23 Thread Adrian Reyer
On Tue, Aug 23, 2011 at 01:38:23AM +0200, Olle Romo wrote:
 What I mean is that if I have a drive removed, run the job then attach  
 the drive and run the job again, Bacula will do a full backup of the  
 drive even if it previously had done an incremental backup on the same  
 drive. Ideally I want it to just continue the incremental backup.

Use 2 filesets and 2 jobs: one like you have now, but with the
removable drive excluded; the other with only the removable drive, run
only when the drive is present, e.g. checked by a pre-script.
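In bacula-dir.conf the pre-script check could be wired up like this (job and fileset names and the mountpoint are made up; a non-zero exit code from the script fails the job before any files are read):

```
Job {
  Name = backup-removable
  JobDefs = DefaultJob               # assumed shared defaults
  FileSet = RemovableDriveOnly       # fileset containing only the drive
  # skip the job when the drive is absent: mountpoint(1) returns non-zero
  Client Run Before Job = "/bin/sh -c 'mountpoint -q /mnt/removable'"
}
```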

Regards,
Adrian


Re: [Bacula-users] Performance with many files

2011-08-11 Thread Adrian Reyer
On Fri, Jul 08, 2011 at 08:30:17AM +0200, Adrian Reyer wrote:
 Speed improved many many times. My incremental backup finished after
 just 10 minutes while it took 2h earlier.

This had been the benefit of using InnoDB over MyISAM. However, at 12GB
RAM and 8900 File entries (12GB file on disk) it became slow again,
and I took the step of converting to PostgreSQL.
While I only gave 8GB of memory to PostgreSQL, it is quite a bit faster
so far. A full backup that took 1 day a month ago, with fewer entries
in the database, was up to 3 days on MySQL this month; with PostgreSQL
it was down to 1 day again.
The hardware is the same system with 16GB RAM it has been before,
serving at the same time as iSCSI storage for the bacula-sd residing on
some other box. The import read the dump at a constant 2MB/s, which I
considered somewhat slow, but I think 'constant' is the important part
here. I did the migration with the Bacula manual and
http://mtu.net/~jpschewe/blog/2010/06/migrating-bacula-from-mysql-to-postgresql/

Just to throw in some numbers that might help others.

Regards,
Adrian


Re: [Bacula-users] Migrate File-Storage without autochanger to vchanger Skript?

2011-08-08 Thread Adrian Reyer
Hi Eric,

I am pretty sure I have all the possible Maximum Concurrent Jobs
settings in place. They are set in
  - Director (50)
  - JobDefaults that are included everywhere (100)
  - Clients (default, but I only want 1 job per client)
  - Storage (100 for files, 4 for my tape drive, all with spooled data)

On Mon, Aug 08, 2011 at 12:16:19AM -0700, Eric Pratt wrote:
 I also thought about the different methods of migrating the data, but
 decided it wasn't worth doing.  I have vchanger auto-create the new
 volumes with initmag.  I then use Bacula's 'label barcodes' command to
 bulk label the empty volumes vchanger created.  I tell the existing
 Bacula job to start using the new autochanger device but I keep the
 old volumes on disk until their files and jobs expire.  When they do
 expire, I then remove the old volumes from disk and use 'delete
 volume' in Bacula.  Once you've done that for all old volumes, you're
 completely migrated to vchanger.

I know this would be the easiest way to do it. Unfortunately, the data
in use is ~12TB on file storage and I have only 500GB left; due to the
'missing' concurrent jobs I am unable to migrate some data to tape
during the day to free enough space to finally start a decent file
pool. I am just running round in circles.

Regards,
Adrian


[Bacula-users] Migrate File-Storage without autochanger to vchanger Skript?

2011-08-07 Thread Adrian Reyer
Hi,

my VirtualFull backups keep failing on me, as I have many of them. They
read from 2 different File pools and target 1 autochanger tape library.
No matter how many concurrent jobs I declare, and although I activate
Spool Data and it actually does the spooling, only a single job can
access a single File storage at a time. So one job waits for the
autochanger while the other waits for the File device, and they end up
in a deadlock.
The idea is now to just use vchanger and present multiple File drives
to bacula to get rid of the issue. Obviously the File volumes are
already written and labeled, and their naming is incompatible with the
vchanger names.
- Is there a different way to resolve this deadlock permanently, even
  if jobs have to wait a while after being scheduled because other jobs
  block the devices?
- Is there an easy way to make the current File volumes known to
  vchanger and Bacula?
  The current approach would be to stop bacula, move and rename the
  files, and update the occurrences of the file names in the Job and
  Media tables.
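For the catalog part of that last step, a per-volume sketch (both volume names are invented; take a catalog dump first and verify each match with a SELECT before updating):

```
-- the volume name itself is stored in the Media table
UPDATE Media
   SET VolumeName = 'BigDisk_0001_0001'
 WHERE VolumeName = 'FileVol-0001';
```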

Regards,
Adrian


Re: [Bacula-users] FW: Pruning/Purging and Volume Retention

2011-07-31 Thread Adrian Reyer
On Sun, Jul 31, 2011 at 06:24:11PM +0100, Graham Sparks wrote:
 Thanks for that advise.  I'll adjust the retentions and see if I can achieve 
 what you described.  Provided I make sure the full backups definitely run 
 every month, this should be an adequate solution.

If 'run' is a matter of hosts being reachable, you might consider
VirtualFull backups.

Regards,
Adrian


Re: [Bacula-users] Non-bacula: Tar restore in an autochanger

2011-07-25 Thread Adrian Reyer
On Mon, Jul 25, 2011 at 05:37:46PM +0100, Alan Brown wrote:
 I've been handed a tar backup comprising 66 (yes really) LTO4 tapes.
 For obvious reasons I'd prefer not to have to load the tape drive 
 manually for each tape. Does anyone have pointers (or even better a 
 script) on automating this kind of restore?

Sorry, I have no script and have never been in a situation like yours.
But tar is the 'tape archiver' and it knows
 -M for multi-volume archives
 -F to run a script at the end of each tape (implies -M)

I'd use -F to call a script that
- unloads the current tape
- loads the next one
  - either via 'mtx next' plus some end-of-slots logic
  - or via 'mtx load SLOT'
- if it was the last tape in the changer, prompts for a new set of
  tapes
- else exits so tar can continue
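Such a -F script might look like this; the changer device, slot count and state file are assumptions, and error handling is omitted:

```
#!/bin/sh
# next-tape.sh - called by: tar -x -M -F /path/next-tape.sh -f /dev/nst0
CHANGER=/dev/sg3           # assumed changer device
LAST_SLOT=16               # assumed magazine size
STATE=/var/tmp/tar-restore.slot

slot=$(cat "$STATE" 2>/dev/null || echo 1)
mtx -f "$CHANGER" unload                  # put the finished tape away

if [ "$slot" -ge "$LAST_SLOT" ]; then
    printf 'Magazine empty - insert the next set of tapes, press Enter: ' >&2
    read _dummy
    slot=0
fi
slot=$((slot + 1))
mtx -f "$CHANGER" load "$slot"
echo "$slot" > "$STATE"
exit 0                                    # tar continues on the new tape
```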

Regards,
Adrian


Re: [Bacula-users] Where to stuff bacula hints/tricks/documentation?

2011-07-18 Thread Adrian Reyer
On Mon, Jul 18, 2011 at 01:06:33PM +0100, Martin Simmons wrote:
 It says Except where otherwise noted... so perhaps you can put your own
 license in article?

Would be an option, but when you actually want to submit text it states:
Note: By editing this page you agree to license your content under the
following license: CC Attribution-Noncommercial-Share Alike 3.0 Unported

If it is fine to just add a GFDL-Statement to the page as well, I can
perfectly live with it.

Regards,
Adrian


Re: [Bacula-users] Write spooled data to 2 volumes

2011-07-17 Thread Adrian Reyer
I have a similar problem here; however, as we generally back up to disk
and only copy the jobs to 2 sets of tapes afterwards, one of them being
brought offsite, it is bearable.
We achieve this by using Selection Type = SQLQuery and a query selecting
all backups that have not yet been migrated into the specific pool.
We then run these jobs from within an Admin job whose only purpose is to
swap in the 'right' 'NextPool' statement for the current copy job, run
the copy job, and swap the statement back.
If backing up to a disk pool as a 'cache' is an option for you, I can
provide more details on my setup and the scripts I wrote to change the
pools.
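
For illustration, the selection part of such a copy job could look roughly
like this in bacula-dir.conf. This is a sketch only: the job and pool names
and the exact SQL are assumptions, not my actual config.

```
Job {
  Name = "copy-to-tape"
  Type = Copy
  Pool = DiskPool         # source pool; its NextPool is what the
                          # Admin job rewrites before each run
  Selection Type = SQLQuery
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job, Pool WHERE Job.PoolId = Pool.PoolId AND Pool.Name = 'DiskPool' AND Job.Type = 'B' AND Job.JobStatus = 'T' AND Job.JobId NOT IN (SELECT PriorJobId FROM Job WHERE PriorJobId IS NOT NULL)"
  ...
}
```

The subquery excludes jobs that already served as the source of a copy
(copied jobs record their original in PriorJobId).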

Regards,
Adrian



[Bacula-users] Where to stuff bacula hints/tricks/documentation?

2011-07-17 Thread Adrian Reyer
Hi,

I just wanted to write a HOWTO for using bacula with multiple
NextPool statements and assumed wiki.bacula.org would be a good place to
put it. But I don't like the licence (CC Non-Commercial). Is there some
other bacula-related page where I can publish things for free use, e.g.
under the GFDL, or do I need to set up a separate spot?

Regards,
Adrian



Re: [Bacula-users] Tape BLOCKED waiting for.....

2011-07-11 Thread Adrian Reyer
On Mon, Jul 11, 2011 at 11:12:00AM -0400, Christian Tardif wrote:
 I know I can just kill the job, label my tape, and restart the job. But 
 there are certain times where I just can't do that. So, how can I 
 unblock the device to be able to label my tape and let the job continue 
 normally?

I have never tried it myself, but I saw some documentation yesterday
about labeling tapes when needed by issuing 'unmount' first. Doesn't
this resolve your 'blocked' status?
-
http://www.bacula.org/5.0.x-manuals/en/main/main/Brief_Tutorial.html#SECTION001610
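
The sequence from that tutorial section boils down to something like this
in bconsole (the storage name 'Autochanger' is an assumption; substitute
your own):

```
unmount storage=Autochanger
label storage=Autochanger pool=Default
mount storage=Autochanger
```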

Regards,
Adrian



Re: [Bacula-users] Copy/Migration utilization

2011-07-10 Thread Adrian Reyer
Hi Harry,

On Sun, Jul 10, 2011 at 01:38:52PM +0200, Harald Schmalzbauer wrote:
 of a average transfer rate of 65MB/s makes me worrying about massive
 repositioning.

AFAIK LTO drives have adaptive speeds, unlike older technologies. If the
data comes in slower, the drive will just run at a slower but somewhat
constant speed. No more stop-and-go.

 Can I optimize my setup so that there won't be so many new files written
 on tape? Or should the creation of a new file mark been done without
 interruption of the transfer, and there's something wrong with my setup?

Do you use 'Spool Data = yes'?
To my understanding you can run multiple jobs to the same storage at the
same time, but they end up interleaved. Spooling the data will write full
jobs, or at least bigger chunks of a job, in one run.
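
For reference, spooling is switched on per job, while the spool area is
configured on the SD's device; a minimal sketch (the directory and size
are assumptions):

```
# bacula-dir.conf (Job resource)
Job {
  ...
  Spool Data = yes
}

# bacula-sd.conf (Device resource)
Device {
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200 GB
}
```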

Regards,
Adrian




Re: [Bacula-users] Copy/Migration utilization

2011-07-10 Thread Adrian Reyer
Hi Harry,

On Sun, Jul 10, 2011 at 03:39:36PM +0200, Harald Schmalzbauer wrote:
 The server is at remote site, so I can't hear any mechanicals, but I
 guess at rest means stop, thus my worries about extensive repositioning.

Sorry, I am no expert on drives and have only been using bacula myself
for 6 weeks now.

 I have no backup jobs using the tape drive, so no spool is in use. I
 only use the tape drive for migration (or sometimes copy) jobs. And in
 the disk-pool I use Use Volume Once = yes, so every job has it's own
 file without interleaved data, which has exactly the size the job
 summary reports.

I found using Spool Data for copy jobs to be faster for my setup. I have
fast local disks for spooling, but some of my disk storage is accessed
via iSCSI over 1-GBit/s links.
However, I am currently running a few copy jobs and the limiting factor
seems to be my bacula-sd consuming one complete CPU, throttling me at
55 MB/s. The CPU is an older 'AMD Athlon(tm) 64 X2 Dual Core Processor
3800+'.

 Maybe this was not an issue with slower tape drives. LTO2 would only
 suffer from about 6% performance loss, if my wild guess has any truth...

LTO4 here as well, and no ear next to the drive. However, 'mt status'
won't run while the drive is in use by the copy jobs; how did you get
that info?

Regards,
Adrian




Re: [Bacula-users] Performance with many files

2011-07-08 Thread Adrian Reyer
On Wed, Jul 06, 2011 at 11:08:44AM -0400, Phil Stracchino wrote:
 for table in $(mysql -N --batch -e 'select
 concat(table_schema,'.',table_name) from information_schema.tables where
 engine='MyISAM' and table_schema not in
 ('information_schema','mysql')'); do mysql -N --batch -e "alter table
 $table engine=InnoDB"; done

Actually, the outer ' in the first mysql command needs to be replaced by
" (or the inner ' needs escaping).
However, for some reason mysql 5.1 with compiled-in innodb calculated a
lot on the tables but never actually changed them to InnoDB. So I just
did a classic mysqldump, changed MyISAM to InnoDB in the dump, and
loaded it again. Speed improved many times over: my incremental backup
finished after just 10 minutes, while it took 2h earlier.
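
A corrected version of the loop with the quoting fixed could look like the
sketch below. The mysql stub is only there so the sketch runs without a
server (and prints what it would do); remove it for real use, and note the
table names it returns are made up.

```shell
#!/bin/sh
# Quoting fix: double quotes around each -e argument, so the single
# quotes inside the SQL survive the shell.
# Stub standing in for a real mysql client -- remove for real use.
mysql() {
  case "$*" in
    *select*) echo "bacula.File bacula.Path" ;;  # pretend MyISAM tables
    *)        echo "WOULD RUN: $*" ;;
  esac
}

for table in $(mysql -N --batch -e "select concat(table_schema,'.',table_name) from information_schema.tables where engine='MyISAM' and table_schema not in ('information_schema','mysql')"); do
  mysql -N --batch -e "alter table $table engine=InnoDB"
done
```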

Regards,
Adrian



Re: [Bacula-users] Test Autochanger with mtx-changer

2011-07-07 Thread Adrian Reyer
On Thu, Jul 07, 2011 at 04:41:52PM +0200, Robert Kromoser wrote:
 /usr/local/bacula/bin/mtx-changer /dev/sg0 list 0 /dev/nst0 0
 mtx: Request Sense: Sense Key=Illegal Request

This is a typical error if you hit the wrong /dev/sg*. Try e.g.
/dev/sg1 instead of /dev/sg0.
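
One way to check which node is which is 'lsscsi -g', which lists each SCSI
device with its generic (sg) node; the changer shows up with type 'mediumx',
the drive with type 'tape'. Illustrative output only, device and model names
will differ on your system:

```
$ lsscsi -g
[1:0:0:0]  tape     HP  Ultrium 4-SCSI  /dev/st0  /dev/sg0
[1:0:0:1]  mediumx  HP  MSL2024         -         /dev/sg1   <- changer
```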

Regards,
Adrian



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Adrian Reyer
On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
 should I use for my tables? is MyISAM.[1]  At this point, wherever
 possible, EVERYONE should be using InnoDB.

I will, if the current backup ever finishes. For a start on MySQL 5.1,
though (Debian squeeze). I am aware InnoDB has more stable performance,
according to the posts I have found in various bacula-mysql related
threads. Your post gives me some hope I can get away with converting the
table format instead of migrating to postgres, simply because I have
nicer backup scripts for mysql than for postgres.

 your MySQL configuration using MySQLtuner (free download from
 http://mysqltuner.com/mysqltuner.pl; requires Perl, DBI.pm, and DBD::mysql.)

I am using that one and tuning-primer.sh from http://www.day32.com/MySQL/

Regards,
Adrian
