[Bacula-users] bacula-dir/-sd under hyper-v using iscsi storage

2014-03-26 Thread James Harper
I have a new server running Windows 2012R2 with Hyper-V for running Windows 
VMs. Rather than buying a separate server to run Bacula on, I'm thinking we 
can use a SAN with an iSCSI connection to a Linux VM and run the director and 
sd on that.

Is anyone doing this who could comment on the performance and/or pitfalls? 
I don't know how I'll achieve the virtual-to-USB part yet either.

The SAN is a Netgear NAS with an iSCSI facility. There was apparently a 
version of Bacula available to run on the NAS itself, but I don't think it 
has been updated in a number of years.

Thanks

James

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] restoring backed up catalogue alongside live catalogue

2013-12-04 Thread James Harper
I'm documenting the restore procedure for a customer site. For offsite media, 
I back up the catalogue to the offsite media and then purge the records for 
that media from the catalogue (so that subsequent virtualfull backups don't 
try to use the offsite media as a source, which leads to all sorts of 
confusion).

There are several TB of onsite Bacula storage, which will cover 99% of 
restore scenarios, but if a user wanted a file that had fallen off the end of 
the onsite storage and I needed to restore from archived media, I think the 
procedure would go something like this:

1. bextract the catalogue from the offsite media (I save the bsr file too so 
this should be quick)
2. load the catalogue into a different MySQL database (or SQLite3 maybe, with 
an appropriate dump converter?)
3. create a restore job that uses this new catalogue
4. restore the file
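Step 3 would presumably mean a second Catalog resource in bacula-dir.conf; a 
minimal sketch (all names here are hypothetical, not from an actual config):

```conf
# bacula-dir.conf sketch -- a second Catalog resource pointing at the
# database the archived catalogue was loaded into
Catalog {
  Name = "ArchiveCatalog"
  dbname = "bacula_archive"; dbuser = "bacula"; dbpassword = "secret"
}
```

The restore job's Client resource (or a dedicated copy of it) would then 
reference Catalog = "ArchiveCatalog" instead of the normal catalogue.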

Will my idea of a restore job that uses a different catalogue from the one 
Bacula normally uses actually work? The config file appears to allow for 
this, but that doesn't mean it works in practice...

Thanks

James



Re: [Bacula-users] Fwd: Spooling attrs takes forever

2013-11-15 Thread James Harper
 
 When it does a full backup of one client (also running Mint 15) of
 around 40G it gets stuck on "Sending spooled attrs to the Director.
 Despooling 151,437,267 bytes". This one step takes over eight hours
 to complete. For instance it started at 03:10 my time today and it's
 now 08:20 and it's still running.
 

I had this spring up suddenly after a kernel upgrade. strace showed that 
mysql was constantly doing 512-byte writes + fsyncs, and I guess a kernel 
update handled fsync differently. I have a RAID1 of SATA disks; a 7200RPM 
SATA disk manages something like 75 IOPS, and if each op is 512 bytes that's 
only about 38kB/s of throughput.
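The pathological pattern is easy to reproduce outside mysql; a quick sketch 
using GNU dd (the counts are arbitrary, and a temp file stands in for the 
database files):

```shell
#!/bin/sh
# Emulate the observed mysql behaviour: 512-byte writes, each synced
# to disk before the next one. On a 7200RPM spindle this collapses to
# tens of kB/s; on anything with a write cache it will look much faster.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=512 count=64 oflag=sync 2>&1 | tail -n 1
rm -f "$tmpfile"
```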

I think I added:

innodb_flush_log_at_trx_commit=0

to my config file and the performance went back up to what I was used to, at 
the expense of a slight exposure to data loss in the event of a power failure.
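For reference, the setting goes in the [mysqld] section of my.cnf (the path 
varies by distribution):

```conf
# /etc/mysql/my.cnf -- trade strict durability for throughput: flush the
# InnoDB log roughly once a second instead of fsyncing at every commit
[mysqld]
innodb_flush_log_at_trx_commit = 0
```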

James




Re: [Bacula-users] Explanation regarding archive pool type

2013-11-05 Thread James Harper
 
 Okay, I was asking about it only because I had seen the webmin interface
 with that option.
 However, if you want to back up some data forever (and then delete it
 from the original source), what is the best procedure? Create a
 different pool every time you have to save something, or keep everything in
 a single pool?
 

Do you need the catalogue to stay available, or are you happy for it to 
expire and to bscan the media back in, in the rare case where you might want 
to restore it?

What would be nice is a way to store a sub-catalogue on the media itself, and a 
way of importing that catalogue back in again when required, as a bscan isn't 
always ideal.

James




Re: [Bacula-users] Improve bacula performance with software compression

2013-10-23 Thread James Harper
 Ways to resolve the problem?
 1. Back up uncompressed, then unpack the volume with external tools,
 compress and pack it again.
 (..How would Bacula operate with that again? No way..)
 
 What else?
 
 I want to store files in volumes compressed, without losing network copy
 performance.
 

I've just started using btrfs on all backup media. Btrfs can use LZO 
compression, which is quite fast, and I'm much happier with the compression 
load on the sd rather than on the fd.
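For anyone wanting to try it, LZO compression is a btrfs mount option; a 
hypothetical fstab entry (device and mountpoint are placeholders):

```conf
# /etc/fstab -- compress everything written to the backup filesystem
/dev/sdb1  /backup  btrfs  compress=lzo,noatime  0  0
```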

James



Re: [Bacula-users] Improve bacula performance with software compression

2013-10-23 Thread James Harper
 
  I've just started using btrfs on all backup media. Btrfs can use LZO
  compressions so is quite fast and I'm much happier with the compression
  load on the sd rather than on the fd.
 
 btrfs is still under development, or am I wrong?

Yes it is. It is highly recommended to use a recent kernel to get the most 
stable version possible. The FAQ 
(https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F) says to keep 
backups... a curious comment given our use case :)

I think the only major flaw is that running out of space can cause problems 
that can be difficult to resolve, but this information may be out of date.

On the plus side, you get fast compression, and you are better protected 
against data corruption.

James


Re: [Bacula-users] Off-site Backups

2013-09-25 Thread James Harper
 
 I am struggling to find a method of keeping consistent off-site without
 breaking easy restores.
 
 My planned schedule was as follows:
 
 First Sunday of Month: Full Backup to Disk
 Monday-Saturday: Incremental Backup to Disk
   Friday (after the incremental):  Virtual Full Backup to Tape
 Subsequent Sundays in Month:  Differential Backup to Disk
 
 Keeping 3 months (or more depending on rate of change and how well
 compression works, which are currently unknowns) on disk for restores,
 and the Friday virtual full going off-Site in case of disaster.
 
 The problem I ran into (well, not actually; this is still in testing, not
 production) is that on the second Friday of the month the virtual full
 fails, because it can't find the previous virtual full to build the new
 one from.  How do I make it go back to the actual full instead of the
 previous Virtual Full?
 
 Will I be stuck creating a virtual full to another disk pool, then
 running a copy job for off-site backups, which unfortunately greatly
 reduces the history of on-disk data I can keep for restores?
 
 Or is there some method I have yet to discover that will allow me to
 mark the tape volume unavailable so that it ignores that job on the
 subsequent virtual fulls.  I have tried playing with setting the enabled
 status to disabled, and updating the volstatus parameter, without
 getting anywhere.
 

My backup regime is:
. Full backups Friday and Saturday night (spread over two nights because 
there is too much data to do in one night)
. Incrementals 3 times a day every other day
. Every Sunday-Thursday night, a virtual full + catalog backup to USB disk 
for offsite
. After the virtualfull, purge the offsite volume

The offsite disk will only ever be used in the case of a total loss of the 
backup server, and a catalog restore will be required then anyway, so purging 
the volume isn't a problem.

I originally modified Bacula so that it could exclude the virtual full medium, 
but that was a bit of a hack.

My post-catalog-backup script does the purging. I have one director doing 
backups for two sites, so there are actually two offsite USB disks (one for 
each site).

I use autofs to mount the USB disk automatically; it gets mounted on 
/backup/offsite. The sds are completely separate machines from the director, 
hence the need for the scp.

The script I use follows this email (some stuff redacted).

James

#!/bin/sh

/etc/bacula/scripts/delete_catalog_backup

# list every volume used by a job in either offsite pool, then purge each
/usr/bin/mysql --skip-column-names bacula -ubacula -ppassword <<EOF |
SELECT DISTINCT VolumeName
FROM Job
JOIN Pool
ON Job.PoolId = Pool.PoolId
JOIN JobMedia
ON Job.JobId = JobMedia.JobId
JOIN Media
ON JobMedia.MediaId = Media.MediaId
WHERE Pool.Name IN ('site1-offsite', 'site2-offsite');
EOF
while read media
do
  echo "Purging $media"
  echo "purge volume=$media" | /usr/bin/bconsole >/dev/null 2>/dev/null
done

# copy catalog bsr to usb too
scp /var/lib/bacula/BackupCatalog.bsr \
  site1-sd-server:/backup/offsite/BackupCatalog.bsr



[Bacula-users] filesystem setup for disk based backups

2013-09-01 Thread James Harper
In the past, for disk-based backups, I've used multiple physical disks, each 
with a single sd device on it, limited each device to a single job at a time, 
and had each fd back up to its own pool with one device per job. This 
minimises seeks and fragmentation etc., but it does mean I'm making guesses 
about which client to back up to which disk to balance occupancy and 
workload.

I'm setting up a new server for backups now with 4 x 3TB disks and am thinking 
about the best parameters for filesystems. I'd like for multiple jobs to be 
able to run at once (still one volume per job) and with all the SD data sitting 
on one giant partition on RAID5 to maximise storage use.

I think I should be able to tweak the sd media filesystem to best suit this 
and minimise fragmentation. The parameters I'm looking at are:
. large allocation unit - maybe up to 1MB
. high commit time to favour streaming writes. Up to 30 seconds should be 
acceptable, provided a sync is done at the conclusion of each job
. data in writeback mode (i.e. no ordering of data writes, only metadata is 
journaled, on the basis that after a power failure the current backup is 
considered dead anyway).
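The list above maps roughly onto ext4 options like this (a sketch only; the 
device and mountpoint are placeholders, and the allocation unit would be set 
at mkfs time, e.g. via ext4's bigalloc feature with a 1MB cluster size):

```conf
# /etc/fstab sketch -- metadata-only journaling, 30-second commit interval
/dev/md0  /backup  ext4  noatime,data=writeback,commit=30  0  2
```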

Anything else I should be considering? Does any of the above sound like a 
recipe for disaster?

Thanks

James




Re: [Bacula-users] Wrong storage proposed while restoring

2013-06-24 Thread James Harper
 Hello list,
 
 I have this annoying situation: while restoring, after selecting the 
 destination
 client, bacula proposes the following restore job parameters:
 
 JobName: RestoreFiles
 Bootstrap:   /var/lib/bacula/hagrid-dir.restore.11.bsr
 Where:   /home/restores
 Replace: always
 FileSet: client_3
 Backup Client:   client_1-fd
 Restore Client:  client_1-fd
 Storage:  client_2
 When:2013-06-24 12:55:17
 Catalog: client_1
 Priority:10
 Plugin Options:  *None*
 
 
 But the storage is wrong! It should be the storage of client_1, not client_2.
 This mismatch causes the job to fail.
 I have to modify this parameter manually every time I have to restore a file.
 
 Actually the Storage parameter seems to be picked randomly by bacula;
 sometimes it is client_1, sometimes client_2, sometimes client_3, etc...
 
 Is this intended behaviour? And if it is, why?
 

What is the definition of your RestoreFiles job?

James




[Bacula-users] why restore so slow after bscan?

2013-06-17 Thread James Harper
I needed to restore from some old disk media which had long since been purged 
from the catalog. I bscan'd it back in and started a restore but it's taking 
_ages_. Normally a restore from disk gets started in seconds and then is 
basically limited by network/media speed. This restore seems to still be 
seeking on the disk after 5 minutes (based on status stor). It's like it 
doesn't know where the file is so it has to re-scan the entire media...

Is there something else I need to do in a bscan, or does bscan not import 
all the information required to make a seek possible?

Thanks

James




[Bacula-users] default file retention period

2013-06-06 Thread James Harper
I get occasional "Fatal error: Cannot find previous jobids." messages, which 
I believe are due to the file records for a job having been purged. My pool 
definition says Volume Retention = 29 days, and the docs say file retention 
defaults to 60 days and job retention to 180 days, yet it seems that the 
files are being purged long before that. My client record has a File 
Retention of 30 days and a Job Retention of 6 months.

When I look at the Pool records, the File and Job retention are set to 0, which 
I assume implies the default retention period mentioned in the documentation.

How can I find out what is going on? There was definitely a previous full 
backup since the retention period, and I assume if there wasn't Bacula would 
just upgrade the incremental job to full anyway.

Thanks

James



Re: [Bacula-users] default file retention period

2013-06-06 Thread James Harper
 
 I get occasional Fatal error: Cannot find previous jobids. messages, which I
 believe are due to the file records for a job having been purged. My pool
 definition says Volume Retention = 29 days, and the docs say file retention
 defaults to 60 days and job retention defaults to 180 days, yet it seems that
 the files are being purged out long before that. My client record has a File
 Retention of 30 days and  Job retention of 6 months.
 

Disregard. The volumes just needed to be updated to reflect the retention in 
the pools. Must have been that way for a while.
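For anyone hitting the same thing, one way to spot volumes whose retention no 
longer matches their pool (standard catalogue schema; MySQL syntax assumed):

```sql
-- volumes whose retention has drifted from the pool definition
SELECT Media.VolumeName,
       Media.VolRetention AS volume_retention,
       Pool.VolRetention  AS pool_retention
FROM Media
JOIN Pool ON Media.PoolId = Pool.PoolId
WHERE Media.VolRetention <> Pool.VolRetention;
```

bconsole's update command can then push the pool values back onto the 
volumes.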

James



Re: [Bacula-users] Benefits of Hardware compression versus Software compression for highly compressable data

2013-05-14 Thread James Harper
 Hello,
 
 we do daily fullbackups for our database backups and currently I use
 software compression. The reason behind this is that I do disk2disk2tape and
 I want 2 weeks of backups readily on disk for fast restore.
 
 For our databases we do the following (we have around 500 gig of DB
 Backup data each day):
 

You mean hardware compression performed by the tape drive itself right?

For backup throughput you have to consider:
. How fast can data be read off the disk (and compressed) on the fd
. How fast can data be transmitted to the sd
. How fast can the sd write out the data to the storage medium

If your fd-sd link is slow (e.g. 1Mbit broadband or something) then software 
compression is a clear win, as it happens at the fd and increases the 
effective backup throughput.

If your fd-sd link is fast then compression is likely going to slow you down. 
With compression on I max out at around 2Mbytes/second, vs 20-50Mbytes/second 
when compression is off. I'm mainly backing up to USB2-attached storage, 
which peaks around 20Mbytes/second write speed anyway, and I need the storage 
space, so I use compression.

Hardware compression happens on the tape drive itself, so obviously there is 
less load on the fds but higher bandwidth utilisation. If your data is 
compressed on disk on your sd (because your fds compressed it), and your sd 
has disks fast enough to feed the tape drive as fast as it can take it, then 
your backup is going to be faster.

Lots of variables to consider! If it were my setup I'd be doing the 
compression at the fds and turning off hardware compression.
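Fd-side software compression is a per-FileSet option; a minimal fragment 
(names and path are hypothetical):

```conf
# bacula-dir.conf FileSet sketch -- GZIP runs on the fd, so data is
# already compressed before it crosses the network to the sd
FileSet {
  Name = "compressed-set"
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /data
  }
}
```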

I think there is a new compression algorithm available in newer versions of 
Bacula, but I'm not sure it has made it into the Windows version yet. If it 
is significantly faster then it would make fd compression a better option 
again.

I'd like to see sd-side compression - then you could offload to a GPU or 
dedicated encryption/compression hardware and really fly!

James



Re: [Bacula-users] RAIT anyone?

2013-05-10 Thread James Harper
 
 Now that we know there is a way to use multiple disks without adding the
 'moving symlink' script (thanks Kern),
 
 is there by any chance an undocumented trick to write two copies of the
 same job at once? (aka RAIT.) Like maybe listing two Pools in the JobDef?
 

Unlikely. Building redundancy at this level when RAID already exists seems like 
a waste of effort.

I see from your other posts that you have problems with RAID on your 
hardware... seems that fixing your hardware would be the best way to proceed.

James



Re: [Bacula-users] Postgres vs SQLite

2013-04-30 Thread James Harper
 Hi,
 
 I was wondering if there was any information about the performance
 difference between running Bacula with a Postgres database vs an
 SQLite database.  I don't have any other need for a Postgres server,
 so if I can get Bacula to perform as well with SQLite as it does with
 Postgres, then I'd prefer to drop Postgres altogether.
 

If you are backing up one machine with a small number of files then SQLite 
might be okay, but otherwise you'll probably find it becomes a performance 
bottleneck.

I recommend you go with postgresql (or mysql).

James




Re: [Bacula-users] very slow virtualfull job

2013-04-29 Thread James Harper
 Hello James,
 
 It looks like you have found the most important things: turn on
 attribute spooling,

This actually made it worse. Instead of distributing the tiny little writes 
throughout the backup job, it saved them all up to the end, when nothing else 
was running. It would be nice if the next job could start while attributes 
were still spooling...

 and switch to InnoDB.

I switched to InnoDB the last time this happened.

 Before switching to PostgreSQL, you might try running the MySQL tuning
 program mysqltune (I forgot the exact name). It tells you which items
 you should tune. Often users do not give it enough memory, or even
 do the opposite, which can cause problems, that is give it too much.
 
 There are several scripts that switch from MySQL to PostgreSQL that work
 fine. They also allow you to keep your MySQL running until you get a good
 config for PostgreSQL. Tuning PostgreSQL is much more complicated, but
 it gives *far* better results for big jobs.
 

When the data load into postgresql absolutely crawled along (~50kB/second 
disk write speed with 100% iowait), I knew I had a problem.

Something, somewhere has changed in my system that absolutely kills tiny sync 
writes. Or alternatively, something has changed in my system that makes mysql 
do tiny sync writes.

iostat showed ~50kb/second with 100% iowait while loading the catalogs, and 
nothing I did changed this. The following dd also behaved appallingly:

# dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=sync
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 56.2482 s, 9.3 kB/s

While on an old system (~10 years old) with a single ATA/100 harddisk:

# dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=dsync
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 1.10022 seconds, 477 kB/s

I'm working on tracking down wtf is going on there, but in the meantime I 
have set innodb_flush_log_at_trx_commit=0, which means it won't run an fsync 
after each tiny little write but will instead wait around a second and then 
flush everything. This means I stand to lose 1 second of database commits in 
the event of a crash, but I'd probably lose the whole backup job anyway so I 
don't see it as a loss.

Performance is now back to normal and I can take my time figuring out why this 
happened.

Thanks

James




Re: [Bacula-users] very slow virtualfull job

2013-04-29 Thread James Harper
 
 On Mon, Apr 29, 2013 at 08:33:42AM +, James Harper wrote:
  When loading the data into postgresql absolutely crawled along
 (~50kb/second disk write speed with 100% iowait) I knew I had a problem.
  Something, somewhere has changed in my system that absolutely kills tiny
 sync writes. Or alternatively, something has changed in my system that
 makes mysql do tiny sync writes.
 
 What do you expect from sync writes regarding bacula?
 I don't use sync writes there as I am very sure they won't give me a
 benefit. As soon as the DB-server dies, the job is failed, you can't add
 the remaining attributes in a reasonable way later and sd/dir both
 refuse to work without a database server.
 I suggest turning sync writes off.
 This is valid for bacula, not for some random other database
 application.
 

The thing that bothers me is that it was working perfectly well up until a week 
ago when it suddenly slowed to a crawl, and I can't figure out what changed.

James


Re: [Bacula-users] bacula exchange 2013 plugin

2013-04-29 Thread James Harper
 Hi all,
 
 there's no one who has used bacula with exchange 2013?
 
 I would like to know if I can use bacula plugins or I have to use script to
 backup it.
 

I wrote the original Exchange plugin for Exchange 2003. It uses the 
'streaming backup' interface of Exchange, which is claimed not to exist 
beyond 2007, where you are supposed to use VSS.

But it is actually possible to turn streaming backups back on in Exchange 
2010 by adding a registry key. If you go to 
HKLM\System\CurrentControlSet\services\MSExchangeIS\ParametersSystem\, create 
a DWORD value called "Enable Local Streaming Backup" and set it to 1, then 
restart Exchange, the Bacula plugin will complete. Restore appears to be 
limited to in-place restore only; you can't redirect it to another place or 
anything useful like that. But you do get logfile truncation etc.
If you have a system to test on, I'd be very interested to hear if it still 
works under Exchange 2013. I suspect it won't, but I was very surprised to find 
that it worked under 2010.

James




Re: [Bacula-users] very slow virtualfull job

2013-04-26 Thread James Harper
 
 On Thu, Apr 25, 2013 at 11:41:37PM +, James Harper wrote:
  What could have gone wrong with my mysql to make this happen? I've
 tried rebooting it.
 
 You very likely use MySQL with MyISAM tables. This is a very bad
 combination for bacula. It will be better with InnoDB tables and
 correctly tuned MySQL for these many inserts. However, postgres can be
 tuned the same way and has the additional benefit of being able to use
 parts of indices. As every insert has to update the tables and the
 indices, you have way less writes with postgres.
 I went the way MySQL 4GB MyISAM -> 12GB MyISAM -> 12GB InnoDB ->
 Postgres 12GB myself.

I converted to innodb last time this happened. It didn't fix it but the problem 
went away by itself shortly after so I never got a chance to investigate 
further.

James



Re: [Bacula-users] very slow virtualfull job

2013-04-25 Thread James Harper
 
 My setup has suddenly slowed down when running virtualfull jobs. I don't
 think it's all virtualfull jobs as if I kill off the one that's stalled then 
 a few more
 run okay until it hits another bad one.
 
 iostat on the sd shows pretty much nothing happening most of the time...
 bursts of 1MB/second or so then nothing for the next 10 seconds.
 
 Logging mysql queries I'm seeing lots of 'insert into batch ...' but there 
 are a
 few other jobs running concurrently so I can't be sure that these relate to my
 slow job.
 
 Bacula says the job is 'running' but way slower than it should. It's going to
 USB2 disk so I'd expect and have previously observed speeds of
 20Mbytes/second.
 
 Once the other jobs have completed I'll start stracing and monitoring the
 mysql queries to get a better picture of where it's going wrong, but any other
 suggestions for what to monitor would be greatly appreciated!
 

I switched on attribute spooling, and now the job flies along until it gets 
to inserting attributes, where it stops for ages. A virtual job that took 2 
minutes to run is still inserting attributes 2 hours later. It is making 
progress, just very very slowly, and mysql is using lots of CPU.
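For anyone following along, attribute spooling is a per-job directive in 
bacula-dir.conf (the job name is hypothetical):

```conf
# bacula-dir.conf Job sketch -- spool attributes to a file during the
# job and insert them into the catalogue in one batch at the end
Job {
  Name = "nightly-virtualfull"
  Spool Attributes = yes
  # ... remaining job directives unchanged ...
}
```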

What could have gone wrong with my mysql to make this happen? I've tried 
rebooting it.

How difficult is it to convert an existing installation over to postgresql? 
I've been meaning to do this for a while and it may be faster than trying to 
resolve the issue...

Thanks

James




[Bacula-users] very slow virtualfull job

2013-04-24 Thread James Harper
My setup has suddenly slowed down when running virtualfull jobs. I don't 
think it's all virtualfull jobs, as if I kill off the one that's stalled then 
a few more run okay until it hits another bad one.

iostat on the sd shows pretty much nothing happening most of the time... bursts 
of 1MB/second or so then nothing for the next 10 seconds.

Logging mysql queries I'm seeing lots of 'insert into batch ...' but there are 
a few other jobs running concurrently so I can't be sure that these relate to 
my slow job.

Bacula says the job is 'running' but way slower than it should. It's going to 
USB2 disk so I'd expect and have previously observed speeds of 20Mbytes/second.

Once the other jobs have completed I'll start stracing and monitoring the mysql 
queries to get a better picture of where it's going wrong, but any other 
suggestions for what to monitor would be greatly appreciated!

Thanks

James




Re: [Bacula-users] Restoring a large file to windows results in an empty file

2013-03-28 Thread James Harper
 Hello all,
 
 I am currently running Bacula 5.2.5 (the Windows Client I refer to is version
 2.4.5).
 My aim is to restore a file with a size of 700MB using BAT, but every time I
 try, only a file with a size of 0B is restored (although BAT says everything
 was successful). A 30MB file was successfully restored.
 

Anything tucked away in the job log?

 
 This is the client's .conf:
 ...
 # Configuration Messages
 Messages {
   Name = Standard
   director = bacula-x-director = all, !skipped, !restored
 }
 

Get rid of !restored so that restored files are logged.
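That is, the Messages resource would become something like (names as in your 
snippet):

```
Messages {
  Name = Standard
  director = bacula-x-director = all, !skipped
}
```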

James




Re: [Bacula-users] Best way to perform a backup for Sql Server 2012

2013-03-06 Thread James Harper
 
  I would use the T-SQL  backup command to backup the sql database to a
  file then have bacula backup that file. I have tried scripting dumps
  however it seems that the T-SQL BACKUP / RESTORE works better than the
  management studio scripting to .sql ( at least in my testing).
 
 
 What he said.
 
 I'm a big fan of dumping to text, backing up that text.
 
 To be sure, copy that text to another server, then load that DB up and see
 how it goes.  I do that every day for every database I backup.
 

What I do is:

. Full backup once a week
. Diff backup once a day
. Log backups hourly (done completely separately from the full/diff backups)

The full and diff backups are done via an SQL script that enumerates all 
databases and backs them all up. I don't have a way of excluding individual 
databases at this time.
After each full backup runs, I restore a copy of master, model, and msdb to an 
alternate location and then detach them.

The last step is done because you need master, model, and msdb in place to 
bootstrap mssql, so having these on hand is really useful.

I can post scripts but would need to redact them first :)
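In the meantime, the enumeration part is roughly this (a sketch only; the 
backup path D:\sqlbackup is an example):

```sql
-- Back up every online database except tempdb. Path is illustrative.
DECLARE @name sysname, @sql nvarchar(max);
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE name <> 'tempdb' AND state_desc = 'ONLINE';
OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name) +
               N' TO DISK = ''D:\sqlbackup\' + @name + N'.bak'' WITH INIT';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;
```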

James





Re: [Bacula-users] Best way to perform a backup for Sql Server 2012

2013-03-06 Thread James Harper
Also, one very important thing to note about mssql server if you start doing 
diff and inc backups: diff and inc backups are based on the most recent full 
backup taken.

If someone does a backup external to your automated Bacula backups, future diff 
and inc backups are based from that one, not the one Bacula did, and if that 
backup gets put somewhere that Bacula doesn't back up, or deleted, you will be 
unable to use your future diff and inc backups to restore from.

If you want to do a backup outside of Bacula, use the WITH COPY_ONLY option to 
ensure that the backup sequence isn't interrupted.
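For example (database name and path are illustrative):

```sql
-- An ad-hoc backup that does NOT reset the differential base:
BACKUP DATABASE MyDb
    TO DISK = 'D:\sqlbackup\MyDb_adhoc.bak'
    WITH COPY_ONLY, INIT;
```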

James




Re: [Bacula-users] Backup Windows 12??

2013-03-04 Thread James Harper
 Hi all,
 I am trying to install bacula 5.2.10 on a Windows 2012 server machine.
 The installation always has several problems starting the daemon, but
 when all are solved it seems to work.
 I use estimate and it works fine, but when I tried to make the first 
 test-job:
 

Have you definitely installed the 64 bit exe?

If so, are there any useful looking events in the event logs?

Also, what version is the director? Does that 5.0.1 indicate the version?

James

 04-Mar 16:00 sharepoint-db-fd JobId 13143: Fatal error: VSS API failure 
 calling
 InitializeForBackup. ERR=Unexpected error. The error code is logged in the
 error log file.
 04-Mar 16:00 sharepoint-db-fd JobId 13143: Fatal error: VSS was not
 initialized properly. ERR=Operazione completata.
 
 04-Mar 16:00 sharepoint-db-fd JobId 13143: Error: VSS API failure calling
 BackupComplete. ERR=Object is not initialized; called during restore or not
 called in correct sequence.
 04-Mar 16:00 sharepoint-db-fd JobId 13143: Fatal error: VSS API failure 
 calling
 GatherWriterStatus. ERR=Object is not initialized; called during restore or
 not called in correct sequence.
 04-Mar 16:00 netbackup-sd JobId 13143: Job write elapsed time = 00:00:06,
 Transfer rate = 0  Bytes/second
 04-Mar 16:00 netbackup-dir JobId 13143: Error: Bacula netbackup-dir 5.0.1
 (24Feb10): 04-Mar-2013 16:00:11
   Build OS:   i686-pc-linux-gnu redhat
   JobId:  13143
   Job:job-sharepoint-db-bck.2013-03-04_16.00.03_04
   Backup Level:   Full
   Client: sharepoint-db-fd 5.2.10 (28Jun12) Microsoft  
 (build 9200),
 64-bit,Cross-compile,Win32
 
 
 What does it mean?
 Any one had worked successfully with this version of Windows?
 
 Thank you
 




Re: [Bacula-users] using usb disk for daily off site backups with vchanger

2013-02-18 Thread James Harper
 Hi,
 
 i want to backup to usb disks every day, so i decided to install vchanger and
 use one usb disk for every day.
 
 my problem is that vchanger/bacula sometimes doesn't recognize my disks.
 

I tried vchanger for this once but couldn't get it to do what I wanted. I had 
much more success with using autofs.

auto.master looks like:

/backup /etc/auto.backup --timeout=30

/etc/auto.backup looks like:

offsite -fstype=auto,rw :/dev/disk/by-path/pci-\:02\:00.0-usb-0\:1\:1.0-scsi-0\:0\:0\:0-part1

so when you access /backup/offsite, the disk in the nominated USB slot is 
mounted, and is then unmounted 30 seconds after it is last accessed.

the bacula-sd config looks like:

Device {
  Name = offsite
  Media Type = File
  Archive Device = /backup/offsite
  Device Type = File
  AutomaticMount = yes;
  AlwaysOpen = no;
  Volume Poll Interval = 10;
  Random Access = yes;
  Spool Directory = /var/spool/bacula
  Label Media = no
}

This works really well for me.

James




Re: [Bacula-users] using usb disk for daily off site backups with vchanger

2013-02-18 Thread James Harper
 Hi James,
 
 i'll try that.
 
 How do you tell bacula which volumes it should use on the disks?
 

I forgot to mention that. I only have one volume on each disk. I suppose if you 
had multiple volumes you would need to set overwrite protection such that 
Bacula could only use one of them. E.g. with 5 disks rotated on a Mon-Fri 
schedule and 3 volumes per disk, you might say each volume can only be reused 
after 18 days (a period that is longer than 2 weeks but shorter than 3).
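A sketch of what that could look like in bacula-dir.conf (pool name 
hypothetical):

```
Pool {
  Name = OffsiteDisk        # hypothetical pool name
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 18 days
}
```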

James



Re: [Bacula-users] Restoring pruned files

2013-01-03 Thread James Harper
 
 I am trying to restore some files from pruned volumes and I am having a bit
 of difficulties. I did run bscan like:
 
 bscan -b h34.bsr -h localhost -P password -v -s -m -c bacula-sd.conf H23-
 DataStore-full
 
 Bscan finished normally and I went into bconsole, there I started the restore
 and it showed this:
 

Could it be that Bacula noticed that the files were old enough to be pruned and 
so just pruned them again as soon as the bscan had finished?

Try pushing out the prune date and doing the bscan again.

James



Re: [Bacula-users] Microsoft Windows Binaries

2012-12-19 Thread James Harper
 
 While it might be that PayPal solves some trouble it also create some
 additional grief. We have no PayPal company account and probably never
 will, so the question is how to pay the binary fee without PayPal.
 

Last time I made a payment somewhere (ebay I think) I was able to use my credit 
card via paypal without actually having a paypal account. I'm not sure if 
that's a universal thing or if it depends on the recipient of the payment.

James



Re: [Bacula-users] Slow disks RAID10 vs Fast disks RAID5

2012-11-22 Thread James Harper
 
 And for the file volumes either:
 
 12 x 600GB 15K SAS (RAID 5 or 6?) - faster disks (or is it in a raid 5?) and 
 less
 space (approx 6TB)
 
 OR
 
 12 x 2TB 7.2K SAS/SATA (RAID 10) - slower disks but more space (approx
 11.18 TB)
 

If you are backing up over 1Gbit/s ethernet then you need storage that can 
write at a maximum of about 100Mbytes/s, and either configuration should be 
capable of handling way more than that, assuming it never gets ridiculously 
fragmented. If you use RAID5 then I recommend a hardware RAID controller with 
battery-backed write cache, although it's less of a requirement for streaming 
storage than for random-access database storage where small (< stripe size) 
writes are frequent.

I would start by looking at how fast Bacula can push things on to disk. I 
wonder if there is a way of backing up to /dev/null so this can be measured? 
Once you have that figure then you can think about what sort of disk throughput 
you need.
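As a rough floor figure for the disk side, dd with fdatasync is a quick test 
(/tmp/bacula-writetest is just an example path; point it at the disk Bacula 
writes volumes to):

```shell
# Write 64 MiB and force it to disk, so the rate dd reports reflects the
# media rather than the page cache.
dd if=/dev/zero of=/tmp/bacula-writetest bs=1M count=64 conv=fdatasync
rm -f /tmp/bacula-writetest
```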

James




Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-10-01 Thread James Harper
 
 Hi,
 
 I ran some btape tests today to verify that I'd be improving throughput by
 changing blocksize from 256KB to 2MB and found that this does indeed
 appear to be true in terms of increasing compression efficiency, but it
 doesn't seem to affect incompressible data much, if at all.  Still, it seems
 worth changing and I thank you for pointing me in that direction.
 
 More importantly, I realized that my testing 6 months ago was not on all
 4 of my drives, but only 2 of them.  Today, I discovered one of my drives
 (untested in the past) is getting 1/2 the throughput for random data writes as
 the others!!
 

Is it definitely LTO3 and definitely using LTO3 media? LTO2 was about half the 
speed, including when using LTO2 media in an LTO3 drive.

James



Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-09-25 Thread James Harper
 
 I've found LTO drives of any variety rarely need cleaning.  I've found that 
 one
 cleaning tape will usually be sufficient for the life of a library.  Your 
 field
 support may very well be right.
 

The drive has an internal cleaning mechanism that is activated on load 
http://en.wikipedia.org/wiki/Linear_Tape-Open#Cleaning which is normally enough 
to keep the heads clean.

In my experience, when the drive starts telling you it needs cleaning, it's 
probably failing.

James



Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-09-24 Thread James Harper
 
 Hello all,
 
 This is not likely a bacula questions, but in the chance that it is, or the
 experience on this list, I figured I would ask.
 
 We've been using LTO3 tapes with bacula for a few years now.  Recently I've
 noticed how variable our tape capacity is, ranging from 200-800 Gb.
   Is that strictly governed by the compressibility of the actual data being
 backed up?  Or is there some chance that bacula isn't squeezing as much
 onto my tapes as I would expect?
 
 200Gb is not very much!
 

I don't think this explains your issue, but LTO drives will write the data to 
the tape, and then immediately read it again (the read head is placed such that 
this is possible). If the read is bad the drive will rewrite the data. This 
ensures that you get a good write, but obviously decreases the effective 
capacity of your tape.

Your tapes would have to be pretty worn out to drop the capacity to 25% though.

The tape and/or drive should record the margin and other figures, but I don't 
know of any Linux tools to read that information.

James




Re: [Bacula-users] Acces denied (Zugriff verweigert) on windows file restore

2012-09-11 Thread James Harper
 
 Hello
 
 we tested a restore on a somewhat larger jobs (Windows 2003 client) with
 the following result:
 
 10-Sep 21:28 bacula-test-dir JobId 116: Error: Bacula bacula-test-dir
 5.2.10 (28Jun12):
Build OS:   x86_64-pc-linux-gnu ubuntu 12.04
JobId:  116
Job:RestoreFiles.2012-09-10_16.53.15_09
Restore Client: xpression-fd
Start time: 10-Sep-2012 16:53:17
End time:   10-Sep-2012 21:28:33
Files Expected: 565,485
Files Restored: 565,485
Bytes Restored: 241,268,365,509
Rate:   14608.2 KB/s
FD Errors:  343
FD termination status:  Error
SD termination status:  OK
Termination:*** Restore Error ***
 
 
 The 343 errors are all but one of the following:
 
 10-Sep 20:27 xpression-fd JobId 116: Error: filed/restore.c:1475 Write error
 on h:/g/tmp/Service/eclipse_training_dvd/Uebungsmaterial/Kapitel
 09/Lektion 09.03/: Zugriff verweigert
 
 Strangely enough, the files are all there and I can't find any special
 permissions on them.
 
 Bacula is 5.2.10 on Server and client, OS is Windows 2003 32-Bit.
 
 Any idea what is wrong with these files?
 

I find Process Monitor from sysinternals.com very useful for debugging stuff 
like this. You should be able to filter it on that file/directory to show just 
the records you are interested in.

James




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-05 Thread James Harper
 Using iperf I measured the following performance:
 
 bacula-fd  = the Windows Server 2008 R2 host, 1Gb/sec NIC
 bacula-dir = a Linux Ubuntu 10.04 PC, 100Mb/sec NIC
 bacula-sd  = a Linux Ubuntu 10.04 server, 1Gb/sec NIC
 
 iperf server   iperf client   Performance
 -------------  -------------  --------------
 bacula-fd      bacula-dir     10 MBytes/sec
 bacula-fd      bacula-sd      111 MBytes/sec
 bacula-dir     bacula-fd      10 MBytes/sec
 bacula-sd      bacula-fd      26 MBytes/sec
 
 So normally the bacula client should be able to write to the bacula storage at
 26MBytes/sec ?
 
 Any suggestions ?
 

Any crappy computer made in the last 5 years should be able to saturate a 
gigabit link using iperf. The fact that you are only getting 26Mbytes/second 
fd->sd is a bit worrying... it's well above the 1Mbyte/second that bacula 
appears to be limited to, but it's still an indication of a major problem. I 
haven't had that much experience with Hyper-V for performance testing, but it 
should be able to approach Xen, which easily gets gigabit speeds for Windows 
VMs. Is your switch up to the job?

James




Re: [Bacula-users] bacula 5.2.10 on Windows 2008 R2 server : 1 MByte/sec transfer rate, very slow

2012-09-04 Thread James Harper
  Hi
 
 I have Bacula 5.01 Director installed on a Linux Ubuntu 10.04 server, and a
 Bacula 5.2.10 Client (Bacula-fd) running on a Windows Server 2008
 R2 SP1 server, with Hyper-V role installed. Purpose is her to backup the host
 server, not the virtual machines.
 
 Initially the backup transfer rate was extremely slow (kbytes/sec).
 Changing the Network adapter (Broadcom NetXstreme 5714) settings to
 Large Send Offload (LSO) = off as suggested in some posts increased the
 transfer rate to 1MB/sec, which is still 10 to 80 times slower than the 
 transfer
 rates I have with other servers running Linux or Windows 7. Transferring a 
 file
 'by hand' over the net runs at 80 MB/sec ...So I suspect a problem with
 Bacula-fd.
 
 Any idea how to configure the server so that I can get decent backup transfer
 speeds ? I can't imagine these servers can't be managed by Bacula.
 

First use something like iperf to make sure that the problem is not bacula. 
Test all possible combinations of send/receive for the following:

. host server
. bacula sd server
. another pc/server that is separate (can be linux or windows)

That should give you concrete evidence as to whether the problem is related to 
bacula. Hyper-V network can be terribly difficult in some cases.

James




Re: [Bacula-users] Bacula Windows client using ntfs change log?

2012-08-20 Thread James Harper
 Hello List
 
 i recently read about the NTFS change journal and its aim to support backups
 in finding files for incremental backups. Are there any plans to maybe include
 such a feature in the windows client? I suspect this would be really helpful 
 on
 bigger sized filesystems with a lot of files. Information about the NTFS
 feature can for example found here:
 http://msdn.microsoft.com/en-us/library/aa363798.aspx
 

I've implemented this in an unsubmitted patch. The problems I found are:

. takes a bit of memory to hold the data
. it doesn't completely solve the Accurate problem for an arbitrary 'since' 
date unless you keep the complete journal in memory rather than just 'this 
file has changed' (although it would work in the regular since-last-backup case)
. bacula really needs to store the FRN (NTFS File Reference Number) in the 
catalog, which takes extra space

Another idea I had was to cross-reference the MFT against the Accurate file 
information. Enumerating the MFT isn't particularly quick, but it is orders of 
magnitude faster than actually traversing the entire filesystem looking for 
changes. If Bacula recorded the FRN and the last USN then a 'has this file 
changed?' check would be straightforward.

I can send the patch (against 5.2.0 I think) if you would like to use it as a 
starting point.

James




Re: [Bacula-users] VirtualFull

2012-07-11 Thread James Harper
I have a backup_catalog_pre and backup_catalog_post scripts. The _pre script 
just calls the catalog backup script so it can be backed up by the job. The 
_post script is on the end of this email. It gathers any volumes associated 
with jobs in the offsite pool and purges them. Then for good measure it also 
copies the bsr file to the media.  I use postgresql, you'd need to rewrite it a 
bit to use mysql etc.

After the script runs, bacula will have no memory of the jobs on the offsite 
media and so will not choose them as a source for any VirtualFull jobs. In my 
case, needing to restore from offsite media would be because the backup server 
was completely wiped out so a catalog restore is required in that case anyway.

At some point I would also like to copy my bacula configs and even static 
binaries to the USB media as part of the job, but I haven't done that yet. Even 
better would be making the USB backup drive bootable too, although that is more 
effort to maintain.

James

#!/bin/bash

/etc/bacula/scripts/delete_catalog_backup

/usr/bin/psql -Atc "
SELECT DISTINCT VolumeName
FROM Job
JOIN Pool
ON Job.PoolId = Pool.PoolId
JOIN JobMedia
ON Job.JobId = JobMedia.JobId
JOIN Media
ON JobMedia.MediaId = Media.MediaId
WHERE Pool.Name = 'offsite';
" | while read media
do
  #echo Purging $media
  echo "purge volume=$media" | /usr/bin/bconsole >/dev/null 2>/dev/null
done

cp /var/lib/bacula/BackupCatalog.bsr /backup/offsite



Re: [Bacula-users] VirtualFull

2012-07-10 Thread James Harper
 I have a problem with VirtualFull and I don't know whether this behavior is by
 design or I messed up my configuration:
 
 I'm trying to run a Virtualfull backup but bacula is trying to read the last
 virtualfull (which is stored offsite) so it fails. I thought bacula would 
 construct
 a VirtualFull from the last Full-incr-diff. Since I haven't run a Full since 
 the last
 VirtualFull it seems to me bacula is trying to construct the Virtualfull from
 VirtualFull-incr-Diff. Is this the normal behavior?

I think that's the way it's supposed to work. VirtualFull is a way to 
consolidate previous Full+Inc backups into a single Virtual Full backup. I am 
doing much the same as you, but at the end of my virtual full backups I run a 
catalog backup to my offsite media (USB disk) then purge all the offsite jobs 
from the database. The offsite jobs are only for disaster recovery, so they 
don't need to be in the live catalog anyway - I would never restore from an 
offsite volume unless my backup server was completely destroyed.

I can post my script for doing this if you want.

James



Re: [Bacula-users] Windows 2008: Fatal error: VSS was not initialized properly

2012-05-31 Thread James Harper
 Hi,
 i installed bacula-win32-5.2.6 on Windows 2008 x64. When i try to start the
 backup, the server logs say: Fatal error: VSS was not initialized properly. 
 As
 far as i read, i have to add the user that the bacula service is running 
 under to
 the registry key (HKLM-System-currentControlSet-services-VSS-
 VssAccessControl).
 There is already one key defined (NT Authority\Network Service). The
 bacula-service runs as Lokales Systemkonto which might be local system
 or local system account. Whatever key i try to add there in the registry, 
 the
 error remains. My knowledge of Windows is quite limited.
 What would be the right key that i have to add there? Or should i change the
 account that bacula uses? Or should i disable VSS?
 Thanks for any help!
 

You need the 64 bit version - bacula-win64-5.2.6

James



Re: [Bacula-users] Exchange plugin always truncating logs (when it should not)

2012-05-04 Thread James Harper
 
 I've been using bacula to backup an exchange 2003 server for some time with
 success. You could say I'm an happy user :)
 
 Lately I noticed bacula started truncating logs on EVERY backup, also
 incrementals and differentials, when it should not.
 
 I'm not sure when this happened. I'm inclined to think it started when
 I upgraded to 5.2.6, but I don't have logs handy until then. I can try to
 recover them from some backup perhaps.
 
 The machine is a Windows 2003 R2 server, 32 bit. I used the installer
 from the project page on sourceforge to install bacula.
 
 If needed I can try installing a previous version on this client and see
 if this makes the problem disappear. Is such a scenario(older client)
 supported?
 

That's really strange. The code to do this is pretty straightforward... at job 
init it does this:

  switch (context->job_level) {
  case 'F':
     if (context->notrunconfull_option) {
        context->truncate_logs = false;
     } else {
        context->truncate_logs = true;
     }
     break;
  case 'D':
     context->truncate_logs = false;
     break;
  case 'I':
     context->truncate_logs = false;
     break;
  default:
     _DebugMessage(100, "Invalid job level %c\n", context->job_level);
     return bRC_Error;
  }
  break;

Then later it does this:

  if (context->truncate_logs) {
     _DebugMessage(100, "Calling HrESEBackupTruncateLogs\n");
     result = HrESEBackupTruncateLogs(hccx);
     if (result != 0) {
        _JobMessage(M_FATAL, "HrESEBackupTruncateLogs failed with error "
"0x%08x - %s\n", result, ESEErrorMessage(result));
     } else {
        _JobMessage(M_INFO, "Truncated database logs for Storage Group "
"%s\n", name);
     }
  } else {
     _JobMessage(M_INFO, "Did NOT truncate database logs for Storage Group "
"%s\n", name);
  }

And nothing else touches truncate_logs in between that I can see.

Could you please try doing:

set debug level=200 trace=1

for your fd, run a backup, and then email me the resulting trace file (in the 
fd working directory I think). I'm wondering if something has changed and the 
callback ordering is different or something.

James




[Bacula-users] cancel job if no usable media found

2012-05-01 Thread James Harper
I want Bacula to simply cancel the job if no usable media is found. If the
current media is not yet past its retention period then it means the operator 
hasn't put the disk in, and the job cannot run so it should cancel immediately. 
I thought I could use Max Start Delay or Max Wait Time = some low value but 
that doesn't work for the case where multiple jobs are queued as the second job 
waits for a bit then cancels.
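
For reference, this is the sort of thing I tried (a sketch only; the directive
values are placeholders, and as noted it falls apart once several jobs queue):

Job {
  Name = "nightly-backup"          # placeholder name
  # ... usual Type/Client/FileSet/Storage directives ...
  Max Start Delay = 5 minutes      # cancel if the job hasn't started in time
  Max Wait Time = 5 minutes        # cancel if it blocks waiting, e.g. on media
}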

Is there an easy way to do this?

Thanks

James



Re: [Bacula-users] Full backup forced if client changes

2012-03-24 Thread James Harper
 Bacula 5.0.2. For the following example job:
 
 snip
 
 more than one client is available to backup the (shared) storage. If I change
 the name of the client in the Job definition, a full backup always occurs the
 next time a job is run. How do I avoid this?
 

That's definitely going to confuse Bacula. As far as it is concerned you are 
backing up a separate client with separate storage.

You might be able to invent a new name for the client and put it in /etc/hosts 
and change /etc/hosts when you want to back up via a different host. I'm not 
sure how that works for the fd-sd connection though.

DNS with a very short TTL (eg measured in seconds) might work too, as long as 
you wait for a bit between changing the DNS entry and starting the backup.
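
A minimal sketch of the /etc/hosts idea, assuming the Client resource uses an
invented alias (here "shared-client", a made-up name) and that a script can
rewrite the hosts file before each run:

```shell
# set_client_alias FILE ALIAS IP - point ALIAS at IP in an /etc/hosts-style
# file, replacing any previous mapping for that alias
set_client_alias() {
  hosts_file="$1"; alias_name="$2"; ip="$3"
  grep -v "[[:space:]]$alias_name\$" "$hosts_file" > "$hosts_file.tmp" || true
  echo "$ip $alias_name" >> "$hosts_file.tmp"
  mv "$hosts_file.tmp" "$hosts_file"
}

# e.g. before backing up via host A:
#   set_client_alias /etc/hosts shared-client 192.168.0.5
```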

James




Re: [Bacula-users] exchange tapes with Backup-Exec

2012-03-09 Thread James Harper
 Hello
 
 I'm a new starter with Bacula. I need to read some tapes written by Bacula
 with Backup-Exec 2010. I tried to do this, but it seems that Backup-Exec
 can't read those tapes. On the Backup-Exec forums they told me that BE can
 read all tapes written in .mtf format. My question is: how can I make
 Bacula write tapes in mtf format? Is that possible?
 

I don't think Bacula understands mtf. How are your coding skills?

http://laytongraphics.com/mtf/MTF_100a.PDF

james




Re: [Bacula-users] Large backup to tape?

2012-03-08 Thread James Harper
 Thanks for the suggestions!
 
 We have a couple more questions that I hope have easy answers.  So, it's
 been strongly suggested by several folks now that we back up our 200TB of
 data in smaller chunks.  This is our structure:
 
 We have our 200TB in one directory.  From there we have about 10,000
 subdirectories that each have two files in them, ranging in size between
 50GB and 300GB (an estimate).  All of those 10,000 directories add up to
 about 200TB.  It will grow to 3 or so petabytes in size over the next few
 years.
 
 Does anyone have an idea of how to break that up logically within bacula,
 such that we could just do a bunch of smaller Full backups of smaller
 chunks of the data?  The data will never change, and will just be added to.
 As in, we will be adding more subdirectories with 2 files in them to the main
 directory, but will never delete or change any of the old data.
 
 Is there a way to tell bacula to back up all this, but do it in small 6TB
 chunks or something?  So we would avoid the massive 200TB single backup job +
 hundreds of (eventual) small incrementals?  Or some other idea?
 
 Thanks again for all the feedback!  Please reply-all to this email when
 replying.
 

I was in a similar situation... I had a directory that was only ever appended 
to. It was only around 10GB, but the backup was over a 512kbps link so the 
initial 10GB couldn't be reliably backed up in one hit during the after-hours 
window I had available. So what I did was create a fileset that included around 
5% of the files (eg aa*-an*), then progressively changed that fileset to 
include more and more files each backup. The important thing here is to use the 
IgnoreFileSetChanges = yes flag in the fileset so Bacula doesn't want to do a 
full backup every time the fileset changes. This is all using incremental 
backups. Once I had it all backed up it just backed up the overnight changes 
and everything was good.

My situation was different though in that I was doing a weekly virtual full to 
consolidate the backups into one volume, which is harder to do with 2PB of 
data, but if your challenge is getting the initial data backed up in less than 
a single 200TB chunk then you can do it by manipulating the fileset as long as 
you have IgnoreFileSetChanges = yes.

I don't think you would need accurate=yes to do the above either.
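
A sketch of that fileset, from memory (the path and pattern are placeholders
for your own data; the catch-all exclude Options block is the usual way to
turn the wilddir pattern into an include filter):

FileSet {
  Name = "BigArchiveSet"
  Ignore FileSet Changes = yes     # edits to the pattern won't force a Full
  Include {
    Options {
      signature = MD5
      wilddir = "/archive/[a-g]*"  # widen this range on each backup
    }
    Options {
      RegexDir = ".*"
      Exclude = yes                # drop anything the wilddir didn't match
    }
    File = /archive
  }
}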

James




Re: [Bacula-users] Start / restart the windows bacula client remotely

2012-02-27 Thread James Harper
 Hi folks.
 
 I'm rolling out my new Bacula network and I'm working on automating the
 config process.
 
 I already have a script on my Linux server using smbclient to pull the
 bacula-fd.conf from the windows client, extract the password, and then
 configure the client in bacula-dir.conf.
 
 Sometimes, when the clients are installed, it doesn't set up the local monitor
 entry, leaving the default values for the name and password. The Bacula
 client then fails to start.
 
 As part of my script above, I fix the bacula-fd.conf and push it back to the
 windows client.
 
 Does anyone know how once I've done this I can start the service on the
 Windows client from my Linux server?
 

winexe can do this (in that it can do pretty much anything) 
http://sourceforge.net/projects/winexe/

If you find a more elegant solution please do post it. With winexe you would 
execute something like "net start bacula-fd" to start the service, but getting 
success or failure from that operation makes it a bit messy.
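
Something along these lines (host and credential values are placeholders, and
as noted the exit status is about all you get back; I don't try to parse the
"net start" output because it's localized):

```shell
# start_remote_service HOST USER PASS SERVICE - start a Windows service
# on HOST via winexe, returning winexe's own exit status
start_remote_service() {
  host="$1"; user="$2"; pass="$3"; service="$4"
  # WINEXE can be overridden, e.g. for testing
  ${WINEXE:-winexe} "//$host" -U "$user%$pass" "net start \"$service\""
}

# e.g.: start_remote_service fileserver Administrator secret bacula-fd
```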

James




[Bacula-users] external usb disk formatted as NTFS

2012-02-20 Thread James Harper
When I buy pretty much any external USB disk it comes formatted as NTFS. 
Changing to ext3 or some other FS isn't a big deal, but has anyone done any 
performance testing against ntfs-3g?

Thanks

James



Re: [Bacula-users] Restore on windows with VSS stops file daemon service

2012-02-16 Thread James Harper
 
 I am trying to implement Bacula 5.2.5 at my company and so far, I managed
 to configure a server with several Linux and Windows clients and one AIX.
 
 I've done full backups without any errors on all clients.
 
 I just can't restore any files on Windows clients using VSS.
 
 When I start the restore job, the file daemon service stops.
 
 I tried running the same file sets without VSS and the restore job ran
 with success. It just doesn't work with VSS.
 
 I also ran the restore with the client in debug mode.
 

What debug level?

James



Re: [Bacula-users] Needing some advices, tape drive over iscsi

2012-01-10 Thread James Harper
 
 My bacula server is on a virtual machine, I pass my Tape Drive via iscsi to my
 bacula server. This tape drive is also shared sometimes with my older backup
 server in order to restore old backups.

Can you pass the tape drive through to the VM directly? Any enterprise 
virtualisation platform should allow you to do this.

Xen supports scsi passthrough and also pci passthrough. I use the former for 
running Backup Exec test restores in a Windows VM, and I've never used the 
latter but it should work fine.

James



Re: [Bacula-users] Windows client error

2012-01-02 Thread James Harper
 : line 35, col 24 of file C:\ProgramData\Bacula/bacula-fd.conf
 
   Name = @monitor_name@
 

Post the contents of the c:\programdata\bacula\bacula-fd.conf file, but I 
suspect you just need to change the @monitor_name@ to the same as the director 
name earlier in the config file but with an extension of -mon or something. Or 
just remove the monitor section entirely which is what I do (I find the tray 
monitor causes crashes).

James



[Bacula-users] purge/mark unavailable all volumes on a given storage volume

2011-12-27 Thread James Harper
I backup to permanently attached USB disk (3 weeks retention of weekly full + 
3x daily differentials) then to offsite USB disk (virtual full), and one of the 
permanently attached disks has just failed. Is there a shortcut to tell Bacula 
to purge all volumes on that disk (they aren't coming back), prompting it to 
just do a full backup next time the jobs run?

My fallback is to just create a shell script based on a query result but maybe 
there's another trick someone knows?
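
The shell-script fallback would be something like this (the pool name is a
placeholder, it assumes all the dead disk's volumes live in one pool, and the
awk depends on bconsole's table layout, so treat it as a sketch):

```shell
# purge_pool_volumes POOL - purge every volume in POOL via bconsole
purge_pool_volumes() {
  pool="$1"
  # pull the VolumeName column out of bconsole's table output,
  # then purge each volume in turn
  echo "list volumes pool=$pool" | ${BCONSOLE:-bconsole} \
    | awk -F'|' '/\|/ && $3 !~ /VolumeName/ { gsub(/ /, "", $3); if ($3) print $3 }' \
    | while read -r vol; do
        echo "purge volume=$vol yes" | ${BCONSOLE:-bconsole}
      done
}
```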

Thanks

James



Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-06 Thread James Harper
 The closest thing to a custom eject tape builtin command in bconsole I
 could come up with is this:
 
 # admin job to manually eject the tape from within bconsole
 Job {
  Name = TapeEject
  Type = Admin
  FileSet = LinuxDefaultSet
  Client = serverlinux-fd
  Storage = Tape
  Pool = Tape
  Messages = Standard
  RunBeforeJob = "echo 'umount storage=Tape' | bconsole"
  RunAfterJob = "mt-st -f /dev/nst0 offl"
 }
 
 This works on my setup, where I have a single tape drive (LTO-1, for the
 record). No autochanger.
 

It also supposes that the tape drive is local to the director, and not
connected to another machine.

james



Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-05 Thread James Harper
 Regarding the disaster recovery, I have a suggestion for the bacula team:
 
 Why not make the director write the bacula config files and any relevant
 bsr files at the beginning of each tape?
 The space wasted on the tape to save these files would be very small.
 

A script to email the bsr file to a gmail/Hotmail/whatever account would
suffice. It's not like the file contains any sensitive information.

James





Re: [Bacula-users] Data spooling for migration jobs?

2011-11-30 Thread James Harper
 
 I'm migrating my tapes from an older slow drive to a newer one.
 I noticed many stops of the new drive during the migration.
 
 I've not set a Spool Data directive in my migration job configuration.
 
 Is it possible to have a Spool Data directive for migration jobs?
 
 I'll check once the running job has finished, which will take a while,
 but it would be fine if anybody can give me an answer in the meantime.
 

I don't think you can do that. You could migrate to fast disk first then
to the fast tape. Not as efficient as spooling but if you have lots of
jobs it wouldn't be that bad.

James



Re: [Bacula-users] problem install bacula-win32-5.2.1.exe on win 7 Ultimate

2011-11-14 Thread James Harper
 
 Hi folks:
 My OS is win 7 Ultimate, Simplified Chinese, 32 bit.
 When I double-click bacula-win32-5.2.1.exe, I can see that
 bacula-win32-5.2.1.exe is running in Task Manager,
 but no GUI is shown on the desktop.
 Any hints are appreciated.
 Thank you!
 

Is UAC on? It shouldn't make a difference, but try right-clicking and "Run
as administrator".

James



Re: [Bacula-users] VSS Microsoft Exchange 2010 SP1 - logs not flushed?

2011-11-09 Thread James Harper
 
 Being pretty new to bacula I'd first like to apologize for probably
 stupid questions. But I couldn't find proper answers yet:
 
 * The Exchange Plugin does not support 2010 yet, correct? Is it going to
 be developed further?
 
 * I assume I should be able to backup Exchange 2010 SP1 using VSS. I do
 indeed get a backup, but the transaction logs aren't flushed, though I set
 the backup-level to full. What am I missing here?
 

The existing bacula Exchange plugin uses the esebcli2.dll api, which is
not supported under Exchange 2010. Exchange 2010 can only use VSS to do
backups, and while Bacula supports VSS in the bacula.org edition, it
doesn't involve the writers in the backup so you effectively only get a
'crash consistent' backup, rather than all the bells and whistles of the
VSS writers like transaction log flushing etc.

So you either need to do the backup another way to a file, or use
circular logging, making sure you understand what circular logging takes
away from you in terms of exposure to crashes etc.

James




[Bacula-users] windows 5.2.1 fd performance

2011-11-09 Thread James Harper
Is anyone else getting poor performance from the 5.2.1 Windows FD? I
just upgraded a machine from 2.2.8 to 5.2.1 and the performance is
awful. A backup that should take less than an hour is still running
after 10 hours and is then cancelled due to exceeding the allowed
running time.

Alternatively, is anyone else getting good performance? I've only
upgraded the one server so far so I haven't determined if it's just this
install or a general problem.

Thanks

James



Re: [Bacula-users] VSS reporting files corrupted or unreadable

2011-11-07 Thread James Harper
 
 We have been running Bacula on Windows with VSS disabled, but would like
 to turn it on for disaster recovery purposes.  In our tests on a fully
 patched version of Windows Server 2003, we are running into a problem
 where files that can be backed up without VSS are being reported as
 corrupted or unreadable when VSS is enabled.  I posted a sample log to
 http://pastebin.com/NRYgmRQy
 
 This does appear to be a Windows VSS problem rather than an issue with
 the Bacula client, as the system logs report an ntfs error when it
 occurs. A chkdsk did not help.
 

Are there any other messages in the event logs about the vss snapshot
process?

What you describe would indicate that the problem is Windows rather than
Bacula. Can you try the following suggestions, in no particular order:

While the backup is running, does the command 'vssadmin list writers'
show any writers with errors? The way Bacula uses VSS is fairly
simplistic and doesn't involve the writers, but it does give a 'crash
consistent' copy of the drive. A writer in an error status would be an
indication of a problem though.

Is the MySQL database very heavily used? VSS likes to try and find a period of
'idle time' to do its work. Whatever happens, the outcome should never
be a corrupt snapshot but maybe you've discovered a bug. Is it possible
to make MySQL idle (or stop it altogether but ideally you'd test with
the files still in use) and see if the problem persists?

Do a chkdsk /f on the drive where the database is. Probably best to do
it on reboot rather than force a dismount of the drive. Obviously the
volume is working well enough but there could be some latent corruption
or something that only comes out in the snapshot.

Create a snapshot manually. This guy blogs about how to do it
http://blogs.msdn.com/b/adioltean/archive/2005/01/20/357836.aspx and map
it to a drive letter. The vshadow tool that he talks about is part of
the VSS SDK... newer versions are available but I assume this version
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23490
might do the trick on 2003. Once you've created the snapshot, see if
you can access the files just by copying them to somewhere else. At
least then you'll know if you have a general VSS problem or if it is
specific to Bacula.

Good luck!

James



Re: [Bacula-users] performance over WAN links

2011-10-17 Thread James Harper
Disregard. It's flying along now at the expected speeds. I blame
sunspots.

James

 -Original Message-
 From: James Harper [mailto:james.har...@bendigoit.com.au]
 Sent: Monday, 17 October 2011 4:14 PM
 To: bacula-users@lists.sourceforge.net
 Subject: [Bacula-users] performance over WAN links
 
 I'm revisiting a remote backup, and am troubled by the fact that Bacula
 appears to be making use of only a fraction of the available bandwidth.
 
 iperf tells me there is around 750KBits/second of usable TCP bandwidth
 in the fd-sd direction, but Bacula only reports a Bytes/sec rate of
 30Kbytes/second which is quite removed from the ~70Kbytes/second I'd
 expect.
 
 A tcpdump of a 30 second snapshot of the traffic shows that it isn't
 buffer overhead - there really were only around 30Kbytes/second of data
 transmitted.
 
 Any hints on how to speed this up a bit?
 
 Thanks
 
 James
 




[Bacula-users] performance over WAN links

2011-10-16 Thread James Harper
I'm revisiting a remote backup, and am troubled by the fact that Bacula
appears to be making use of only a fraction of the available bandwidth.

iperf tells me there is around 750KBits/second of usable TCP bandwidth
in the fd-sd direction, but Bacula only reports a Bytes/sec rate of
30Kbytes/second which is quite removed from the ~70Kbytes/second I'd
expect.

A tcpdump of a 30 second snapshot of the traffic shows that it isn't
buffer overhead - there really were only around 30Kbytes/second of data
transmitted.
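
For the record, my ~70Kbytes/second figure is conservative; even after header
overhead the rough ceiling on a 750kbit/s link (assuming a 1500-byte MTU and
a 1448-byte MSS) works out higher:

```shell
# rough payload ceiling: line rate in bytes/sec, scaled by payload/frame ratio
link_kbit=750
awk -v k="$link_kbit" 'BEGIN { printf "%.0f\n", (k * 1000 / 8) * 1448 / 1500 }'
# prints 90500, i.e. ~90Kbytes/second before any application-level overhead
```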

Any hints on how to speed this up a bit?

Thanks

James



Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time

2011-10-12 Thread James Harper
 
 In an effort to work around the fact that bacula kills long-running jobs,
 I'm about to partition my backups into smaller sets. For example, instead
 of backing up:
 
   /home
 
 I would like to backup the content of /home as separate jobs. For example:
 
   /home/[0-9]*
   /home/[A-G]*
   /home/[H-M]*
   /home/[N-Q]*
   /home/[R-U]*
   /home/[V-Z]*
   /home/[a-g]*
   /home/[h-m]*
   /home/[n-q]*
   /home/[r-u]*
   /home/[v-z]*
 
 I'm looking for advice on how to prevent multiple jobs of different names,
 that access the same client, from running simultaneously. For example, to
 prevent an incremental of job home0-9 running at the same time as a full
 of job homeA-G.
 
 The only method I can think of is to use a dynamic fileset in the director
 to generate the different filesets, so that there's only a single named
 job that backs up a different set of files on each full backup. This way
 the Allow Duplicate Jobs setting can be effective.
 

Does Bacula really kill long running jobs? Or are you seeing the effect
of something at layer 3 or below (eg TCP connections timing out in
firewalls)?

I think your dynamic fileset idea would break Bacula's 'Accurate Backup'
code. If you are not using Accurate then it might work but it still
seems like a lot of trouble to go to to solve this problem.

If you limited the maximum jobs on the FD it would only run one at once,
but if the link was broken it might fail all the jobs.

Another option would be a Run After script to start the next job. Only the
first job would be scheduled, and it would run the next job in turn.
Then they would all just run in series. You could even take it a step
further and have the Run After script retry the same job if it failed
due to a connection problem, and give up after so many retries. Maybe it
could even start pinging the FD to see if it was reachable (if backing up
over an unreliable link is the problem you are trying to solve).
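
A sketch of that Run After chain (the job name, retry count and delay are
placeholders, and it assumes bconsole is on the PATH of whatever user runs
the script):

```shell
# run_next_job JOB [TRIES] - ask the director to run JOB, retrying on failure
run_next_job() {
  next_job="$1"; tries="${2:-3}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # submit the job; a dropped link shows up as a nonzero exit status
    if echo "run job=$next_job yes" | ${BCONSOLE:-bconsole} >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "${RETRY_DELAY:-60}"   # wait before retrying the link
  done
  return 1
}

# hypothetical wiring: RunAfterJob = "/usr/local/bin/run_next_job home-hm"
```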

James



[Bacula-users] multiple spool files per job

2011-10-08 Thread James Harper
Is there a way to make bacula write multiple spool files per job? Two
would do. What I'm seeing is that 4 jobs start, all hit their spool
limit around the same time, then all wait in a queue until the file is
despooled. The despool happens fairly quickly (much quicker than the
spooling due to network and server fd throughput) so it isn't a huge
problem, but it would be better if the sd could just switch over to
another spool file when despooling starts so that the backup can
continue uninterrupted.

I'm spooling to internal RAID, then despooling to external USB. While
spooling isn't really advised when the backup target is a disk, doing it
this way means I can run multiple jobs at once without causing
interleaving in the backup file (single sd volume) or severe filesystem
fragmentation (if one sd volume per job). Internal RAID writes at
~100MB/second while the USB disk writes at ~30MB/second so it turns out
to be a pretty effective way to do what I want except that despooling is
causing a bottleneck.
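
For context, the relevant part of my bacula-sd.conf looks roughly like this
(paths and sizes are placeholders); as far as I can tell there is one spool
file per job, with no way to roll over to a second one while the first
despools:

Device {
  Name = "USBStorage"                  # placeholder
  Media Type = File
  Archive Device = /mnt/usb-backup     # slow external USB target (~30MB/s)
  Spool Directory = /var/spool/bacula  # fast internal RAID (~100MB/s)
  Maximum Spool Size = 200G            # total across all jobs
  Maximum Job Spool Size = 50G         # per-job limit that triggers a despool
}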

Any suggestions?

Thanks

James





Re: [Bacula-users] error backing up Exchange 2003

2011-10-05 Thread James Harper
  What error is Bacula giving? What errors are in the event logs?
 
  One thing - if Exchange is set to 'circular logging' then you will
  have problems backing up.
 Not really. All circular logging means is that the old database log files
 will be deleted before the next backup. In normal operation, these old log
 files are never used anyway.

I should have qualified this with "the Bacula Exchange plugin assumes
you have circular logging disabled and won't work if circular logging is
enabled".

 
 The only reason somebody might want to keep them is to allow restoring the
 Exchange database to one particular point in time. Instead of the
 traditional "restore the last full backup, and then restore the
 incrementals" you would restore the full backup plus all the log files
 since then. Then you replay the log files until the exact point in time
 you want.
 
 In practical terms, most people never needed this capability, so circular
 logging actually is perfectly fine.
 

You are kidding, right? Turning on circular logging limits your restore
options immensely. I've had numerous situations where a server has
failed (usually a power surge that has blown the UPS and the server)
leaving the Exchange database corrupt. With all the logs since the last
backup in place (because circular logging is disabled) you simply
restore the database. Exchange then starts up, notices all the logs are
still there and brings everything up to date. It's a no-loss recovery.
No mucking around with eseutil, no fretting over the weekend's worth of
email you might have missed; it's all just there.

The 'restore to a point in time' was a useful thing in the past but it's
not the reason you would want circular logging disabled - retention
policies take care of recovering accidental deletions etc these days.

James



Re: [Bacula-users] error backing up Exchange 2003

2011-10-05 Thread James Harper
  In Exchange 2010, circular logging is the default.
 
 
 
 Circular logging is NOT the default in Exchange 2010
 

I think that's not true for Small Business Server 2011 (which includes
Exchange 2010). I'm pretty sure that by default it is on for SBS 2008
too (Exchange 2007). This means you can take your regular Windows
Backup snapshot backups without worrying about the disk filling up due
to excessive logfiles, and I guess it is considered good enough for SBS
users...

James



Re: [Bacula-users] error backing up Exchange 2003

2011-10-04 Thread James Harper
 I have this error backing up Exchange on a Windows 2008 Standard 64
bit
 system.
 HrESEBackupSetup seems to be a Windows call. Bacula reports the
error
 and yet seems to back up the proper amount of data though I cannot
access
 it since it is flagged as failed. Microsoft has an article about this
call which ties
 into the ntbackup. Article ID: 820272.
 
 The article is for Windows 2003, but it basically says that if the
 Exchange data and the OS are on the same partition you will have
 problems. They
 recommend backing up the main server first and then the exchange data.
I
 am currently doing this and I will see what the results are.

Best practice says you shouldn't have them on the same partition, but it
shouldn't break anything if you do.

What error is Bacula giving? What errors are in the event logs?

One thing - if Exchange is set to 'circular logging' then you will have
problems backing up.

James



Re: [Bacula-users] error restoring Exchange 2003

2011-09-16 Thread James Harper
 Maybe some of you have a good advice to give me...
 
 For quite a while we've backed up multiple Exchange servers with
different
 Bacula 5.x servers and although we have just managed to restore
Exchange
 2007, we seem to be out of luck with Exchange 2003.
 
 While trying to restore directly into Exchange, the restore job
freezes at
 restoring .log files.
 

I see from another post that this is a restore of a full backup. That
should work without any problems.

Does the windows event log contain anything useful?

Does it hang on the last logfile or somewhere in the middle? Always the
same logfile?

If it hangs at a particular logfile, you might be able to deselect that
logfile and the subsequent ones and restore. If that works, you could
then try restoring only the logfiles you skipped (that restore will
fail, but it might still get the files onto disk, and re-running the
first restore should then get you going again).

If it hangs on the last logfile then it's probably the actual restore
process that is hanging. That can take quite a while depending on the
database size. How big is the database? How long have you let it sit
for? While it is 'hung', run perfmon and see if there is lots of disk
activity.

James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-16 Thread James Harper
 
 I will try that. I did note the plugin wasn't loading, and I fixed
 that.
 Now I am getting this error:
 
 JobId 598: Fatal error: DatabaseBackupInfo file must exist and must be
first
 in directory
 

That indicates that the file sequence is incorrect or the
DatabaseBackupInfo file is missing. Is such a file selected in the
selection list?

James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-16 Thread James Harper
Is it possible to restore a catalog that contains that fileset? Maybe
there is a bug with restoring plugin data without a full copy of the
catalog.

James

 -Original Message-
 From: Duncan McQueen [mailto:dmcqu...@vomer.com]
 Sent: Wednesday, 17 August 2011 00:12
 To: James Harper
 Cc: Bacula-users@lists.sourceforge.net
 Subject: RE: [Bacula-users] Exchange Plugin Restore
 
 It won't let me select the files since it says:
 
 
 For one or more of the JobIds selected, no files were found,
 so file selection is not possible.
 Most likely your retention policy pruned the files.
 
 
 
 From: James Harper [james.har...@bendigoit.com.au]
 Sent: Monday, August 15, 2011 6:11 PM
 To: Duncan McQueen
 Cc: Bacula-users@lists.sourceforge.net
 Subject: RE: [Bacula-users] Exchange Plugin Restore
 
 
  One slight change I am doing is using regex to filter out only the
 exchange
  portions.  I am passing in EXCHANGE to the Regexp question.
 
 
 Can you try just marking the /@EXCHANGE root directory?
 
 James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-15 Thread James Harper
 James,
 
 Are there any tips or techniques on restoring using the Exchange
plugin?  I
 assume that one issue could be this is a new clean server we are
restoring to,
 and the plugin might be erroring out.  Should I do a debug trace (and
what is
 the best way to do that keeping the service running as a Local system
 account)?
 

Can you first try restoring only from the full job? I have had trouble
restoring from full+incremental jobs before, but if the full job by
itself doesn't work then you have a different problem.

James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-15 Thread James Harper
 
 One slight change I am doing is using regex to filter out only the
exchange
 portions.  I am passing in EXCHANGE to the Regexp question.
 

Can you try just marking the /@EXCHANGE root directory?
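In bconsole's restore file-selection mode, that would look something
like this (a sketch; the path is whatever the plugin reports on your
system):

```
cd /@EXCHANGE
mark *
done
```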

James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-11 Thread James Harper
 
 The root job says All File records purged.  I know a full restore
brought
 back all files, except for the Exchange ones.
 
 We are restoring to a newly rebuilt machine so is there anything we
should be
 concerned about (settings, etc)?

Do you mean the job was purged and then rescanned?

James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-10 Thread James Harper
That just looks like an incremental backup... is that the case?

James

 -Original Message-
 From: Duncan McQueen [mailto:dmcqu...@vomer.com]
 Sent: Thursday, 11 August 2011 07:02
 To: James Harper; Bacula-users@lists.sourceforge.net
 Subject: RE: [Bacula-users] Exchange Plugin Restore
 
 Looking back at it - looks like backup didn't occur:
 
 See attached
 
 
 From: James Harper [james.har...@bendigoit.com.au]
 Sent: Tuesday, August 09, 2011 6:37 PM
 To: Duncan McQueen; Bacula-users@lists.sourceforge.net
 Subject: RE: [Bacula-users] Exchange Plugin Restore
 
 
  Has anyone gotten these errors:
 
  2011-08-09 14:06:02 svr-fd JobId 575: Error:
  /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
  /@EXCHANGE/Microsoft Information Store/First Storage Group/Mailbox
 Store
  (VOMMAIN-SVR)/C:: ERR=Invalid argument
 
  2011-08-09 14:21:26 svr-fd JobId 575: Error:
  /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
  /@EXCHANGE/Microsoft Information Store/First Storage Group/Public
 Folder Store
  (VOMMAIN-SVR)/C:: ERR=Invalid argument
 
  2011-08-09 14:26:07 svr-fd JobId 575: Error:
  /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
  /@EXCHANGE/Microsoft Information Store/First Storage Group/C::
 ERR=Invalid
  argument
 
  When trying to restore Exchange from the Plugin?  This is Exchange
 2003.  Is
  there a way around these, or is the backup effectively broken?
 
 
 There is a problem with restoring from incremental backups, but your
 problem looks different and I haven't seen it before.  Can you send me
 the output of 'list files' from the backup job?
 
 James



Re: [Bacula-users] Exchange Plugin Restore

2011-08-09 Thread James Harper
 
 Has anyone gotten these errors:
 
 2011-08-09 14:06:02 svr-fd JobId 575: Error:
 /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
 /@EXCHANGE/Microsoft Information Store/First Storage Group/Mailbox
Store
 (VOMMAIN-SVR)/C:: ERR=Invalid argument
 
 2011-08-09 14:21:26 svr-fd JobId 575: Error:
 /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
 /@EXCHANGE/Microsoft Information Store/First Storage Group/Public
Folder Store
 (VOMMAIN-SVR)/C:: ERR=Invalid argument
 
 2011-08-09 14:26:07 svr-fd JobId 575: Error:
 /tmp/bacula/bacula/src/findlib/mkpath.c:49 Cannot create directory
 /@EXCHANGE/Microsoft Information Store/First Storage Group/C::
ERR=Invalid
 argument
 
 When trying to restore Exchange from the Plugin?  This is Exchange
2003.  Is
 there a way around these, or is the backup effectively broken?
 

There is a problem with restoring from incremental backups, but your
problem looks different and I haven't seen it before.  Can you send me
the output of 'list files' from the backup job?

James



Re: [Bacula-users] bacula web 5.1.0 alpha

2011-08-02 Thread James Harper
 
 Hi James,
 
 no problem.
 
 Just a question, why you don't try the latest version (5.1.0 alpha) ?
 
 This version includes new features and is less buggy.
 

I hadn't gotten that far yet. Does the 5.1.0 bacula-web work with 5.0.2
bacula?

James



[Bacula-users] bacula web download forbidden

2011-07-31 Thread James Harper
Following the links from the bacula.org web pages, I get to
http://bacula-web.dflc.ch/index.php/download/articles/bacula-web-503.html
but the download link on that page is 'Forbidden'

James



Re: [Bacula-users] bacula web download forbidden

2011-07-31 Thread James Harper
 
 Following the links from the bacula.org web pages, I get to
 http://bacula-web.dflc.ch/index.php/download/articles/bacula-web-503.html
 but the download link on that page is 'Forbidden'
 

Ignore me. I was doing the download from the wrong machine - it was
behind a proxy server :(

James



Re: [Bacula-users] Performance

2011-07-26 Thread James Harper
  Disable software compression. The tape drive will compress much
faster
  than the client.
 
  If you can find compressible patterns in the encrypted data stream
then
  you are not properly encrypting it. The only option would be to
compress
  before encryption which means you can't use the compression function
in
  the tape drive unless the tape drive also does the encryption (some
do).
 
  Use a lower GZIP compression level to see if it gets you better
speed
  without sacrificing too much performance... I suspect the speed hit
is
  going to be the encryption though.

 I was under the impression that _all_ LTO4 drives implemented
encryption
 (though if having the data traversing the LAN encrypted is your goal,
 you'd still have to do something).  I don't know enough about it to
know
 how good the encryption in LTO4 is, however (or for that matter, how
the
 key is specified).
 

I'm pretty sure that LTO4 drives are required to identify an encrypted
tape if one is inserted, but the actual support for encryption is
optional. I think they use AES encryption or some variant of it.

James



Re: [Bacula-users] Is Anyone Backing up iPhone and iPad using Bacula?

2011-07-26 Thread James Harper
 
 Is there a simple method to back these devices up?
 

If they sync to a PC then just backing up the backup on the PC would be
sufficient. If you are talking about backing them up over wireless or
something then that would be a pretty big drain on the battery and the
network...

James



Re: [Bacula-users] Performance

2011-07-25 Thread James Harper
 2011/7/25 Rickifer Barros rickiferbar...@gmail.com:
  Hello Guys...
 
  This weekend I did a backup with a size of 41.92 GB that took 1 hour
and 24
  minutes with a rate of 8.27 MB/s.
 
  My Bacula Server is installed in a IBM server connected in a Tape
Drive LTO4
  (120 MB/s) via SAS connection (3 Gb/s).
 
  I'm using Encryption and Compression Gzip6.
 
 Disable software compression. The tape drive will compress much faster
 than the client.
 

If you can find compressible patterns in the encrypted data stream then
you are not properly encrypting it. The only option would be to compress
before encryption which means you can't use the compression function in
the tape drive unless the tape drive also does the encryption (some do).

Use a lower GZIP compression level to see if it gets you better speed
without sacrificing too much compression... I suspect the speed hit is
going to be the encryption though.
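If you want to experiment with the level, it is set in the FileSet's
Options block; a sketch (the resource names here are hypothetical):

```
# bacula-dir.conf fragment -- hypothetical FileSet
FileSet {
  Name = "FullSet"
  Include {
    Options {
      signature = MD5
      compression = GZIP1   # GZIP1 = fastest, GZIP9 = smallest; plain GZIP means GZIP6
    }
    File = /home
  }
}
```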

James




Re: [Bacula-users] Virtual diff instead of full

2011-07-12 Thread James Harper
 Hi!
 
 I have problem with Virtual full on one set of backups where it does
not give
 me a virtual full but a virtual diff instead.
 
 The last fullbackup was about 125 GB and an new estimate says that a
new full
 would be about 200 GB.
 
 When I ask it to produce a new VirtualFull it starts reading from the
 last Diff and gives me a virtual full file of about 50 GB, which is
 about the size of my diffs.
 
 Does anyone have any pointers? They would be greatly appreciated.
 

If you want to get your hands dirty and like sifting through logfiles
you can turn on mysql logging (assuming you are using mysql) and have a
look at what queries are used to determine the volumes that make up the
virtualfull.
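
As a sketch, on MySQL 5.1 or later the general query log can be toggled
at runtime (the log path is an example; remember to turn it off again,
since it logs every statement):

```
-- in the mysql client, as a privileged user
SET GLOBAL general_log_file = '/var/log/mysql/query.log';
SET GLOBAL general_log = 'ON';
-- run the VirtualFull job, inspect the log, then:
SET GLOBAL general_log = 'OFF';
```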

Regular bacula debug logging may help too.

James



Re: [Bacula-users] Catastrophic error. Cannot write overflow block to device LTO4

2011-07-10 Thread James Harper
 
 3000 OK label. VolBytes=1024 DVD=0 Volume=FA0016 Device=LTO4 (/dev/nst0)
 Requesting to mount LTO4 ...
 3905 Bizarre wait state 7
 Do not forget to mount the drive!!!
 2011-07-10 03SD-loki JobId 6: Wrote label to prelabeled Volume FA0016 on
 device LTO4 (/dev/nst0)
 2011-07-10 03SD-loki JobId 6: New volume FA0016 mounted on device LTO4
 (/dev/nst0) at 10-Jul-2011 03:51.
 2011-07-10 03SD-loki JobId 6: Fatal error: block.c:439 Attempt to write on
 read-only Volume. dev=LTO4 (/dev/nst0)
 2011-07-10 03SD-loki JobId 6: End of medium on Volume FA0016 Bytes=1,024
 Blocks=0 at 10-Jul-2011 03:51.

This probably isn't helpful, but why does Bacula think that the volume is 
read-only?

James



Re: [Bacula-users] Catastrophic error. Cannot write overflow block to device LTO4

2011-07-10 Thread James Harper
 
 no idea, if we can find out what triggered the original message. Without
 doing anything physical, I did an umount storage=LTO4 from bacula and then
 went and did a full btape rawfill without a single problem on the volume:
 
 *status
  Bacula status: file=0 block=1
  Device status: ONLINE IM_REP_EN file=0 block=1
 btape: btape.c:2133 Device status: 641. ERR=
 *rewind
 btape: btape.c:578 Rewound LTO4 (/dev/nst0)
 *rawfill
 btape: btape.c:2847 Begin writing raw blocks of 2097152 bytes.
 +++ (...)
 Write failed at block 384701. stat=-1 ERR=No space left on device
 btape: btape.c:410 Volume bytes=806.7 GB. Write rate = 106.1 MB/s
 btape: btape.c:608 Wrote 1 EOF to LTO4 (/dev/nst0)
 *
 
 zero problems at all.
 

Just had a quick look... the read-only message is this in stored/block.c:

   if (!dev->can_append()) {
      dev->dev_errno = EIO;
      Jmsg1(jcr, M_FATAL, 0, _("Attempt to write on read-only Volume. dev=%s\n"),
            dev->print_name());
      return false;
   }

And can_append() is:

int can_append() const { return state & ST_APPEND; }

so it does seem pretty basic unless there is a race somewhere in getting the 
value of 'state'.

Are there any kernel messages that might indicate a problem somewhere at that 
time?

James


Re: [Bacula-users] VSS for volumes without drive letter?!?

2011-06-15 Thread James Harper
 
 Dear all,
 
 I have problems with the following fileset:
 File = C:/
 File = D:/
 File = D:/windvsw1/DATEV/DATEN
 
 
 The latter is a volume without a drive letter.
 How can I tell bacula that this is a volume for which VSS should be
used?
 I get the expected cannont backup because file is opened error on
all
 the database files on that volume.
 

I think the best you could do at this stage is to add a drive letter to
the volume (in addition to the mount point you already have) and back
that up instead of the mount point.
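
A letter can be assigned without disturbing the existing mount point,
e.g. with diskpart (the volume number and letter below are
hypothetical; use 'list volume' to find the right one):

```
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> assign letter=E
```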

James



Re: [Bacula-users] LTO2 get filled at 136GB or 85GB

2011-06-10 Thread James Harper
 hi,
 
 i am running Bacula 5.0.2 on Debian Lenny amd64 with a Certance LTO2 streamer.
 
 since a while my backupjobs does not get finished, as bacula says
 
 
 10-Jun 06:54 lx01-sd JobId 2422: Despooling elapsed time = 00:00:51, Transfer
 rate = 22.89 M Bytes/second
 10-Jun 06:56 lx01-sd JobId 2422: Writing spooled data to Volume. Despooling
 1,167,755,347 bytes ...
 10-Jun 06:57 lx01-sd JobId 2422: Despooling elapsed time = 00:00:48, Transfer
 rate = 24.32 M Bytes/second
 10-Jun 06:58 lx01-sd JobId 2422: Writing spooled data to Volume. Despooling
 1,167,755,351 bytes ...
 10-Jun 06:59 lx01-sd JobId 2422: End of Volume LX03DO01 at 159:2982 on
 device Certlto2 (/dev/nst0). Write of 64512 bytes got -1.
 10-Jun 06:59 lx01-sd JobId 2422: Re-read of last block succeeded.
 10-Jun 06:59 lx01-sd JobId 2422: End of medium on Volume LX03DO01
 Bytes=158,745,581,568 Blocks=2,460,713 at 10-Jun-2011 06:59.
 
 LTO2 should be able to store 200GB on a media?
 

158GB is probably a bit low. 85GB is definitely a problem. Is this something 
that has been getting worse over time? How old are the tapes?

Does your tape drive vendor provide any drive assessment tools? Eg HP provide 
Library  Tape Tools.

LTO is pretty smart and the heads are organised so that immediately after the 
media passes under the write head it passes under the read head and the data 
just written is read. If there is a problem reading it the data is rewritten. 
Obviously every time a block has to be rewritten the effective capacity of the 
tape is reduced. The drive assessment tools should tell you how much this is 
happening (margin, I think) and otherwise report the condition of the drive and 
the tapes.

Some of our tapes are down to about 175GB capacity. Curiously, it is the set of 
five tapes used on Fridays that suffer from this problem, even though they are 
used about five times less often than the other tapes.

James


Re: [Bacula-users] bacula IPv6 status (unofficially)

2011-06-10 Thread James Harper
 Hi,
 
 just a short note to say that I've been testing Bacula's IPv6 support
of
 late and have generally found it to be good.
 

Not really directly bacula related, but one of the concerns I have with
switching to IPv6 for LAN scale traffic is the performance of the
various offload features in the network adapters. Did you do any
throughput testing?

James




Re: [Bacula-users] Restore to Windows Client

2011-06-08 Thread James Harper


 -Original Message-
 From: Christian Tardif [mailto:christian.tar...@servinfo.ca]
 Sent: Thursday, 9 June 2011 10:17
 Cc: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Restore to Windows Client
 
 On 08/06/2011 00:57, James Harper wrote:
 
 
   If the files were backed up from C:\dir1\dir2\dir3, and you tell
bacula
   to restore to C:\tmp\bacula-restores, it will restore to
   C:\tmp\bacula-restores\c\dir1\dir2\dir3. You can use a
regexwhere to
   remove the C:\dir1\dir2\dir3 prefix and replace it with a
   C:\tmp\bacula-restores prefix.
 
   That said, the files should have been restored somewhere if
Bacula says
   it was successful...
 
 
 
 OK, so if I understand correctly (since my files are backed up from a
 Linux box under /mnt/rsync/blahblah/thisfile.txt), restoring this file
 to my Windows box should make it available under
 c:\tmp\bacula-restores\mnt\rsync\blahblah\thisfile.txt. But since this
 is a unix path (so these are slashes, not backslashes), would it try
 to restore to c:\tmp\bacula-restores\mnt/rsync/blahblah/thisfile.txt,
 which wouldn't work, since / in a filename isn't valid? Or does the
 Windows client do the translation itself?
 
 I will try right away to create a regex to manually translate / to \,
 and see how it works, in case this is the problem.
 
 The file hasn't been restored at all, to answer the question. I've
 searched for a file ending with thisfile.txt (for example), and it
 just wasn't there at all.
 

You should always use /'s under Bacula. Bacula will take care of
substituting them appropriately under Windows.

James



Re: [Bacula-users] Restore to Windows Client

2011-06-07 Thread James Harper
 
 Let's say I would want to restore to c:\tmp\bacula-restores. How
should
 the Where clause be typed?
 

If the files were backed up from C:\dir1\dir2\dir3, and you tell bacula
to restore to C:\tmp\bacula-restores, it will restore to
C:\tmp\bacula-restores\c\dir1\dir2\dir3. You can use a regexwhere to
remove the C:\dir1\dir2\dir3 prefix and replace it with a
C:\tmp\bacula-restores prefix.
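
As an illustration only (this just mimics the substitution that a
regexwhere expression performs; the paths are hypothetical), the prefix
rewrite is an anchored regex replace:

```python
import re

def rewrite_prefix(path, old_prefix, new_prefix):
    # Anchor at the start so only the leading backup prefix is replaced,
    # not an occurrence of the same string deeper in the path.
    return re.sub('^' + re.escape(old_prefix), new_prefix, path)

# A file backed up from a Linux client, restored under a Windows target.
restored = rewrite_prefix('/mnt/rsync/blahblah/thisfile.txt',
                          '/mnt/rsync',
                          'C:/tmp/bacula-restores/mnt/rsync')
print(restored)  # C:/tmp/bacula-restores/mnt/rsync/blahblah/thisfile.txt
```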

That said, the files should have been restored somewhere if Bacula says
it was successful...

James



Re: [Bacula-users] MySQL versus Postgres

2011-06-06 Thread James Harper
 
 A decent set of RAID0 SSD's for spooling/despooling data and will be
perfect!.
 

Are there any performance implications for SSD's when they get
fragmented?

James



Re: [Bacula-users] Clarification on Exchange plugin

2011-05-31 Thread James Harper
 The error I am getting is the following:
 
 bacula-dir Start Backup JobId 59,
Job=Backup_Exchange.2011-05-31_10.40.23_29
 
   Using Device FileStorage
 
 
 sbs-fd Warning: VSS was not initialized properly. VSS support is
disabled.
 ERR=An attempt was made to reference a token that does not exist.
 
 Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223
Command
 plugin exchange:@EXCHANGE/Microsoft Information Store requested, but
is not
 loaded.
 bacula-sd Volume HERE previously written, moving to end of data.
Ready to
 append to end of Volume HERE size=12959845363
 bacula-dir
 

Questions I should have asked you earlier... this looks a lot like you
are trying to run a 32 bit fd on a 64 bit windows system with those VSS
failures. Is that the case? If so, you probably aren't using Exchange
2003, and I'm pretty sure that the plugin doesn't work on anything
newer.

James




Re: [Bacula-users] Clarification on Exchange plugin

2011-05-26 Thread James Harper
 
 c) Be very very careful with this. I believe that the plugin doesn't
work
 properly. Specifically, it will restore from:
 * A full backup
 * A full backup, plus one incremental.
 But then it may not restore from:
 * A full backup, plus two incrementals.
 And be less and less likely to work for each incremental that you add.
 And without attempting to restore, it will seem as if it is working.
 
 See bacula bug number 0001647, which has the status closed with the
 resolution won't fix:
 http://marc.info/?l=bacula-bugsm=129690630228142w=2
 
 You might be OK if you always did Full backups, but that defeats the
point.
 

I spent some time recently trying to fix this as I was able to reproduce
it. If you are careful it is possible to do the restore but you need to
know what you are doing.

Unfortunately I haven't had time to work on it since then :(

James



Re: [Bacula-users] Prevent new jobs from being scheduled while old jobs hasn't finished

2011-05-21 Thread James Harper
 
 hi everyone,
 
 i have a question about bacula job scheduling. I have a pretty simple
 schedule for my backup: do a full backup at the beginning at the month
 and then incremental ones every day:
 
 
 Schedule {
   Name = DefaultCycle
   Run = Incremental mon-sun at 23:05
   Run = Full 1st sun at 23:05
 }
 
 now, the problem with this schedule is that full backups often take
 several days to complete. during that period if (for various reasons)
 no previous full backup is available, new incremental backups get
 scheduled and automatically promoted to full backups since no full
 backup exists. is there a way to tell bacula not to schedule new jobs
 unless the previous job from the same schedule has finished?
 

You can add a setting to disallow duplicate jobs (see the manual) on a
per-job basis, although when the duplicate tries to run Bacula logs an
error, which can be a bit annoying. It works fine apart from that though.
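For what it's worth, a minimal sketch of the relevant Job directives
(everything except the directive names themselves is made up; check the
duplicate-job control section of the manual for the exact semantics):

  Job {
    Name = DefaultCycle-job
    ...
    Allow Duplicate Jobs = no
    Cancel Queued Duplicates = yes
  }

With that, a job that fires while its predecessor from the same schedule
is still queued or running should get cancelled instead of being
promoted to another Full.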

James




Re: [Bacula-users] Prevent new jobs from being scheduled while old jobs hasn't finished

2011-05-21 Thread James Harper
 Hi,
 
 you could use Priorities.
 
 Full Backup Priority = 1
 Incremental Priority = 2
 
 Priority 2 will always wait for Priority 1 Jobs to finish.
 
 Have a look at the Job Resource.
 

I think that still schedules the jobs concurrently though, so if the
Full hasn't finished when the Incremental starts, Bacula will run its
"find previous full job" query and not find the currently running Full
(because it hasn't finished).
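For reference, the priority suggestion above would presumably be
expressed as overrides on the Schedule's Run directives, something like
this (a sketch only; Priority is among the documented Run-directive
overrides, but I haven't tested whether it changes the outcome here):

  Schedule {
    Name = DefaultCycle
    Run = Level=Full Priority=1 1st sun at 23:05
    Run = Level=Incremental Priority=2 mon-sun at 23:05
  }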

James



Re: [Bacula-users] OneFS = no doesn't work

2011-05-18 Thread James Harper
 
 Working on setting up Bacula backup of a fileserver, I can't make OneFS = no
 work. The server is running OpenIndiana and has a few terabytes of storage.
 The home directories under /tos-data/home/${username} are each a ZFS
 filesystem/dataset. The configuration below looks good to me, but Bacula still
 complains "/tos-data/home/znw is a different filesystem. Will not
 descend from /tos-data/home into it."
 
 How can I make it descend automatically? We have ~30 users on this site, and
 it'll be far more flexible to just back up the lot than backing up each and
 every one of them.
 
 roy (see below for config)
 
 # Home directories
 FileSet {
   Name = verdande.nilu.no-home-fileset
   Include {
 Options {
   signature = MD5
   OneFS  = no
   FSType = zfs
 }
 Options {
   Exclude = yes
   WildFile = *.mp3
 }
 File = /tos-data/home
   }
 }
 

The docs for fstype say "The permitted filesystem-type names are: ext2, jfs, 
ntfs, proc, reiserfs, xfs, usbdevfs, sysfs, smbfs, iso9660". I don't see zfs in 
that list... maybe it has to be hardcoded into the source code?

Is there anything under /tos-data/home that isn't zfs? Would it be easier to 
exclude those manually?
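As a quick test, dropping the FSType line entirely (so nothing blocks
descent on an unrecognized filesystem type) might be the easiest thing
to try, sketched here against the original FileSet:

  FileSet {
    Name = verdande.nilu.no-home-fileset
    Include {
      Options {
        signature = MD5
        OneFS = no
      }
      Options {
        Exclude = yes
        WildFile = *.mp3
      }
      File = /tos-data/home
    }
  }

and then add explicit Exclude entries for any non-zfs mounts that turn
up under /tos-data/home.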

James



Re: [Bacula-users] backup to dvd not working

2011-05-14 Thread James Harper
 
 On Fri, May 13, 2011 at 12:26:20PM -0400, John Drescher wrote:
   I was trying to set up bacula on my parents' laptop the other day
so
   that they could do proper backups to DVD+RW disks. Only it isn't
   working.
  
  All I can say is the dvd writing code is alpha quality or even worse
  since the developer working on that quit the project many years ago.
  On top of that the developers (including Kern) also have discussed
  removing that feature entirely.
 
 Right; I was afraid someone was going to say that.
 
  I would suggest that you just make 4GB disk volumes and externally
  burn these to dvd media then delete the 4GB disk volumes from your
  server.
 
 Not an option -- I'm doing it this way since explaining dad how k3b
 works seemed to be too difficult... I wanted to have something that
said
 click here for backup, or some such.
 
 I guess I'll have to find something else, then.
 

A 4GB memory stick wouldn't do the job for you? It should be heaps more
reliable than a DVD+RW (won't care about dirty marks, scratches, etc),
and faster, and given that 4GB is on the very low end of what's
available these days they should be pretty cheap too.

James




Re: [Bacula-users] Win2008 RC2 64 Bit Backup fails

2011-05-10 Thread James Harper
 Hi,
 
 I tried to back up my Windows 2008 RC2 server with the newest Bacula-FD
 64 bit client. I've enabled VSS but I still get the following error messages:
 
 Could not open directory c:/Documents and
 Settings/Administrator/AppData/Local/Anwendungsdaten/Anwendungsdaten/Anwendung
 sdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendu
 ngsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwen
 dungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anw
 endungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/A
 nwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten
 /Anwendungsdaten/Anwendungsdaten/Anwendungsdaten/Anwendungsdaten:
 ERR=Der Dateiname konnte durch das System nicht zugeordnet werden.
 
  Options {
signature = MD5
onefs = no


^^^ that is your problem. Windows Vista put everything under /users and used 
junction points to make sure anything that referenced /Documents and Settings 
still worked. A junction point is a mount point as far as bacula is concerned 
and by letting it follow into different filesystems (even though it's the same 
filesystem) you end up recursing. Set onefs = yes and you might have more luck.
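In other words the Options block becomes something like:

 Options {
   signature = MD5
   onefs = yes
 }

and then Bacula won't descend into junction points (or real mount
points), which avoids the recursion.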

James



Re: [Bacula-users] Help Troubleshooting Windows FD

2011-05-04 Thread James Harper
 On Wed, 4 May 2011 08:17:18 -0400
 Mingus Dew shon.steph...@gmail.com wrote:
 
   I am trying to install the latest Win64 client downloaded from
  bacula.org on a Windows 2008 server. The application does appear to
  install correctly, but when trying to start the service it fails to
  start and generates Error 1067: The process terminated unexpectedly.
 
   I actually have gotten this error, on multiple client versions,
  on all my different versions (2k, 2k3, 2k8) of Windows servers for
  over 2 years now. I've not EVER been able to run Bacula on Windows and
  really need some help figuring out why this is.
 
 Start with debugging.
 Assuming you have installed it under %programfiles%\bacula, do this
 (you will probably need to do this from an administrative account):
 1) Run cmd.exe
 2) cd %programfiles%\bacula
 3) Run: bacula-fd.exe -c %programfiles%\bacula\bacula-fd.conf -t
This should check your config file (specified via -c; you have to
specify it explicitly because otherwise Bacula tends to look for it
somewhere under your current user's %appdata%).
 If Bacula reports any errors, fix them.
 If it still doesn't work after fixing any errors detected,
 4) Run: bacula-fd.exe -c %programfiles%\bacula\bacula-fd.conf -d 100
Which will create a trace file (called whatever.trace in the
bacula's installation directory with whatever being the name of
this FD instance read from the configuration file).
Look at that file to see what happens when bacula-fd starts up.
 

From bconsole, "setdebug level=100 trace=1" will do the same as the
above, prompting you for the client if you have more than one (or you
can specify it with client=, I think).
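For example (client name made up):

  *setdebug level=100 trace=1 client=server1-fd

and the same command with trace=0 turns tracing off again.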
 
James



Re: [Bacula-users] Backup to remote cifs share

2011-04-26 Thread James Harper
 Hey
 
 We have a few clients where we use bacula to make a backup to a remote
 cifs share (usually a windows fileserver).
  This is implemented as a File Storage device in the sd, with a
  requires mount = yes and the necessary mount commands etc.
 
 Now this has never been an issue until last week when a client had DNS
 problems, the share wasn't accessible, and bacula decided to just fill
 the local disk.
 I had always been under the impression that a failure to mount would
 cause the job to fail? (and it has in some tests a few months ago if I
 recall correctly)
 
  It is probably worth mentioning though that in an effort to resolve
  some early permissions problems I gave bacula permissions on the
  mountpoint itself (/mnt/bacula).
  Could this be the reason bacula just decided to use the local disk
  because it *could*, even though the mount clearly failed with an error.
 

If your share mounts on /mnt/bacula and the share isn't mounted, bacula
will quite happily just write to the /mnt/bacula directory itself. To
avoid this problem I always back up to a subdirectory inside the share,
eg /mnt/bacula/volumes/ or something like that, although I wouldn't
normally call my mount point bacula, so it would be more like
/mnt/cifs/bacula, where cifs is the mount point and bacula is a
directory under it. That way, if the share isn't mounted, the directory
won't exist and bacula won't do something you aren't expecting. I think
this is advisable for both CIFS and USB attached storage; the latter is
what I normally use.

I assume you also allow Bacula to create new volumes... that would
probably be a mitigating factor here too.
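A Device resource along those lines might look like this (a sketch only:
the names, share, and mount commands are made up and need to match your
system):

  Device {
    Name = CifsStorage
    Media Type = File
    Archive Device = /mnt/cifs/bacula
    Requires Mount = yes
    Mount Point = /mnt/cifs
    Mount Command = "/bin/mount -t cifs //fileserver/backup %m"
    Unmount Command = "/bin/umount %m"
    Random Access = yes
    Automatic Mount = yes
  }

The key point is that Archive Device names a subdirectory that only
exists while the share is actually mounted.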

James




Re: [Bacula-users] bconsole delayed response

2011-04-22 Thread James Harper
 
 Hello All,
 
 issuing run command in bconsole sometimes takes a long time, up to 20
 minutes, to respond.
 
 What could be the problem?
 
 Any hint would be highly appreciated.
 

If the job you are running needs a volume recycled then purging the
records can take quite a while and will lock the database until it's
complete.

James



Re: [Bacula-users] Duplicate entries on mysql database

2011-04-15 Thread James Harper
 Hello Friends,
 
 I've been using bacula for about a month.. no problems, but this week
 many jobs failed with this error:
 
 2011-04-15 20:19:25 JobId 637: Fatal error: sql_create.c:875 Fill
 Filename table Query failed: INSERT INTO Filename (Name) SELECT a.Name
 FROM (SELECT DISTINCT Name FROM batch) AS a WHERE NOT EXISTS (SELECT
 Name FROM Filename AS f WHERE f.Name = a.Name): ERR=Duplicate entry
 '20821966' for key 1
 
 My database really has a big number of records, but is there a limit?
 
 I use a mysql database. I already tried to repair the db using
 myisamchk but that did not fix the problem.
 
 Has anybody seen this problem before?
 

What version of Bacula are you using?

James



[Bacula-users] force backup of unchanged file in incremental backup

2011-04-14 Thread James Harper
The last modified datestamp on MSSQL database files doesn't get
changed unless the actual file dimensions change (eg it 'grows') or when
the file is closed. This means that an incremental backup won't
necessarily back up the database files unless they have changed.
Accurate won't catch this either as the metadata it uses will be
identical.

Is there a way to force the backup of specific unchanged files during an
incremental or differential backup? Eg:

Option {
  File = C:/database/mydb.mdf
  Always Back Up = Yes
}

Thanks

James



