Re: [Bacula-users] I love Bacula. Have to confess.

2014-10-30 Thread Jari Fredrisson
On 30.10.2014 5:03, Dan Langille wrote:
 On Oct 29, 2014, at 6:08 PM, Jari Fredrisson ja...@iki.fi wrote:

 Hands down the best software I have ever used. This software has never
 let me down.

 Thank You, Kern Sibbald and Bacula Systems!
 Which database are you using? :)

 — 
 Dan Langille

We use MariaDB 5.5. No problems whatsoever with Bacula, while MySQL 5.5
is sometimes picky with other apps.

br. jarif




--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I love Bacula. Have to confess.

2014-10-30 Thread heitor
Hi there,

Spreading the love: I think we need to organize a community get-together at 
the next Bacula Conference.

Regards,
 
Heitor Medrado de Faria 
Need Bacula training? 10% discount coupon code at Udemy: bacula-users 
+55 61 2021-8260 | 8268-4220 
Site: www.bacula.com.br | Facebook: heitor.faria | Gtalk: heitorfa...@gmail.com 
 


- Original Message -
From: Jari Fredrisson ja...@iki.fi
Cc: bacula-users bacula-users@lists.sourceforge.net
Sent: Thursday, October 30, 2014 4:45:24 AM
Subject: Re: [Bacula-users] I love Bacula. Have to confess.

On 30.10.2014 5:03, Dan Langille wrote:
 On Oct 29, 2014, at 6:08 PM, Jari Fredrisson ja...@iki.fi wrote:

 Hands down the best software I have ever used. This software has never
 let me down.

 Thank You, Kern Sibbald and Bacula Systems!
 Which database are you using? :)

 — 
 Dan Langille

We use MariaDB 5.5. No problems whatsoever with Bacula, while MySQL 5.5
is sometimes picky with other apps.

br. jarif





Re: [Bacula-users] Unable to recover data using bscan

2014-10-30 Thread Ana Emília M . Arruda
Hi Keith,

I'm afraid it's a database version problem. Your 2012 backups have
database version=12 and your current catalog uses version 14. If this is the
problem (I'm not sure it is), you can solve it by creating a 5.0.3
director virtual machine with an empty database and restoring the eight
volumes and jobs with bscan.

For old backups, I normally use bextract for restores, mainly because my
file retention times are not very long, and with bextract I can easily do
full restores or select the folders/files I want restored (and you can use
only one volume or all of them).

Best regards,
Ana

On Thu, Oct 30, 2014 at 1:23 AM, Keith T keithb...@yahoo.com wrote:

 Hi Ana,

 Thanks for your kind advice! I will post the results after trying it.

 But I would like to ask whether it is possible to restore data using one
 volume instead of multiple volumes, since the full backup spans eight
 volumes (and it seems I cannot restore without recreating the first
 volume). Also, should I use the job ID (e.g. 7) that was created when
 bscanning multiple volumes, since bscanning a single volume did not
 create a new job ID? When using job ID 7, no more folders were recovered
 after bscanning only volume 7, even though volfiles now has the value 232
 (after re-running bscan -V COMHKOB2012HDD7 -v -s -S -m -c
 /etc/bacula/bacula-sd.conf /mnt/usb3docking1).

 [root@BSVR iscsi]# echo list media|bconsole
 Connecting to Director BSVR:9101
 1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
 Enter a period to cancel a command.
 list media
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 Pool: Default
 No results to list.
 Pool: YearlyPool

 +---------+-----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | mediaid | volumename      | volstatus | enabled | volbytes        | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
 +---------+-----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 |      11 | COMHKOB2012HDD1 | Archive   |       1 | 998,951,619,932 |        0 |   31,536,000 |       0 |    0 |         0 | File      | 2013-02-18 09:56:26 |
 |      12 | COMHKOB2012HDD2 | Archive   |       1 | 999,122,570,931 |        0 |   31,536,000 |       0 |    0 |         0 | File      | 2013-02-19 01:13:42 |
 |      13 | COMHKOB2012HDD3 | Archive   |       1 | 999,118,420,008 |        0 |   31,536,000 |       0 |    0 |         0 | File      | 2013-02-19 10:16:15 |
 |      14 | COMHKOB2012HDD4 | Archive   |       1 | 999,122,062,160 |      232 |   31,536,000 |       0 |    0 |         0 | File      | 2013-02-19 22:43:03 |
 |      15 | COMHKOB2012HDD5 | Archive   |       1 | 999,126,549,297 |        0 |   31,536,000 |       0 |    0 |         0 | File      |                     |
 |      16 | COMHKOB2012HDD6 | Archive   |       1 | 999,121,777,768 |      232 |   31,536,000 |       0 |    0 |         0 | File      |                     |
 |      17 | COMHKOB2012HDD7 | Archive   |       1 | 999,126,642,823 |      232 |   31,536,000 |       0 |    0 |         0 | File      |                     |
 |      18 | COMHKOB2012HDD8 | Archive   |       1 | 398,969,357,878 |       92 |   31,536,000 |       0 |    0 |         0 | File      |                     |
 +---------+-----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

 Best regards,
 Keith

   On Wednesday, October 29, 2014 7:17 PM, Ana Emília M. Arruda 
 emiliaarr...@gmail.com wrote:


 Hi Keith,

 Have you checked with bls the contents of these volumes?

 Best regards,
 Ana

 On Wed, Oct 29, 2014 at 6:34 AM, Keith T keithb...@yahoo.com wrote:

 Dear All,

 I was trying to restore data that had been backed up in 2012, but some
 folders were not found after recreating the catalog using the bscan
 command described below. I would appreciate any ideas on how to fix this.

 # Tried to recover the catalog with the bscan command:
 bscan -V
 COMHKOB2012HDD1\|COMHKOB2012HDD2\|COMHKOB2012HDD3\|COMHKOB2012HDD4\|COMHKOB2012HDD5\|COMHKOB2012HDD6\|COMHKOB2012HDD7\|COMHKOB2012HDD8
 -v -s -S -m -c /etc/bacula/bacula-sd.conf /mnt/usb3docking1

 All eight volumes were created in the catalog, but some have volfiles=0:

 Pool: YearlyPool

 +---------+-----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | mediaid | volumename      | volstatus | enabled | volbytes        | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
 +---------+-----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 |      11 | COMHKOB2012HDD1 | Archive   |       1 | 998,951,619,932 |        0 |   31,536,000 |       0 |    0 |         0 | File      | 2013-02-18 09:56:26 |

[Bacula-users] Job transfer rate

2014-10-30 Thread Jeff MacDonald
Hi,

I have some backups running at 2 MB/s, which for a 380 GB backup is just too 
slow. I’m trying to find my bottleneck.

Some questions:

- Is the backup rate only shown in “messages”, or is it stored in the db 
anywhere? Or could I just compute jobbytes / (endtime - starttime) from the 
jobs table?

- Does Bacula write data to disk as a stream, or as lots of small, 
latency-dependent writes?
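(The catalog does store what is needed for the first question: the Job table has jobbytes, starttime, and endtime columns. A minimal sketch of the arithmetic — the helper name and sample values below are mine, not from the thread:)

```python
from datetime import datetime

# Average job transfer rate from Bacula catalog Job columns, e.g. fetched with:
#   SELECT jobbytes, starttime, endtime FROM job WHERE jobid = 123;
def job_rate_mb_s(jobbytes, starttime, endtime):
    fmt = "%Y-%m-%d %H:%M:%S"
    secs = (datetime.strptime(endtime, fmt)
            - datetime.strptime(starttime, fmt)).total_seconds()
    return jobbytes / secs / 1e6 if secs > 0 else 0.0

# A 380 GB job that took about 52.8 hours averages 2 MB/s:
print(job_rate_mb_s(380 * 10**9, "2014-10-29 00:00:00", "2014-10-31 04:46:40"))
```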

My environment looks like this:

- Bacula (and Postgres on the same VM), an MS Small Business Server, and 3 or 4 
other VMs run on a 6-disk array of 7200 RPM SATA disks (I bet this is already 
my slow point).
- Bacula stores its backups on an NFS-mounted NAS, about 0.7 ms of ping away.

Tips/Suggestions?

Jeff.





Re: [Bacula-users] Job transfer rate

2014-10-30 Thread John Drescher
On Thu, Oct 30, 2014 at 9:27 AM, Jeff MacDonald j...@terida.com wrote:
 Hi,

 I have some backups going at 2MB/s which for a 380gig backup is just too 
 slow. I’m trying to find my bottleneck.

 Some questions:

 - Is the rate of the backup only shown in “messages” or is it stored in the 
 db anywhere. Or could I just do jobbytes / endtime-starttime in the jobs 
 table?

 - Does bacula write data to disk via a stream or lots of little latency 
 dependant writes?

 My environment looks like this

 - Bacula (and postgres on the same VM), a MS Small business server and 3 or 4 
 other VMs run on a 6 disk array of 7200rpm SATA disks ( I bet this is already 
 my slowpoint )
 - Bacula stores its backups on a NFS mounted NAS, about .7ms of ping away.

 Tips/Suggestions?


Did you benchmark the client filesystem? Are there loads of small
files? Did you try enabling attribute spooling? Did you tune your
database?

John



[Bacula-users] automatic restore Test

2014-10-30 Thread Tim Macholl
Hello,

Is it possible to restore a file automatically? I need an auto-restore
test for Bacula: once per day I write date > /mnt/share/restore-test, and
after the daily backup I restore this file and send a mail with its
contents. The problem is, I don't know how to restore the file
automatically.

Regards,
Tim





Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes
On 14-10-30 06:27 AM, Jeff MacDonald wrote:
 Hi,

 I have some backups going at 2MB/s which for a 380gig backup is just too 
 slow. I’m trying to find my bottleneck.

 Some questions:

 - Is the rate of the backup only shown in “messages” or is it stored in the 
 db anywhere. Or could I just do jobbytes / endtime-starttime in the jobs 
 table?

 - Does bacula write data to disk via a stream or lots of little latency 
 dependant writes?

 My environment looks like this

 - Bacula (and postgres on the same VM), a MS Small business server and 3 or 4 
 other VMs run on a 6 disk array of 7200rpm SATA disks ( I bet this is already 
 my slowpoint )
 - Bacula stores its backups on a NFS mounted NAS, about .7ms of ping away.

 Tips/Suggestions?

 Jeff.

What is the content of your backups?  Some things (e.g. thousands of tiny 
files) will cause a lot of seeks on the machine being backed up.  If you 
aren't using attribute spooling, then each backed-up file also causes a 
record to be inserted into the database, which may take time depending 
on your DB environment.

The suggestions for tuning will be different if you are backing up a 
few dozen 10 GB files versus a million 10 KB files.

Bryn



Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Jeff MacDonald
 
 
 Tips/Suggestions?
 
 Jeff.
 
 What is the content of your backups?  Some things (ie thousands of tiny 
 files) will cause a lot of seeks on the machine to be backed up.  If you 
 aren't using attribute spooling then each backed up file also causes a 
 record to be inserted in to the database, which may take time depending 
 on your DB environment.
 
 The 'suggestions' for tuning will be different if you are backing up a 
 few dozen 10GB files versus backing up a million 10kb files.

It's mostly a Windows OS, with all its sundry smaller files and a few larger 
database dumps, etc.

I guess what I have to ascertain is whether the slow part is getting the data 
FROM the servers or putting the data TO the storage.

I'm not sure which of the two the rate in the job report measures, or if it 
somehow encompasses both.

jeff




 
 Bryn
 


Re: [Bacula-users] automatic restore Test

2014-10-30 Thread Davide Giunchi
Hi Tim,
I think you should create a script that invokes bconsole's restore 
command:

echo -e "restore client=client_to_restore-fd jobid=123 where=/tmp/restore-test/ 
restoreclient=client-where-restore-fd file=/tmp/listfile select current 
done\nyes" | bconsole

Take a look at the console manual for detailed information:
http://www.bacula.org/7.0.x-manuals/en/console/index.html
(the restore section, plus the restore section of the admin manual).
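Wrapped in a script, the daily check could look something like this sketch (the function name, client name, paths, and mail address below are placeholders I made up; the jobid of the day's backup would still have to be looked up):

```shell
#!/bin/sh
# Daily restore-test sketch: restore one known file and mail its contents.

# Build the bconsole restore command line for a single file.
build_restore_cmd() {
    # $1=client  $2=jobid  $3=where  $4=file
    printf 'restore client=%s jobid=%s where=%s file=%s select current done\nyes\n' \
        "$1" "$2" "$3" "$4"
}

# In the real script, pipe it into bconsole and mail the restored file:
#   build_restore_cmd backup-client-fd 123 /tmp/restore-test /mnt/share/restore-test | bconsole
#   mail -s "Bacula restore test" admin@example.com </tmp/restore-test/mnt/share/restore-test
build_restore_cmd backup-client-fd 123 /tmp/restore-test /mnt/share/restore-test
```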

Regards

- Original Message -
 From: Tim Macholl mach...@scienlab.de
 To: Bacula-users bacula-users@lists.sourceforge.net
 Sent: Thursday, October 30, 2014 14:59:43
 Subject: [Bacula-users] automatic restore Test
 
 Hello,
 
 is it possible to restore a file automatically. The reason is I need an
 auto-restore test for bacula. Once per day I write date 
 /mnt/share/restore-test and after daily backup I restore this file and
 send a mail with the file content. The problem is, I don't know how to
 restore the file automatically.
 
 Regards,
 Tim
 
 
 

-- 
Certifications: RHCVA, LPI 1
SOASI - www.soasi.com
Open Source Software and Systems Development
Office: Via Gandhi 28, 47121 Forlì (FC)
Tel.: +39 0543 090053 - Fax: +39 0543 579928



Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes

On 14-10-30 07:50 AM, Jeff MacDonald wrote:



Tips/Suggestions?

Jeff.


What is the content of your backups?  Some things (ie thousands of tiny
files) will cause a lot of seeks on the machine to be backed up.  If you
aren't using attribute spooling then each backed up file also causes a
record to be inserted in to the database, which may take time depending
on your DB environment.

The 'suggestions' for tuning will be different if you are backing up a
few dozen 10GB files versus backing up a million 10kb files.


Its mostly a windows os, with all its sundry smaller files and a few 
larger database dumps etc.


I guess what I have to accertain is the slow part getting the data 
FROM the servers or the slow part putting the data TO the storage.


I’m not sure which value the rate is in the job report, or if rate is 
somehow encompassing both.


jeff

The job report rate is the final average rate of the job; it 
doesn't distinguish between the 'input' rate and the 
'output' rate.


Yep, you're going to need to do some investigation on the storage side 
of the VM machine you are backing up, the director itself, the storage 
daemon itself (though I'm guessing it is on the same system as the 
director for you) and the final storage.


Also, it's not quite clear from your description: is the final storage on 
a different NAS altogether from your VMs? (Hoping so!) What 
virtualization platform are you running?


Finally, the question about attribute spooling is a big one: if you are 
backing up a lot of small files and do not have attribute spooling 
turned on, you will have abysmal performance, especially if the director 
is running on the same disks that you are backing up.


Database writes are (almost) always synchronous, meaning the system will 
stop and wait for the storage layer to confirm the data is ACTUALLY 
committed to disk before proceeding.  If you are seeking all over the 
disk backing up a bunch of small files, while trying to do a whole ton of 
tiny DB writes to the same spindles at the same time, your hard drive 
heads are going to be flying around like crazy.  An array of 7200 RPM 
disks in any sort of parity RAID configuration will not handle more than 
50-90 random IOPS (I/O operations per second) at best in real life, with 
a DB write or a file read each counting as one IOP.  If you are backing 
up lots of small files randomly distributed around the storage, you are 
quite likely hitting an IOPS wall: one IOP to read the file and one IOP 
to write the DB record means no more than 25-45 files per second.  With 
4 KB files that is 100-180 KB/s and a completely maxed-out storage layer.
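(That arithmetic can be checked directly; a quick sketch, where the 50-90 IOPS figures are the rough estimates from the paragraph above, not measurements:)

```python
# Each small file costs ~2 random IOPs: one to read the file, one synchronous
# catalog write. The array's random-IO budget then caps file throughput.
def small_file_backup_rate(iops, file_kb=4, iops_per_file=2):
    files_per_sec = iops / iops_per_file
    return files_per_sec, files_per_sec * file_kb  # (files/s, KB/s)

print(small_file_backup_rate(50))  # low end:  (25.0, 100.0)
print(small_file_backup_rate(90))  # high end: (45.0, 180.0)
```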


Even WITH attribute spooling enabled you are still going to be in a 
less-than-ideal position since the spooled attributes still need to be 
written to the same spindles with the hardware configuration you've 
described.


Bryn




Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Jeff MacDonald

 On Oct 30, 2014, at 12:17 PM, Bryn Hughes li...@nashira.ca wrote:
 The job report rate will be the final average rate of the job, it doesn't 
 know/specify the difference between the 'input' rate and the 'output' rate.
 
 Yep, you're going to need to do some investigation on the storage side of the 
 VM machine you are backing up, the director itself, the storage daemon itself 
 (though I'm guessing it is on the same system as the director for you) and 
 the final storage.
 
 Also it's not quite clear from your description, is the final storage on a 
 different NAS all together from your VMs? (hoping so!)  What virtualization 
 platform are you running?
 
 Finally the question about attribute spooling is a big one - if you are 
 backing up a lot of small files and you do not have attribute spooling turned 
 on, you will have abysmal performance especially if the director is running 
 on the same disks that you are backing up.  
 
 Database writes are (almost) always synchronous writes, meaning the system 
 will stop and wait for the storage layer to say yes the data is ACTUALLY 
 committed to disk before proceeding.  If you are seeking all over backing up 
 a bunch of small files, then trying to do a whole ton of tiny DB writes at 
 the same time to the same spindles your hard drive heads are going to be 
 flying around like crazy.  An array of 7200 RPM disks in any sort of parity 
 RAID configuration will not be able to handle more than 50-90 random IOPs 
 (Operations per Second) at best in real life, with a DB write or a file read 
 counting as an IOP.  If you are backing up lots of small files randomly 
 distributed around the storage you are quite likely hitting an IOP wall - an 
 IOP to read the file and an IOP to write the DB record means not more than 
 25-45 files per second.  4kb files = 100-180kb/sec and a completely maxed out 
 storage layer.
 
 Even WITH attribute spooling enabled you are still going to be in a 
 less-than-ideal position since the spooled attributes still need to be 
 written to the same spindles with the hardware configuration you've 
 described.  
 
 Bryn
 

This was really helpful and basically answered all of my questions without 
my having to investigate the actual setup very much.

I’m using VMware as my virtualization platform. Bacula and its Postgres live 
on the same disks that they are backing up (local storage), and data is sent 
off to a remote NAS via GigE.

My guess is that it’s an IOPS wall like you mentioned. The array is running a 
bunch of VMs that are under heavy use by the staff.

That makes a stronger and stronger argument for me to recommend a dedicated 
Bacula appliance: 16 GB of RAM, 4 cores, 1 TB of 7200 RPM disk for Postgres, 
and a tape drive :)

jeff.




Re: [Bacula-users] automatic restore Test

2014-10-30 Thread Radek Svoboda
Hi all,

Just one question on this topic: is it possible to restore a file to a 
different drive (I mean a drive other than the one that was used for the 
backup)?

Regards,
Radek
--
On 30.10.2014 at 15:54, Davide Giunchi wrote:
 Hi Tim,
 I think that you should create a script that invoke the restore bconsole's 
 command:

 echo -e restore client=client_to_restore-fd jobid=123 
 where=/tmp/restore-test/ restoreclient=client-where-restore-fd 
 file=/tmp/listfile select current done\nyes|bconsole

 Take a look at the console manual to get detailed informations:
 http://www.bacula.org/7.0.x-manuals/en/console/index.html
 (restore section and restore section of the admin manual).

 Regards

 - Original Message -
 From: Tim Macholl mach...@scienlab.de
 To: Bacula-users bacula-users@lists.sourceforge.net
 Sent: Thursday, October 30, 2014 14:59:43
 Subject: [Bacula-users] automatic restore Test

 Hello,

 is it possible to restore a file automatically. The reason is I need an
 auto-restore test for bacula. Once per day I write date 
 /mnt/share/restore-test and after daily backup I restore this file and
 send a mail with the file content. The problem is, I don't know how to
 restore the file automatically.

 Regards,
 Tim








Re: [Bacula-users] Job transfer rate

2014-10-30 Thread John Drescher
 Making a stronger and stronger arguement for me to recommend dedicated bacula 
 appliance. 16 gigs of ram, 4 cores. 1tb of 7200 for postgres and a tape drive 
 :)

Maybe an enterprise SSD for Postgres.

John



Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Jeff MacDonald

 On Oct 30, 2014, at 12:36 PM, John Drescher dresche...@gmail.com wrote:
 
 Making a stronger and stronger arguement for me to recommend dedicated 
 bacula appliance. 16 gigs of ram, 4 cores. 1tb of 7200 for postgres and a 
 tape drive :)
 
 Maybe an enterprise ssd for postgres.
 
 John

Agreed, they’re not even that much of a $$ hit.


Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes
On 14-10-30 08:27 AM, Jeff MacDonald wrote:
 On Oct 30, 2014, at 12:17 PM, Bryn Hughes li...@nashira.ca wrote:
 The job report rate will be the final average rate of the job, it doesn't 
 know/specify the difference between the 'input' rate and the 'output' rate.

 Yep, you're going to need to do some investigation on the storage side of 
 the VM machine you are backing up, the director itself, the storage daemon 
 itself (though I'm guessing it is on the same system as the director for 
 you) and the final storage.

 Also it's not quite clear from your description, is the final storage on a 
 different NAS all together from your VMs? (hoping so!)  What virtualization 
 platform are you running?

 Finally the question about attribute spooling is a big one - if you are 
 backing up a lot of small files and you do not have attribute spooling 
 turned on, you will have abysmal performance especially if the director is 
 running on the same disks that you are backing up.

 Database writes are (almost) always synchronous writes, meaning the system 
 will stop and wait for the storage layer to say yes the data is ACTUALLY 
 committed to disk before proceeding.  If you are seeking all over backing 
 up a bunch of small files, then trying to do a whole ton of tiny DB writes 
 at the same time to the same spindles your hard drive heads are going to be 
 flying around like crazy.  An array of 7200 RPM disks in any sort of parity 
 RAID configuration will not be able to handle more than 50-90 random IOPs 
 (Operations per Second) at best in real life, with a DB write or a file read 
 counting as an IOP.  If you are backing up lots of small files randomly 
 distributed around the storage you are quite likely hitting an IOP wall - an 
 IOP to read the file and an IOP to write the DB record means not more than 
 25-45 files per second.  4kb files = 100-180kb/sec and a completely maxed 
 out storage layer.

 Even WITH attribute spooling enabled you are still going to be in a 
 less-than-ideal position since the spooled attributes still need to be 
 written to the same spindles with the hardware configuration you've 
 described.

 Bryn

 This was really helpful and basically just answered all of my questions 
 without having to investigate the actual setup very much.

 I’m using VMWare for my virt platform. Bacula and its postgres live on the 
 same disks that they are backing up (which is local storage) and data is sent 
 off to to a remote NAS via gige.

 My guess is that its an IOP wall like you mentioned.Its running a bunch of 
 VMs that are under heavy usage by the staff.

 Making a stronger and stronger arguement for me to recommend dedicated bacula 
 appliance. 16 gigs of ram, 4 cores. 1tb of 7200 for postgres and a tape drive 
 :)

 jeff.

Just be aware that you might not see a dramatic increase in speed just 
moving Bacula itself!

If you are using VMWare with VMDK files on a VMFS volume you need to be 
aware that any IO by a guest requires a reservation of the entire VMFS 
volume.  Locking is happening at the SCSI layer - if one guest wants to 
read one byte of data nobody else can do anything until its IO operation 
is complete.  Remembering that you probably are only going to get around 
75 IOPs you can see how a VMFS volume with more than a handful of 
virtual machines on it can very quickly end up performing very poorly, 
especially with spinning rust underneath it.  A good RAID card with a 
LOT of cache memory can help with overall system performance, but 
backups by definition are going to be touching lots of areas of data 
that aren't likely to be in cache.

What I'm getting at is you might actually need to focus your efforts and 
dollars on the storage underneath your VMs before you do too much with 
your backup system.  A great big nice happy dedicated Bacula server 
would be nice, but if the VMs are still IOP constrained ESPECIALLY if 
they are actively in use while being backed up you probably won't see 
that much of an improvement.

An easy way to validate this would be to ensure you have attribute 
spooling turned on and to set up the attribute spooling to write to your 
NAS rather than to local storage.  That will get the VM storage 
infrastructure out of your backup pathway.

Bryn




Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Jeff MacDonald
 
 Just be aware that you might not see a dramatic increase in speed just 
 moving Bacula itself!
 
 If you are using VMWare with VMDK files on a VMFS volume you need to be 
 aware that any IO by a guest requires a reservation of the entire VMFS 
 volume.  Locking is happening at the SCSI layer - if one guest wants to 
 read one byte of data nobody else can do anything until its IO operation 
 is complete.  Remembering that you probably are only going to get around 
 75 IOPs you can see how a VMFS volume with more than a handful of 
 virtual machines on it can very quickly end up performing very poorly, 
 especially with spinning rust underneath it.  A good RAID card with a 
 LOT of cache memory can help with overall system performance, but 
 backups by definition are going to be touching lots of areas of data 
 that aren't likely to be in cache.
 
 What I'm getting at is you might actually need to focus your efforts and 
 dollars on the storage underneath your VMs before you do too much with 
 your backup system.  A great big nice happy dedicated Bacula server 
 would be nice, but if the VMs are still IOP constrained ESPECIALLY if 
 they are actively in use while being backed up you probably won't see 
 that much of an improvement.
 
 An easy way to validate this would be to ensure you have attribute 
 spooling turned on and to set up the attribute spooling to write to your 
 NAS rather than to local storage.  That will get the VM storage 
 infrastructure out of your backup pathway.
 
 Bryn

This has been a fantastic education. Thanks. I’ll tell the client that 
their IO is slow... and I’ll get told “Oh! It seems fine to us!” :)


I googled and found documentation about turning on data spooling, but not 
about independently turning on attribute spooling.

Could you point me to that, please? (I know, I know... I’ll keep looking :))

Jeff.
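(For the archives: attribute spooling is enabled per Job in the Director configuration, independently of data spooling. A sketch; the Job name here is just an example:)

```conf
Job {
  Name = "BackupClient1"
  # ... usual Client/FileSet/Schedule/Storage/Pool directives ...
  Spool Attributes = yes   # batch file attributes and insert them into the
                           # catalog at end of job, instead of one
                           # synchronous DB write per file
}
```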




Re: [Bacula-users] Job transfer rate

2014-10-30 Thread John Lockard
Yes, but which IO?

Disk IO on the client?
Network IO from the client to the network?
Network IO from the network to the Bacula Director?
Network IO from the Bacula Director to the Bacula SD?
Disk IO on the Bacula SD?
Database IO on the Bacula Director?


It seems like you have more work to do than just saying it's the IO.  I'm 
not sure of the tools on Windows to interrogate IO at the disk or network 
level, but on Linux/Unix a good place to start is the sar (sysstat) 
utilities.
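(On the Linux/Unix side, the usual starting points look like this; command sketches only, and sar needs the sysstat package installed and collecting:)

```sh
# Per-device utilization and wait times; %util near 100 or high await
# on the backup spindles points at a disk bottleneck:
iostat -x 5

# Disk and network activity sampled every 5 seconds, 3 samples:
sar -d 5 3        # block devices
sar -n DEV 5 3    # network interfaces
```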

-John

On Thu, Oct 30, 2014 at 12:20 PM, Jeff MacDonald j...@terida.com wrote:


 Just be aware that you might not see a dramatic increase in speed just
 moving Bacula itself!

 If you are using VMWare with VMDK files on a VMFS volume you need to be
 aware that any IO by a guest requires a reservation of the entire VMFS
 volume.  Locking is happening at the SCSI layer - if one guest wants to
 read one byte of data nobody else can do anything until its IO operation
 is complete.  Remembering that you probably are only going to get around
 75 IOPs you can see how a VMFS volume with more than a handful of
 virtual machines on it can very quickly end up performing very poorly,
 especially with spinning rust underneath it.  A good RAID card with a
 LOT of cache memory can help with overall system performance, but
 backups by definition are going to be touching lots of areas of data
 that aren't likely to be in cache.

 What I'm getting at is you might actually need to focus your efforts and
 dollars on the storage underneath your VMs before you do too much with
 your backup system.  A great big nice happy dedicated Bacula server
 would be nice, but if the VMs are still IOP constrained ESPECIALLY if
 they are actively in use while being backed up you probably won't see
 that much of an improvement.

 An easy way to validate this would be to ensure you have attribute
 spooling turned on and to set up the attribute spooling to write to your
 NAS rather than to local storage.  That will get the VM storage
 infrastructure out of your backup pathway.

 Bryn


 This has been a fantastic education. Thanks. I’ll recommend to the client
 that their IO is slow.. and I’ll get told “Oh! It seems fine to us!” :)


 I googled and found documentation about turning on Data Spooling, but not
 indepnedantly turning on Attribute Spooling.

 Could you point me at that please.. ( I know I Know.. I’ll keep looking :)
 )

 Jeff.








-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  105 South State St. | 4325 North Quad
  jlock...@umich.edu |Ann Arbor, MI  48109-1285
 www.umich.edu/~jlockard | 734-936-7255 | 734-764-2475 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] automatic restore Test

2014-10-30 Thread Heitor Faria
Hey Mr. Radek,

If by "drive" you mean file system partitions: yes, I think it's quite possible 
with the File Relocation feature, at the bconsole mod-restore prompt:

1. Strip prefix: the old drive. 2. Add prefix: the new drive.

If by "drive" you mean backup devices: yes, that's also doable.

Regards,
--
Heitor Medrado de Faria
+55 61 82684220
Need Bacula in-person, telepresence, or online training? Visit: 
http://www.bacula.com.br

On October 30, 2014 13:10:22 BRST, Radek Svoboda radek.svob...@upp.cz 
wrote:
HI all,
  Just one question to this theme

is it possible to restore a file on a selected drive (I mean  - other 
drive, than this one, which was used for backup) ?

Regards
RAdek
--
On 30.10.2014 at 15:54, Davide Giunchi wrote:
 Hi Tim,
 I think that you should create a script that invoke the restore
bconsole's command:

 echo -e restore client=client_to_restore-fd jobid=123
where=/tmp/restore-test/ restoreclient=client-where-restore-fd
file=/tmp/listfile select current done\nyes|bconsole

 Take a look at the console manual to get detailed informations:
 http://www.bacula.org/7.0.x-manuals/en/console/index.html
 (restore section and restore section of the admin manual).

 Regards

 - Original Message -
 From: Tim Macholl mach...@scienlab.de
 To: Bacula-users bacula-users@lists.sourceforge.net
 Sent: Thursday, October 30, 2014 14:59:43
 Subject: [Bacula-users] automatic restore Test

 Hello,

 is it possible to restore a file automatically. The reason is I need an
 auto-restore test for bacula. Once per day I write date 
 /mnt/share/restore-test and after daily backup I restore this file and
 send a mail with the file content. The problem is, I don't know how to
 restore the file automatically.

 Regards,
 Tim









Re: [Bacula-users] Forcing Bacula to purge/recycle a tape instead of appending?!

2014-10-30 Thread Thorsten Reichelt
Hi!

That's a great idea!
But this would mean that I have to generate jobs for each month... for
each client. Or am I wrong?!

 If you truly want the monthly tapes to only ever be used for that 
 particular month, you may wish to consider creating a pool for each 
 month instead.  Then in your Schedule you can specify the specific pool 
 for that month:



Re: [Bacula-users] Forcing Bacula to purge/recycle a tape instead of appending?!

2014-10-30 Thread Thorsten Reichelt
Thank you so much for your answers!

At the moment I unfortunately have very little time to try out all your
hints :(



Re: [Bacula-users] Forcing Bacula to purge/recycle a tape instead of appending?!

2014-10-30 Thread John Drescher
 That's a great idea!
 But this would mean that I have to generate jobs for each month... for
 each client. Or am I wrong?!


No. You can select the pool in the schedule resource.

http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00145
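(For example, a single Job can rotate through per-month pools entirely inside its Schedule. A sketch only; the pool names and run times are assumptions, and the remaining months are elided:)

```conf
Schedule {
  Name = "MonthlyFullCycle"
  # Each Run line overrides the Job's default pool for that month's full.
  Run = Level=Full Pool=Monthly-Jan jan 1st sun at 23:05
  Run = Level=Full Pool=Monthly-Feb feb 1st sun at 23:05
  # ... one Run line per month, each pointing at that month's pool ...
}
```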

John
