Re: [Bacula-users] tuning lto-4

2011-12-20 Thread Marcello Romani
On 19/12/2011 17:32, gary artim wrote:
 Thanks for the advice, _most_ responsive list I belong to! cheers! gary
 
 2011/12/19 Radosław Korzeniewski rados...@korzeniewski.net:
 Hello,

 2011/12/16 gary artim gar...@gmail.com

 No, just Spool Attributes = yes. g.


 A direct backup from the FD into an SD tape device is performed with little
 buffering and requires more computation, disk seeks and context switching. It
 is a single, complicated chain with two threads, one for the FD and one for
 the SD, connected through a logical network layer (even when a full backup is
 performed on the local machine). With this chain it is very hard to reach full
 LTO speed. In contrast, when you perform SD data spooling you can easily
 achieve full LTO speed, because the write chain (despooling) in the SD is very
 simple and effective. Unfortunately, data spooling itself takes a long time and
 slows down the overall backup speed.

 best regards
 --
 Radosław Korzeniewski
 rados...@korzeniewski.net

Would be curious to know if data spooling made any difference...
Cheers!

-- 
Marcello Romani



Re: [Bacula-users] tuning lto-4

2011-12-20 Thread Radosław Korzeniewski
Hello,

On 20 December 2011 at 10:30, Marcello Romani
mrom...@ottotecnica.com wrote:

  A direct backup from the FD into an SD tape device is performed with
  little buffering and requires more computation, disk seeks and context
  switching. It is a single, complicated chain with two threads, one for the
  FD and one for the SD, connected through a logical network layer (even when
  a full backup is performed on the local machine). With this chain it is very
  hard to reach full LTO speed. In contrast, when you perform SD data spooling
  you can easily achieve full LTO speed, because the write chain (despooling)
  in the SD is very simple and effective. Unfortunately, data spooling itself
  takes a long time and slows down the overall backup speed.

 Would be curious to know if data spooling made any difference...
 Cheers!


In my reference system I get an average backup speed of 30-40MB/s when I
run a backup job with an FD-SD-LTO chain (without spooling) on the local
machine (a full backup of a few very large files), and I get an 80MB/s
despooling speed on the same machine and LTO drive.

best regards

-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] tuning lto-4

2011-12-19 Thread Radosław Korzeniewski
Hello,

2011/12/16 gary artim gar...@gmail.com

 No, just Spool Attributes = yes. g.


A direct backup from the FD into an SD tape device is performed with little
buffering and requires more computation, disk seeks and context switching.
It is a single, complicated chain with two threads, one for the FD and one
for the SD, connected through a logical network layer (even when a full
backup is performed on the local machine). With this chain it is very hard
to reach full LTO speed. In contrast, when you perform SD data spooling you
can easily achieve full LTO speed, because the write chain (despooling) in
the SD is very simple and effective. Unfortunately, data spooling itself
takes a long time and slows down the overall backup speed.
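
For example, in the Job resource this is a single directive (a minimal
sketch; the job name matches the reports in this thread, everything else
is omitted):

Job {
  Name = "Prodbackup"
  ...
  Spool Data = yes         # write to a disk spool area first, then despool to tape
  Spool Attributes = yes   # enabled automatically when Spool Data = yes
}

The SD's tape Device resource then needs a Spool Directory on a fast disk
with enough free space to hold a job.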

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] tuning lto-4

2011-12-19 Thread gary artim
Thanks for the advice, _most_ responsive list I belong to! cheers! gary

2011/12/19 Radosław Korzeniewski rados...@korzeniewski.net:
 Hello,

 2011/12/16 gary artim gar...@gmail.com

 No, just Spool Attributes = yes. g.


 A direct backup from the FD into an SD tape device is performed with little
 buffering and requires more computation, disk seeks and context switching. It
 is a single, complicated chain with two threads, one for the FD and one for
 the SD, connected through a logical network layer (even when a full backup is
 performed on the local machine). With this chain it is very hard to reach full
 LTO speed. In contrast, when you perform SD data spooling you can easily
 achieve full LTO speed, because the write chain (despooling) in the SD is very
 simple and effective. Unfortunately, data spooling itself takes a long time and
 slows down the overall backup speed.

 best regards
 --
 Radosław Korzeniewski
 rados...@korzeniewski.net



Re: [Bacula-users] tuning lto-4

2011-12-16 Thread Uwe Schuerkamp
On Thu, Dec 15, 2011 at 10:25:04AM -0800, gary artim wrote:
 no, full, doing a mod on run command in bconsole to force full backup
 on every test. cheers
 

40MB/sec sounds very much like a natural RAID speed limit you're
hitting. Have you tried running some I/O benchmarks on the disks, like
bonnie++ or dd?
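
Something along these lines would do as a quick check (the device and
directory names below are only placeholders for your RAID array):

  # raw sequential read from the array, bypassing the page cache
  dd if=/dev/md0 of=/dev/null bs=1M count=20000 iflag=direct

  # or a fuller filesystem-level benchmark
  bonnie++ -d /data/benchtest -u root

If the array can't sustain well over 100MB/s of sequential reads here, the
LTO-4 drive will never see its full rate no matter how the SD is tuned.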

Cheers, Uwe 

-- 
NIONEX --- A Bertelsmann AG company





Re: [Bacula-users] tuning lto-4

2011-12-16 Thread Marcello Romani
On 15/12/2011 20:31, gary artim wrote:
 will do, interestingly I picked another slot on my autochanger
 (different lto vol) and got (see below), running a peak time no less.
 Last nite I ran a 9pm, the hours of the dead, and was getting about
 2.4gb min, today 2.66gb min. -- go figure...
 
 15-Dec 10:58 bacula-dir JobId 1: Bacula bacula-dir 5.0.3 (04Aug10):
 15-Dec-2011 10:58:11
   Build OS:   x86_64-redhat-linux-gnu redhat
   JobId:  1
   Job:Prodbackup.2011-12-15_09.20.55_03
   Backup Level:   Full
   Client: bacula-fd 5.0.3 (04Aug10)
 x86_64-redhat-linux-gnu,redhat,
   FileSet:FileSetProd 2011-12-15 09:20:55
   Pool:   FullProd (From Job FullPool override)
   Catalog:MyCatalog (From Client resource)
   Storage:LTO-4 (From Job resource)
   Scheduled time: 15-Dec-2011 09:20:47
   Start time: 15-Dec-2011 09:20:58
   End time:   15-Dec-2011 10:58:11
   Elapsed time:   1 hour 37 mins 13 secs
   Priority:   10
   FD Files Written:   35,588
   SD Files Written:   35,588
   FD Bytes Written:   257,543,473,820 (257.5 GB)
   SD Bytes Written:   257,548,885,611 (257.5 GB)
   Rate:   44152.8 KB/s
   Software Compression:   None
   VSS:no
   Encryption: no
   Accurate:   no
   Volume name(s): fufu01
   Volume Session Id:  1
   Volume Session Time:1323969555
   Last Volume Bytes:  257,601,203,200 (257.6 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  OK
   SD termination status:  OK
   Termination:Backup OK
 
 
 
 On Thu, Dec 15, 2011 at 11:25 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 2:24 PM, gary artim gar...@gmail.com wrote:
 no, seperate drive. g.


 Try enabling attribute spooling. Like the other poster said.

 John
 

Was that job run supposed to use data spooling? I ask because I have
SpoolData = yes in job definitions (and that automatically enables
SpoolAttributes = yes, btw), and I get messages like this:

15-dic 21:47 serverlinux-dir JobId 7965: Start Backup JobId 7965,
Job=FileServerTape.2011-12-15_21.15.00_18
15-dic 21:47 serverlinux-dir JobId 7965: Using Device TapeStorage
15-dic 21:47 serverlinux-dir JobId 7965: FD compression disabled for
this Job because AllowCompress=No in Storage resource.
15-dic 21:48 serverlinux-sd JobId 7965: Recycled volume ThursdayTape
on device TapeStorage (/dev/nst0), all previous data lost.
15-dic 21:48 serverlinux-sd JobId 7965: Spooling data ...
15-dic 23:10 serverlinux-sd JobId 7965: Job write elapsed time =
01:22:17, Transfer rate = 8.879 M Bytes/second
15-dic 23:10 serverlinux-sd JobId 7965: Committing spooled data to
Volume ThursdayTape. Despooling 43,882,656,406 bytes ...
15-dic 23:52 serverlinux-sd JobId 7965: Despooling elapsed time =
00:42:04, Transfer rate = 17.38 M Bytes/second
15-dic 23:54 serverlinux-sd JobId 7965: Sending spooled attrs to the
Director. Despooling 31,284,446 bytes ...
15-dic 23:55 serverlinux-dir JobId 7965: Bacula serverlinux-dir 5.0.2
(28Apr10): 15-dic-2011 23:55:20
  Build OS:   i486-pc-linux-gnu debian 5.0.4
  JobId:  7965
  Job:FileServerTape.2011-12-15_21.15.00_18
  Backup Level:   Full
  Client: fileserver-fd 2.4.4 (28Dec08)
x86_64-pc-linux-gnu,debian,lenny/sid
  FileSet:LinuxDefaultSet 2011-09-15 09:11:26
  Pool:   Tape (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:Tape (From Job resource)
  Scheduled time: 15-dic-2011 21:15:00
  Start time: 15-dic-2011 21:47:57
  End time:   15-dic-2011 23:55:20
  Elapsed time:   2 hours 7 mins 23 secs
  Priority:   11
  FD Files Written:   86,344
  SD Files Written:   86,344
  FD Bytes Written:   43,822,576,114 (43.82 GB)
  SD Bytes Written:   43,839,416,416 (43.83 GB)
  Rate:   5733.7 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): ThursdayTape
  Volume Session Id:  242
  Volume Session Time:1323245911
  Last Volume Bytes:  60,996,096,000 (60.99 GB)
  Non-fatal FD errors:   

Re: [Bacula-users] tuning lto-4

2011-12-16 Thread gary artim
I'm working on going to SATA 3; the current RAID is SATA 1 (1.5 Gb/s). I
have run benchmarks before, but have to dig them up or rerun them... g.

On Fri, Dec 16, 2011 at 1:18 AM, Uwe Schuerkamp
uwe.schuerk...@nionex.net wrote:
 On Thu, Dec 15, 2011 at 10:25:04AM -0800, gary artim wrote:
 no, full, doing a mod on run command in bconsole to force full backup
 on every test. cheers


 40MB/sec sounds very much like a natural RAID speed limit you're
 hitting. Have you tried running some I/O benchmarks on the disks, like
 bonnie++ or dd?

 Cheers, Uwe

 --
 NIONEX --- A Bertelsmann AG company





Re: [Bacula-users] tuning lto-4

2011-12-16 Thread gary artim
No, just Spool Attributes = yes. g.

On Fri, Dec 16, 2011 at 3:27 AM, Marcello Romani
mrom...@ottotecnica.com wrote:
 On 15/12/2011 20:31, gary artim wrote:
 will do, interestingly I picked another slot on my autochanger
 (different lto vol) and got (see below), running a peak time no less.
 Last nite I ran a 9pm, the hours of the dead, and was getting about
 2.4gb min, today 2.66gb min. -- go figure...

 15-Dec 10:58 bacula-dir JobId 1: Bacula bacula-dir 5.0.3 (04Aug10):
 15-Dec-2011 10:58:11
   Build OS:               x86_64-redhat-linux-gnu redhat
   JobId:                  1
   Job:                    Prodbackup.2011-12-15_09.20.55_03
   Backup Level:           Full
   Client:                 bacula-fd 5.0.3 (04Aug10)
 x86_64-redhat-linux-gnu,redhat,
   FileSet:                FileSetProd 2011-12-15 09:20:55
   Pool:                   FullProd (From Job FullPool override)
   Catalog:                MyCatalog (From Client resource)
   Storage:                LTO-4 (From Job resource)
   Scheduled time:         15-Dec-2011 09:20:47
   Start time:             15-Dec-2011 09:20:58
   End time:               15-Dec-2011 10:58:11
   Elapsed time:           1 hour 37 mins 13 secs
   Priority:               10
   FD Files Written:       35,588
   SD Files Written:       35,588
   FD Bytes Written:       257,543,473,820 (257.5 GB)
   SD Bytes Written:       257,548,885,611 (257.5 GB)
   Rate:                   44152.8 KB/s
   Software Compression:   None
   VSS:                    no
   Encryption:             no
   Accurate:               no
   Volume name(s):         fufu01
   Volume Session Id:      1
   Volume Session Time:    1323969555
   Last Volume Bytes:      257,601,203,200 (257.6 GB)
   Non-fatal FD errors:    0
   SD Errors:              0
   FD termination status:  OK
   SD termination status:  OK
   Termination:            Backup OK



 On Thu, Dec 15, 2011 at 11:25 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 2:24 PM, gary artim gar...@gmail.com wrote:
 no, seperate drive. g.


 Try enabling attribute spooling. Like the other poster said.

 John


 Was that job run supposed to use data spooling ? I ask because I have
 SpoolData = yes in job definitions (and that automatically enables
 SpoolAttributes = yes, btw), and I get messages like this:

 15-dic 21:47 serverlinux-dir JobId 7965: Start Backup JobId 7965,
 Job=FileServerTape.2011-12-15_21.15.00_18
 15-dic 21:47 serverlinux-dir JobId 7965: Using Device TapeStorage
 15-dic 21:47 serverlinux-dir JobId 7965: FD compression disabled for
 this Job because AllowCompress=No in Storage resource.
 15-dic 21:48 serverlinux-sd JobId 7965: Recycled volume ThursdayTape
 on device TapeStorage (/dev/nst0), all previous data lost.
 15-dic 21:48 serverlinux-sd JobId 7965: Spooling data ...
 15-dic 23:10 serverlinux-sd JobId 7965: Job write elapsed time =
 01:22:17, Transfer rate = 8.879 M Bytes/second
 15-dic 23:10 serverlinux-sd JobId 7965: Committing spooled data to
 Volume ThursdayTape. Despooling 43,882,656,406 bytes ...
 15-dic 23:52 serverlinux-sd JobId 7965: Despooling elapsed time =
 00:42:04, Transfer rate = 17.38 M Bytes/second
 15-dic 23:54 serverlinux-sd JobId 7965: Sending spooled attrs to the
 Director. Despooling 31,284,446 bytes ...
 15-dic 23:55 serverlinux-dir JobId 7965: Bacula serverlinux-dir 5.0.2
 (28Apr10): 15-dic-2011 23:55:20
  Build OS:               i486-pc-linux-gnu debian 5.0.4
  JobId:                  7965
  Job:                    FileServerTape.2011-12-15_21.15.00_18
  Backup Level:           Full
  Client:                 fileserver-fd 2.4.4 (28Dec08)
 x86_64-pc-linux-gnu,debian,lenny/sid
  FileSet:                LinuxDefaultSet 2011-09-15 09:11:26
  Pool:                   Tape (From Job resource)
  Catalog:                MyCatalog (From Client resource)
  Storage:                Tape (From Job resource)
  Scheduled time:         15-dic-2011 21:15:00
  Start time:             15-dic-2011 21:47:57
  End time:               15-dic-2011 23:55:20
  Elapsed time:           2 hours 7 mins 23 secs
  Priority:               11
  FD Files Written:       86,344
  SD Files Written:       86,344
  FD Bytes Written:       43,822,576,114 (43.82 GB)
  SD Bytes Written:       43,839,416,416 (43.83 GB)
  Rate:                   5733.7 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):         ThursdayTape
  Volume 

Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
Using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
not working with network backups; this is just a straight RAID 5 to
LTO-4. I'm now thinking that my db (mysql) or RAID is the
drag/slowdown, since I can get over 180MB/s with btape. Any suggestions
welcome; I feel I've exhausted trying every block size and file size.
Much appreciated:

Device {
 Name = LTO-4
 Media Type = LTO-4
 Archive Device = /dev/nst0
 AutomaticMount = yes;   # when device opened, read it
 AlwaysOpen = yes;
 RemovableMedia = yes;
 RandomAccess = no;
 Maximum Block Size = 2M   # ( about 2.4GB/minute/with 12GB Max File Size  )
 Maximum File Size = 12GB
 Autochanger = yes
 Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
 Alert Command = sh -c 'smartctl -H -l error %c'
}


On Fri, Dec 2, 2011 at 2:12 PM, gary artim gar...@gmail.com wrote:
 180 MBs, 256MB min/max blocksize.

 [root@genepi1 bacula]# tapeinfo -f /dev/nst0
 Product Type: Tape Drive
 Vendor ID: 'HP      '
 Product ID: 'Ultrium 4-SCSI  '
 Revision: 'B12H'
 Attached Changer API: No
 SerialNumber: 'HU17450M8L'
 MinBlock: 1
 MaxBlock: 16777215
 SCSI ID: 1
 SCSI LUN: 0
 Ready: yes
 BufferedMode: yes
 Medium Type: Not Loaded
 Density Code: 0x46
 BlockSize: 0
 DataCompEnabled: yes
 DataCompCapable: yes
 DataDeCompEnabled: yes
 CompType: 0x1
 DeCompType: 0x1
 Block Position: 471909
 Partition 0 Remaining Kbytes: 799204
 Partition 0 Size in Kbytes: 799204
 ActivePartition: 0
 EarlyWarningSize: 0
 NumPartitions: 0
 MaxPartitions: 0
 [root@genepi1 bacula]#  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:284 Using device: /dev/nst0 for writing.
 02-Dec 13:07 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 
 command.
 02-Dec 13:07 btape JobId 0: 3302 Autochanger loaded? drive 0, result
 is Slot 12.
 btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
 *speed file_size=20 nb_file=10 skip_raw
 btape: btape.c:1082 Test with zero data and bacula block structure.
 btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
 524288 bytes.
 +
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 177.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
 btape: btape.c:384 Total Volume bytes=214.7 GB. Total Write rate = 179.5 MB/s

 btape: btape.c:1094 Test with random data, should give the minimum throughput.
 btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
 524288 bytes.
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.47 MB/s
 ++
 btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.51 MB/s
 ++


 On Wed, Nov 30, 2011 at 8:02 AM, gary artim gar...@gmail.com wrote:
 Hi --

 Getting about 41.6/MBs and hoping for closer to the 

Re: [Bacula-users] tuning lto-4

2011-12-15 Thread John Drescher
On Thu, Dec 15, 2011 at 1:09 PM, gary artim gar...@gmail.com wrote:
 using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
 not working with network backups, this is just a straight raid 5 to
 lto-4. I'm now thinking that my db (mysql) or raid is the
 drag/slowdown since I can get over 180MBs with btape. Any suggestions
 welcomes, I feel I've exhausted trying every blocksize, file size,
 much appreciated:


Are you talking about full backups? Remember that incremental and
differential backups will have low rates because they generally spend more
time searching for files to back up than actually backing them up; the job
still completes in far less time than a Full, but the reported rate will
be low.

John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
no, full, doing a mod on run command in bconsole to force full backup
on every test. cheers

On Thu, Dec 15, 2011 at 10:21 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 1:09 PM, gary artim gar...@gmail.com wrote:
 using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
 not working with network backups, this is just a straight raid 5 to
 lto-4. I'm now thinking that my db (mysql) or raid is the
 drag/slowdown since I can get over 180MBs with btape. Any suggestions
 welcomes, I feel I've exhausted trying every blocksize, file size,
 much appreciated:


 Are you talking about full backups? Remember that incremental and
 differential backups will have low rates because they generally spend more
 time searching for files to back up than actually backing them up; the job
 still completes in far less time than a Full, but the reported rate will
 be low.

 John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread John Drescher
On Thu, Dec 15, 2011 at 1:25 PM, gary artim gar...@gmail.com wrote:
 no, full, doing a mod on run command in bconsole to force full backup
 on every test. cheers


Is the bacula database on the same array as the source?

John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread Fahrer, Julian
 -Original Message-
 From: gary artim [mailto:gar...@gmail.com]
 Sent: Thursday, 15 December 2011 19:09
 To: bacula-users@lists.sourceforge.net
 Cc: Gary Artim
 Subject: Re: [Bacula-users] tuning lto-4
 
 using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
 not working with network backups, this is just a straight raid 5 to lto-
 4. I'm now thinking that my db (mysql) or raid is the drag/slowdown
 since I can get over 180MBs with btape. Any suggestions welcomes, I feel
 I've exhausted trying every blocksize, file size, much appreciated:
 
 Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum Block Size = 2M   # ( about 2.4GB/minute/with 12GB Max File
 Size  )
  Maximum File Size = 12GB
  Autochanger = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
  Alert Command = sh -c 'smartctl -H -l error %c'
 }
 

Try setting Spool Attributes = yes in your job resource to check whether your 
db is the bottleneck.
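
For example (a sketch -- only the directive matters, the job name is a
placeholder):

Job {
  Name = "Prodbackup"
  ...
  Spool Attributes = yes   # batch the file attributes and send them to the
                           # Director/catalog at the end of the job instead
                           # of row by row while the tape is writing
}

If the backup rate improves with this set, the catalog (mysql) inserts are
what is throttling the tape.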



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
No, separate drive. g.

On Thu, Dec 15, 2011 at 10:26 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 1:25 PM, gary artim gar...@gmail.com wrote:
 no, full, doing a mod on run command in bconsole to force full backup
 on every test. cheers


 Is the bacula database on the same array as the source?

 John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread John Drescher
On Thu, Dec 15, 2011 at 2:24 PM, gary artim gar...@gmail.com wrote:
 no, seperate drive. g.


Try enabling attribute spooling. Like the other poster said.

John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
No, separate drive.

On Thu, Dec 15, 2011 at 10:26 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 1:25 PM, gary artim gar...@gmail.com wrote:
 no, full, doing a mod on run command in bconsole to force full backup
 on every test. cheers


 Is the bacula database on the same array as the source?

 John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
Will do. Interestingly, I picked another slot on my autochanger
(different LTO vol) and got the results below, running at peak time no less.
Last night I ran at 9pm, the hours of the dead, and was getting about
2.4 GB/min; today 2.66 GB/min -- go figure...

15-Dec 10:58 bacula-dir JobId 1: Bacula bacula-dir 5.0.3 (04Aug10):
15-Dec-2011 10:58:11
  Build OS:   x86_64-redhat-linux-gnu redhat
  JobId:  1
  Job:Prodbackup.2011-12-15_09.20.55_03
  Backup Level:   Full
  Client: bacula-fd 5.0.3 (04Aug10)
x86_64-redhat-linux-gnu,redhat,
  FileSet:FileSetProd 2011-12-15 09:20:55
  Pool:   FullProd (From Job FullPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:LTO-4 (From Job resource)
  Scheduled time: 15-Dec-2011 09:20:47
  Start time: 15-Dec-2011 09:20:58
  End time:   15-Dec-2011 10:58:11
  Elapsed time:   1 hour 37 mins 13 secs
  Priority:   10
  FD Files Written:   35,588
  SD Files Written:   35,588
  FD Bytes Written:   257,543,473,820 (257.5 GB)
  SD Bytes Written:   257,548,885,611 (257.5 GB)
  Rate:   44152.8 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): fufu01
  Volume Session Id:  1
  Volume Session Time:1323969555
  Last Volume Bytes:  257,601,203,200 (257.6 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK



On Thu, Dec 15, 2011 at 11:25 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 2:24 PM, gary artim gar...@gmail.com wrote:
 no, seperate drive. g.


 Try enabling attribute spooling. Like the other poster said.

 John



Re: [Bacula-users] tuning lto-4

2011-12-15 Thread gary artim
Interesting... I followed the job and got a big increase in writing the
tape using Spool Attributes = yes: it went from 2.66 GB/minute to
3.8 GB/minute, but the job took longer to finish, spending close to 50
minutes writing out attribute data to mysql. Looks like putting the
sql on an ssd would help. Nice option for testing, thanks.

2:04 with Spool Attributes = yes   3.8 GB/minute, with system busy
1:37 with Spool Attributes = no    2.66 GB/minute


On Thu, Dec 15, 2011 at 11:31 AM, gary artim gar...@gmail.com wrote:
 will do, interestingly I picked another slot on my autochanger
 (different lto vol) and got (see below), running a peak time no less.


 Last nite I ran a 9pm, the hours of the dead, and was getting about
 2.4gb min, today 2.66gb min. -- go figure...

 15-Dec 10:58 bacula-dir JobId 1: Bacula bacula-dir 5.0.3 (04Aug10):
 15-Dec-2011 10:58:11
  Build OS:               x86_64-redhat-linux-gnu redhat
  JobId:                  1
  Job:                    Prodbackup.2011-12-15_09.20.55_03
  Backup Level:           Full
  Client:                 bacula-fd 5.0.3 (04Aug10)
 x86_64-redhat-linux-gnu,redhat,
  FileSet:                FileSetProd 2011-12-15 09:20:55
  Pool:                   FullProd (From Job FullPool override)
  Catalog:                MyCatalog (From Client resource)
  Storage:                LTO-4 (From Job resource)
  Scheduled time:         15-Dec-2011 09:20:47
  Start time:             15-Dec-2011 09:20:58
  End time:               15-Dec-2011 10:58:11
  Elapsed time:           1 hour 37 mins 13 secs
  Priority:               10
  FD Files Written:       35,588
  SD Files Written:       35,588
  FD Bytes Written:       257,543,473,820 (257.5 GB)
  SD Bytes Written:       257,548,885,611 (257.5 GB)
  Rate:                   44152.8 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):         fufu01
  Volume Session Id:      1
  Volume Session Time:    1323969555
  Last Volume Bytes:      257,601,203,200 (257.6 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK



 On Thu, Dec 15, 2011 at 11:25 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 2:24 PM, gary artim gar...@gmail.com wrote:
 no, seperate drive. g.


 Try enabling attribute spooling. Like the other poster said.

 John



Re: [Bacula-users] tuning lto-4

2011-12-02 Thread Andrea Conti
Hello,

 blocksize set with mt and in bacula-sd.conf

Unless you are setting minimum block size (which you really should
not), Bacula uses the tape drive in variable block size mode, with block
sizes up to the value given in maximum block size.

Setting a fixed block size with mt (and reading it back with tapeinfo)
is irrelevant.

 btape: btape.c:1082 Test with zero data and bacula block structure.
 btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
 65536 bytes.

 btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s

A 64kb max block size is *way* too small for LTO4. You'll need at least
256kB, possibly 512kB or more to achieve full throughput.
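
In bacula-sd.conf terms that would be something like the following (a
sketch -- test the value with btape first, and note that volumes already
written with a different block size may not be readable after the change):

Device {
  ...
  Maximum Block Size = 512K   # larger tape blocks, less per-block overhead
  Maximum File Size = 5GB     # a file mark every few GB keeps restores seekable
}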

When diagnosing throughput problems, the tests with raw block structure
are more representative of the actual performance of the tape drive,
although the difference will not be that much.

What are you getting in the random data tests? With ~112MB/s for zeroes,
you're still being severely limited by something (most likely block
size). LTO-4 is rated for 120MB/s *to tape*, so you should be aiming for
110MB/s with random data and 250-300MB/s with zeroes (the latter being
dependent on the compression engine maximum bandwidth).

If you can't achieve that with any maximum block size, your hardware is
probably inadequate for the task.

andrea



Re: [Bacula-users] tuning lto-4

2011-12-02 Thread gary artim
180 MBs, 256MB min/max blocksize.

[root@genepi1 bacula]# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'B12H'
Attached Changer API: No
SerialNumber: 'HU17450M8L'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
Block Position: 471909
Partition 0 Remaining Kbytes: 799204
Partition 0 Size in Kbytes: 799204
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0
[root@genepi1 bacula]#  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: /dev/nst0 for writing.
02-Dec 13:07 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
02-Dec 13:07 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 12.
btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
524288 bytes.
+
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 177.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
btape: btape.c:384 Total Volume bytes=214.7 GB. Total Write rate = 179.5 MB/s

btape: btape.c:1094 Test with random data, should give the minimum throughput.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
524288 bytes.
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.47 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.51 MB/s
++


On Wed, Nov 30, 2011 at 8:02 AM, gary artim gar...@gmail.com wrote:
 Hi --

 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8, and 12GB -- 12GB was the best; the others
 were about 35 MB/s. Any advice welcomed... should I look at max/min block
 sizes? Most of the data is big genetics data -- file sizes average 500 MB
 to 3-4 GB -- and I'm looking at growth from 4TB to 15TB in the next 2 years.

 run results and bacula-sd.conf and bacula-dir.conf below...

 thanks
 -- gary

 Run:
 ===

  Build OS:               x86_64-redhat-linux-gnu redhat
  JobId:                  5
  Job:                    Prodbackup.2011-11-29_19.32.42_05
  Backup Level:           Full
  Client:                 bacula-fd 5.0.3 (04Aug10)
 x86_64-redhat-linux-gnu,redhat,
  FileSet:                FileSetProd 2011-11-29 19:32:42
  Pool:                   FullProd (From Job FullPool override)
  Catalog:                MyCatalog (From Client resource)
  Storage:                LTO-4 (From Job resource)
  Scheduled time:         29-Nov-2011 

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
Thanks much! Will try testing with btape. BTW, I ran with a 20GB maximum
file size / 2MB max block (see bacula-sd.conf below) and got these
results: a 20 MB/s increase, ran 20 minutes faster, got 50 MB/s -- now if I
can just double the speed I could back up 15TB in about 45 hrs. I don't
have that much data yet, but I'm hovering at 2TB and looking to expand
sharply over time. I'm not doing any networking; it's just straight from
a RAID 5 to an autochanger/LTO-4. gary

  Build OS:   x86_64-redhat-linux-gnu redhat
  JobId:  6
  Job:Prodbackup.2011-11-30_18.49.24_06
  Backup Level:   Full
  Client: bacula-fd 5.0.3 (04Aug10)
x86_64-redhat-linux-gnu,redhat,
  FileSet:FileSetProd 2011-11-30 15:23:58
  Pool:   FullProd (From Job FullPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:LTO-4 (From Job resource)
  Scheduled time: 30-Nov-2011 18:49:15
  Start time: 30-Nov-2011 18:49:26
  End time:   30-Nov-2011 20:14:56
  Elapsed time:   1 hour 25 mins 30 secs
  Priority:   10
  FD Files Written:   35,588
  SD Files Written:   35,588
  FD Bytes Written:   257,543,092,723 (257.5 GB)
  SD Bytes Written:   257,548,504,514 (257.5 GB)
  Rate:   50203.3 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): f2
  Volume Session Id:  2
  Volume Session Time:1322707293
  Last Volume Bytes:  257,600,822,272 (257.6 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

bacula-sd.conf:
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  #Maximum File Size = 12GB
  Maximum File Size = 20GB
  #Maximum Network Buffer Size = 65536
  Maximum block size = 2M
  #Spool Directory = /db/bacula/spool/LTO4
  #Maximum Spool Size = 200G
  #Maximum Job Spool Size = 150G
  Autochanger = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
  Alert Command = sh -c 'smartctl -H -l error %c'
}



On Wed, Nov 30, 2011 at 11:48 PM, Andrea Conti a...@alyf.net wrote:
 On 30/11/11 19.43, gary artim wrote:
 Thanks much, I'll try today the block size change first. Then try the
 spooling. Dont have any unused disk, but may have to try on a shared
 drive.
 The maximum file size should be okay? g.

 Choosing a max file size is mainly a tradeoff between write performance
 (as the drive will stop and restart at the end of each file to write an
 EOF mark) and restore performance (as the drive can only seek to a file
 mark and then sequentially read through the file until the relevant data
 blocks are found).

 I usually set maximum file size so that there are 2-3 filemarks per tape
 wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
 restores, or if you always restore the whole contents of a volume, 12GB
 is fine.

 Anyway, with the figures you're citing your problem is *not* maximum
 file size.

 Try to assess tape performance alone with btape test (which has a
 speed command); you can try different block sizes and configuration
 and see which one gives the best results.

 Doing so will give you a clear indication on whether your bottleneck is
 in tape or disk throughput.

 andrea





Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Alan Brown
gary artim wrote:
 thank much! will try testing with btape. 

Please let us know the results

 btw, I ran with 20GB maximum
 file size/2MB max block (see bacula-sd.conf below) and got these
 results, 20MB/s increase, ran 20 minutes faster, got 50MBs -- 

You should be seeing 120MB/s or thereabouts.

If you're spooling/despooling then you'll see lower overall speeds of 
course. What counts is the despooling speed.

How much ram have you got and what are you using to connect the LTO4 
drives up?







Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
You guys/gals are great, very responsive! I did try
spooling/despooling and my run times shot up. I was using a simple
7200 rpm drive though, no ssd or raid... I assume the performance gain
happens when you're networking multiple machines. Wearing multiple hats,
so I will report back on btape next week, unless I get some time sooner. gary

On Thu, Dec 1, 2011 at 8:00 AM, Alan Brown a...@mssl.ucl.ac.uk wrote:
 gary artim wrote:

 thank much! will try testing with btape.

 Please let us know the results

 btw, I ran with 20GB maximum
 file size/2MB max block (see bacula-sd.conf below) and got these
 results, 20MB/s increase, ran 20 minutes faster, got 50MBs --

 You should be seeing 120Mb/s or thereabouts.

 If you're spooling/despooling then you'll see lower overall speeds of
 course. What counts is the despooling speed.

 How much ram have you got and what are you using to connect the LTO4 drives
 up?








Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Brian Debelius
I believe (it's been a while since I have needed to change my
configuration) that my LTO-3 drive does not do hardware compression on
blocks over 512K. I am using 256K blocks right now, and I did not see
any improvement above that. I am using spooling on a pair of striped
hard disks, and despooling happens at 65-80MB/s.
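
For what it's worth, such a stripe can be put together with mdadm (device
names, filesystem and mount point below are only examples):

  # two spare disks striped into one fast spool volume
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
  mkfs.ext4 /dev/md1
  mount /dev/md1 /var/spool/bacula

and then referenced by Spool Directory in the tape Device resource.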


On 12/1/2011 10:50 AM, gary artim wrote:
 thank much! will try testing with btape. btw, I ran with 20GB maximum
 file size/2MB max block (see bacula-sd.conf below) and got these
 results, 20MB/s increase, ran 20 minutes faster, got 50MBs -- now if I
 can just double the speed I could backup 15TB in about 45/hrs. I don't
 have that much data yet, but I'm hovering at 2TB and looking to expand
 sharply over time. I'm not doing any networking, it just straight from
 a raid 5 to a autochanger/lto-4. gary

Build OS:   x86_64-redhat-linux-gnu redhat
JobId:  6
Job:Prodbackup.2011-11-30_18.49.24_06
Backup Level:   Full
Client: bacula-fd 5.0.3 (04Aug10)
 x86_64-redhat-linux-gnu,redhat,
FileSet:FileSetProd 2011-11-30 15:23:58
Pool:   FullProd (From Job FullPool override)
Catalog:MyCatalog (From Client resource)
Storage:LTO-4 (From Job resource)
Scheduled time: 30-Nov-2011 18:49:15
Start time: 30-Nov-2011 18:49:26
End time:   30-Nov-2011 20:14:56
Elapsed time:   1 hour 25 mins 30 secs
Priority:   10
FD Files Written:35,588
SD Files Written:35,588
FD Bytes Written:257,543,092,723 (257.5 GB)
SD Bytes Written:257,548,504,514 (257.5 GB)
Rate:   50203.3 KB/s
Software Compression:   None
VSS:no
Encryption: no
Accurate:   no
Volume name(s): f2
Volume Session Id:   2
Volume Session Time:1322707293
Last Volume Bytes:   257,600,822,272 (257.6 GB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup OK

 bacula-sd.conf:
 Device {
Name = LTO-4
Media Type = LTO-4
Archive Device = /dev/nst0
AutomaticMount = yes;   # when device opened, read it
AlwaysOpen = yes;
RemovableMedia = yes;
RandomAccess = no;
#Maximum File Size = 12GB
Maximum File Size = 20GB
#Maximum Network Buffer Size = 65536
Maximum block size = 2M
#Spool Directory = /db/bacula/spool/LTO4
#Maximum Spool Size = 200G
#Maximum Job Spool Size = 150G
Autochanger = yes
Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
Alert Command = sh -c 'smartctl -H -l error %c'
 }



 On Wed, Nov 30, 2011 at 11:48 PM, Andrea Contia...@alyf.net  wrote:
 On 30/11/11 19.43, gary artim wrote:
 Thanks much, I'll try today the block size change first. Then try the
 spooling. Dont have any unused disk, but may have to try on a shared
 drive.
 The maximum file size should be okay? g.
 Choosing a max file size is mainly a tradeoff between write performance
 (as the drive will stop and restart at the end of each file to write an
 EOF mark) and restore performance (as the drive can only seek to a file
 mark and then sequentially read through the file until the relevant data
 blocks are found).

 I usually set maximum file size so that there are 2-3 filemarks per tape
 wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
 restores, or if you always restore the whole contents of a volume, 12GB
 is fine.

 Anyway, with the figures you're citing your problem is *not* maximum
 file size.

 Try to assess tape performance alone with btape test (which has a
 speed command); you can try different block sizes and configuration
 and see which one gives the best results.

 Doing so will give you a clear indication on whether your bottleneck is
 in tape or disk throughput.

 andrea


 

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Alan Brown
gary artim wrote:
 You guys/gals are great, very responsive! I did try
 spooling/despooling and my run times shot up.

They will - you're copying everything twice (disk to disk to tape), but
this is the only way to achieve fast despooling speeds. If you don't do
this then your LTO drive will start to shoe-shine, and speeds drop off
rapidly when that happens.

The trick is to run multiple jobs at once - you have to spool to achieve 
this anyway or extracting will be a nightmare.

Spooling is a net gain when you're running incrementals.

Spooling MUST happen on a fast dedicated drive. You're best off dropping
in a fast SSD such as a 64/128GB OCZ Vertex 3 or similar to handle it.
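
Concurrency is configured in both daemons; roughly (the numbers are only
examples, and the effective limit is the smallest value across the
resources involved):

  # bacula-dir.conf: Director and Storage resources
  Maximum Concurrent Jobs = 4

  # bacula-sd.conf: Storage resource and the tape Device resource
  Maximum Concurrent Jobs = 4

With Spool Data = yes on the jobs, several jobs can spool to the SSD while
one of them despools to the LTO drive.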

 I was using a simple
 7200 drive though, no ssd or raid...I assume the performance gain
 happens when your networks multi machines...wearing multiple hats so
 will report back on btape next week, unless I get some time. gary

Even on a single host, if the heads are thrashing then spooling will 
save time overall. The big advantage is being able to run multiple jobs 
so that several are spooling data at the same time one is despooling.

 On Thu, Dec 1, 2011 at 8:00 AM, Alan Brown a...@mssl.ucl.ac.uk wrote:
 gary artim wrote:
 thank much! will try testing with btape.
 Please let us know the results

 btw, I ran with 20GB maximum
 file size/2MB max block (see bacula-sd.conf below) and got these
 results, 20MB/s increase, ran 20 minutes faster, got 50MBs --
 You should be seeing 120Mb/s or thereabouts.

 If you're spooling/despooling then you'll see lower overall speeds of
 course. What counts is the despooling speed.

 How much ram have you got and what are you using to connect the LTO4 drives
 up?





 






Re: [Bacula-users] tuning lto-4

2011-12-01 Thread mark . bergman
In the message dated: Thu, 01 Dec 2011 16:27:33 GMT,
The pithy ruminations from Alan Brown on 
Re: [Bacula-users] tuning lto-4 were:
= gary artim wrote:
=  You guys/gals are great, very responsive! I did try
=  spooling/despooling and my run times shot up.
= 
= They will - you're copying everything twice (disk to disk to tape), but 
= this is the only way to achieve fast despooling speeds - if you don't do 
= this then your LTO drive will start to shoe shine and speeds drop off 
= rapidly when it happens.

And you increase wear and tear on the drive and media.

= 
= The trick is to run multiple jobs at once - you have to spool to achieve 
= this anyway or extracting will be a nightmare.
= 
= Spooling is a net gain when you're running incrementals.
= 

Not necessarily. Spooling is a gain if you are measuring the speed
of writing to tape. Spooling may be a net loss for end-to-end (client
machine--spool server--tape drive) speed.

For backup clients where the total volume being backed up is less than the
spool size, there's a very good chance of a performance gain. As
soon as a job requires multiple rounds of spooling and de-spooling,
there's a good chance of a performance loss because bacula stops reading
from the client machine (stops spooling that job) as soon as despooling
begins. Of course, spooling allows you to run multiple jobs in parallel, a
clear win over running them in series.


See:

[1] http://copilotco.com/mail-archives/bacula-devel.2007/msg02642.html
[2] 
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

[3] 
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg49366.html


= Spooling MUST happen on a fast dedicated drive. You're best off dropping 
= in a fast SSD such as a 64/128Gb OCZ vertex3 or similar to handle it.

Hmm...for LTO4 (large spool files are good), you might want more space
than that, particularly if you have multiple clients (multiple spool
files). A more cost-effective option might be several fast drives (10K
or 15K SAS or SCSI) in RAID-0. It doesn't take very many drives in RAID0 to
have an aggregate drive throughput that is greater than the bus interface.
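
In bacula-sd.conf the spool area then looks something like this (a sketch;
the path and sizes are placeholders and should be sized to the jobs being
spooled):

Device {
  ...
  Spool Directory = /spool/bacula
  Maximum Spool Size = 400G       # total spool space shared by all jobs
  Maximum Job Spool Size = 150G   # per-job cap before a despool pass starts
}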

= 
=  I was using a simple
=  7200 drive though, no ssd or raid...I assume the performance gain

Yeah, the sustained read speed from a 7.2k RPM drive is lower than the
possible write speed to an LTO-4 drive:


http://www.seagate.com/www/en-us/support/before_you_buy/speed_considerations

=  happens when your networks multi machines...wearing multiple hats so
=  will report back on btape next week, unless I get some time. gary
= 
= Even on a single host, if the heads are thrashing then spooling will 
= save time overall. The big advantage is being able to run multiple jobs 
= so that several are spooling data at the same time one is despooling.

Absolutely. Spooling is a big win for multiple jobs, and for reducing
wear and tear on the tape drive. It may or may not give a performance
increase for any single backup job.

Mark




Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
btape is getting 89 MB/s, so maybe my disk and sql updating are affecting
the speed? Note the drive reports a 16384-byte blocksize; I ran tapeinfo on
the drive... gary

[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: /dev/nst0 for writing.
01-Dec 12:29 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
01-Dec 12:29 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 12.
btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
*speed file_size=3 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 3 files of 3.221 GB with blocks of
2097152 bytes.
+++4
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
btape: btape.c:384 Total Volume bytes=9.663 GB. Total Write rate = 89.47 MB/s

btape: btape.c:1094 Test with random data, should give the minimum throughput.
btape: btape.c:960 Begin writing 3 files of 3.221 GB with blocks of
2097152 bytes.
+++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 16.02 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 33.90 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 44.12 MB/s
btape: btape.c:384 Total Volume bytes=9.663 GB. Total Write rate = 26.18 MB/s


[root@genepi1 bacula]# tapeinfo
Usage: tapeinfo -f generic-device
[root@genepi1 bacula]# tapeinfo -f /dev/changer
Product Type: Medium Changer
Vendor ID: 'OVERLAND'
Product ID: 'NEO Series  '
Revision: '0504'
Attached Changer API: No
SerialNumber: '2B8145'
SCSI ID: 1
SCSI LUN: 1
Ready: yes
[root@genepi1 bacula]# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'B12H'
Attached Changer API: No
SerialNumber: 'HU17450M8L'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 16384
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: 799204
Partition 0 Size in Kbytes: 799204
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0



On Thu, Dec 1, 2011 at 10:49 AM,  mark.berg...@uphs.upenn.edu wrote:
 In the message dated: Thu, 01 Dec 2011 16:27:33 GMT,
 The pithy ruminations from Alan Brown on
 Re: [Bacula-users] tuning lto-4 were:
 = gary artim wrote:
 =  You guys/gals are great, very responsive! I did try
 =  spooling/despooling and my run times shot up.
 =
 = They will - you're copying everything twice (disk to disk to tape), but
 = this is the only way to achieve fast despooling speeds - if you don't do
 = this then your LTO drive will start to shoe shine and speeds drop off
 = rapidly when it happens.

 And you increase wear & tear on the drive and media.

 =
 = The trick is to run multiple jobs at once - you have to spool to achieve
 = this anyway or extracting will be a nightmare.
 =
 = Spooling is a net gain when you're running incrementals.
 =

 Not necessarily. Spooling is a gain if you are measuring the speed
 of writing to tape. Spooling may be a net loss for end-to-end (client
 machine--spool server--tape drive) speed.

 For backups clients where the total volume being backed up is less than
 the spool size, then there's a very good chance of a performance gain. As
 soon as a job requires multiple rounds of spooling and de-spooling,
 there's a good chance of a performance loss because bacula stops reading
 from the client machine (stops spooling that job) as soon as despooling
 begins. Of course, spooling allows you to run multiple jobs in parallel, a
 clear win over running them in series.


 See:

        [1] http://copilotco.com/mail-archives/bacula-devel.2007/msg02642.html
        [2] 
 http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

        [3] 
 http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg49366.html


 = Spooling MUST happen on a fast dedicated drive. You're best off dropping
 = in a fast SSD such as a 64/128Gb OCZ vertex3 or similar to handle it.

 Hmm...for LTO4 (large spool files are good), you might want more space
 than that, particularly if you have multiple clients (multiple spool
 files). A more cost-effective option might be several fast drives (10K
 or 15K SAS or SCSI) in RAID-0. It doesn't take very many drives in RAID0 to
 have an aggregate drive throughput that is greater than the bus interface.

 =
 =  I was using a simple
 =  7200 drive

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
got close to 120 MB/s using a 64 KB block size and a 20 GB maximum file size
with btape... now to test with real data... gary
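
For reference, the two places the block size gets set for a test like this
(a sketch only; the device path and value simply mirror the run below):

  # hardware block size on the drive, via mt-st
  mt -f /dev/nst0 setblk 65536

  # matching directive in the LTO-4 Device resource of bacula-sd.conf
  Maximum Block Size = 65536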

===
blocksize set with mt and in bacula-sd.conf to == 65536
===

[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: /dev/nst0 for writing.
01-Dec 13:36 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
01-Dec 13:36 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 12.
btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
65536 bytes.
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.2 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s




===
blocksize set with mt and in bacula-sd.conf to == 32768
===
[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: /dev/nst0 for writing.
01-Dec 13:12 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
01-Dec 13:12 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 12.
btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
32768 bytes.
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 94.60 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.44 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.44 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.02 MB/s

On Wed, Nov 30, 2011 at 8:02 AM, gary artim gar...@gmail.com wrote:
 Hi --

 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8, and 12 GB -- 12 GB was the best; the others
 were about 35 MB/s. Any advice welcomed... should I look at max/min
 block sizes?
 Most of the data is big genetics data -- file sizes average 500 MB
 to 3-4 GB -- and we're looking at growth from 4 TB to 15 TB in the next 2 years.

 run results and bacula-sd.conf and bacula-dir.conf below...

 thanks
 -- gary

 Run:
 ===

  Build OS:               x86_64-redhat-linux-gnu redhat
  JobId:                  5
  Job:                    Prodbackup.2011-11-29_19.32.42_05
  Backup Level:           Full
  Client:                 bacula-fd 5.0.3 (04Aug10) x86_64-redhat-linux-gnu,redhat,
  FileSet:                FileSetProd 2011-11-29 19:32:42
  Pool:                   FullProd (From Job FullPool override)
  Catalog:                MyCatalog (From Client resource)
  Storage:                LTO-4 (From Job resource)
  Scheduled time:         29-Nov-2011 19:32:26
  Start time:             29-Nov-2011 19:32:45
  End time:               29-Nov-2011 21:15:53
  Elapsed time:           1 hour 43 mins 8 secs
  Priority:               10
  FD Files Written:       35,588
  SD Files Written:       35,588
  FD Bytes Written:       257,543,090,368 (257.5 GB)
  SD Bytes Written:       257,548,502,159 (257.5 GB)
  Rate:                   41619.8 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):         f03
  Volume Session Id:      1
  Volume Session Time:    1322622337
  Last Volume Bytes:      257,740,342,272 (257.7 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

 bacula-sd.conf:
 ==

 Autochanger {
  Name = Autochanger
  Device = LTO-4
  Changer Command = /usr/libexec/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/changer
 }
 Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;       

Re: [Bacula-users] tuning lto-4

2011-11-30 Thread Alan Brown
gary artim wrote:
 Hi --
 
 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8, and 12 GB -- 12 GB was the best; the others
 were about 35 MB/s. Any advice welcomed... should I look at max/min
 block sizes?

Don't adjust min size.

Bacula's max block size is ~2 MB (the default is 65535 bytes), and setting this 
should give a substantial speed boost (it did for me). Going higher than 
Bacula's maximum will result in it falling through to the default, so 
double check the block size on a newly labelled tape before committing 
to any value.

WARNING: If you adjust this value, mark all current tapes as USED before 
restarting bacula-sd. Changing block size on an open tape is likely to 
lead to problems getting data off it.
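
In bconsole that would be something along the lines of the following,
run once per affected tape (the volume name here is only an example):

  update volume=Full-0001 volstatus=Used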

LTO drives tend to have 16 MB maximum buffers. Perhaps Bacula's max block 
size needs increasing? (Kern?)

You should also enable spooling to _very_ fast disk before hitting the 
tape. LTO3 upwards will easily run faster than any spinning media and 
trivially outrun even a raid array if that has to do any seeking.

I use a RAID0 set of 5 Intel E25 60 GB drives and have seen 
throughputs approaching 700 MB/s out to 3 tape drives whilst other jobs 
are spooling (I've got 7 LTO5 drives carrying 6 pools but have never 
seen more than 3 writing simultaneously). These days I'd be more 
inclined to use one of the PCIe SSD cards as they're even faster, with 
less overhead.

 Device {
   Name = LTO-4
   Media Type = LTO-4
   Archive Device = /dev/nst0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   Maximum File Size = 12GB

   Maximum Network Buffer Size = 65536
   Maximum block size = 2M

   Spool Directory = /var/bacula/spool/LTO4
   Maximum Spool Size = 280G
   Maximum Job Spool Size = 150G

   Autochanger = yes
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Alert Command = sh -c 'smartctl -H -l error %c'
 }

Enlarging network buffers is possible, but it must be the same 
everywhere and should be thoroughly tested first, as it can cause complete 
breakage just as easily as a speedup - especially if backups are taking 
place across a routed network instead of just on your LAN.
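
As a sketch (names are placeholders), the value set on the storage side
above would need a matching entry on each client, along the lines of:

  # bacula-fd.conf
  FileDaemon {
    Name = client1-fd                    # placeholder
    Maximum Network Buffer Size = 65536  # must match the SD side
    # (other required directives omitted)
  }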


AB





Re: [Bacula-users] tuning lto-4

2011-11-30 Thread gary artim
Thanks much, I'll try the block size change first today, then try the
spooling. Don't have any unused disk, but may have to try on a shared
drive.
The maximum file size should be okay? g.

On Wed, Nov 30, 2011 at 8:45 AM, Alan Brown a...@mssl.ucl.ac.uk wrote:
 gary artim wrote:

 Hi --

 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8, and 12 GB -- 12 GB was the best; the others
 were about 35 MB/s. Any advice welcomed... should I look at max/min
 block sizes?


 Don't adjust min size.

 Bacula's max block size is ~2 MB (the default is 65535 bytes), and setting this
 should give a substantial speed boost (it did for me). Going higher than
 Bacula's maximum will result in it falling through to the default, so double
 check the block size on a newly labelled tape before committing to any
 value.

 WARNING: If you adjust this value, mark all current tapes as USED before
 restarting bacula-sd. Changing block size on an open tape is likely to lead
 to problems getting data off it.

 LTO drives tend to have 16 MB maximum buffers. Perhaps Bacula's max block
 size needs increasing? (Kern?)

 You should also enable spooling to _very_ fast disk before hitting the tape.
 LTO3 upwards will easily run faster than any spinning media and trivially
 outrun even a raid array if that has to do any seeking.

 I use a RAID0 set of 5 Intel E25 60 GB drives and have seen throughputs
 approaching 700 MB/s out to 3 tape drives whilst other jobs are spooling
 (I've got 7 LTO5 drives carrying 6 pools but have never seen more than 3
 writing simultaneously). These days I'd be more inclined to use one of the
 PCIe SSD cards as they're even faster, with less overhead.


 Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 12GB


  Maximum Network Buffer Size = 65536
  Maximum block size = 2M

  Spool Directory = /var/bacula/spool/LTO4
  Maximum Spool Size     = 280G
  Maximum Job Spool Size = 150G


  Autochanger = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
  Alert Command = sh -c 'smartctl -H -l error %c'
 }


 Enlarging network buffers is possible but it must be the same everywhere
 and should be thoroughly tested first as it can as easily cause complete
 breakage as speedups - especially if backups are taking place across a
 routed network instead of just on your LAN.


 AB





Re: [Bacula-users] tuning lto-4

2011-11-30 Thread gary artim
block size change didn't make much difference, but I was also running an
rsync against the backup volume (RAID 5) at the same time. Adding spooling, and
will re-run with both the block size change and the spool configuration. -- gary

On Wed, Nov 30, 2011 at 10:43 AM, gary artim gar...@gmail.com wrote:
 Thanks much, I'll try today the block size change first. Then try the
 spooling. Don't have any unused disk, but may have to try on a shared
 drive.
 The maximum file size should be okay? g.

 On Wed, Nov 30, 2011 at 8:45 AM, Alan Brown a...@mssl.ucl.ac.uk wrote:
 gary artim wrote:

 Hi --

 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8, and 12 GB -- 12 GB was the best; the others
 were about 35 MB/s. Any advice welcomed... should I look at max/min
 block sizes?


 Don't adjust min size.

 Bacula's max block size is ~2 MB (the default is 65535 bytes), and setting this
 should give a substantial speed boost (it did for me). Going higher than
 Bacula's maximum will result in it falling through to the default, so double
 check the block size on a newly labelled tape before committing to any
 value.

 WARNING: If you adjust this value, mark all current tapes as USED before
 restarting bacula-sd. Changing block size on an open tape is likely to lead
 to problems getting data off it.

 LTO drives tend to have 16 MB maximum buffers. Perhaps Bacula's max block
 size needs increasing? (Kern?)

 You should also enable spooling to _very_ fast disk before hitting the tape.
 LTO3 upwards will easily run faster than any spinning media and trivially
 outrun even a raid array if that has to do any seeking.

 I use a RAID0 set of 5 Intel E25 60 GB drives and have seen throughputs
 approaching 700 MB/s out to 3 tape drives whilst other jobs are spooling
 (I've got 7 LTO5 drives carrying 6 pools but have never seen more than 3
 writing simultaneously). These days I'd be more inclined to use one of the
 PCIe SSD cards as they're even faster, with less overhead.


 Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 12GB


  Maximum Network Buffer Size = 65536
  Maximum block size = 2M

  Spool Directory = /var/bacula/spool/LTO4
  Maximum Spool Size     = 280G
  Maximum Job Spool Size = 150G


  Autochanger = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
  Alert Command = sh -c 'smartctl -H -l error %c'
 }


 Enlarging network buffers is possible but it must be the same everywhere
 and should be thoroughly tested first as it can as easily cause complete
 breakage as speedups - especially if backups are taking place across a
 routed network instead of just on your LAN.


 AB





Re: [Bacula-users] tuning lto-4

2011-11-30 Thread Andrea Conti
On 30/11/11 19.43, gary artim wrote:
 Thanks much, I'll try today the block size change first. Then try the
 spooling. Don't have any unused disk, but may have to try on a shared
 drive.
 The maximum file size should be okay? g.

Choosing a max file size is mainly a tradeoff between write performance
(as the drive will stop and restart at the end of each file to write an
EOF mark) and restore performance (as the drive can only seek to a file
mark and then sequentially read through the file until the relevant data
blocks are found).

I usually set maximum file size so that there are 2-3 filemarks per tape
wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
restores, or if you always restore the whole contents of a volume, 12GB
is fine.
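
As a rough sanity check (assuming LTO-4's ~800 GB native capacity is laid
down over roughly 56 wraps): 800 GB / 56 is about 14 GB per wrap, so a 5 GB
maximum file size gives roughly three file marks per wrap, consistent with
the 2-3 filemarks per wrap mentioned above.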

Anyway, with the figures you're citing your problem is *not* maximum
file size.

Try to assess tape performance alone with btape (which has a
speed command); you can try different block sizes and configurations
and see which one gives the best results.

Doing so will give you a clear indication of whether your bottleneck is
in tape or disk throughput.

andrea
