Re: [Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread Radosław Korzeniewski
Hello,

2013/10/7 bdelagree bacula-fo...@backupcentral.com

 Hello everyone!

 After applying the correct settings and restarting the right services, here are
 the results ... :P
 They are catastrophic!


I haven't followed this thread from the beginning, so some of these tips
may miss the mark.

You have 11M files in a single backup job. If your job name is not
misleading, all your files are located on an NFS share. Right?
If so, this is your main bottleneck. NFS is not the best protocol for this
job. If your NFS server is some kind of NAS array, then you can speed up your
backup with NDMP.

Next, you could implement Bacula VirtualFull backups, which avoid running any
further Full backups on the client. After that, all your client-side jobs will
be Incremental, and your problem with Fulls will be gone.
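A minimal sketch of what that could look like in bacula-dir.conf; the resource
names (Disk-Pool, Tape-Pool, Srv-Files, File) are hypothetical, and the exact
directive set should be checked against your Bacula version's manual:

```
# Sketch: VirtualFull consolidates existing backups on the server side,
# so the client itself is only ever asked for Incrementals.
Pool {
  Name = Disk-Pool                 # where the Incrementals land
  Pool Type = Backup
  Storage = File                   # hypothetical disk storage resource
  Next Pool = Tape-Pool            # consolidated VirtualFull is written here
}

Job {
  Name = "Srv-Files"               # hypothetical job name
  Type = Backup
  Level = Incremental
  Pool = Disk-Pool
  Accurate = yes                   # recommended so consolidation sees deletes/renames
  ...
}
```

The consolidation itself is then triggered periodically, e.g. with
"run job=Srv-Files level=VirtualFull" from bconsole.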


 My full backup this weekend took 8 hours more!

 I think the problems come from my small spools: 24GB per drive and 3GB per
 job (I have 8 jobs).


Data spooling is required for tape drives only. You need to tune the job
spool size manually for best performance. From my experience, on one of my
systems an 8GB/job spool performs better than the 32GB/job spool I had
before. Most importantly, your job spool size can be 8GB as well, because you
limit the overall spool size to 24GB. It will work.
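As a sketch under those numbers (24GB overall, so up to three 8GB jobs spooling
to a drive at once), the Device resource in bacula-sd.conf might look like this;
the directory path is illustrative:

```
Device {
  Name = Drive-0
  ...
  Maximum Spool Size = 24gb       # cap on the whole spool directory
  Maximum Job Spool Size = 8gb    # per-job cap: 3 jobs x 8GB = 24GB total
  Spool Directory = /var/lib/bacula/spool/drive0
}
```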

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
--
October Webinars: Code for Performance
Free Intel webinars can help you accelerate application performance.
Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from 
the latest Intel processors and coprocessors. See abstracts and register 
http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread Radosław Korzeniewski
Hello,

2013/10/7 bdelagree bacula-fo...@backupcentral.com

  I haven't followed this thread from the beginning, so some of these tips
 may miss the mark.

  You have 11M files in a single backup job. If your job name is not
 misleading, all your files are located on an NFS share. Right?
  If so, this is your main bottleneck. NFS is not the best protocol for
 this job. If your NFS server is some kind of NAS array, then you can speed up
 your backup with NDMP.

 Hi, I don't use an NFS share; I put the Bacula client on this server. I'll look
 at how to implement NDMP.


If you are accessing the files to back up locally (you have installed
bacula-fd on the NFS server), then you do not need to implement NDMP.


  Next, you could implement Bacula VirtualFull backups, which avoid running
 any further Full backups on the client. After that, all your client-side jobs
 will be Incremental, and your problem with Fulls will be gone.

 I don't know VirtualFull backup; I will read up on this.


Check the Bacula documentation for that.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-30 Thread John Drescher
 The data spooling has not changed my backup. (See the end of this post)
 1 day and 14 hours for 390GB  :(

 On the other hand, I just saw that on Friday I restarted only the
 StorageDaemon; should I also have restarted the Director and FileDaemon?

 Do you think that enabling compression could improve the backup when there are
 many small files?


I would expect adding software compression to slow the backup down.

Does your source RAID array have a cache? Reading many small files causes a
lot of seek operations.

John


Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-30 Thread Martin Simmons
 On Mon, 30 Sep 2013 00:07:00 -0700, bdelagree  said:
 
 Hi everyone!
 
 The data spooling has not changed my backup. (See the end of this post)
 1 day and 14 hours for 390GB  :(
 
 On the other hand, I just saw that on Friday I restarted only the
 StorageDaemon; should I also have restarted the Director and FileDaemon?

You need to restart the Director (or at least use the reload command) for
spooling changes to take effect (the log you posted was not using spooling).
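For example, after editing the configuration, either of the following should
pick up the change (init-script names may differ per distribution):

```
# In bconsole: re-read bacula-dir.conf without restarting the Director
*reload

# Or restart the daemons outright (SysV-style, as on Debian of that era):
# /etc/init.d/bacula-director restart
# /etc/init.d/bacula-sd restart
# /etc/init.d/bacula-fd restart
```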

__Martin



Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-24 Thread Ralf Brinkmann
On 23.09.2013 at 08:47, bdelagree wrote:
 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5 drives 
 to safeguard our various servers.

 Today two of my servers take a long time to back up because they contain many
 small files with a low total volume (see the bottom of the post).
 All my other servers back up quickly (20,000 KB/s to 30,000 KB/s).
 As explained in the Bacula documentation, I added the following option to
 the StorageDaemon and FileDaemon of these servers:

 Maximum Network Buffer Size = 65536

 But that did not change anything ...
 Did I forget something?
 Is something wrong?
 Are there other options that I have not seen?

As I wrote earlier, on our LTO4 tape changer I sped up the building of the
directory tree on restore by replacing the default configuration file

/etc/my.cnf

with the predefined

/usr/share/mysql/my-huge.cnf

I suppose this might accelerate the backup as well.
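For reference, my-huge.cnf mainly raises MySQL's buffer sizes; a hand-tuned
sketch along the same lines would look like this (values are illustrative, not
a recommendation, and should be sized to the machine's RAM):

```
[mysqld]
# Larger buffers speed up catalog queries such as building the restore tree
key_buffer_size         = 384M   # matters if the catalog tables are MyISAM
innodb_buffer_pool_size = 1G     # matters if they are InnoDB
sort_buffer_size        = 2M
table_open_cache        = 512
```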

For speeding up the backup, I provided 1500GB of disk space for buffering,
running in parallel all backup jobs that write to the same cassette
set. This helped a lot.

-- 
Ralf Brinkmann



Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-24 Thread lst_hoe02

Quoting bdelagree bacula-fo...@backupcentral.com:

 Hello,

 Thank you for the quick response.

 My library is connected to a server dedicated only to services (PDC,  
 DHCP, DNS, LDAP, and Bacula).
 This server is not designed to host files, so it has little space.
 In addition, the MySQL database is already 35GB...
 I can reasonably dedicate 50GB on this one.

The size of the database mostly depends on the number of
files/directories and your retention policy.

 Is there one data spool for Bacula, or one per drive?

You should set it per drive and size it for the number of parallel
jobs you are running.

 Knowing I have a lot of servers to back up (8 servers for about 2.8
 TB of data), is that enough, or do I need to find another system for
 data spooling?

 df -h from my srv-infra-sm

 Filesystem                                              Size  Used Avail Use% Mounted on
 rootfs                                                   20G  836M   20G   5% /
 udev                                                     10M     0   10M   0% /dev
 tmpfs                                                   1.6G  1.7M  1.6G   1% /run
 /dev/disk/by-uuid/465b04fb-de46-409b-928a-ec01ba98373e   20G  836M   20G   5% /
 tmpfs                                                   5.0M  4.0K  5.0M   1% /run/lock
 tmpfs                                                   1.6G  8.0K  1.6G   1% /run/shm
 /dev/sda4                                               109G   40G   69G  37% /var
 tmpfs                                                   7.9G     0  7.9G   0% /tmp


 New options in bacula-sd.conf (thinking that we need a spool by drive)
 Device {
   Name = Drive-0
   .
   .
   .
   Maximum Spool Size = 24gb
   Maximum Job Spool Size = 12gb
   Spool Directory = /var/lib/bacula/spool/drive0
 }

 Device {
   Name = Drive-1
   .
   .
   .
   Maximum Spool Size = 24gb
   Maximum Job Spool Size = 12gb
   Spool Directory = /var/lib/bacula/spool/drive1
 }

If you back up all 8 machines concurrently, you should set your Job  
Spool Size to something around (available disk space) / 8. Also be  
aware that you need additional spool space for attribute spooling,  
which is enabled by default when using data spooling. Think of data  
spooling as a form of cache that packs things together before  
committing to the database and tape.
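In bacula-dir.conf, the relevant Job directives are along these lines (the job
name is hypothetical; check your version's manual for the exact set):

```
Job {
  Name = "srv-files-backup"    # hypothetical
  ...
  Spool Data = yes             # stage job data on disk before the tape drive
  Spool Attributes = yes       # batch catalog inserts instead of per-file round trips
}
```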

Regards

Andreas






Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread lst_hoe02

Quoting bdelagree bacula-fo...@backupcentral.com:

 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5  
 drives to safeguard our various servers.

 Today two of my servers take a long time to back up because they contain
 many small files with a low total volume (see the bottom of the post).
 All my other servers back up quickly (20,000 KB/s to 30,000 KB/s).
 As explained in the Bacula documentation, I added the following
 option to the StorageDaemon and FileDaemon of these servers:

    Maximum Network Buffer Size = 65536

 But that did not change anything ...
 Did I forget something?
 Is something wrong?
 Are there other options that I have not seen?


Be sure to use attribute spooling and, if you have some fast local  
storage on the backup server, data spooling.

http://www.bacula.org/5.2.x-manuals/en/main/main/Data_Spooling.html

Regards

Andreas


--
LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99!
1,500+ hours of tutorials including VisualStudio 2012, Windows 8, SharePoint
2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack includes
Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. 
http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread Josh Fisher
On 9/23/2013 9:32 AM, bdelagree wrote:
 Hello,

 Thank you for the quick response.

 My library is connected to a server dedicated only to services (PDC, DHCP,
 DNS, LDAP, and Bacula).
 This server is not designed to host files, so it has little space.
 In addition, the MySQL database is already 35GB...
 I can reasonably dedicate 50GB on this one.

 Is there one data spool for Bacula, or one per drive?

 Knowing I have a lot of servers to back up (8 servers for about 2.8 TB of
 data), is that enough, or do I need to find another system for data spooling?

Each file requires a DB insert, so clients with many small files really 
hit the DB storage system hard. Attribute spooling is particularly 
needed for those clients.


 df -h from my srv-infra-sm

 Filesystem                                              Size  Used Avail Use% Mounted on
 rootfs                                                   20G  836M   20G   5% /
 udev                                                     10M     0   10M   0% /dev
 tmpfs                                                   1.6G  1.7M  1.6G   1% /run
 /dev/disk/by-uuid/465b04fb-de46-409b-928a-ec01ba98373e   20G  836M   20G   5% /
 tmpfs                                                   5.0M  4.0K  5.0M   1% /run/lock
 tmpfs                                                   1.6G  8.0K  1.6G   1% /run/shm
 /dev/sda4                                               109G   40G   69G  37% /var
 tmpfs                                                   7.9G     0  7.9G   0% /tmp


 New options in bacula-sd.conf (thinking that we need a spool by drive)
 Device {
Name = Drive-0
.
.
.
Maximum Spool Size = 24gb
Maximum Job Spool Size = 12gb
Spool Directory = /var/lib/bacula/spool/drive0
 }

 Device {
Name = Drive-1
.
.
.
Maximum Spool Size = 24gb
Maximum Job Spool Size = 12gb
Spool Directory = /var/lib/bacula/spool/drive1
 }





Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread Alan Brown
On 23/09/13 07:47, bdelagree wrote:
 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5 drives 
 to safeguard our various servers.

 Today two of my servers take a long time to back up because they contain many
 small files with a low total volume (see the bottom of the post).
 All my other servers back up quickly (20,000 KB/s to 30,000 KB/s).

This is normal. There's an overhead in opening each file, and it adds up 
quickly.

I have 1TB filesystems (98% full) with 7000-1 files in them, which 
take about 12 hours to run a full backup.

I also have 1TB filesystems (92% full) with 3-6 million files in them, 
and they can take DAYS.

(GFS is very slow at opening files. This makes the overhead even more 
painfully obvious: it takes 28 hours just to do a zero-byte incremental 
backup of the worst filesystem.)




