[Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread bdelagree
Hello everyone!

After applying the correct settings and restarting the right services, here are the
results... :P
They are catastrophic!
My full backup this weekend took 8 hours longer!

I think the problem comes from my small spools: 24GB per drive and 3GB per job (I
have 8 jobs).

Maybe I miscalculated my spool sizes...






Re: [Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread Radosław Korzeniewski
Hello,

2013/10/7 bdelagree bacula-fo...@backupcentral.com

 Hello everyone!

 After applying the correct settings and restart the good services here are
 the results ... :P
 They are catastrophic!


I have not followed this thread from the beginning, so I could be wrong about
some of these tips.

You have 11M files in a single backup job. If your job name is not
misleading, all your files are located on an NFS share. Right?
If so, this is your main bottleneck. NFS is not the best protocol for this
kind of job. If your NFS server is some kind of NAS array, then you may be able
to speed up your backup with NDMP.

Next, you could implement Bacula VirtualFull backups, which avoid running any
further Full backups on the client. After that, all your client jobs will be
Incremental and your problem with Fulls will be gone.
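
A rough, untested sketch of the pieces this needs in bacula-dir.conf (the
Next Pool name below is just a placeholder; see the VirtualFull chapter of the
manual for the details):

Pool {
  Name = Default
  Pool Type = Backup
  Next Pool = ConsolidatedPool   # a VirtualFull reads this pool and writes to its Next Pool
}

Job {
  Name = srv-nfs-sm-macsoft
  ...
  Accurate = yes                 # optional; lets the Incrementals track files deleted on the client
}

Then run the consolidation from bconsole, e.g.:

  run job=srv-nfs-sm-macsoft level=VirtualFull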


 My full this weekend took 8 hours more!

 I think problems come from my little spools, 24GB per drive and 3Gb by
 jobs (I have 8 jobs)


Data spooling is only required for tape drives. You need to tune the
job spool size manually for best performance. From my experience, on one of my
systems an 8GB/Job spool performs better than the 32GB/Job spool I had
before. Most importantly, your job spool size can be 8GB too, even though you
limit the overall spool size to 24GB. It will work.
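
For example, something like this in each Device resource (sketch only; keep
your own spool directory paths):

Device {
  Name = Drive-0
  ...
  Maximum Spool Size = 24gb
  Maximum Job Spool Size = 8gb    # per-job limit; the total stays capped at 24gb
  Spool Directory = /var/lib/bacula/spool/drive0
}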

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


[Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread bdelagree
 I do not follow this thread from the beginning, so I could be wrong about 
 some tips. 

 You have a 11M files in single backup job. If your job name is not misleading 
 all your files are located on NFS share. Right? 
 If yes, this is your main bottleneck. NFS is not the best protocol for this 
 job. If your NFS is a some kind of NAS array then you can speed up your 
 backup with NDMP. 

Hi, I don't use an NFS share; I put the Bacula client directly on this server. I'll look
at how to implement NDMP.

 Next, you should implement Bacula VirtualFull backup, which avoid any next 
 Full backup on the client. After that all your jobs will be all Incremental  
 and your problem with full will gone. 

I do not know VirtualFull backups; I will read up on this.

 Data spool is required for tape drive only. You need to manually tune the 
 best job spool size for best performance. From my experience on one of my 
 systems I have 8GB/Job which is better then 32GB/Job spool which was before. 
 The most important, your job spool size can be 8GB too, because you limit 
 overall spool size to 24GB. It will work.

I'll try it; I can run the test quickly.
I'll let you know.






Re: [Bacula-users] Slow backup, how to optimize ?

2013-10-07 Thread Radosław Korzeniewski
Hello,

2013/10/7 bdelagree bacula-fo...@backupcentral.com

  I do not follow this thread from the beginning, so I could be wrong
 about some tips.

  You have a 11M files in single backup job. If your job name is not
 misleading all your files are located on NFS share. Right?
  If yes, this is your main bottleneck. NFS is not the best protocol for
 this job. If your NFS is a some kind of NAS array then you can speed up
 your backup with NDMP.

 Hi, i don't use NFS share, i put bacula client on this server, I'll look
 at how to implement NDMP


If you are accessing the files to back up locally (you have installed
bacula-fd on the NFS server), then you do not need to implement NDMP.


  Next, you should implement Bacula VirtualFull backup, which avoid any
 next Full backup on the client. After that all your jobs will be all
 Incremental  and your problem with full will gone.

 I do not know VirtualFull backup, I will document about this


Check the Bacula documentation for that.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


[Bacula-users] Slow backup, how to optimize ?

2013-09-30 Thread bdelagree
Hi everyone!

Data spooling has not changed my backup. (See the end of this post.)
1 day and 14 hours for 390GB  :(

On the other hand, I just noticed that on Friday I restarted only the StorageDaemon;
should I also have restarted the Director and FileDaemon?

Do you think that enabling compression could improve the backup when there are many
small files?

thank you

---

28-Sep 05:57 srv-infra-sm-dir JobId 378: Start Backup JobId 378, 
Job=srv-nfs-sm-macsoft.2013-09-27_20.00.00_10
28-Sep 05:57 srv-infra-sm-dir JobId 378: Using Device Drive-0
29-Sep 20:37 srv-infra-sm-sd JobId 378: Job write elapsed time = 38:39:42, 
Transfer rate = 2.811 M Bytes/second
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: smartctl 5.41 2011-06-09 r3365 
[x86_64-linux-3.2.0-4-amd64] (local build)
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: Copyright (C) 2002-11 by Bruce 
Allen, http://smartmontools.sourceforge.net
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: 
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: TapeAlert: OK
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: 
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: Error Counter logging not 
supported
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: 
29-Sep 20:37 srv-infra-sm-sd JobId 378: Alert: Last n error events log page
29-Sep 20:45 srv-infra-sm-dir JobId 378: Bacula srv-infra-sm-dir 5.2.6 
(21Feb12):
  Build OS:   x86_64-pc-linux-gnu debian 7.0
  JobId:  378
  Job:srv-nfs-sm-macsoft.2013-09-27_20.00.00_10
  Backup Level:   Full
  Client: srv-nfs-sm-fd 5.2.6 (21Feb12) 
x86_64-pc-linux-gnu,debian,7.0
  FileSet:srv-nfs-sm-macsoft 2013-09-20 20:00:00
  Pool:   Default (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:Autochanger (From Job resource)
  Scheduled time: 27-Sep-2013 20:00:00
  Start time: 28-Sep-2013 05:57:23
  End time:   29-Sep-2013 20:45:25
  Elapsed time:   1 day 14 hours 48 mins 2 secs
  Priority:   10
  FD Files Written:   11,984,928
  SD Files Written:   11,984,928
  FD Bytes Written:   388,744,287,257 (388.7 GB)
  SD Bytes Written:   391,295,285,064 (391.2 GB)
  Rate:   2783.1 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): 16L5
  Volume Session Id:  7
  Volume Session Time:1380268256
  Last Volume Bytes:  1,012,484,680,704 (1.012 TB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

29-Sep 20:45 srv-infra-sm-dir JobId 378: Begin pruning Jobs older than 6 months 
.
29-Sep 20:45 srv-infra-sm-dir JobId 378: No Jobs found to prune.
29-Sep 20:45 srv-infra-sm-dir JobId 378: Begin pruning Files.
29-Sep 20:51 srv-infra-sm-dir JobId 378: Pruned Files from 2 Jobs for client 
srv-nfs-sm-fd from catalog.
29-Sep 20:51 srv-infra-sm-dir JobId 378: End auto prune.






Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-30 Thread John Drescher
 The DataSpooling has not changed my backup. (See the end of this post)
 1day and 14hours for 390Gb  :(

 By cons I just saw that on Friday I restarted only StorageDaemon, was it
 also restart Director and FileDaemon?

 Do you think that enabling compression could improve backup when there are
 many small files?


I would expect adding software compression to slow the backup down.

Does your source raid array have a cache? Reading many small files causes a
lot of seek operations.

John


Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-30 Thread Martin Simmons
 On Mon, 30 Sep 2013 00:07:00 -0700, bdelagree  said:
 
 Hi everyone!
 
 The DataSpooling has not changed my backup. (See the end of this post)
 1day and 14hours for 390Gb  :(
 
 By cons I just saw that on Friday I restarted only StorageDaemon, was it also 
 restart Director and FileDaemon?

You need to restart the Director (or at least use the reload command) to
change spooling (the log you posted was not using spooling).
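
For example (a sketch; data spooling is switched on per Job on the Director
side), add to the Job resource in bacula-dir.conf:

Job {
  Name = srv-nfs-sm-macsoft
  ...
  Spool Data = yes
}

and then restart the Director or run the reload command in bconsole:

  *reload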

__Martin



[Bacula-users] Slow backup, how to optimize ?

2013-09-27 Thread bdelagree
Hi everyone!

Sorry for my short absence, but I've been busy with another little problem:
I had to create a virtual machine under OS 9 for one of my users.
I had forgotten how basic that old system was!
:p


Tonight is finally my monthly Full backup.
I want to modify my jobs and set up data spooling.

I will give you the results on Monday.

Thank you for your help






[Bacula-users] Slow backup, how to optimize ?

2013-09-27 Thread bdelagree
Just for your information, here are the modifications:

For the NFS server I created two jobs, one for the system and another one for the
directory that contains the millions of files.

I created the directories /var/lib/spool/drive0 and /var/lib/spool/drive1,
then ran chown -R bacula:bacula /var/lib/spool

In the Device sections of bacula-sd.conf I added these options:

Device { 
Name = Drive-0 
. 
. 
Maximum Spool Size = 24gb
Maximum Job Spool Size = 3gb
Spool Directory = /var/lib/bacula/spool/drive0
}

Device { 
Name = Drive-1
. 
. 
Maximum Spool Size = 24gb
Maximum Job Spool Size = 3gb
Spool Directory = /var/lib/bacula/spool/drive1
}






Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-24 Thread Ralf Brinkmann
On 23.09.2013 08:47, bdelagree wrote:
 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5 drives 
 to safeguard our various servers.

 Today two of my servers take to save a lot because they contain many small 
 files for low volume (see the bottom of post)
 All my other servers backups quickly (20,000 KB/s to 30,000 KB/s)
 As explained in the documentation for Bacula I added the following option in 
 the StorageDaemon and FileDaemon of these servers:

 Maximum Network Buffer Size = 65536

 But that did not change anything ...
 Did I forget something?
 Something wrong?
 There's an other options that I have not seen?

As I wrote earlier about our LTO4 tape changer: to speed up building
the directory tree when restoring, I replaced the default configuration file

/etc/my.cnf

with the predefined

/usr/share/mysql/my-huge.cnf

I suppose this might accelerate the backup as well.
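
The exact values depend on the machine, but as a rough sketch, the settings
that usually matter for the Bacula catalog are along these lines (check the
my-huge.cnf template and the Bacula wiki for numbers that fit your RAM):

[mysqld]
key_buffer_size = 256M
innodb_buffer_pool_size = 2G
innodb_flush_log_at_trx_commit = 2
query_cache_size = 32M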

To speed up the backup I provided 1500GB of disk space for spooling,
running in parallel all backup jobs that write to the same cassette
set. This helped a lot.

-- 
Ralf Brinkmann



Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-24 Thread lst_hoe02

Quoting bdelagree bacula-fo...@backupcentral.com:

 Hello,

 Thank you for the quick response.

 My library is connected to a dedicated server only to services (PDC,  
 DHCP, DNS, LDAP, and Bacula)
 This server is not designed to host files, so he has little space.
 In addition, the MySql database is already 35Gb...
 I can dedicate reasonably 50Gb on this one.

The size of the database mostly depends on the number of
files/directories and your retention policy.

 Is there one DataSpooling for bacula, or one by drive ?

You should set it per drive and size it according to the number of parallel
jobs you are running.

 Knowing I have a lot of servers to backup (8 servers for about 2.8  
 Tb of data), is that enough or i need to find another system for  
 Data Spooling?

 df -h from my srv-infra-sm

 Filesystem                                              Size  Used Avail Use% Mounted on
 rootfs                                                   20G  836M   20G   5% /
 udev                                                     10M     0   10M   0% /dev
 tmpfs                                                   1.6G  1.7M  1.6G   1% /run
 /dev/disk/by-uuid/465b04fb-de46-409b-928a-ec01ba98373e   20G  836M   20G   5% /
 tmpfs                                                   5.0M  4.0K  5.0M   1% /run/lock
 tmpfs                                                   1.6G  8.0K  1.6G   1% /run/shm
 /dev/sda4                                               109G   40G   69G  37% /var
 tmpfs                                                   7.9G     0  7.9G   0% /tmp


 New options in bacula-sd.conf (thinking that we need a spool by drive)
 Device {
   Name = Drive-0
   .
   .
   .
   Maximum Spool Size = 24gb
   Maximum Job Spool Size = 12gb
   Spool Directory = /var/lib/bacula/spool/drive0
 }

 Device {
   Name = Drive-1
   .
   .
   .
   Maximum Spool Size = 24gb
   Maximum Job Spool Size = 12gb
   Spool Directory = /var/lib/bacula/spool/drive1
 }

If you back up all 8 machines concurrently, you should set your Job
Spool Size to roughly the available disk space divided by 8. Also be
aware that you need additional spool space for attribute spooling,
which is enabled by default when using data spooling. Think of data
spooling as a form of cache that packs things together before
committing them to the database and to tape.
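
For example, if you can dedicate about 50GB on /var and run 8 jobs
concurrently, that is roughly 6GB per job, i.e. in each Device resource
something like (sketch only; adjust to what you actually allocate):

  Maximum Spool Size = 24gb       # per drive; 2 x 24GB must still fit in the dedicated space
  Maximum Job Spool Size = 6gb    # ~50GB / 8 concurrent jobs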

Regards

Andreas






[Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread bdelagree
Hello,

This summer we invested in a PowerVault TL2000 library with two LTO5 drives to
safeguard our various servers.

Today, two of my servers take a very long time to back up because they contain many small
files for a low data volume (see the bottom of this post).
All my other servers back up quickly (20,000 KB/s to 30,000 KB/s).
As explained in the Bacula documentation, I added the following option to
the StorageDaemon and FileDaemon of these servers:

   Maximum Network Buffer Size = 65536

But that did not change anything...
Did I forget something?
Is something wrong?
Is there another option that I have not seen?

Thank you in advance for your help.

PS: If necessary, I can provide my configuration files and Bacula emails.

I'm French, so I use http://translate.google.fr/


NFS server (Debian7)
   Start time: 21-Sep-2013 6:07:57
   End time: 22-Sep-2013 7:11:19 p.m.
   Elapsed time: 1 day 13 hours 3 mins 22 secs
   Priority: 10
   FD Files Written: 11701096
   SD Files Written: 11701096
   FD Bytes Written: 378,946,671,990 (378.9 GB)
   SD Bytes Written: 381,440,447,623 (381.4 GB)
   Rate: 2840.6 KB / s
   Software Compression: None

Windows 2008 Server:
   Start time: 20-Sep-2013 8:13:30 p.m.
   End time: 21-Sep-2013 6:27:35 p.m.
   Elapsed time: 22 hours 14 mins 5 secs
   Priority: 10
   FD Files Written: 4,818,480
   SD Files Written: 4,818,480
   FD Bytes Written: 362,866,795,007 (362.8 GB)
   SD Bytes Written: 363,628,654,115 (363.6 GB)
   Rate: 4533.3 KB / s
   Software Compression: None
   VSS: yes






Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread lst_hoe02

Quoting bdelagree bacula-fo...@backupcentral.com:

 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5  
 drives to safeguard our various servers.

 Today two of my servers take to save a lot because they contain many  
 small files for low volume (see the bottom of post)
 All my other servers backups quickly (20,000 KB/s to 30,000 KB/s)
 As explained in the documentation for Bacula I added the following  
 option in the StorageDaemon and FileDaemon of these servers:

    Maximum Network Buffer Size = 65536

 But that did not change anything ...
 Did I forget something?
 Something wrong?
 There's an other options that I have not seen?


Be sure to use attribute spooling and, if you have some fast local
storage on the backup server, data spooling as well.

http://www.bacula.org/5.2.x-manuals/en/main/main/Data_Spooling.html
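
In the Job (or JobDefs) resource of bacula-dir.conf that is roughly (sketch;
see the manual page above for the details):

Job {
  ...
  Spool Data = yes          # data spooling; needs fast local disk on the SD
  Spool Attributes = yes    # attribute spooling; inserts file attributes in one batch at job end
}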

Regards

Andreas




[Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread bdelagree
Hello,

Thank you for the quick response.

My library is connected to a server dedicated to services only (PDC, DHCP, DNS,
LDAP, and Bacula).
This server is not designed to host files, so it has little disk space.
In addition, the MySQL database is already 35GB...
I can reasonably dedicate 50GB on this machine.

Is there one data spool for all of Bacula, or one per drive?

Given that I have a lot of servers to back up (8 servers for about 2.8TB of data),
is that enough, or do I need to find another system for data spooling?

df -h from my srv-infra-sm

Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   20G  836M   20G   5% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   1.6G  1.7M  1.6G   1% /run
/dev/disk/by-uuid/465b04fb-de46-409b-928a-ec01ba98373e   20G  836M   20G   5% /
tmpfs                                                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                                                   1.6G  8.0K  1.6G   1% /run/shm
/dev/sda4                                               109G   40G   69G  37% /var
tmpfs                                                   7.9G     0  7.9G   0% /tmp


New options in bacula-sd.conf (assuming we need one spool per drive):
Device {
  Name = Drive-0
  .
  .
  .
  Maximum Spool Size = 24gb
  Maximum Job Spool Size = 12gb
  Spool Directory = /var/lib/bacula/spool/drive0
}

Device {
  Name = Drive-1
  .
  .
  .
  Maximum Spool Size = 24gb
  Maximum Job Spool Size = 12gb
  Spool Directory = /var/lib/bacula/spool/drive1
}






Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread Josh Fisher
On 9/23/2013 9:32 AM, bdelagree wrote:
 Hello,

 Thank you for the quick response.

 My library is connected to a dedicated server only to services (PDC, DHCP, 
 DNS, LDAP, and Bacula)
 This server is not designed to host files, so he has little space.
 In addition, the MySql database is already 35Gb...
 I can dedicate reasonably 50Gb on this one.

 Is there one DataSpooling for bacula, or one by drive ?

 Knowing I have a lot of servers to backup (8 servers for about 2.8 Tb of 
 data), is that enough or i need to find another system for Data Spooling?

Each file requires a DB insert, so clients with many small files really 
hit the DB storage system hard. Attribute spooling is particularly 
needed for those clients.


 df -h from my srv-infra-sm

 Filesystem                                              Size  Used Avail Use% Mounted on
 rootfs                                                   20G  836M   20G   5% /
 udev                                                     10M     0   10M   0% /dev
 tmpfs                                                   1.6G  1.7M  1.6G   1% /run
 /dev/disk/by-uuid/465b04fb-de46-409b-928a-ec01ba98373e   20G  836M   20G   5% /
 tmpfs                                                   5.0M  4.0K  5.0M   1% /run/lock
 tmpfs                                                   1.6G  8.0K  1.6G   1% /run/shm
 /dev/sda4                                               109G   40G   69G  37% /var
 tmpfs                                                   7.9G     0  7.9G   0% /tmp


 New options in bacula-sd.conf (thinking that we need a spool by drive)
 Device {
Name = Drive-0
.
.
.
Maximum Spool Size = 24gb
Maximum Job Spool Size = 12gb
Spool Directory = /var/lib/bacula/spool/drive0
 }

 Device {
Name = Drive-1
.
.
.
Maximum Spool Size = 24gb
Maximum Job Spool Size = 12gb
 Spool Directory = /var/lib/bacula/spool/drive1
 }





Re: [Bacula-users] Slow backup, how to optimize ?

2013-09-23 Thread Alan Brown
On 23/09/13 07:47, bdelagree wrote:
 Hello,

 This summer we invested in a PowerVault TL2000 library with two LTO5 drives 
 to safeguard our various servers.

 Today two of my servers take to save a lot because they contain many small 
 files for low volume (see the bottom of post)
 All my other servers backups quickly (20,000 KB/s to 30,000 KB/s)

This is normal. There's an overhead in opening each file and it adds up
quickly.

I have 1TB filesystems (98% full) with 7000-1 files in them, which
take about 12 hours to run a full backup.

I also have 1TB filesystems (92% full) with 3-6 million files in them,
and they can take DAYS.

(GFS is very slow at opening files. This makes the overhead even more
painfully obvious - it takes 28 hours just to do a zero-byte incremental
backup of the worst filesystem.)







[Bacula-users] Slow backup.

2012-08-11 Thread Jari Fredriksson

I started two backups maybe 12 hours ago. Normally full backups run 1-2
hours max, but suddenly this...

From the database I see no locks, but they keep inserting into the batch table.

I have 12 gigabytes of RAM, and have given a couple of gigs to MySQL too. The database
should not be the bottleneck.

How can it be so slow? Two backups are running; the first is a Pentium IV machine
with an 80GB disk, maybe 10GB used. It has now spent the last 12 hours doing
batch inserts.

The other machine is this 12GiB Core i7 machine, with a 500GiB disk to
back up. Maybe half of it is used.

Strange. I have used Bacula for a looong time, since 1.x something. I have
never liked batch inserts; they used to be slow on my earlier low-end
machines. Now I have better machines and run the precompiled Bacula
from the Ubuntu repos.

-- 

Don't go around saying the world owes you a living.  The world owes you
nothing.  It was here first.
-- Mark Twain






[Bacula-users] Slow Backup since Upgrade

2011-05-12 Thread Tobias Dinse
Hi,

since I upgraded our backup server to Debian Squeeze and Bacula
5.0.2, the jobs only write at ~5 MB/s.

status storage:

Device IBMLTO4-sd (/dev/nst0) is mounted with:
 Volume:  MITT01
 Pool:MittwochPool
 Media type:  LTO4
 Total Bytes=157,171,864,755 Blocks=78,625,244 Bytes/block=1,999
 Positioned at File=162 Block=20,096


Could it be the block size of 1,999? I have no errors in my log files.
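
For reference, the block size is set per Device in bacula-sd.conf, roughly like
this sketch (changing it can affect reading volumes written with the old size,
so check the manual before touching it):

Device {
  Name = IBMLTO4-sd
  ...
  Maximum Block Size = 262144   # 256KB blocks instead of the ~2KB shown above
  Minimum Block Size = 262144   # optional: write fixed-size blocks
}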

regards

Tobias

-- 
# Stegbauer Datawork
# Tobias Dinse
# Oberjulbachring 9, 84387 Julbach




Re: [Bacula-users] Slow Backup since Upgrade

2011-05-12 Thread Tobias Dinse
OK, my mistake regarding the block size :D Sorry.

regards

Tobias

# Stegbauer Datawork
# Tobias Dinse
# Oberjulbachring 9, 84387 Julbach


On 12.05.2011 11:29, Tobias Dinse wrote:
 Hi,

 since i have upgraded our Backup Server to Debian Squeeze and Bacula
 5.0.2 the Jobs are only write with ~ 5 MB/s.

 status storage:

 Device IBMLTO4-sd (/dev/nst0) is mounted with:
   Volume:  MITT01
   Pool:MittwochPool
   Media type:  LTO4
   Total Bytes=157,171,864,755 Blocks=78,625,244 Bytes/block=1,999
   Positioned at File=162 Block=20,096


 Maybe it could be the blocksize with 1999? I have no Errors in my logfiles.

 regards

 Tobias




Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-10 Thread Gavin McCullagh
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:

 I did some tests with different gzip levels and with no compression at 
 all. It makes a difference but not as expected. Without compression I 
 still have a rate of only 11346.1 KB/s. Anything else I should try?

Are you sure the cross-over connection is operating at 1Gbps?  Are you sure
that route interface is being used?  It just seems coincidental that you're
still being capped to almost exactly 100Mbps.
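
A quick way to rule the link in or out (ordinary tools, nothing Bacula-specific;
the interface name and address below are just examples):

  ethtool eth0 | grep Speed     # should report 1000Mb/s on both ends
  iperf -s                      # on the storage daemon host
  iperf -c 192.168.1.1          # on the client, across the crosslink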

Gavin




Re: [Bacula-users] Slow backup even with a dedicated line: solved!

2011-01-10 Thread Oliver Hoffmann
 On Mon, 10 Jan 2011, Oliver Hoffmann wrote:
 
  I did some tests with different gzip levels and with no compression
  at all. It makes a difference but not as expected. Without
  compression I still have a rate of only 11346.1 KB/s. Anything else
  I should try?
 
 Are you sure the cross-over connection is operating at 1Gbps?  Are
 you sure that route interface is being used?  It just seems
 coincidental that you're still being capped to almost exactly 100Mbps.
 
 Gavin
 

As said before, I did some tests with ftp and scp, and those looked reasonable.
Oops, got it. The communication between the FD and the Director was
correct, but FD-to-SD traffic still went over the slow 100Mbit line.
Now I have a rate of ca. 88 MB/s :-)
Thanks for pointing me in the right direction!
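
For reference, the address the FD connects to for the SD is the Address in the
Storage resource of bacula-dir.conf, so that is where the 1Gbit address has to
go (sketch, using the resource name from my config; 192.168.1.1 is the server
side of the crosslink here):

Storage {
  Name = raid-xfs
  Address = 192.168.1.1     # the 1Gbit crosslink address, not the 100Mbit LAN address
  SD Port = 9103
  ...
}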

Cheers,

Oliver






Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-09 Thread Oliver Hoffmann
I did some tests with different gzip levels and with no compression at 
all. It makes a difference but not as expected. Without compression I 
still have a rate of only 11346.1 KB/s. Anything else I should try?

Cheers,

Oliver
 On Saturday 08 January 2011 11:46:11 Mister IT Guru wrote:
   
 On 07/01/2011 14:53, Rory Campbell-Lange wrote:
 
 On 07/01/11, Oliver Hoffmann (o...@dom.de) wrote:
   
 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?
 
 What sort of backups are you doing? Are you writing to tape? Are you
 using spooling?

   
 I am new(ish) to bacula, how does spooling speed up jobs, I have noticed 
 similar issues, but because the same behavior appeared on three 
 instances I've built recently. I'm very interested to learn how to 
 improve performance.
 

 Um.. compression?

   




Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-09 Thread Oliver Hoffmann

 Hi all,

 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?

 The communication definitely goes over 192.168.1.2/192.168.1.1.
 The settings on the client and server are pretty much default.

 I copied an 1,1GB test file with scp through the 1Gbit line and had
 approx. 50MB/s.
 Or with ftp:
 1073741824 bytes sent in 10.61 secs (98867.6 kB/s).

 Thx for hints,

 Some basic information please:

 * database
 * tape drive details if you've backing up to disk
 * the job and jobdefs resource for the job you are running

The database is MySQL.
I do not use tapes; backup to disk only.

Definitions:

Client {
  Name = test-fd
  Password = secret
#Connection over 1GB crosslink
  Address = 192.168.1.2
  FDPort = 9102
  Catalog = MyCatalog
  File Retention = 14 days
  Job Retention = 6 months
}

FileSet {
  Name = test-fd
  Include {
#Here is roughly 2GB of mysql data
File = /tmp/backup
Options {
  signature = MD5
#I did different levels. Now without compression
 # Compression = GZIP9
}
  }
}
Job {
  Name = test
  Type = Backup
  Level = Full
  Client = test-fd
  FileSet = test-fd
#I just start this one myself
  Schedule = never
  Storage = raid-xfs
  Pool = Pool1
  Messages = Standard
}

As said, the rate is still only ca. 11300 KB/s.

Thanx,

Oliver








Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-09 Thread Dan Langille
On 1/9/2011 6:19 PM, Oliver Hoffmann wrote:

 Hi all,

 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?

 The communication definitely goes over 192.168.1.2/192.168.1.1.
 The settings on the client and server are pretty much default.

 I copied an 1,1GB test file with scp through the 1Gbit line and had
 approx. 50MB/s.
 Or with ftp:
 1073741824 bytes sent in 10.61 secs (98867.6 kB/s).

 Thx for hints,

 Some basic information please:

 * database
 * tape drive details if you've backing up to disk
 * the job and jobdefs resource for the job you are running

 Database is mysql.
 I do not use tapes. Backup to disk only.

 Definitions:

 Client {
Name = test-fd
Password = secret
 #Connection over 1GB crosslink
Address = 192.168.1.2
FDPort = 9102
Catalog = MyCatalog
File Retention = 14 days
Job Retention = 6 months
 }

 FileSet {
Name = test-fd
Include {
 #Here is roughly 2GB of mysl data
  File = /tmp/backup
  Options {
signature = MD5
 #I did different levels. Now without compression
   # Compression = GZIP9
  }
}
 }
 Job {
Name = test
Type = Backup
Level = Full
Client = test-fd
FileSet = test-fd
 #I just start this one myself
Schedule = never
Storage = raid-xfs
Pool = Pool1
Messages = Standard
 }

 As said, the rate is still only ca. 11300kb/s.

To the end of this email, please append the job output (i.e. the job email).

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-08 Thread Mister IT Guru
On 07/01/2011 14:53, Rory Campbell-Lange wrote:
 On 07/01/11, Oliver Hoffmann (o...@dom.de) wrote:
 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?
 What sort of backups are you doing? Are you writing to tape? Are you
 using spooling?

I am new(ish) to Bacula. How does spooling speed up jobs? I have noticed
similar issues, and the same behaviour appeared on three
instances I've built recently. I'm very interested to learn how to
improve performance.



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-08 Thread Silver Salonen
On Saturday 08 January 2011 11:46:11 Mister IT Guru wrote:
 On 07/01/2011 14:53, Rory Campbell-Lange wrote:
  On 07/01/11, Oliver Hoffmann (o...@dom.de) wrote:
  I do full backups at the weekend and it just takes too long. 12h or so.
  bacula does one job after the other and I have a max. transfer rate of
  11 to 12 MBytes/second due to the 100Mbit connection.
 
  For testing purpose I connected one client via crosslink (1Gbit on
  both sides) to the server. But I still have the same transfer rate. Why
  is that?
  What sort of backups are you doing? Are you writing to tape? Are you
  using spooling?
 
 I am new(ish) to bacula, how does spooling speed up jobs, I have noticed 
 similar issues, but because the same behavior appeared on three 
 instances I've built recently. I'm very interested to learn how to 
 improve performance.

Um.. compression?

-- 
Silver



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-08 Thread Dan Langille
On 1/7/2011 9:48 AM, Oliver Hoffmann wrote:
 Hi all,

 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?

 The communication definitely goes over 192.168.1.2/192.168.1.1.
 The settings on the client and server are pretty much default.

 I copied an 1,1GB test file with scp through the 1Gbit line and had
 approx. 50MB/s.
 Or with ftp:
 1073741824 bytes sent in 10.61 secs (98867.6 kB/s).

 Thx for hints,

Some basic information please:

* database
* tape drive details (or note if you are backing up to disk)
* the Job and JobDefs resources for the job you are running

-- 
Dan Langille - http://langille.org/



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-08 Thread Dan Langille
On 1/8/2011 4:46 AM, Mister IT Guru wrote:
 On 07/01/2011 14:53, Rory Campbell-Lange wrote:
 On 07/01/11, Oliver Hoffmann (o...@dom.de) wrote:
 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.

 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?
 What sort of backups are you doing? Are you writing to tape? Are you
 using spooling?

 I am new(ish) to bacula, how does spooling speed up jobs, I have noticed
 similar issues, but because the same behavior appeared on three
 instances I've built recently. I'm very interested to learn how to
 improve performance.

This is called thread hi-jacking.  Please do not do it.  Please start a 
new thread asking for information about spooling.  But I think you'll 
find many questions about spooling are already covered in the docs and 
in the archives.  :)

-- 
Dan Langille - http://langille.org/



[Bacula-users] Slow backup even with a dedicated line

2011-01-07 Thread Oliver Hoffmann
Hi all,

I do full backups at the weekend and it just takes too long, 12h or so.
Bacula does one job after the other and I have a max transfer rate of
11 to 12 MBytes/second due to the 100Mbit connection.

For testing purposes I connected one client via crosslink (1Gbit on
both sides) to the server. But I still get the same transfer rate. Why
is that?

The communication definitely goes over 192.168.1.2/192.168.1.1.
The settings on the client and server are pretty much default.

I copied a 1.1GB test file with scp through the 1Gbit line and got
approx. 50MB/s.
Or with ftp:
1073741824 bytes sent in 10.61 secs (98867.6 kB/s).

Thx for hints,

Oliver



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-07 Thread Rory Campbell-Lange
On 07/01/11, Oliver Hoffmann (o...@dom.de) wrote:
 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.
 
 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?

What sort of backups are you doing? Are you writing to tape? Are you
using spooling?

-- 
Rory Campbell-Lange
r...@campbell-lange.net



Re: [Bacula-users] Slow backup even with a dedicated line

2011-01-07 Thread Silver Salonen
On Friday 07 January 2011 16:48:07 Oliver Hoffmann wrote:
 Hi all,
 
 I do full backups at the weekend and it just takes too long. 12h or so.
 bacula does one job after the other and I have a max. transfer rate of
 11 to 12 MBytes/second due to the 100Mbit connection.
 
 For testing purpose I connected one client via crosslink (1Gbit on
 both sides) to the server. But I still have the same transfer rate. Why
 is that?
 
 The communication definitely goes over 192.168.1.2/192.168.1.1.
 The settings on the client and server are pretty much default.
 
 I copied an 1,1GB test file with scp through the 1Gbit line and had
 approx. 50MB/s. 
 Or with ftp:
 1073741824 bytes sent in 10.61 secs (98867.6 kB/s).
 
 Thx for hints,
 
 Oliver

Years ago I did the same test and I found out that compression was to blame - 
when I turned off compression I got basically the speed of HDD writing.

-- 
Silver



[Bacula-users] Slow backup rate

2009-12-09 Thread Carlo Filippetto
Hi,
I would like to know whether it is normal to have throughput as slow as this:


CATALOG
---
  FD Bytes Written:   478,808,703 (478.8 MB)
  SD Bytes Written:   478,809,069 (478.8 MB)
  Rate:   402.0 KB/s
  Software Compression:   None


INCREMENTAL
--
  SD Bytes Written:   40,582,899 (40.58 MB)
  Rate:   129.7 KB/s
  Software Compression:   86.2 %

  SD Bytes Written:   32,037,212 (32.03 MB)
  Rate:   179.9 KB/s
  Software Compression:   89.7 %


FULL
-
  Elapsed time:   1 day 22 hours 13 mins 37 secs
  Priority:   10
  FD Files Written:   237,200
  SD Files Written:   237,200
  FD Bytes Written:   61,851,118,685 (61.85 GB)
  SD Bytes Written:   61,883,017,775 (61.88 GB)
  Rate:   371.7 KB/s
  Software Compression:   15.5 %


All my jobs use the maximum compression:

Options {
  compression = GZIP9   # add maximum compression
}

P.S. Is this the maximum compression?


My backups are made to hard disk. I have tried USB, iSCSI and e-SATA devices,
but the rate is always this slow.
My best throughput is less than 2 MB/s.

Is this all right, or is something wrong?

Thanks

Carlo (Italy)


Re: [Bacula-users] Slow backup rate

2009-12-09 Thread Timo Neuvonen
Hi Carlo,

For any modern hardware, your rates sound low.

Below is an example from my home system (Core2 Duo, 8GB memory, CentOS
5.4 Linux 64-bit), writing to an external USB disk with no compression. It is
backing up a local disk, with the catalog database on the same physical disk too (not
an ideal combination).

  FD Files Written:   194,837
  SD Files Written:   194,837
  FD Bytes Written:   164,511,043,989 (164.5 GB)
  SD Bytes Written:   164,537,630,363 (164.5 GB)
  Rate:   24175.0 KB/s
  Software Compression:   None


AFAIK, the GZIP9 you are using is the heaviest compression, both in terms of
expected ratio and required CPU load. I think the Bacula documentation
mentions that levels over 6 usually result in no significant improvement in
ratio but consume more CPU power.

An Incremental will be slower than a Full anyway. But since it is this much slower
even in this speed class, it makes me think the reason might be something other
than compression. Then again, since it shows a very different compression ratio, it
may also be due to compression, or to a different mix of file contents in the
job on average.

To start, try lowering the compression level, or disabling it entirely, to get
a reference point that helps you narrow down the possible reasons for the low
throughput.

A guess without knowing your system: if you have a Windows client with
antivirus software that intercepts every disk access, it could have a heavy impact on
this too.


--
TiN



Carlo Filippetto carlo.filippe...@gmail.com wrote in message
news:8791c1920912090138l3afdd208t8ec71c4678b1f...@mail.gmail.com...
Hi,
I would like to know if is true that I have so slow troughput as this:


CATALOG
---
  FD Bytes Written:   478,808,703 (478.8 MB)
  SD Bytes Written:   478,809,069 (478.8 MB)
  Rate:   402.0 KB/s
  Software Compression:   None


INCREMENTAL
--
  SD Bytes Written:   40,582,899 (40.58 MB)
  Rate:   129.7 KB/s
  Software Compression:   86.2 %


  SD Bytes Written:   32,037,212 (32.03 MB)
  Rate:   179.9 KB/s
  Software Compression:   89.7 %


FULL
-
  Elapsed time:   1 day 22 hours 13 mins 37 secs
  Priority:   10
  FD Files Written:   237,200
  SD Files Written:   237,200
  FD Bytes Written:   61,851,118,685 (61.85 GB)
  SD Bytes Written:   61,883,017,775 (61.88 GB)
  Rate:   371.7 KB/s
  Software Compression:   15.5 %




All my jobs have the maximum compression

Options {
compression = GZIP9 #aggiungo compressione massima


P.S. is this the maximum compression??



My backups are made on Hard Disk. I tried to use an USB, as an iscsi, as a 
e-sata device but the rate is so slow..
My best throughput is less then 2Mb/s

It's all right, or there something wrong?

Thak's

Carlo (Italy)





Re: [Bacula-users] Slow backup rate

2009-12-09 Thread Sean M Clark
Carlo Filippetto wrote:
 Hi,
 I would like to know if is true that I have so slow troughput as this:
[...]
 FULL
 -
   Elapsed time:   1 day 22 hours 13 mins 37 secs
[...]
   Rate:   371.7 KB/s
   Software Compression:   15.5 %
 
[...]
 All my jobs have the maximum compression
 
 /Options {
 compression = GZIP9 #aggiungo compressione massima
 
 
 /P.S. is this the maximum compression??

You didn't mention what speed your network connection is, but that's
slow even for 10BaseT.  I would expect to see at least about 800 KB/s on
10Mbit, at least 6-8MB/s on 100Mbit, or at least 25-35MB/s on Gigabit.
(Yeah, I know Gigabit ought to be up around 80MB/s at least, but in
practice I've yet to see anything manage it personally.  I know it's
possible, though.)

I saw someone else mentioned this already, but yes, GZIP9 is the
maximum, and that might actually be WHY the rate is slow.
The higher you set the compression rate, the more time the program
spends trying to cram more data into each packet before sending it.
If the compression rate is high enough, it may actually take much
more time to do the compression than is saved by sending less data.

A stupid analogy: if bacula-fd is the shipping department of a company,
no compression means the stuff (file data) being shipped is dumped
into a box until it reaches the top, then the box is closed and sent on
its way.  Compression level 1 would be like pausing to press down on the
stuff in the box once and then top off the extra space with a little bit
more file data before sending it.  Compression level 9 is like dumping
the stuff in, smashing it down, dumping more in, smashing it down,
dumping more in, jumping up and down on top of it, then recruiting some
guys from the next department over to stand on top while you seal the
box.  The box ends up holding a lot more, but it takes so much longer to
get the box ready to go that you end up not getting as much shipped out
in the same amount of time.

It can be even worse if the client machine is comparatively low in CPU
power or is heavily loaded (e.g. an old Windows box running Symantec
antivirus doing active protection and scanning every file that bacula
tries to examine or send...).

Unless space on the backup media or bandwidth usage are the biggest
concerns, I tend to drop the compression all the way down to
GZIP1-GZIP4, or turn it off altogether.
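
That is just the compression value in the FileSet Options block, e.g. (sketch):

  Options {
    signature = MD5
    compression = GZIP2   # much cheaper on CPU than GZIP9, often a similar ratio
  }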

On the other hand (or other thread, as the case may be, looking at the
discussion of bandwidth throttling), setting an unnecessarily high
compression level might also be used as a crude way of limiting
bandwidth usage if you don't care so much how long the backup actually
takes.

( I'm hoping to someday see LZMA1-LZMA9 or XZ1-XZ9 compression
options, too... )



Re: [Bacula-users] Slow backup rate

2009-12-09 Thread Steve Polyack
Sean M Clark wrote:
 Carlo Filippetto wrote:
   
 Hi,
 I would like to know if is true that I have so slow troughput as this:
 
 [...]
   
 FULL
 -
   Elapsed time:   1 day 22 hours 13 mins 37 secs
 
 [...]
   
   Rate:   371.7 KB/s
   Software Compression:   15.5 %

 
 [...]
   
 All my jobs have the maximum compression

 /Options {
 compression = GZIP9 #aggiungo compressione massima


 /P.S. is this the maximum compression??
 
 I saw someone else mentioned this already, but yes, GZIP9 is the
 maximum, and that might actually be WHY the rate is slow.
 The higher you set the compression rate, the more time the program
 spends trying to cram more data into each packet before sending it.
 If the compression rate is high enough, it may actually take much
 more time to do the compression than is saved by sending less data.

   
I can attest to this: GZIP compression (especially at GZIP9) can make a 
large difference in backup transfer speeds, especially when combined 
with client data encryption.  I wouldn't be surprised if he were to get 
5-10 MB/s after disabling GZIP compression, especially if the client 
does not have modern hardware.
 Unless space on the backup media or bandwidth usage are the biggest
 concerns, I tend to drop the compression all the way down to
 GZIP1-GZIP4, or turn it off altogether.
   
This is very true.  There is often very little gain in compression ratio 
with levels greater than GZIP1 or GZIP2, while higher levels will use 
considerably more CPU time.  Of course this depends entirely on the data 
you are compressing, so he should try the various levels himself to see 
whether there is any advantage to using a level greater than GZIP1.
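
A rough way to compare levels outside Bacula is to time plain gzip on a tar of
representative data (paths are examples; this only approximates the FD, which
compresses file by file):

  # build a sample archive from data you actually back up
  tar cf /tmp/sample.tar /path/to/representative/data
  # time each level and print the compressed size in bytes
  for n in 1 6 9; do
    echo "== gzip -$n =="
    time gzip -c -$n /tmp/sample.tar | wc -c
  done

If level 9 is only a few percent smaller than level 1 but several times
slower, the choice is easy.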
 ( I'm hoping to someday see LZMA1-LZMA9 or XZ1-XZ9 compression
 options, too... )

   
Ditto.


-Steve




Re: [Bacula-users] Slow backup

2009-05-31 Thread Bruno Friedmann
Also take a look at your Director's database settings
(PostgreSQL or MySQL).

If you are using the default distro settings, they are almost certainly too
low. Check the mailing list and wiki about this.
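
Purely as an illustration (the numbers depend entirely on your RAM and catalog
size and are not from this thread), the settings people usually raise for a
MySQL catalog look like this in my.cnf:

  [mysqld]
  # distro defaults are often far too small for a large File table
  innodb_buffer_pool_size = 512M
  key_buffer_size         = 256M
  max_allowed_packet      = 32M

For PostgreSQL the usual candidates are shared_buffers, work_mem and
checkpoint_segments in postgresql.conf.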


Il Neofita wrote:
 Hi
 I am using EXt3
 and yes I also have small
 Probably
 50%  2M
 40%  10M
 10%40M
 
 On Thu, May 28, 2009 at 8:39 AM, Uwe Schuerkamp hoo...@nionex.net wrote:
 
 On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
 First of all thank you for the answer
 No I do not use compression in my file set
 Options {
   signature = MD5
 }
 I tried to upload with sftp

 Uploading testfile to /tmp/terrierj
 testfile  100%   83MB  41.4MB/s
 00:02
 There is only a problem,
 I have the following configuration
 two ethernet card in both servers
 one connected to the LAN with an IP 10.10.1.X with a speed of 100M

 and the other connected between the two servers  with the IPs
 192.168.10.x
 with the speed of 1G

 On my bacula-dir.conf other the client I put in the address the right
 address and on the statistic of the second ethernet (that one a 1G) I
 have
 100M o traffic therefore is used

 Thank you for the support and any Idea or test that I should do

 Are you backing up a lot of small files? I've seen rates dropping to
 the hundreds of kilobytes when bacula encounters directories with lots
 of small files. Which filesystem are you using on the client host?

 Cheers,

 Uwe




-- 

 Bruno Friedmann





[Bacula-users] Slow backup

2009-05-28 Thread Il Neofita
I connected the backup server and the client with a crossover cable at 1G;
however:
Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
What can I check?
I am using SAS disks.

With ethtool I see
Speed: 1000Mb/s
so the link speed is correct.


Re: [Bacula-users] Slow backup

2009-05-28 Thread Silver Salonen
On Thursday 28 May 2009 13:01:06 Il Neofita wrote:
 I connected the backup server and the client with a crossover cable at 1G
 however
 Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
 What can I check?
 I am using SAS disks
 
 With ethtool I have
 Speed: 1000Mb/s
 therefore is correct

Check compression. When I use GZIP to compress data, I rarely get more than 
2MB/s. If I don't use any compression, the transfer rate goes up to disk/LAN 
speed :)

-- 
Silver



Re: [Bacula-users] Slow backup

2009-05-28 Thread Daniele Eccher

Hi,

there is 5 GB of data and the average speed is 9 MB/s. That speed is slow.

Try to copy a big file from the server to the client (or vice versa) and check
the speed of the copy with iptraf. I think the problem is not in Bacula but in
the distro.
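
One quick way to measure raw network throughput independently of the disks and
of Bacula is to push zeros through netcat and read the rate dd reports
(hostname and port are examples; option syntax differs a little between netcat
variants):

  # on the client (receiver): discard everything arriving on port 12345
  nc -l -p 12345 > /dev/null
  # on the backup server (sender): push 1 GB of zeros across the wire
  dd if=/dev/zero bs=1M count=1024 | nc client.example.com 12345

You can also watch the transfer live with iptraf while it runs.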


Daniele


Il giorno 28/mag/09, alle ore 12:01, Il Neofita ha scritto:

I connected the backup server and the client with a crossover cable  
at 1G

however
Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
What can I check?
I am using SAS disks

With ethtool I have
Speed: 1000Mb/s
therefore is correct




--
Daniele Eccher
Gruppo Darco  - ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma
E-mail: daniele.ecc...@sociale.it  tel  : +39 06 57060 500 cell : +39  
346 1426128




Re: [Bacula-users] Slow backup

2009-05-28 Thread Uwe Schuerkamp
On Thu, May 28, 2009 at 06:01:06AM -0400, Il Neofita wrote:
 I connected the backup server and the client with a crossover cable at 1G
 however
 Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
 What can I check?
 I am using SAS disks
 
 With ethtool I have
 Speed: 1000Mb/s
 therefore is correct


Are you using gzip compression? 

All the best, 

Uwe 


-- 
uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax:-69 72
Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr
NIONEX ist ein Unternehmen der DirectGroup Germany www.directgroupgermany.de



Re: [Bacula-users] Slow backup

2009-05-28 Thread Uwe Schuerkamp
On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
 First of all thank you for the answer
 No I do not use compression in my file set
 Options {
   signature = MD5
 }
 I tried to upload with sftp
 
 Uploading testfile to /tmp/terrierj
 testfile  100%   83MB  41.4MB/s   00:02
 
 There is only a problem,
 I have the following configuration
 two ethernet card in both servers
 one connected to the LAN with an IP 10.10.1.X with a speed of 100M
 
 and the other connected between the two servers  with the IPs 192.168.10.x
 with the speed of 1G
 
 On my bacula-dir.conf other the client I put in the address the right
 address and on the statistic of the second ethernet (that one a 1G) I have
 100M o traffic therefore is used
 
 Thank you for the support and any Idea or test that I should do
 

Are you backing up a lot of small files? I've seen rates dropping to
the hundreds of kilobytes when bacula encounters directories with lots
of small files. Which filesystem are you using on the client host? 

Cheers,

Uwe 

-- 
uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax:-69 72
Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr
NIONEX ist ein Unternehmen der DirectGroup Germany www.directgroupgermany.de



Re: [Bacula-users] Slow backup

2009-05-28 Thread Il Neofita
First of all, thank you for the answer.
No, I do not use compression in my FileSet:
Options {
  signature = MD5
}
I tried an upload with sftp:

Uploading testfile to /tmp/terrierj
testfile  100%   83MB  41.4MB/s   00:02

There is only one problem.
I have the following configuration:
two ethernet cards in both servers,
one connected to the LAN with an IP 10.10.1.X at a speed of 100M,

and the other connected directly between the two servers with IPs
192.168.10.x at a speed of 1G.

In my bacula-dir.conf I put the right (1G) address for the client, and the
statistics of the second ethernet card (the 1G one) show 100M of traffic, so
it is being used.

Thank you for the support and any idea or test I should try.


On Thu, May 28, 2009 at 6:31 AM, Uwe Schuerkamp hoo...@nionex.net wrote:

 On Thu, May 28, 2009 at 06:01:06AM -0400, Il Neofita wrote:
  I connected the backup server and the client with a crossover cable at 1G
  however
  Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
  What can I check?
  I am using SAS disks
 
  With ethtool I have
  Speed: 1000Mb/s
  therefore is correct


 Are you using gzip compression?

 All the best,

 Uwe


 --
 uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax:-69 72
 Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
 Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr
 NIONEX ist ein Unternehmen der DirectGroup Germany
 www.directgroupgermany.de



Re: [Bacula-users] Slow backup

2009-05-28 Thread Il Neofita
Hi,
I am using ext3, and yes, I also have many small files.
Probably:
50% < 2M
40% < 10M
10% < 40M

On Thu, May 28, 2009 at 8:39 AM, Uwe Schuerkamp hoo...@nionex.net wrote:

 On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
  First of all thank you for the answer
  No I do not use compression in my file set
  Options {
signature = MD5
  }
  I tried to upload with sftp
 
  Uploading testfile to /tmp/terrierj
  testfile  100%   83MB  41.4MB/s
 00:02
 
  There is only a problem,
  I have the following configuration
  two ethernet card in both servers
  one connected to the LAN with an IP 10.10.1.X with a speed of 100M
 
  and the other connected between the two servers  with the IPs
 192.168.10.x
  with the speed of 1G
 
  On my bacula-dir.conf other the client I put in the address the right
  address and on the statistic of the second ethernet (that one a 1G) I
 have
  100M o traffic therefore is used
 
  Thank you for the support and any Idea or test that I should do
 

 Are you backing up a lot of small files? I've seen rates dropping to
 the hundreds of kilobytes when bacula encounters directories with lots
 of small files. Which filesystem are you using on the client host?

 Cheers,

 Uwe

 --
 uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax:-69 72
 Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
 Registergericht Guetersloh HRB 4196, Geschaeftsfuehrer: Horst Gosewehr
 NIONEX ist ein Unternehmen der DirectGroup Germany
 www.directgroupgermany.de



[Bacula-users] Slow Backup Speeds Using Bacula

2007-12-05 Thread Brad M
Hi there,
I've been having some problems attempting to increase the write speed to my 
tape drive through Bacula.
 
If I use the operating system to communicate directly with the tape drive, I 
get the appropriate read and write speeds, but through Bacula I get a third of 
the speed. I have tried spooling the data to a separate physical drive before 
writing, with no luck. I have played around with block sizes of 64K, 128K, 
196K and 256K, but the performance is the same. I have tried various backup 
sizes ranging from under 1 GB to 80 GB, but the speed stays constant in all tests.
 
My average write speed seems to stay around 20 MB/s, give or take a few 
megabytes. I should be getting at least double that speed for the drive and 
SCSI card that I am using. 
 
My software setup is as follows:
FreeBSD 5.5 x86
Bacula 2.2.5 (Installed from source)
MySQL 5.0.45

My hardware setup is as follows:
CPU - AMD AM2 5600+
Motherboard - Asus M2N-LR
SCSI Card - Adaptec 29160N
Tape Drive - HP StorageWorks Ultrium 448
Data Cartridge - HP LTO2 Ultrium 400GB
 
Here are some examples:

DD Read/Write Test
server1# dd if=/dev/zero of=/dev/nsa0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes transferred in 299.338943 secs (65680729 bytes/sec)
server1# mt -f /dev/nsa0 rewind
server1# dd of=/dev/null if=/dev/nsa0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes transferred in 291.253620 secs (67504054 bytes/sec)

Btape Fill Test
*fill (abbreviated)
19:02:56 Flush block, write EOF
Wrote blk_block=3140000, dev_blk_num=4000 VolBytes=202,308,728,832 rate=19647.3 KB/s
Wrote blk_block=3145000, dev_blk_num=9000 VolBytes=202,631,288,832 rate=19644.3 KB/s
Wrote blk_block=3150000, dev_blk_num=14000 VolBytes=202,953,848,832 rate=19648.9 KB/s
Wrote blk_block=3155000, dev_blk_num=3500 VolBytes=203,276,408,832 rate=19644.0 KB/s
Wrote blk_block=3160000, dev_blk_num=8500 VolBytes=203,598,968,832 rate=19650.5 KB/s
Wrote blk_block=3165000, dev_blk_num=13500 VolBytes=203,921,528,832 rate=19657.0 KB/s
04-Dec 19:04 btape JobId 0: End of Volume TestVolume1 at 295:14046 on device HP_Ultrium (/dev/nsa0). Write of 64512 bytes got 0.
btape: btape.c:2345 Last block at: 295:14045 this_dev_block_num=14046
btape: btape.c:2379 End of tape 297:0. VolumeCapacity=203,956,752,384. Write rate = 19643.3 KB/s
Done writing 0 records ...
Wrote state file last_block_num1=14045 last_block_num2=0
19:04:38 Done filling tape at 297:0. Now beginning re-read of tape ...
04-Dec 19:05 btape JobId 0: Ready to read from volume TestVolume1 on device HP_Ultrium (/dev/nsa0).
Rewinding.
Reading the first 10000 records from 0:0.
10000 records read now at 1:5084
Reposition from 1:5084 to 295:14045
Reading block 14045.
The last block on the tape matches. Test succeeded.
Full Job Email Output
04-Dec 14:16 server1-dir JobId 57: Start Backup JobId 57, Job=Client1.2007-12-04_14.16.19
04-Dec 14:16 server1-dir JobId 57: Using Device HP_Ultrium
04-Dec 14:38 server1-sd JobId 57: Job write elapsed time = 00:22:16, Transfer rate = 24.13 M bytes/second
04-Dec 14:38 server1-dir JobId 57: Bacula server1-dir 2.2.5 (09Oct07): 04-Dec-2007 14:38:32
  Build OS:               i386-unknown-freebsd5.5 freebsd 5.5-RELEASE
  JobId:                  57
  Job:                    Client1.2007-12-04_14.16.19
  Backup Level:           Full
  Client:                 server1-fd 2.2.5 (09Oct07) i386-unknown-freebsd5.5,freebsd,5.5-RELEASE
  FileSet:                Full Set 2007-12-04 09:08:20
  Pool:                   Default (From Job resource)
  Storage:                HP_Ultrium (From Job resource)
  Scheduled time:         04-Dec-2007 14:16:06
  Start time:             04-Dec-2007 14:16:08
  End time:               04-Dec-2007 14:38:32
  Elapsed time:           22 mins 24 secs
  Priority:               1
  FD Files Written:       37,640
  SD Files Written:       37,640
  FD Bytes Written:       32,233,221,269 (32.23 GB)
  SD Bytes Written:       32,239,016,919 (32.23 GB)
  Rate:                   23983.1 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Volume name(s):         blahz
  Volume Session Id:      2
  Volume Session Time:    1196806442
  Last Volume Bytes:      32,264,322,048 (32.26 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK
04-Dec 14:38 server1-dir JobId 57: Begin pruning Jobs.
04-Dec 14:38 server1-dir JobId 57: No Jobs found to prune.
04-Dec 14:38 server1-dir JobId 57: Begin pruning Files.
04-Dec 14:38 server1-dir JobId 57: No Files found to prune.
04-Dec 14:38 server1-dir JobId 57: End auto prune.

Any help would be very appreciated. Thanks!

Brad.

Re: [Bacula-users] Slow Backup Speeds Using Bacula

2007-12-05 Thread John Drescher
 Hi there,
  I've been having some problems attempting to increase the write speed to my
 tape drive through Bacula.

  If I use the operating system to communicate directly with the tape drive,
 I get the appropriate read and write speeds but using Bacula, I get a third
 of the speed. I have tried spooling the data to a separate physical drive
 before writing, no luck. I have played around with the block sizes using
 64K, 128K, 196K and 256K but still the same performance. I have tried
 various backup sized ranging from under 1gb to 80gb but the speed stays
 constant on all tests.

Changing the block size should only make a small percentage change.
Also, spooling of a single job will not help, because bacula fills the
spool file completely and then stops the backup to send the spool file to
tape. When it does that you should see transfer rates of 30 to 45 MB/s
for the despooling, but the whole job will have a lower backup rate
because spooling and despooling are not running at the same time.
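
For reference, data spooling is switched on per Job and sized per Device; a
minimal sketch with example sizes and paths (not a recommendation for this
particular setup):

  # bacula-dir.conf, Job resource
  Spool Data = yes

  # bacula-sd.conf, Device resource
  Spool Directory    = /var/spool/bacula
  Maximum Spool Size = 20G

Spooling starts to pay off with several concurrent jobs, where one job can
despool to the drive at full speed while the others are still filling their
spool files.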


  My average write speed seems to stay around 20mb/s give or take a few
 megabytes. I should be getting at least double that speed for the drive and
 SCSI card that I am using.

20 to 30 MB/s is totally reasonable for a full backup on an LTO2 drive,
depending on compression.

  My software setup is as follows:
 FreeBSD 5.5 x86
 Bacula 2.2.5 (Installed from source)
 MySQL 5.0.45

 My hardware setup is as follows:
 CPU - AMD AM2 5600+
 Motherboard - Asus M2N-LR
 SCSI Card - Adaptec 29160N
 Tape Drive - HP StorageWorks Ultrium 448
 Data Cartridge - HP LTO2 Ultrium 400GB

  Here are some examples:
  DD Read/Write Test
  server1# dd if=/dev/zero of=/dev/nsa0 bs=65536 count=30
 30+0 records in
 30+0 records out
 1966080 bytes transferred in 299.338943 secs (65680729 bytes/sec)
 server1# mt -f /dev/nsa0 rewind
 server1# dd of=/dev/null if=/dev/nsa0 bs=65536 count=30
 30+0 records in
 30+0 records out
 1966080 bytes transferred in 291.253620 secs (67504054 bytes/sec)


These numbers are artificially high, because zeros compress extremely well,
so very few tape blocks are actually written to the tape; all you are really
testing here is the speed of the SCSI bus.

A more realistic test would be to substitute /dev/sda for the input,
assuming you have data on your sda drive...
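
A minimal version of that test, assuming /dev/sda really holds data and
/dev/nsa0 is the tape device (this overwrites a tape, so use a scratch one;
on FreeBSD the raw disk is more likely /dev/ad0 or /dev/da0 than /dev/sda):

  server1# mt -f /dev/nsa0 rewind
  server1# dd if=/dev/sda of=/dev/nsa0 bs=65536 count=300000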

John



Re: [Bacula-users] Slow backup with compression on Solaris

2007-01-04 Thread Frank Brodbeck
Hi,

Jonas Björklund has spoken, thus:
 Hello,
 
 I get very poor performance with compression on a client. It's a Sun Fire 
 V490 with 4 CPUs on 1350Mhz and 16GB memory.

I'm having similar problems with bacula here (but with different hardware).
filed: Sun Blade 1500 (1 CPU 1503Mhz 1GB memory)
director: Sun E450 ( 4 CPUs 400Mhz 4GB memory)
storaged: Sun Fire V240 (2 CPUs 1002Mhz 2GB memory)

All systems are running Solaris 10 and bacula 1.3.8. The storaged has a
hardware RAID attached, filed and director are using metadevices w/
mirroring.

For my tests SSL encryption and SHA1 signatures has been disabled.

Backups w/o compression run at about 5.5MB/s; GZIP6 takes that down to
1.2MB/s. 5.5MB/s isn't really that much for our 100MBit LAN, not to
speak of 1.2MB/s. Decreasing the compression to GZIP2 and GZIP1
resulted in a speed of 2.2MB/s. I watched each machine while backing up
with iostat, vmstat and netstat and couldn't see anything unusual on any
of them. No processes were sitting in the run queue, none were blocked
or swapping out, load wasn't high on any machine, and the disks' service
times were quite acceptable.

With one exception: the service time for the director's disks sometimes
grew over 30ms, but that shouldn't be a big deal?

I also raised the Maximum Network Buffer Size for the filed and
storaged up to 65536.
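
In case it helps someone else trying the same thing, that directive goes into
the FD and SD configuration; a minimal sketch with placeholder resource names:

  # bacula-fd.conf
  FileDaemon {
    Name = client-fd                        # example name
    Maximum Network Buffer Size = 65536
  }

  # bacula-sd.conf
  Storage {
    Name = storage-sd                       # example name
    Maximum Network Buffer Size = 65536
  }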

Load on the involved switch is low, no I/O errors were detected on the
wire.

As a comparison, transferring files via scp usually gives us a rate of
~10MB/s.

So, are there any suggestions where one could tweak to raise the
transfer rate of bacula?

Regards,
Frank.

-- 
-- Frank Brodbeck, BelWue-Koordination -- Tel: 0711/685-62502 --
   Rechenzentrum der Universitaet Stuttgart 
   Allmandring 3A, 70550 Stuttgart Fax: 0711/678-8363
-- mailto:[EMAIL PROTECTED] -- http://www.belwue.de/ --



[Bacula-users] Slow backup with compression on Solaris

2006-12-05 Thread Jonas Björklund

Hello,

I get very poor performance with compression on a client. It's a Sun Fire 
V490 with 4 CPUs at 1350 MHz and 16 GB of memory.


  JobId:  11
  Job:client1.2006-12-04_16.34.10
  Backup Level:   Full
  Client: sasma sparc-sun-solaris2.10,solaris,5.10
  FileSet:Sun System 2006-12-04 10:32:00
  Pool:   1Month
  Storage:File01
  Scheduled time: 04-Dec-2006 16:34:06
  Start time: 04-Dec-2006 16:34:13
  End time:   05-Dec-2006 03:53:36
  Elapsed time:   11 hours 19 mins 23 secs
  Priority:   10
  FD Files Written:   314,934
  SD Files Written:   314,934
  FD Bytes Written:   41,030,170,977 (41.03 GB)
  SD Bytes Written:   41,078,489,760 (41.07 GB)
  Rate:   1006.6 KB/s
  Software Compression:   82.0 %
  Volume name(s): 1Month-0004|1Month-0005
  Volume Session Id:  1
  Volume Session Time:1165246425
  Last Volume Bytes:  31,077,182,535 (31.07 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

When I run the same backup without compression it's fast. Is the server 
really so slow when it compresses data? The zlib is from Sun.


  JobId:  10
  Job:    client1.2006-12-04_14.46.15
  Backup Level:   Full (upgraded from Incremental)
  Client: sasma sparc-sun-solaris2.10,solaris,5.10
  FileSet:    Sun System 2006-12-04 10:32:00
  Pool:   1Month
  Storage:    File01
  Scheduled time: 04-Dec-2006 14:46:14
  Start time: 04-Dec-2006 14:46:18
  End time:   04-Dec-2006 16:13:24
  Elapsed time:   1 hour 27 mins 6 secs
  Priority:   10
  FD Files Written:   314,814
  SD Files Written:   314,814
  FD Bytes Written:   227,373,218,380 (227.3 GB)
  SD Bytes Written:   227,421,517,665 (227.4 GB)
  Rate:   43508.1 KB/s
  Software Compression:   None
  Volume name(s):         1Month-0000|1Month-0001|1Month-0002|1Month-0003|1Month-0004
  Volume Session Id:  6
  Volume Session Time:    1165224546
  Last Volume Bytes:  39,924,048,802 (39.92 GB)
  Non-fatal FD errors:    0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK


Re: [Bacula-users] Slow backup with compression on Solaris

2006-12-05 Thread Jonas Björklund


On Tue, 5 Dec 2006, Jonas Björklund wrote:

I get very poor performance with compression on a client. It's a Sun Fire 
V490 with 4 CPUs on 1350Mhz and 16GB memory.


Seems like the Sun server is slow. I got a little bit better performance 
when I used GZIP1 instead of GZIP (GZIP6).


Re: [Bacula-users] Slow backup with compression on Solaris

2006-12-05 Thread Martin Simmons
 On Tue, 5 Dec 2006 09:11:14 +0100 (CET), Jonas Bjorklund said:
 
 Hello,
 
 I get very poor performance with compression on a client. It's a Sun Fire
 V490 with 4 CPUs on 1350Mhz and 16GB memory.
 
JobId:  11
Job:client1.2006-12-04_16.34.10
Backup Level:   Full
Client: sasma sparc-sun-solaris2.10,solaris,5.10
FileSet:Sun System 2006-12-04 10:32:00
Pool:   1Month
Storage:File01
Scheduled time: 04-Dec-2006 16:34:06
Start time: 04-Dec-2006 16:34:13
End time:   05-Dec-2006 03:53:36
Elapsed time:   11 hours 19 mins 23 secs
Priority:   10
FD Files Written:   314,934
SD Files Written:   314,934
FD Bytes Written:   41,030,170,977 (41.03 GB)
SD Bytes Written:   41,078,489,760 (41.07 GB)
Rate:   1006.6 KB/s
Software Compression:   82.0 %
Volume name(s): 1Month-0004|1Month-0005
Volume Session Id:  1
Volume Session Time:1165246425
Last Volume Bytes:  31,077,182,535 (31.07 GB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup OK
 
 When I run the same backup wihtout compression it's fast. Is the server
 really so slow when it compress data? The zlib is from Sun.

You could check that the fd uses a large % of the CPU when compressing, to be
sure that it isn't waiting for something else.  Also, try timing tar vs.
tar+gzip on a large directory.
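
Something like this would do for that comparison on the client (the directory
is an example; prstat is the Solaris counterpart of top):

  # raw read speed vs. read-plus-compress speed on the same data
  time tar cf - /export/data > /dev/null
  time tar cf - /export/data | gzip -6 > /dev/null

  # watch bacula-fd's CPU share while a compressed backup runs
  prstat -p `pgrep bacula-fd`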

__Martin



Re: [Bacula-users] Slow backup with compression on Solaris

2006-12-05 Thread Masopust, Christian
  Hello,
  
  I get very poor performance with compression on a client. 
 It's a Sun Fire
  V490 with 4 CPUs on 1350Mhz and 16GB memory.
  
 JobId:  11
 Job:client1.2006-12-04_16.34.10
 Backup Level:   Full
 Client: sasma 
 sparc-sun-solaris2.10,solaris,5.10
 FileSet:Sun System 2006-12-04 10:32:00
 Pool:   1Month
 Storage:File01
 Scheduled time: 04-Dec-2006 16:34:06
 Start time: 04-Dec-2006 16:34:13
 End time:   05-Dec-2006 03:53:36
 Elapsed time:   11 hours 19 mins 23 secs
 Priority:   10
 FD Files Written:   314,934
 SD Files Written:   314,934
 FD Bytes Written:   41,030,170,977 (41.03 GB)
 SD Bytes Written:   41,078,489,760 (41.07 GB)
 Rate:   1006.6 KB/s
 Software Compression:   82.0 %
 Volume name(s): 1Month-0004|1Month-0005
 Volume Session Id:  1
 Volume Session Time:1165246425
 Last Volume Bytes:  31,077,182,535 (31.07 GB)
 Non-fatal FD errors:0
 SD Errors:  0
 FD termination status:  OK
 SD termination status:  OK
 Termination:Backup OK
  
  When I run the same backup wihtout compression it's fast. 
 Is the server
  really so slow when it compress data? The zlib is from Sun.
 
 You could check that the fd uses a lerge % of the CPU when 
 compressing, to be
 sure that it isn't waiting for something else.  Also, try 
 timing tar v.s.
 tar+gzip on a large directory.

Only a guess: have you already checked the data to be backed up?
I ran into similar problems when running a compressed backup of a
filesystem containing many already-compressed files.

chris



Re: [Bacula-users] Slow Backup Performance on Windows 2003.

2006-08-10 Thread Michael Morgan
I've seen similar data on my backups, but generally, only with very 
small backup sizes (less than 1GB). When I back up over 1GB, the rates 
increase dramatically, although backup from the Windows server is still 
only about 1/2 to 1/3 the Linux server rate. Before you get too 
concerned, try a bigger backup (3-5GB) and see what the performance is. 
With your small backups, it is probably mostly due to the ratio of 
overhead to actual data.

Mike

pedro moreno wrote:
   Hi.
 
   I have been working with bacula for some months, i love this software, 
 my current problem is this one:
 
 My Test Server.
 I'm running bacula server 1.38.11 on FreeBSD 6.1-p3
 Mysql 4.1.20
 Tape HP Storage Works 232 External 200GB Compress
 HD 200 IDE 7200 RPM
 AMD Duron 1.6 Ghz
 512 RAM
 
 Clients:
 2 Win NT 4  Client 1.38.4
 1 Windows 2k3 Estandard Edition Client 1.38.4
 1 Linux Red Hat 9 (bonding ON -- 2 NIC's) 1.38.4
 
 My production server is:
 I'm running bacula server 1.38.5 on FreeBSD 5.4-p16
 Mysql 5.0.21
 Tape HP Storage Works 232 External 200GB Compress
 HD 80 IDE 5400 RPM
 AMD Duron 1.6 Ghz
 512 RAM
 
  Im going to talk about the test server, because the production server 
 has almost the same performance, this is one report from a backup for 
 win2k3:
 JobId:  56
   Job:MBXPDC.2006-08-09_15.39.26
   Backup Level:   Full
   Client: MBXPDC Windows Server 2003,MVS,NT 5.2.3790
   FileSet:MBXPDC-FS 2006-08-02 15:58:32
   Pool:   MueblexFullTape
   Storage:LTO-1
   Scheduled time: 09-Aug-2006 15:39:07
   Start time: 09-Aug-2006 15:39:32
   End time:   09-Aug-2006 15:42:36
   Elapsed time:   3 mins 4 secs
   Priority:   12
   FD Files Written:   686
   SD Files Written:   686
   FD Bytes Written:   145,029,656 (145.0 MB)
   SD Bytes Written:   145,138,453 (145.1 MB)
   Rate:   788.2 KB/s
   Software Compression:   None
   Volume name(s): FullTape-0003
   Volume Session Id:  4
   Volume Session Time:1155162336
   Last Volume Bytes:  2,969,478,620 (2.969 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  OK
   SD termination status:  OK
   Termination:Backup OK
 
 
 You can see the rate...?
 
 This is the problem, this is a new server Supermicro:
 Xeon 3Ghz (2 cpu)
 2 GB RAM
 Raid 5(4 HD)
 
 The others serves are old computers and the Rate value is  1MB/s, i 
 still cannot increase this value. Example:
 
 WinNT 4(A) --- Spool On --Net..Buffer Size = 65536 Rate 2319.7   Data 
 Size 176MB
 WinNT 4(B) --- Spool On --Net..Buffer Size = 65536 Rate 2296.6   Data 
 Size 339MB
 Linux  --- Spool On --Net..Buffer Size = 65536 Rate 5536  
 Data Size 155MB
 FreeBSD---Spool On --Net..Buffer Size = 65536 Rate 1709.7 Data 
 Size 8MB
 Win2k3   ---Spool On --Net..Buffer Size = 65536 Rate 788.2  Data 
 Size 145MB VSS=yes
 
 WinNT 4(A) --- Spool OFF --Net..Buffer Size = 65536 Rate 2319.7   Data 
 Size 176MB
 WinNT 4(B) --- Spool OFF --Net..Buffer Size = 65536 Rate 2296.6   Data 
 Size 339MB
 Linux  --- Spool OFF --Net..Buffer Size = 65536 Rate 5536  
 Data Size 155MB
 FreeBSD---Spool OFF --Net..Buffer Size = 65536 Rate 1709.7 Data 
 Size 8MB
 Win2k3   ---Spool OFF --Net..Buffer Size = 65536 Rate 824   Data 
 Size 145MB VSS=yes
 
 Now check thi:
 
 WinNT 4(A) --- Spool  OFF--Net..Buffer Size = default Rate 1388.2   Data 
 Size 176MB
 WinNT 4(B) --- Spool OFF --Net..Buffer Size = default Rate 1716   Data 
 Size 339MB
 Linux  --- Spool OFF --Net..Buffer Size = default Rate 6739.5 
  Data Size 155MB
 FreeBSD---Spool OFF --Net..Buffer Size = default Rate 1398.7 
 Data Size 8MB
 Win2k3   ---Spool OFF --Net..Buffer Size = default Rate 805.7  
 Data Size 145MB VSS=yes
 
 WinNT 4(A) --- Spool On --Net..Buffer Size = default Rate 2124.1   Data 
 Size 176MB
 WinNT 4(B) --- Spool On --Net..Buffer Size = default Rate 1867.5   Data 
 Size 339MB
 Linux  --- Spool On --Net..Buffer Size =  default Rate 4559  
 Data Size 155MB
 FreeBSD---Spool On --Net..Buffer Size = default Rate 1459.2 Data 
 Size 8MB
 Win2k3   ---Spool On --Net..Buffer Size = default Rate 810.2   
 Data Size 145MB VSS=yes
 
 Them i disable the VSS=on, buffer = 65536
 
 Win2k3   ---Spool On --Net..Buffer Size = default Rate 814   
 Data Size 145MB VSS=off
 
 I check my Indexes on mysql, reading the maillist they look correct:
 
 mysql SHOW index from File;
 +---++--+--+-+---+-+--++--++-+
  
 
 | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation 
 | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 

[Bacula-users] Slow Backup Performance on Windows 2003.

2006-08-09 Thread pedro moreno
Hi.

I have been working with bacula for some months, i love this software, my
current problem is this one:

My Test Server:
I'm running bacula server 1.38.11 on FreeBSD 6.1-p3
Mysql 4.1.20
Tape HP Storage Works 232 External 200GB Compress
HD 200 IDE 7200 RPM
AMD Duron 1.6 Ghz
512 RAM

Clients:
2 Win NT 4 Client 1.38.4
1 Windows 2k3 Standard Edition Client 1.38.4
1 Linux Red Hat 9 (bonding ON -- 2 NIC's) 1.38.4

My production server is:
I'm running bacula server 1.38.5 on FreeBSD 5.4-p16
Mysql 5.0.21
Tape HP Storage Works 232 External 200GB Compress
HD 80 IDE 5400 RPM
AMD Duron 1.6 Ghz
512 RAM

I'm going to talk about the test server, because the production server has
almost the same performance. This is one report from a backup of the win2k3
client:

  JobId:                  56
  Job:                    MBXPDC.2006-08-09_15.39.26
  Backup Level:           Full
  Client:                 MBXPDC Windows Server 2003,MVS,NT 5.2.3790
  FileSet:                MBXPDC-FS 2006-08-02 15:58:32
  Pool:                   MueblexFullTape
  Storage:                LTO-1
  Scheduled time:         09-Aug-2006 15:39:07
  Start time:             09-Aug-2006 15:39:32
  End time:               09-Aug-2006 15:42:36
  Elapsed time:           3 mins 4 secs
  Priority:               12
  FD Files Written:       686
  SD Files Written:       686
  FD Bytes Written:       145,029,656 (145.0 MB)
  SD Bytes Written:       145,138,453 (145.1 MB)
  Rate:                   788.2 KB/s
  Software Compression:   None
  Volume name(s):         FullTape-0003
  Volume Session Id:      4
  Volume Session Time:    1155162336
  Last Volume Bytes:      2,969,478,620 (2.969 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

You can see the rate...?

This is the problem, and this is a new Supermicro server:
Xeon 3Ghz (2 cpu)
2 GB RAM
Raid 5 (4 HD)

The other servers are old computers and their Rate value is > 1MB/s; i still
cannot increase the value for the win2k3 client. Example:

WinNT 4(A) --- Spool On  -- Net. Buffer Size = 65536   Rate 2319.7   Data Size 176MB
WinNT 4(B) --- Spool On  -- Net. Buffer Size = 65536   Rate 2296.6   Data Size 339MB
Linux      --- Spool On  -- Net. Buffer Size = 65536   Rate 5536     Data Size 155MB
FreeBSD    --- Spool On  -- Net. Buffer Size = 65536   Rate 1709.7   Data Size 8MB
Win2k3     --- Spool On  -- Net. Buffer Size = 65536   Rate 788.2    Data Size 145MB VSS=yes

WinNT 4(A) --- Spool OFF -- Net. Buffer Size = 65536   Rate 2319.7   Data Size 176MB
WinNT 4(B) --- Spool OFF -- Net. Buffer Size = 65536   Rate 2296.6   Data Size 339MB
Linux      --- Spool OFF -- Net. Buffer Size = 65536   Rate 5536     Data Size 155MB
FreeBSD    --- Spool OFF -- Net. Buffer Size = 65536   Rate 1709.7   Data Size 8MB
Win2k3     --- Spool OFF -- Net. Buffer Size = 65536   Rate 824      Data Size 145MB VSS=yes

Now check this:

WinNT 4(A) --- Spool OFF -- Net. Buffer Size = default Rate 1388.2   Data Size 176MB
WinNT 4(B) --- Spool OFF -- Net. Buffer Size = default Rate 1716     Data Size 339MB
Linux      --- Spool OFF -- Net. Buffer Size = default Rate 6739.5   Data Size 155MB
FreeBSD    --- Spool OFF -- Net. Buffer Size = default Rate 1398.7   Data Size 8MB
Win2k3     --- Spool OFF -- Net. Buffer Size = default Rate 805.7    Data Size 145MB VSS=yes

WinNT 4(A) --- Spool On  -- Net. Buffer Size = default Rate 2124.1   Data Size 176MB
WinNT 4(B) --- Spool On  -- Net. Buffer Size = default Rate 1867.5   Data Size 339MB
Linux      --- Spool On  -- Net. Buffer Size = default Rate 4559     Data Size 155MB
FreeBSD    --- Spool On  -- Net. Buffer Size = default Rate 1459.2   Data Size 8MB
Win2k3     --- Spool On  -- Net. Buffer Size = default Rate 810.2    Data Size 145MB VSS=yes

Then i disable VSS (buffer = 65536):

Win2k3     --- Spool On  -- Net. Buffer Size = default Rate 814      Data Size 145MB VSS=off

I checked my indexes on mysql; reading the mailing list they look correct:

mysql> SHOW INDEX FROM File;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| File  |          0 | PRIMARY  |            1 | FileId      | A         |       90562 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId    |            1 | JobId       | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2  |            1 | JobId       | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2  |            2 | PathId      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2  |            3 | FilenameId  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

The performance is the same on the production server. I don't run anything
special on win2k3, it is still a fresh install, and i don't have any firewall
or antivirus software on it yet. My network is running at 100MB Full Duplex,
and the bacula server and the win2k3 box are on the same switch.

Has anyone seen an issue like this on win2k3...? I'm thinking that maybe the
Raid performance is not really good, but the other services are running fine.
Any ideas how to resolve this problem...? Any tip will be appreciated, thanks
all for your time!!!

NOTE: I am going to test a backup to File storage and see what happens.

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-24 Thread MaxxAtWork
On 7/14/06, Gabriele Bulfon [EMAIL PROTECTED] wrote:


 Hello,
 I have some bacula installations on SunFire 280R Sparc machines, with Solaris 
 10.
 These machines apperar to be very very slow with respect to other 
 installations
 (such as v20z) with same LTO2 device.
 As you can see from the report, 60Gb are copied in 9 hours, with an avarage 
 rate of  1898.9 KB/s!
 On a v20z, the same amount of data is done in 5 hours or less, with an 
 avarage a
 rate of 3139.2 KB/s..

Hello Gabriele,
I put my LTO-based backup system to work on my Solaris 9 machine (a
fileserver running on an old 220R!) and the results are pretty good:

21-Jul 15:39 scribe02-dir: Bacula 1.38.9 (02May06): 21-Jul-2006 15:39:54
  JobId:  261
  Job:DaisyProj.2006-07-21_14.59.44
  Backup Level:   Full
  Client: daisy-fd sparc-sun-solaris2.9,solaris,5.9
  FileSet:ExportProj 2006-07-21 14:59:46
  Pool:   LTO2alt
  Storage:Autoloader
  Scheduled time: 21-Jul-2006 14:59:24
  Start time: 21-Jul-2006 14:59:47
  End time:   21-Jul-2006 15:39:54
  Elapsed time:   40 mins 7 secs
  Priority:   10
  FD Files Written:   253
  SD Files Written:   253
  FD Bytes Written:   25,391,596,871 (25.39 GB)
  SD Bytes Written:   25,391,628,721 (25.39 GB)
  Rate:   10549.1 KB/s
  Software Compression:   None
  Volume name(s): LTO2alt1
  Volume Session Id:  13
  Volume Session Time:1153207122
  Last Volume Bytes:  25,410,464,200 (25.41 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

The bacula-fd is version 1.38.9, compiled with gcc 3.3.2, hosted on a
Sun Enterprise 220R (2 US-II @ 450MHz) with Solaris 9. The server has two
Sun D1000 arrays, so pretty old hardware. The sd is connected to the fd
via an ordinary 100 Mbit network, and it runs on a Xeon-class server with
Linux/Debian.

That said, I would be surprised if your issue were with bacula
'slowness' on Solaris, so maybe the server is slow to access data from
its disks or slow to send it?

Regards
-- 
Maxx



Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-20 Thread MaxxAtWork
On 7/19/06, Gabriele Bulfon [EMAIL PROTECTED] wrote:

 Do you mean that the whole 280R machine may be running at half-duplex?!

I'm not sure what interface you are using for the backups (probably an eriX),
but to get the link status and link capabilities from the Solaris side you
can e.g. use this script:
http://www.razorsedge.org/~mike/software/linkck/linkck

BTW: I also had a weird case where autonegotiation with Cisco switches
resulted in a slow link. It may be that you have to set the speed manually,
but if you do, remember to set it on both sides.

Cheers
-- 
Maxx


Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-19 Thread Gabriele Bulfon


I monitored a machine tonight.
This is a self-backup machine running both the server and the fd, still on a
sparc 280r.
The average throughput is 1Mb/sec, while the network backup of another fd is
4Mb/sec.
While the machine was backing up itself, I used "top" to see the machine
status. It was at 4% CPU load, with 3% assigned to postgres (I use postgres
as the bacula db). Then there was 0.5% for bacula-sd and 0.5% for bacula-fd.
Maybe I should use some sort of buffering on bacula?

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com
------------------------------------------------------------------------
Da: Hristo Benev [EMAIL PROTECTED]
A: Gabriele Bulfon [EMAIL PROTECTED]
Cc: MaxxAtWork [EMAIL PROTECTED], bacula-users@lists.sourceforge.net
Data: 18 luglio 2006 16.43.04 CEST
Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

On Tue, 2006-07-18 at 16:37 +0200, Gabriele Bulfon wrote:
 Do you have any suggestion about parameters I may use to optimize the
 daemons?
 
I'm not a developer :( ... unfortunately -- but you need to see where
is the problem (due high CPU usage; low available RAM etc..) to ask for
optimizations.

Furthermore I do not have sparc machines in my setup to give you
comparison data.
 
   Gabriele Bulfon - Sonicle S.r.l.
  Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com 
 
 
 
 --
 
 Da: Hristo Benev [EMAIL PROTECTED]
 A: Gabriele Bulfon [EMAIL PROTECTED] 
 Cc: MaxxAtWork [EMAIL PROTECTED] bacula-
 [EMAIL PROTECTED] 
 Data: 18 luglio 2006 16.32.54 CEST
 Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
 
 My opinion is that you have bottleneck somewhere (probably CPU
 or RAM, 
 network). 
 You need to monitor those machines during backup to see where
 exactly. 
 
 
 On Tue, 2006-07-18 at 15:53 +0200, Gabriele Bulfon wrote: 
  When the sparc machine are just clients, I may achieve
 2-4Mb/sec 
  When these machines are both servers and clients (backup
 themselves), 
  often I achieve less then 1Mb!! 
  
  
  Gabriele Bulfon - Sonicle S.r.l. 
  Tel +39 028246016 Int. 30 - Fax +39 028243880 
  Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY 
  http://www.sonicle.com 
  
  
  
 
 -- 
  
  Da: Hristo Benev [EMAIL PROTECTED] 
  A: Gabriele Bulfon [EMAIL PROTECTED] 
  Cc: MaxxAtWork [EMAIL PROTECTED] bacula- 
  [EMAIL PROTECTED] 
  Data: 18 luglio 2006 15.43.15 CEST 
  Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire
 280R 
  
  Just to exclude network! 
  
  What is the transfer rate that you can achieve with those 
  servers? 
  
  On Tue, 2006-07-18 at 15:37 +0200, Gabriele Bulfon wrote: 
   Oh no. I do not use compression at all. 
   And if I'd use compression, I'd use hardware one. 
   I don't think it's a problem of compression. 
   I have this problem only on sparc machines. 
   And they slow down the entire network backup during the 
  night 
   
   
   
   Gabriele Bulfon - Sonicle S.r.l. 
   Tel +39 028246016 Int. 30 - Fax +39 028243880 
   Via Felice Cavallotti 16 - 20089, Rozzano - Milano -
 ITALY 
   http://www.sonicle.com 
   
   
   
   
 
 -- 
   
   Da: Hristo Benev [EMAIL PROTECTED] 
   A: Gabriele Bulfon [EMAIL PROTECTED] 
   Cc: MaxxAtWork [EMAIL PROTECTED] bacula- 
   [EMAIL PROTECTED] 
   Data: 18 luglio 2006 15.16.37 CEST 
   Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 
  280R 
   
   Do you use compression, because You have difference in 
   processing power 
   Sparc III is much less powerful than Opteron? 
   
   On Tue, 2006-07-18 at 13:58 +0200, Gabriele Bulfon wrote: 
Yes, it's a network backup. The SunFire is running the 
  FD. 
The server is running on a v20z. 
This server backup many other machines, but no other
 one 
  is 
   running 
that slow. 
I usually achieve 5-8MB/s both on Windo

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-19 Thread Gabriele Bulfon


Do you mean that the whole 280R machine may be running at half-duplex?!

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

------------------------------------------------------------------------
Da: Kern Sibbald [EMAIL PROTECTED]
A: bacula-users@lists.sourceforge.net
Cc: Gabriele Bulfon [EMAIL PROTECTED]
Data: 19 luglio 2006 20.53.39 CEST
Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

One user had similar problems with his Sparc, and it turned out the problem
was his switches, which are auto speed detecting (10/100Mb).  The problem was
that when he plugged a device into the switch, the switch detected the speed
and then went into half-duplex mode, making Bacula run very slowly.  To
correct the problem he had to power off the switch and then power it back on.

On Wednesday 19 July 2006 09:07, Gabriele Bulfon wrote:
 I monitored a machine tonight.
 This is a self-backup machine running both the server and the fd, still on
 a sparc 280r. The avarage throughput is 1Mb/sec, while the network backup
 of another fd is 4Mb/sec. While the machine was backing-up itself, I used
 "top" to see the machine status. It was 4% of CPU load, with 3% assigned to
 postgres (I use postgres as the bacula db). Then there was 0.5% for
 bacula-sd and 0.5% for bacula-fd.
 Maybe I should use some sort of buffering on bacula?
 Gabriele Bulfon - Sonicle S.r.l.
 Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
 http://www.sonicle.com
 ---
--- Da: Hristo Benev [EMAIL PROTECTED]
 A: Gabriele Bulfon [EMAIL PROTECTED]
 Cc: MaxxAtWork [EMAIL PROTECTED] bacula-users@lists.sourceforge.net
 Data: 18 luglio 2006 16.43.04 CEST
 Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 280R

 On Tue, 2006-07-18 at 16:37 +0200, Gabriele Bulfon wrote:
  Do you have any suggestion about parameters I may use to optimize the
  daemons?

 I'm not a developer :( ... unfortunately -- but you need to see where
 is the problem (due high CPU usage; low available RAM etc..) to ask for
 optimizations.
 Furthermore I do not have sparc machines in my setup to give you
 comparison data.

Gabriele Bulfon - Sonicle S.r.l.
   Tel +39 028246016 Int. 30 - Fax +39 028243880
  Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
 http://www.sonicle.com
 
 
 
  -
 -
 
  Da: Hristo Benev [EMAIL PROTECTED]
  A: Gabriele Bulfon [EMAIL PROTECTED]
  Cc: MaxxAtWork [EMAIL PROTECTED] bacula-
  [EMAIL PROTECTED]
  Data: 18 luglio 2006 16.32.54 CEST
  Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
 
  My opinion is that you have bottleneck somewhere (probably CPU
  or RAM,
  network).
  You need to monitor those machines during backup to see where
  exactly.
 
  On Tue, 2006-07-18 at 15:53 +0200, Gabriele Bulfon wrote:
   When the sparc machine are just clients, I may achieve
 
  2-4Mb/sec
 
   When these machines are both servers and clients (backup
 
  themselves),
 
   often I achieve less then 1Mb!!
  
  
   Gabriele Bulfon - Sonicle S.r.l.
   Tel +39 028246016 Int. 30 - Fax +39 028243880
   Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
   http://www.sonicle.com
 
 
  -
 -
 
   Da: Hristo Benev [EMAIL PROTECTED]
   A: Gabriele Bulfon [EMAIL PROTECTED]
   Cc: MaxxAtWork [EMAIL PROTECTED] bacula-
   [EMAIL PROTECTED]
   Data: 18 luglio 2006 15.43.15 CEST
   Oggetto: Re: [Bacula-users] Slow backup on Sparc SunFire
 
  280R
 
   Just to exclude network!
  
   What is the transfer rate that you can achieve with those
   servers?
  
   On Tue, 2006-07-18 at 15:37 +0200, Gabriele Bulfon wrote:
Oh no. I do not use compression at all.
And if I'd use compression, I'd use hardware one.
I don't think it's a problem of compression.
I have this problem only on sparc machines.
And they slow down the entire network backup during the
  
   night
  
Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano -
 
  ITALY
 
http://www.sonicle.com
 
 
  -
 -
 

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-19 Thread Ryan Novosielski

In my organization, the Cisco switches we have are simply not reliable
enough in autonegotiate mode -- period. Autonegotiate sounds nice in
theory, but if you aren't plugging different devices into the port on a
regular basis, why risk not knowing what your interface is going to do?

I forget how to check or set this on Solaris, and it generally differs
by machine type (I have a 280 someplace but am too lazy to check -- look
on Google; I suspect ndd may be involved), but this is a very good place
to look.
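
For what it's worth, on Solaris of that era the check usually goes through
ndd; a sketch for an eri interface (the driver name and the exact parameter
names vary by NIC, so treat these as examples):

  # 1 = link up
  ndd -get /dev/eri link_status
  # eri/hme report 0 for 10Mb, 1 for 100Mb
  ndd -get /dev/eri link_speed
  # 0 = half duplex, 1 = full duplex
  ndd -get /dev/eri link_mode

If the link comes up half-duplex, forcing speed and duplex on both the NIC and
the switch port (rather than relying on autonegotiation) is the usual fix.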

Kern Sibbald wrote:
 One user had similar problems with his Sparc and it turned out the problem 
 was 
 his switches which are auto speed detecting (10/100Mb).  The problem was when 
 he plugged a device in to the switch, the switch detected the speed then went 
 into half-duplex mode making Bacula run very slowly.  To correct the problem 
 he had to power off the switch then power it back on.
 
 On Wednesday 19 July 2006 09:07, Gabriele Bulfon wrote:
 I monitored a machine tonight.
 This is a self-backup machine running both the server and the fd, still on
 a sparc 280r. The avarage throughput is 1Mb/sec, while the network backup
 of another fd is 4Mb/sec. While the machine was backing-up itself, I used
 top to see the machine status. It was 4% of CPU load, with 3% assigned to
 postgres (I use postgres as the bacula db). Then there was 0.5% for
 bacula-sd and 0.5% for bacula-fd.
 Maybe I should use some sort of buffering on bacula?
 Gabriele Bulfon - Sonicle S.r.l.
 Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
 http://www.sonicle.com

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread MaxxAtWork
On 7/14/06, Gabriele Bulfon [EMAIL PROTECTED] wrote:


Hello, I have some bacula installations on SunFire 280R Sparc machines, with
Solaris 10. These machines appear to be very, very slow compared to other
installations (such as a v20z) with the same LTO2 device.
As you can see from the report, 60 GB are copied in 9 hours, at an average
rate of 1898.9 KB/s! On a v20z, the same amount of data is done in 5 hours
or less, at an average rate of 3139.2 KB/s.
Are you talking about local backups, i.e. both your bacula-fd and bacula-sd
are running on the same server, or is there a network in between? In the
latter case you might be hitting the network transfer limit.

Cheers,
--
Maxx


Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Gabriele Bulfon


Yes, it's a network backup. The SunFire is running the FD.
The server is running on a v20z.
This server backs up many other machines, but no other one is running that
slow. I usually achieve 5-8 MB/s both on Windows clients and on other
Solaris 10 platforms (x86/amd).
Could it be that the compiled agent for SPARC has some problem?

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 



Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Hristo Benev
Do you use compression? Because there is a difference in processing power: a
SPARC III is much less powerful than an Opteron.

On Tue, 2006-07-18 at 13:58 +0200, Gabriele Bulfon wrote:
 Yes, it's a network backup. The SunFire is running the FD.
 The server is running on a v20z.
 This server backup many other machines, but no other one is running
 that slow.
 I usually achieve 5-8MB/s both on Windows clients and other solaris 10
 platforms (x86/amd).
 May it be that the compiled agent for SPARC has some problem?
 
 
   Gabriele Bulfon - Sonicle S.r.l.
  Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com 
 
 


Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Gabriele Bulfon


Oh no, I do not use compression at all.
And if I did use compression, I'd use the hardware one.
I don't think it's a problem of compression: I have this problem only on the
sparc machines, and they slow down the entire network backup during the night.

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 



Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Hristo Benev
Just to exclude the network:

What is the transfer rate that you can achieve between those servers?

On Tue, 2006-07-18 at 15:37 +0200, Gabriele Bulfon wrote:
 Oh no. I do not use compression at all.
 And if I'd use compression, I'd use hardware one.
 I don't think it's a problem of compression.
 I have this problem only on sparc machines.
 And they slow down the entire network backup during the night
 
 
 
   Gabriele Bulfon - Sonicle S.r.l.
  Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com 
 
 
 
-- 
Hristo Benev
IT Manager

WAVEROAD
Partners in Telecommunications

514-935-2020 x225 T
514-935-1001 F
www.waveroad.ca
[EMAIL PROTECTED]



Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Gabriele Bulfon


When the sparc machines are just clients, I may achieve 2-4 MB/s.
When these machines are both server and client (backing themselves up), I
often achieve less than 1 MB/s!

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 


Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Hristo Benev
My opinion is that you have a bottleneck somewhere (probably CPU, RAM, or the
network). You need to monitor those machines during a backup to see exactly
where.


On Tue, 2006-07-18 at 15:53 +0200, Gabriele Bulfon wrote:
 When the sparc machine are just clients, I may achieve 2-4Mb/sec
 When these machines are both servers and clients (backup themselves),
 often I achieve less then 1Mb!!
 
 
   Gabriele Bulfon - Sonicle S.r.l.
  Tel +39 028246016 Int. 30 - Fax +39 028243880
 Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com 
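
A minimal Solaris 10 monitoring recipe for a backup window, run on the
machine doing the backup while the job is active (the 5-second interval is
arbitrary):

  prstat -a 5       # per-process and per-user CPU/memory (bacula-fd/sd/dir, postgres)
  vmstat 5          # run queue, free memory, paging
  iostat -xnz 5     # per-device service times and %busy
  netstat -i 5      # packet counts, errors and collisions per interface

If the CPUs are mostly idle and the disks are not saturated, link negotiation
and the tape/SCSI path are the next suspects.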
 
 
 

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Gabriele Bulfon


Do you have any suggestions about parameters I could use to optimize the
daemons?

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 


Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-18 Thread Hristo Benev
On Tue, 2006-07-18 at 16:37 +0200, Gabriele Bulfon wrote:
 Do you have any suggestion about parameters I may use to optimize the
 daemons?
 
I'm not a developer :( ... unfortunately -- but you need to see where the
problem is (e.g. high CPU usage, low available RAM, etc.) before asking for
optimizations.

Furthermore, I do not have sparc machines in my setup to give you comparison
data.
 

Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-17 Thread Gabriele Bulfon


Thanks, this is very interesting.
My LTO2 drives (I have many installed) are from Certance.
Do you achieve these rates on a SunFire 280R? What is your SCSI card?
I usually never achieve more than 8-10 MB/s... how can that be?!

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 



Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-17 Thread Alan Brown
On Mon, 17 Jul 2006, Gabriele Bulfon wrote:

 Thanks,
 this is very interesting.
 My LTO2 drives (I have many installed) are from Certance.

Mine are HP drives installed in an HP MSL6000 library (aka NEO4000).

 Do you achieve these rates on a SunFire 280R?

No, Wintel hardware (HP Proliant DL580g2 - old tech now)

 What is your scsi card?

HP badged Qlogic Fibre HBAs (2Gb/s, but 4Gb/s is available now)

 I usually never achieve more than 8-10MB/s...how can it be?!

What do Certance quote as maximum throughput on the drives?

Typical problems are:

1: Slow scsi speeds
2: bad termination
3: Long cables
4: Scsi contention

The drives here go through a fibre-SCSI router on the library, which has a
dedicated U320 bus per drive. Newer (LTO3) drives are directly fibre-attached
and even faster.
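
A few low-risk checks for those problems on the Solaris side -- the device
name is taken from the original job report in this thread and may differ on
your system:

  # confirm the drive is visible and healthy, and see its inquiry data
  mt -f /dev/rmt/1mbn status
  iostat -En

  # look for resets, negotiation fall-backs or termination complaints
  grep -i scsi /var/adm/messages | tail -50

A raw write test outside Bacula (for example dd from a large disk file to the
tape with a big block size) also helps separate a slow SCSI path from a slow
client.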


AB





Re: [Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-15 Thread Alan Brown
On Fri, 14 Jul 2006, Gabriele Bulfon wrote:

 Hello,
 I have some bacula installations on SunFire 280R Sparc machines, with
 Solaris 10. These machines appear to be very, very slow with respect to
 other installations (such as a v20z) with the same LTO2 device.
 As you can see from the report, 60 GB are copied in 9 hours, at an average
 rate of 1898.9 KB/s!
 On a v20z, the same amount of data is done in 5 hours or less, at an average
 rate of 3139.2 KB/s.

Both of these rates are _very_ slow.

I benchmarked my LTO2 drives at 27-28MB/s last week using btape.

Marketing claims are 60 MB/s on 2:1 compressible data, but I believe btape's
fill algorithm produces non-compressible data.
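
For anyone wanting to repeat that measurement, a sketch of the procedure --
stop the SD first, and adjust the config path and device name (taken here
from the original report) to your setup:

  btape -c /etc/bacula/bacula-sd.conf /dev/rmt/1mbn
  # then at btape's * prompt:
  test        # basic append/read/positioning checks
  fill        # writes until the volume is full and reports the write rate

btape drives the device with the same block handling Bacula uses, so its rate
is a reasonable upper bound on what a backup job can reach.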

Even when spooling to disk and then despooling to tape, I still see overall
rates of around 10-12 MB/s.

Making up for that halving of speed, I am able to run concurrent backups, 
so the tape drives are more-or-less spinning all the time as various 
drives dump spooled data.

AB
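
The spooling-plus-concurrency setup described above maps onto a handful of
directives. A minimal sketch -- the sizes are purely illustrative, and the
resource names are borrowed from the job report at the end of this thread:

  # bacula-dir.conf: let several jobs feed the same drive
  Director {
    ...
    Maximum Concurrent Jobs = 4
  }
  Job {
    Name = "Enterprise_Backup"
    Spool Data = yes            # spool to disk first, then despool to tape
    ...
  }

  # bacula-sd.conf: cap the spool so one job cannot fill the whole disk
  Device {
    Name = QUANTUM
    Spool Directory = /var/bacula/spool
    Maximum Spool Size = 20 GB
    Maximum Job Spool Size = 5 GB
    ...
  }

Concurrency also has to be allowed on the Client and Storage resources
involved, otherwise jobs still queue one at a time.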




[Bacula-users] Slow backup on Sparc SunFire 280R

2006-07-14 Thread Gabriele Bulfon


Hello,
I have some bacula installations on SunFire 280R Sparc machines, with Solaris
10. These machines appear to be very, very slow with respect to other
installations (such as a v20z) with the same LTO2 device.
As you can see from the report, 60 GB are copied in 9 hours, at an average
rate of 1898.9 KB/s! On a v20z, the same amount of data is done in 5 hours or
less, at an average rate of 3139.2 KB/s.

13-Jul 23:00 iserver-dir: Start Backup JobId 605, Job=Enterprise_Backup.2006-07-13_23.00.00
13-Jul 23:00 iserver-sd: Recycled volume "GIOVEDI" on device "/dev/rmt/1mbn", all previous data lost.
14-Jul 07:54 iserver-dir: Bacula 1.36.2 (28Feb05): 14-Jul-2006 07:54:46
  JobId:  605
  Job:Enterprise_Backup.2006-07-13_23.00.00
  Backup Level:   Full
  Client: iserver-fd
  FileSet:"Full Set" 2006-01-02 23:00:02
  Pool:   "ThursdayPool"
  Storage:"QUANTUM"
  Start time: 13-Jul-2006 23:00:03
  End time:   14-Jul-2006 07:54:46
  FD Files Written:   437,238
  SD Files Written:   437,238
  FD Bytes Written:   60,922,201,547
  SD Bytes Written:   60,991,858,708
  Rate:   1898.9 KB/s
  Software Compression:   None
  Volume name(s): GIOVEDI
  Volume Session Id:  1
  Volume Session Time:1152818241
  Last Volume Bytes:  61,062,250,129
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

14-Jul 07:54 iserver-dir: Begin pruning Jobs.
14-Jul 07:54 iserver-dir: No Jobs found to prune.
14-Jul 07:54 iserver-dir: Begin pruning Files.
14-Jul 07:54 iserver-dir: No Files found to prune.
14-Jul 07:54 iserver-dir: End auto prune.
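
As a sanity check on the reported rate: the job ran from 23:00:03 to
07:54:46, about 32,083 seconds, and wrote 60,922,201,547 bytes, so

  60,922,201,547 bytes / 32,083 s ~= 1,898,900 bytes/s ~= 1898.9 KB/s

which matches the report: roughly 1.9 MB/s sustained, a small fraction of
what an LTO2 drive can absorb.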



 



 

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com

 



