Re: [Bacula-users] bacula performance

2019-06-03 Thread ce
I always check Task Manager/perfmon, and when it is in the middle of a bacula
job, I noticed the bacula-fd.exe process is missing from Windows Resource
Monitor > Network and Disk, and bacula-fd.exe CPU usage is 0 but memory usage
is 49,744 K.
Any thoughts?

On Mon, Jun 3, 2019 at 1:57 PM Dimitri Maziuk via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 6/3/19 2:06 PM, ce wrote:
>
> > running multiple jobs for the same client at the same time makes it
> > worse...!!!
>
> I use neither encryption nor windows, but this hints at disk i/o. I'm
> sure sysinternals have some iostat equivalent, or maybe you could
> try watching it in task manager/perfmon?
>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula performance

2019-06-03 Thread Dimitri Maziuk via Bacula-users
On 6/3/19 2:06 PM, ce wrote:

> running multiple jobs for the same client at the same time makes it
> worse...!!!

I use neither encryption nor windows, but this hints at disk i/o. I'm
sure sysinternals have some iostat equivalent, or maybe you could
try watching it in task manager/perfmon?

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula performance

2019-06-03 Thread ce
Does anyone else have issues with Bacula speed on 9.4.2?
Is it normal that Bacula's speed is very low when encryption is enabled for a
Windows client, with Windows network I/O around xx KB/s or less on 1 Gbps
bandwidth?
No CPU or memory issues on the client/server side, though.

P.S.
running multiple jobs for the same client at the same time makes it
worse...!!!
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula performance

2010-08-04 Thread rvent
Hello,

I am new to Bacula and so far i really like it.

I am testing Bacula and at the moment i am trying to backup to disk. Everything 
seems to be working fine except that the file transfers from the client to the 
server are not very fast.

The server's interface is a 2 Gbps card, the client's is 10/100 Mbps, and the
ports on the switch where they both connect are 10/100, so I know I am not
going to get anything faster than 100 Mbps. However, I am not getting anywhere
near that.

I found out I was using the default Maximum Network Buffer Size, so I changed
that to 100 Mbps' worth, or 13107200 bytes; however, when the backup runs it
only gets to 1489510.4 bytes per second. If I copy the same amount of data
between the same servers using scp, I get 11 MB/s.
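For comparison, the two reported rates in the same units (a quick sketch using the figures above, assuming scp's 11 MB/s means MiB/s):

```shell
# Bacula's reported backup rate vs. the scp rate, both in Mbit/s.
awk 'BEGIN {
  printf "bacula: %.1f Mbit/s\n", 1489510.4 * 8 / 1e6    # bytes/s -> Mbit/s
  printf "scp:    %.1f Mbit/s\n", 11 * 1048576 * 8 / 1e6 # 11 MiB/s -> Mbit/s
}'
```

Roughly 12 Mbit/s versus 92 Mbit/s, so Bacula is using about an eighth of what scp shows the link can carry.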

Is there anything i can check...?

Thanks; below are the relevant config files.


bacula-dir.conf

Job {
  Name = test1
  Type = Backup
  Level = Incremental
  Client = client1-fd
  FileSet = wintest
  Schedule = WeeklyCycle
  Storage = File
  Messages = Standard
  Pool = Default
  Write Bootstrap = /var/lib/bacula/%c.bsr
  Priority = 10
}

Client {
  Name = client1-fd
  Address = client1.domain.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = asdfsda              # password for FileDaemon
  File Retention = 14 days        # 30 days
  Job Retention = 1 months        # six months
  AutoPrune = yes                 # Prune expired Jobs/Files
}

Storage {
  Name = File
  Address = server.domain.com
  SDPort = 9103
  Password = asdf
  Device = FileStorage
  Media Type = File
}


bacula-sd.conf

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bacula-staging/backups
  LabelMedia = yes;               # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;           # when device opened, read it
  RemovableMedia = no;
  Maximum Network Buffer Size = 13107200
  AlwaysOpen = no;
}


+--
|This was sent by rvent...@h-st.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
The Palm PDK Hot Apps Program offers developers who use the
Plug-In Development Kit to bring their C/C++ apps to Palm for a share
of $1 Million in cash or HP Products. Visit us here for more details:
http://p.sf.net/sfu/dev2dev-palm
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance

2010-08-04 Thread John Drescher
On Wed, Aug 4, 2010 at 11:28 AM, rvent bacula-fo...@backupcentral.com wrote:
 Hello,

 I am new to Bacula and so far i really like it.

 I am testing Bacula and at the moment i am trying to backup to disk. 
 Everything seems to be working fine except that the file transfers from the 
 client to the server are not very fast.

 The server's interface is a 2Gbps card, the client's is 10/100mbps, the ports 
 on the switch where they both connect is 10/100 so i know i am not going to 
 get anything faster than 100mbps. However, i am not getting anywhere near 
 that.

 I found out i was using the default Maximum Network Buffer Size so i 
 changed that to 100mbps or 13107200 bytes, however, when the backup runs it 
 only gets to 1489510.4 bytes per second. If i copy the same amount of data 
 between the same server using scp i get 11MB/s.

 Is there anything i can check...?

 Thanks; below are the relevant config files.


 bacula-dir.conf

 Job {
   Name = test1
   Type = Backup
   Level = Incremental
   Client = client1-fd
   FileSet = wintest
   Schedule = WeeklyCycle
   Storage = File
   Messages = Standard
   Pool = Default
   Write Bootstrap = /var/lib/bacula/%c.bsr
   Priority = 10
 }

 Client {
   Name = client1-fd
   Address = client1.domain.com
   FDPort = 9102
   Catalog = MyCatalog
   Password = asdfsda              # password for FileDaemon
   File Retention = 14 days        # 30 days
   Job Retention = 1 months        # six months
   AutoPrune = yes                 # Prune expired Jobs/Files
 }

 Storage {
   Name = File
   Address = server.domain.com
   SDPort = 9103
   Password = asdf
   Device = FileStorage
   Media Type = File
 }


 bacula-sd.conf

 Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /bacula-staging/backups
   LabelMedia = yes;               # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;           # when device opened, read it
   RemovableMedia = no;
   Maximum Network Buffer Size = 13107200
   AlwaysOpen = no;
 }


Turn off software compression and see if that helps. Also maybe turn
on attribute spooling.
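For reference, John's two suggestions map onto the poster's config roughly like this (a sketch, not the poster's actual FileSet; the wintest contents and the C:/data path are hypothetical, and directive spellings should be checked against your Bacula version's manual):

```conf
# bacula-dir.conf (sketch)
FileSet {
  Name = wintest
  Include {
    Options {
      signature = MD5
      # compression = GZIP    # remove/comment any such line to disable
                              # software compression
    }
    File = "C:/data"          # hypothetical path
  }
}

Job {
  Name = test1
  SpoolAttributes = yes       # batch catalog attribute inserts instead of
                              # interleaving them with data writes
  # ... remaining Job directives as in the original post
}
```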

John

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula performance so slow ???

2009-10-13 Thread Klaus Troeger

Hi,

i did a clean setup of bacula on a

Intel(R) Xeon(TM) CPU 3.60GHz, 3 GB memory, an Intel RAID controller combining
5 internal 72 GB U320/10k SCSI LVD drives into a RAID-5 array, where
everything lives.
My Quantum M1500 LTO-3 loader is connected via SCSI 160 LVD to the
Adaptec 39160
card.

OS is (but same results with Ubuntu 9.04 server):

[r...@denbvsbcks1 ~]# uname -a
Linux denbvsbcks1.int.linuxstar.de 2.6.30.8-64.fc11.x86_64 #1 SMP Fri
Sep 25 04:43:32 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[r...@denbvsbcks1 ~]# cat /etc/redhat-release
Fedora release 11 (Leonidas)

Bacula versions installed:

[r...@denbvsbcks1 disk1]# rpm -qa | grep bacula
bacula-docs-2.4.4-3.fc11.x86_64
bacula-storage-common-2.4.4-3.fc11.x86_64
bacula-console-bat-2.4.4-3.fc11.x86_64
bacula-client-2.4.4-3.fc11.x86_64
bacula-sysconfdir-2.4.4-3.fc11.x86_64
bacula-console-2.4.4-3.fc11.x86_64
bacula-director-mysql-2.4.4-3.fc11.x86_64
bacula-console-gnome-2.4.4-3.fc11.x86_64
bacula-director-common-2.4.4-3.fc11.x86_64
bacula-console-wxwidgets-2.4.4-3.fc11.x86_64
bacula-common-2.4.4-3.fc11.x86_64
bacula-traymonitor-2.4.4-3.fc11.x86_64
bacula-storage-mysql-2.4.4-3.fc11.x86_64

Main configuration storage-related:

#
Autochanger {
  Name = M1500
  Device = LTO-3-0
  Changer Command = /usr/libexec/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/sg3
}

Device {
  Name = LTO-3-0
  Drive Index = 0
  Media Type = LTO-3
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  LabelMedia = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
}

Bacula worked out of the box like a charm, except for the performance.

I get between 3-5 MB/sec depending on the type of backup (the local backup
server itself is near 5; via gigabit copper network it's 3), too slow for an
LTO-3 drive (equipped with only LTO-2 tapes, but ...).

Physical drive performance is 28sec for 1 Gigabyte, so ~35MB/sec

[r...@denbvsbcks1 disk1]# dd if=/dev/zero of=swapfile bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 79.907 s, 12.8 MB/s
[r...@denbvsbcks1 disk1]# date;tar cvf /dev/nst0 swapfile ;date
Fri Oct  2 06:00:22 CEST 2009
swapfile
Fri Oct  2 06:00:50 CEST 2009

So, it's not the physical hardware, it has to be Bacula itself.

btape tests are o.k.
[r...@denbvsbcks1 ~]# btape /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
02-Oct 05:38 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
command.
02-Oct 05:38 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 1.
btape: btape.c:372 open device LTO-3-0 (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:831 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:847 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:856 Rewind OK.
1000 blocks re-read correctly.
Got EOF on tape.
1000 blocks re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===


=== Write, rewind, and position test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and position to a few blocks and verify that it is correct.

This is an *essential* feature ...

btape: btape.c:943 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:959 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:968 Rewind OK.
Reposition to file:block 0:4
Block 5 re-read correctly.
Reposition to file:block 0:200
Block 201 re-read correctly.
Reposition to file:block 0:999
Block 1000 re-read correctly.
Reposition to file:block 1:0
Block 1001 re-read correctly.
Reposition to file:block 1:600
Block 1601 re-read correctly.
Reposition to file:block 1:999
Block 2000 re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===



=== Append files test ===

This test is essential to Bacula.

I'm going to write one record  in file 0,
   two records in file 1,
 and three records in file 2

btape: btape.c:475 Rewound LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: 

Re: [Bacula-users] Bacula performance so slow ???

2009-10-13 Thread Alan Brown
On Fri, 2 Oct 2009, Klaus Troeger wrote:

 LTO-3 drive (equipped with only LTO-2 tapes, but )

 Physical drive performance is 28sec for 1 Gigabyte, so ~35MB/sec

 [r...@denbvsbcks1 disk1]# dd if=/dev/zero of=swapfile bs=1024 count=1000000
 1000000+0 records in
 1000000+0 records out
 1024000000 bytes (1.0 GB) copied, 79.907 s, 12.8 MB/s
 [r...@denbvsbcks1 disk1]# date;tar cvf /dev/nst0 swapfile ;date
 Fri Oct  2 06:00:22 CEST 2009
 swapfile
 Fri Oct  2 06:00:50 CEST 2009

 So, it's not the physical hardware, it has to be Bacula itself.

You fed it 1 GB of nulls and it only ran at 35 MB/s???

Try feeding it random (incompressible) data, please.
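A minimal sketch of that test (it writes to a scratch file here; substitute /dev/nst0 and a larger count for a real tape run):

```shell
# Zeros compress to almost nothing inside the drive, so use /dev/urandom.
dd if=/dev/urandom of=random.dat bs=1M count=64   # use count=1024 for ~1 GiB
# Time the write; replace random.tar with /dev/nst0 to hit the tape drive.
time tar -cf random.tar random.dat
```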




--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula performance so slow ???

2009-10-05 Thread Klaus Troeger

Hi,

i did a clean setup of bacula on a

Intel(R) Xeon(TM) CPU 3.60GHz, 3 GB memory, an Intel RAID controller combining
5 internal 72 GB U320/10k SCSI LVD drives into a RAID-5 array, where
everything lives.
My Quantum M1500 LTO-3 loader is connected via SCSI 160 LVD to the
Adaptec 39160
card.

OS is (but same results with Ubuntu 9.04 server):

[r...@denbvsbcks1 ~]# uname -a
Linux denbvsbcks1.int.linuxstar.de 2.6.30.8-64.fc11.x86_64 #1 SMP Fri
Sep 25 04:43:32 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[r...@denbvsbcks1 ~]# cat /etc/redhat-release
Fedora release 11 (Leonidas)

Bacula versions installed:

[r...@denbvsbcks1 disk1]# rpm -qa | grep bacula
bacula-docs-2.4.4-3.fc11.x86_64
bacula-storage-common-2.4.4-3.fc11.x86_64
bacula-console-bat-2.4.4-3.fc11.x86_64
bacula-client-2.4.4-3.fc11.x86_64
bacula-sysconfdir-2.4.4-3.fc11.x86_64
bacula-console-2.4.4-3.fc11.x86_64
bacula-director-mysql-2.4.4-3.fc11.x86_64
bacula-console-gnome-2.4.4-3.fc11.x86_64
bacula-director-common-2.4.4-3.fc11.x86_64
bacula-console-wxwidgets-2.4.4-3.fc11.x86_64
bacula-common-2.4.4-3.fc11.x86_64
bacula-traymonitor-2.4.4-3.fc11.x86_64
bacula-storage-mysql-2.4.4-3.fc11.x86_64

Main configuration storage-related:

#
Autochanger {
  Name = M1500
  Device = LTO-3-0
  Changer Command = /usr/libexec/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/sg3
}

Device {
  Name = LTO-3-0
  Drive Index = 0
  Media Type = LTO-3
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  LabelMedia = yes
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
}

Bacula worked out of the box like a charm, except for the performance.

I get between 3-5 MB/sec depending on the type of backup (the local backup
server itself is near 5; via gigabit copper network it's 3), too slow for an
LTO-3 drive (equipped with only LTO-2 tapes, but ...).

Physical drive performance is 28sec for 1 Gigabyte, so ~35MB/sec

[r...@denbvsbcks1 disk1]# dd if=/dev/zero of=swapfile bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 79.907 s, 12.8 MB/s
[r...@denbvsbcks1 disk1]# date;tar cvf /dev/nst0 swapfile ;date
Fri Oct  2 06:00:22 CEST 2009
swapfile
Fri Oct  2 06:00:50 CEST 2009

So, it's not the physical hardware, it has to be Bacula itself.

btape tests are o.k.
[r...@denbvsbcks1 ~]# btape /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
02-Oct 05:38 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
command.
02-Oct 05:38 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 1.
btape: btape.c:372 open device LTO-3-0 (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:831 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:847 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:856 Rewind OK.
1000 blocks re-read correctly.
Got EOF on tape.
1000 blocks re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===


=== Write, rewind, and position test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and position to a few blocks and verify that it is correct.

This is an *essential* feature ...

btape: btape.c:943 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:959 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:968 Rewind OK.
Reposition to file:block 0:4
Block 5 re-read correctly.
Reposition to file:block 0:200
Block 201 re-read correctly.
Reposition to file:block 0:999
Block 1000 re-read correctly.
Reposition to file:block 1:0
Block 1001 re-read correctly.
Reposition to file:block 1:600
Block 1601 re-read correctly.
Reposition to file:block 1:999
Block 2000 re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===



=== Append files test ===

This test is essential to Bacula.

I'm going to write one record  in file 0,
   two records in file 1,
 and three records in file 2

btape: btape.c:475 Rewound LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: btape.c:1579 Wrote block to device.
btape: btape.c:505 Wrote 1 EOF to LTO-3-0 (/dev/nst0)
btape: btape.c:1577 Wrote one record of 64412 bytes.
btape: 

Re: [Bacula-users] Bacula performance so slow ???

2009-10-05 Thread John Drescher
 Physical drive performance is 28sec for 1 Gigabyte, so ~35MB/sec

 [r...@denbvsbcks1 disk1]# dd if=/dev/zero of=swapfile bs=1024 count=1000000
 1000000+0 records in
 1000000+0 records out
 1024000000 bytes (1.0 GB) copied, 79.907 s, 12.8 MB/s

I am confused. This looks horribly slow. I would expect over 200 MB/s
for RAID 5 writing, and over 90 MB/s for a single SATA2 drive.

 [r...@denbvsbcks1 disk1]# date;tar cvf /dev/nst0 swapfile ;date
 Fri Oct  2 06:00:22 CEST 2009
 swapfile
 Fri Oct  2 06:00:50 CEST 2009

It's no good to test an LTO drive with a file full of zeros. Actually,
with this file you should get >60 MB/s with LTO-2 tapes, because zeros
will compress down to almost nothing.


 So, it's not the physical hardware, it has to be Bacula itself.


I must be misreading you, but this looks like your hardware/OS is
performing very badly, or you are using the slowest hardware you could
find.


John

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance so slow ???

2009-10-05 Thread Klaus Troeger

Hi,

o.k., if my following estimation is true, you are right.


5-Oct 18:22 denbvsbcks1-sd JobId 3: Job write elapsed time = 00:09:13,
Transfer rate = 5.881 M bytes/second

05-Oct 18:22 denbvsbcks1-sd JobId 3: Committing spooled data to Volume
DE1820. Despooling 3,259,489,919 bytes ...

05-Oct 18:24 denbvsbcks1-sd JobId 3: Despooling elapsed time =
00:01:32, Transfer rate = 35.42 M bytes/second

05-Oct 18:24 denbvsbcks1-sd JobId 3: Sending spooled attrs to the
Director. Despooling 36,499,907 bytes ...

05-Oct 18:25 denbvsbcks1-dir JobId 3: Bacula denbvsbcks1-dir 2.4.4
(28Dec08): 05-Oct-2009 18:25:11


Does that mean that the spooling to disk averaged 6 MB/sec, while the
writing to tape
reached 35 MB/sec?

In that case, the RAID controller really has a problem, because everything is
SCSI U320 LVD.

With the above in mind, I reinstalled everything and separated the OS
from the spooling
area (now the OS is on RAID-1; the spool area stays on RAID-5).
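For reference, the directives involved are roughly these (a sketch; the job name and spool path are hypothetical, and exact spellings should be checked against the manual for your Bacula version):

```conf
# bacula-dir.conf (sketch)
Job {
  Name = NightlyBackup        # hypothetical
  SpoolData = yes             # spool job data to disk, then despool to tape
}

# bacula-sd.conf (sketch)
Device {
  Name = LTO-3-0
  Spool Directory = /bacula-spool   # hypothetical path on the fast array
  Maximum Spool Size = 50G
}
```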

Thanks

Klaus


John Drescher wrote:
 Physical drive performance is 28sec for 1 Gigabyte, so ~35MB/sec

 [r...@denbvsbcks1 disk1]# dd if=/dev/zero of=swapfile bs=1024 count=1000000
 1000000+0 records in
 1000000+0 records out
 1024000000 bytes (1.0 GB) copied, 79.907 s, 12.8 MB/s

 I am confused. This looks horribly slow. I would expect over 200 MB
 /s for raid 5 writing. Over 90MB /s for a single SATA2 drive.

 [r...@denbvsbcks1 disk1]# date;tar cvf /dev/nst0 swapfile ;date
 Fri Oct  2 06:00:22 CEST 2009 swapfile Fri Oct  2 06:00:50 CEST
 2009

 Its no good to test an LTO drive with a file full of zeros.
 Actually with this file you should get  60 MB /s with LTO2 tapes
 because zeros will compress down to almost nothing.

 So, it's not the physical hardware, it has to be Bacula itself.


 I must be misreading you but this looks like your hardware / os is
 performing very badly or you are using the slowest hardware you
 could find.


 John


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance so slow ???

2009-10-05 Thread Cedric Tefft
Klaus Troeger wrote:

 Does it mean, that the spooling to disk was at avarage of 6 MB/sec,
 and the writing to tape
 reached the 35 MB/sec
It does appear that way.  Are you using encryption and/or (software) 
compression?  Both slow down the spooling process, though by how much I 
don't know.  If you are using either/both, you might try turning them 
off just to see if it makes any difference.

- Cedric



--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance with a 64512 block size

2007-09-18 Thread Chris Howells
Hi,

Marc Schiffbauer wrote:

Finally got around to messing around with bacula again...

 The manual says that nnn being the same number for both settings
 means fixed blocksize.
 
 As I understand it, your solutions should be to just set the
 Minimum Block Size so you get a good perfromance.
 
 Minimum Block Size = 1048576

Unfortunately, just setting a Minimum Block Size does not work; btape, for
instance, will not work then. It dies with a glibc error. (See end of
mail for the full trace.)

For instance with the following setting:

Minimum Block Size = 256000

[EMAIL PROTECTED]:/etc/bacula# btape -c bacula-sd.conf /dev/nst0
snip
test
snip
*** glibc detected *** malloc(): memory corruption: 0x080d9d90 ***

Setting both a Minimum Block Size and a Maximum Block Size to the same
value *does* seem to work with btape.

BTW, I tried using 1048576. Unfortunately this does not work. From 
src/stored/dev.c:

if (dev->max_block_size > 1000000) {
   Jmsg3(jcr, M_ERROR, 0, _("Block size %u on device %s is too
large, using default %u\n"),
      dev->max_block_size, dev->print_name(), DEFAULT_BLOCK_SIZE);

Oops.

Why can I not use > 1000000 bytes? This seems a *really* strange
restriction. I can happily use blocks of several megabytes using tar.


==

Btape error:

[EMAIL PROTECTED]:/etc/bacula# btape -c bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
btape: btape.c:368 open device LTO-4 (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

*** glibc detected *** malloc(): memory corruption: 0x080d9d90 ***
[EMAIL PROTECTED]:/etc/bacula# gdb `which btape`
GNU gdb 6.4-debian
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as i486-linux-gnu...Using host libthread_db 
library /lib/tls/i686/cmov/libthread_db.so.1.

(gdb) run -c /etc/bacula/bacula-sd.conf /dev/nst0
Starting program: /sbin/btape -c /etc/bacula/bacula-sd.conf /dev/nst0
[Thread debugging using libthread_db enabled]
[New Thread -1210566432 (LWP 2054)]
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
btape: btape.c:368 open device LTO-4 (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210566432 (LWP 2054)]
0x08051eae in write_block_to_dev (dcr=0x80c9a40) at block.c:462
462  memset(block-bufp, 0, wlen-blen); /* clear garbage */
Current language:  auto; currently c++

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance with a 64512 block size

2007-09-13 Thread Alan Brown
Marc Schiffbauer wrote:
 * Chris Howells wrote on 10.09.07 at 16:47:
   
 Arno Lehmann wrote:

 Thanks for your reply.

 
 I'd suggest to do some tests with Bacula, and after you found your 
 best settings, clearly mark all tapes with their respective block sizes.
   
 Will do.

 Are you basically suggesting that I should use the following sd directives:

 Minimum Block Size = nnn
 Maximum Block Size = nnn

 I am *slightly* concerned about operating the drive in fixed block mode, 
 given the dire warnings in the manual.
 

 The manual says that nnn being the same number for both settings
 means fixed blocksize.

 As I understand it, your solution should be to just set the
 Minimum Block Size so you get good performance.

 Minimum Block Size = 1048576

 won't this fix your performance?
   
How would this affect restores from older tapes?

Although I'm only using LTO-2 at the moment, this is of interest to me
as well, because there are clear bottlenecks showing up where the tape
drive isn't running quite as fast as it should be with the default 64 KB
block size - especially on highly compressible data like logfiles and
database dumps.





-
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula performance with a 64512 block size

2007-09-10 Thread Chris Howells
Hi,

I am currently struggling to get any kind of reasonable performance out
of Bacula on my LTO-4 tape drive. I have done a considerable amount of
testing and benchmarking, and my hunch is that Bacula's block size of 64512
bytes is causing the performance problems.

To test the drive, I used tar, with various block sizes.

Blocking size in tar-speak refers to n * 512 bytes, so a blocking size 
of 2048 actually means a 1048576 byte block.

My results show:

Blocking size   Time (s)   MB/s
126             105        20.78
250              81        26.94
1000             54        40.41
1500             43        50.75
2048             34        64.19

Plotting Blocking size against MB/s shows a direct linear relationship 
between blocking size and time (and therefore MB/s).

A blocking size of 126 corresponds to a block of 64512 bytes, as used by 
bacula. Strangely enough (or not :) this is *nearly exactly* the maximum 
performance that I have ever seen bacula write to the drive.
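The tar benchmark above can be reproduced along these lines (a sketch; it writes to a scratch file here, so substitute /dev/nst0 for a real drive test):

```shell
# tar's -b is the blocking factor in 512-byte units, so -b 2048 = 1 MiB blocks.
dd if=/dev/urandom of=testdata bs=1M count=32     # sample payload
for b in 126 250 1000 1500 2048; do
  echo "blocking factor $b = $((b * 512)) bytes per block"
  time tar -b "$b" -cf /tmp/bench.tar testdata    # use -f /dev/nst0 for tape
done
```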

I have tried Bacula using the FIFO virtual backup device, and I can see
data coming in at speeds far, far in excess of 20 MB/s. In fact, I
have had it coming in from two servers at 800 Mbit/sec over a GigE
network. I am therefore confident that it is not the Bacula catalogue
causing the performance issues.

I am also getting very little compression with bacula - presumably this 
is because the tape drive can't compress 64512 blocks very well, and 
needs to operate on larger chunks of data.

Is there any way to safely increase the size from 64512 blocks to see if 
that helps matters? I understand that running bacula in fixed-block 
sized mode is not good. Why is 64512 bytes used anyway? It is not a 
power of 2.

Thanks for any help.

-
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula performance with a 64512 block size

2007-09-10 Thread Arno Lehmann
Hi,

10.09.2007 16:21, Chris Howells wrote:
 Hi,
 
 I am currently struggling to get any kind of reasonable performance out 
 of Bacula on my LTO 4 tape size. I have done a considerable of testing 
 and benchmarking, and my hunch is that bacula's block size of 64512 
 bytes is causing the performance problems.
 
 To test the drive, I used tar, with various block sizes.
 
 Blocking size in tar-speak refers to n * 512 bytes, so a blocking size 
 of 2048 actually means a 1048576 byte block.
 
 My results show:
 
 Blocking size   Time (s)   MB/s
 126             105        20.78
 250             81         26.94
 1000            54         40.41
 1500            43         50.75
 2048            34         64.19
 
 Plotting blocking size against MB/s shows a near-linear relationship 
 between blocking size and throughput (and, inversely, elapsed time).
 
 A blocking size of 126 corresponds to a block of 64512 bytes, as used by 
 bacula. Strangely enough (or not :) this is *nearly exactly* the maximum 
 performance that I have ever seen bacula write to the drive.
 
 I have tried bacula using the Fifo virtual backup device and I can see 
 data coming in at speeds far, far in excess of the 20MB/s. In fact I 
 have had it coming in from two servers at 800Mbit/sec over a GigE 
 network. I am therefore confident that it is not the bacula catalogue 
 causing performance issues.
 
 I am also getting very little compression with bacula - presumably this 
 is because the tape drive can't compress 64512-byte blocks very well, and 
 needs to operate on larger chunks of data.
 
 Is there any way to safely increase the block size from 64512 bytes to see if 
 that helps matters? I understand that running bacula in fixed-block 
 sized mode is not good. Why is 64512 bytes used anyway? It is not a 
 power of 2.

IIRC, that's exactly the point - there seem to be drives and drivers 
out there which don't work too well with 64k block sizes.

Anyway, your findings above are similar to experiences published, for 
example, in the German magazine iX.

I'll see if I can find that article and post some more detailed 
information.

Also important, I don't see a reason why you could not use larger 
block sizes - a few MB might be reasonable. You would do well to 
clearly document this in your emergency manual, though. Also, try not 
to have tapes written with different block size settings in your 
production pools - you will probably run into trouble if you try to 
restore from them in a year or so.

I'd suggest doing some tests with Bacula, and after you have found your 
best settings, clearly mark all tapes with their respective block sizes.

Arno

 Thanks for any help.
 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Bacula performance with a 64512 block size

2007-09-10 Thread Chris Howells
Arno Lehmann wrote:

Thanks for your reply.

 I'd suggest to do some tests with Bacula, and after you found your 
 best settings, clearly mark all tapes with their respective block sizes.

Will do.

Are you basically suggesting that I should use the following sd directives:

Minimum Block Size = nnn
Maximum Block Size = nnn

I am *slightly* concerned about operating the drive in fixed block mode, 
given the dire warnings in the manual.

Thanks.



Re: [Bacula-users] Bacula performance with a 64512 block size

2007-09-10 Thread Marc Schiffbauer
* Chris Howells schrieb am 10.09.07 um 16:47 Uhr:
 Arno Lehmann wrote:
 
 Thanks for your reply.
 
  I'd suggest to do some tests with Bacula, and after you found your 
  best settings, clearly mark all tapes with their respective block sizes.
 
 Will do.
 
 Are you basically suggesting that I should use the following sd directives:
 
 Minimum Block Size = nnn
 Maximum Block Size = nnn
 
 I am *slightly* concerned about operating the drive in fixed block mode, 
 given the dire warnings in the manual.

The manual says that nnn being the same number for both settings
means a fixed block size.

As I understand it, your solution should be to just set the
Minimum Block Size so you get good performance:

Minimum Block Size = 1048576

Won't this fix your performance problem?
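For anyone reading along, the block-size directives belong in the Device resource of bacula-sd.conf. A hedged sketch (the device name, paths, and the 1 MiB figure are illustrative, not a recommendation; the manual warns that setting Minimum equal to Maximum switches the drive to fixed-block mode):

```
Device {
  Name = LTO4-Drive                 # illustrative name
  Archive Device = /dev/nst0
  Media Type = LTO-4
  # Raise only the maximum to keep variable-block mode
  # while allowing larger blocks:
  Maximum Block Size = 1048576
}
```

As noted elsewhere in the thread, label any tapes written with a non-default block size, since they may not be readable with the default settings.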

-Marc
-- 
http://www.links2linux.de
Registered Linux User #136487 -- http://counter.li.org



Re: [Bacula-users] Bacula Performance with many files

2007-02-12 Thread Kern Sibbald
Hello,

On Monday 12 February 2007 11:43, Daniel Holtkamp wrote:
 Hi !

 My bacula 2.0.1 installation is running quite nicely except for some
 servers. I'll use only one of these as an example as the others have the
 same problem.

 This one server has to backup more than 5 million files that are very
 small (usually less than 2KB). The problem is that the performance
 impact backing up these files is enormous.

 Here is a little snippet from the last (unfinished) backup.

Elapsed time:   23 hours 51 mins 42 secs
Priority:   10
FD Files Written:   3,562,070
SD Files Written:   3,561,858
FD Bytes Written:   2,507,509,039 (2.507 GB)
SD Bytes Written:   3,088,552,545 (3.088 GB)
Rate:   29.2 KB/s

 At that time the backup ran for almost a complete day and it still has
 to backup 2+ million files that make up for about 3 GB of data. As you
 can see the rate is VERY slow. I have of course enabled attribute
 spooling to take the database out of the equation. Also the backup goes
 to diskbased-volumes. It only gets this slow when it gets to the loads
 of small files - prior to that the backup rate is perfectly acceptable
 with 2MB/s.

 The fileset for this server is this:

 FileSet {
Name = X400mta
Include {
  Options {
  exclude = yes
  wilddir = /var/tmp
  regexdir = /var/[cache/man|catman]/[cat?|X11R6/cat?|local/cat?]
  compression=GZIP
  signature=SHA1
  }
  File = /
  File = /opt
  File = /usr
  File = /var
  File = /export/home
}
Include {
  Options {
  regexdir = /var/[cache/man|catman]/[cat?|X11R6/cat?|local/cat?]
  keepatime=yes
  mtimeonly=yes
  compression=GZIP
  signature=SHA1
  }
  File = /var/tmp
}
Exclude {
  File = .autofsck
  File = /proc
  File = /tmp
  File = .journal
  File = /opt/rsi/archive
  File = /opt/rsi/spool
  File = /opt/x400/mtadata/logfiles
}
 }

 Any ideas on how to improve performance here ? Can the excludes be a
 problem ? Or the Regex ?

 Also what influences the performance on migrating data ?

 I've had migration processes running nicely at 15MB/s (max for the
 tape drive) and some go at a measly 1 MB/s - from the same disk array to
 the same tape drive, of course.

Performance is a complicated issue.  Judging from everything that you have 
written above (especially the variations of the migration speeds), I suspect 
that there is nothing terribly slow with your FD.  Rather the problem seems 
to be in your Catalog.

Catalog performance problems can be due to:
1. the SQL database parameters are not properly configured for handling large 
databases.  This is an issue with MySQL or PostgreSQL (with backup volumes 
like yours you should not be using SQLite).  The manual has some points on 
how to make sure the database is setup to handle large volumes.

2. You may not have all the proper indexes on your tables.  Again, the manual 
suggests some solutions.

3. Inserting attributes in the current Bacula code is rather inefficient, 
especially if you have large numbers of new files created during each backup 
(some mail programs do this).  Version 2.1.4 (in the SVN) has some new 
code that speeds up insertions by quite a lot (most of the 
improvement is for PostgreSQL, but MySQL also gets a good boost).  This code 
is not currently turned on, though it has been in use at Eric's site for 
quite a long time now.  I will be enabling this code by default in the next 
few weeks once I have tested it a bit.

If you are interested in testing this new code, I would recommend that you get 
in touch with Eric.  Some of the table parameters should be modified, and 
this is documented only in the patches/testing/batch-insert.readme file, and 
you must explicitly enable a #define in src/version.h.
Please copy the bacula-devel list if you decide to do this so that we can all 
benefit from your tests.
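For point 2 above, the kind of index the manual means might be sketched like this for MySQL (table and column names are assumed from the Bacula catalog schema; verify against the make_mysql_tables script shipped with your version):

```sql
-- Inspect the indexes Bacula created on the big tables:
SHOW INDEX FROM File;
-- A commonly suggested addition if it is missing (an assumption --
-- check the catalog maintenance chapter of the manual for your version):
CREATE INDEX JobId_idx ON File (JobId);
```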

Best regards,

Kern



Re: [Bacula-users] Bacula Performance with many files

2007-02-12 Thread Arno Lehmann
Hello,

On 2/12/2007 11:43 AM, Daniel Holtkamp wrote:
 Hi !
 
 My bacula 2.0.1 installation is running quite nicely except for some 
 servers. I'll use only one of these as an example as the others have the 
 same problem.
 
 This one server has to backup more than 5 million files that are very 
 small (usually less than 2KB). The problem is that the performance 
 impact backing up these files is enormous.
 
 Here is a little snippet from the last (unfinished) backup.
 
   Elapsed time:   23 hours 51 mins 42 secs
   Priority:   10
   FD Files Written:   3,562,070
   SD Files Written:   3,561,858
   FD Bytes Written:   2,507,509,039 (2.507 GB)
   SD Bytes Written:   3,088,552,545 (3.088 GB)
   Rate:   29.2 KB/s
 
 At that time the backup ran for almost a complete day and it still has 
 to backup 2+ million files that make up for about 3 GB of data. As you 
 can see the rate is VERY slow. I have of course enabled attribute 
 spooling to take the database out of the equation. Also the backup goes 
 to diskbased-volumes. It only gets this slow when it gets to the loads 
 of small files - prior to that the backup rate is perfectly acceptable 
 with 2MB/s.

Such a number of tiny files is usually a problem. There are several 
reasons for this, IMO: disk seeks (often two per file: read the inode, 
read the data), which are hard to avoid.

Other possible limitations on backup throughput can be minimized, I hope:

 The fileset for this server is this:
 
 FileSet {
   Name = X400mta
   Include {
 Options {
 exclude = yes
 wilddir = /var/tmp
 regexdir = /var/[cache/man|catman]/[cat?|X11R6/cat?|local/cat?]
Probably a problem. You could try expanding the directories in the 
configuration.
 compression=GZIP
Try running the job without compression. You could even check if 
compression matters much with this special fileset.
 signature=SHA1
This one might be the limiting factor: SHA1 means lots of CPU work. 
Depending on the data you store, you could perhaps run this fileset 
without computing signatures, or use the less cpu-intensive MD5 alternative.
 }
 File = /
 File = /opt
 File = /usr
 File = /var
 File = /export/home
   }
   Include {
 Options {
 regexdir = /var/[cache/man|catman]/[cat?|X11R6/cat?|local/cat?]
 keepatime=yes
 mtimeonly=yes
 compression=GZIP
 signature=SHA1
 }
 File = /var/tmp
   }
   Exclude {
 File = .autofsck
 File = /proc
 File = /tmp
 File = .journal
 File = /opt/rsi/archive
 File = /opt/rsi/spool
 File = /opt/x400/mtadata/logfiles
   }
 }
 
 Any ideas on how to improve performance here ? Can the excludes be a 
 problem ? Or the Regex ?

The regex might be a problem, but I'd start with compression and 
signatures first... both can be quite important, so if these are what 
makes your backups slow you've got to choose...
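Incidentally, the bracketed pattern in the quoted fileset is worth a second look: in ordinary regular expressions (which the regexdir name suggests), square brackets denote a character class that matches a single character, and '|' inside them is a literal. A quick demonstration (Python is used here only to illustrate regex semantics; whether Bacula's matcher behaves identically is an assumption to verify):

```python
import re

# '[cache/man|catman]' is a character class: it matches ONE character
# drawn from {c, a, h, e, /, m, n, |, t} -- not either directory name.
cls = re.compile(r'/var/[cache/man|catman]/')
assert cls.search('/var/c/')            # a single class character matches
assert not cls.search('/var/catman/')   # the intended directory does not

# Alternation needs parentheses instead:
alt = re.compile(r'/var/(cache/man|catman)/')
assert alt.search('/var/catman/')
assert not alt.search('/var/c/')
```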

 Also what influences the performance on migrating data ?

I have no idea whatsoever... except that I would observe the system's 
load when running migration jobs. Not only the load itself, but also i/o 
wait times and memory usage.

Arno

 I've had migration processes running nicely at 15MB/s (max for the 
 tape drive) and some go at a measly 1 MB/s - from the same disk array to 
 the same tape drive, of course.
 
 Best regards,
 Daniel Holtkamp
 
 
 
 

-- 
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de



Re: [Bacula-users] bacula performance with spooling when writing to tape from fd

2006-01-04 Thread Arno Lehmann

Hello,

On 1/4/2006 12:12 AM, Joe Dollard wrote:


I've run into a performance problem when doing backups to tape that I 
need some help in resolving.  According to the output from btape test, 
my tape drive can be written to at around 9,700 KB/s.  I've also run a 
test with the Windows file daemon and can backup to disk on my bacula 
server at around 9,000 KB/s.  Based upon these two figures, I would 
assume that I should be able to do a backup from the Windows file daemon 
to tape at 9,000 KB/s - which over my 100 megabit network I'd be very 
happy with.


The basic assumption sounds reasonable - the Windows client can deliver 
data, and the tape could write it without holding up the client. BUT the 
figures you give are about the maximum throughput you can get over 
100M Ethernet.


 However with spooling disabled my backup to tape runs at 
about 6700 KB/s (using the same job which gave me 9000 KB/s before).  
With spooling enabled my backup runs at approx 4700 KB/s.


Unless I'm mistaken, the throughput reported with spooling enabled is not 
the figure you're interested in, because it measures the overall data 
rate: first, data is spooled from client to disk, then despooled from 
disk to tape. In other words, the actual speed of each of the processes 
might be much higher - in your case, I'd assume that the figures you give 
above are a good estimate. 4700K/s, with each byte moved twice, would 
be something like 9400K/s for each subprocess.
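That back-of-the-envelope arithmetic can be written out: with sequential spool-then-despool phases over the same bytes, the reported overall rate is the harmonic combination of the two phase rates (a sketch, assuming the phases do not overlap):

```python
def overall_rate(spool_rate, despool_rate):
    # Phases run one after the other over the same bytes B:
    #   total_time = B/spool_rate + B/despool_rate
    #   reported   = B / total_time
    return 1.0 / (1.0 / spool_rate + 1.0 / despool_rate)

# Two ~9400 KB/s phases report as ~4700 KB/s overall:
assert round(overall_rate(9400.0, 9400.0)) == 4700
```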


To solve the problem with direct client to tape data, you need to make 
sure that there's no bottleneck in your whole setup. First, even if data 
stalls for a short time, the tape drive will stop and has to reposition, 
which can take quite long. During that phase, the network buffers will 
run full, which, depending on your network and client setup, can even 
lead to to a slowed client system.


In other words, writing the data to tape has to be the speed limiting 
part of a network backup without spooling.


You can try to tune your network buffer setup - search the archives for 
some more information - and you might even try to install a faster 
network link between your backup server and the one delivering the data. 
A dedicated network link can help a lot, especially if your network is 
heavily used by other applications as well when the backup jobs run.


One of my servers has about 240GB of data that I need to run a full 
backup on weekly, however my bacula server only has about 100GB of 
available disk space.  As I don't have enough disk space to spool the 
entire job to disk first, the FD is going to be sitting idle while the 
SD writes the first 100GB to disk, and then the process will be repeated 
again, and again for the final 40GB.


Right, but some time in the future, that might change. Don't hold your 
breath, though.


 Is there anything I can do in 
bacula to allow the FD to keep spooling data to the SD while the SD is 
writing data to tape?


There have been different proposals how to handle this problem - 
multiple smaller spool files per job are one solution. This might not be 
the easiest way, because it requires a big change in the way the SD now 
works, and would be limited by hard disk throughput.


The - theoretically - best solution I know about is first a BIG memory 
buffer which holds data for the tape and is only written when it's 
nearly filled, and which would be re-filled whenever possible. Behind 
that, you'd need a fast disk setup with several dedicated disks, each 
for one spool file, and of course each on its own controller. Each 
controller would need its own bus or dedicated link to the system, in turn.


In other words, a solution to really achieve maximum throughput requires 
not only a major modification of Bacula, but also a really optimized 
system it runs on.


 Are there any other workarounds I could use, or am 
I going to have to buy a bigger hard drive for my backup server?


My experience tells me that installing a bigger hard disk spool area is 
the best workaround in terms of cost and resulting speed improvement, yes.
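If you do add spool disk, the relevant directives live in the SD's Device resource and the Director's Job resource; a sketch (names, paths, and sizes are illustrative only):

```
# bacula-sd.conf
Device {
  Name = LTO-Drive                  # illustrative
  Archive Device = /dev/nst0
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 80gb         # keep below the available 100GB
}

# bacula-dir.conf
Job {
  Name = BigServer-Full             # illustrative
  Spool Data = yes
  # (other job directives unchanged)
}
```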


Also, assuming that you have enough disk space, the planned (and already 
started) development of job migration should allow a real D2D2T backup 
setup with Bacula, which would allow not only higher data throughput, 
but also more flexibility. Admittedly, that's something for the future, 
but if you have the disk space now it should be a small modification of 
your setup to use that space not as dedicated spool space, but as hard 
disk volumes in a migration scheme.


I hope this explains your experiences, and perhaps helps a little when 
deciding how to solve the current problem. And, of course, if you can 
support development of the features I mentioned, I'm quite sure that 
many Bacula users would be much impressed :-)


Arno


Thanks,
Joe



[Bacula-users] bacula performance with spooling when writing to tape from fd

2006-01-03 Thread Joe Dollard


I've run into a performance problem when doing backups to tape that I 
need some help in resolving.  According to the output from btape test, 
my tape drive can be written to at around 9,700 KB/s.  I've also run a 
test with the Windows file daemon and can backup to disk on my bacula 
server at around 9,000 KB/s.  Based upon these two figures, I would 
assume that I should be able to do a backup from the Windows file daemon 
to tape at 9,000 KB/s - which over my 100 megabit network I'd be very 
happy with.  However with spooling disabled my backup to tape runs at 
about 6700 KB/s (using the same job which gave me 9000 KB/s before).  
With spooling enabled my backup runs at approx 4700 KB/s.


One of my servers has about 240GB of data that I need to run a full 
backup on weekly, however my bacula server only has about 100GB of 
available disk space.  As I don't have enough disk space to spool the 
entire job to disk first, the FD is going to be sitting idle while the 
SD writes the first 100GB to disk, and then the process will be repeated 
again, and again for the final 40GB.  Is there anything I can do in 
bacula to allow the FD to keep spooling data to the SD while the SD is 
writing data to tape?  Are there any other workarounds I could use, or am 
I going to have to buy a bigger hard drive for my backup server?


Thanks,
Joe




Re: [Bacula-users] Bacula performance

2005-09-02 Thread Uwe Hees

Hello all,

I finally found a solution to speed up the performance dramatically:
In the FileDaemon resource I set Maximum Network Buffer Size =  
65536 (instead of the default 32k).

Now I get ~3MB/sec, which is reasonable.
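For anyone searching the archives later, the directive in question sits in the FileDaemon resource of bacula-fd.conf (client name illustrative):

```
FileDaemon {
  Name = notebook-fd                      # illustrative
  Maximum Network Buffer Size = 65536     # default is 32k
}
```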

Thanks for all assistance,
Uwe


Am 31.08.2005 um 18:28 schrieb Uwe Hees:

 Hello,

 Am 30.08.2005 um 18:15 schrieb Kern Sibbald:

  Perhaps you didn't read the ReleaseNotes where I indicate that
  SQLite3 in my tests was 4 to 10 times slower than SQLite 2.
  Try SQLite 2 or MySQL.

 I used sqlite3 mainly because it came preinstalled with MacOS 10.4.
 Meanwhile I have installed MySQL for other reasons and tried bacula
 with it. The result was a doubling in performance, up to ~120kB/sec.

 While running the backup job I noticed that netstat reported
 32768 entries in the send queue of the bacula-fd. I tried to back up
 to a remote sd (running under Linux on a 200MHz/PPC603e, i.e. not a
 powerful box) and got ~520kB/sec.

 Am 30.08.2005 um 20:46 schrieb Arno Lehmann:

  Also, don't forget that notebook HDs (2.5") are usually a lot
  slower than desktop or even server disks... and in backing up
  the same machine, you use the slow disk three times: reading,
  writing, database.

 The disk has a random read/write performance of about 10 MB/sec.

  Now, I don't have disk performance comparisons between an iBook
  and a more typical server setup, but I'd bet that the iBook is
  really slow in comparison...

  about 650 kB/s is what I get storing the (dumped) catalog database
  on my backupserver - the server is slower than your iBook, but
  still this is what the tape drive can handle - but this server
  only does the backups, the catalog is on another machine, and
  there are no other processes using lots of memory or bus throughput.

  In short: Try it with a setup which resembles your planned use of
  bacula, and with some consideration you will get good results.

 Backing up my (and my wife's) notebook to an external disk is
 exactly what I intend to do at home. There's no tape drive
 involved. As for the company, the backup tape drive is not yet
 purchased.

 Greetings,
 Uwe






Re: [Bacula-users] Bacula performance

2005-08-31 Thread Uwe Hees
Hello,

Am 30.08.2005 um 18:15 schrieb Kern Sibbald:

 Perhaps you didn't read the ReleaseNotes where I indicate that SQLite3
 in my tests was 4 to 10 times slower than SQLite 2.
 Try SQLite 2 or MySQL.

I used sqlite3 mainly because it came preinstalled with MacOS 10.4.
Meanwhile I have installed MySQL for other reasons and tried bacula with
it. The result was a doubling in performance, up to ~120kB/sec.

While running the backup job I noticed that "netstat" reported 32768
entries in the send queue of the bacula-fd. I tried to back up to a
remote sd (running under Linux on a 200MHz/PPC603e, i.e. not a powerful
box) and got ~520kB/sec.

Am 30.08.2005 um 20:46 schrieb Arno Lehmann:

 Also, don't forget that notebook HDs (2.5") are usually a lot slower
 than desktop or even server disks... and in backing up the same
 machine, you use the slow disk three times: reading, writing, database.

The disk has a random read/write performance of about 10 MB/sec.

 Now, I don't have disk performance comparisons between an iBook and a
 more typical server setup, but I'd bet that the iBook is really slow
 in comparison...

 about 650 kB/s is what I get storing the (dumped) catalog database on
 my backupserver - the server is slower than your iBook, but still this
 is what the tape drive can handle - but this server only does the
 backups, the catalog is on another machine, and there are no other
 processes using lots of memory or bus throughput.

 In short: Try it with a setup which resembles your planned use of
 bacula, and with some consideration you will get good results.

Backing up my (and my wife's) notebook to an external disk is exactly
what I intend to do at home. There's no tape drive involved. As for the
company, the backup tape drive is not yet purchased.

Greetings,
Uwe

Re: [Bacula-users] Bacula performance

2005-08-31 Thread Jeronimo Zucco

I use openvpn (http://openvpn.net) on some bacula clients talking to the
bacula server, with lzo compression and without encryption, and the
transfer time decreases a lot. I recommend it if your data transfers are big.


- --
Jeronimo Zucco
LPIC-1 Linux Professional Institute Certified
Núcleo de Processamento de Dados
Universidade de Caxias do Sul

May the Source be with you. - An unknown jedi programmer.

http://jczucco.blogspot.com

Uwe Hees wrote:
 Hello,
 
 Am 30.08.2005 um 18:15 schrieb Kern Sibbald:
 

 Perhaps you didn't read the ReleaseNotes where I indicate that SQLite3
 in my 
 tests was 4 to 10 times slower than SQLite 2.  

 Try SQLite 2 or MySQL.
 
 
 I used sqlite3 mainly because it came preinstalled with MacOS 10.4.
 Meanwhile I have installed MySQL for other reasons and tried bacula with
 it. The result was a doubling in performance, up to ~120kB/sec.
 
 While running the backup job I noticed that netstat reported 32768
 entries in the send queue of the bacula-fd. I tried to backup to a
 remote sd (running under Linux on a 200Mhz/PPC603e, i.e. not a powerful
 box) and got ~520kB/sec.
  
  
 Am 30.08.2005 um 20:46 schrieb Arno Lehmann:
 
 Also, don't forget that notebook HDs (2.5") are usually a lot slower
 than desktop or even server disks... and in backing up the same
 machine, you use the slow disk three times: reading, writing, database.
 
 
 The disk has a random read/write performance of about 10 MB/sec.
 
 Now, I don't have disk performance comparisons between an iBook and a
 more typical server setup, but I'd bet that the iBook is really slow
 in comparison...

 about 650 kB/s is what I get storing the (dumped) catalog database on
 my backupserver - the server is slower than your iBook, but still this
 is what the tape drive can handle - but this server only does the
 backups, the catalog is on another machine, and there are no other
 processes using lots of memory or bus throughput.

 In short: Try it with a setup which resembles your planned use of
 bacula, and with some consideration you will get good results.
 
 
 Backing up my (and my wife's) notebook to an external disk is exactly
 what I intend to do at home. There's no tape drive involved. As for the
 company, the backup tape drive is not yet purchased.
 
 Greetings,
 Uwe
 







[Bacula-users] Bacula performance

2005-08-30 Thread Uwe Hees

Hello all,

for some time I am playing with bacula to find out if I should use it  
for personal backups at home and maybe use it in my company to  
back up some Linux servers.


I have tried 1.36 and some 1.37 up to 1.37.37 on my ibook G4 running  
under MacOS X 10.4.2.


While performing the default backup scenario (local disk to disk)  
using the sqlite3 database engine, I get the following results:


27-Aug 16:59 uwes-ibook-dir: Bacula 1.37.37 (24Aug05): 27-Aug-2005  
16:59:09

  JobId:  1
  Job:Client1.2005-08-27_16.41.12
  Backup Level:   Full (upgraded from Incremental)
  Client: uwes-ibook-fd powerpc-apple- 
darwin8.2.1,darwin,8.2.1

  FileSet:Full Set 2005-08-27 16:41:15
  Pool:   Default
  Storage:File
  Scheduled time: 27-Aug-2005 16:41:10
  Start time: 27-Aug-2005 16:41:15
  End time:   27-Aug-2005 16:59:09
  Priority:   10
  FD Files Written:   1,696
  SD Files Written:   1,696
  FD Bytes Written:   61,572,564
  SD Bytes Written:   61,839,775
  Rate:   57.3 KB/s
  Software Compression:   None
  Volume name(s): Test
  Volume Session Id:  1
  Volume Session Time:1125153539
  Last Volume Bytes:  61,951,484
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

This seems fairly poor to me as I think that disk backups should  
perform faster. I won't dare to back up my root partition (~30GB) at  
that speed.


Is there something I miss? Tuned settings? Other database backend?

Best Regards, Uwe






Re: [Bacula-users] Bacula performance

2005-08-30 Thread Kern Sibbald
On Saturday 27 August 2005 18:22, Uwe Hees wrote:
 Hello all,

 for some time I am playing with bacula to find out if I should use it
 for personal backups at home and maybe use it in my company to
 back up some Linux servers.

 I have tried 1.36 and some 1.37 up to 1.37.37 on my ibook G4 running
 under MacOS X 10.4.2.

 While performing the default backup scenario (local disk to disk)
 using the sqlite3 database engine, I get the following results:

 27-Aug 16:59 uwes-ibook-dir: Bacula 1.37.37 (24Aug05): 27-Aug-2005
 16:59:09
JobId:  1
Job:Client1.2005-08-27_16.41.12
Backup Level:   Full (upgraded from Incremental)
Client: uwes-ibook-fd powerpc-apple-
 darwin8.2.1,darwin,8.2.1
FileSet:Full Set 2005-08-27 16:41:15
Pool:   Default
Storage:File
Scheduled time: 27-Aug-2005 16:41:10
Start time: 27-Aug-2005 16:41:15
End time:   27-Aug-2005 16:59:09
Priority:   10
FD Files Written:   1,696
SD Files Written:   1,696
FD Bytes Written:   61,572,564
SD Bytes Written:   61,839,775
Rate:   57.3 KB/s
Software Compression:   None
Volume name(s): Test
Volume Session Id:  1
Volume Session Time:1125153539
Last Volume Bytes:  61,951,484
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup OK

 This seems fairly poor to me as I think that disk backups should
 perform faster. I won't dare to back up my root partition (~30GB) at
 that speed.

 Is there something I miss? 

Perhaps you didn't read the ReleaseNotes where I indicate that SQLite3 in my 
tests was 4 to 10 times slower than SQLite 2.  

 Tuned settings? Other database backend? 

Try SQLite 2 or MySQL.


 Best Regards, Uwe





-- 
Best regards,

Kern

  (
  /\
  V_V




Re: [Bacula-users] Bacula performance

2005-08-30 Thread Arno Lehmann

Hi,

Kern Sibbald wrote:


On Saturday 27 August 2005 18:22, Uwe Hees wrote:


Hello all,

For some time I have been playing with Bacula to find out whether I should
use it for personal backups at home, and maybe use it in my company to
back up some Linux servers.

I have tried 1.36 and several 1.37 versions up to 1.37.37 on my iBook G4
running Mac OS X 10.4.2.

While performing the default backup scenario (local disk to disk)
using the sqlite3 database engine, I get the following results:


(slow backup speed)
...

Perhaps you didn't read the ReleaseNotes where I indicate that SQLite3 in my 
tests was 4 to 10 times slower than SQLite 2.  


Also, don't forget that notebook HDs (2.5") are usually a lot slower 
than desktop or even server disks... and in backing up the same 
machine, you use the slow disk three times: reading, writing, and the database.


Now, I don't have disk performance comparisons between an iBook and a 
more typical server setup, but I'd bet that the iBook is really slow in 
comparison...


About 650 kB/s is what I get storing the (dumped) catalog database on my 
backup server. The server is slower than your iBook, but this is still 
what the tape drive can handle. Note, however, that this server only does 
the backups; the catalog is on another machine, and there are no other 
processes using lots of memory or bus throughput.


In short: try it with a setup that resembles your planned use of 
Bacula, and with some consideration you will get good results.


Arno

--
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de




[Bacula-users] bacula performance with mysql

2005-07-29 Thread Daniel Weuthen
Hello all together.

After setting up Bacula successfully and solving some problems with your help, 
our backup works fine now. But talking to the director with bconsole is 
sometimes very slow, e.g. when asking for the status, or when restoring 
files, where building the file list takes a very long time.

Our File database table is 2.8 GB in size. We run a daily full backup of 
16 hosts; the total backup volume is about 120 GB.

Any ideas?
I will post some queries and their durations later.
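Slow file-list building during a restore is very often a missing-index problem on the catalog's File table. The poster's catalog is MySQL, so the table and index below are illustrative assumptions rather than Bacula's exact DDL, but the principle, shown here with the stdlib sqlite3 module, carries over: check the query plan before and after adding an index on the column the slow query filters on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE File (FileId INTEGER PRIMARY KEY, JobId INTEGER, Name TEXT)")
conn.executemany(
    "INSERT INTO File (JobId, Name) VALUES (?, ?)",
    [(j % 16, f"f{j}") for j in range(10_000)],
)

query = "EXPLAIN QUERY PLAN SELECT Name FROM File WHERE JobId = 3"

# Without an index, fetching one job's files scans the whole File table
plan_before = conn.execute(query).fetchone()

# An index on JobId turns the restore-tree query into an index lookup
conn.execute("CREATE INDEX file_jobid_idx ON File (JobId)")
plan_after = conn.execute(query).fetchone()

print(plan_before[-1])  # plan detail mentions a SCAN of File
print(plan_after[-1])   # plan detail mentions USING INDEX file_jobid_idx
```

For MySQL the equivalent check is running EXPLAIN on the slow query, and the fix is a CREATE INDEX on the File table column(s) involved.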

-- 
Mit freundlichen Gruessen / with kind regards

Daniel Weuthen

---
Megabit Informationstechnik GmbH  Karstr.25  41068 Moenchengladbach
Tel: 02161/308980   mailto:[EMAIL PROTECTED]   ftp://megabit.net
Fax: 02161/3089818  mailto:[EMAIL PROTECTED]   http://megabit.net
---

