Hello everyone!
After applying the suggested settings and restarting the right services, here are the
results ... :P
They are catastrophic!
My full backup this weekend took 8 hours longer!
I think the problems come from my small spools: 24GB per drive and 3GB per job (I
have 8 jobs).
Maybe I miscalculated my
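If the spool really is the limiter, the relevant knobs live in the storage daemon's Device resource. A minimal sketch, assuming the directive names from the Bacula manual and the sizes quoted above (values illustrative, not the poster's actual config):

```conf
# bacula-sd.conf fragment -- sketch only; sizes must fit your spool disk.
# With 8 concurrent jobs limited to 3GB each, a 24GB spool fills at once,
# so each job despools after only 3GB and spooling gains little.
Device {
  Name = Drive-0
  Spool Directory = /var/lib/spool/drive0
  Maximum Spool Size = 24G       # total spool space for this drive
  Maximum Job Spool Size = 3G    # raise this for fewer despool cycles
}
```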
Hello,
2013/10/7 bdelagree bacula-fo...@backupcentral.com
I do not follow this thread from the beginning, so I could be wrong about
some tips.
You have 11M files in a single backup job. If your job name is not misleading,
all your files are located on an NFS share. Right?
If yes, this is your main bottleneck. NFS is not the best protocol for this
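One common workaround, sketched here with hypothetical names, is to run a File Daemon on the NFS server itself so the millions of small files are read from the local filesystem instead of crossing NFS. Address, password, and path below are placeholders, not values from this thread:

```conf
# bacula-dir.conf fragment -- illustrative only.
Client {
  Name = nfs-server-fd
  Address = nfs-server.example.com   # the machine that exports the share
  Password = "changeme"
  Catalog = MyCatalog
}
FileSet {
  Name = "NFS-Data-Local"
  Include {
    Options { signature = MD5 }
    File = /export/data              # local path of the exported directory
  }
}
```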
Hi everyone!
The DataSpooling has not changed my backup. (See the end of this post)
1 day and 14 hours for 390GB :(
On the other hand, I just saw that on Friday I restarted only the StorageDaemon;
should I also have restarted the Director and FileDaemon?
Do you think that enabling compression could improve the backup when
Hi everyone!
Sorry for my short absence, but I've been busy with another little problem.
I had to create a virtual machine under OS9 for one of my users.
I had forgotten how basic the old system was! :p
Finally, tonight is my monthly Full Backup.
I wish to change my jobs and set up the
Just for your information, here are the modifications:
For the NFS server I created two jobs, one for the system and another one for the
directory that contains the millions of files.
I created the directories /var/lib/spool/drive0 and /var/lib/spool/drive1.
I then did a chown -R bacula:bacula
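For two drives, each Device resource in bacula-sd.conf would then point at its own directory; a minimal sketch (resource names and device paths hypothetical):

```conf
# bacula-sd.conf fragment -- sketch; one spool directory per LTO5 drive.
Device {
  Name = Drive-0
  Archive Device = /dev/nst0
  Spool Directory = /var/lib/spool/drive0
}
Device {
  Name = Drive-1
  Archive Device = /dev/nst1
  Spool Directory = /var/lib/spool/drive1
}
```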
Hello,
This summer we invested in a PowerVault TL2000 library with two LTO5 drives to
safeguard our various servers.
Today two of my servers take a long time to back up because they contain many small
files with low volume (see the bottom of the post).
All my other servers back up quickly (20,000 KB/s to
Hello,
Thank you for the quick response.
My library is connected to a dedicated server used only for services (PDC, DHCP, DNS,
LDAP, and Bacula).
This server is not designed to host files, so it has little space.
In addition, the MySql database is already 35GB...
I can reasonably dedicate 50GB on this
I started two backups maybe 12 hours ago. Normally full backups run 1-2
h max, but this time suddenly...
From the database I see no locks, but they keep inserting into the batch table.
I have 12 gigabytes of RAM, and have given a couple of gigs to MySQL too.
The database should not be the bottleneck.
How can it be so slow? Two
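If the batch inserts are the slow part, the usual first suspects are the InnoDB defaults. A hedged example for a box with 12GB RAM; these are illustrative starting points, not Bacula recommendations:

```conf
# /etc/mysql/my.cnf fragment -- illustrative values only.
[mysqld]
innodb_buffer_pool_size = 2G     # "a couple of gigs", as above
innodb_log_file_size    = 256M   # a larger redo log smooths bulk inserts
max_allowed_packet      = 64M    # batch attribute inserts can be large
```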
Hi,
since I upgraded our backup server to Debian Squeeze and Bacula
5.0.2, the jobs only write at ~5 MB/s.
status storage:
Device IBMLTO4-sd (/dev/nst0) is mounted with:
Volume: MITT01
Pool: MittwochPool
Media type: LTO4
Total Bytes=157,171,864,755
OK, the error was the blocksize :D Sorry
regards
Tobias
# Stegbauer Datawork
# Tobias Dinse
# Oberjulbachring 9, 84387 Julbach
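For reference, the blocksize mentioned above is set in the SD's Device resource; a hedged sketch for an LTO4 drive (the value is an example, not the poster's actual setting):

```conf
# bacula-sd.conf fragment -- sketch only. Changing the block size affects
# how existing tapes are read back, so test with btape before relying on it.
Device {
  Name = IBMLTO4-sd
  Archive Device = /dev/nst0
  Maximum Block Size = 262144   # 256KB blocks instead of the small default
}
```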
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:
I did some tests with different gzip levels and with no compression at
all. It makes a difference but not as expected. Without compression I
still have a rate of only 11346.1 KB/s. Anything else I should try?
Are you sure the cross-over connection
Hi all,
I do full backups at the weekend and it just takes too long, 12h or so.
Bacula does one job after the other, and I have a max transfer rate of
11 to 12 MBytes/second due to the 100Mbit connection.
For testing purposes I connected one client via crosslink (1Gbit on
both sides) to
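When a client moves from 100Mbit to gigabit, one directive sometimes worth testing is the network buffer size on the daemons; a hedged sketch (daemon name and value illustrative):

```conf
# bacula-fd.conf fragment -- sketch; a matching setting can be tried on the SD.
FileDaemon {
  Name = client1-fd
  Maximum Network Buffer Size = 65536   # larger buffers can help on 1Gbit links
}
```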
Hi,
I would like to know if it is true that my throughput is as slow as this:

CATALOG
---
FD Bytes Written: 478,808,703 (478.8 MB)
SD Bytes Written: 478,809,069 (478.8 MB)
Rate: 402.0 KB/s
Software Compression: None

INCREMENTAL
--
SD Bytes
Hi Carlo,
for any modern hardware your rates sound low.
Below is an example from my home system (Core2 Duo, 8GB memory, CentOS
5.4 Linux 64-bit), writing to an external USB disk, with no compression.
Backing up a local disk, with the catalog database on the same physical disk too (not
an ideal
Carlo Filippetto wrote:
Hi,
I would like to know if it is true that my throughput is as slow as this:
[...]
FULL
-
Elapsed time: 1 day 22 hours 13 mins 37 secs
[...]
Rate: 371.7 KB/s
Software Compression: 15.5 %
[...]
All my jobs have the maximum
Also take a look at your Director's database settings
(PostgreSQL or MySQL).
If you are using the default distro settings, they are certainly too low.
Check the ML wiki about this.
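For PostgreSQL, the distro defaults referred to above really are tiny; a hedged example of the settings most often raised for a Bacula catalog (illustrative values for a machine with a few GB of RAM):

```conf
# postgresql.conf fragment -- illustrative starting points only.
shared_buffers       = 512MB   # distro defaults are often just a few MB
work_mem             = 32MB    # helps the sort-heavy catalog queries
maintenance_work_mem = 256MB   # speeds index maintenance on the File table
```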
I connected the backup server and the client with a crossover cable at 1G;
however:
Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
What can I check?
I am using SAS disks.
With ethtool I see
Speed: 1000Mb/s
so that is correct.
Hi,
there is 5GB of data and the average speed is 9MB per second. The speed is
slow.
Try to copy a big file from server to client (or vice versa) and check the
copy speed with iptraf. I think the problem is not with Bacula
but in the distro.
Daniele
On 28 May 2009, at 12:01,
First of all, thank you for the answer.
No, I do not use compression in my file set:
Options {
signature = MD5
}
I tried to upload with sftp:
Uploading testfile to /tmp/terrierj
testfile 100% 83MB 41.4MB/s 00:02
There is only one problem,
I have
Hi,
I am using Ext3,
and yes, I also have small files.
Probably
50% 2M
40% 10M
10% 40M
Hi there,
I've been having some problems attempting to increase the write speed to my
tape drive through Bacula.
If I use the operating system to communicate directly with the tape drive, I
get the appropriate read and write speeds, but using Bacula I get a third of
the speed. I have tried
Hi,
Jonas Björklund has spoken, thus:
Hello,
I get very poor performance with compression on a client. It's a Sun Fire
V490 with 4 CPUs at 1350MHz and 16GB memory.
I'm having similar problems with bacula here (but different hardware).
filed: Sun Blade 1500 (1 CPU 1503MHz 1GB memory)
Hello,
I get very poor performance with compression on a client. It's a Sun Fire
V490 with 4 CPUs at 1350MHz and 16GB memory.
JobId: 11
Job: client1.2006-12-04_16.34.10
Backup Level: Full
Client: sasma
On Tue, 5 Dec 2006, Jonas Björklund wrote:
I get very poor performance with compression on a client. It's a Sun Fire
V490 with 4 CPUs on 1350Mhz and 16GB memory.
Seems like the Sun server is slow. I got a little bit better performance
when I used GZIP1 instead of GZIP
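The gzip level is selectable per FileSet, so the trade described above can be expressed directly; a minimal sketch (FileSet name and path hypothetical):

```conf
# bacula-dir.conf FileSet fragment -- sketch only.
FileSet {
  Name = "Compressed-Fast"
  Include {
    Options {
      signature   = MD5
      compression = GZIP1   # fastest level; plain GZIP means GZIP6
    }
    File = /data            # placeholder path
  }
}
```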
I've seen similar data on my backups, but generally, only with very
small backup sizes (less than 1GB). When I back up over 1GB, the rates
increase dramatically, although backup from the Windows server is still
only about 1/2 to 1/3 the Linux server rate. Before you get too
concerned, try a
Hi. I have been working with Bacula for some months; I love this software. My current problem is this one:
My test server: I'm running Bacula server 1.38.11 on FreeBSD 6.1-p3, MySQL 4.1.20, Tape HP StorageWorks 232 External 200GB Compressed,
HD 200 IDE 7200 RPM, AMD Duron 1.6 GHz, 512 RAM. Clients: 2 Win NT 4
On 7/14/06, Gabriele Bulfon [EMAIL PROTECTED] wrote:
Hello,
I have some bacula installations on SunFire 280R Sparc machines, with Solaris
10.
These machines appear to be very very slow compared to other installations
(such as v20z) with the same LTO2 device.
As you can see from the
On 7/19/06, Gabriele Bulfon [EMAIL PROTECTED] wrote:
Do you mean that the whole 280R machine may be running at half-duplex?!
I'm not sure what interface you are using for the backups (probably an eriX),
but to get the link status and link capabilities from the Solaris side you can e.g. use this
...July 2006 16.43.04 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
On Tue, 2006-07-18 at 16:37 +0200, Gabriele Bulfon wrote:
Do you have any suggestion about parameters I may use to optimize the
daemons?
I'm not a developer :( ... unfortunately -- but you need to see where
is t
From: Kern Sibbald [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Cc: Gabriele Bulfon [EMAIL PROTECTED]
Date: 19 July 2006 20.53.39 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
One user had similar problems with his Sparc and it turned out
...bacula-users@lists.sourceforge.net
Date: 18 July 2006 15.16.37 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
Do you use compression? Because there is a difference in processing power:
Sparc III is much less powerful than Opteron.
On Tue, 2006-07-18 at 13:58 +0200, Gabriele
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
Just to exclude the network: what is the transfer rate that you can achieve
with those servers?
On Tue, 2006-07-18 at 15:37 +0200, Gabriele Bulfon wrote:
Oh no, I do not use compression at all.
And if I used compression, I'd use hardware
From: Hristo Benev [EMAIL PROTECTED]
To: Gabriele Bulfon [EMAIL PROTECTED]
Cc: MaxxAtWork [EMAIL PROTECTED], bacula-users@lists.sourceforge.net
Date: 18 July 2006 16.32.54 CEST
Subject: Re: [Bacula-users] Slow backup on Sparc SunFire 280R
My opinion is that you have a bottleneck somewhere (probably CPU, RAM, or
network).
You need to monitor those machines during backup to see where exactly
On Mon, 17 Jul 2006, Gabriele Bulfon wrote:
Thanks,
this is very interesting.
My LTO2 drives (I have many installed) are from Certance.
Mine are HP drives installed in an HP MSL6000 library (aka NEO4000).
Do you achieve these rates on a SunFire 280R?
No, Wintel hardware (HP Proliant DL580g2
Hello,
I have some bacula installations on SunFire 280R Sparc machines, with Solaris 10.
These machines appear to be very very slow compared to other installations
(such as v20z) with the same LTO2 device.
As you can see from the report, 60Gb are copied in 9 hours, with an average rate of 1898.9