Hey there!
I am currently testing a recovery plan in which I simulate the loss of my whole
fileserver, in order to test procedures, etc.
Bacula is 7.4.4 with pgsql; the LTO-6 drive is in a tape loader (Dell PowerVault TL1000)
connected through a Dell 12 Gbps SAS HBA card to an ESXi 6.5 server with RAID5
On 7/7/11 2:36 AM, J. Echter wrote:
> On 07.07.2011 04:43, Glen Barber wrote:
>> On 7/6/11 12:37 PM, J. Echter wrote:
>>> backup speed has nothing to do with regular backup speed.
>>>
>> Can you explain exactly what this means?
>>
> Sorry, I meant backup speed has nothing to do with regular *network* speed.
On 07.07.2011 04:43, Glen Barber wrote:
> On 7/6/11 12:37 PM, J. Echter wrote:
>> backup speed has nothing to do with regular backup speed.
>>
> Can you explain exactly what this means?
>
Sorry, I meant backup speed has nothing to do with regular *network* speed.
On 7/6/2011 12:31 PM, Jake Debord wrote:
I have a machine I back up that when done averages:
Elapsed time: 41 mins 47 secs
Priority: 1
FD Files Written: 6,948
SD Files Written: 6,948
FD Bytes Written: 14,587,852,350 (14.58 GB)
SD Bytes Written:
On 7/6/11 12:37 PM, J. Echter wrote:
>
> backup speed has nothing to do with regular backup speed.
>
Can you explain exactly what this means?
--
Glen Barber
Yes, I use disk-based file storage. It is a full backup, but I will turn
gzip off and defragment it to see if I can improve the speed. Thanks for the
advice; any additional advice is welcome.
On Wed, Jul 6, 2011 at 11:37 AM, J. Echter wrote:
> On 06.07.2011 18:31, Jake Debord wrote:
> > I have
On Wed, 6 July 2011 18:43, John Drescher wrote:
> 2011/7/6 Jake Debord :
>> I have a machine I back up that when done averages:
>> Elapsed time: 41 mins 47 secs
>> Priority: 1
>> FD Files Written: 6,948
>> SD Files Written: 6,948
>> FD Bytes Written:
2011/7/6 Jake Debord :
> I have a machine I back up that when done averages:
> Elapsed time: 41 mins 47 secs
> Priority: 1
> FD Files Written: 6,948
> SD Files Written: 6,948
> FD Bytes Written: 14,587,852,350 (14.58 GB)
> SD Bytes Written:
On 06.07.2011 18:31, Jake Debord wrote:
> I have a machine I back up that when done averages:
> Elapsed time: 41 mins 47 secs
> Priority: 1
> FD Files Written: 6,948
> SD Files Written: 6,948
> FD Bytes Written: 14,587,852,350 (14.58 GB)
> SD By
I have a machine I back up that when done averages:
Elapsed time: 41 mins 47 secs
Priority: 1
FD Files Written: 6,948
SD Files Written: 6,948
FD Bytes Written: 14,587,852,350 (14.58 GB)
SD Bytes Written: 14,589,273,339 (14.58 GB)
Rate:
On Wed, May 4, 2011 at 2:39 PM, Jesper Krogh wrote:
> On 2011-04-28 17:16, Alex Chekholko wrote:
>> Try changing your Maximum Network Buffer size in your bacula-sd config.
>>
>> Something like
>> Maximum Network Buffer Size = 262144 #65536
>> Maximum block size = 262144
>>
>> Keep in mind th
On 2011-04-28 17:16, Alex Chekholko wrote:
> Try changing your Maximum Network Buffer size in your bacula-sd config.
>
> Something like
> Maximum Network Buffer Size = 262144 #65536
> Maximum block size = 262144
>
> Keep in mind that this will make your sd unable to read previous
> backups, I
Martin Simmons:
> > On Fri, 29 Apr 2011 14:29:33 +0200, Dietz Pröpper said:
> > To see whether the file system is indeed the bottleneck you could try
> > to tar the fs to /dev/null and compare the transfer rate to that of
> > your bacula backup.
>
> Good advice, but beware that GNU tar doesn't
> On Fri, 29 Apr 2011 14:29:33 +0200, Dietz Pröpper said:
>
> To see whether the file system is indeed the bottleneck you could try to tar
> the fs to /dev/null and compare the transfer rate to that of your bacula
> backup.
Good advice, but beware that GNU tar doesn't read the file data when the output is /dev/null, so the rate it reports can be misleadingly high.
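A minimal sketch of that read-rate test, assuming the data lives under /data (the path is a placeholder); piping through cat defeats GNU tar's /dev/null shortcut so the files really are read:

  time tar cf - /data | cat > /dev/null
  # or, if pv is installed, watch the throughput while it runs:
  tar cf - /data | pv > /dev/null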
> From: Jason Voorhees
>
>> >
>> > to get the maximum speed with your LTO-5 drive you should enable data
>> > spooling and change the "Maximum File Size" parameter. The spool disk
>> > must be a fast one, especially if you want to run concurrent jobs.
>> > Forget hdparm as benchmark, use bonnie++
Jason Voorhees:
> Well, these are my results of a bonnie++ test:
[...]
> Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Jason Voorhees:
> On Thu, Apr 28, 2011 at 12:01 PM, John Drescher
wrote:
> >> So do you believe these speeds of my backups are normal? I thought my
> >> Library tape with LTO-5 tapes could write at 140 MB/s approx. It
> >> isn't possible to achieve higher speeds?
> >
> > You need to speed up your
Jason Voorhees wrote:
> > I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> > fast is the disk where your spool file is located?
> >
> > A different test would be to create a 10 GB file with data from
> > /dev/urandom in the spool directory and the write this file to tape
> > (
> I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> fast is the disk where your spool file is located?
>
> A different test would be to create a 10 GB file with data from
> /dev/urandom in the spool directory and the write this file to tape
> (e.g. nst0). Note: this will overwrite the data on the tape.
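A rough sketch of that raw-write test, assuming the spool disk is mounted at /spool and the drive is /dev/nst0 (both names are placeholders); it destroys whatever is on the tape, so load a scratch tape first:

  dd if=/dev/urandom of=/spool/testfile bs=1M count=10240    # roughly 10 GB of incompressible data
  mt -f /dev/nst0 rewind
  dd if=/spool/testfile of=/dev/nst0 bs=262144               # dd prints the MB/s when it finishes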
> Ok, I don't have that setting enabled but I could try it. Question:
> how do you decide 5 GB is an optimal value for your LTO-4 tapes? what
> value could I put for my LTO-5 tapes? I don't really understand what
> should be the appropriate value for this directive.
> I don't know how to tell you ho
>
> I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> fast is the disk where your spool file is located?
>
Ok, I don't have that setting enabled but I could try it. Question:
how do you decide 5 GB is an optimal value for your LTO-4 tapes? what
value could I put for my LTO-5 tap
Jason Voorhees wrote:
>
> I think I was confusing some terms. The speed I reported was the total
> elapsed time that my backup took. But now according to your comments I
> got this from my logs:
>
> With spooling enabled:
>
> - Job write elapsed time: 102 MB/s average
> - Despooling elapsed ti
>
> to get the maximum speed with your LTO-5 drive you should enable data
> spooling and change the "Maximum File Size" parameter. The spool disk
> must be a fast one, especially if you want to run concurrent jobs.
> Forget hdparm as benchmark, use bonnie++, tiobench, iozone.
>
> Then after after y
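As a hedged sketch of that setup (resource names, paths and sizes below are illustrative, not taken from this thread): data spooling is switched on per Job in bacula-dir.conf, while Maximum File Size and the spool area live in the Device resource of bacula-sd.conf:

  # bacula-dir.conf
  Job {
    Name = "fileserver-full"
    Spool Data = yes
    Spool Attributes = yes
    # plus the usual Client, FileSet, Storage, Pool, Schedule entries
  }

  # bacula-sd.conf
  Device {
    Name = "LTO5-Drive"
    Maximum File Size = 5G
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 200G
    # plus the usual Archive Device, Media Type, etc.
  }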
Jason Voorhees wrote:
>
> I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
> TS3100 with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests report me a bandwidth of 112 MB/s.
>
> I'
> I tried to copy a 10 GB file between both servers (Bacula and
> Fileserver) with scp and I got a 48 MB/s speed transfer. Is this why
> my backups are always near to that speed?
>
Try backing up that 10GB file on both servers with bacula.
--
John M. Drescher
--
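One quick way to time such a single-file test (job and client names here are made up) is a dedicated job run by hand from bconsole; the job report ends with a "Rate:" line:

  *run job=SpeedTest client=fileserver-fd level=Full yes
  *messages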
On 04/28/2011 02:06 PM, Jason Voorhees wrote:
> I tried to copy a 10 GB file between both servers (Bacula and
> Fileserver) with scp and I got a 48 MB/s speed transfer. Is this why
> my backups are always near to that speed?
Try it with "scp -c arcfour" - like compression, encryption introduces
enough CPU overhead to limit the transfer rate.
On Thu, Apr 28, 2011 at 3:06 PM, Jason Voorhees wrote:
> On Thu, Apr 28, 2011 at 1:43 PM, John Drescher wrote:
>> On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
/dev/mapper/mpath0:
Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>> That is a raid. But
On Thu, Apr 28, 2011 at 1:43 PM, John Drescher wrote:
> On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
>>> /dev/mapper/mpath0:
>>> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>>
>> That is a raid. But you still may not be able to sustain over 100MB/s
>> of somewh
On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
>> /dev/mapper/mpath0:
>> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>
> That is a raid. But you still may not be able to sustain over 100MB/s
> of somewhat random reads. Remember that hdparm is only measuring
> sequ
> /dev/mapper/mpath0:
> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>
That is a raid. But you still may not be able to sustain over 100MB/s
of somewhat random reads. Remember that hdparm is only measuring
sequential performance of large reads.
--
John M. Drescher
-
On Thu, Apr 28, 2011 at 12:01 PM, John Drescher wrote:
>> So do you believe these speeds of my backups are normal? I thought my
>> Library tape with LTO-5 tapes could write at 140 MB/s approx. It isn't
>> possible to achieve higher speeds?
>
> You need to speed up your source filesystem to achieve
> So do you believe these speeds of my backups are normal? I thought my
> Library tape with LTO-5 tapes could write at 140 MB/s approx. It isn't
> possible to achieve higher speeds?
You need to speed up your source filesystem to achieve better
performance. Use RAID10 or get an SSD. It has nothing at
On Thu, Apr 28, 2011 at 11:41 AM, John Drescher wrote:
>> How can I know where's the bottleneck? I'm using an ext4 filesystem.
>> Are these tests useful?
>>
>> [root@qsrpsbk1 ~]# hdparm -t /dev/sda
>>
>> /dev/sda:
>> Timing buffered disk reads: 370 MB in 3.01 seconds = 122.89 MB/sec
>> [root@qs
> How can I know where's the bottleneck? I'm using an ext4 filesystem.
> Are these tests useful?
>
> [root@qsrpsbk1 ~]# hdparm -t /dev/sda
>
> /dev/sda:
> Timing buffered disk reads: 370 MB in 3.01 seconds = 122.89 MB/sec
> [root@qsrpsbk1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached re
On Thu, Apr 28, 2011 at 10:30 AM, John Drescher wrote:
>> No, there are just a "normal" number of files from a shared folder of
>> my fileserver with spreadsheets, documents, images, PDFs, just
>> end users' data.
>>
>
> The performance problem is probably filesystem performance. A sing
> The performance problem is probably filesystem performance. A single
> hard drive will only hit 100 MB/s if you are backing up files that are
> a few hundred MB.
>
>
> --
> John M. Drescher
>
How could I run some tests to verify this? I'm running MySQL server on
the same host where Bacula is inst
Did you activate attribute spooling (and maybe data spooling too, if
you use LTO)?
2011/4/28 Jason Voorhees :
> Hi:
>
> On Thu, Apr 28, 2011 at 10:19 AM, John Drescher wrote:
>> On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees
>> wrote:
>>> Hi:
>>>
>>> I'm running Bacula 5.0.3 in RHEL 6.0 x86
> No, there are just a "normal" number of files from a shared folder of
> my fileserver with spreadsheets, documents, images, PDFs, just
> end users' data.
>
The performance problem is probably filesystem performance. A single
hard drive will only hit 100 MB/s if you are backing up files
Try changing your Maximum Network Buffer size in your bacula-sd config.
Something like
Maximum Network Buffer Size = 262144 #65536
Maximum block size = 262144
Keep in mind that this will make your sd unable to read previous
backups, IIRC.
Search archives for this parameter, e.g.
http://old.
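For orientation only (the exact resource placement is my assumption, so check the SD manual for your version): the network buffer directive belongs in the Storage resource of bacula-sd.conf and the block size in the Device resource, roughly:

  Storage {
    Name = backup-sd
    Maximum Network Buffer Size = 262144   # default is 65536
  }

  Device {
    Name = "LTO-Drive"
    Maximum Block Size = 262144
    # plus the usual Archive Device, Media Type, etc.
  }

As noted above, raising Maximum Block Size changes the on-tape format, so tapes written with the new value may not be readable with the old settings.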
Hi:
On Thu, Apr 28, 2011 at 10:19 AM, John Drescher wrote:
> On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees wrote:
>> Hi:
>>
>> I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
>> TS3100 with hardware compression enabled and software (Bacula)
>> compression disabled, using L
On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees wrote:
> Hi:
>
> I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
> TS3100 with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests repo
Hi:
I'm running Bacula 5.0.3 in RHEL 6.0 x86_64 with a Library tape IBM
TS3100 with hardware compression enabled and software (Bacula)
compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
network and iperf tests report me a bandwidth of 112 MB/s.
I'm not using any spooling configura
hymie!> So one of my machines has a few zillion tiny little files.
Here's your problem right there. Reading all the metadata for those
files is the killer.
If the client is beefy enough, you can try splitting it up so there
are multiple readers all hitting the disk at once. This will
parallelize the metadata reads.
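A sketch of one way to do that split (all names and paths are made up): define disjoint FileSets and a Job for each, and raise Maximum Concurrent Jobs in the Director, Client and Storage resources so the jobs actually overlap:

  # bacula-dir.conf
  FileSet {
    Name = "web-part1"
    Include {
      Options {
        signature = MD5
      }
      File = /srv/www/a
    }
  }
  FileSet {
    Name = "web-part2"
    Include {
      Options {
        signature = MD5
      }
      File = /srv/www/b
    }
  }
  # one Job per FileSet, plus e.g. Maximum Concurrent Jobs = 4 in the
  # Director, Client and Storage resources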
On 4/19/2011 10:21 AM, hymie! wrote:
> Marcello Romani writes:
>
>> Maybe it's not relevant to your case, but have you tried to enable
>> spooling ?
> I don't think spooling will solve my problem. First off, I'm using disks
> as my storage, not tapes; spooling is not recommended. Second, the
> bo
> So one of my machines has a few zillion tiny little files.
>
> My full backup took 44 hours. I can deal with that if I have to.
> My incremental backup has been running for 10 hours now.
> Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
> Files Examined=14,675,372
>
> I know that b
Maybe I can answer to follow-ups at once. Easy one first:
On 19/04/2011 15:37, hymie! wrote:
>> So one of my machines has a few zillion tiny little files.
>>
>> My full backup took 44 hours. I can deal with that if I have to.
>> My incremental backup has been running for 10 hours now.
>>
On 19.04.2011 15:37, hymie! wrote:
>
> So one of my machines has a few zillion tiny little files.
>
> My full backup took 44 hours. I can deal with that if I have to.
> My incremental backup has been running for 10 hours now.
> Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
>
On 19/04/2011 15:37, hymie! wrote:
>
> So one of my machines has a few zillion tiny little files.
>
> My full backup took 44 hours. I can deal with that if I have to.
> My incremental backup has been running for 10 hours now.
> Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
>
So one of my machines has a few zillion tiny little files.
My full backup took 44 hours. I can deal with that if I have to.
My incremental backup has been running for 10 hours now.
Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
Files Examined=14,675,372
I know that bacula has t
On Mon, Jul 13, 2009 at 12:26 PM, Hayden
Katzenellenbogen wrote:
> John,
>
> For now the DB is on the same raid partition. Our DB admin is building
> a new high availability pair that I will move it onto soon.
>
> The data that I am accessing is local but like you said spooling should
> help I ha
-Original Message-
From: John Drescher [mailto:dresche...@gmail.com]
Sent: Wednesday, July 08, 2009 9:28 AM
To: Hayden Katzenellenbogen
Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Speed writing to tape drive
On Wed, Jul 8, 2009 at 12:12 PM, Martin Simmons
wrote
Sent: Wednesday, July 08, 2009 9:13 AM
To: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Speed writing to tape drive
>>>>> On Mon, 6 Jul 2009 10:14:09 -0700, Hayden Katzenellenbogen said:
>
> Hello,
>
> Thanks to a patch published two weeks ago
On Wed, Jul 8, 2009 at 12:12 PM, Martin Simmons wrote:
>> On Mon, 6 Jul 2009 10:14:09 -0700, Hayden Katzenellenbogen said:
>>
>> Hello,
>>
>> Thanks to a patch published two weeks ago I am finally making headway
>> into the wonderful world of Bacula. I have a single machine right now
>> with ab
> On Mon, 6 Jul 2009 10:14:09 -0700, Hayden Katzenellenbogen said:
>
> Hello,
>
> Thanks to a patch published two weeks ago I am finally making headway
> into the wonderful world of Bacula. I have a single machine right now
> with about 1.2T of data I am backing up.
>
> When I run the btape
Hello,
Thanks to a patch published two weeks ago I am finally making headway
into the wonderful world of Bacula. I have a single machine right now
with about 1.2T of data I am backing up.
When I run the btape fill test I get write speeds of around 70MB/s when
I run a full backup from the local ma
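For anyone who hasn't used it, the test referred to here is btape's fill command; a minimal sketch, assuming the SD config is /etc/bacula/bacula-sd.conf and the drive is /dev/nst0 (load a scratch tape, its contents will be overwritten):

  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
  *fill      # at the btape prompt: writes until the tape is full and reports the write rate

Newer releases also have a btape 'speed' command that gives a quicker throughput estimate.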
On Fri, 5 Dec 2008, Alex Chekholko wrote:
> > - a built-in Fast Ethernet adapter (3com 3c509)
I have had a _lot_ of trouble with 3com Vortex/boomerang/tornado NICs
under high load - they tend to start emitting unswitchable packets which
splatter the entire network causing slowdowns on all machine
> Date: Fri, 5 Dec 2008 04:45:56 -0500
> From: David Lee Lambert <[EMAIL PROTECTED]>
> I'm trying to use Bacula to do daily backups of data stored in iSCSI LUNs on
> a
> NetApp filer, using NetApp snapshots to ensure consistency. The hosts to be
> backed up have dual Gigabit Ethernet connec
On Fri, 5 Dec 2008 04:45:56 -0500
David Lee Lambert <[EMAIL PROTECTED]> wrote:
> I'm trying to use Bacula to do daily backups of data stored in iSCSI LUNs on
> a
> NetApp filer, using NetApp snapshots to ensure consistency. The hosts to be
> backed up have dual Gigabit Ethernet connections to
I'm trying to use Bacula to do daily backups of data stored in iSCSI LUNs on a
NetApp filer, using NetApp snapshots to ensure consistency. The hosts to be
backed up have dual Gigabit Ethernet connections to the NetApp. The backup
host consists of:
- a desktop-class (32-bit, 2.4GHz) machine w
Alan Brown wrote:
> On Tue, 13 Nov 2007, Florian Engelmann wrote:
>
>> hi,
>> how fast does your windows bacula-fd daemon backup to a linux server?
>> Our backup to disk is (300GB of files) running at 4 MB/s over a gigabit
>> connection (GZIP compressed at default compression level and also teste
On Wed, 14 Nov 2007, Florian Engelmann (Manntech) wrote:
>> Turn off GZIP
>>
>>
> I turned off GZIP and got this result:
> Backup Level: Incremental, since=2007-11-12 22:00:03
^^
Incremental means it must scan the filesystem to find what has changed since the last backup.
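For reference, Bacula's software compression is set per FileSet in bacula-dir.conf; a sketch of what to remove (FileSet name and path are placeholders):

  FileSet {
    Name = "windows-data"
    Include {
      Options {
        signature = MD5
        # compression = GZIP   # delete or comment this out to disable software compression
      }
      File = "C:/data"
    }
  }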
On Tue, 13 Nov 2007, Florian Engelmann wrote:
> hi,
> how fast does your windows bacula-fd daemon backup to a linux server?
> Our backup to disk is (300GB of files) running at 4 MB/s over a gigabit
> connection (GZIP compressed at default compression level and also tested
> a crossover connection)
Hi Florian,
It would probably be incredibly fast without compression, 4 MB/s is probably
the compression rate @ the client.
Michael
On Nov 13, 2007 8:52 AM, Florian Engelmann <[EMAIL PROTECTED]>
wrote:
> hi,
> how fast does your windows bacula-fd daemon backup to a linux server?
> Our backup to
hi,
how fast does your windows bacula-fd daemon backup to a linux server?
Our backup to disk is (300GB of files) running at 4 MB/s over a gigabit
connection (GZIP compressed at default compression level and also tested
a crossover connection). Seems to be slow, doesn't it? Is there any way to
tun
- "John Drescher" <[EMAIL PROTECTED]> wrote:
> > The Tape Drive:
> > Vendor: COMPAQ    Model: DLT4000    Rev: D887
> > Type: Sequential-Access    ANSI SCSI revision: 02
> > target0:0:6: Beginning Domain Validation
> >
> I have a speed question about my DLT tape drive. First some tech:
>
> The controller:
> description: SCSI storage controller
> product: AIC-7892A U160/m
> vendor: Adaptec
> physical id: 9
> bus info: [EMAIL PROTECTED]
Dear list,
I have a speed question about my DLT tape drive. First some tech:
The controller:
description: SCSI storage controller
product: AIC-7892A U160/m
vendor: Adaptec
physical id: 9
bus info: [EMAIL PROTECTED]:09
In the message dated: Tue, 24 Apr 2007 22:57:34 +0200,
The pithy ruminations from Sysadmin Worldsoft on
were:
=> Hello John,
=>
=> John Drescher wrote:
=>
=> > You did not give any details of your systems. I assume you have a
=> > gigabit network between all the involved systems. What data
In the message dated: Tue, 24 Apr 2007 22:57:34 +0200,
The pithy ruminations from Sysadmin Worldsoft on
were:
Hello John,
>
John Drescher wrote:
>
> You did not give any details of your systems. I assume you have a
> gigabit network between all the involved systems. What database are
> yo
Michael Nelson wrote:
> I find that the overall throughput for my gigabit-connected LTO-3
> jukebox seems to depend more on the makeup of the files being backed up
> than anything.
>
Sorry, but that is confusing language, and could be misinterpreted easily.
What I should have said is that th
John Drescher wrote:
> Are you using spooling on the backup with 7 million files?
Yes, John... all my jobs are spooled to disc then to tape.
> It's VERY fast on backups where it is doing large to medium files, and
> VERY slow on backups where it has to back up millions of tiny files. I
> have three webservers I back up that fit the "millions of small files"
> definition, and a full backup of each of them takes about 3 hours
> apiece.
> I backup a directory mounted on the same server which the library is
> connected.
>
I assume you mean the raid array that contains the data is on the
server and it is not nfs mounted...
>
> [Directory on NAS Powervault 220S<> Raid Controller] <-> Server <->
> [SCSI Adaptec 39160 <> Tape Library
Sysadmin Worldsoft wrote:
> I try to backup 193GB with bacula 2.0.3 on a tape library Powervault
> 124T with LTO3.
>
> The backup take 7 hours to terminate. The specification for PV 124T is
> "Supports maximum native transfer rates of 288GB/hr (LTO-3)"
>
> Any idea for this problem ?
>
I find
Hello John,
John Drescher wrote:
> You did not give any details of your systems. I assume you have a
> gigabit network between all the involved systems. What database are
> you using? Did you properly set up the indexes? Are you using
> spooling?
>
I backup a directory mounted on the same se
Maximum speeds on a tape drive are kinda like maximum capacity on your
broadband connection:
what's advertised and what you'll get are two different things.
I have that same Dell drive, and I'm seeing about 60 GB/hour, which I
feel is reasonable for LTO-3 with no compression... your numbers aren't
On 4/24/07, Sysadmin Worldsoft <[EMAIL PROTECTED]> wrote:
> Hi Folks,
>
> I try to backup 193GB with bacula 2.0.3 on a tape library Powervault
> 124T with LTO3.
>
> The backup take 7 hours to terminate. The specification for PV 124T is
> "Supports maximum native transfer rates of 288GB/hr (LTO-3)"
Using 100baseT? This document might help explain things:
http://www.dell.com/content/topics/global.aspx/power/en/ps4q02_wolfram?c=us&cs=555&l=en&s=biz
-Proto
Sysadmin Worldsoft wrote:
> Hi Folks,
>
> I try to backup 193GB with bacula 2.0.3 on a
Hi Folks,
I try to backup 193GB with bacula 2.0.3 on a tape library Powervault
124T with LTO3.
The backup takes 7 hours to complete. The specification for the PV 124T is
"Supports maximum native transfer rates of 288GB/hr (LTO-3)"
Any idea what the problem is?
Result for the backup:
22-Apr 06:
Alan Brown wrote:
> On Wed, 14 Feb 2007, Jesper Krogh wrote:
>
>> The attached tape is an LTO-3 (Quantum PX506) which has a reported rate of
>> 80 MB/s (I haven't tested this). The network is a gigabit network, which I
>> can put around 600 Mbit/s through using nc on both ends on some junk-files
On Wed, February 14, 2007 12:31 am, Jesper Krogh wrote:
> Anyone who can tell if this is typical.. or where my bottleneck is in this
> system?
My LTO-3 jukebox setup is very similar to yours. My backup speeds as shown
by "Rate:" in the log entries range from about 56KB/s to 27600KB/s,
depending
On Wed, 14 Feb 2007, Jesper Krogh wrote:
> The attached tape is an LTO-3 (Quantum PX506) which has a reported rate of 80
> MB/s (I haven't tested this). The network is a gigabit network, which I can
> put around 600 Mbit/s through using nc on both ends on some junk-files.
Is that "native" or "compre
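For reference, the nc-style raw network test mentioned above can be sketched like this (host name, port and size are placeholders; some netcat variants want 'nc -l 9000' without the -p):

  # on the receiving end
  nc -l -p 9000 > /dev/null
  # on the sending end: push 1 GB of zeros and note the rate dd reports
  dd if=/dev/zero bs=1M count=1024 | nc backupserver 9000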
Hi.
We've just upgraded our Bacula installation from 1.36 to 2.0. That worked
excellently; I'm very impressed with the smooth transition.
In the old installation we had transfer rates around 30 MB/s (measured using
iptraf when bacula-fd was processing some big files) (never more, often
less).. a
Hi,
I've been seeing variable performance on a Dell Poweredge 6400 backing
up to a PV132T tape library. If I reboot the server, I can backup the
data (Oracle standby database ~ 300Gb) at a speed of about 20,000,000
Bytes/sec. However, after performing a backup/verify cycle, I find that
I can only
Nicolas Stein wrote:
> - A Sata drive is 150MB/s, not counting the RAID, again, even with all
> software losses, it is still much higher...
The SATA bus may be 150MB/s, but I think you'll have trouble finding a
drive that does much more than 30-50MB/s except from its cache :)
Doesn't look like t
On Saturday 09 July 2005 12:00, Nicolas Stein wrote:
> Hi,
>
> I'm running Bacula 1.36.3 in a multi-OS environment.
>
> - The director is running on a Fedora Core 2 Machine.
>
> - The Storage daemon is on the same machine, and stores to HD files,
> in different directories (different "storage" an
Hi,
I'm running Bacula 1.36.3 in a multi-OS environment.
- The director is running on a Fedora Core 2 Machine.
- The Storage daemon is on the same machine, and stores to HD files,
in different directories (different "storage" and "Pool")
of a 1.1T software RAID5 volume (/dev/md0).
The volume
On Thu, 19 May 2005, Kern Sibbald wrote:
This problem seems to be rather current on Win2000 systems, and other Windows
systems. Though I do not have any proof except that it only happens with
Windows systems, I attribute it to a bug or deliberate throttling in the
Microsoft networking code.
I woul
forge.net; Martin Simmons; Matthew Butt
> Subject: Re: [Bacula-users] Speed of Windows FD
>
> On Thu, 19 May 2005, Kern Sibbald wrote:
>
> > This problem seems to be rather current on Win2000 systems, and
other
> Windows
> > systems. Though I do not have any proof excep
ay 19, 2005 8:09 AM
> To: bacula-users@lists.sourceforge.net
> Cc: Martin Simmons; Matthew Butt
> Subject: Re: [Bacula-users] Speed of Windows FD
>
> Hello,
>
> This problem seems to be rather current on Win2000 systems, and other
> Windows
> systems. Though I do
il list that regularly build it themselves ...
>
> Matthew Butt + T R I C Y C L E
>
> > -Original Message-
> > From: Kern Sibbald [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, May 19, 2005 8:09 AM
> > To: bacula-users@lists.sourceforge.net
> > Cc: Martin Simmon
out what is going wrong, I would certainly be
happy.
On Thursday 19 May 2005 12:16, Martin Simmons wrote:
> >>>>> On Wed, 18 May 2005 17:09:10 -0400, "Matthew Butt"
> >>>>> <[EMAIL PROTECTED]> said:
>
> Matt> Content-class: urn:content
Matthew Butt wrote:
> I'm trying to figure out at what speed/duplex the Windows server is but
> the switch it's plugged into shows that's it's also 1000Mbps full
> duplex. Cabling is all Cat5e.
GigE doesn't do half duplex AFAIK, so if it's GigE too, it's full duplex
(I think CSMA/CD becomes too d
>>>>> On Wed, 18 May 2005 17:09:10 -0400, "Matthew Butt" <[EMAIL PROTECTED]>
>>>>> said:
Matt> Content-class: urn:content-classes:message
Matt> Thread-Topic: [Bacula-users] Speed of Windows FD
Matt> I have two identical Win 2k3 server
it's also 1000Mbps full
duplex. Cabling is all Cat5e.
Matthew Butt + T R I C Y C L E
> -Original Message-
> From: Simon Weller [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 18, 2005 5:14 PM
> To: Matthew Butt
> Cc: bacula-users@lists.sourceforge.net
> Subject:
Have you checked network speed and duplex?
- Si
On Wed, 2005-05-18 at 17:09 -0400, Matthew Butt wrote:
> > Matt> I have two identical Win 2k3 servers (Dell PowerEdge 2800, U320 RAID5,
> > Matt> dual P4 Xeon 2.8) that I need to backup data onto an FC3 server running
> > Matt> Bac
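On the Linux side, the negotiated link speed and duplex can be checked with ethtool (interface name is a placeholder); the switch port has to be checked on the switch itself:

  ethtool eth0 | grep -E 'Speed|Duplex'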
> Matt> I have two identical Win 2k3 servers (Dell PowerEdge 2800, U320 RAID5,
> Matt> dual P4 Xeon 2.8) that I need to backup data onto an FC3 server running
> Matt> Bacula (P4 2.8GHz, USB2 HDD). All three machines have Gigabit cards
> Matt> running on a Gigabit switch with appropri
> On Tue, 17 May 2005 18:54:01 -0400, "Matthew Butt" <[EMAIL PROTECTED]>
> said:
Matt> I have two identical Win 2k3 servers (Dell PowerEdge 2800, U320 RAID5,
Matt> dual P4 Xeon 2.8) that I need to backup data onto an FC3 server running
Matt> Bacula (P4 2.8GHz, USB2 HDD). All three
Hi all,
I have two identical Win 2k3 servers (Dell PowerEdge 2800, U320 RAID5,
dual P4 Xeon 2.8) that I need to backup data onto an FC3 server running
Bacula (P4 2.8GHz, USB2 HDD). All three machines have Gigabit cards
running on a Gigabit switch with appropriate Cat5e cables.
Server1 has two fi
Hi and good evening to you...
Michael 'buk' Scherer wrote:
Good morning.
I guess I should have waited before cheering too loud.
The spooling and the writing of the data to tape happen at a decent speed,
between 6 and 8 MB/s, which is good. I could live with that.
BUT, here we go, after everything