Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
>> The total data was 716188 Mbyte and took almost 110:57 - 8:56 = 102:01.
>> That is 1996 kB/sec.
>> You didn't specify what tapedrive and host server you used, but this
>> seems a little low to me.
[...]
> i had a look at the tape specs now (which i should have done earlier) - it 
> says it can transfer data at 16 MB/s - not only 2 MB/s. I will definitely have 
> to run amtapetype now :-}. Let's see how it goes afterwards.

I think the 2MB/s you observe is due to the slow dumping speed across
the network.  Whatever your tapedrive does, you won't finish before
your clients have sent the data. :-)  How nice to be limited by network
and disk speed rather than tape!

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Alex,

> Since you are using hardware compression, the tape length given in the
> tapetype is just an estimate anyway.  It helps amanda's scheduler to plan
> its flushes, but in the end she'll always try another block until she
> hits EOT.  That's when you hope the DLE she taped was small and not
> direct-to-tape, because the aborted attempt means wasted time and space
> anyway.  A good tape length estimate will help the scheduler; but if you
> use hardware compression, it will depend on your data.

ok, i'm running amtapetype now and will decrease the value for tape length 
afterwards until amanda stops complaining about EOT. 

thanks,
Kai 




Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
>> I'd venture to try `compress server' here [...]

> "compress server"? Didn't you mean "compress client"?

Of course, sorry for that mistake.

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
>>  the successful size of the tapes is only 200GB, which
>> means that amanda wrote more than 100GB to tape, only to run into EOT
>> and start over.
>> Conclusion: either give a more realistic estimate of effective tape
>> length in your tapetype definition so amanda doesn't try to schedule a
>> huge flush only to fail;

> hmm, that's a tapetype definition i found on amanda.org for this drive. But 
> i'll change it of course.

Since you are using hardware compression, the tape length given in the
tapetype is just an estimate anyway.  It helps amanda's scheduler to plan
its flushes, but in the end she'll always try another block until she
hits EOT.  That's when you hope the DLE she taped was small and not
direct-to-tape, because the aborted attempt means wasted time and space
anyway.  A good tape length estimate will help the scheduler; but if you
use hardware compression, it will depend on your data.

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Paul,

thanks for your answer.

> This means that the backup was done in about 12 hours, which would
> have taken more than 4 times as long if every DLE were dumped sequentially.
> Amanda does dump many clients at the same time.

A - ok, it's clear now.
 
> About your problem, the pdf shows that from the 16 dumpers you started,
> only a minority is busy: a peak of 40% in the beginning, and then only
> 1 or even 0 dumpers for a long time.
> At the same time the holdingdisk is used 100%.
> Together it means that your holdingdisk area is too small.
> If possible I would add a cheap large disk in the server (200 Gbyte
> or more would be nice).  You can configure multiple holdingdisks.

i thought 345 GB for my holding disk was ok, since it's half of my total 
storage (that's what the amanda documentation said). I'll try to split the DLEs 
first - if that doesn't help, i'll add some more diskspace.
 
> The graph also shows the tapedrive is (almost) always busy.  That's
> good.  You can fill the little gap with having a larger holdingdisk.
> 
> The total data was 716188 Mbyte and took almost 110:57 - 8:56 = 102:01.
> That is 1996 kB/sec.
> You didn't specify what tapedrive and host server you used, but this
> seems a little low to me.  What is the expected throughput of your
> tapedrive (what does amtapetype indicate as speed)?

i haven't run amtapetype yet, i took the following tapetype definition from 
amanda.org:
define tapetype HPULT2 {
comment "HP Ultrium 2-SCSI (200G tape used)"
length 206705 mbytes
filemark 571 kbytes
speed 2165 kps
}
maybe i should really run amtapetype, but i'm a bit afraid of the time it 
takes...
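(A quick back-of-the-envelope sketch, not part of the original mail: how long one full-tape pass might take, assuming the drive's rated 200 GB native capacity and 16 MB/s from the specs quoted later in the thread. amtapetype writes the tape more than once, so the real run is longer.)

```python
# Rough estimate of one amtapetype pass on an LTO-2 class drive.
# Assumptions (from drive specs, not measured): 200 GB native
# capacity, sustained 16 MB/s write speed.
capacity_mb = 200 * 1024          # 200 GB expressed in MB
rated_mb_per_s = 16
hours_per_pass = capacity_mb / rated_mb_per_s / 3600
print(f"one full-tape pass: {hours_per_pass:.1f} h")  # prints 3.6 h
```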
 
> My tapedrive is a simple AIT-1 drive, 35 Gbyte native capacity,
> theoretically doing 3MBytes/sec.  Amanda does 2600 kB/sec on
> average (large partitions going at 3000-3090 kB/sec).  And the
> amanda server is only a 300 MHz Celeron with 128 MB RAM and a 80
> Gbyte IDE holdingdisk.

that's interesting. maybe my definition is really wrong.
 
> Slow tapespeed can be the result of one or more DLE's bypassing
> the holdingdisk.  That results in the tapedrive not streaming anymore.
> The file "amdump.1" contains the keyword "PORT-DUMP" instead of 
> "FILE-DUMP" for such partitions. Are there any?

no, there are only file-dumps.
 
> Again adding holdingdisk space would help in that case (or splitting
> those DLE's into smaller ones).

i'll try that first :-)

> Having a large holdingdisk, and many parallel dumpers, could result
> in doing the dumps fast, finishing before people come in and load
> the client machines, and taping all those collected images can then
> proceed at the normal tapespeed, but without bothering the end users.

i had a look at the tape specs now (which i should have done earlier) - it says 
it can transfer data at 16 MB/s - not only 2 MB/s. I will definitely have to run 
amtapetype now :-}. Let's see how it goes afterwards.

Thanks a lot,
Kai




Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Alex,

> How much holding disk does that spell?  Verify the setting in
> amanda.conf, you don't want to be limited here.

345 GB - Amanda is allowed to use the disk completely, except for 30% reserve.

> I'd venture to try `compress server' here on all but the Sparc machine. 
> Seem to be nice beasts to me...

"compress server"? Didn't you mean "compress client"?

> Notice you hit EOT at between 300GB and 390GB, so your data is quite
> compressible, and you are in fact using hardware compression.  Is that
> LTO-2?

yes - it's an LTO-2. Most of our files are xml, so the compression should work 
well.

>  Anyway, the successful size of the tapes is only 200GB, which
> means that amanda wrote more than 100GB to tape, only to run into EOT
> and start over.
> Conclusion: either give a more realistic estimate of effective tape
> length in your tapetype definition so amanda doesn't try to schedule a
> huge flush only to fail;

hmm, that's a tapetype definition i found on amanda.org for this drive. But 
i'll change it of course.

> or use software compression with the true tape
> length (that would help with holding disk space as well).

ok, i'll try both and see which is faster.

>  And moreover,
> break your disks in smaller DLEs; that way when amanda hits EOT, she
> only needs to restart a few GB, and not hundreds.  That alone should cut
> your backup time down by a third.

ok, maybe that was a problem...
 
> Add to that the better parallelizing, and you're down to less than a
> weekend.

:-)

> As far as I see, you've got dump rates of consistently over 1MB/s. 
> That's not bad I think, I'm sometimes getting worse.  The longest runner
> would be kira:.../konvert, 134GB at 1.3MB/s gives 10s which spells
> 27h to me.  Plus two other disks on the same machine that cannot be
> parallelized. :-(  Cutting into smaller pieces would stop them all from
> doing a full the same day, though.

that was my first guess, too. Now i have some parameters i can optimize 
(tapesize, client compression, DLEs).

Thanks a lot for your help,
Kai






Re: tuning amanda

2004-11-17 Thread Paul Bijnens
Kai Zimmer wrote:
> > Run Time (hrs:min)110:57
> > Dump Time (hrs:min)   120:52
> Sorry, i didn't completely understand that. Run time can't be lower
> than dump time, so parallelizing will gain max. 10h ?

Yes, run time should be much lower than dump time when you have
many clients.  From my "archive" run this weekend:

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:39
Run Time (hrs:min)        12:07
Dump Time (hrs:min)       51:45      51:45       0:00
This means that the backup was done in about 12 hours, which would
have taken more than 4 times as long if every DLE were dumped sequentially.
Amanda does dump many clients at the same time.
About your problem, the pdf shows that from the 16 dumpers you started,
only a minority is busy: a peak of 40% in the beginning, and then only
1 or even 0 dumpers for a long time.
At the same time the holdingdisk is used 100%.
Together it means that your holdingdisk area is too small.
If possible I would add a cheap large disk in the server (200 Gbyte
or more would be nice).  You can configure multiple holdingdisks.
The graph also shows the tapedrive is (almost) always busy.  That's
good.  You can fill the little gap with having a larger holdingdisk.
The total data was 716188 Mbyte and took almost 110:57 - 8:56 = 102:01.
That is 1996 kB/sec.
You didn't specify what tapedrive and host server you used, but this
seems a little low to me.  What is the expected throughput of your
tapedrive (what does amtapetype indicate as speed)?
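(Checking the arithmetic above: a quick sketch, not part of the original mail, reproducing the quoted figures.)

```python
# 716188 MB written in 110:57 run time minus 8:56 estimate time = 102:01.
total_kb = 716188 * 1024
run_seconds = 102 * 3600 + 1 * 60        # 102:01 as seconds
kb_per_sec = total_kb / run_seconds
print(f"{kb_per_sec:.0f} kB/s")          # prints 1997 kB/s (the 1996 above truncates)
```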
My tapedrive is a simple AIT-1 drive, 35 Gbyte native capacity,
theoretically doing 3MBytes/sec.  Amanda does 2600 kB/sec on
average (large partitions going at 3000-3090 kB/sec).  And the
amanda server is only a 300 MHz Celeron with 128 MB RAM and a 80
Gbyte IDE holdingdisk.
Slow tapespeed can be the result of one or more DLE's bypassing
the holdingdisk.  That results in the tapedrive not streaming anymore.
The file "amdump.1" contains the keyword "PORT-DUMP" instead of 
"FILE-DUMP" for such partitions. Are there any?

Again adding holdingdisk space would help in that case (or splitting
those DLE's into smaller ones).
Having a large holdingdisk, and many parallel dumpers, could result
in doing the dumps fast, finishing before people come in and load
the client machines, and taping all those collected images can then
proceed at the normal tapespeed, but without bothering the end users.
--
Paul Bijnens, XplanationTel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
http://www.xplanation.com/  email:  [EMAIL PROTECTED]
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out  *
***



Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Alex,

> Look into the `columnspec' docs in amanda.conf in order to stop your
> columns from running into each other.

thanks for the hint - it's much better now:

DUMP SUMMARY:
                                                  DUMPER STATS          TAPER STATS
HOSTNAME DISK                  L   ORIG-KB    OUT-KB COMP%  MMM:SS   KB/s  MMM:SS   KB/s
----------------------------------------------------------------------------------------
before   /home                 0  29046520  29046520   --  215:09 2250.1  214:16 2259.4
crusher  /home                 0  16168600  16168600   --  205:04 1314.1  240:58 1118.3
hera     /home                 0  30581620  30581620   --  419:16 1215.7  540:09  943.6
kira     /home                 0  87110752  87110752   -- 1291:56 1123.8  431:15 3366.6
kira     /home/konvert         0 133950848 133950848   -- 1633:50 1366.4  432:31 5161.7
kira     /home/prepare_rehbein 1    197190    197190   --   33:11   99.0   11:03  297.5
kirk     /home/clip            0 110277408 110277408   --  577:09 3184.6  516:11 3560.6
kirk     /ohne-clip            0 204097056 204097056   -- 1561:00 2179.1  660:32 5149.8
lore     /local/home           0 104371080 104371080   -- 1059:46 1641.4  489:04 3556.8
lore     /local/home/sokirko   0   8572610   8572610   --  116:18 1228.6  169:18  843.9
zeus     /home                 0   9003040   9003040   --  139:31 1075.5  165:25  907.2




Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
> the machines are:
> kirk (amanda-server): 2x650 P3; IDE-LVM (hold: 2, local 6 disks), Linux, 
> Giga-Ethernet

How much holding disk does that spell?  Verify the setting in
amanda.conf, you don't want to be limited here.
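(For reference, a holdingdisk block in amanda.conf looks roughly like this; the path and sizes are made up, not taken from Kai's setup. Several blocks may be defined and amanda will use them all.)

```conf
holdingdisk hd1 {
    directory "/dumps/amanda"   # hypothetical mount point
    use -10 Gb                  # use all free space except 10 GB headroom
}
```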

> before:  1x244 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
> crusher: 1x242 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
> hera:    2x1,4 GHz P3, SCSI-LVM (2 disks), Linux, Giga-Ethernet
> kira:    2x1,4 GHz P3, IDE-LVM  (8 disks), Linux, Giga-Ethernet
> lore:    2x1,8 GHz Xeon, SCSI-HW-Raid (10 disks R5), Linux, Giga-Ethernet
> zeus:    2x400 MHz Sparc, single-SCSI-Disks (4), Solaris 8, Giga-Ethernet

I'd venture to try `compress server' here on all but the Sparc machine. 
Seem to be nice beasts to me...

> USAGE BY TAPE:
>   Label   Time  Size  %Nb
>   volume008  30:30  193301.4   93.5 7
>   volume009  11:01  199313.5   96.4 1
>   volume010  15:47  192761.9   93.3 2
>   volume011   7:13  130811.4   63.3 1
[...]
>   taper: tape volume008 kb 389149248 fm 8 writing file: No space left on 
> device
[...]
>   taper: tape volume009 kb 305159648 fm 2 writing file: No space left on 
> device
[...]
>   taper: tape volume010 kb 324967808 fm 3 writing file: No space left on 
> device
[...]
>   taper: tape volume011 kb 133950912 fm 1 [OK]

Notice you hit EOT at between 300GB and 390GB, so your data is quite
compressible, and you are in fact using hardware compression.  Is that
LTO-2?  Anyway, the successful size of the tapes is only 200GB, which
means that amanda wrote more than 100GB to tape, only to run into EOT
and start over.

Conclusion: either give a more realistic estimate of effective tape
length in your tapetype definition so amanda doesn't try to schedule a
huge flush only to fail; or use software compression with the true tape
length (that would help with holding disk space as well).  And moreover,
break your disks in smaller DLEs; that way when amanda hits EOT, she
only needs to restart a few GB, and not hundreds.  That alone should cut
your backup time down by a third.
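(For illustration, splitting one big /home into two DLEs with GNU tar include patterns might look like this sketch in the disklist; the DLE names, dumptype name and patterns are invented, not from Kai's config.)

```conf
# disklist -- two DLEs covering one filesystem (hypothetical)
kira  /home-a-m  /home {
    user-tar
    include "./[a-m]*"
}
kira  /home-n-z  /home {
    user-tar
    include "./[n-z]*"
}
```

Care is needed that the patterns together cover every top-level entry (dotfiles, digits, capitals), otherwise data is silently skipped.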

Add to that the better parallelizing, and you're down to less than a
weekend.

> before   /home   0 2904652029046520   --  215:092250.1 214:162259.4
> crusher  /home   0 1616860016168600   --  205:041314.1 240:581118.3
> hera /home   0 3058162030581620   --  419:161215.7 540:09 943.6
> kira /home   0 8711075287110752   -- 1291:561123.8 431:153366.6
> kira -me/konvert 0 133950848133950848   -- 1633:501366.4 432:315161.7
> kira -re_rehbein 1  197190 197190   --   33:11  99.0  11:03 297.5
> kirk /home/clip  0 110277408110277408   --  577:093184.6 516:113560.6
> kirk /ohne-clip  0 204097056204097056   -- 1561:002179.1 660:325149.8
> lore /local/home 0 104371080104371080   -- 1059:461641.4 489:043556.8
> lore -me/sokirko 0 85726108572610   --  116:181228.6 169:18 843.9
> zeus /home   0 90030409003040   --  139:311075.5 165:25 907.2

As far as I see, you've got dump rates of consistently over 1MB/s. 
That's not bad I think, I'm sometimes getting worse.  The longest runner
would be kira:.../konvert, 134GB at 1.3MB/s gives 10^5 s which spells
27h to me.  Plus two other disks on the same machine that cannot be
parallelized. :-(  Cutting into smaller pieces would stop them all from
doing a full the same day, though.

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
> before   /home   0 2904652029046520   --  215:092250.1 214:162259.4
[...]

Look into the `columnspec' docs in amanda.conf in order to stop your
columns from running into each other.
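(For example, widening the hostname and disk columns in amanda.conf; the widths here are arbitrary, not a recommendation.)

```conf
# columnspec entries are "Column=prefix-spaces:width,..."
columnspec "HostName=0:12,Disk=1:22,OrigKB=1:10,OutKB=1:10"
```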

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
> My servers CPU is slower than most of my clients. But at the moment i only 
> use hardware compression on the tape. Should i change it to server or client 
> compression?

From your amplot, it doesn't seem as if you're network bound, so sending
uncompressed data over the wire shouldn't be a problem; otherwise I'd
have recommended you try `compress client'.  You might still want to
compress on either side, client or server, in order to cut down on
holding disk usage.  In any case, make sure you don't mix hardware and
software (client or server) compression.  (Unless you have an LTO drive,
which does cope nicely with already-compressed data.)  This point has
been discussed several times already on the list, you'll find all the
details in the archives.
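(A dumptype switching DLEs to software compression might look like this sketch; the dumptype name is a placeholder. Hardware compression would then be disabled on the drive itself so the two don't fight each other.)

```conf
define dumptype comp-client {
    global
    program "GNUTAR"
    compress client fast    # or `compress server fast' for weak clients
}
```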

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29


Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Joshua,

thanks for your answer.

> Please post the whole report email.  Also, some hints as to machine types, 
> CPU speeds, disk types, OS(es), and network speed would be helpful.

the machines are:
kirk (amanda-server): 2x650 P3; IDE-LVM (hold: 2, local 6 disks), Linux, 
Giga-Ethernet
before:  1x244 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
crusher: 1x242 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
hera:    2x1,4 GHz P3, SCSI-LVM (2 disks), Linux, Giga-Ethernet
kira:    2x1,4 GHz P3, IDE-LVM  (8 disks), Linux, Giga-Ethernet
lore:    2x1,8 GHz Xeon, SCSI-HW-Raid (10 disks R5), Linux, Giga-Ethernet
zeus:    2x400 MHz Sparc, single-SCSI-Disks (4), Solaris 8, Giga-Ethernet 

here is the report mail (there are some irrelevant failure messages):

-snip--
These dumps were to tapes volume008, volume009, volume010, volume011.
The next 4 tapes Amanda expects to use are: volume012, volume013, volume014, 
volume015.

FAILURE AND STRANGE DUMP SUMMARY:
  hera   /home lev 0 STRANGE
  kirk   /ohne-clip lev 0 STRANGE
  kira   /home lev 0 STRANGE


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    8:56
Run Time (hrs:min)       110:57
Dump Time (hrs:min)      120:52     120:19       0:33
Output Size (meg)      716188.2   715995.6      192.6
Original Size (meg)    716188.2   715995.6      192.6
Avg Compressed Size (%)      --         --         --
Filesystems Dumped           11         10          1   (1:1)
Avg Dump Rate (k/s)      1685.4     1692.7       99.0

Tape Time (hrs:min)       64:31      64:20       0:11
Tape Size (meg)        716188.2   715995.6      192.6
Tape Used (%)             346.5      346.4        0.1
Filesystems Taped            11         10          1   (1:1)
Avg Tp Write Rate (k/s)  3157.8     3166.0      297.5

USAGE BY TAPE:
  Label   Time  Size  %Nb
  volume008  30:30  193301.4   93.5 7
  volume009  11:01  199313.5   96.4 1
  volume010  15:47  192761.9   93.3 2
  volume011   7:13  130811.4   63.3 1


FAILED AND STRANGE DUMP DETAILS:

/-- hera   /home lev 0 STRANGE
sendbackup: start [hera:/home level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/tar -f... -
sendbackup: info end
? gtar: ./var/log/httpd/access_log: file changed as we read it
| Total bytes written: 31315578880 (29GB, 1.2MB/s)
sendbackup: size 30581620
sendbackup: end
\

/-- kirk   /ohne-clip lev 0 STRANGE
sendbackup: start [kirk:/ohne-clip level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/tar -f... -
sendbackup: info end
? gtar: ./home/amanda/full/index/kira/_home_prepare__rehbein/20041112_1.gz.tmp: 
Warning: Cannot stat: No such file or directory
| gtar: ./home/backup/var/lib/mysql/mysql.sock: socket ignored
| gtar: ./home/backup/var/run/.nscd_socket: socket ignored
| gtar: ./home/backup/var/run/sendmail/control: socket ignored
| gtar: ./home/mysql/mysql.sock: socket ignored
| Total bytes written: 208995389440 (195GB, 2.1MB/s)
sendbackup: size 204097060
sendbackup: end
\

/-- kira   /home lev 0 STRANGE
sendbackup: start [kira:/home level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/tar -f... -
sendbackup: info end
? gtar: ./home/herold/public_html/chart.pdf: Warning: Cannot stat: No such file 
or directory
? gtar: ./home/herold/public_html/gnuplot-manual.pdf: Warning: Cannot stat: No 
such file or directory
? gtar: ./home/herold/public_html/test.eps: Warning: Cannot stat: No such file 
or directory
? gtar: ./home/herold/public_html/test.pdf: Warning: Cannot stat: No such file 
or directory
? gtar: ./home/kai/.DCOPserver_kira_tuvok.bbaw.de\:0: Warning: Cannot stat: No 
such file or directory
? gtar: ./home/kai/.DCOPserver_kira_tuvok.bbaw.de_0: Warning: Cannot stat: No 
such file or directory
? gtar: 
./home/kai/.kde/share/config/session/konsole_11c0a8014e0001093427225027818_1100195599_472021:
 Warning: Cannot stat: No such file or directory
? gtar: 
./home/kai/.kde/share/config/session/konsole_11c0a8014e00010934390090278100010_1100195599_472160:
 Warning: Cannot stat: No such file or directory
? gtar: 
./home/kai/.kde/share/config/session/kwin_11c0a8014e0001065359865001403_1100195599_540511:
 Warning: Cannot stat: No such file or directory
| Total bytes written: 89201408000 (83GB, 1.1MB/s)
sendbackup: size 87110750
sendbackup: end
\


NOTES:
  planner: Incremental of kira:/home bumped to level 2.
  planner: Full dump of hera:/home promoted from 1 day ahead.
  planner: Full dump of crusher:/home promoted from 1 day ahead.
  planner: Full dump of kira:/home/konvert promoted from 2 days ahead.
  planner: Full dump of kirk:/home/clip promoted from 3 days ahead.
  taper: tape volume008 kb 389149248 fm 8 writing file: N

Re: tuning amanda

2004-11-17 Thread Joshua Baker-LePain
On Wed, 17 Nov 2004 at 2:37pm, Kai Zimmer wrote

> > As it stands, you gain almost nothing by
> > parallelizing DLEs:
> > > Run Time (hrs:min)110:57
> > > Dump Time (hrs:min)   120:52
> 
> Sorry, i didn't completely understand that. Run time can't be lower 
> than dump time, so parallelizing will gain max. 10h ?

I think what he was saying is that your dump time is not much higher than 
your run time, therefore amanda's inherent parallelization isn't gaining 
you much.  "Run time" is the wall clock time from the time you start 
'amdump' until the time it finishes.  "Dump time" is the total time spent 
by all the DUMP programs amanda runs.  Since many DUMPs can be run at 
once, "dump time" *can be* significantly higher than run time -- the 
higher the ratio, the more efficiently amanda is running.  If there are ways 
you can improve your parallelization, your run time may go down.  
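(The knobs controlling that parallelism live in amanda.conf; a sketch with illustrative values, not tuned for Kai's site.)

```conf
inparallel 16           # total dumpers the server may run at once
maxdumps 2              # simultaneous dumps per client (for multi-disk hosts)
netusage 80000 Kbps     # bandwidth cap that can also throttle parallelism
```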

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Re: tuning amanda

2004-11-17 Thread Joshua Baker-LePain
On Wed, 17 Nov 2004 at 1:39pm, Kai Zimmer wrote

> my amanda installation is quite slow:
> Estimate Time (hrs:min)8:56
> Run Time (hrs:min)110:57
> Dump Time (hrs:min)   120:52
> for 716 GB. 
> 
> this is what Amplot says:
> http://www.dwds.de/~kai/images/20041112.pdf

Please post the whole report email.  Also, some hints as to machine types, 
CPU speeds, disk types, OS(es), and network speed would be helpful.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Re: tuning amanda

2004-11-17 Thread Kai Zimmer
Hi Alex,

thanks for your answer:

> I've found on my installation that for some clients, the bottleneck was
> rather the client's CPU.  Switching from `compress client' to `compress
> server' gained me some percent.

My server's CPU is slower than most of my clients. But at the moment i only use 
hardware compression on the tape. Should i change it to server or client 
compression? 

>  Also, breaking up into smaller DLEs
> surely helps, if only because it tends to keep the holding disk free
> (you run into 100% a few times), which might allow other clients to be
> started in parallel.

ok, that's clear.

> As it stands, you gain almost nothing by
> parallelizing DLEs:
> > Run Time (hrs:min)110:57
> > Dump Time (hrs:min)   120:52

Sorry, i didn't completely understand that. Run time can't be lower than dump 
time, so parallelizing will gain max. 10h ?

> But even so, 716GB in 120h isn't that bad a throughput.  I'm getting half 
> an order of magnitude better, 400GB in typically 16h, but I wouldn't bet
> the house on getting below 24h.

ok - at least that might run over the weekend :-)
thanks for your help,
Kai 





Re: tuning amanda

2004-11-17 Thread Alexander Jolk
Kai Zimmer wrote:
> Are there possibilties to gain speed without investing into new hardware?
> Will breaking up into smaller DLEs help?

I've found on my installation that for some clients, the bottleneck was
rather the client's CPU.  Switching from `compress client' to `compress
server' gained me some percent.  Also, breaking up into smaller DLEs
surely helps, if only because it tends to keep the holding disk free
(you run into 100% a few times), which might allow other clients to be
started in parallel.  As it stands, you gain almost nothing by
parallelizing DLEs:
> Run Time (hrs:min)110:57
> Dump Time (hrs:min)   120:52

But even so, 716GB in 120h isn't that bad a throughput.  I'm getting half
an order of magnitude better, 400GB in typically 16h, but I wouldn't bet
the house on getting below 24h.

Alex

-- 
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 /  fax +33-1 42 68 18 29