Hi,
Summing up:
your clients have 100 Mbit NICs,
your server has a 1000 Mbit NIC,
and you are not using a holding disk, so as far as I recall
you are getting the maximum possible performance out of your setup.
Why? Without a holding disk, Amanda will fetch all your dumps one after the
other, no matter what you set inparallel to in amanda.conf.

Or has that behavior changed in newer versions of Amanda?

You are limited by the speed of your client NICs: 100 Mbit/s means at most
about 11 MByte/s, and a short calculation shows this leads to roughly 3 to 4
days of backup time.
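That back-of-the-envelope calculation can be sketched like this (the 3 TB
total is an assumed figure for illustration; substitute your actual DLE sizes):

```shell
# Sequential dump time over a single 100 Mbit/s link.
total_gb=3000                    # assumed total client data, in GB
mbyte_per_sec=11                 # ~100 Mbit/s after protocol overhead
seconds=$(( total_gb * 1024 / mbyte_per_sec ))
echo "$(( seconds / 86400 )) days $(( seconds % 86400 / 3600 )) hours"
# prints "3 days 5 hours"
```

With dumps running strictly one after another, the link speed multiplied by
the total data volume is the hard floor on the run time.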

If your NAS has a 1000 Mbit NIC, and the systems are connected by a
1 Gbit/s switch, then do yourself a favor and put a holding disk into your
server; I would suggest a SATA disk with around twice the capacity of the
largest DLE you have.
It will cut backup time dramatically, as Amanda will start dumping many hosts
in parallel.
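A minimal holding disk definition in amanda.conf might look like the sketch
below (the path, name and sizes are assumptions; match them to the disk you
actually install):

```
# hypothetical holdingdisk block for amanda.conf
holdingdisk hd1 {
    directory "/holding"     # mount point of the dedicated SATA disk
    use -100 Mb              # use all of the disk except 100 MB
    chunksize 1 Gb           # split dumps into 1 GB chunks on the disk
}
```

Once a holding disk is configured, the taper streams from it to tape while
several dumpers fill it in parallel, which is where the speedup comes from.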

But if your NAS only has a 100 Mbit NIC, or you don't have a Gbit switch,
you'll never get Amanda (or any other backup solution) faster than it is now.
Hope that helps
Christoph

On 15.03.2013 07:41, Amit Karpe wrote:
I am sharing more info here:

CPU usage:

On server (Intel® Xeon® series Quad core processors @ 2.66GHz)
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU   PID USER     COMMAND
  6.0 26873 33       /usr/bin/gzip --fast
  4.3 26906 33       /usr/bin/gzip --fast
27.7 30002 ntop     ntop
  2.1 26517 33       dumper3 DailySet2
  2.1 26515 33       dumper1 DailySet2
  1.4  1851 root     [nfsiod]
  1.2  1685 nobody   /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i
/var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
  1.0 27603 root     ps -eo pcpu,pid,user,args
  1.0  2135 root     [nfsd]

But on the clients there is always 80%-90% CPU usage, so I am planning to use
"compress server fast".
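For reference, server-side compression is selected per dumptype in amanda.conf
via the `compress` parameter; a sketch (the dumptype name here is
hypothetical) could look like:

```
# hypothetical dumptype enabling fast compression on the server
define dumptype comp-server-tar {
    global
    program "GNUTAR"
    compress server fast
}
```

This moves the gzip load from the busy 100 Mbit clients onto the server, at
the cost of sending uncompressed data over the wire first.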


parallel:
Though I am using the inparallel option in the config file, I am not sure
whether multiple dumpers or other processes are running in parallel or not:
  inparallel 30    # performance
  maxdumps 5       # performance
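One way to check whether dumpers actually run in parallel is to count the
dumper processes, or ask Amanda itself, while amdump is running (a sketch;
`DailySet2` is the configuration name visible in the ps output above):

```shell
# Count running dumper processes during a backup
ps -eo args | grep -c '^dumper[0-9]'

# Or ask Amanda for per-DLE status of the current/last run
amstatus DailySet2
```

If the count stays at 1 even with inparallel raised, the dumps are being
serialized, e.g. by the lack of a holding disk.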


netusage:
I read on a forum that netusage is an obsolete option, but I have still tried
values from 8m to 8000m, with no great success. What should the value of
netusage be if my server has a NIC that supports 1000 Mbps?

maxdumps:
I have changed it from one to five. How can I make sure whether it is working
or not?

I have tested a 15GB backup while changing the above parameters 50+ times,
and I see only about a 5% improvement in performance, i.e. I reduced backup
time from 18 min to 15 min. Can someone guide me to improve it further?


Client systems: these are ten normal workstations with 4GB RAM, a dual-core
Xeon at 2.5GHz, and a 100 Mbps NIC.
They hold 200G to 800G of data each, but the number of files is far larger.
Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d2       1.8T  634G  1.1T  37% /disk1

or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and got these outputs:
amdump.1<https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps>&
amdump.2<https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps>
Sorry, but I could not understand these plots. I think they only cover the
first minute of information.

Thank you to all of you who are helping and answering my dumb questions.

