Zlatko:
OK: TCPWindowsize is measured in kilobytes (my setting is now 1280).
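For reference, a minimal sketch of how the option looks on each side (file locations assume a default AIX install; 1280 is just my current value, not a recommendation):

    * client option file (dsm.sys)
    TCPWindowsize   1280

    * server option file (dsmserv.opt)
    TCPWindowsize   1280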

I have the same question (why is the client idling?).
We have users working on that server from 9:00 until 22:00 (every
day...); the backup jobs start at about 00:15/01:15.

I've monitored the nodes' disk I/O and network transfers at different
moments of the day (the kind of commands involved are sketched below).
CPU load, memory usage and paging all look OK, for example:
- CPU load (usr) averages 5~7% throughout the day
- memory usage (the machine has 7 GB RAM) is not a problem (about
30~40% computational, and the same for non-computational)
- paging: peak usage is maybe 10~12% (around 0.5% for most of the day,
with the peaks during the users' working hours)
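For completeness, these are the standard AIX commands such monitoring relies on (the 5-second interval is just an example):

    vmstat 5      # CPU usr/sys/idle/wait and free memory
    iostat -d 5   # per-disk throughput (KB/s) and %tm_act
    svmon -P      # per-process memory and paging-space usage
    netstat -i    # packet counters on the Gigabit interface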

Looking at your results (500 GB in 3h10m) and trying to compare: we're
backing up 250 GB starting at 00:15 and ending around 07:00... 250 GB
in 6h45m (it's not good).
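Expressed as rough sustained throughput (taking 1 GB = 1024 MB):

    yours: 500 GB / 3.17 h  =  ~158 GB/h  =  ~45 MB/s
    ours:  250 GB / 6.75 h  =   ~37 GB/h  =  ~10.5 MB/s

So we are running at roughly a quarter of your rate.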

Yesterday I also set RESOURCEUTILIZATION to 10, and performance was the
same (actually a bit worse).

I think there are multiple problems (and our provider, IBM, cannot help
us):

First of all, disk performance: IBM should tell us how to get the best
out of the FAStT500. We are only seeing 15 to 25 MB/s from the
storage...
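A typical way to confirm a figure like 15-25 MB/s is to watch the FAStT LUNs during the backup window; for example (the hdisk names are placeholders, yours will differ):

    iostat -d hdisk4 hdisk5 5   # read/write KB/s and %tm_act per LUN, every 5 s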

Second: the TSM nodes in our installation do not all have the same file
profile. To explain this a bit more: we have nodes mixing a great many
small files (>250,000, averaging 40 KB each) with a few large files
(<1,000, averaging 50 MB). It's Oracle Financials: the database server
holds the datafiles plus the files belonging to the application and the
database engine.

We have 4 nodes like the one just described, each holding 35~40 GB in
total (on average, and growing...).
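For a small-file mix like that, the transaction-grouping options are the usual thing to experiment with; a sketch (the values are illustrative starting points, not tested recommendations):

    * client side (dsm.sys)
    TXNBYTELIMIT         25600   * KB per transaction; batch more small files
    RESOURCEUTILIZATION  4       * fewer sessions than 10, since 10 was worse

    * server side (dsmserv.opt)
    TXNGROUPMAX          256     * max files per server transaction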

Well, that was a brief description.
I'm open to any new ideas.
Thanks

Ignacio


> -----Original Message-----
> From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 17, 2002 5:29
> To: [EMAIL PROTECTED]
> Subject: Re: Tuning TSM
> 
> 
> Just a point:
> The TCPWindowsize parameter is measured in kilobytes, not bytes. And
> according to the Administrator's Reference it must be between 0 and
> 2048. If the value is out of range, the client complains with ANS1036S
> (invalid value); on the server, out-of-range values mean 0, i.e. the
> OS default.
> However, these are side remarks. The main question is why the client
> is idling.
> Have you monitored the node during to-disk and to-tape operations? Is
> migration starting during backup? Are you using DIRMC?
> You wrote client compression - what is the processor usage (user)?
> What is the disk load - is the processor I/O wait high? Is the paging
> space being used - check with svmon -P.
> You should get much better results. For example, we recently achieved
> 500 GB in 3h10m - fairly good. It was similar to your config - AIX
> node & server, client compression, disk pool, Gigabit Ethernet. The
> Ethernet was driven at 10-25 MB/s depending on the compression
> achieved. The bottleneck was the EMC Symmetrix the node was reading
> from, but another company was dealing with it and we were unable to
> get more than 60-70 MB/s read. RESOURCEUTILIZATION was set to 10.
> 
> Zlatko Krastev
> IT Consultant
> 
> Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> Sent by:        "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To:     [EMAIL PROTECTED]
> cc: 
> 
> Subject:        Re: Tuning TSM
> 
> Zlatko:
> Here are the answers:
> 
> > Have you tested the performance of the FAStT 500 disks? Those were
> > mid-class disk storage originally designed for the PC server market
> > and later modified to be compatible with rs6k/pSeries.
> 
> This is completely true: IBM installed the FAStT500 instead of SSA
> disks (they said it was faster).
> We have re-cabled these devices and got a better I/O balance across
> the controllers.
> 
> > Try some simple tests on the TSM server when it is quiet:
> > - 'date; dd if=/dev/zero of=/fastt_fs/test_file bs=262144 count=16384; date'
> >   to test write speed. Use a large file to simulate TSM heavy load.
> > - 'date; dd if=/fastt_fs/test_file of=/dev/null' to check read speed
> 
> In a test like the one you proposed, the FAStT500 delivers about
> 60 MB/s.
> 
> > Also check your Gb Ethernet - are you using jumbo frames, have you
> > enabled 'no -o rfc1323=1' in AIX, what are the send & receive buffer
> > sizes, and what is the TCPWindowsize of the TSM server & client?
> > Have you measured the LAN & disk load during backup?
> 
> Yes, we're using these parameters:
> <from "no">
>     tcp_sendspace = 1048576
>     tcp_recvspace = 1048576
>     udp_sendspace = 64000
>     udp_recvspace = 64000
>     rfc1323 = 1
>
> <from TSM server>
> TCPWindowsize         1310720
> 
> What I've observed is that during a client backup session, TSM seems
> to sit idle for long stretches (and the server machine has no memory
> or I/O constraints).
> 
> I can send more data.
> Thank you all.
> 
> Ignacio
> 
> > 
> > Zlatko Krastev
> > IT Consultant
> > 
> > 
> > 
> > Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> > Sent by:        "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> > To:     [EMAIL PROTECTED]
> > cc: 
> > 
> > Subject:        Tuning TSM
> > 
> > Hi:
> > I'm managing a pretty small TSM installation with 4 RS/6K machines
> > (2 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
> > The TSM software consists of the server (running on a 6M1 with 7 GB
> > RAM) and the clients, which run on the same machine and on the
> > others.
> > 
> > I've got the following situation:
> > - the total amount of data backed up is about 200 GB,
> > - the 4 servers are connected by Gigabit Ethernet links (and have
> >   6 GB and 7 GB RAM for the 6H1 and 6M1 models respectively),
> > - TSM uses a 240 GB storage pool on FAStT500 disks (connected via
> >   FC channels),
> > - TSM uses a 3581 (LTO) library with 1 drive.
> > 
> > The fact is (for the same set of data):
> > When I do an archive operation with TSM, the elapsed time is around
> > 5 hours (TSM writes straight to tape).
> > When I do an incremental backup operation, TSM takes about 6h30m
> > (TSM writes to the disk storage pool).
> > 
> > I'm looking for a rational approach to this "problem": isn't
> > writing to the storage pool (disk) supposed to be faster than
> > writing to tape?
> > 
> > Has anyone had the same performance problem?
> > 
> > Is it really a performance problem?
> > 
> > I would like some comments about this; I can provide more info
> > about the configuration of TSM and the AIX servers.
> > 
> > Regards
> > 
> > Ignacio
> > 
> 
