Yes, it is documented in the UNIX backup-archive client manual, and it applies to all the UNIX clients, including Solaris.
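
For example (the file-system name here is just an illustration):

   dsmc incremental -incrbydate /export/home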

Paul D. Seay, Jr.
Technical Specialist
Naptheon, Inc.
757-688-8180


-----Original Message-----
From: Bern Ruelas [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 5:58 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Hi Don,

I looked for "incrbydate" in the installation guide and in the reference
manual but haven't found it. Can I use this option with Solaris clients?

-Bern Ruelas
Cadence Design Systems - Storage
Sr. Systems Engineer

-----Original Message-----
From: Don France (TSMnews) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Reading through this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons:

1. The "standard" progressive-incremental requires a lot of work comparing
the attributes of all files in the affected file systems, especially for a
LARGE number of files/directories (whereas archive has minimal database
overhead -- it just moves data).

2. Writes to disk are NOT as fast as tape IF the data can be delivered to
the tape device at "streaming" speed; this is especially true if the disk
pool uses plain disks or RAID-5 striping (with parity)... RAID-0 might
compete if multiple paths & controllers are configured.

The big advantage of disk pools is more concurrent backup/archive
operations; disk migration can then offload the data to tape at streaming
speed.
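
A quick way to see the difference on a given file system (the path and
option values are just examples):

   time dsmc archive "/data/*" -subdir=yes
   time dsmc incremental /data

The gap between the two elapsed times is mostly the attribute-comparison
overhead, assuming both operations land in the same storage pool.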

So, first, debug the fundamentals using tar and archive commands (to
eliminate the database overhead of comparing file-system attributes to
identify "changed" files/objects) -- see the sketch below. Once you are
satisfied with the throughput for archive, allow 20-50% overhead for the
daily incremental. If your best "incremental" result is still not
satisfactory (but archive is okay), consider the other options discussed in
the performance-tuning papers -- such as reducing the number of files per
file system, using incrbydate during the week, or increasing horsepower on
the client machine and/or TSM server (depending on where the incremental
bottlenecks are).
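
To take TSM out of the picture entirely, something like this gives the raw
read rate of a file system (the path is again just an example):

   time tar cf - /data | dd of=/dev/null bs=262144

Divide the amount of data by the elapsed time; if even tar cannot read the
file system quickly, no amount of TSM tuning will help.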

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they will be in the members-only area (I was just there,
searching for other sessions).


----- Original Message -----
From: "Ignacio Vidal" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).
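
For reference, on a UNIX client this is set in dsm.sys, something like (the
stanza name is just an example):

   SErvername tsmsrv
      TCPWindowsize 1280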

I have the same question (why is the client idling?).
We have users working on that server from 9:00 until 22:00 (every day...);
backup jobs start at about 00:15/01:15.

I've monitored the nodes' disk I/O and network transfers at different times
of the day. CPU load, memory usage and paging are all OK, for example:
- CPU load (usr) averages 5~7 throughout the day
- memory usage (we have 7GB RAM) is not a problem (about 30~40% for
computational pages, the same for non-computational)
- paging: peak usage is maybe 10~12% (around 0.5% for most of the day, with
peaks during users' working hours)
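
These values can be watched with the standard AIX tools, e.g.:

   vmstat 5    # CPU usr/sys/idle/wait plus paging activity
   iostat 5    # per-disk transfer rates
   svmon -P    # per-process real-memory and paging-space use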

Looking at your results (500GB in 3:10hs) and trying to compare: we're
backing up 250GB starting at 00:15 and ending at 07:00... 250GB in 6:45hs
(it's not good).

Yesterday I also set resourceutilization = 10, and performance was the same
(actually a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best out
of the FastT500); we get only 15 to 25 MB/sec from the storage...

Then: not all TSM nodes in our installation have the same file profile. To
explain this a bit more: we have nodes mixing a lot of files (>250000) with
an average size of 40KB each and a few files (<1000) with an average size
of 50MB (it's Oracle Financials: the database server keeps the datafiles
plus the files belonging to the application and the database engine).

We have 4 nodes like the one just described, with a total of 35~40GB each
(average, and growing...)
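
Doing the numbers (taking those counts as roughly right): >250000 files x
40KB is only about 10GB of data but over 250000 entries for each
incremental to examine, while a few hundred 50MB files cover the remaining
25~30GB with almost no per-file overhead. The small files dominate the
attribute-comparison time; the large files dominate the data movement.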

Well, that was a brief description.
I'm open to new ideas.
Thanks

Ignacio


> -----Original Message-----
> From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 17, 2002 5:29 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Tuning TSM
>
>
> Just a point:
> The TCPWindowsize parameter is measured in kilobytes, not bytes, and
> according to the Administrator's Reference it must be between 0 and 2048.
> If the value is out of range, the client complains with ANS1036S (invalid
> value); on the server, an out-of-range value means 0, i.e. the OS default.
> However, these are side remarks. The main question is why the client is
> idling.
> Have you monitored the node during the to-disk and to-tape operations? Is
> migration starting during the backup? Are you using DIRMC?
> You mentioned client compression - what is the processor usage (user)?
> What is the disk load - is the processor I/O wait high? Is the paging
> space being used - check with svmon -P.
> You should get much better results. For example, we recently achieved
> 500 GB in 3h10m - fairly good. It was a similar setup to your config - AIX
> node & server, client compression, disk pool, Gb Ethernet. The Ethernet
> was driven at 10-25 MB/s depending on the compression achieved. The
> bottleneck was the EMC Symmetrix the node was reading from, but another
> company was dealing with it and we were unable to get more than
> 60-70 MB/s read. Resourceutilization was set to 10.
>
> Zlatko Krastev
> IT Consultant
>
>
>
>
> Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> Sent by:        "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To:     [EMAIL PROTECTED]
> cc:
>
> Subject:        Re: Tuning TSM
>
> Zlatko:
> Here are the answers:
>
> > Have you tested the performance of the FAStT 500 disks? That was
> > mid-class disk storage, originally designed for the PC server market
> > and later modified to be compatible with rs6k/pSeries.
>
> This is completely true: IBM installed FastT500 instead of SSA disks
> (they said they were faster). We have re-cabled these devices and got a
> better I/O balance across the controllers.
>
> Try some simple tests on the TSM server when it is quiet:
> - 'date; dd if=/dev/zero of=/fastt_fs/test_file bs=262144 count=16384;
>   date' to test write speed. Use a large file to simulate TSM heavy load.
> - 'date; dd if=/fastt_fs/test_file of=/dev/null' to check read speed
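> (For an apples-to-apples read test, a matching block size is probably
> fairer, e.g. 'date; dd if=/fastt_fs/test_file of=/dev/null bs=262144;
> date' - dd's default 512-byte reads can understate streaming throughput.)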
>
> In a test like the one you proposed, the FastT500 delivers a performance
> of about 60MB/sec.
>
> Also check your Gb Ethernet - are you using jumbo frames, have you
> enabled 'no -o rfc1323=1' in AIX, what are the send & receive buffer
> sizes, and what is the TCPWindowsize of the TSM server & client? Have you
> measured the LAN & disk load during backup?
>
> Yes, we're using these parameters:
> <from "no">
>                 tcp_sendspace = 1048576
>                 tcp_recvspace = 1048576
>                 udp_sendspace = 64000
>                 udp_recvspace = 64000
>                 rfc1323 = 1
>
> <from TSM server>
> TCPWindowsize         1310720
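>
> (The "no" values above can be set at runtime, e.g.
> 'no -o tcp_sendspace=1048576', and on AIX 4.3 are typically added to
> /etc/rc.net to survive a reboot. Note that 1310720 looks like 1280
> expressed in bytes; as Zlatko points out above, TCPWindowsize is
> specified in KB with a maximum of 2048, so an out-of-range value falls
> back to the OS default.)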
>
> What I've observed is that during a client backup session, TSM seems to
> sit idle for long stretches (and the server machine has no memory or I/O
> constraints).
>
> I can send more data.
> Thank you all.
>
> Ignacio
>
> >
> > Zlatko Krastev
> > IT Consultant
> >
> >
> >
> > Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> > Sent by:        "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> > To:     [EMAIL PROTECTED]
> > cc:
> >
> > Subject:        Tuning TSM
> >
> > Hi:
> > I'm managing a pretty small TSM installation with 4 RS/6K machines
> > (2 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
> > The TSM software consists of the server (running on a 6M1 with 7GB RAM)
> > and clients running on the same machine and on the others.
> >
> > I've got the following situation:
> > - the total amount of data backed up is about 200GB,
> > - the 4 servers are connected using gigabit ethernet links (and have
> > 6GB RAM and 7GB RAM for each 6H1 and 6M1 model respectively)
> > - TSM uses a 240GB storage pool on FastT500 disks (connected via FC)
> > - TSM uses a 3581 library (LTO) with 1 drive,
> >
> > The facts are (for the same set of data):
> > When I do an archive operation with TSM, the elapsed time is around
> > 5 hours (TSM writes straight to tape).
> > When I do an incremental backup operation, TSM takes about 6:30hs
> > (TSM writes to the disk storage pool).
> >
> > I'm looking for a rational approach to this "problem": isn't writing
> > to the storage pool (disk) faster than writing to tape?
> >
> > Has anyone had the same performance problem?
> >
> > Is it really a performance problem?
> >
> > I would appreciate some comments on this; I can provide more info about
> > the configuration of TSM and the AIX servers.
> >
> > Regards
> >
> > Ignacio
> >
>
