-----Original Message-----
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 13 November 2001 1:55
To: [EMAIL PROTECTED]
Subject: Re: Server Database Performance


Wanda and Jeff are right - you have to look at all those things, and
especially at the %tm_act problem.
But I will also bet on what Paul pointed out - a single FE card is simply
not enough. If you are unable to put in Gigabit Ethernet, get at least one
or three more FE cards and make an EtherChannel, or subdivide the LAN
somehow.
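A quick way to confirm the FE card really is the bottleneck before buying more hardware (the adapter name ent0 is just an example for your box):

```shell
# Show the negotiated media speed and duplex of the adapter;
# a 100 Mbit link tops out around 12 MB/s of backup traffic
entstat -d ent0 | grep -i "media speed"

# Detailed transmit/receive statistics for the same adapter --
# watch the byte counters while a backup is running
entstat -d ent0
```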

Back to %tm_act and disks:
You mentioned 88 x 1 GB FILES, right? And those files are on the same
filesystem --> the same LUN? So TSM is issuing up to 88 I/Os in parallel,
just to have them queued later at the SCSI/FC adapter and/or LUN and/or
spindles in the EMC?!?
I would dedicate at least 4 different LUNs, ensuring they are on different
physical disks inside the EMC (I know how to do this only on an IBM ESS),
would create a few (less than 10, or even less than 5) logical volumes on
each LUN, and would use the raw devices instead of paying the filesystem
and journaling overhead. For the latter you have to take care of the
MIRRORWRITE DB and MIRRORWRITE LOG server options.
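A rough sketch of that layout on AIX (the volume group, LV names, hdisk placement and sizes are made-up examples - adapt them to your own configuration):

```shell
# One raw logical volume per future DB volume, each pinned to a
# specific hdisk (tsmvg, hdisk4/hdisk5 and 16 PPs are assumptions)
mklv -y tsmdb01 tsmvg 16 hdisk4    # raw device appears as /dev/rtsmdb01
mklv -y tsmdb02 tsmvg 16 hdisk5

# Then, from a TSM administrative client, bring them into the DB:
#   define dbvolume /dev/rtsmdb01
#   extend db 2048

# And in dsmserv.opt, choose how mirror writes are handled:
#   MIRRORWRITE DB SEQUENTIAL
#   MIRRORWRITE LOG SEQUENTIAL
```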
I am not a performance guru and cannot say whether 32 is a good number of
DB volumes (as Jeff pointed out) or not. But you can also check whether
the AIX hdisk device(s) containing the DB volumes are the only ones
overloaded. What about the staging-area disk pool(s)? Is it made of file
volumes (I mean devclass=DISK, volname=/TSM_DISK_FS/volXXX) or raw volumes
(/dev/rhdiskXYZ)? What is the usage pattern there?
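To see which hdisks are actually hot, something along these lines (interval, count and the tsmvg/tsmdb01 names are arbitrary examples):

```shell
# Per-disk utilization: look for hdisks pinned at %tm_act 100
# while others sit idle
iostat -d 10 6

# Map TSM DB volumes back to logical volumes and physical disks
lsvg -l tsmvg       # logical volumes in the volume group
lslv -l tsmdb01     # which hdisk(s) hold a given LV

# Finer-grained seek statistics per LV/PV for one minute
filemon -o /tmp/filemon.out -O lv,pv; sleep 60; trcstop
```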

The N(!)etwork issue:
Can you remember what it was 6 months ago - was it already 500 GB/night,
or just 300 GB which slowly grew? Many of my customers are seeing the
problem that users simply do not delete useless data. They keep 3-year-old
mail with the attachments; the message with the 20-30 MB presentation they
just got is forwarded to the whole division (but the forwarding user keeps
both the received copy and the one in the "Sent" folder); another 5 MB
Word file is saved under a new name after a minor update; etc.
So at that time you were able to fit 300 GB in an 8-10 hour window, and it
is normal to be unable to fit the current 50-60 % more into the same
backup window.
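The arithmetic behind that: a 100 Mbit link carries at most 12.5 MB/s one way, so even at wire speed, with zero protocol overhead, 500 GB no longer fits the old window (a quick sanity check, using decimal GB):

```python
# Best-case throughput of the single FE card: 100 Mbit/s = 12.5 MB/s
LINK_MBYTES_PER_SEC = 100 / 8

def hours_to_move(gb, mb_per_sec=LINK_MBYTES_PER_SEC):
    """Minimum transfer time over the link, ignoring all overhead."""
    return gb * 1000 / mb_per_sec / 3600

print(round(hours_to_move(300), 1))  # the old 300 GB night: ~6.7 h
print(round(hours_to_move(500), 1))  # today's 500 GB night: ~11.1 h
```

So 300 GB barely fit an 8-10 hour window once real-world overhead is added, while 500 GB needs more than 11 hours at theoretical wire speed - the single FE card alone explains a blown window.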

Just some thoughts.


Zlatko Krastev
IT Consultant






Mubashir Farooqi <[EMAIL PROTECTED]> on 07.11.2001 21:08:44
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To:     [EMAIL PROTECTED]
cc:

Subject:        Server Database Performance

I have a TSM/AIX server v4.2.1.6 running under AIX 4.3.3 on an H70. TSM
is the only application running on this box. I have about 300 Unix and NT
clients backing up close to 500 GB of incremental data every day to this
server. The TSM server database size is 87.5 GB and 75% utilized. The TSM
server log size is 5 GB. Clients dump the data into a staging area which
is 450 GB in size. I have 6 3590E drives in a 3494 library connected to
this TSM server. I have one 100 Mbit full-duplex network connection. All
the disks are EMC disks.

For the past few months we have been seeing the performance of the TSM
server go from bad to worse. Lately the backups have come to a crawling
halt. For example, the NT cluster servers which used to take 6-8 hours to
complete their backups now take 20-24 hours or more. All diagnostic and
performance data points to a problem with the database. CPU and memory
utilization never exceeds 40-50%. IOSTAT constantly shows %tm_act at 100
with very little Kb_read/wrtn. Filemon data for the physical volume of
the database shows seeks 10 times higher than the seeks for other
volumes, and init as 0. This TSM server was set up about two years ago.
We have never unloaded/reloaded the database or run any form of
compaction.

The questions I have are:
 - What can I do to improve the performance?
 - How long will it take to perform a 65 GB database unload/reload?
 - Currently the database consists of 88 files, each 1 GB in size. If I
   add another set of 8 files of 10 GB in size and use the delete dbvolume
   command, TSM will move the contents to the new set of volumes. Will
   this process do any compaction and improve the database performance?
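For reference, the unload/reload runs with the server halted and looks roughly like this (the device class, volume counts and names below are purely illustrative - check the Administrator's Reference for your server level):

```shell
# Dump the database to sequential media using an existing device class
dsmserv unloaddb devclass=3590class scratch=yes

# Re-initialize the log and DB volumes; here 1 log + 2 DB volumes,
# only for illustration -- you would list all of yours
dsmserv loadformat 1 /dev/rtsmlog01 2 /dev/rtsmdb01 /dev/rtsmdb02

# Reload from the volumes written by unloaddb
dsmserv loaddb devclass=3590class volumenames=VOL001,VOL002
```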

Thanks in advance....

Mubashir Farooqi
World Bank, HQ
