Re: Tuning TSM

2002-05-18 Thread Seay, Paul

Yes, it is in the UNIX backup-archive client manual.
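For reference, a hedged sketch of how the option is typically invoked from the
UNIX command-line client (the file-system path is illustrative, not from this
thread):

   # back up only files whose modification date is newer than the last
   # backup, skipping the full attribute comparison against the server
   dsmc incremental -incrbydate /export/home

The usual caveat from the manuals is to still run a regular full incremental
periodically, since incrbydate does not expire deleted files or pick up
attribute-only changes.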

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Bern Ruelas [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 5:58 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Hi Don,

I looked for the incrbydate option in the installation guide and reference
manual but haven't found it. Can I use it with Solaris clients?

-Bern Ruelas
Cadence Design Systems - Storage
Sr. Systems Engineer

-Original Message-
From: Don France (TSMnews) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Reading thru this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons: 1. The standard
progressive-incremental requires a lot of work comparing the attributes of
all files in the affected file systems, especially for a LARGE number of
files/directories (whereas archive has minimal database overhead -- it just
moves data). 2. Writes to disk are NOT as fast as tape IF the data can be
delivered to the tape device at streaming speed;  this is especially true
if using no-RAID or RAID-5 for disk pool striping (with parity)... RAID-0
might compete if multiple paths & controllers are configured.  The big
advantage to disk pools is more concurrent backup/archive operations; disk
migration can then stream the offloaded data to tape.

So, firstly, debug the fundamentals using tar and archive commands (to
eliminate the db overhead of comparing file system attributes to identify
changed files/objects);  once you are satisfied with the thruput for archive,
allow 20-50% overhead for daily incremental. If your best incremental
experience is not satisfactory (but archive is okay), consider other options
discussed in the performance-tuning papers -- such as reducing the number of
files per file system, using incrbydate during the week, or increasing
horsepower on the client machine and/or TSM server (depending on where the
incr. bottlenecks are).
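For what it's worth, a minimal sketch of the kind of baseline test being
described here (paths are illustrative only, not from this thread):

   # 1. raw file-system read rate, no TSM involved at all
   time tar -cf /dev/null /oracle/data

   # 2. archive the same tree: data movement with minimal database overhead
   dsmc archive "/oracle/data/" -subdir=yes

   # 3. normal progressive incremental of the same file system; the extra
   #    elapsed time over (2) is largely the attribute-compare overhead
   dsmc incremental /oracle/data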

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they are in the members-only area  (I was just there,
searching for other sessions).


- Original Message -
From: Ignacio Vidal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every day...);
backup jobs start at about 00:15/01:15

I've monitored the nodes' disk I/O and network transfers at different
moments of the day. CPU load/memory usage/paging values are all OK, for
example:
- cpu load (usr) averages 5~7 throughout the day
- memory usage (we have 7GB RAM) is not a problem (about 30~40% for
computational pages, the same for non-computational)
- paging: max use may be 10~12% (0.5% for most of the day, with peaks during
users' working hours)

Comparing with your results (500GB in 3h10m): we're backing up 250GB starting
at 00:15 and ending at 07:00... 250GB in 6h45m (it's not good)

Yesterday I also set resourceutilization = 10, and performance was the same
(really a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best from
the FastT500); we're getting 15 to 25 MB/sec from the storage...

Then: not all TSM nodes in our installation have the same file
configuration. To explain a bit more: we have nodes mixing a lot of files
(25) with an average size of 40KB each and a few files (1000) with an
average size of 50MB (it's Oracle Financials: the database server keeps
datafiles plus the files belonging to the application and the database
engine).

We have 4 nodes like the one just described, each with a total of 35~40GB
(average and growing...)

Well, that was a brief description.
I'm open to new ideas.
Thanks

Ignacio


 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 17, 2002 5:29
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM


 Just a point
 TCPWindowsize parameter is measured in kilobytes, not bytes. And
 according to the Administrator's Reference it must be between 0 and 2048.
 If not in range, the client complains with ANS1036S (invalid value).
 On the server, values out of range mean 0, i.e. the OS default.
 However, these are side remarks. The main question is why the
 client is idling.
 Have you monitored the node during to-disk and to-tape operations? Is
 migration starting during backup? Are you using DIRMC?
 You wrote client compression - what is the processor usage
 (user)? What is
 the disk load - is the processor I/O wait high? Is the paging
 space used -
 check with svmon -P.
 You should get much better results. For example recently
 we've achieved
 500 GB in 3h10m - fairly good. It was similar to your config

Re: Tuning TSM

2002-05-17 Thread Zlatko Krastev

Just a point:
TCPWindowsize parameter is measured in kilobytes, not bytes. And according
to the Administrator's Reference it must be between 0 and 2048. If not in
range, the client complains with ANS1036S (invalid value). On the server,
values out of range mean 0, i.e. the OS default.
However, these are side remarks. The main question is why the client is
idling. Have you monitored the node during to-disk and to-tape operations?
Is migration starting during backup? Are you using DIRMC?
You wrote client compression - what is the processor usage (user)? What is
the disk load - is the processor I/O wait high? Is the paging space used -
check with svmon -P.
You should get much better results. For example, recently we've achieved
500 GB in 3h10m - fairly good. It was similar to your config - AIX
node & server, client compression, disk pool, GB ether. The Ether was driven
at 10-25 MB/s depending on achieved compression. The bottleneck was the EMC
Symmetrix the node was reading from, but another company was dealing with
it and we were unable to get more than 60-70 MB/s read.
Resourceutilization was set to 10.
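For reference, a minimal sketch of the standard AIX commands behind these
monitoring questions (intervals and the <pid> placeholder are illustrative):

   vmstat 5              # user/system/iowait CPU and paging activity
   iostat -d 5           # per-disk throughput and %busy during the backup
   ps -ef | grep dsmc    # find the running TSM client process id
   svmon -P <dsmc-pid>   # paging-space use of that process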

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: 

Subject:Re: Tuning TSM

Zlatko:
Here are the answers:

 Have you tested the performance of the FAStT 500 disks? Those were
 mid-class disk storage originally designed for the PC server market and
 later modified to be compatible with rs6k/pSeries.

This is completely true: IBM installed FastT500 instead of SSA disks
(they said they were faster).
We have re-cabled these devices and got a better I/O balance on the
controllers.

 Try some simple tests on the TSM server when quiet:
 - 'date; dd if=/dev/zero of=/fastt_fs/test_file bs=262144 
 count=16384; 
 date' to test write speed. Use large file to simulate TSM heavy load.
 - 'date; dd if=/fastt_fs/test_file of=/dev/null' to check read speed

In a test like the one you proposed, the FastT500 delivers about
60MB/sec
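(As a rough sanity check on that figure: the suggested command writes
262144 bytes x 16384 blocks = 4 GiB, so about 60 MB/sec corresponds to
roughly 70 seconds between the two date stamps.)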

 Also check your Gb Ether - are you using Jumbo frames, have you enabled
 in AIX 'no -o rfc1323=1', what are the send & receive buffer sizes, what
 is the TCPWindowsize of the TSM server & client. Have you measured
 LAN & disk load during backup?

Yes, we're using these parameters:
from no
tcp_sendspace = 1048576
tcp_recvspace = 1048576
udp_sendspace = 64000
udp_recvspace = 64000
rfc1323 = 1
 
from TSM server
TCPWindowsize 1310720
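Given Zlatko's point above that TCPWindowsize is specified in kilobytes with
a documented maximum of 2048, an in-range setting would look like the sketch
below (illustrative value only, not a recommendation from this thread):

   * dsm.sys (client) or dsmserv.opt (server) -- value is in KB, range 0-2048
   TCPWindowsize 2048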

What I've observed is that during a client backup session, TSM seems to
be idle for a long time (and the server machine has no memory or I/O
constraints)

I can send other data.
Thank you all.

Ignacio







 
 Zlatko Krastev
 IT Consultant
 
 
 
 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc: 
 
 Subject:Tuning TSM
 
 Hi:
 I'm managing a pretty small TSM installation with 4 RS/6K machines (2
 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
 TSM software consists of the server (running in a 6M1 - 7Gb RAM), and
 the clients running in the same machine and on the others.
 
 I've got the following situation:
 - the total of data backed up is about 200Gb's,
 - 4 servers are connected using gigabit ethernet links (and 
 have 6Gb RAM
 and 7Gb RAM each model 6H1 and 6M1 respectively)
 - TSM uses a storage pool of 240Gb on FastT500 disks (those are
 connected by FC channels)
 - TSM uses a 3581 library (LTO) with 1 drive,
 
 The fact is (for the same set of information):
 When I do an archive backup operation with TSM, the elapsed time is
 around 5 hours (TSM writes directly to tape).
 When I do an incremental backup operation, TSM takes about 6h30m
 (TSM writes to the storage pool).

 I'm looking for a rational approach to this problem: isn't writing to
 the storage pool (disk) faster than writing to tape?
 
 Anyone had the same performance problem?
 
 Is it really a performance problem?
 
 I would like some commentaries about this, I can provide some 
 info about
 the configuration of TSM and the AIX servers.
 
 Regards
 
 Ignacio
 



Re: Tuning TSM

2002-05-17 Thread Ignacio Vidal

Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every
day...); backup jobs start at about 00:15/01:15

I've monitored the nodes' disk I/O and network transfers at different
moments of the day.
CPU load/memory usage/paging values are all OK, for example:
- cpu load (usr) averages 5~7 throughout the day
- memory usage (we have 7GB RAM) is not a problem (about 30~40% for
computational pages, the same for non-computational)
- paging: max use may be 10~12% (0.5% for most of the day, with peaks
during users' working hours)

Comparing with your results (500GB in 3h10m): we're backing up 250GB
starting at 00:15 and ending at 07:00... 250GB in 6h45m (it's not good)

Yesterday I also set resourceutilization = 10, and performance was the
same (really a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best
from the FastT500); we're getting 15 to 25 MB/sec from the storage...

Then: not all TSM nodes in our installation have the same file
configuration.
To explain a bit more: we have nodes mixing a lot of files
(25) with an average size of 40KB each and a few files (1000) with
an average size of 50MB (it's Oracle Financials: the database server
keeps datafiles plus the files belonging to the application and the
database engine).

We have 4 nodes like the one just described, each with a total of 35~40GB
(average and growing...)

Well, that was a brief description.
I'm open to new ideas.
Thanks

Ignacio


 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 17, 2002 5:29
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM
 
 
 Just a point
 TCPWindowsize parameter is measured in kilobytes, not bytes. And
 according to the Administrator's Reference it must be between 0 and 2048.
 If not in range, the client complains with ANS1036S (invalid value).
 On the server, values out of range mean 0, i.e. the OS default.
 However, these are side remarks. The main question is why the
 client is idling.
 Have you monitored the node during to-disk and to-tape operations? Is
 migration starting during backup? Are you using DIRMC?
 You wrote client compression - what is the processor usage 
 (user)? What is 
 the disk load - is the processor I/O wait high? Is the paging 
 space used - 
 check with svmon -P.
 You should get much better results. For example recently 
 we've achieved 
 500 GB in 3h10m - fairly good. It was similar to your config - AIX
 node & server, client compression, disk pool, GB ether. Ether
 was driven 
 10-25 MB/s depending on achieved compression. The bottleneck was EMC 
 Symmetrix the node was reading from but another company was 
 dealing with 
 it and we were unable to get more than 60-70 MB/s read. 
 Resourceutilization was set to 10.
 
 Zlatko Krastev
 IT Consultant
 
 
 
 
 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc: 
 
 Subject:Re: Tuning TSM
 
 Zlatko:
 Here are the answers:
 
  Have you tested what is the performance of FAStT 500 disks? 
  Those were 
  mid-class disk storage originally designed for PC servers 
  market and later 
  modified to be compatible with rs6k/pSeries.
 
 This is completely true: IBM installed FastT500 instead of SSA disks
 (they said they were faster).
 We have re-cabled these devices and got a better I/O balance on the
 controllers.
 
  Try some simple tests on the TSM server when quiet:
  - 'date; dd if=/dev/zero of=/fastt_fs/test_file bs=262144 
  count=16384; 
  date' to test write speed. Use large file to simulate TSM 
 heavy load.
  - 'date; dd if=/fastt_fs/test_file of=/dev/null' to check read speed
 
 In a test like the one you proposed, the FastT500 delivers about
 60MB/sec
 
  Also check your Gb Ether - are you using Jumbo frames, have you enabled
  in AIX 'no -o rfc1323=1', what are the send & receive buffer sizes, what
  is the TCPWindowsize of the TSM server & client. Have you measured
  LAN & disk load during backup?
 
 Yes, we're using these parameters:
 from no
 tcp_sendspace = 1048576
 tcp_recvspace = 1048576
 udp_sendspace = 64000
 udp_recvspace = 64000
 rfc1323 = 1
  
 from TSM server
 TCPWindowsize 1310720
 
 What I've observed is that during a client backup session,
 TSM seems to be idle for a long time (and the server machine has no
 memory or I/O constraints)
 
 I can send other data.
 Thank you all.
 
 Ignacio
 
 
 
 
 
 
 
  
  Zlatko Krastev
  IT Consultant
  
  
  
  Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
  Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  cc: 
  
  Subject:Tuning TSM
  
  Hi:
  I'm managing

Re: Tuning TSM

2002-05-17 Thread Don France (TSMnews)

Reading thru this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons:
1. The standard progressive-incremental requires a lot of work comparing
the attributes of all files in the affected file systems, especially for a
LARGE number of files/directories (whereas archive has minimal database
overhead -- it just moves data).
2. Writes to disk are NOT as fast as tape IF the data can be delivered to
the tape device at streaming speed;  this is especially true if using
no-RAID or RAID-5 for disk pool striping (with parity)... RAID-0 might
compete if multiple paths & controllers are configured.  The big advantage
to disk pools is more concurrent backup/archive operations; disk migration
can then stream the offloaded data to tape.

So, firstly, debug the fundamentals using tar and archive commands (to
eliminate the db overhead of comparing file system attributes to identify
changed files/objects);  once you are satisfied with the thruput for
archive, allow 20-50% overhead for daily incremental. If your best
incremental experience is not satisfactory (but archive is okay), consider
other options discussed in the performance-tuning papers -- such as
reducing the number of files per file system, using incrbydate during the
week, or increasing horsepower on the client machine and/or TSM server
(depending on where the incr. bottlenecks are).

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they are in the members-only area  (I was just there,
searching for other sessions).


- Original Message -
From: Ignacio Vidal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every
day...), backup jobs start about 00:15/01:15

I've monitored the nodes in disk i/o operations and network transfers,
in different moments of the day.
About cpu load/memory usage/pagination: the values are all OK, for
example:
- cpu load (usr) has an average of 5~7 during all day
- memory usage (have 7GB RAM) is not a problem (about 30~40% for
computational the same for noncomp)
- pagination: max use may be 10~12% (mostly of the day 0.5%, peaks
during user's work time)

Viewing your results (500GB in 3:10hs), and trying to compare: we're
backing up 250GB starting 00:15 and ending 07:00... 250GB in 6:45hs
(it's not good)

Yesterday I also set resourceutilization = 10, and performance was the
same (really a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best
from FastT500), we have from 15 to 25 MB/sec in the storage...

Then: not all TSM nodes in our installation have the same file
configuration.
To explain a bit more: we have nodes mixing a lot of files
(25) with an average size of 40KB each and a few files (1000) with
an average size of 50MB (it's Oracle Financials: the database server
keeps datafiles plus the files belonging to the application and the
database engine).

We have 4 nodes such as just described, with a total of 35~40GB for each
(average and growing...)

Well, here was a brief description.
I'm listening for new ideas.
Thanks

Ignacio


 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 17, 2002 5:29
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM


 Just a point
 TCPWindowsize parameter is measured in kilobytes not bytes.
 And according
 to Administrator's reference it must be between 0 and 2048. If not in
 range, the client complains with ANS1036S (invalid value). On the server,
 values out of range mean 0, i.e. the OS default.
 However, these are side remarks. The main question is why the
 client is idling.
 Have you monitored the node during to-disk and to-tape operations? Is
 migration starting during backup? Are you using DIRMC?
 You wrote client compression - what is the processor usage
 (user)? What is
 the disk load - is the processor I/O wait high? Is the paging
 space used -
 check with svmon -P.
 You should get much better results. For example recently
 we've achieved
 500 GB in 3h10m - fairly good. It was similar to your config - AIX
 node & server, client compression, disk pool, GB ether. Ether
 was driven
 10-25 MB/s depending on achieved compression. The bottleneck was EMC
 Symmetrix the node was reading from but another company was
 dealing with
 it and we were unable to get more than 60-70 MB/s read.
 Resourceutilization was set to 10.

 Zlatko Krastev
 IT Consultant




 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc:

 Subject:Re: Tuning TSM

 Zlatko:
 Here are the answers:

  Have you tested what is the performance of FAStT 500 disks?
  Those were
  mid-class disk

Re: Tuning TSM

2002-05-17 Thread Bern Ruelas

Hi Don,

I looked for the incrbydate option in the installation guide and reference
manual but haven't found it. Can I use it with Solaris clients?

-Bern Ruelas
Cadence Design Systems - Storage
Sr. Systems Engineer

-Original Message-
From: Don France (TSMnews) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Reading thru this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons:
1. The standard progressive-incremental requires a lot of work in comparing
the attributes of all files in the affected file systems, especially for a
LARGE number of files/directories (whereas archive has minimal database
overhead -- it just moves data).
2. Writes to disk are NOT as fast as tape IF the data can be delivered to
the tape device at streaming speed;  this is especially true if using
no-RAID or RAID-5 for disk pool striping (with parity)... RAID-0 might
compete if multiple paths & controllers are configured.  The big advantage
to disk pools is more concurrent backup/archive operations, then
disk-migration can stream offload the data to tape.

So, firstly, debug fundamentals using tar and archive commands (to eliminate
db overhead comparing file system attributes to identify changed
files/objects);  once you are satisfied with the thruput for archive, allow
20-50% overhead for daily incremental. If your best incremental experience
is not satisfactory, (but archive is okay) consider other options discussed
in the performance-tuning papers -- such as, reducing the number of files
per file system, use incrbydate during the week, increase horsepower on the
client machine and/or TSM server (depending on where the incr. bottlenecks
are).

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they are in the members-only area  (I was just there,
searching for other sessions).


- Original Message -
From: Ignacio Vidal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every
day...), backup jobs start about 00:15/01:15

I've monitored the nodes in disk i/o operations and network transfers,
in different moments of the day.
About cpu load/memory usage/pagination: the values are all OK, for
example:
- cpu load (usr) has an average of 5~7 during all day
- memory usage (have 7GB RAM) is not a problem (about 30~40% for
computational the same for noncomp)
- pagination: max use may be 10~12% (mostly of the day 0.5%, peaks
during user's work time)

Viewing your results (500GB in 3:10hs), and trying to compare: we're
backing up 250GB starting 00:15 and ending 07:00... 250GB in 6:45hs
(it's not good)

Yesterday I also set resourceutilization = 10, and performance was the
same (really a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best
from FastT500), we have from 15 to 25 MB/sec in the storage...

Then: not all TSM nodes in our installation have the same file
configuration.
To explain a bit more: we have nodes mixing a lot of files
(25) with an average size of 40KB each and a few files (1000) with
an average size of 50MB (it's Oracle Financials: the database server
keeps datafiles plus the files belonging to the application and the
database engine).

We have 4 nodes such as just described, with a total of 35~40GB for each
(average and growing...)

Well, here was a brief description.
I'm listening for new ideas.
Thanks

Ignacio


 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 17, 2002 5:29
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM


 Just a point
 TCPWindowsize parameter is measured in kilobytes not bytes.
 And according
 to Administrator's reference it must be between 0 and 2048. If not in
 range, the client complains with ANS1036S (invalid value). On the server,
 values out of range mean 0, i.e. the OS default.
 However, these are side remarks. The main question is why the
 client is idling.
 Have you monitored the node during to-disk and to-tape operations? Is
 migration starting during backup? Are you using DIRMC?
 You wrote client compression - what is the processor usage
 (user)? What is
 the disk load - is the processor I/O wait high? Is the paging
 space used -
 check with svmon -P.
 You should get much better results. For example recently
 we've achieved
 500 GB in 3h10m - fairly good. It was similar to your config - AIX
 node & server, client compression, disk pool, GB ether. The Ether was driven
 was driven
 10-25 MB/s depending on achieved compression. The bottleneck was EMC
 Symmetrix the node was reading from but another company was
 dealing with
 it and we were unable to get more than 60-70 MB/s

Re: Tuning TSM

2002-05-16 Thread Sandra Ghaoui

Hello Ignacio,

I had performance problems with TSM too (I still have
them in another installation too :p) ... After fixing the
network problems (forcing 100/full duplex), performance
was still poor.
I don't know what version you are using, but in my case
I had to change the TCPWINDOWSIZE in the client
options file to 63 (default is 32), and now I'm
backing up 11 GB in 1 hour, which is pretty good ...

Hope it helps
Sandra


--- Ignacio Vidal [EMAIL PROTECTED] wrote:
 Lindsey:
 I've been going around and around between the networking configuration,
 disk I/O performance, and how the storage pool is configured on disk
 (whether it is RAID 5, RAID 1 or RAID 10...).

 Those servers are connected through gigabit ethernet channels, and they
 are delivering from 50 to 75 MBytes/sec. I believe that throughput is
 very low, but Tivoli's people (here) insisted on other factors (disk I/O,
 RAID configuration, etc.)

 I'll try your recommendation; I don't have all the necessary values from
 our switches right now.
 Thanks
 Thanks

 Ignacio

  -Original Message-
  From: lt [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, May 15, 2002 19:42
  To: [EMAIL PROTECTED]
  Subject: Re: Tuning TSM
 
 
  Hi,
   Be sure to set ALL parameters for the nic cards
 correctly
  to match the
  ports on the switches.
   Ensure that ALL 'no options' are set correctly
 for your
  environment.
 
  Example:
   AIX 433_ML_08:
100MB ethernet nic cards have the xmit/recieve
 buffer pools
  maxed out
100MB ethernet nic cards have the speed/duplex
 set to match
  switch ports
'no options' are set via an /etc/rc.{filename} 
 called via
  /etc/inittab via:
   rctunenet:2:wait:/etc/rc.tunenet > /dev/console 2>&1 #Tune Network Parms
 example:
  /etc/rc.tunenet
   if [ -f /usr/sbin/no ]
   then
   thewall=$(/usr/sbin/no -o thewall | awk '{
 print $3 }')
   if [ $thewall -lt 4096 ]
   then
   /usr/sbin/no -d thewall
   else
   print thewall is set to $thewall - left as is
   fi
   /usr/sbin/no -d thewall
   /usr/sbin/no -d sb_max
   /usr/sbin/no -o tcp_sendspace=$thewall
   /usr/sbin/no -o tcp_recvspace=$thewall
   /usr/sbin/no -o udp_sendspace=64000
   /usr/sbin/no -o udp_recvspace=64000
   /usr/sbin/no -o net_malloc_police=32768
   /usr/sbin/no -o tcp_mssdflt=1452
   /usr/sbin/no -o ipqmaxlen=150
   /usr/sbin/no -o rfc1323=1
   fi
   print Network parameters tuned...
   By allowing AIX_ML_08 to figure out the best
 settings for
  thewall/sb_max, no -d thewall/sb_max, I do not
 have to go
  thru the issue
  of calculating it anymore!!!
   Having gone thru the above scenario, my 100MB
 ethernet cloud
  performs
  at, a minimum, 10MB/sec. A lot of the network
 traffic is logged at:
  11MB/sec.
   We are now implementing a GIG ethernet network
 and I am
  looking forward
  to working with it as well.
 
  HTH.
 
 
  Mr. Lindsey Thomson
  BLDG:042/2F-065 IMA: 0422F065
  11400 Burnet Rd.,  Austin, TX 78758
  off) 512) 823 6522 / (TL) 793 6522
 
  I never waste memory on things that can easily be
 stored
   and retrieved from elsewhere.- Albert
 Einstein
  Blessed is the nation whose God is the Lord -
 Psalm 33:12
  Time is a great teacher, but unfortunately it
 kills all
   its pupils- Hector Berloiz
 
  On Wed, 15 May 2002, Ignacio Vidal wrote:
 
   Hi:
   I'm managing a pretty small TSM installation
 with 4 RS/6K
  machines (2
   6M1 and 2 6H1) running AIX 4.3.3 (ML9).
   TSM software consists of the server (running in
 a 6M1 - 7Gb
  RAM), and
   the clients running in the same machine and on
 the others.
  
   I've got the following situation:
   - the total of data backed up is about 200Gb's,
   - 4 servers are connected using gigabit ethernet
 links (and
  have 6Gb RAM
   and 7Gb RAM each model 6H1 and 6M1 respectively)
   - TSM uses a storage pool of 240Gb on FastT500
 disks (those are
   connected by FC channels)
   - TSM uses a 3581 library (LTO) with 1 drive,
  
   The fact is (for the same set of information):
   When I do an archive backup operation with TSM, the elapsed time is
   around 5 hours (TSM writes directly to tape).
   When I do an incremental backup operation, TSM takes about 6h30m
   (TSM writes to the storage pool).

   I'm looking for a rational approach to this problem: isn't writing to
   the storage pool (disk) faster than writing to tape?
  
   Anyone had the same performance problem?
  
   Is it really a performance problem?
  
   I would like some commentaries about this, I can
 provide
  some info about
   the configuration of TSM and the AIX servers.
  
   Regards
  
   Ignacio
  
 





Re: Tuning TSM

2002-05-16 Thread Cook, Dwight E

Run multiple concurrent client sessions...
Run with TSM client compression...
Go to disk first (to help facilitate the multiple concurrent sessions,
seeing how you only have one drive)...
Get another drive ! ! ! (it will really help when you have to run reclamation)
I'm partial to 3590's, but lots of folks are running LTO's.
For backups, look into resourceutilization.
For archives, just fire off multiple sessions; if backing up a large DB, run
25+ concurrently.
We archive a 3.1 TB SAP database in 20 hours running 25 concurrent sessions
with TSM client compression (compresses the info down to about 800 GB) and
back over 100 Mb/sec fast ethernet (@ 40 GB/hr); across GigE we run about 70
GB/hr, so we do the same work in just 12 hours (or so)...
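A minimal sketch of the client options this approach leans on (dsm.sys
server stanza on AIX; values are illustrative, not Dwight's actual settings),
plus how multiple archive sessions might be fired off in parallel (paths are
hypothetical):

   * dsm.sys -- compress at the client, allow several concurrent sessions
   COMPRESSION          YES
   RESOURCEUTILIZATION  10

   # archives: simply start several dsmc sessions in parallel from the shell
   dsmc archive "/oracle/data01/*" -subdir=yes &
   dsmc archive "/oracle/data02/*" -subdir=yes &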


just my 2 cents worth

Dwight



-Original Message-
From: Ignacio Vidal [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 4:50 PM
To: [EMAIL PROTECTED]
Subject: Tuning TSM


Hi:
I'm managing a pretty small TSM installation with 4 RS/6K machines (2
6M1 and 2 6H1) running AIX 4.3.3 (ML9).
TSM software consists of the server (running in a 6M1 - 7Gb RAM), and
the clients running in the same machine and on the others.

I've got the following situation:
- the total of data backed up is about 200Gb's,
- 4 servers are connected using gigabit ethernet links (and have 6Gb RAM
and 7Gb RAM each model 6H1 and 6M1 respectively)
- TSM uses a storage pool of 240Gb on FastT500 disks (those are
connected by FC channels)
- TSM uses a 3581 library (LTO) with 1 drive,

The fact is (for the same set of information):
When I do an archive backup operation with TSM, the elapsed time is around
5 hours (TSM writes directly to tape).
When I do an incremental backup operation, TSM takes about 6h30m
(TSM writes to the storage pool).

I'm looking for a rational approach to this problem: isn't writing to the
storage pool (disk) faster than writing to tape?

Anyone had the same performance problem?

Is it really a performance problem?

I would like some commentaries about this, I can provide some info about
the configuration of TSM and the AIX servers.

Regards

Ignacio



Re: Tuning TSM

2002-05-16 Thread Bill Boyer

Great for AIX, but does anyone have IP tuning parameters for Windows2000?

Bill Boyer
DSS, Inc.
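For reference (not an answer from this thread), the Windows 2000 equivalents
most often cited live in the registry rather than in 'no'; a hedged sketch,
to be verified against Microsoft's own documentation:

   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
     TcpWindowSize   REG_DWORD   e.g. 65535 (bytes)
     Tcp1323Opts     REG_DWORD   1 = window scaling, 3 = scaling + timestamps

A reboot is generally needed for these values to take effect.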

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
lt
Sent: Wednesday, May 15, 2002 6:42 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Hi,
 Be sure to set ALL parameters for the nic cards correctly to match the
ports on the switches.
 Ensure that ALL 'no options' are set correctly for your environment.

Example:
 AIX 433_ML_08:
  100MB ethernet nic cards have the xmit/recieve buffer pools maxed out
  100MB ethernet nic cards have the speed/duplex set to match switch ports
  'no options' are set via an /etc/rc.{filename}  called via
/etc/inittab via:
  rctunenet:2:wait:/etc/rc.tunenet > /dev/console 2>&1 #Tune Network Parms
   example:
/etc/rc.tunenet
 if [ -f /usr/sbin/no ]
 then
 thewall=$(/usr/sbin/no -o thewall | awk '{ print $3 }')
 if [ $thewall -lt 4096 ]
 then
 /usr/sbin/no -d thewall
 else
 print thewall is set to $thewall - left as is
 fi
 /usr/sbin/no -d thewall
 /usr/sbin/no -d sb_max
 /usr/sbin/no -o tcp_sendspace=$thewall
 /usr/sbin/no -o tcp_recvspace=$thewall
 /usr/sbin/no -o udp_sendspace=64000
 /usr/sbin/no -o udp_recvspace=64000
 /usr/sbin/no -o net_malloc_police=32768
 /usr/sbin/no -o tcp_mssdflt=1452
 /usr/sbin/no -o ipqmaxlen=150
 /usr/sbin/no -o rfc1323=1
 fi
 print Network parameters tuned...
 By allowing AIX_ML_08 to figure out the best settings for
thewall/sb_max, no -d thewall/sb_max, I do not have to go thru the issue
of calculating it anymore!!!
 Having gone thru the above scenario, my 100MB ethernet cloud performs
at, a minimum, 10MB/sec. A lot of the network traffic is logged at:
11MB/sec.
 We are now implementing a GIG ethernet network and I am looking forward
to working with it as well.

HTH.


Mr. Lindsey Thomson
BLDG:042/2F-065 IMA: 0422F065
11400 Burnet Rd.,  Austin, TX 78758
off) 512) 823 6522 / (TL) 793 6522

I never waste memory on things that can easily be stored
 and retrieved from elsewhere.- Albert Einstein
Blessed is the nation whose God is the Lord - Psalm 33:12
Time is a great teacher, but unfortunately it kills all
 its pupils- Hector Berloiz

On Wed, 15 May 2002, Ignacio Vidal wrote:

 Hi:
 I'm managing a pretty small TSM installation with 4 RS/6K machines (2
 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
 TSM software consists of the server (running in a 6M1 - 7Gb RAM), and
 the clients running in the same machine and on the others.

 I've got the following situation:
 - the total of data backed up is about 200Gb's,
 - 4 servers are connected using gigabit ethernet links (and have 6Gb RAM
 and 7Gb RAM each model 6H1 and 6M1 respectively)
 - TSM uses a storage pool of 240Gb on FastT500 disks (those are
 connected by FC channels)
 - TSM uses a 3581 library (LTO) with 1 drive,

 The fact is (for the same set of information):
 When I do an archive backup operation with TSM, the elapsed time is around
 5 hours (TSM writes directly to tape).
 When I do an incremental backup operation, TSM takes about 6h30m
 (TSM writes to the storage pool).

 I'm looking for a rational approach to this problem: isn't writing to the
 storage pool (disk) faster than writing to tape?

 Anyone had the same performance problem?

 Is it really a performance problem?

 I would like some commentaries about this, I can provide some info about
 the configuration of TSM and the AIX servers.

 Regards

 Ignacio




Re: Tuning TSM

2002-05-16 Thread Ignacio Vidal

Dwight:
LTO is performing well enough (for us), and we only have 1 drive.
I was reading all the hints posted to the list; many of them have been
implemented during the last 2 months.
We're using a disk storage pool, client compression, etc.
I realize that we can explore the resourceutilization parameter a bit
more...

Thanks

Ignacio


 -Original Message-
 From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, May 16, 2002 9:43
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM
 
 
 Run multiple concurrent client sessions...
 Run with TSM client compression...
 Go to disk first (to help facilitate the multiple concurrent 
 sessions seeing
 how you only have one drive)...
 Get another drive ! ! ! (will really help when you have to 
 run reclamation)
 I'm partial to 3590's but lots of folks are running LTO's
 for backups, look into resourceutilization
 for archives, just fire off multiple, if backing up a large DB run 25+
 concurrently
 we archive a 3.1 TB SAP data base in 20 hours running 25 
 concurrent sessions
 with TSM client compression (compresses the info down to 
 about 800 GB) and
 back over 100 Mb/sec fast ethernet (@ 40 GB/hr); across GigE we run about
 70 GB/hr, so we do the same work in just 12 hours (or so)...
 
 
 just my 2 cents worth
 
 Dwight
 
 
 
 -Original Message-
 From: Ignacio Vidal [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, May 15, 2002 4:50 PM
 To: [EMAIL PROTECTED]
 Subject: Tuning TSM
 
 
 Hi:
 I'm managing a pretty small TSM installation with 4 RS/6K machines (2
 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
 TSM software consists of the server (running in a 6M1 - 7Gb RAM), and
 the clients running in the same machine and on the others.
 
 I've got the following situation:
 - the total of data backed up is about 200Gb's,
 - 4 servers are connected using gigabit ethernet links (and 
 have 6Gb RAM
 and 7Gb RAM each model 6H1 and 6M1 respectively)
 - TSM uses a storage pool of 240Gb on FastT500 disks (those are
 connected by FC channels)
 - TSM uses a 3581 library (LTO) with 1 drive,
 
 The fact is (for the same set of information):
 When I do an archive backup operation with TSM, the elapsed time is
 around 5 hours (TSM writes directly to tape).
 When I do an incremental backup operation, TSM takes about 6h30m
 (TSM writes to the storage pool).

 I'm looking for a rational approach to this problem: isn't writing to
 the storage pool (disk) faster than writing to tape?
 
 Anyone had the same performance problem?
 
 Is it really a performance problem?
 
 I would like some commentaries about this, I can provide some 
 info about
 the configuration of TSM and the AIX servers.
 
 Regards
 
 Ignacio
 



Re: Tuning TSM

2002-05-16 Thread Seay, Paul

Look at the proceedings at www.share.org from the Nashville Meeting.  A
couple of TSM sessions given by Charlie Nichols from the TSM Performance Lab
have everything you need (5722 and 5723).

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 16, 2002 9:34 AM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Great for AIX, but does anyone have IP tuning parameters for Windows2000?

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of lt
Sent: Wednesday, May 15, 2002 6:42 PM
To: [EMAIL PROTECTED]
Subject: Re: Tuning TSM


Hi,
 Be sure to set ALL parameters for the nic cards correctly to match the
ports on the switches.  Ensure that ALL 'no options' are set correctly for
your environment.

Example:
 AIX 433_ML_08:
  100MB ethernet nic cards have the xmit/recieve buffer pools maxed out
  100MB ethernet nic cards have the speed/duplex set to match switch ports
  'no options' are set via an /etc/rc.{filename}  called via /etc/inittab
via:
  rctunenet:2:wait:/etc/rc.tunenet > /dev/console 2>&1 #Tune Network Parms
   example:
/etc/rc.tunenet
 if [ -f /usr/sbin/no ]
 then
 thewall=$(/usr/sbin/no -o thewall | awk '{ print $3 }')
 if [ $thewall -lt 4096 ]
 then
 /usr/sbin/no -d thewall
 else
 print thewall is set to $thewall - left as is
 fi
 /usr/sbin/no -d thewall
 /usr/sbin/no -d sb_max
 /usr/sbin/no -o tcp_sendspace=$thewall
 /usr/sbin/no -o tcp_recvspace=$thewall
 /usr/sbin/no -o udp_sendspace=64000
 /usr/sbin/no -o udp_recvspace=64000
 /usr/sbin/no -o net_malloc_police=32768
 /usr/sbin/no -o tcp_mssdflt=1452
 /usr/sbin/no -o ipqmaxlen=150
 /usr/sbin/no -o rfc1323=1
 fi
 print Network parameters tuned...
 By allowing AIX_ML_08 to figure out the best settings for thewall/sb_max,
no -d thewall/sb_max, I do not have to go thru the issue of calculating it
anymore!!!  Having gone thru the above scenario, my 100MB ethernet cloud
performs at, a minimum, 10MB/sec. A lot of the network traffic is logged at:
11MB/sec.  We are now implementing a GIG ethernet network and I am looking
forward to working with it as well.

HTH.


Mr. Lindsey Thomson
BLDG:042/2F-065 IMA: 0422F065
11400 Burnet Rd.,  Austin, TX 78758
off) 512) 823 6522 / (TL) 793 6522

I never waste memory on things that can easily be stored
 and retrieved from elsewhere.- Albert Einstein
Blessed is the nation whose God is the Lord - Psalm 33:12 Time is a great
teacher, but unfortunately it kills all
 its pupils- Hector Berloiz

On Wed, 15 May 2002, Ignacio Vidal wrote:

 Hi:
 I'm managing a pretty small TSM installation with 4 RS/6K machines (2
 6M1 and 2 6H1) running AIX 4.3.3 (ML9). TSM software consists of the
 server (running in a 6M1 - 7Gb RAM), and the clients running in the
 same machine and on the others.

 I've got the following situation:
 - the total of data backed up is about 200Gb's,
 - 4 servers are connected using gigabit ethernet links (and have 6Gb
 RAM and 7Gb RAM each model 6H1 and 6M1 respectively)
 - TSM uses a storage pool of 240Gb on FastT500 disks (those are
 connected by FC channels)
 - TSM uses a 3581 library (LTO) with 1 drive,

 The fact is (for the same set of information):
 When I do an archive backup operation with TSM, the elapsed time is
 around 5 hours (TSM writes directly to tape). When I do an
 incremental backup operation, TSM takes about 6h30m (TSM writes
 to the storage pool).

 I'm looking for a rational approach to this problem: isn't writing to
 the storage pool (disk) faster than writing to tape?

 Anyone had the same performance problem?

 Is it really a performance problem?

 I would like some commentaries about this, I can provide some info
 about the configuration of TSM and the AIX servers.

 Regards

 Ignacio




Re: Tuning TSM

2002-05-15 Thread lt

Hi,
 Be sure to set ALL parameters for the nic cards correctly to match the 
ports on the switches.
 Ensure that ALL 'no options' are set correctly for your environment.

Example:
 AIX 433_ML_08:
  100MB ethernet nic cards have the xmit/recieve buffer pools maxed out
  100MB ethernet nic cards have the speed/duplex set to match switch ports
  'no options' are set via an /etc/rc.{filename}  called via 
/etc/inittab via:
  rctunenet:2:wait:/etc/rc.tunenet > /dev/console 2>&1 #Tune Network Parms
   example:
/etc/rc.tunenet
 if [ -f /usr/sbin/no ]
 then
 thewall=$(/usr/sbin/no -o thewall | awk '{ print $3 }')
 if [ $thewall -lt 4096 ]
 then
 /usr/sbin/no -d thewall
 else
 print thewall is set to $thewall - left as is
 fi
 /usr/sbin/no -d thewall
 /usr/sbin/no -d sb_max
 /usr/sbin/no -o tcp_sendspace=$thewall
 /usr/sbin/no -o tcp_recvspace=$thewall
 /usr/sbin/no -o udp_sendspace=64000
 /usr/sbin/no -o udp_recvspace=64000
 /usr/sbin/no -o net_malloc_police=32768
 /usr/sbin/no -o tcp_mssdflt=1452
 /usr/sbin/no -o ipqmaxlen=150
 /usr/sbin/no -o rfc1323=1
 fi
 print Network parameters tuned...
 By allowing AIX_ML_08 to figure out the best settings for 
thewall/sb_max, no -d thewall/sb_max, I do not have to go thru the issue 
of calculating it anymore!!!
 Having gone thru the above scenario, my 100MB ethernet cloud performs 
at, a minimum, 10MB/sec. A lot of the network traffic is logged at: 
11MB/sec.
 We are now implementing a GIG ethernet network and I am looking forward 
to working with it as well.

HTH.


Mr. Lindsey Thomson
BLDG:042/2F-065 IMA: 0422F065
11400 Burnet Rd.,  Austin, TX 78758
off) 512) 823 6522 / (TL) 793 6522

I never waste memory on things that can easily be stored 
 and retrieved from elsewhere.- Albert Einstein
Blessed is the nation whose God is the Lord - Psalm 33:12
Time is a great teacher, but unfortunately it kills all 
 its pupils- Hector Berloiz

On Wed, 15 May 2002, Ignacio Vidal wrote:

 Hi:
 I'm managing a pretty small TSM installation with 4 RS/6K machines (2
 6M1 and 2 6H1) running AIX 4.3.3 (ML9).
 TSM software consists of the server (running in a 6M1 - 7Gb RAM), and
 the clients running in the same machine and on the others.
 
 I've got the following situation:
 - the total of data backed up is about 200Gb's,
 - 4 servers are connected using gigabit ethernet links (and have 6Gb RAM
 and 7Gb RAM each model 6H1 and 6M1 respectively)
 - TSM uses a storage pool of 240Gb on FastT500 disks (those are
 connected by FC channels)
 - TSM uses a 3581 library (LTO) with 1 drive,
 
 The fact is (for the same set of information):
 When I do an archive backup operation with TSM, the elapsed time is around
 5 hours (TSM writes directly to tape).
 When I do an incremental backup operation, TSM takes about 6h30m
 (TSM writes to the storage pool).

 I'm looking for a rational approach to this problem: isn't writing to
 the storage pool (disk) faster than writing to tape?
 
 Anyone had the same performance problem?
 
 Is it really a performance problem?
 
 I would like some commentaries about this, I can provide some info about
 the configuration of TSM and the AIX servers.
 
 Regards
 
 Ignacio
 



Re: Tuning TSM

2002-05-15 Thread Ignacio Vidal

Lindsey:
I've been going around and around between the networking configuration,
disk I/O performance, and how the storage pool is configured on disk
(whether it is RAID 5, RAID 1 or RAID 10...).

Those servers are connected through gigabit ethernet channels, and they
are delivering from 50 to 75 MBytes/sec. I believe that throughput is very
low, but Tivoli's people (here) insisted on other factors (disk I/O,
RAID configuration, etc.)

I'll try your recommendation; I don't have all the necessary values from
our switches right now.
Thanks

Ignacio

 -Original Message-
 From: lt [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, May 15, 2002 19:42
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM
 
 
 Hi,
  Be sure to set ALL parameters for the nic cards correctly 
 to match the 
 ports on the switches.
  Ensure that ALL 'no options' are set correctly for your 
 environment.
 
 Example:
  AIX 433_ML_08:
   100MB ethernet nic cards have the xmit/recieve buffer pools 
 maxed out
   100MB ethernet nic cards have the speed/duplex set to match 
 switch ports
   'no options' are set via an /etc/rc.{filename}  called via 
 /etc/inittab via:
   rctunenet:2:wait:/etc/rc.tunenet > /dev/console 2>&1 #Tune Network Parms
example:
 /etc/rc.tunenet
  if [ -f /usr/sbin/no ]
  then
  thewall=$(/usr/sbin/no -o thewall | awk '{ print $3 }')
  if [ $thewall -lt 4096 ]
  then
  /usr/sbin/no -d thewall
  else
  print thewall is set to $thewall - left as is
  fi
  /usr/sbin/no -d thewall
  /usr/sbin/no -d sb_max
  /usr/sbin/no -o tcp_sendspace=$thewall
  /usr/sbin/no -o tcp_recvspace=$thewall
  /usr/sbin/no -o udp_sendspace=64000
  /usr/sbin/no -o udp_recvspace=64000
  /usr/sbin/no -o net_malloc_police=32768
  /usr/sbin/no -o tcp_mssdflt=1452
  /usr/sbin/no -o ipqmaxlen=150
  /usr/sbin/no -o rfc1323=1
  fi
  print Network parameters tuned...
  By allowing AIX_ML_08 to figure out the best settings for 
 thewall/sb_max, no -d thewall/sb_max, I do not have to go 
 thru the issue 
 of calculating it anymore!!!
  Having gone thru the above scenario, my 100MB ethernet cloud 
 performs 
 at, a minimum, 10MB/sec. A lot of the network traffic is logged at: 
 11MB/sec.
  We are now implementing a GIG ethernet network and I am 
 looking forward 
 to working with it as well.
 
 HTH.
 
 
 Mr. Lindsey Thomson
 BLDG:042/2F-065 IMA: 0422F065
 11400 Burnet Rd.,  Austin, TX 78758
 off) 512) 823 6522 / (TL) 793 6522
 
 I never waste memory on things that can easily be stored 
  and retrieved from elsewhere.- Albert Einstein
 Blessed is the nation whose God is the Lord - Psalm 33:12
 Time is a great teacher, but unfortunately it kills all 
  its pupils- Hector Berloiz
 
 On Wed, 15 May 2002, Ignacio Vidal wrote:
 
  Hi:
  I'm managing a pretty small TSM installation with 4 RS/6K 
 machines (2
  6M1 and 2 6H1) running AIX 4.3.3 (ML9).
  TSM software consists of the server (running in a 6M1 - 7Gb 
 RAM), and
  the clients running in the same machine and on the others.
  
  I've got the following situation:
  - the total of data backed up is about 200Gb's,
  - 4 servers are connected using gigabit ethernet links (and 
 have 6Gb RAM
  and 7Gb RAM each model 6H1 and 6M1 respectively)
  - TSM uses a storage pool of 240Gb on FastT500 disks (those are
  connected by FC channels)
  - TSM uses a 3581 library (LTO) with 1 drive,
  
  The fact is (for the same set of information):
  When I do an archive backup operation with TSM, the elapsed time is
  around 5 hours (TSM writes directly to tape).
  When I do an incremental backup operation, TSM takes about 6h30m
  (TSM writes to the storage pool).

  I'm looking for a rational approach to this problem: isn't writing to
  the storage pool (disk) faster than writing to tape?
  
  Anyone had the same performance problem?
  
  Is it really a performance problem?
  
  I would like some commentaries about this, I can provide 
 some info about
  the configuration of TSM and the AIX servers.
  
  Regards
  
  Ignacio