Re: Please tell me about your LTO3 / LTO4 performance

2008-07-03 Thread Eric Bourgi
No, we are not using RMAN; we only use TDP for R/3:

pkginfo -l TDPR3
   PKGINST:  TDPR3
      NAME:  Data Protection for SAP
  CATEGORY:  application
      ARCH:  sparc
   VERSION:  5.5.0.0.DSP=5.5.0
   BASEDIR:  /opt/tivoli/tsm/tdp_r3/ora64
    VENDOR:  IBM Tivoli Storage Manager for ERP
      DESC:  Data Protection for SAP
    STATUS:  completely installed

 


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-03 Thread Kauffman, Tom
No RMAN here. We do one full backup per night: online on weeknights, offline
at midnight on Saturday.

But our R/3 database is currently only 1.4 TB.

Tom
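
For comparison with the RMAN question: without RMAN, nightly fulls like these
are typically driven by brbackup through the backint interface. A rough sketch
only, with the SID, options and scheduling treated as placeholders rather than
an exact setup:

# Illustrative only -- assumes backup_dev_type = util_file and
# util_par_file = init<SID>.utl in the init<SID>.sap profile
brbackup -u / -c -d util_file -t online -m all     # weeknight online full
brbackup -u / -c -d util_file -t offline -m all    # Saturday midnight offline full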


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-03 Thread Ochs, Duane
We have three TSM servers with their disk pools and databases behind an SVC.
Each has approximately 2.5 TB of space on MDGs of 48 disks or more, on
either DS4500s or a DS8300.

There is a very significant performance difference between that and a
fourth site that has a dedicated DS4200 configured with 500 GB SATA drives.

In the grand scheme of things, I don't feel that the SVC disks were
necessary to achieve the performance we needed when we were writing
across a 1 Gbit LAN/WAN or to the LTO2 drives we had.

Recently we migrated to LTO4, and our disk pool migrations are running
much faster, which is probably directly related to the SVC and large
MDGs being able to keep the data flowing to the tape drives.

If you are looking for specific info, let me know.
Duane


Information Systems - Unix, Storage and Retention
Quad/Graphics Inc.
Sussex, Wisconsin
[EMAIL PROTECTED]
www.QG.com



 


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-03 Thread Thach, Kevin G
What model are your SVC nodes?  4F2, 8F2, 8G4?  How many nodes in your
cluster?  Are you directing the vdisks for your TSM server across all
I/O groups, or just a single one?

I'd be interested in more details of your config.
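
For anyone checking their own layout, the SVC CLI answers those questions
directly; a rough sketch (column names from memory, so verify against your
code level):

# Run against the SVC cluster CLI
svcinfo lsnode                # node hardware type, and how many nodes / I/O groups exist
svcinfo lsvdisk -delim :      # the IO_group_name column shows where each TSM vdisk lives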


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-02 Thread Eric Bourgi
For large clients, where it is feasible, do not use your LAN; use LAN-free
backup instead, with your master TSM server acting as the library manager.

Here is an example of a backup (TDP for mySAP):
3 LTO3 tape drives as targets and a multiplexing factor of 7 (so 21 threads).
Throughput ranges from 600 GB/hour to over 1 TB/hour, depending on the
client disk load.
We went from 2 Gb to 4 Gb HBAs/switches and did not see a
significant impact.
Note that we also use TDP compression (which is not full compression,
but it helps).

Parallel sessions : 3
Multiplexed files : 7

BKI1215I: Average transmission rate was 821.396 GB/h (233.641 MB/sec).
BKI1227I: Average compression factor was 1.148.
BKI0020I: End of program at: Wed Jul 02 03:00:00 2008 .
BKI0021I: Elapsed time: 05 h 57 min 01 sec .
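
For reference, the "Parallel sessions" and "Multiplexed files" values above
come from the Data Protection for SAP profile (init<SID>.utl). Roughly, and
from memory (parameter names can vary between releases), the relevant lines
look like:

# init<SID>.utl excerpt -- illustrative values only
MAX_SESSIONS    3      # parallel sessions, one per tape drive
MULTIPLEXING    7      # files interleaved per session (3 x 7 = 21 threads)
RL_COMPRESSION  YES    # the TDP null-block compression mentioned above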


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Thach, Kevin G
Sent: Tuesday, July 01, 2008 7:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Please tell me about your LTO3 / LTO4 performance

There are two 2Gb ISLs going to each switch for a total bandwidth of
4Gb to each edge.  Our SAN monitoring tool (EFCM) doesn't show that
we're maxing out the ISLs, but I can easily add one to see what
happens.

I'll also try the alternate pathing ASAP.

Thanks for the suggestions!

Re: Please tell me about your LTO3 / LTO4 performance

2008-07-02 Thread Justin Miller
Are you using the RMAN interface for your SAP backups?  We are using it
here and I'm curious to see how others have things configured when using
RMAN.

Justin Miller
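
For those who do use the RMAN path, the backups go out through an SBT_TAPE
channel pointed at the TDP library. A very rough sketch only, with the ENV
keyword and paths treated as placeholders rather than exact product syntax:

# Illustrative only -- library path and ENV variable name are placeholders
rman target / <<'EOF'
run {
  allocate channel t1 device type 'SBT_TAPE'
    parms 'ENV=(XINT_PROFILE=/oracle/SID/dbs/initSID.utl)';
  backup database;
  release channel t1;
}
EOF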





Re: Please tell me about your LTO3 / LTO4 performance

2008-07-02 Thread Orville Lantto
Any more thoughts on using TSM through an SVC?  I am about to configure such a
system and have reservations about putting TSM disk storage pools on SVC LUNs.

Orville L. Lantto





Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Thach, Kevin G
Hi all-

 

For quite some time now, I have been trying to track down an elusive
bottleneck in my TSM environment relating to disk-to-tape performance.
This is a long read, but I would be very grateful for any suggestions.
Hopefully some of you folks much smarter than me out there will be able
to point me in the right direction.

 

If any other LTO3 or LTO4 users out there could give me some examples of
their real-world performance along with a little detail on their config,
that would be most helpful as well!

 

My current environment consists of:

 

* TSM server = p570 LPAR w/4 1.9GHz processors and 8GB RAM, (6)
2Gb HBAs (2 for disk and 4 for tape traffic), and a 10Gb Ethernet
adapter.

* TSM 5.4.1.2 on AIX 5.3 TL6

* 3584 w/14 LTO3 drives at primary site

* 3584 w/12 LTO1 drives at DR/hotsite (copypool volumes are
written directly to this library via SAN routing)

* DB (80GB -- 4GB DBVOL size) residing on IBM DS8300 behind IBM
SVC

* Log (11GB - single LOGVOL) residing on IBM DS8300 behind IBM
SVC

* Primary Storage pool in question (2.5TB -- 20GB volume size),
DISK device class, residing on IBM DS8300 behind IBM SVC

 

I currently back up about 4.5TB / night, of which ~2TB is written
directly to my primary LTO3 tape pool with a simultaneous write to my
copypool across town.  So, each morning I'm left with about 2.5TB of
data to copy and migrate from my disk pool(s) to copypool and onsite
tape respectively.
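
(For anyone setting this up: simultaneous write of this kind is normally the
COPYSTGPOOLS / COPYCONTINUE setting on the primary pool; a sketch with
placeholder pool names, not the real ones here:)

# Illustrative only
dsmadmc -id=admin -password=xxx "update stgpool LTO3POOL copystgpools=DR_COPYPOOL copycontinue=yes"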

 

My backup stg performance to LTO1 tape (copypool) is about what I would
expect.  I run 5 threads for this process (5 mount points used), and I
consistently average 20-25MB/sec/drive.  Fair enough.  I don't know of
anyone getting a whole lot more than that out of an LTO1 drive.

 

After that is complete, I then migrate that data to my LTO3 tape here
onsite.  That performance is pretty lousy compared to what I would
expect to get out of LTO3.  I run 6 migration threads (6 mount points
used), and I average around 25MB/sec/drive going to LTO3 as well.
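
For anyone comparing settings: the thread counts above map to the MAXPROCESS
and MIGPROCESS parameters, roughly as follows (pool names are placeholders,
not the real ones here):

# Illustrative only
dsmadmc -id=admin -password=xxx "backup stgpool DISKPOOL OFFSITE_COPYPOOL maxprocess=5"
dsmadmc -id=admin -password=xxx "update stgpool DISKPOOL migprocess=6 highmig=0 lowmig=0"
dsmadmc -id=admin -password=xxx "query process"     # shows files and bytes moved per process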

 

All SAN links between the TSM server and the LTO3 drives are a minimum
of 2Gb, so that is my lowest common denominator.  I've tried using fewer
threads to see if perhaps I was saturating an HBA rather than the drive.
Same speed.  I've tried separating my DB and STG pools onto different
storage subsystems.  Same speed.  I've opened PMRs with IBM support,
and they have pored over all of my TSM server settings / config and
found nothing to go on.  We've had IBM ATS teams evaluate the situation,
and they've never been able to pinpoint a problem.

 

I've tried various tools--tapewrite, nmon, filemon, etc. and I've not
found a smoking gun.
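
One other blunt test that can help isolate the drive/HBA side is a raw write
straight to a single rmt device (the device name and sizes below are made up,
and /dev/zero compresses perfectly, so this only shows a best-case ceiling):

# Needs a scratch tape mounted in the drive; writes ~10 GB
timex dd if=/dev/zero of=/dev/rmt5 bs=256k count=40960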

 

At this point, my gut is that SVC is the bottleneck, but for those of
you familiar with SVC, you know that trying to obtain meaningful
performance statistics on the SVC cluster itself is frustrating.

 

I know there are folks out there getting much better performance out of
LTO3 drives, so please tell me how you're doing it!

 

Suggestions?  Questions? 

 

Thank you!

-Kevin

 

 






Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Wanda Prather
If you think it's the SVC, why not try taking TSM out of the picture:

If you use OS tools to COPY a big chunk of data (say a 20 GB file) from one
spot behind the SVC to another, and time it, what is your MB/sec rate?
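
On AIX that can be as simple as the following (paths, file size and block size
are made up for illustration):

# Time a large sequential copy between two filesystems on SVC-backed LUNs
timex dd if=/svcfs1/testfile.20g of=/svcfs2/testfile.copy bs=256k
# MB/sec = file size in MB divided by the "real" seconds timex reports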







Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Kauffman, Tom
How are your tape drives attached to your TSM HBAs? Presumably by SAN switch, 
so how do you have the drives zoned? Ideally, every drive should be visible on 
every fiber, and alternate path support should be enabled (chdev -l rmtx -a 
alt_pathing=yes). Do NOT do this for the SMC if you do not have path failover; 
it may also not work for LTO3 without path failover.
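
If you want to set that attribute across the board, a loop along these lines
works on AIX (a sketch only; the devices must be idle when you change them, and
skip the smc changer as noted above):

for dev in $(lsdev -Cc tape -F name | grep '^rmt'); do
    chdev -l $dev -a alt_pathing=yes     # device must not be in use by TSM at the time
done
lsattr -El rmt0 -a alt_pathing           # spot-check the result on one device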

I have 10 LTO4 and 6 LTO2 drives, and 10 fibers to tape from my TSM LPAR; two 
SAN switches, with the even-numbered drives in one and the odd-numbered drives 
in the other. The result is 80 rmt (tape) devices for the LPAR.

I know I'm network limited - so I only get a maximum of 110 MB/sec per 
drive/network interface in my nightly SAP backups. (dedicated Gb networks, one 
per concurrent backup session - Gigabit Ethernet NICs are cheap!) My off-site 
copy processes run at LTO2 drive speed (the 'twos are only used for offsite 
tapes).

This is for 4 concurrent sessions over two network interfaces:
BKI1215I: Average transmission rate was 762.364 GB/h (216.850 MB/sec).
BKI1227I: Average compression factor was 1.000.
BKI0020I: End of program at: Mon Jun 30 20:55:08 EDT 2008 .
BKI0021I: Elapsed time: 01 h 52 min 00 sec .
BKI0024I: Return code is: 0.

So I averaged 108 MB/sec over the NIC, and 54 MB/sec to the drive.
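
The arithmetic behind that last line, using only the figures already quoted
above:

echo "scale=1; 216.850 / 2" | bc    # ~108 MB/sec per NIC (2 interfaces)
echo "scale=1; 216.850 / 4" | bc    # ~54 MB/sec per drive (4 sessions)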

Tom Kauffman
NIBCO, Inc


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Thach, Kevin G
I am set up very similarly to you.  My TSM LPAR HBAs connect to a director-
class switch, which has an ISL to each of the edge switches that the tape
drives themselves connect to (odd drives on one and even on the other,
like yours).

Therefore, I have 64 rmt devices at the AIX level for my LTO3 drives, as
each tape HBA sees each of the 14 drives.  I am not using alternate
pathing.
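
(A quick way to list and count what AIX actually discovered, for anyone
comparing; nothing site-specific assumed:)

lsdev -Cc tape                             # one rmtN instance per drive per HBA path
lsdev -Cc tape -F name | grep -c '^rmt'    # count of rmt devices AIX sees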


Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Kauffman, Tom
Two items, then.

Alternate pathing may help. Also, what is the available bandwidth of the ISL to 
the edge switches? For your system, it should be at least 6 Gb; 8 would be 
marginally better (three paired ports at 2 Gb/port, or two paired ports at 4 
Gb/port).
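
A rough way to sanity-check an ISL figure like that, using assumed numbers
(about 80 MB/sec native per LTO3 stream and the usual MB/sec-to-Mb/sec factor
of ten for Fibre Channel; neither number comes from this thread):

echo "scale=1; 6 * 80 * 10 / 1000" | bc    # ~4.8 Gb/sec for six streams, hence "at least 6 Gb"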

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Thach, 
Kevin G
Sent: Tuesday, July 01, 2008 11:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Please tell me about your LTO3 / LTO4 performance

I am set up very similar to you.  My TSM LPAR HBAS connect to a director
class switch which has an ISL to each of the edge switches that the tape
drives themselves connect to (odd drives on one and even on the other
like yourself.)

Therefore, I have 64 rmt devices at the AIX level for my LTO3 drives, as
each tape HBA sees each of the 14 drives.  I am not using the alternate
pathing.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kauffman, Tom
Sent: Tuesday, July 01, 2008 11:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Please tell me about your LTO3 / LTO4 performance

How are your tape drives attached to your TSM HBAs? Presumably by SAN
switch, so how do you have the drives zoned? Ideally, every drive should
be visible on every fiber and alternate path support should be enabled
(chdev -l rmtx -a alt_pathing=yes) (do NOT do for the SMC if you do not
have path failover; may not work for LTO3 if you do not have path
failover).

I have 10 LTO4 and 6 LTO2 drives, and 10 fibers to tape from my TSM
LPAR; two SAN switches, with the even-numbered drives in one and the
odd-numbered drives in the other. The result is 80 rmt (tape) devices
for the LPAR.

I know I'm network limited - so I only get a maximum of 110 MB/sec per
drive/network interface in my nightly SAP backups. (dedicated Gb
networks, one per concurrent backup session - Gigabit Ethernet NICs are
cheap!) My off-site copy processes run at LTO2 drive speed (the 'twos
are only used for offsite tapes).

This is for 4 concurrent sessions over two network interfaces:
BKI1215I: Average transmission rate was 762.364 GB/h (216.850 MB/sec).
BKI1227I: Average compression factor was 1.000.
BKI0020I: End of program at: Mon Jun 30 20:55:08 EDT 2008 .
BKI0021I: Elapsed time: 01 h 52 min 00 sec .
BKI0024I: Return code is: 0.

So I averaged 108 MB/sec over the NIC, and 54 MB/sec to the drive.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Thach, Kevin G
Sent: Tuesday, July 01, 2008 10:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Please tell me about your LTO3 / LTO4 performance

Hi all-



For quite some time now, I have been trying to track down an elusive
bottleneck in my TSM environment relating to disk-to-tape performance.
This is a long read, but I would be very greatful for any suggestions.
Hopefully some of you folks much smarter than me out there will be able
to point me in the right direction.



If any other LTO3 or LTO4 users out there could give me some examples of
their real-world performance along with a little detail on their config,
that would be most helpful as well!



My current environment consists of:



* TSM server = p570 LPAR w/4  1.9GHz processors and 8GB RAM, (6)
2Gb HBAS (2 for disk and 4 for tape traffic), and a 10Gb Ethernet
adapter.

* TSM 5.4.1.2 on AIX 5.3 TL6

* 3584 w/14 LTO3 drives at primary site

* 3584 w/12 LTO1 drives at DR/hotsite (copypool volumes are
written directly to this library via SAN routing)

* DB (80GB -- 4GB DBVOL size) residing on IBM DS8300 behind IBM
SVC

* Log (11GB - single LOGVOL) residing on IBM DS8300 behind IBM
SVC

* Primary Storage pool in question (2.5TB -- 20GB volume size),
DISK device class, residing on IBMDS8300 behind IBM SVC



I currently back up about 4.5TB / night, of which ~2TB is written
directly to my primary LTO3 tape pool with a simultaneous write to my
copypool across town.  So, each morning I'm left with about 2.5TB of
data to copy and migrate from my disk pool(s) to copypool and onsite
tape respectively.



My backup stg performance to LTO1 tape (copypool) is about what I would
expect.  I run 5 threads for this process (5 mount points used), and I
consistently average 20-25MB/sec/drive.  Fair enough.  I don't know of
anyone getting a whole lot more than that out of an LTO1 drive.
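(For the curious, those five threads come from MAXPROCESS on the storage
pool backup; again, the pool names below are made up:

   dsmadmc -id=admin -password=xxxxx \
     "backup stgpool DISKPOOL OFFSITECOPY maxprocess=5 wait=yes"
   dsmadmc -id=admin -password=xxxxx "query process"
)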



After that is complete, I then migrate that data to my LTO3 tape here
onsite.  That performance is pretty lousy compared to what I would
expect to get out of LTO3.  I run 6 migration threads (6 mount points
used), and I average around 25MB/sec/drive going to LTO3 as well.
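(That migration is driven the usual way -- MIGPROCESS on the disk pool
plus dropping the thresholds to force it; the pool name is made up:

   dsmadmc -id=admin -password=xxxxx \
     "update stgpool DISKPOOL migprocess=6 highmig=0 lowmig=0"
   # raise highmig/lowmig back to normal once the pool has drained
)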



All SAN links between the TSM server and the LTO3 drives are a minimum
of 2Gb, so that is my lowest common denominator.  I've tried using fewer
threads to see if perhaps I was saturating an HBA rather than the drive.
Same speed.  I've tried separating my DB and STG pools
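(On the HBA question, a couple of stock AIX commands will show live
adapter throughput while the migration runs; fcs2 below is just an
example adapter name:

   fcstat fcs2 | grep -i bytes     # cumulative FC traffic on one tape HBA
   iostat -a 10 6                  # per-adapter throughput, one-minute sample
)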

Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Gee, Norman
Single migration process of compressed data from DS-4200 to LTO4: ~300 GB
per hour. 4 Gb fabric, no ISL.
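For comparison with the per-drive numbers above, that works out to
roughly:

   300 GB/hour / 3600 sec ~= 85 MB/sec sustained to a single LTO4 drive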



Re: Please tell me about your LTO3 / LTO4 performance

2008-07-01 Thread Thach, Kevin G
There are two 2Gb ISLs going to each switch for a total bandwidth of
4Gb to each edge.  Our SAN monitoring tool (EFCM) doesn't show that
we're maxing out the ISLs, but I can easily add one to see what
happens.
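Rough arithmetic on why EFCM looks clean:

   2 x 2Gb ISLs per edge             ~= 400 MB/sec usable per edge switch
   6 migration streams x ~25 MB/sec  ~= 150 MB/sec total, split across
   the two edges (~75 MB/sec each), so each edge is carrying well under
   a quarter of its ISL capacity.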

I'll also try the alternate pathing ASAP.

Thanks for the suggestions!
