TSM Performance 5.5 vs 6.2

2011-10-27 Thread Daniel Sparrman
First off, to determine if your hardware is enough, it would be useful to know
the size of your environment (database size, amount of daily data, total
amount of data).
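
If it helps, those numbers fall out of a few administrative queries. A minimal
sketch (the two SELECT statements are from memory, so treat the table and
column names as assumptions and sanity-check the results):

   query db format=detailed    (database size and utilization)
   query stgpool               (pool capacity and percent utilized)
   select sum(logical_mb)/1048576 as total_tb from occupancy
   select sum(bytes)/1073741824 as gb_last_24h from summary where activity='BACKUP' and start_time>current_timestamp-24 hours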

As for 6.2 in general, TSM's internal housekeeping is a lot faster. One of the
main issues for a lot of people on 5.5 was expiration processing. With the new
features such as multi-threading, expiration now runs a lot faster.

TSM 6.2 requires a bit more hardware, but overall, all the environments I've
upgraded so far have seen performance increases across the board. So I believe
the risk that your performance would go down is very, very small.

I think IBM mentioned somewhere that overall database performance increased
roughly threefold with the move to DB2 as the database engine.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Switchboard: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU> wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Druckenmiller, David
Sent by: ADSM: Dist Stor Manager
Date: 10/26/2011 20:41
Subject: [ADSM-L] TSM Performance 5.5 vs 6.2


I need to show management that simply upgrading TSM from 5.5 to 6.2 will not
cause a degradation in performance.  I know a lot of upgrades are done by
moving to new hardware.  We don't have that luxury.  We are currently running
AIX 6.1 on a p520 server.  I've already upped memory to 32GB.

Anyone have any experience to share?

Thanks
Dave


Re: FODC (First Occurrence Data Capture) dumps

2011-10-27 Thread Steven Langdale
Zoltan

These are DB2 dumps.  Assuming you don't need them, and by the dates you
don't, they are OK to remove.
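
If you want a safety check first, something like this on the server host will
do (a sketch, assuming the dumps live under /dumps as in your note):

   # list the old capture directories before touching anything
   find /dumps -name 'FODC_Panic_*' -type d -mtime +365
   # then, once you're happy with the list
   find /dumps -name 'FODC_Panic_*' -type d -mtime +365 | xargs rm -rf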

Steven

On 25 October 2011 15:17, Zoltan Forray/AC/VCU <zfor...@vcu.edu> wrote:

> I have been looking around on our servers to clean up large/unnecessary
> files and came upon the /dumps/FODC_Panic_ folders with many gigs of
> cores and such.
>
> Any need to keep these around and can they be deleted?  Some of them date
> back to 2009.
>
>
> Zoltan Forray
> TSM Software & Hardware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html



Failing cifs backup on Isilon NAS

2011-10-27 Thread Stefan Folkerts
Hi guys,

A customer of ours got themselves a new NAS system and needs to make backups
using the BA client (they don't want an NDMP solution just yet).

I am using a user account that can access/copy/delete data on the CIFS share
I am trying to back up, but running a TSM backup doesn't work.
Now, I know that TSM support for CIFS shares is limited on some devices, which
might be the case here, but I am still wondering if anybody has seen this
before and/or might have a solution for this issue.

If I just do a dsmc i \\server\share it also fails with the same ANS1228E and
ANS4007E messages.

http://imgur.com/7YzYl

I suspect it has something to do with reading the security metadata of the
dirs/files and failing to do so, but I am not sure.
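
One way to narrow it down (a diagnostic sketch, not a fix, and I'm quoting
the option name from memory, so double-check it in the client manual):

   * dsm.opt -- diagnostic: skip reading the NTFS security data
   SKIPNTSECURITY YES

If an incremental of the share suddenly works with that set, you know the
failure is in reading the security metadata rather than the file data.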

Regards,
  Stefan


comm/idle/resource timeout values

2011-10-27 Thread Richard Rhodes
Hi Everyone,

In working with support on a couple of issues, we've realized that we have
different values for commtimeout, idletimeout, and resourcetimeout.

We have:   2 dedicated library manager instances
           7 tsm instances for BA client file backups
           2 tsm instances for BIG LanFree Oracle backups (tdpo/lanfree)
             (db's > 1TB; all the big lanfree nodes are in these instances)
          32 nodes with tdpo/lanfree setups

All instances share the same tape drives via the dedicated library
managers.

The dedicated library managers, TSM instances for big lanfree nodes,
and the storage agents are all defined with the following parms:
  commtimeout     14400
  idletimeout       240
  resourcetimeout    60

The seven tsm instances for normal BA client backups have the following
parms:
(These tsm servers include the problem-child Windows nodes with millions
of small files.)
  commtimeout 3600
  idletimeout  150
  resourcetimeout   60


IBM support indicated that ALL instances in this environment should use the
same values for these parms. If they are not the same, it can be a cause of
one of the problems we are fighting (scsi reservation errors).
I'm not sure if the values above are good/bad/ugly, or what values should be
used.  I'm not finding many specific recommendations.

Any suggestions would be greatly appreciated!

Rick




Re: FODC (First Occurrence Data Capture) dumps

2011-10-27 Thread Zoltan Forray/AC/VCU
Thanks for the confirmation... it gave me back 30GB.



From:   Steven Langdale <steven.langd...@gmail.com>
To: ADSM-L@VM.MARIST.EDU
Date:   10/27/2011 04:04 AM
Subject:    Re: [ADSM-L] FODC (First Occurrence Data Capture) dumps
Sent by:    ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>



Zoltan

These are DB2 dumps.  Assuming you don't need them, and by the dates you
don't, they are OK to remove.

Steven

On 25 October 2011 15:17, Zoltan Forray/AC/VCU <zfor...@vcu.edu> wrote:

> I have been looking around on our servers to clean up large/unnecessary
> files and came upon the /dumps/FODC_Panic_ folders with many gigs of
> cores and such.
>
> Any need to keep these around and can they be deleted?  Some of them date
> back to 2009.
>
>
> Zoltan Forray
> TSM Software & Hardware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html



comm/idle/resource timeout values - take 2

2011-10-27 Thread Richard Rhodes
(I had the values for commtimeout and idletimeout backwards! Fixed below.)

Hi Everyone,

In working with support on a couple of issues, we've realized that we have
different values for commtimeout, idletimeout, and resourcetimeout.

We have:   2 dedicated library manager instances
           7 tsm instances for BA client file backups
           2 tsm instances for BIG LanFree Oracle backups (tdpo/lanfree)
             (db's > 1TB; all the big lanfree nodes are in these instances)
          32 nodes with tdpo/lanfree setups

All instances share the same tape drives via the dedicated library
managers.

The dedicated library managers, TSM instances for big lanfree nodes,
and the storage agents are all defined with the following parms:
  commtimeout       240  (fixed)
  idletimeout     14400  (fixed)
  resourcetimeout    60

The seven tsm instances for normal BA client backups have the following
parms:
(These tsm servers include the problem-child Windows nodes with millions
of small files.)
  commtimeout       150  (fixed)
  idletimeout      3600  (fixed)
  resourcetimeout    60


IBM support indicated that ALL instances in this environment should use the
same values for these parms. If they are not the same, it can be a cause of
one of the problems we are fighting (scsi reservation errors).
I'm not sure if the values above are good/bad/ugly, or what values should be
used.  I'm not finding many specific recommendations.
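
If we do end up standardizing, the values live in each instance's dsmserv.opt
and aligning them is trivial; a sketch (values picked for illustration only):

   * dsmserv.opt -- identical on every instance, restart required
   COMMTIMEOUT       240
   IDLETIMEOUT     14400
   RESOURCETIMEOUT    60

At least commtimeout and idletimeout can also be changed on a running server
with SETOPT, if memory serves.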

Any suggestions would be greatly appreciated!

Rick




Re: Failing cifs backup on Isilon NAS

2011-10-27 Thread Skylar Thompson
Can you back up over NFS instead? We do that for our Isilon clusters and
it works surprisingly well.
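
For what it's worth, a minimal sketch of the client side (the mount point and
server stanza name are placeholders); VIRTUALMOUNTPOINT makes the NFS export
its own filespace, which keeps the backups tidy:

   * dsm.sys -- back up an NFS-mounted Isilon export
   SERVERNAME  tsmserver
      VIRTUALMOUNTPOINT  /mnt/isilon
      DOMAIN             /mnt/isilon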


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

On 10/27/11 04:19 AM, Stefan Folkerts wrote:

Hi guys,

A customer of ours got themselves a new NAS system and needs to make backups
using the BA client (they don't want an NDMP solution just yet).

I am using a user account that can access/copy/delete data on the CIFS share
I am trying to back up, but running a TSM backup doesn't work.
Now, I know that TSM support for CIFS shares is limited on some devices, which
might be the case here, but I am still wondering if anybody has seen this
before and/or might have a solution for this issue.

If I just do a dsmc i \\server\share it also fails with the same ANS1228E and
ANS4007E messages.

http://imgur.com/7YzYl

I suspect it has something to do with reading the security metadata of the
dirs/files and failing to do so, but I am not sure.

Regards,
   Stefan


Re: comm/idle/resource timeout values - take 2

2011-10-27 Thread Remco Post
Hi Richard,

which version of TSM are you running? In some version (6.3?) of TSM the SCSI
reservation method changed. So if you mix various levels you might find
yourself in trouble, or if you don't set the SCSI reservation key correctly.

Also, there is a bug in TSM 5.5.2 and lower for NDMP where the NAS filer might 
report SCSI reservation conflicts because TSM doesn't track the NDMP session 
properly.

Also, I've seen SCSI reservation conflicts reported when the server-to-server
communication from (IIRC) the LM to the LC doesn't work. Check that you can
route commands in both directions properly...
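
A quick way to verify the routing, as a sketch (the server name is a
placeholder, and it assumes the server-to-server definitions already exist):

   # run on the library manager, then repeat in reverse on the library client
   dsmadmc -id=admin -password=xxx
   ping server LIBRARY_CLIENT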


On 27 Oct 2011, at 14:58, Richard Rhodes wrote:

> (I had the values for commtimeout and idletimeout backwards! Fixed below.)
>
> Hi Everyone,
>
> In working with support on a couple of issues, we've realized that we have
> different values for commtimeout, idletimeout, and resourcetimeout.
>
> We have:   2 dedicated library manager instances
>            7 tsm instances for BA client file backups
>            2 tsm instances for BIG LanFree Oracle backups (tdpo/lanfree)
>              (db's > 1TB; all the big lanfree nodes are in these instances)
>           32 nodes with tdpo/lanfree setups
>
> All instances share the same tape drives via the dedicated library
> managers.
>
> The dedicated library managers, TSM instances for big lanfree nodes,
> and the storage agents are all defined with the following parms:
>   commtimeout       240  (fixed)
>   idletimeout     14400  (fixed)
>   resourcetimeout    60
>
> The seven tsm instances for normal BA client backups have the following
> parms:
> (These tsm servers include the problem-child Windows nodes with millions
> of small files.)
>   commtimeout       150  (fixed)
>   idletimeout      3600  (fixed)
>   resourcetimeout    60
>
> IBM support indicated that ALL instances in this environment should use the
> same values for these parms. If they are not the same, it can be a cause of
> one of the problems we are fighting (scsi reservation errors).
> I'm not sure if the values above are good/bad/ugly, or what values should
> be used.  I'm not finding many specific recommendations.
>
> Any suggestions would be greatly appreciated!
>
> Rick

-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: comm/idle/resource timeout values - take 2

2011-10-27 Thread Richard Rhodes
We are:
   all tsm servers are v5.5.2
   storage agents are v5.5.1 and v5.4.1 (being upgraded to v5.5.1)

We do not have any NAS/NDMP backups.

We found one possible cause of the reservation conflicts - we had one of the
TSM instances with a device class with mount wait 0.  Every other instance
has mount wait 1.  I hate to think how long it has been that way, but no one
noticed the excessive mounts until two days ago!  IBM indicated that TSM
instances with different mount wait settings could cause scsi reservation
conflicts.
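
For anyone checking their own setup: the setting sits on the device class,
so something like this (the class name and value are placeholders):

   query devclass format=detailed           (look at the Mount Wait value)
   update devclass LTOCLASS mountwait=60    (then use the same value everywhere)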


Thanks!

Rick




From:   Remco Post <r.p...@plcs.nl>
To: ADSM-L@VM.MARIST.EDU
Date:   10/27/2011 02:50 PM
Subject:    Re: comm/idle/resource timeout values - take 2
Sent by:    ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>



Hi Richard,

which version of TSM are you running? In some version (6.3?) of TSM the
SCSI reservation method changed. So if you mix various levels you might
find yourself in trouble, or if you don't set the SCSI reservation key
correctly.

Also, there is a bug in TSM 5.5.2 and lower for NDMP where the NAS filer
might report SCSI reservation conflicts because TSM doesn't track the NDMP
session properly.

Also, I've seen SCSI reservation conflicts reported when the server-to-server
communication from (IIRC) the LM to the LC doesn't work. Check that you can
route commands in both directions properly...


On 27 Oct 2011, at 14:58, Richard Rhodes wrote:

> (I had the values for commtimeout and idletimeout backwards! Fixed below.)
>
> Hi Everyone,
>
> In working with support on a couple of issues, we've realized that we have
> different values for commtimeout, idletimeout, and resourcetimeout.
>
> We have:   2 dedicated library manager instances
>            7 tsm instances for BA client file backups
>            2 tsm instances for BIG LanFree Oracle backups (tdpo/lanfree)
>              (db's > 1TB; all the big lanfree nodes are in these instances)
>           32 nodes with tdpo/lanfree setups
>
> All instances share the same tape drives via the dedicated library
> managers.
>
> The dedicated library managers, TSM instances for big lanfree nodes,
> and the storage agents are all defined with the following parms:
>   commtimeout       240  (fixed)
>   idletimeout     14400  (fixed)
>   resourcetimeout    60
>
> The seven tsm instances for normal BA client backups have the following
> parms:
> (These tsm servers include the problem-child Windows nodes with millions
> of small files.)
>   commtimeout       150  (fixed)
>   idletimeout      3600  (fixed)
>   resourcetimeout    60
>
> IBM support indicated that ALL instances in this environment should use the
> same values for these parms. If they are not the same, it can be a cause of
> one of the problems we are fighting (scsi reservation errors).
> I'm not sure if the values above are good/bad/ugly, or what values should
> be used.  I'm not finding many specific recommendations.
>
> Any suggestions would be greatly appreciated!
>
> Rick

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Can you restore a SystemState backup when you have the HP OpenView problem?

2011-10-27 Thread Lindsay Morris
Hi, all.
I have a customer running the 6.2.2.2 client on Windows boxes that have HP
OpenView installed.
(See APAR IC72446: http://www-01.ibm.com/support/docview.wss?uid=swg1IC72446.)

As the APAR says, they see a lot of dsmsched.log messages like this:
  ANS1417W Protected system state file filename is backed up to the drive
file space, not system state file space
And they could ignore those.  But then the backup says it fails with RC=12.

On the other hand, query filespace shows the SystemState filespace as backed
up successfully at that time.

So they don't know whether to believe the RC=12 failure or the last-backup
date success indicator.
And they can't easily find a test case.
So, has anybody tested a SystemState restore after a similar backup failure?
Or can Andy Raibeck say with authority that the SystemState restore will, by
golly, work regardless of the RC=12?
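
Short of an authoritative answer, it seems the only safe route is to see what
the server holds and then prove it on a throwaway box; a sketch (standard BA
client commands, restore to a sacrificial test machine only):

   dsmc query systemstate      (list the system state components backed up)
   dsmc restore systemstate    (full test restore on the scratch machine)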

Thanks for any wisdom.

-- Lindsay Morris
TSMworks, Inc.
lind...@tsmworks.com
1-859-539-9900
