We just brought up new pSeries chassis to roll over our existing pSeries
chassis. The new LPARs will be AIX v7. The plan is to get up to TSM v7, then
swing the storage to the new LPARs on the new chassis. Of course, testing all
this swinging is yet to occur. We've gotten stuck just on the TSM v6-to-v7 upgrade.
Yes, they are all AIX 6100-09.
We have DR LPARs for each PROD LPAR.
When we upgrade, we test by bringing up the appropriate TSM instance on its DR
server, performing the upgrade there as the test, and leaving the DR AIX at the
new TSM version; then we upgrade the PROD LPAR/TSM. We've tested two TSM
instances on their DR servers so far.
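To confirm each DR test landed at the level we expected, a quick check of the reported server level helps. A minimal sketch (the function name is ours, and the credential handling is a placeholder); the canned banner line stands in for live dsmadmc output:

```shell
#!/bin/sh
# Extract a dotted version from the "query status" banner line, e.g.
# "Server Version 7, Release 1, Level 3.0" -> 7.1.3.0.
# Against a live instance you would pipe in something like:
#   dsmadmc -id=admin -password=XXX -dataonly=yes "query status"
tsm_version() {
    sed -n 's/.*Server Version \([0-9][0-9]*\), Release \([0-9][0-9]*\), Level \([0-9.]*\).*/\1.\2.\3/p'
}

# Canned example instead of a live server:
echo "Server Version 7, Release 1, Level 3.0" | tsm_version
```

Running the same one-liner before and after the DR upgrade makes it easy to log the hop in the change record.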
FYI,
When I upgraded on AIX, I went from 6.3.4.0 to 7.1.3.0 and then to
7.1.5.100 with no issues.
Date (GMT)            Version    Pre-release    Driver
2016/06/01 18:29:52   7.1.3.0
2016/06/07 17:11:31
Since I need to upgrade six TSM servers (two of them at the same time, since
they are library managers) to 7.1.5.200, I am closely following this
thread/discussion about upgrade issues.
Have all these woes been only on AIX systems?
I tested on an old decommissioned server (all user data had been exported)
We've been testing . . . we tried walking through all the upgrades:
- brought up a 6.3.5 TSM DB
- tried upgrading to 7.1.4; it failed/hung (yup, it responds like all our tests)
- restored snapshots back to the 6.3.5 setup
- tried upgrading to v7.1.0: WORKED, TSM comes up fine
- tried upgrading to v7.1.1.1 -
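Walking several instances through hops like these invites a bit of scripting. One small piece that helps is a dotted-version compare, so a wrapper can decide whether an instance still needs a given hop. A sketch, assuming plain POSIX sh (the function name is ours, not TSM's):

```shell
#!/bin/sh
# Return 0 (true) if version $1 is strictly lower than $2,
# comparing the dotted fields numerically, e.g. 6.3.5 < 7.1.0.
version_lt() {
    [ "$1" = "$2" ] && return 1
    lowest=$(printf '%s\n%s\n' "$1" "$2" |
             sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1)
    [ "$lowest" = "$1" ]
}

# Example: decide whether an instance still needs the first hop.
version_lt 6.3.5 7.1.0 && echo "6.3.5 still needs the hop to 7.1.0"
```

A wrapper built on this could skip instances already walked on their DR servers and only touch the ones still at the old level.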
Hi guys!
I would like to use the tsmdiskperf.pl script, which is part of the Blueprints.
However, we install TSM on a cluster, so our file systems are all Veritas
file systems, whereas the Blueprint assumes the standard Linux driver. So, to
trick the script, I would like to create a small bash script
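One way such a wrapper could look: gather the VxFS mount points yourself and hand them to tsmdiskperf.pl as its file-system list, so the script never has to probe device types on its own. The function name is ours, and the workload=/fslist= argument style should be verified against the usage text in your copy of the Blueprint script:

```shell
#!/bin/bash
# Hedged sketch, not part of the Blueprint itself.

# Read `mount`-style lines on stdin and print the mount points whose
# file-system type is vxfs (field 5 on Linux: "dev on mnt type fstype ...").
vxfs_mounts() {
    awk '$5 == "vxfs" { print $3 }'
}

# Join the mount points into a comma-separated list.
fslist=$(mount | vxfs_mounts | paste -sd, -)

# Dry run first; swap the echo for the real invocation once the list
# looks right for your cluster.
echo "perl tsmdiskperf.pl workload=stgpool fslist=$fslist"
```

Keeping the discovery step separate also makes it easy to sanity-check the list on each cluster node before any benchmark writes hit the Veritas volumes.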