Hi All
TSM 5.4.1.2 on Solaris 2.9
Backing up Exchange (MS Windows Server 2003, RC2 SP) - 5.5.0.0 in a clustered
environment.
We had this set up by an outside company and are now seeing problems:
the TSM Cluster Scheduler is automatically failing the cluster over due to a
password issue.
Thanks for the info guys. SANDISCOVERY is turned ON at the moment. I
have turned it OFF now - just waiting for the go-ahead to reboot the
server.
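For reference, SANDISCOVERY is a one-line option; assuming the server-side setting is meant here (a server restart is pending in this thread), a minimal fragment might look like:

```
* dsmserv.opt - disable automatic SAN device discovery
SANDISCOVERY OFF
```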
Regards,
Jacques van den Berg
TSM / Storage / SAP Basis Administrator
Pick 'n Pay IT
Email : [EMAIL PROTECTED]
Tel : +2721 - 658 1711
Hi,
First, de-select "Affect the group" in the cluster resource properties for
the TSM Cluster Scheduler in Cluster Administrator (cluadmin.exe).
This prevents the whole cluster from failing over when the TSM service stops
for whatever reason, e.g. a bad password.
Verify that the correct registry parameter is
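The same change can be scripted with the Server 2003 cluster CLI; a sketch, where the resource name is an example and RestartAction=1 means "restart the resource, don't affect the group":

```
REM Uncheck "Affect the group" from the command line (resource name is an example)
cluster res "TSM Cluster Scheduler" /prop RestartAction=1
REM Verify the property took effect
cluster res "TSM Cluster Scheduler" /prop
```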
Hi Henrik, many thanks for that, very helpful.
The reason I'm confused about the 'cluster scheduler' is that we also have a
'TSM Exchange TDP Scheduler' running, and I assumed it was this that was
responsible for the backups?
Why the need for the two separate schedulers?
Thanks again.
Farren
Farren,
You may be affected by a faulty configuration, where the password in the MS
cluster's checkpoint file is no longer synchronized with the local registry
password, thus making the service fail. It has happened several
times in our shop already!
IBM noticed the problem, and published a
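For the password mismatch itself, the usual remedy is to update the stored scheduler password on each cluster node so the checkpoint and registry agree again; a hedged sketch with the Windows client's dsmcutil, where the service name, node name, and password are placeholders:

```
REM run on each node that can own the resource (all names are examples)
dsmcutil updatepw /name:"TSM Cluster Scheduler" /node:CLUSTERNODE /password:newpass /validate:yes
```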
Thanks Arnaud
If resetting the passwords fails I will move on to that.
Regards
Farren
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of PAC Brion
Arnaud
Sent: 01 July 2008 11:15
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Cluster
Hi again Farren !
Aha ok, two nodenames?
My best guess is that TSM Cluster Scheduler is only backing up the OS.
And TSM Exchange TDP Scheduler is responsible for Exchange backups.
What I do is use two TSM nodes and two scheduler services on Exchange
servers, like you seem to have.
But I only
Hello
Ah OK, that makes sense.
I just checked and the TSM CHI-MB Cluster RG Scheduler points to a
standard-looking dsm.opt file, although we don't do normal backups on these
servers anyway (yes).
The TSM Exchange TDP Cluster RG Scheduler points to
\Tivoli\TSM\TDPExchange\dsm.opt that
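For illustration, a minimal TDP-side option file of the sort that scheduler would point at; every value here is an example, not taken from the thread:

```
* \Tivoli\TSM\TDPExchange\dsm.opt (illustrative values only)
NODENAME          CHI-MB-EXCH
CLUSTERNODE       YES
COMMMETHOD        TCPIP
TCPSERVERADDRESS  tsm.example.com
```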
Otto,
After you export using the server-to-server method, verify that all data
has been successfully imported to the target TSM server. Then delete the node
and all of its data on the source TSM server. The volumes holding the deleted
data will have free space. The volumes can then be
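The sequence above, as TSM administrative commands; a sketch, with node and server names as placeholders:

```
export node RETIRED_NODE filedata=all toserver=TARGET_SERVER
/* on the target server, confirm the data arrived */
query occupancy RETIRED_NODE
/* back on the source, remove the node's data, then the node */
delete filespace RETIRED_NODE *
remove node RETIRED_NODE
```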
Hi all-
For quite some time now, I have been trying to track down an elusive
bottleneck in my TSM environment relating to disk-to-tape performance.
This is a long read, but I would be very grateful for any suggestions.
Hopefully some of you folks much smarter than me out there will be able
to
If you think it's the SVC, why not try taking TSM out of the picture:
If you use OS tools to COPY a big chunk of data (say a 20 GB file) from one
spot behind the SVC to the other, and time it, what is your MB/sec rate?
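A sketch of that measurement as a small shell helper, taking TSM out of the picture entirely; the paths and the 20 GB file size are assumptions from the thread, and any large file on a vdisk behind the SVC will do:

```shell
# Time an OS-level copy between two spots behind the SVC and report MB/s.
measure_copy() {
  src=$1
  dst=$2
  start=$(date +%s)
  cp "$src" "$dst"
  end=$(date +%s)
  elapsed=$(( end - start ))
  if [ "$elapsed" -eq 0 ]; then elapsed=1; fi   # guard for very small test files
  size_mb=$(( $(wc -c < "$src") / 1048576 ))
  echo "$(( size_mb / elapsed )) MB/s"
}

# Example (paths are placeholders):
# measure_copy /svc_vdisk1/20gb.dat /svc_vdisk2/20gb.copy
```

Wall-clock timing over a file this large washes out cache effects and gives a number directly comparable to TSM's migration throughput.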
On 7/1/08, Thach, Kevin G [EMAIL PROTECTED] wrote:
Hi all-
For quite
A bit of a survey for those of you that happen to use the OpenVMS ABC
client. I've recently found the /summary option, but the format in the
act log is quite different from the normal job reports, and our reporting
software doesn't catch it. We'll have to modify it, but I was just
wondering...
How are your tape drives attached to your TSM HBAs? Presumably by SAN switch,
so how do you have the drives zoned? Ideally, every drive should be visible on
every fiber and alternate path support should be enabled (chdev -l rmtx -a
alt_pathing=yes) (do NOT do for the SMC if you do not have path
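On AIX that can be applied per drive in a loop; a sketch, with the rmt device names as examples (and, per the caveat above, skipping the SMC unless changer multi-pathing is confirmed):

```
# enable alternate pathing on each tape drive device (names are examples)
for d in rmt0 rmt1 rmt2 rmt3; do
  chdev -l "$d" -a alt_pathing=yes
done
```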
I am set up very similar to you. My TSM LPAR HBAS connect to a director
class switch which has an ISL to each of the edge switches that the tape
drives themselves connect to (odd drives on one and even on the other
like yourself.)
Therefore, I have 64 rmt devices at the AIX level for my LTO3
Two items, then.
Alternate pathing may help. Also, what is the available bandwidth of the ISL to
the edge switches? For your system, it should be at least 6 Gb; 8 would be
marginally better (three paired ports at 2 Gb/port, or two paired ports at 4
Gb/port).
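The arithmetic behind those figures, as a quick check; the port counts are the ones assumed in the text (three paired ports at 2 Gb/port, or two paired ports at 4 Gb/port):

```shell
# Aggregate ISL bandwidth for the two options quoted above.
isl_option_a=$(( 3 * 2 ))   # three paired ports at 2 Gb/port
isl_option_b=$(( 2 * 4 ))   # two paired ports at 4 Gb/port
echo "option A: ${isl_option_a} Gb, option B: ${isl_option_b} Gb"
```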
-Original Message-
From:
I have a number of retired systems that still have data archived on one
of my TSM servers.
I'm exporting the data to another TSM server.
Is there an easy way to find what mgmt class the data was originally
retained as ?
Single migration process of compressed data from DS4200 to LTO4: ~300 GB
per hour. 4 Gb fabric, no ISL.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kauffman, Tom
Sent: Tuesday, July 01, 2008 9:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Please
There are two 2 Gb ISLs going to each switch, for a total bandwidth of
4 Gb to each edge. Our SAN monitoring tool (EFCM) doesn't show that
we're maxing out the ISLs, but I can easily add one to see what
happens.
I'll also try the alternate pathing ASAP.
Thanks for the suggestions!
-Original
Folks,
When I saw this post, I asked our engineers if it would be possible to
more nearly approximate the other messages from our client.
Unfortunately, we're a V 3.1 API client and as such have very limited
support for messages. We can't get them there with what we have.
I am equally
On Jul 1, 2008, at 12:53 PM, Ochs, Duane wrote:
I have a number of retired systems that still have data archived on
one
of my TSM servers.
I'm exporting the data to another TSM server.
Is there an easy way to find what mgmt class the data was originally
retained as ?
Perform a Select on the
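A hedged example of such a select against the server's ARCHIVES table, with the node name as a placeholder:

```
select distinct filespace_name, class_name from archives where node_name='RETIRED_NODE'
```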