result code ?

2002-04-29 Thread Stumpf, Joachim

Hi all,

since we upgraded our TSM server from 4.1.3 to 4.1.5 on OS/390 2.10, we get result
code "?" from the clients in some cases (clients are 4.1, 4.2 and 5.0), but I don't
know why this happens.
Does anybody know where this code comes from and why it does not have a known number?

thanks in advance...

-- 
regards / Mit freundlichen Gruessen
Joachim Stumpf
Datev eG
Nuremberg - Germany
  



Re: BACKUPSETS on TSM OS/390

2002-04-29 Thread John Naylor

Zoltan,
The answer to your first query is yes, you can create backupsets with a FILE
device class and let SMS manage them,
including DFHSM migrate to tape if you want.
I am not sure what you are trying to achieve in your second question, and have not
done this myself, but I believe:
1) The client would definitely have to be the same OS as the original backupset client.
2) Restorability would depend on what local devices are allowed for that particular
client.
John








Zoltan Forray/AC/VCU [EMAIL PROTECTED] on 04/26/2002 08:27:53 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  BACKUPSETS on TSM OS/390



Has anyone used BACKUPSETS on the OS/390 platform ?

Can the output be a disk file (i.e. can I create a DEVICE CLASS of FILE
and let SMS manage the files being created ?)

Once I create the BACKUPSET file, can I say, FTP it to another platform,
i.e. my PC ? so that anyone could restore from it, using the client ?








**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission.

Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**



Reply: Re: Reply: Re: Reply: ANR0534W - size estimate exceeded

2002-04-29 Thread Gerhard Wolkerstorfer

Zlatko,

yes, this is a TDP problem...
According to your suggestion - how can you bind dbspaces or logical logs to a
specific management class?
I know - with the include parameter in the DSM.OPT - but how do you know the
filename so that you can do it?

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 15:10:56

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc: (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: Re: Reply: ANR0534W - size estimate exceeded



Gerhard,

you are also right, but ... :-) this actually is a TDP problem.
Usually TDPs send large files. When we are talking about TDP for Informix,
we have two types of files - dbspace backups and logical logs. The former are
huge while the latter are very small. If you bind the large files to a class
that goes direct to tape and let the logs go to disk, everything should be fine.

Zlatko Krastev
IT Consultant



Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Reply: Re: Reply: ANR0534W - size estimate exceeded

Zlatko,
you are right, BUT... when the TDP sends an incorrect file size to the TSM
server, the MAXSIZE parameter won't work.
(TDP sends 100 bytes - the server will let the file go to the disk pool, but
the file will in fact be 20 GB, which will fill up your disk pool and bring up
the message indicated (storage exceeded).)

And for tracing purposes I wanted to know if there is any possibility to check
the file size which the client is sending to the server.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 11:34:22

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc: (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: ANR0534W - size estimate exceeded



Isabel, Gerhard,

you can set the MAXSIze parameter of the disk pool. I usually set it to about
30-60% of the disk pool size (or better, the pool's free size, i.e. size -
highmig). Files larger than this will bypass the disk pool and go down the
hierarchy (next stgpool). That might be a tape pool, i.e. the file will go
direct to tape.
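As a concrete sketch of this suggestion (the pool name and value here are assumptions, not taken from the thread), the server commands would look something like:

   update stgpool DISKPOOL maxsize=2G
   query stgpool DISKPOOL format=detailed

With that in place, any file the client estimates at over 2 GB skips DISKPOOL and goes to its next storage pool.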

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Reply: ANR0534W - size estimate exceeded

Isabel,
we still have this problem with TDP for Informix.
It seems that the TDP (sometimes?) isn't sending the correct file size, and the
file (DB backup) exceeds your DISKPOOL and cannot swap to the tape pool.
If the TDP sent the correct file size, TSM would possibly go direct to tape
and the problem wouldn't occur.

Question: How can I check the file size and/or filename the client is sending
to the server?

Best regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Isabel Gerhardt) on 26.04.2002 09:17:44

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc: (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: ANR0534W - size estimate exceeded



Hi list members,

we recently started to receive the following errors:

04/23/02 20:30:29 ANR0534W Transaction failed for session 1 for node
                  NODE1 (WinNT) - size estimate exceeded and server is
                  unable to obtain additional space in storage pool DISKPOOL.
04/24/02 20:38:19 ANR0534W Transaction failed for session 173 for node
                  NODE2 (TDP Infmx AIX42) - size estimate exceeded and
                  server is unable to obtain additional space in storage
                  pool DISKPOOL.

From previous messages on the list I checked that the disk pool has
caching disabled and the clients have no compression allowed.

I was away from work for a while, and in the meantime a server update was
done.
If anyone can point me to the source of this error, please help!

Thanks in advance,
Isabel Gerhardt

Server:
Storage Management Server for AIX-RS/6000 - Version 4, Release 1, Level
5.0
AIX 4.3

Node1:
 PLATFORM_NAME: WinNT
   CLIENT_OS_LEVEL: 5.00
CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 1
   CLIENT_SUBLEVEL: 20

Node2:
 PLATFORM_NAME: TDP Infmx AIX42
   CLIENT_OS_LEVEL: 4.3
   CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 0
   CLIENT_SUBLEVEL: 0



Re: Network Tuning

2002-04-29 Thread Zlatko Krastev

TSM does not have settings for traffic shaping. But knowing which ports
are used by TSM (usually 1500 for data transfers), you can configure your
routers accordingly in a WAN environment. In a LAN environment I expect you
would like to have maximum performance for backups and either use a SAN or
set up a dedicated backup LAN segment, as Miles pointed out.

Zlatko Krastev
IT Consultant



Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Network Tuning

Hi all

I am fairly new to TSM and I am not sure how network tuning is to be done
in the TSM 4.2.1 environment. My current problem is that I want to be able
to ensure that TSM does not use any more than, say, 30 percent of the total
bandwidth. Could anyone give me some help, or an idea of where to look
through the manuals, or what settings need to be changed?

thanks in advance

Paul



Re: Backing up clients from DMZ on TSM server inside the firewall

2002-04-29 Thread Zlatko Krastev

Look at the post I made last month:
http://msgs.adsm.org/cgi-bin/get/adsm0203/1294.html
The official Tivoli document is called TSM for Windows Using the
Backup-Archive Client. That is where I got the info from.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: Backing up clients from DMZ on TSM server inside the firewall

NAT the TSM server address so that it appears to be in the DMZ.

That way if you need to change the layout of the LAN outside of the DMZ,
you don't have as many firewall rules to change.

Has anyone seen a document that describes exactly what ports the TSM client
needs to use for a backup session? Using tcpdump to figure out what we need
open seems kind of backwards.

Thanks, [RC]

Robert Clark
 The Regence Group
Storage Administrator
  503-220-4743



From: Makkar, Jas JMakkar@ADT.COM
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/23/2002 10:59 AM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: Backing up clients from DMZ on TSM server inside the firewall
We are trying to develop an approach to back up the
clients that are in the DMZ via a TSM server sitting
inside the firewall.  Please comment on the following
strategy:


To back up the clients in the DMZ from a TSM library
located within the intranet, install the TSM client on
the client in the DMZ and open a port in the firewall.
Additionally, use data encryption. To do this, you
would use the include.encrypt and exclude.encrypt
options in your options file. The encryption key can
either be stored locally on your machine or prompted
for each time a backup or restore is attempted. This
is set with the encryptkey option in your options
file.

TSM clients in the DMZ should not be allowed to do any
administrative function.  You can only prevent the
client from deleting backups and archives. This can be
done by running (on the TSM server): update node
nodename archdelete=no backdelete=no
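A minimal client options sketch of the encryption settings described above (server address and the include/exclude paths are hypothetical examples):

   * dsm.opt fragment
   TCPSERVERADDRESS  tsmserver.example.com
   TCPPORT           1500
   ENCRYPTKEY        prompt
   INCLUDE.ENCRYPT   C:\appdata\*
   EXCLUDE.ENCRYPT   C:\temp\*

With encryptkey prompt, the key is requested at each backup or restore rather than stored on the DMZ machine.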

Note:  You could also set passwordaccess=prompt in the
client options file to require a password before a
client could perform any actions.  Not recommended,
though.   Additionally, since the TSM server address
is required in the client options file, you can't hide
information about the TSM server in case of a security
breach.

ANY BETTER IDEA is appreciated, as well as any red
flags in the strategy.

Thanks in Advance.
Jas
[EMAIL PROTECTED]



===
IMPORTANT NOTICE: This communication, including any attachment, contains
information that may be confidential or privileged, and is intended solely
for the entity or individual to whom it is addressed.  If you are not the
intended recipient, you should delete this message and are hereby notified
that any disclosure, copying, or distribution of this message is strictly
prohibited.  Nothing in this email, including any attachment, is intended
to be a legally binding signature.



please advise best procedure..export/import

2002-04-29 Thread chris rees

Hi

Could you tell me if there is a better way to achieve the following?
Basically we had a period where archive data was going to the wrong storage
pools. The archive copy group setting has now been fixed, but I have archive
data that I want to move to the archive storage pools. To get the archive
data into the correct storage pools I am planning to do the following:

1) export node A filedata=archive devclass=3590class
2) delete filespace A * type=archive
3) import node A domain=samsp filedata=archive devclass=3590class
dates=absolute vol=volume names

I presume the import will look at the copygroup settings and put the archive
data into the correct storage pool.

Any hints/tips greatly appreciated

Chris



_
Join the world's largest e-mail service with MSN Hotmail.
http://www.hotmail.com



Device_Mountlimit_VTS

2002-04-29 Thread Schaub Joachim Paul ABX-PROD-ZH

Dear *SM Gurus

Our VTS has 64 logical drives; the mount limit in this device class is set to
38 in the TSM server. Last week I saw in the Mainview monitor a usage of 50
logical drives by TSM! Is it possible to use more mount points than are
defined by the mountlimit?

Env: TSM Server 4.2.1.9 OS/390

Thanks in advance

Joachim   



Joachim Paul Schaub
Abraxas Informatik AG
Beckenhofstrasse 23
CH-8090 Zürich
Schweiz / Switzerland

Telefon: +41 (01) 259 34 41
Telefax: +41 (01) 259 42 82
E-Mail: mailto:[EMAIL PROTECTED]
Internet: http://www.abraxas.ch




Help needed

2002-04-29 Thread Wieslaw Markowiak/Kra/ComputerLand/PL

hi,
I'm looking for a manual on TSM scripting - can you help me?



archive or incremental backup type

2002-04-29 Thread George Harding

**
Entertainment UK Limited
Registered Office: 243 Blyth Road, Hayes, Middlesex UB3 1DN.
Registered in England Numbered 409775

This e-mail is only intended for the person(s) to whom it is addressed and may contain 
confidential information.  Unless stated to the contrary, any
opinions or comments are personal to the writer and do not represent the official view 
of the company.  If you have received this e-mail in error,
please notify us immediately by reply e-mail and then delete this message from your 
system.  Please do not copy it or use it for any purposes, or
disclose its contents to any other person.  Thank you for your co-operation.
**

I would like to get some advice on the advantages / disadvantages of
archive versus incremental backup types.
The files I am backing up are database files so in general are large and
need to be restorable to point in time for consistency.

Thanks



Re: Network Tuning

2002-04-29 Thread robert

Hi,

The easiest way to check your performance is to do an FTP from your
node to your TSM server and check the throughput. You should be able
to get almost the same throughput with TSM as with FTP.

There are several performance tuning issues, you can have a look in a
redbook : Getting Started with Tivoli Storage Manager: Implementation
Guide, Chapter 13.   Performance considerations -
http://www.redbooks.ibm.com/redbooks/SG245416.html

Try changing some settings ... and backup a large file from the node.

Regards,

Robert Brilmayer.


PS: on the client node, the RESOURCEUTILIZATION option lets you regulate the
level of resources the TSM server and client can use during processing
(default = 2: one data stream and one communication stream).
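A hedged dsm.opt sketch of the option mentioned here, alongside two TCP tuning options discussed in the Redbook's performance chapter (values are illustrative only, not recommendations):

   * dsm.opt fragment
   RESOURCEUTILIZATION  4
   TCPBUFFSIZE          32
   TCPWINDOWSIZE        63

A higher RESOURCEUTILIZATION allows the client to open more parallel sessions with the server.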


 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc:

 Subject:Network Tuning

 Hi all

 I am fairly new to TSM and I am not sure how network tuning is to be done
 in the TSM 4.2.1 environment. My current problem is that I want to be able
 to ensure that TSM does not use any more than, say, 30 percent of the total
 bandwidth. Is there anyone that could give me some help or an idea of where
 to look through the manuals or what settings need to be changed.

 thanks in advance

 Paul





Re: please advise best procedure..export/import

2002-04-29 Thread Daniel Sparrman

Hi

Have you considered using migration to move the data, or does the storage pool
contain data that you don't want to move? Otherwise, you could set your
new storage pool as the next storage pool for the old one, then lower
the hi/lo migration thresholds to 0/0, and the migration process would
automatically move the data to the new storage pool.

If you are using collocation, you could also use move data volname to
move the data from one storage pool to another.

With TSM 5.1 you have the move nodedata command, which makes it
possible to move the archive data for particular nodes to another storage
pool.
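The migration approach could be sketched with server commands along these lines (pool names are assumptions):

   update stgpool OLDARCHIVEPOOL nextstgpool=NEWARCHIVEPOOL
   update stgpool OLDARCHIVEPOOL highmig=0 lowmig=0
   query process

Once migration has drained the old pool, raise highmig/lowmig back to their normal values.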

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




chris rees [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2002-04-29 14:38
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:please advise best procedure..export/import


Hi

Could you tell me if there is a better way to achieve the following.
Basically we had a period where archive data was going to the wrong 
storage
pools. The archive copygroup setting has now been fixed but I have archive
data that I want to move to the archive storage pools. To get the archive
data into the correct storage pools I am planning to do the following

1) export node A filedata=archive devclass=3590class
2) delete filespace A * type=archive
3) import node A domain=samsp filedata=archive devclass=3590class
dates=absolute vol=volume names

I presume the import will look at the copygroup settings and put the 
archive
data into the correct storage pool.

Any hints/tips greatly appreciated

Chris






Re: archive or incremental backup type

2002-04-29 Thread Daniel Sparrman

Hi

Incremental = minimizes your backup window, as only changed files are
moved. You could also activate subfile backup, which means that only
the changed part of a file is backed up. Incremental works best with
smaller files, as large files require that the complete file be backed
up. Subfile backup would perhaps also work well with large files, as only
the changed part of the file is backed up. However, a database normally
doesn't behave like an ordinary large file, as there are too many changes
in the file.

Archive = best for storing files for a specific number of
days/months/years. However, archiving is like doing full backups all the
time, which costs backup time.

Normally for databases you use a TDP to minimize the time required for
backup. Different types of TDPs have different ways of backing up:
differential, incremental, log archiving, full backups and so on. But if
you do hot backups, it's recommended to use TDPs, as a file backup client
doesn't work 100% (some files may be locked by the application
during the backup. This can be solved by using the dynamic serialization
setting, but that doesn't automatically mean 100% consistency when trying
to restore).

It would be easier to make a recommendation if you told us what kind of
application you are using. For some applications
incremental/archiving works great; for some it's a disaster.

If you still insist on using the file backup/archive client, I'd recommend
cold backups using archive. This could be done on perhaps a weekly
basis.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




George Harding [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2002-04-29 10:51
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:archive or incremental backup type



I would like to get some advice on the advantages / disadvantages of
archive versus incremental backup types.
The files I am backing up are database files so in general are large and
need to be restorable to point in time for consistency.

Thanks



Re: Device_Mountlimit_VTS

2002-04-29 Thread Bill Boyer

I've seen TSM use more than the MOUNTLIMIT when high priority tasks need to
be performed. But I question your use of a VTS for TSM. There was just a
long discussion on this a couple weeks back. Applications that use the
entire media (DISP=MOD) like TSM and DFSMShsm are not really good candidates
for a VTS. Check out the archives to review the thread
(http://www.adsm.org). When TSM wants to add on to an existing storage pool
volume, the existing data must be transferred back into cache in the VTS
before it can be appended. Then the 'new' volume has to be staged back to
real 3590 tape. The original location on real 3590 is now unavailable and
needs to be reclaimed. By doing this a lot, you are forcing the VTS to do a
lot of reclamation tasks. Plus the mount wait time to stage the data is
holding you up. Unless you write to a volume and mark it as read-only so TSM
won't try to append to it again.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Schaub Joachim Paul ABX-PROD-ZH
Sent: Monday, April 29, 2002 4:06 AM
To: [EMAIL PROTECTED]
Subject: Device_Mountlimit_VTS


Dear *SM Gurus

Our VTS has 64 logical drives; the mount limit in this device class is set to
38 in the TSM server. Last week I saw in the Mainview monitor a usage of 50
logical drives by TSM! Is it possible to use more mount points than are
defined by the mountlimit?

Env: TSM Server 4.2.1.9 OS/390

Thanks in advance

Joachim



Joachim Paul Schaub
Abraxas Informatik AG
Beckenhofstrasse 23
CH-8090 Zürich
Schweiz / Switzerland

Telefon: +41 (01) 259 34 41
Telefax: +41 (01) 259 42 82
E-Mail: mailto:[EMAIL PROTECTED]
Internet: http://www.abraxas.ch




Re: BACKUPSETS on TSM OS/390

2002-04-29 Thread Zoltan Forray/AC/VCU

This is what I want to do:

1.  Create the BACKUPSET on the OS/390 server to a flat file
2.  FTP the file (binary) to another box/pc
3.  Restore the files from the BACKUPSET to the pc the file was FTPed to.

How close/similar do the filesystems have to be ?  For instance, can I
restore a Novell server backup files to another non-Novell box ?





John Naylor [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/29/2002 05:33 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: BACKUPSETS on TSM OS/390


Zoltan,
The answer to your first query is yes you can create backupsets with a
file
device class and let sms manage them,
including dfhsm migrate to tape if you want.
Not sure what you are trying to achieve  in your second question, and have
not
done this myself, but I believe
1) The client would definitely have to be same os as original backupset
client
2) Restorability would depend on what local devices allowed for that
particular
client.
John








Zoltan Forray/AC/VCU [EMAIL PROTECTED] on 04/26/2002 08:27:53 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  BACKUPSETS on TSM OS/390



Has anyone used BACKUPSETS on the OS/390 platform ?

Can the output be a disk file (i.e. can I create a DEVICE CLASS of FILE
and let SMS manage the files being created ?)

Once I create the BACKUPSET file, can I say, FTP it to another platform,
i.e. my PC ? so that anyone could restore from it, using the client ?











Re: TDP performance!!!!!

2002-04-29 Thread Bill Boyer

We saw a problem with this at our last disaster recovery exercise. The
switch ports were set to 100/full and the NIC card (Win2K) was set to
100/full, but the restore throughput was terrible. After the network people
looked into it, it turned out the switch ports were reporting 100/half. When
they updated the NIC drivers to the latest release from the vendor, the
switch then reported 100/full. I've seen NIC drivers cause throughput issues
on numerous occasions. You should verify that the drivers are the latest, or
the latest GOOD version.

The network people said that even though you set the card and switch port to
100/full, that doesn't mean you'll get it. There is some handshaking that goes
on between the card and the port, and if that doesn't happen, the switch will
downgrade and try again. Kind of like a modem connecting: it will drop speed
and protocol until a good transmission is obtained.

Bill Boyer
DSS, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Regelin Michael (CHA)
Sent: Friday, April 26, 2002 5:58 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP performance!


Hi,

Typical of this kind of problem: when installing a switch or a
router, sometimes there is a mismatch between the full-duplex and
half-duplex configuration.

Make sure that on your switch (port level) and on your server (NIC level)
you have the same parameters (of course full-duplex is best if you have
100 Mb/s).

Mike



___
 Michael REGELIN
 Ingenieur Informatique - O.S.O.
 Groupware  Messagerie
 Centre des Technologies de l'Information (CTI)
 Route des Acacias 82
 CP149 - 1211 Geneve 8
 Tel. + 41 22 3274322  -  Fax + 41 22 3275499
 Mailto:[EMAIL PROTECTED]
 http://intraCTI.etat-ge.ch/services_complementaires/messagerie.html
 __



-Original Message-
From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
Sent: Friday, 26 April 2002 11:49
To: [EMAIL PROTECTED]
Subject: Re: TDP performance!


Why are you restarting the AIX box? This is Windows behavior - works after
a restart. Did you restart Windows too?
Have you tested network throughput? Try FTPing from Windows to AIX (and
back). What is the transfer rate for a file larger than 100-200 MB?
Have you tested B/A client throughput? Try a backup/restore of an ordinary
file of 0.5-1 GB. You can make a new local replica of one big Domino DB
outside the drive:\lotus\notes\data directory and back it up using the GUI.
Is the backup going to disk or direct to tape? Is migration starting during
the backup to disk?
What is the Windows box's processor/memory utilization during backup? Is
memory overcommitted and paging extensively used?
Is there other high-volume activity on the same partition/disk?
Answers to some of these questions might help you to pinpoint the problem.
If there is still no success, report to the list again.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:TDP performance!

Hi everybody,

I'm having BIG performance problems using TDP for
Lotus Domino...

I am backing up the mail databases using TDP for Lotus
Domino. My configuration is the following:

-TSM Server 4.2.1 on a AIX 4.3.3
-TSM client 4.2.1 and TDP for Lotus Domino 1.1 on the
mail server that is a WIN2000

The company wants to perform a full backup of the mail
database directly to tape every day. The total size
is approximately 8 GB. The first time I did the backup,
it took 1 hour to complete. But since then, I'm
getting a throughput rate as low as 50 KB/sec,
which means that it would need at least 48 hours to
complete the backup.

I verified both the AIX and Windows servers' connection
speeds, and both are 100 Mb/sec. I then tried a
backup to tape using the AIX 'tar' command and
got a very good rate. Finally, I restarted the AIX box,
but I'm still getting this very low rate.

Can you offer me any advice?

Thx a lot
Sandra

__
Do You Yahoo!?
Yahoo! Games - play chess, backgammon, pool and more
http://games.yahoo.com/



Reply: Device_Mountlimit_VTS

2002-04-29 Thread Schaub Joachim Paul ABX-PROD-ZH

Thank you Bill,
we know about the TSM-VTS constellation and are now on the right evaluation
path (native 3590 drives for TSM?)

with kind regards

Joachim

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Monday, 29 April 2002 15:52
To: [EMAIL PROTECTED]
Subject: Re: Device_Mountlimit_VTS


I've seen TSM use more than the MOUNTLIMIT when high priority tasks need to
be performed. But I question you're use of a VTS for TSM? There was just a
long discussion on this a couple weeks back. Applications that use the
entire media (DISP=MOD) like TSM and DFSMShsm are not really good candidates
for a VTS. Check out the archives to review the thread.
(http://www.adsm.org) When TSM wants to add on to an existing storage pool
volume, the existing data must be transferred back into cache in the VTS
before it can be appended. Then the 'new' volume has to be staged back to
real 3590 tape. The original location on real 3590 is now unavailable and
needs to be reclaimed. By doing this a lot, you are forcing the VTS to do a
lot of reclamation tasks. Plus the mount wait time to stage the data is
holding you up. Unles you write to a volume and mark it as read-only so TSM
won't try to append to it again.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Schaub Joachim Paul ABX-PROD-ZH
Sent: Monday, April 29, 2002 4:06 AM
To: [EMAIL PROTECTED]
Subject: Device_Mountlimit_VTS


Dear *SM Gurus

Our VTS has 64 logical drives; the mount limit in this device class is set to
38 in the TSM server. Last week I saw in the Mainview monitor a usage of 50
logical drives by TSM! Is it possible to use more mount points than are
defined by the mountlimit?

Env: TSM Server 4.2.1.9 OS/390

Thanks in advance

Joachim



Joachim Paul Schaub
Abraxas Informatik AG
Beckenhofstrasse 23
CH-8090 Zürich
Schweiz / Switzerland

Telefon: +41 (01) 259 34 41
Telefax: +41 (01) 259 42 82
E-Mail: mailto:[EMAIL PROTECTED]
Internet: http://www.abraxas.ch




Re: BACKUPSETS on TSM OS/390

2002-04-29 Thread John Naylor

Zoltan,
As far as I am aware there is no local-device client backupset restore for
Novell, so you are stuck with a standard backupset restore across the network.
I believe that backupset restore is operating-system dependent, i.e. a Novell
client backupset can only be restored to a Novell client.
John





Zoltan Forray/AC/VCU [EMAIL PROTECTED] on 04/29/2002 03:00:06 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  Re: BACKUPSETS on TSM OS/390



This is what I want to do:

1.  Create the BACKUPSET on the OS/390 server to a flat file
2.  FTP the file (binary) to another box/pc
3.  Restore the files from the BACKUPSET to the pc the file was FTPed to.

How close/similar do the filesystems have to be? For instance, can I
restore a Novell server's backup files to a non-Novell box?





John Naylor [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/29/2002 05:33 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: BACKUPSETS on TSM OS/390


Zoltan,
The answer to your first query is yes you can create backupsets with a
file
device class and let sms manage them,
including dfhsm migrate to tape if you want.
Not sure what you are trying to achieve  in your second question, and have
not
done this myself, but I believe
1) The client would definitely have to be same os as original backupset
client
2) Restorability would depend on what local devices allowed for that
particular
client.
John








Zoltan Forray/AC/VCU [EMAIL PROTECTED] on 04/26/2002 08:27:53 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  BACKUPSETS on TSM OS/390



Has anyone used BACKUPSETS on the OS/390 platform ?

Can the output be a disk file (i.e. can I create a DEVICE CLASS of FILE
and let SMS manage the files being created ?)

Once I create the BACKUPSET file, can I say, FTP it to another platform,
i.e. my PC ? so that anyone could restore from it, using the client ?











Re: tdp-oracle 2.2.0.2 with unknown error

2002-04-29 Thread Neil Rasmussen

The rc = 406 is a return code from the TSM API, and Oracle caught the
error. I am not sure why there is no logging from the TSM API to help you
see where the error originated. Rc = 406 tells me that the TSM
API could not find the dsm.opt file. Please make sure that your TDP for
Oracle options file has an entry similar to the following:

DSMI_ORC_CONFIG    c:\tivoli\tsm\agentoba\dsm.opt

With this, TDP for Oracle will tell the TSM API where to find the
dsm.opt file.


--

Date:Fri, 26 Apr 2002 12:19:03 +0200
From:Norbert Martin NKM-Solutions [EMAIL PROTECTED]
Subject: tdp-oracle 2.2.0.2 with unknown error

Hi,
At the moment we have a problem with TDP for Oracle NT V2.2.0.2 and
Client 4.2.1.20.

Please help.

Has anybody seen and solved this problem before?
Error Log / Sched Log:
04/25/2002 11:59:45 ANS1512E Scheduled event 'IC_V1_W1_NT_ORACLE' failed.
Return code = 3.
04/25/2002:09:44:48 PID158  ==  Error: pstdpoCallDsmSetUp failed. rc =
406

What has happened? The Oracle DB is 8.0.5.


RMAN-06005: connected to target database: SVP_BUHA
RMAN-06008: connected to recovery catalog database

RMAN> run {
2> allocate channel t1 type 'SBT_TAPE' parms
3> 'ENV=(TDPO_OPTFILE=c:\apps\tivoli\tsm\AgentOBA\tdpo.opt)';
4>
5> allocate channel t2 type 'SBT_TAPE' parms
6> 'ENV=(TDPO_OPTFILE=c:\apps\tivoli\tsm\AgentOBA\tdpo.opt)';
7>
8> backup incremental level 0
9> format 'df_%t_%s_%p_%u_%c'
10> (database include current controlfile);
11>
12> sql 'alter system archive log current';
13>
14> backup archivelog all delete input
15> format 'df_%t_%s_%p_%u_%c';
16>
17> release channel t1;
18>
19> release channel t2;
20> }
21>
RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-00569: error message stack follows
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-07001: could not open channel t1
RMAN-10008: could not create channel context
RMAN-10024: error setting up for rpc polling
RMAN-10006: error running sql statement: select distinct my.sid,
sex.serial
from
 v$mystat my, x$ksusex sex where sex.sid = my.sid
RMAN-10002: ORACLE error: ORA-01455: converting column overflows integer
datatyp

with kind regards / mit freundlichen Gruessen

Norbert Martin
High End Storage Consultant
DISK / TAPE / SAN / TSM
Mobile:+49-170-2234111
E-Mail:[EMAIL PROTECTED]


Regards,

Neil Rasmussen
Software Development
TDP for Oracle
[EMAIL PROTECTED]



Problems using dsmcad to launch scheduler

2002-04-29 Thread Robert Dowsett

Hi everyone.

We have recently been trying to use dsmcad on our SGI clients to handle
launching the dsmc sched process.

This works OK except that the dsmcad process echoes all messages to the
client's console, so we get all the messages coming from the scheduler
displayed on the console. This output literally drowns out all the other
console messages.

When we start dsmcad we use the following command to redirect its output...
/usr/tivoli/tsm/client/ba/bin/dsmcad 1>/dev/null 2>&1


We have the following entries in our dsm.sys
  SCHEDLOGname    /var/adm/adsm/dsmsched.log
  MANAGEDServices webclient schedule

The  /var/adm/adsm/dsmsched.log is written to each day with the output of
the scheduler (which is desirable).
We also get output from dsmcad in dsmwebcl.log.

Is there any way to stop the dsmcad process from writing to /dev/console??


Our setup:
TSM client v4.1.2.99/v4.2.0.0 for SGI
IRIX 6.5


Thanks in advance for your help

Robert Dowsett


IS Partner, Norsk Hydro



2nd try - archive script

2002-04-29 Thread George Lesho

Sent a note to the ADSM list on Friday afternoon and had no responses. I
wrote a Korn shell script which basically stores the output of a df command,
less the root file space, in a file. This file is then read in, one line at
a time, with each line used as the file system to be backed up using a dsmc
archive command. The problem is that we have used nested names in our file
systems, and since I would like this script to be portable (that is, used in
all our production systems), I don't want to archive the same stuff over and
over. For example, /usr/, /usr/local/, /usr/bmc/ and /usr/informix are each
a file system and a directory tree that would also be backed up under /usr/.
Any sample script that you might be using would be appreciated! Thanks, and
hopefully Monday morning will see more people checking the list.

OS AIX 433 Server 4.1.5 Clients 4.1.3
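
A sketch of the de-duplication step in plain POSIX shell (the function below and the dsmc flags are illustrative, not tested against your environment): sorted df output goes in, and only top-level mount points come out, so nested file systems such as /usr/local are not archived a second time under /usr.

```shell
#!/bin/sh
# filter_top reads sorted mount points on stdin and prints only the
# top-level ones, dropping any mount point nested under one already kept,
# so /usr/local is skipped when /usr is archived with -subdir=yes.
filter_top() {
    kept=""
    while read fs; do
        skip=0
        for top in $kept; do
            case "$fs" in
                "$top"/*) skip=1 ;;   # nested under an already-kept filesystem
            esac
        done
        if [ "$skip" -eq 0 ]; then
            kept="$kept $fs"
            printf '%s\n' "$fs"
        fi
    done
}

# Feed it the df output, minus the header and the root filesystem.
# (The dsmc call is commented out in this sketch.)
df -k | awk 'NR > 1 { print $NF }' | grep -v '^/$' | sort | filter_top |
while read fs; do
    echo "would archive: $fs"
    # dsmc archive -subdir=yes "$fs/"
done
```

Because the list is sorted, a parent mount point is always seen before anything nested under it, which is what lets the one-pass filter work.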

George Lesho
System/Storage Admin
AFC Enterprises



Re: archive or incremental backup type

2002-04-29 Thread George Harding

Most of the database backups will be Oracle databases.  Currently we do not
have the option of using TDP, so we will take either hot or cold database
backups.  If space permits we will back up to disk first, then start
dsmc.  The main database recovery requirement is consistency between files,
so the restored files must all come from the SAME backup.





Daniel Sparrman [EMAIL PROTECTED]@VM.MARIST.EDU on 29/04/2002
14:37:05

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:

Subject:  Re: archive or incremental backup type


Hi

Incremental = minimizes your backup window, as only changed files are
moved. You could also activate subfile backup, which means that only
the changed part of the file is backed up. Incremental works best with
smaller files, as large files require that the complete file be backed
up. Subfile backup would perhaps also work well with large files, as only
the changed part of the file is sent. However, a database normally
doesn't behave like an ordinary large file, as there are too many changes
in the file.

Archive = best for storing files for a specific number of
days/months/years. However, archiving is like doing full backups all the
time, which carries a cost in backup time.

Normally for databases you use a TDP to minimize the time required for
backup. Different types of TDPs have different ways of backing up:
differential, incremental, log archiving, full backups and so on. But if
you do hot backups, it's recommended to use TDPs, as a file backup client
doesn't work 100% (some files may have been locked by the application
during the backup; this can be solved by using the Dynamic setting, but
that doesn't automatically mean 100% consistency when trying to restore).

It would be easier to make a recommendation if you told us what kind of
application you are using. For some applications, doing
incremental/archiving works great; for some it's a disaster.

If you still insist on using the file backup/archive client, I'd recommend
using cold backups with archive. This could be done on perhaps a weekly
basis.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




George Harding [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2002-04-29 10:51
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:archive or incremental backup type




I would like to get some advice on the advantages / disadvantages of
archive versus incremental backup types.
The files I am backing up are database files so in general are large and
need to be restorable to point in time for consistency.

Thanks




Re: archive or incremental backup type

2002-04-29 Thread Daniel Sparrman
Hi

OK... no TDPs... How about using SQL-BackTrack for Oracle? With BackTrack,
you can incorporate incrementals of your Oracle server, which means much
faster backups.

If not, I'd suggest either full backups, with a special mgmt class holding
the Oracle databases, or archiving. However, there will be no difference in
speed, and in this case, if you set your mgmt classes correctly, no
difference in how long each backup is kept.

An archive only has a retention period. This means that if you wish to keep
every copy for 30 days, the retention period will be 30 days. Archives
don't handle versioning.

If you use full backups, you'll have to be sure to set the versioning rules
correctly, or you will store a lot of data in your backup system.

I suggest using archives. This way, it's a bit easier to understand how
many copies to keep and how long to keep them. With the example above,
backing the database up every day means you have 30 "versions" of the
database. If you back up the database once a week, you'll have 4 copies
(or 5, depending on the month), and each copy will be stored for 30 days.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51

Re: AW: Device_Mountlimit_VTS

2002-04-29 Thread David Browne

So, if you turn collocation off will TSM perform better with the VTS?
Will this be a big performance hit on the client restores?


   

Schaub Joachim Paul ABX-PROD-ZH joachim.schaub@ABRAXAS.CH
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/29/02 09:59 AM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: AW: Device_Mountlimit_VTS





Thank you Bill,
we know about the TSM-VTS constellation and are on the right evaluation
path now (native 3590 drives for TSM?)

with kind regards

Joachim

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 15:52
To: [EMAIL PROTECTED]
Subject: Re: Device_Mountlimit_VTS


I've seen TSM use more than the MOUNTLIMIT when high priority tasks need to
be performed. But I question your use of a VTS for TSM. There was just a
long discussion on this a couple weeks back. Applications that use the
entire media (DISP=MOD) like TSM and DFSMShsm are not really good
candidates for a VTS. Check out the archives to review the thread
(http://www.adsm.org). When TSM wants to add on to an existing storage pool
volume, the existing data must be transferred back into cache in the VTS
before it can be appended. Then the 'new' volume has to be staged back to
real 3590 tape. The original location on real 3590 is now unavailable and
needs to be reclaimed. By doing this a lot, you are forcing the VTS to do a
lot of reclamation tasks. Plus the mount wait time to stage the data is
holding you up. Unless you write to a volume and mark it as read-only so
TSM won't try to append to it again.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Schaub Joachim Paul ABX-PROD-ZH
Sent: Monday, April 29, 2002 4:06 AM
To: [EMAIL PROTECTED]
Subject: Device_Mountlimit_VTS


Dear *SM Gurus

Our VTS has 64 logical drives; the mountlimit in this device class is set
to 38 in the TSM server. Last week I saw in the MainView monitor a usage
of 50 logical drives by TSM! Is it possible to use more mountpoints than
are defined by the mountlimit?

Env: TSM Server 4.2.1.9 OS/390

Thanks in advance

Joachim



Joachim Paul Schaub
Abraxas Informatik AG
Beckenhofstrasse 23
CH-8090 Zürich
Schweiz / Switzerland

Telefon: +41 (01) 259 34 41
Telefax: +41 (01) 259 42 82
E-Mail: mailto:[EMAIL PROTECTED]
Internet: http://www.abraxas.ch





Re: AW: Device_Mountlimit_VTS

2002-04-29 Thread Bill Boyer

Collocation has nothing to do with it. It's the nature of TSM and the VTS
that causes 'problems.' TSM likes to use all the space on a tape. TSM keeps
writing to a tape until it hits end-of-volume (EOV), then asks for another
tape with space on it or a scratch tape.

In a VTS, when you ask for a virtual volume to be mounted, if the data does
not already reside in the VTS cache/disk, then ALL the data on the volume
is recalled from the 'real' 3590 tape into the cache. If there's not room,
the VTS will remove existing data from the cache based on an LRU algorithm.
If that data hasn't been copied to the 'real' 3590 volumes, that has to
happen first. So, now instead of mounting a tape and positioning to the end
of the data, you have to read the data from tape into the disk cache BEFORE
you can do anything with it. So, as the amount of data on the volume grows,
the amount of time it takes to perform the tape mount increases.

Now when you append data to the virtual volume, the VTS has to stage this
data back out to the 3590 tapes... all of it, not just the 'new' data you
appended. So now you're writing back out up to a full 3490 volume's worth
of data to tape. Where the original data existed on tape is now unusable
space.
Just like in TSM where expired data on tapes becomes unusable and you need
to reclaim. This happens in the VTS, too. You have regularly scheduled
reclamation tasks, plus thresholds. So, the more you append data to existing
volumes the more frequent you would need to run the reclamation. Just more
overhead in the VTS.

Plus, by having to recall the virtual volume into cache before you can
append to it, or even read from it, you may cause other data within the VTS
to be removed from cache. This could cause longer mount times for other
applications/job streams. You would want a larger amount of disk cache
within a VTS used for TSM.

TSM reclamation would drive this system crazy, too. Without collocation,
think of how many input tape mounts it takes to reclaim your offsite
storage pool(s). Each one of those mounts would cause the VTS to read the
data back into cache before it could be read by TSM. Plus, the VTS doesn't
know that there is maybe only 10% usable data on that virtual volume. As
far as it's concerned, it's a full tape.

Maybe for a small TSM system you could use a VTS, but TSM is not the right
application for a VTS. IMHO that is.

Bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
David Browne
Sent: Monday, April 29, 2002 12:35 PM
To: [EMAIL PROTECTED]
Subject: Re: AW: Device_Mountlimit_VTS


So, if you turn collocation off will TSM perform better with the VTS?
Will this be a big performance hit on the client restores?



Schaub Joachim Paul ABX-PROD-ZH joachim.schaub@ABRAXAS.CH
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/29/02 09:59 AM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: AW: Device_Mountlimit_VTS







Thank you Bill,
we know about the TSM-VTS constellation and are on the right evaluation
path now (native 3590 drives for TSM?)

with kind regards

Joachim

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 15:52
To: [EMAIL PROTECTED]
Subject: Re: Device_Mountlimit_VTS


I've seen TSM use more than the MOUNTLIMIT when high priority tasks need to
be performed. But I question your use of a VTS for TSM. There was just a
long discussion on this a couple weeks back. Applications that use the
entire media (DISP=MOD) like TSM and DFSMShsm are not really good
candidates for a VTS. Check out the archives to review the thread
(http://www.adsm.org). When TSM wants to add on to an existing storage pool
volume, the existing data must be transferred back into cache in the VTS
before it can be appended. Then the 'new' volume has to be staged back to
real 3590 tape. The original location on real 3590 is now unavailable and
needs to be reclaimed. By doing this a lot, you are forcing the VTS to do a
lot of reclamation tasks. Plus the mount wait time to stage the data is
holding you up. Unless you write to a volume and mark it as read-only so
TSM won't try to append to it again.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Schaub Joachim Paul ABX-PROD-ZH
Sent: Monday, April 29, 2002 4:06 AM
To: [EMAIL PROTECTED]
Subject: Device_Mountlimit_VTS


Dear *SM Gurus

Our VTS has 64 logical drives, the mountlimit in this deviceclass is set to
38 in the TSM Server. Last Week i saw in the Mainvew Monitor an usage off
50

TDP R3 keeping monthly and yearly for different retentions?

2002-04-29 Thread Paul Fielding

Hi all,

I did some poking around the list and didn't see anything on the subject.

Does anybody have a good method for doing monthly and yearly backups of an R3 (Oracle)
database using the TDP for R3? I have a requirement to maintain daily backups for 2
weeks, monthly backups for 3 months and yearly backups for 7 years.  Superficially,
it appears to be straightforward to set up different server stanzas within the TDP
profile for different days of the week, but that's it.

I suspect that I could get extra fancy and write a script to flip the profile
to an alternate profile file on the appropriate days, and have it flip back when it's
done, but that seems like a bit of a band-aid to me and I'm wondering if anyone's come
up with something better?

regards,

Paul



Re: Help needed

2002-04-29 Thread Hunley, Ike

Try this link...  http://216.185.145.68/discus/messages/1/adsm_sql.pdf

-Original Message-
From: Wieslaw Markowiak/Kra/ComputerLand/PL
[mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 6:35 AM
To: [EMAIL PROTECTED]
Subject: Help needed


hi,
I'm looking for a manual on TSM scripting - can you help me?



Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
affiliate companies are not responsible for errors or omissions in this e-mail 
message. Any personal comments made in this e-mail do not reflect the views of Blue 
Cross Blue Shield of Florida, Inc.



Stratus VOS?

2002-04-29 Thread Orville L. Lantto

Has anyone had any experience backing up Stratus VOS.



Orville L. Lantto
Datatrend Technologies, Inc.  (http://www.datatrend.com)
121 Cheshire Lane #700
Minnetonka, MN 55305
Email: [EMAIL PROTECTED]
V: 952-931-1203
F: 952-931-1293
C: 612-770-9166



About TSM API

2002-04-29 Thread Fred Zhang

Hi,

I am looking for a TSM API. So far I can only find the Client API. Does TSM
have an API which we can use to talk to the server without a client
installed? Any information would be greatly appreciated.

=
Fred Zhang
NetiQ Corporation
3553 N. First St.
San Jose, CA 95134
phone: (408)856-3102
fax: (408)856-3102
e-mail: [EMAIL PROTECTED]
=



dirmc question

2002-04-29 Thread Jim Kirkman

I want to implement the dirmc scheme. I've created a disk pool (and have
read all about the seq pool with device=file) dirdiskpool, a management
class dirmc with a backup copy group pointing to the dirdiskpool, a
client option dirmc that is part of a client option set called, you got
it, dirmc.  I've associated some clients with said option set but I've
no evidence of folders in the disk pool. I'm wondering if I've got the
option set correctly, although the query looks good

 Optionset: DIRMC
  Option: DIRMC
Sequence number: 0
   Override: No
   Option Value: dirmc
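
For comparison, a server-side sequence along these lines is what a DIRMC setup usually involves. This is only a sketch: the STANDARD domain/policy-set names and the node name are placeholders, not from the post, and the abbreviated syntax should be checked against the Administrator's Reference. Note that the option's value must exactly match an existing management class name in the node's active policy set, and the option set must be assigned to the node:

```
def mgmtclass standard standard dirmc
def copygroup standard standard dirmc type=backup destination=dirdiskpool
activate policyset standard standard
def cloptset dirmc
def clientopt dirmc dirmc dirmc force=yes
update node mynode cloptset=dirmc
```

One thing worth checking: directories only land in the new pool on backups taken after the option set is in effect, and (as I recall) without DIRMC the client binds directories to the management class with the longest retention, so existing directory backups won't move by themselves.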

any help appreciated



--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884



Re: About TSM API

2002-04-29 Thread Thomas A. La Porte

Fred,

If by talking to the server without a client you mean running
administrative commands, there is no TSM administrative API. The
client API is for backup/restore and archive/retrieve operations,
and that is the only API available.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Mon, 29 Apr 2002, Fred Zhang wrote:

Hi,

I am looking for TSM API. So far I can only find Client API. Does TSM has
API which we can use to talk to the server without a client installed? Any
information would be greatly appreciated.

=
Fred Zhang
NetiQ Corporation
3553 N. First St.
San Jose, CA 95134
phone: (408)856-3102
fax: (408)856-3102
e-mail: [EMAIL PROTECTED]
=





Re: EMC Celerra fileserver (NAS) with a Symmetrix (Fibre-SAN) backend

2002-04-29 Thread Norma Fisher

Hi Kent, we have a Celerra file server for our NT LAN and it works
wonderfully.  We replaced 11 OS2 servers and have 4000 users, 500gb of
storage that we migrated to NT4 on the Celerra(so it was a double
conversion).  The failover works as advertised, just make sure you have
failsafe networking enabled.   It has solved many issues of the NT
environment such as mapping the users, servers failing, maintenance issues
etc.  But most of all it is reliable and  service interruptions are no
longer attributed to the servers as we have had no Celerra failures since
the first week and those two were an initial  parm issue and human error.
We also wanted quotas at the directory level and that is now coming.
Support has been splendid.

While there are many third-party software products that do not work with
NAS in general, our major drawback is with TSM; NDMP is not supported for
Celerra that I know of.  We have had a couple of instances where
directories show up, and that usually means we have to run the Celerra
cleanup utilities and sometimes add space, as we have gone over the limit
of 85% allocated; this has caused TSM to loop.  We have also seen TSM
suddenly go into full backup mode for no reason; it seems this may occur if
the service is knocked down or the client loses communication with the
server during the backup.  Also, TSM journalling is not supported for
network-attached drives, and we were depending on this to speed up the
backup of 3 million files.  We have now broken the file systems down into
multiple nodes and are using a 4.2 client with a 4.1 server on OS/390, but
we need to break them down into more nodes to get still better backup
times.  It now seems to be stable, and EMC tells us that the TSM issues
except NDMP are fixed.  We are not prepared at this time to change our
infrastructure and back up to an AIX box with fibre tape.

Celerra has new backup capabilities such as SnapSure and Concurrent Backup
that you could investigate.  There is also Galaxy software from CommVault
that is integrated with Celerra, and there is EMC's TimeFinder, which
will give you a tactical mirror for restore and a copy to back up from.
EMC's SRDF for remote copy is of course the de facto DR product when you
have a Symm.

You have a large implementation and I would be interested in what you find
works as your backup/recovery solution.

Regards...Norma



__
The information in this e-mail is intended solely for the addressee(s)
named, and is confidential. Any other distribution, disclosure or copying
is strictly prohibited. If you have received this communication in error,
please reply by e-mail to the sender and delete or destroy all copies of
this message.



Tivoli Decision Support

2002-04-29 Thread Hart, Charles

Is anyone running TDS for TSM that has a 50GB TSM DB?

Regards,

Charles Hart
Medtronic Storage Team



Unix directory exclude question

2002-04-29 Thread Mattice, David

We are running a scheduled incremental on an AIX 4.3.3 client.  There is a
need to exclude a specific directory tree, which needs to be archived via
another, shell script based, scheduled command.

The initial idea was to add an exclude.dir in the client dsm.sys file.
This caused the incremental to exclude that directory tree, but when
performing the command-line (dsmc) archive, the log indicates that this
tree is excluded as well.

Any assistance would be appreciated.

Thanks,
Dave

ADT Security Services



Re: TDP R3 keeping monthly and yearly for different retentions?

2002-04-29 Thread Don France (TSMnews)

The customers I've worked with used a shell script to determine -archmc for
daily/weekly/monthly; without TDP, the script manipulates the parameter
passed in for the -archmc value on the dsmc archive command... you could
use a preschedule command to do the same (or flip the profile name, causing
TDP to use varying -archmc values).
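
A minimal sketch of that day-based selection (the management-class names and the dsmc invocation here are assumptions for illustration, not from the post):

```shell
#!/bin/sh
# pick_archmc: choose a management-class name from the month and day,
# so one schedule can route daily/monthly/yearly archives to classes
# with different retentions. Class names are hypothetical.
pick_archmc() {
    month=$1 day=$2
    if [ "$month" = "01" ] && [ "$day" = "01" ]; then
        echo YEARLY_MC          # Jan 1st: kept 7 years
    elif [ "$day" = "01" ]; then
        echo MONTHLY_MC         # 1st of other months: kept 3 months
    else
        echo DAILY_MC           # everything else: kept 2 weeks
    fi
}

mc=$(pick_archmc $(date '+%m %d'))
echo "selected management class: $mc"
# dsmc archive -archmc="$mc" -subdir=yes /oradata/
```

Each named class then just needs an archive copy group with RETVER set to the matching retention on the server.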

- Original Message -
From: Paul Fielding [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 29, 2002 10:22 AM
Subject: TDP R3 keeping monthly and yearly for different retentions?


Hi all,

I did some poking around the list and didn't see anything on the subject.

Does anybody have a good method for doing Monthly and Yearly backups of an
R3 (oracle) database using the TDP for R3? I have a requirement to maintain
daily backups for 2 weeks, monthly backups for 3 months and yearly backups
for 7 years.   Superficially, It appears to be straightforward to set up
different server stanzas within the TDP profile for different days of the
week, but that's it.

I suspect that I could get extra fancy and write a script to do a flip of
the profile to an alternate profile file on the appropriate days, and have
it flip back when it's done, but that seems like a bit of a band-aid to me
and I'm wondering if anyone's come up with something better?

regards,

Paul



Re: Unix directory exclude question

2002-04-29 Thread Zlatko Krastev

exclude.backup  /directory/.../*
This would exclude files from backup but not prevent archives.
Unfortunately it does not exclude the directories.

Zlatko Krastev
IT Consultant



Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Unix directory exclude question

We are running a scheduled incremental on an AIX 4.3.3 client.  There is a
need to exclude a specific directory tree, which needs to be archived via
another, shell-script-based, scheduled command.

The initial idea was to add an exclude.dir in the client dsm.sys file.
This caused the incremental to exclude that directory tree, but when
performing the command-line (dsmc) archive, the log indicates that this
tree is excluded as well.

Any assistance would be appreciated.

Thanks,
Dave

ADT Security Services



FW: HSM on Solaris 8

2002-04-29 Thread Kelly J. Lipp

Folks,

Anyone with experience running the 5.1 HSM client on Solaris, read on.
Ideas are welcome.

Thanks,

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs, CO 80949
[EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com or www.storserver.com
(719)531-5926
Fax: (240)539-7175


-Original Message-
From: Thee, Gwen [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 12:34 PM
To: '[EMAIL PROTECTED]'
Subject: HSM on Solaris 8


 Hi Kelly,

 I installed hsm on one of our Solaris 8 machines, (the new 5.1 client with
 64 bit support), and I am having some problems. Hopefully you can help me.
 The good news is that the essential task of migrating and retrieving files
 works fine if we do a manual migration. The bad news is twofold. It will
 not pre-migrate any files, or start the migration automatically even
 though it is above the threshold to start, and the following bad events
 have been going on since I activated the first hsm file system.

 The dsmrecalld daemon will not stay up. It disappears about every 30 to 60
 minutes. There are no errors issued at the time that it dies. We have a
 cron that restarts it, but obviously this is a bad thing since if someone
 tries to get a file in the 60 seconds that it's not running, they get the
 stub.

 The other thing is I have constant errors in the dsmerror.log. These also
 started at the time we activated the first file system with HSM, and they
 are issued about every tenth of a second. The Tivoli site is very
 unhelpful with this error; here's their description of it:

 ANS9511E
program-name: cannot read DM attributes on session session for
 file handle = handle token = token. Reason : error
   Explanation: TSM space management cannot read the DM attributes of a DM
 object, usually a file.
   System Action: Processing of the file is interrupted.
   User Response: Continue with normal operation.

 I did not find this very useful in determining my problems. This is what
 the dsmerror.log looks like:

 04/29/02   14:54:00 ANS9511E dsmmonitord: cannot read DM attributes on
 session
 11847 for
 file handle = 00ff01ff 0007 001a766b
 01001011ff0c token = DM_NO_TOKEN. Reason : No such process
 04/29/02   14:54:10 ANS9511E dsmmonitord: cannot read DM attributes on
 session
 11847 for
 file handle = 00ff01ff 0007 001a766b
 01001011ff0c token = DM_NO_TOKEN. Reason : No such process

 It has been issuing these errors non-stop since we activated an hsm file
 system. To give you an overall feel for how things are set up, here's the
 parameters set in the dsm.sys file:
 CANDIDATESInterval  24
 CHECKFororphans yes
 CHECKThresholds 5
 MAXCANDProcs5
 MAXMIGRators1
 MAXRecalldaemons20
 MAXRECOncileproc3
 MAXThresholdproc3
 MIGFILEEXPiration   7
 MINMIGFILESize  1000
 MINRECAlldaemons3
 RECOncileinterval   24

 SErvername  puppy_tsm
COMMmethod TCPip
TCPPort1500
TCPServeraddress   129.228.65.204
passwordaccess generate
schedlogretention 7
errorlogretention 7
INCLexcl /opt/tivoli/tsm/client/ba/bin/include_exclude
schedlogname /opt/tivoli/tsm/client/ba/bin/dsmsched.log
errorlogname /opt/tivoli/tsm/client/ba/bin/dsmerror.log

 and the dsm.opt:
 COMPRESSIon No
 OPTIONFormatSHort
 RESToremigstate No
 SErvername  puppy_tsm

 The management class parameters are set to:
 spacemgtechinique Auto
 automignonuse Three
 migrequiresbkup   Yes
 migdestinationhsmdisk

 The only thing HSM does successfully on its own is create the candidate
 list. It does not pre-migrate, and it does not migrate automatically.

 Please help,
 Thanks,
 Gwen




Re: Unix directory exclude question

2002-04-29 Thread Don France (TSMnews)

Sounds like a bug -- yes, there have been a level (or three) that incorrectly
caused the exclude list to be processed by the archive command. It's the
client code that controls this -- try running the latest 4.2.x client, or, if
you're set on 5.1, get the latest 5.1 download patch.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Mattice, David [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 29, 2002 4:44 PM
Subject: Unix directory exclude question


 We are running a scheduled incremental on an AIX 4.3.3 client.  There is a
 need to exclude a specific directory tree, which instead needs to be
 archived via another, shell-script-based, scheduled command.

 The initial idea was to add an exclude.dir in the client dsm.sys file.
 This caused the incremental to exclude that directory tree but, when
 performing the command-line (dsmc) archive, the log indicates that the
 tree is excluded there as well.
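
 One workaround (a sketch only -- the stanza names and paths below are
 hypothetical) is to define a second server stanza in dsm.sys pointing at
 the same server but without the exclude.dir, and have the archive script
 select it:

```
* dsm.sys -- two stanzas for the same TSM server (names are hypothetical)
SErvername  prod_incr
   COMMmethod         TCPip
   TCPServeraddress   tsm.example.com
   EXCLUDE.DIR        /data/special

SErvername  prod_arch
   COMMmethod         TCPip
   TCPServeraddress   tsm.example.com
   * no exclude.dir here, so archive can see the tree
```

 The nightly incremental uses the first stanza, while the archive script
 selects the second, e.g. dsmc archive "/data/special/" -subdir=yes
 -se=prod_arch. Verify the -se/-servername behavior against your client
 level before relying on this.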

 Any assistance would be appreciated.

 Thanks,
 Dave

 ADT Security Services



Impact of TSM upgrade to 4.2 on Fibre channel Protocol

2002-04-29 Thread Subash, Chandra

Hi guys

I am upgrading my TSM server from version 3.7.4 to 4.2.1.13.
Two questions:

1. What should the upgrade path be?

3.7.4 ---> 4.2 ---> 4.2.1.0 ---> 4.2.1.13
Is that right?

2. The install notes mention that all FCP (Fibre Channel Protocol)
definitions will be lost and have to be re-installed. I have some FCP disk
drives and Fibre Channel adapters attached to my TSM server. Please advise.

Regards
Chandra Subash
Unix/ Oracle Database Administrator
Teletech International
154 Pacific Highway, St Leonards
NSW 2065
Tel 02 -99301569
Sydney, Australia
E-mail [EMAIL PROTECTED]




This email is confidential and intended for the addressee(s) only.
If you are not an addressee please promptly notify the sender and delete the
message; do not use the content in any other way.



Priority of DB Dump

2002-04-29 Thread Fred Johanson

Somewhere in the Admin Guide there is a statement that the BACKUP DB process
will preempt all other processes.  We have in the past presumed this to be
true, though we've never observed it in practice.

In our current hardware setup, the number of disk storage pools plus the DB is
one greater than the number of tape drives available.  This caused no problems
in V3R7 or earlier releases (in V2 and V3R1, there were two fewer drives than
disk pools), but since upgrading to V4R1 in January, we have experienced the
system going down with the log full about once a week.  Setting the log trigger
to 40% has not remedied the situation: if all disk storage pools are in
migration, the only other tape process I allow during the backup production
window, the incremental DB dump seems to wait until a drive frees up naturally,
and with collocated tape pools, that can take longer than the log file can hold
out.

The question is: did I miss something when installing the upgrade about setting
priorities for processes?  Or was I just lucky that the system I inherited when
we upgraded to V2R1 was never busy enough to trigger the situation?  I should
add that I process about 800-850 clients a night on each of my servers.
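
One thing worth checking (a sketch, not a confirmed fix -- verify the exact
parameter names with HELP DEFINE DBBACKUPTRIGGER on your server level) is
whether a database backup trigger is defined, so the server kicks off the
incremental itself when the log fills rather than waiting on the schedule:

```
define dbbackuptrigger devclass=your_tape_class logfullpct=40 numincremental=6
```

Even with a trigger, the triggered backup still needs a free drive, so
capping the drives migration can hold (e.g. via MOUNTLIMIT on the migration
device class) may be what actually keeps the log from filling.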



FC-SCSI Bridge issues

2002-04-29 Thread Cowperthwaite, Eric

Has anyone seen something similar and have any ideas?

Our configuration is:

Dell Server with Qlogic 2200F HBA, Win2K SP2, TSM 4.2.1.9
Ancor SANBox 16 FC switch
STK 3250 FC-SCSI Bridge
Sun StorEdge L60 DLT8000 Library (4 drives)
The switch is zoned so that only the TSM server and the bridge are in one
zone. TSM hard drives are also on the SAN, but in a separate zone.

Everything works fine normally. But occasionally we receive SCSI errors from
adsmscsi and lose connectivity to the tape drives. Once that happens the
only apparent solution is to restart the bridge and the TSM server. Any
thoughts/ideas/input would be appreciated.

This typically happens during large TDPO backups. Unfortunately it means we
have not had a good backup of a large data mart in this network in the past
month. I'm ready to bag the fiber connections and go back to SCSI on this
subsystem.

Eric Cowperthwaite
Senior System Administrator - Infrastructure
EDS Business Process Management



TSM on Sun Solaris and WNT/2000

2002-04-29 Thread Zosimo Noriega

Could anyone provide information comparing a TSM server running on the Sun
Solaris platform versus Windows NT/2000?

best regards,
Zosi Noriega
ADNOC - UAE