Re: select statement

2003-03-17 Thread Cook, Dwight E
I don't know if you are going to be able to do that with a select statement.
adsm.archives & adsm.backups have NODE_NAME, FILESPACE_NAME, HL_NAME, and
LL_NAME, but no info on size.
adsm.occupancy has node_name, filespace_name, and some MB fields, but nothing
file-name specific.
adsm.contents has node_name, filespace_name, file_name, and file_size (stored
size), but you will find that all members of an aggregate have the size of
the aggregate listed (last time I looked).
So anyway, I don't know if what you are looking for is obtainable via
selects from an admin session.
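As a sketch of why size-by-file is awkward: you can sum stored sizes out of the contents table, but the caveats above apply. The column names follow the table descriptions given here; the admin id, password, and LIKE pattern are made up for illustration.

```shell
# Hypothetical sketch from an administrative client (id/password made up).
# Caveats: FILE_SIZE is the STORED size, and every member of an aggregate
# reports the whole aggregate's size, so the sum will be inflated; selects
# against CONTENTS can also run a very long time on a big server.
dsmadmc -id=admin -password=secret \
  "select node_name, count(*), sum(file_size) from contents \
   where file_name like '%/ut0/oradata/arch/%.arc' group by node_name"
```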

Dwight


-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]
Sent: Monday, March 17, 2003 12:43 PM
To: [EMAIL PROTECTED]
Subject: select statement


Hello!

I am still learning about the select statement and I wanted to make sure
that I get the syntax correct before I run it.  What I need is a select
statement that will find all files with the following name:
/ut0/oradata/arch/*.arc and show how much space they take up on an
individual node.  Does anyone have any suggestions?  Thanks!

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


Re: urgent! server is down!

2003-03-12 Thread Cook, Dwight E
What was on /dev/rtsmvglv11?
From the errors you are seeing, I'd guess either a database or a log volume.
Does errpt show problems on the physical volume(s) behind tsmvglv11?


Dwight



-Original Message-
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 2:53 AM
To: [EMAIL PROTECTED]
Subject: urgent! server is down!
Importance: High

hi all,
first things first: aix 4.3, tsm 4.1.
After a reboot yesterday tsm doesn't start. A colleague of mine was at it
till late last night.
Now I have to pick it up. My guess is the database is corrupt, but the
messages also come with other errors.
Could anyone help me out? I've never had to restore a db before. How do I
go at it? Below are the errors.

thanks heaps!
michelle

ANR0900I Processing options file dsmserv.opt.
ANR000W Unable to open default locale message catalog, /usr/lib/nls/msg/C/.
ANR0990I Server restart-recovery in progress.
ANRD lvminit.c(1872): The capacity of disk '/dev/rtsmvglv11' has changed;
old capacity 983040 - new capacity 999424.
ANRD lvminit.c(1628): Unable to add disk /dev/rtsmvglv11 to disk table.
ANRD lvminit.c(1872): The capacity of disk '/dev/rtsmvglv11' has changed;
old capacity 983040 - new capacity 999424.
ANRD lvminit.c(1628): Unable to add disk /dev/rtsmvglv11 to disk table.
ANR0259E Unable to read complete restart/checkpoint information from any
database or recovery log volume.


Re: urgent! server is down!

2003-03-12 Thread Cook, Dwight E
YOU MIGHT be able to take /dev/rtsmvglv11 out of your dsmserv.dsk file
and try starting tsm.
Since TSM sees your ...11 volume first and finds it ~messed up~, it is more
than likely going to simply state
I'm broke, I'm going down...
and let you deal with how you want to fix it...

Try removing the seemingly broken volume from the dsmserv.dsk file and then
try to start TSM...
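If memory serves, dsmserv.dsk is just a plain-text list of the db and log volume names, one per line, so the edit can be sketched like this (the path is a typical AIX install location and may differ; keep a backup copy of the file before touching it):

```shell
# Sketch only -- save a copy of dsmserv.dsk, drop the suspect volume,
# then retry the server start.
cd /usr/tivoli/tsm/server/bin          # typical AIX location; may differ
cp dsmserv.dsk dsmserv.dsk.save
grep -v 'rtsmvglv11' dsmserv.dsk.save > dsmserv.dsk
./dsmserv                              # try starting TSM again
```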

Dwight



-Original Message-
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 9:53 AM
To: [EMAIL PROTECTED]
Subject: Re: urgent! server is down!


hi all,
It took a while to answer you all, since the whole company seems to be
imploding at the moment.

Of course no one has done anything on the server :| /var was enlarged, but
that is in rootvg, not in the vg that holds any tsm volumes.

Well, the volume in question turns out to be a db volume; it resides in a
separate volume group (tsmvg) and has a mirror, /dev/rtsmvglv12.

Since tsm says the volume has changed in size, I guess it is useless to try
using the mirror.
The suggestion that the size difference the server reports is exactly 16MB
doesn't hold here: the pp size of this vg is 64MB.

I'm gonna try and restore the lv and the db and see how it goes.

I'll let you know!!

thanks a lot everyone! :o*
michelle
michelle



-Original Message-
From: Dan Foster [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 2:41 PM
To: [EMAIL PROTECTED]
Subject: Re: urgent! server is down!


Hot Diggety! Richard Sims was rumored to have written:
> After a reboot yesterday tsm doesnt start. ...
> ...
> ANR0900I Processing options file dsmserv.opt.
> ANR000W Unable to open default locale message catalog,
> /usr/lib/nls/msg/C/.
> ANR0990I Server restart-recovery in progress.
> ANRD lvminit.c(1872): The capacity of disk '/dev/rtsmvglv11' has
> changed; old capacity 983040 - new capacity 999424.
> ...
> I would take a deep breath and stand back and think about that situation
> first...  There's no good reason for a server to be running fine and
[...]

I agree with what Richard had to say. Taking a deep breath is always step
#1 for handling a crisis without making it worse.

999424 - 983040 = 16384, which is exactly 16 MB and sounds suspiciously
like the PP size. 'rtsm...' sounds like a raw LV rather than a filesystem.
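The arithmetic is easy to check (assuming the capacities are reported in 1 KB blocks, as in the reading above):

```shell
# 999424 - 983040 capacity units; at 1 KB per unit that is exactly 16 MB
delta=$((999424 - 983040))
echo "$delta KB = $((delta / 1024)) MB"
```

which prints `16384 KB = 16 MB`.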

Perhaps someone with root access had done this at some point earlier:

# extendlv tsmvglv11 1

[or had done the equivalent in SMIT.]

(DO NOT EXECUTE THE ABOVE COMMAND! I am only theorizing what may have
happened)

As for en_US vs C, do this:

# grep LANG /etc/environment

If it says LANG=C then try:

1. Changing it to LANG=en_US in /etc/environment
2. At the root prompt: # export LANG=en_US
3. Try starting up TSM now
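Those steps can be sketched in one shell session (this assumes the en_US message catalogs are actually installed under /usr/lib/nls/msg):

```shell
# Inspect the current setting, then override it for this session
grep '^LANG' /etc/environment 2>/dev/null || echo 'no LANG entry found'
export LANG=en_US
echo "LANG is now $LANG"
```

Editing /etc/environment makes the change permanent; the export only affects the current shell and anything started from it (such as dsmserv).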

And you will probably want to ask your operations staff if anyone had
increased the LV's allocation, perhaps by one physical partition with
extendlv or similar. If someone had done it, I'd have made them put Humpty
Dumpty back together as a great learning experience ;) Tell people to *NOT*
mess around with the TSM server if they do not know what they're doing.

I did a quick test with TSM 5.1 by creating a small 16 MB DB logical
volume (1 PP), started up server OK. Then I did 'extendlv tsmdblv 1',
halted server, and tried to start it up again. I got the exact same
errors you got.

I suspect you may have to remove that LV, recreate it with the expected
size that TSM wants, then do a DB restore from your most recent full db
backup tape.

But before you do that, you'll want to save a copy of your current device
config and volume history file if you have these, as well as your
dsmserv.opt file. Then look in the TSM 5.1 Server for AIX Administrator's
guide at:

http://publibfp.boulder.ibm.com/epubs/pdf/c3207680.pdf

(This is assuming you use TSM 5.1 for AIX; if you use another version,
you'll want to consult that guide instead, but the steps will probably be
similar, if not exactly the same.)

DB restore is covered in Chapter 22. 'Restoring a Database to its Most
Current State' at bottom of page 524 is probably your easiest option since
it sounds like you have everything else intact -- volume history info,
logvols, stgpool vols, etc.

Then you'll have to delete (with 'rmlv -f tsmvglv11') the offending LV,
and recreate it (with 'mklv -y tsmvglv11 tsmvg <number of PPs>'). Then...

Find out which tape has the most recent full DB backup, then do:

# cd /usr/tivoli/tsm/server/bin
# ./dsmserv restore db devclass=whatever vol=<tape volser>

If that command worked (it's a preview, basically), then do:

# ./dsmserv restore db devclass=whatever vol=<tape volser> commit=yes

...which will make the restore actually happen, for real.

The actual restore operation is no big deal if you have a good and recent
db backup tape, and know which tape it is. I did this as part of testing
recently, and it worked right off the bat with no problems at all.

If you don't know which tape volser has the latest full db backup, then
you could look into your 

Re: Client login with admin id and password

2003-03-12 Thread Cook, Dwight E
Well, since a system-privileged admin id could change the node's password
and then connect without using their admin id & password (just use the one
they set it to), I can see why the straight use of their id & password
would be allowed.
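The point can be sketched as two hypothetical commands (the node name, passwords, and admin id are all made up):

```shell
# A SYSTEM-privilege admin can always do this...
dsmadmc -id=sysadmin -password=secret "update node tarzan newpass"
# ...and then connect as the node with the password just set:
dsmc query backup '/*' -subdir=yes -node=tarzan -password=newpass
```

so blocking the direct admin-id login would not really keep the data away from a system administrator anyway.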

Just another reason why management should pay their TSM admins well ;-)

Dwight



-Original Message-
From: Gerhard Rentschler [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 10:01 AM
To: [EMAIL PROTECTED]
Subject: Client login with admin id and password


Hello,
I always thought that a tsm admin does not have access to client data. I
think I learned something new.
Calling dsmc or dsm with -node=tarzan and specifying a valid admin id and
password (system privilege) gives access to node tarzan's data. At least it
is possible to list the files. I haven't tried to restore data. This is
indeed documented. However, I would prefer if there were a message in the
activity log saying that admin id was used.
Am I wrong? Could someone explain this feature in more detail?

Best regards
Gerhard
---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany


Re: Client login with admin id and password

2003-03-12 Thread Cook, Dwight E
So right Wanda !
I just tried in 4.2, using my sys admin id and its password to connect,
then tried to look at a backup of dsm.sys:
tsm> q backup /usr/tivoli/tsm/client/ba/bin/dsm.sys
ANS1092E No files matching search criteria were found
tsm> q backup /usr/tivoli/tsm/client/ba/bin/dsm.sys -inact
ANS1092E No files matching search criteria were found
tsm> quit
[EMAIL PROTECTED]:/home/zdec23 > ls -l /usr/tivoli/tsm/client/ba/bin/dsm.sys
-rw-r--r--   1 root  system  5086 Oct 29 06:09 /usr/tivoli/tsm/client/ba/bin/dsm.sys

Then I tried as myself with the proper node password, still couldn't see the
backup copy of the file...

Then tried as root with the proper node password, worked just fine :-)

    Size      Backup Date          Mgmt Class   A/I  File
    ----      -----------          ----------   ---  ----
   5,086  10/30/2002 06:07:46      DEFAULT       A   /usr/tivoli/tsm/client/ba/bin/dsm.sys
tsm>

Yet another reason to stay current on code !

Dwight


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 12:23 PM
To: [EMAIL PROTECTED]
Subject: Re: Client login with admin id and password


That USED to be true.  The ability to access client data using the admin id
and password was added as a feature somewhere, maybe 3.7, don't remember.

-Original Message-
From: Gerhard Rentschler [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 11:01 AM
To: [EMAIL PROTECTED]
Subject: Client login with admin id and password


Hello,
I always thought that a tsm admin does not have access to client data. I
think I learned something new.
Calling dsmc or dsm with -node=tarzan and specifying a valid admin id and
password (system privilege) gives access to node tarzan's data. At least it
is possible to list the files. I haven't tried to restore data. This is
indeed documented. However, I would prefer if there were a message in the
activity log saying that admin id was used.
Am I wrong? Could someone explain this feature in more detail?

Best regards
Gerhard
---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany


Re: Looking for a manual

2003-03-11 Thread Cook, Dwight E
I have SC33-6340-02,
Tivoli Data Protection for R/3,
Installation and User's Guide for Oracle,
Version 3 Release 2.

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2003 8:44 AM
To: [EMAIL PROTECTED]
Subject: Looking for a manual


Hi *SM-ers!
I'm looking for the following manual in HTML format:
Tivoli Data Protection for SAP Installation and User's Guide V3R2
(SC33-6389-02)
Please note the -02!
It doesn't seem to be available for download (IBM links to the Tivoli site,
which we all know is no more...) so I was hoping that somebody out there
would take the time to ZIP it for me. I think this manual will be on a newer
TDP for R/3 cd.
Thank you VERY much in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain confidential
and privileged material intended for the addressee only. If you are not the
addressee, you are notified that no part of the e-mail or any attachment may
be disclosed, copied or distributed, and that any other action related to
this e-mail or attachment is strictly prohibited, and may be unlawful. If
you have received this e-mail by error, please notify the sender immediately
by return e-mail, and delete this message. Koninklijke Luchtvaart
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be
liable for the incorrect or incomplete transmission of this e-mail or any
attachments, nor responsible for any delay in receipt.
**


Re: 3494 cleaning (2 nd. try)

2003-03-10 Thread Cook, Dwight E
Eric,
did you double check that you still have cleaning cycles left on
your cleaning tape(s) ?

  mtlib -l/dev/lmcp3 -qL | more
Library Data:
   operational state..Automated Operational State
   functional state...00
   input stations.1
   output stations1
   input/output statusALL input stations empty
  ALL output stations empty
   machine type...3494
   sequence number13256
   number of cells1881
...
   bulk output empty cells0
   avail 3490 cleaner cycles..0
   avail 3590 cleaner cycles..667  

Dwight


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Monday, March 10, 2003 8:27 AM
To: [EMAIL PROTECTED]
Subject: 3494 cleaning (2 nd. try)


Hi *SM-ers!
I haven't received an answer yet, so I'll give it another try:

In the IBM Redbook IBM Magstar Tape Products Family: A Practical Guide I
read the following line:

For 3590 Magstar drives, use a value of 999 mounts to perform cleaning based
on drive request rather than library initiated.

So I changed the value to 999, but nothing happens. Both drives are still
displaying *CLEAN. The library doesn't seem to pick up the drives cleaning
request. How can I make the library clean the drives?
Thanks in advance for any reply!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: 3494 cleaning (2 nd. try)

2003-03-10 Thread Cook, Dwight E
make sure your cleaning volume mask is set properly...
I think by default it is CLN999
I've set it to CLN***
do this all from the console on the ATL...

Dwight



-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Monday, March 10, 2003 10:18 AM
To: [EMAIL PROTECTED]
Subject: Re: 3494 cleaning (2 nd. try)


Hi Richard!
Thank you very much for not ignoring me this time :-)
The output from '/usr/bin/mtlib -l $LMCP -vqK -s fffd':

Performing Query Inventory Volume Count Data using /dev/lmcp0
Inventory Volume Count Data:
   sequence number..10143
   number of volumes0
   category.FFFD

It looks like my library doesn't see the cleaning tape.
About two weeks ago I saw the first cleaning errors in the AIX error log. I
went to the library and saw that the cleaning tape had been ejected to the
bulk I/O area. So I removed it from the library and placed a new one in the
bulk I/O area. The library picked it up from there and placed it in an
empty cell. I thought that was enough, but apparently not. Is there a
special procedure for checking in a cleaner cartridge?
Thanks again!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Monday, March 10, 2003 16:10
To: [EMAIL PROTECTED]
Subject: Re: 3494 cleaning (2 nd. try)


> I haven't received an answer yet, so I'll give it another try:

We've simply been ignoring you.  ;-)

> In the IBM Redbook IBM Magstar Tape Products Family: A Practical Guide I
> read the following line:
>
> For 3590 Magstar drives, use a value of 999 mounts to perform cleaning
> based on drive request rather than library initiated.
>
> So I changed the value to 999, but nothing happens. Both drives are still
> displaying *CLEAN. The library doesn't seem to pick up the drives cleaning
> request. How can I make the library clean the drives?
> Thanks in advance for any reply!!!

Well, the first thing I would check is whether you have cleaning tapes in
your library... that they have cycles left... that they have prefixes (like
CLN) matching the spec defined in the library, etc.

 # Get number of tapes:
 /usr/bin/mtlib -l $LMCP -vqK -s fffd

 # Get available cleaner cycles number:
 /usr/bin/mtlib -l $LMCP -qL

No cleaning tapes = no cleaning.  Available cycles is something we have to
watch for, as it can deplete rather quietly.  (Exhausted cleaning tapes may
auto-eject, but operators may send them offsite.  ;-)

  Richard Sims, BU




Re: Move data to different media type

2003-03-06 Thread Cook, Dwight E
Sure...
just: move data <volname> stg=<newstgpool>
I move tapes back into diskpools all the time...
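A hedged example of what that looks like from an admin session (the volume name and pool name are placeholders):

```shell
# Move everything on tape volume 000123 into the pool named DISKPOOL
move data 000123 stgpool=DISKPOOL
```

TSM repacks the files to fit the target device class, which is why differing media capacities are not a problem.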

Dwight



-Original Message-
From: Michael Raine [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 06, 2003 9:36 AM
To: [EMAIL PROTECTED]
Subject: Move data to different media type


Has anyone ever tried moving data from one stg pool to another with a
different device class/media type?
The two media types have different storage capacities.

Did you have any success, or were there issues?

Thanks


Re: progressive backup vs. full + incremental

2003-03-05 Thread Cook, Dwight E
Old-style full+incremental (as it was some 15-20 years ago...):
Say you run a weekend full with weekday incrementals.
Say on Friday your environment goes down.
You restore from your weekend full.
You restore from your Mon incr (all the data on the tapes).
You restore from your Tue incr (all the data on the tapes).
You restore from your Wed incr (all the data on the tapes).
You restore from your Thu incr (all the data on the tapes).
You restore from your Fri incr (if it had already run, all the data
on the tapes).
Now, if 75% of the data on the system changes daily, you just restored
475%...
So if the environment has 1 TB of data, you just restored 4.75 TB
to get it back...

With the other method (TSM's incremental forever) you simply restore the
1 TB and you are done...
That is, TSM tracks all the data, so there is no need to lay down all those
incrementals that you won't use anyway.

The difference between the old way and the new way is simply the database
needed to track all the information.

And remember, you really aren't going back to the very first backup with
the new way...
you are simply going back to ONLY THE DATA YOU NEED!
Tapes hold a LOT of data that can be accessed very fast (compared to 20
years ago).
Current-day tapes, drives, & ATLs can almost be viewed as old DASD when you
look at the capacities & access times.
So in the new way you just keep all the data out there ~somewhere~ and
simply ask for what you need when you need it.
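The 475% figure is just the full plus the five incrementals, using the hypothetical 75% daily change rate above:

```shell
# full restore = 100% of the data; each of 5 incrementals re-restores 75%
full=100; incr=75; days=5
echo "restored: $((full + days * incr))% of the system's data"
```

versus a flat 100% with TSM's incremental-forever model.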

Dwight



-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 05, 2003 8:21 AM
To: [EMAIL PROTECTED]
Subject: progressive backup vs. full + incremental


Hello everyone!

I was wondering why full + incremental would result in a longer restore
time than the progressive backup methodology?  Several co-workers thought
full + incremental would be quicker, because you wouldn't have to go back
to the very first backup of a file and then restore all of the
incrementals; you would just go back to the most recent full backup and
apply the incrementals after that point.  When I went to explain the
reasoning behind this I had some problems understanding the concept myself,
so I was hoping someone could explain both methods, why they differ in
restore time, and why progressive is better than full + incremental.
Thank you so much for any help you can lend on this matter!



Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


Re: Dir, Filespace backup question

2003-02-26 Thread Cook, Dwight E
One thing you have to watch out for is that...
when you do that, you alter the keys which TSM uses...
Key fields in various situations are NODE, FILE_SYSTEM (filespace_name),
directory path, i.e. the stuff between the mount point & where the file
resides (hl_name), and FILE (ll_name).
Now you used to have /app/1st72gbarray/somedir/somefile
keys used to be
app
1st72gbarray/somedir
somefile
Now, if you do what it seems you are going to do, your keys will change as follows...
app/1st72gbarray
somedir
somefile
So in the future you might have to do the following...
If looking for older copies of backups & archives, use
q backup {/app}/1st72gbarray/somedir/somefile
If looking for newer copies of backups & archives, use
q backup {/app/1st72gbarray}/somedir/somefile
Use the {} to designate what should constitute the ~file_system~ name.

Oh, and I guess since it radically changes the keys, it goes without saying
that TSM will treat all the new names as brand-new files...

There is the ability to rename your filesystems within TSM but I've never
specifically looked at the results of that action as it relates to the
issues mentioned above...

hope this helps,
later,

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 26, 2003 4:08 AM
To: [EMAIL PROTECTED]
Subject: Dir, Filespace backup question


Hi TSMers

I have a question for you.

I'm running TSM 4.2.2.12 on a Solaris 2.7 machine. We are about to do some
reconfiguration of the file systems of one of our clients as follows.

At the moment we have one massive /app dir with loads of subdirs for
various applications (5 72GB disks software raided...yuk). What we are
looking at doing is making some of the dir's actual mount points. i.e. the
/app/users/fred would be copied to a new mount point making sure that all
file permissions, access dates etc remain unchanged, and then mount that
new file system as /app/users/fred (same as before).

Will TSM see this as completely new data, regardless of whether or not we
maintain the same dates, timestamps, etc.? If so, is there a way round this?

Please feel free to ask for more info If I have been too vague.

Many thanks

Farren Minns - John Wiley & Sons Ltd


Re: delete volhist t=dbb (was no subject)

2003-02-21 Thread Cook, Dwight E
Where you used toda=today-500, you are saying you only want to delete
entries OLDER than 500 days ago... In other words, you are asking it to
KEEP everything less than about 1.5 years old, which is probably not what
you want...
Use del volhist t=dbb tod=today-7, or some number smaller than 500...
I only keep one week's worth of tsm database backups.
ALSO, you can't delete the ONLY database backup tape no matter how hard
you try.
TSM is very nice and won't let you shoot yourself in the foot...
hope this helps...

Dwight


-Original Message-
From: jiangguohong1 [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 20, 2003 8:18 PM
To: [EMAIL PROTECTED]
Subject:


hi,
In our application I have a question about Tivoli Storage Manager. The
version of the tsm software we use is 5.1.0.0. In order to protect the tsm
server database, we use the administrator command backup db t=full
devc=ltodevclass to perform a full database backup. The backup goes to one
scratch volume in an IBM 3584 tape library; using the command q libv to
check, that volume now shows a status of DbBackup. I want to let the
dbbackup volume turn back into a scratch or private volume, in other words
be usable for normal data backup. Checking the volume history file with the
q volh command shows the following entry:
database backup time: 09/29/2002
I used del volh type=dbb toda=today-500 to attempt to delete the dbbackup
volume from the volume history file, but the result says that zero
sequential volume history entries were successfully deleted. When I then
use q volh to verify, the database backup entry still exists, so I cannot
delete the dbbackup volume. Can you help me with how to delete it?
Please write back ASAP.



Re: Anyone doing 1TB+ Backup Nightly

2003-02-21 Thread Cook, Dwight E
Here there are DB servers with 3.8 TB oracle SAP instances on them and they
backup at a peak rate of 304-342 GB/hr.
How ?
Large client server (Sun E10K with 32 processors)
Gb ethernet
15 concurrent client sessions
client compression
goes to diskpool, not straight to tape...

Each session is able to keep a processor running at about 90+%.
Each session sees a peak transfer rate of about 5+ GB/hr of COMPRESSED
CLIENT DATA at an average of 3.8/1 compression ratio that is about 19+ GB/hr
of actual oracle data base space.
Network transfer rates run about 80-90 GB/hr but at an average of 3.8/1
compression that is the 304-342 GB/hr of actual oracle data base space.
TSM server is a 7017-S70 with only two processors and really doesn't run
that busy (30-45% maybe)
TSM server does utilize ESS storage but we see this same sort of abilities
on tsm servers using large amounts of SSA disks configured as JBOD... but we
try to limit things to one drawer per SSA loop.

Now I said peak transfer rates because at that peak rate the 3.8 TB data
base should backup in about 12.8 hours but it actually runs between 14-16
hours.

Also one can utilize 100 Mb/sec fast ethernet if they are OK with just 40
GB/hr data transfer rates that puts the backup from  14 hours to just under
24...


Dwight


-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 21, 2003 8:34 AM
To: [EMAIL PROTECTED]
Subject: Anyone doing 1TB+ Backup Nightly


I do 1TB+ backups nightly which run for about 8 hours.  A large part of our
backup is a 500GB Oracle database backup to TSM.  The database backs up
over a private 1Gb connection to an AIX P660-6h1 server with 2GB of RAM and
4 processors.  The data then goes to nine 2-disk stripe sets over a 1Gb SAN
connection (I went with stripe sets because RAID5 was just too slow).  I'm
seeing high iowait times on my AIX TSM server, and I'm having a hard time
pinpointing exactly where the bottleneck is.  Currently it takes 5 1/2
hours to back up the 500GB, but I know it can be better.

My question is: are any of you who are moving this amount of data faster
than me using a more powerful server, like an IBM 690?

THanks
***EMAIL DISCLAIMER*** This
email and any files transmitted with it may be confidential and are intended
solely for the use of th individual or entity to whom they are addressed.
If you are not the intended recipient or the individual responsible for
delivering the e-mail to the intended recipient, any disclosure, copying,
distribution or any action taken or omitted to be taken in reliance on it,
is strictly prohibited.  If you have received this e-mail in error, please
delete it and notify the sender or contact Health Information Management
312.996.3941.



Re: q backup shows wrong mgmtclass, BUG ?

2003-02-21 Thread Cook, Dwight E
If you don't specify a dirmc, tsm will bind directory entries to the
management class with the LONGEST retention available in the domain under
which the node is registered...
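A hedged sketch of the usual fix, if you want directories bound somewhere explicit: add a DIRMC option to the client options file (the class name here is hypothetical, and must exist in the node's active policy set):

```shell
# dsm.sys / dsm.opt fragment -- bind directory entries explicitly
DIRMc  STANDARD
```

Otherwise TSM quietly picks the longest-retention class for directories, which is exactly the surprise described above.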


Dwight


-Original Message-
From: Michael Kindermann [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 21, 2003 11:53 AM
To: [EMAIL PROTECTED]
Subject: q backup shows wrong mgmtclass, BUG ?


Hello,

I checked a client's backup results with the q backup command:
the files are saved with the default management class; that's all right.
But the directories all show a different management class, one this client
should never have heard of. It is not in the dsm.opt/dsm.sys files. This
management class keeps data for 6 years.
The corresponding stg pool contains data from a lot of clients, but there
should be only one.
The default management class for the policy domains isn't this management
class either.

The server is 4.2.2.7; so far I have only checked 2 clients: 5.1.5 Linux
(Debian) and 4.1.2.12 on NT 4.0. (That client hasn't been touched for a
year, and the wrong management class has only existed for a few weeks.)

What's gone wrong? Is this a bug?

Thanks
M.Kindermann
Wuerzburg/Germany
[EMAIL PROTECTED]




Re: can't delete filespace

2003-02-20 Thread Cook, Dwight E
3.1 is really old !
Try using a wild card...
first try
q file ZWORYKIN.ITG.UIUC.EDU  *c
if that lists what you desire, try
del file ZWORYKIN.ITG.UIUC.EDU *c

Dwight


-Original Message-
From: Alexander Lazarevich [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 20, 2003 8:08 AM
To: [EMAIL PROTECTED]
Subject: can't delete filespace


ADSM server 3.1 running on AIX 4.3.3 system.

I can't delete a known filespace:

tsm: ADSM> q file zworykin.itg.uiuc.edu

Node Name Filespace Platform Filespace Capacity
  Name   Type  (MB)
- ---    - 
ZWORYKIN.ITG.UIUC.EDU \\zworykin\$c WinNTNTFS   8,746.3
ZWORYKIN.ITG.UIUC.EDU \\zworykin\$f WinNTNTFS  17,500.5

tsm: ADSM> delete file zworykin.itg.uiuc.edu \\zworykin\$c type=any
ANR0852E DELETE FILESPACE: No matching file spaces found for node
ZWORYKIN.ITG.UIUC.EDU.
ANS8001I Return code 11.

Server logs don't give any more info than what's shown above. Any ideas?
I hope the database isn't corrupted. How do I start debugging this?

Thanks in advance,

Alex
---   ---
   Alex Lazarevich | Systems Administrator | Imaging Technology Group
Beckman Institute - University of Illinois
   [EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
---   ---



Re: can't delete filespace

2003-02-20 Thread Cook, Dwight E
O try a query occupancy on the node...
you might find that you've already cleared its data for the most part BUT
probably can't wipe it totally since the registry information is under that
filespace.

Dwight


-Original Message-
From: Alexander Lazarevich [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 20, 2003 9:29 AM
To: [EMAIL PROTECTED]
Subject: Re: can't delete filespace


I know 3.1 is old. We are going to upgrade to TSM 5.1 in the next few
months, assuming the budget gets funded.

It still isn't working:

tsm: ADSM> q file zworykin.itg.uiuc.edu *c$

Node Name  Filespace Platform Filespace Capacity
   Name   Type  (MB)
-- ---    - 
ZWORYKIN.ITG.UIUC.EDU  \\zworykin\c$ WinNTNTFS   8,746.3

tsm: ADSM> del file zworykin.itg.uiuc.edu *c$
ANR2238W This command will result in the deletion of all inventory
references to the data on filespaces that match the pattern *c$ for node
ZWORYKIN.ITG.UIUC.EDU, whereby rendering the data unrecoverable.

Do you wish to proceed? (Yes (Y)/No (N)) yes
ANS8003I Process number 667 started.

tsm: ADSM> q pr
ANR0944E QUERY PROCESS: No active processes found.
ANS8001I Return code 11.

tsm: ADSM> q file zworykin.itg.uiuc.edu *c$

Node Name  Filespace Platform Filespace Capacity
   Name   Type  (MB)
-- ---    - 
ZWORYKIN.ITG.UIUC.EDU  \\zworykin\$c WinNTNTFS   8,746.3


I'd blow away the whole node, but it has another filespace which I need to
preserve. Any more ideas?

Thanks!

Alex
---   ---
   Alex Lazarevich | Systems Administrator | Imaging Technology Group
Beckman Institute - University of Illinois
   [EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
---   ---




On Thu, 20 Feb 2003, Cook, Dwight E wrote:

 3.1 is really old !
 Try using a wild card...
 first try
 q file ZWORYKIN.ITG.UIUC.EDU  *c
 if that lists what you desire, try
 del file ZWORYKIN.ITG.UIUC.EDU *c

 Dwight


 -Original Message-
 From: Alexander Lazarevich [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, February 20, 2003 8:08 AM
 To: [EMAIL PROTECTED]
 Subject: can't delete filespace


 ADSM server 3.1 running on AIX 4.3.3 system.

 I can't delete a known filespace:

 tsm: ADSM> q file zworykin.itg.uiuc.edu

 Node Name Filespace Platform Filespace Capacity
   Name   Type  (MB)
 - ---    - 
 ZWORYKIN.ITG.UIUC.EDU \\zworykin\$c WinNTNTFS   8,746.3
 ZWORYKIN.ITG.UIUC.EDU \\zworykin\$f WinNTNTFS  17,500.5

 tsm: ADSM> delete file zworykin.itg.uiuc.edu \\zworykin\$c type=any
 ANR0852E DELETE FILESPACE: No matching file spaces found for node
 ZWORYKIN.ITG.UIUC.EDU.
 ANS8001I Return code 11.

 Server logs don't give any more info that isn't up above here. Any ideas?
 I hope the database isn't corrupted. How do I start debugging this?

 Thanks in advance,

 Alex
 ---   ---
Alex Lazarevich | Systems Administrator | Imaging Technology Group
 Beckman Institute - University of Illinois
[EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
 ---   ---
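The two spellings of the filespace name that appear above (`\\zworykin\$c` in one `q file` listing, `\\zworykin\c$` in another) make the wildcard choice matter. Assuming TSM's `*` behaves like a shell glob (an illustrative assumption, not a statement about TSM internals), here is a small sketch of why a trailing-character mismatch yields "No matching file spaces":

```shell
# Stand-in for TSM filespace pattern matching, using shell case globs.
match() { case "$1" in $2) echo yes ;; *) echo no ;; esac; }

fs_a='\\zworykin\$c'   # spelling shown by one "q file" listing
fs_b='\\zworykin\c$'   # spelling shown by the other

match "$fs_a" '*c$'    # prints "no"  - this name does not end in c$
match "$fs_b" '*c$'    # prints "yes"
match "$fs_a" '*c*'    # prints "yes" - a looser pattern catches both
```

So a pattern like `*c*` is the safer first try when the exact spelling reported by the server is in doubt.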




Re: Newbie question about Space Reclamation

2003-02-19 Thread Cook, Dwight E
In looking at the output of the q mount I saw the ~waiting~ status and
remembered...
often if a tape encounters errors upon being mounted, recovery work will be
performed on the tape automatically (triggered by the drive) and it will
show ~waiting on mount~ until the drive gets through with it (or at least
with 3590's I've seen this)
It might be that the tape is actually in a drive but hasn't ~officially~
been mounted yet.
I don't know if LTO tapes have all the VCR information on them like a 3590
does but if so, you might be seeing this mount pause while the system
attempts to fix problems that the drive has discovered...


Dwight



-Original Message-
From: Stephen Comiskey [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 11:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Newbie question about Space Reclamation


Dwight,

Thanks for the pointers...

I did a q vol 476afxl1 f=d and access=Read/Write, the tape is in the
library and there are no requests outstanding and Mount Limit = Drives :-(

My head is starting to hurt !!

Stephen


From: Cook, Dwight E <DWIGHT.E.COOK@saic.com>
Sent: 19/02/03 16:47
To: Stephen Comiskey/Dublin/IE/RoyalSun@RoyalSunInt
Subject: RE: Newbie question about Space Reclamation

THIS MESSAGE ORIGINATED ON THE INTERNET - Please read the detailed disclaimer below.

check:
  - availability of the tape 476AFXL1 (q vol 476AFXL1 f=d);
    make sure it is available, i.e. acc=readwrite
  - availability of the tape within the ATL (q libvol * 476AFXL1);
    make sure it is checked in...
  - try a query request, because if a tape needs to be checked into a
    library a request will be issued
  - availability of your drives (q drive);
    make sure they are all on-line and OK...
  - the mount limit associated with the device class used by these tapes
    (q dev blah f=d); look for "Mount Limit: DRIVES" -- DRIVES means the
    limit equals the number of available drives; it could also be a hard
    number...
try those things first...

Dwight



-Original Message-
From: Stephen Comiskey [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:19 AM
To: [EMAIL PROTECTED]
Subject: Newbie question about Space Reclamation


Hi,

I'm only new to TSM and on a bit of a steep learning curve.

We have a scheduled job each Friday to reclaim our copy pool tapes (update
stg backup_copy_pool reclaim=75), the process starts and the output volume
is mounted OK however the server wait for the input (60K+ seconds).  I've
pasted some output from the console below...

q proc
Process Description  Status

 -
Space ReclamationOffsite Volume(s) (storage pool
  BACKUP_COPY_POOL), Moved Files: 0, Moved Bytes:
  0, Unreadable Files: 564, Unreadable Bytes:
  10,214,383. Current Physical File (bytes):
  180,031,126

  Waiting for mount of input volume 476AFXL1 (821
  seconds).

  Current output volume: 505AFXL1.

q mount
ANR8379I Mount point in device class LTOCLASS1 is waiting for the volume
mount to complete, status:
WAITING FOR VOLUME.
ANR8330I LTO volume 505AFXL1 is mounted R/W in drive LTO_DRIVE1 (\\.\TAPE0), status: IN USE.
ANR8334I 2 matches found.

I know I'm missing something real simple here. I've searched a number of
forums (including this list's archives) but I may be entering the wrong
search parameters.

Any assistance you can provide is much appreciated.

cheers
Stephen



---
The following message has been automatically added by the mail gateway to
comply with a Royal  SunAlliance IT Security requirement.

As this email arrived via the Internet you should be cautious about its
origin and content. Replies which contain sensitive information or
legal/contractual obligations are particularly vulnerable. In these cases
you should not reply unless you are authorised to do so, and adequate
encryption is employed.

If you have any questions, please speak to your local desktop support team
or IT security contact.
---



Re: Inclexcl not picking up Management Class

2003-02-07 Thread Cook, Dwight E
I'll assume you did have backslashes when you used them in the
include/exclude statement and not the slashes you have in your note...

I've found that when in doubt, ask TSM what it expects...
Try doing a q file boxwa009 and take its specification for D: and use
it...
probably \\boxwa009\d$ so try \\boxwa009\d$\...\*
just a thought.

Dwight
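A sketch of what the UNC-style include line would look like in the options file (the LARGE class name comes from the note below; the exact UNC spelling is the thing Dwight suggests confirming with `q file` first):

```shell
# Hypothetical include line for dsm.opt, binding everything on the
# D: drive to the LARGE management class (UNC spelling is assumed):
include \\boxwa009\d$\...\* LARGE
```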


-Original Message-
From: John Naylor [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 07, 2003 9:55 AM
To: [EMAIL PROTECTED]
Subject: Inclexcl not picking up Management Class


Hi All,
I have an annoying problem
Hopefully the Doc will be able to fix me up
Seriously, I need  to make a particular drive  in a WINNT client use large
capacity tapes for its backups
I have a management class LARGE which does this and works for other
clients in
this domain
For this particular client, however we code the option file and we have
tried a
lot of variations
it insists on seeing this line per query inclexcl  :-
include  d:*large
as
include d:/.../*

I know the line is being read because the query incl/excl shows it in a
different position if I move it around in the options file.

The full options file follows:-

  PASSWORDACCESSGENERATE
TCPSERVERADDRESSIBMBK
NODENAME  BOXWA009
schedmode prompted
DOMAIN c: D: E: L:
include d:*large
Exclude.File e:\pagefile.sys
EXCLUDE.DIR c:\temp
EXCLUDE.DIR c:\Program Files\Sophos SWEEP for NT
EXCLUDE.DIR c:\Documents And Settings
EXCLUDE.FILE c:\WINNT\system32\config\default
EXCLUDE.FILE c:\WINNT\system32\config\default.log
EXCLUDE.FILE c:\WINNT\system32\config\sam
EXCLUDE.FILE c:\WINNT\system32\config\sam.log
EXCLUDE.FILE c:\WINNT\system32\config\security
EXCLUDE.FILE c:\WINNT\system32\config\security.log
EXCLUDE.FILE c:\WINNT\system32\config\software
EXCLUDE.FILE c:\WINNT\system32\config\software.log
EXCLUDE.FILE c:\WINNT\system32\config\system
EXCLUDE.FILE c:\WINNT\system32\config\system.alt
EXCLUDE.FILE c:\Program Files\Tivoli\TSM\baclient\dsmsched.log
EXCLUDE.FILE c:\WINNT\system32\dns\dns.log
EXCLUDE.FILE c:\WINNT\system32\PERFLIB_*.*

Thanks for any input,
John




**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission.

Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**



Re: PMR 02528, 082

2003-02-07 Thread Cook, Dwight E
And ya know... this is why I just LOVE TSM !

you can do anything you want...

take one drawer of SSA and set it up as JBOD and put it all as a storage
pool for non-critical backups/archives

take another drawer of SSA and set it up in a raid of your choice for more
critical backups/archives

bleed either or both of those into tape pools that have copy pools or ones
that don't

do things like, with hsm, force a backup to exist prior to migration, make
the backups go to different storage pools than the migrated data,
yadayadayada...

Any level of protection I've ever been asked/required to provide, I've been
able to achieve with ADSM / TSM.

Dwight



-Original Message-
From: Allen Barth [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 07, 2003 11:41 AM
To: [EMAIL PROTECTED]
Subject: Re: PMR 02528, 082


[lobbing armor-piercing verbiage]

 Oh ye of narrow vision and holder of golden horseshoe of hardware luck:

1.  Have you ever lost a stg pool disk?  And before you answer "well, ya
just back up again":
2. SOME DATA IS IMPOSSIBLE TO BACK UP AGAIN!
3. I REPEAT!

Along with regular clients, I backup data from Sybase and Oracle via
SQLBACKTRACK.  Some of this data is an incremental.  In this case
incremental FROM WITHIN the db server.  IE point-in-time data of pages
that have changed.  Should that data be lost, there is no way to restore
beyond what was lost unless another FULL or complete backup has been
created, but it then could be the case that the full is too late a
point-in-time.

Yup, I've already been down that road, and there aren't any good sights to
see.  Since then, I use raid-5 storage pools with floating hot spares
wherever I can.   I know I'm still open to hardware failure issues, but
the likelihood of taking a hit is greatly reduced.  Performance hit?  Our
performance measurement guy saw almost no difference in throughput with
raid-5 versus non raid-5 in the TSM environment.  Basically says to me
that the ever-famed bottleneck isn't visiting dasd land right now.  Also
keep in mind that NO storage layout/method/etc can protect against
corrupted data being written.

--
Al




Stapleton, Mark [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
02/07/03 08:51 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: PMR 02528, 082


From Steven Schraer:
It looks like storage pools are recommended for raid 1 (mirroring), raid
0+1 (mirroring and striping) or raid 5 (distributed parity). These are
safe methods to protect the storage pool.  Do you know of any companies
that use just raid 0 (striping) on their storage pools?  Is there an
issue of tsm losing a storage pool and the database having issues due to
the lost storage pool data?

From: William Rosette [mailto:[EMAIL PROTECTED]]
 Does any TSM gurus have any suggestions for our AIX admin

[donning my advocacy armor]
I still don't see any reason to create redundancy for the disk storage
pool. Unless you're not using a tape library, there's no reason for it.
The disk pool should get flushed to a more stable medium, and that flush
should take place fairly soon after the client backups to disk finish.
Why waste gobs of disk on something that's going to flushed clear every
day?

As far as the db and log are concerned, just create volume copies with TSM
and make sure the copies are on separate physical disks.

That's really all there is to it.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: Archive with delete question

2003-02-06 Thread Cook, Dwight E
I would believe this would be due to file sizes and the fact that TSM uses
aggregates, where a lot of little files will be bunched together.
Or to do with other things along the lines of txnbytelimit txngroupmax
etc...

TSM won't delete the file from the client UNTIL IT IS SURE IT HAS IT ON THE
SERVER AND THAT TRANSFER IS COMMITTED TO THE TSM DB.  (thank goodness)  so
you are probably just seeing the differences in transactions based on the
unique settings of all your misc. parameters that control the way ~things
work~.

Dwight



-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 06, 2003 1:14 PM
To: [EMAIL PROTECTED]
Subject: Archive with delete question


Quick question:

Sometimes when we run the command  archive /somedir/somefile*
-delete=yes, we have seen 2 different behaviors:
- We see that it archives 1 file then deletes it, archives the next
file, deletes it, etc.
- We see that it archives a bunch of files before it starts deleting
files. Kind of in batches of files.

I know I've seen both behaviors, but I'm not sure why. Is it ..
- interactive dsmc command as opposed to a cron job?
- different versions of TSM?
- different OSes?
- a TSM client setting I'm not familiar with?

I have looked through the manuals and found nothing, so I thought
I'd bounce it off the group before I started to delve into testing to see
where and when I see the different behaviors.

We are looking for more consistent behavior from the command to keep
busy filesystems from filling up.

Thanks,

Ben Bullock
Unix administrator
Micron Technology Inc.



Re: 3590 Partitioning

2003-01-28 Thread Cook, Dwight E
I hadn't thought of that but in looking at my old GC35-0154-02 IBM SCSI Tape
Drive, Medium Changer, and Library Device Drivers (Installation and User's
Guide) I do see where the write option of tapeutil allows you to write a
file.

# backup ~myfile.tar~ to tape
tapeutil -f /dev/rmt0 write -s myfile.tar
can then read with
# restore ~myfile.tar~ from tape
tapeutil -f /dev/rmt0 read -d myfile.tar
all sorts of useful stuff around chapter 10 (tape subcommands on pages
89-92), which is where the above examples came from.

Dang-it Steve, now I'm going to have to play around with creating a tape
validation script since I see the wtest and rtest commands... oh well, this
old dog might as well learn a new trick or two...

Dwight


-Original Message-
From: Steve Harris [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 5:39 PM
To: [EMAIL PROTECTED]
Subject: 3590 Partitioning


HI All,

This is  a 3590 question  rather than TSM as such, but this is the best
forum for it.


I need to take system images of several AIX boxes each week to SAN Attached
3590E drives in my 3494.
It seems like overkill to devote a whole 3590 tape to each image as they
will only be a few gig each.

I stumbled across some doc which implies that a 3590 tape can be partitioned
into smaller segments which can then be used independently. (see items 36
and 38 on the tapeutil menu).  However, this is old doc, and I assume the
feature is from the early days of 3590 when 10GB was an enormous amount of
storage.

Has anyone used this partitioning feature? in what circumstances?
Are there any gotchas?

Thanks

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia.




**
This e-mail, including any attachments sent with it, is confidential
and for the sole use of the intended recipient(s). This confidentiality
is not waived or lost if you receive it and you are not the intended
recipient(s), or if it is transmitted/ received in error.

Any unauthorised use, alteration, disclosure, distribution or review
of this e-mail is prohibited.  It may be subject to a statutory duty of
confidentiality if it relates to health service matters.

If you are not the intended recipient(s), or if you have received this
e-mail in error, you are asked to immediately notify the sender by
telephone or by return e-mail.  You should also delete this e-mail
message and destroy any hard copies produced.
**



Re: At what level does the 4.2.x server have the session count fix?

2003-01-28 Thread Cook, Dwight E
what platform ?
when did it break ?
I'm at 4.2.2.0 on AIX and I get into the millions on a regular basis...even
10's of millions as I seem to remember.

   Sess Number
  ------------
     1,354,176
     1,365,047
     1,369,914

Dwight


-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 28, 2003 10:30 AM
To: [EMAIL PROTECTED]
Subject: At what level does the 4.2.x server have the session count fix?


In 4.2.3.0, after the session count gets to 65K, the session number can no
longer be stored in the summary table resulting in an ANRD message.
(Can anyone say maxint?)

I'm pretty sure a fix is out... I hope.



Re: Restoring files off one server to another

2003-01-28 Thread Cook, Dwight E
OK, is the server a client of a tsm server or is it the actual tsm
server ?
if it is just a different client node, just do a
dsmc -virtualnodename=other_name
and you may (if you know the other node's password on the tsm server)
restore its backups to your current client box.

Dwight
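In practice that looks something like the following (everything here is a placeholder: the node name, file spec, and destination; the client option involved is VIRTUALNODENAME, and you will be prompted for the dead node's TSM password):

```shell
# Hypothetical retrieve of the dead node's backups onto this machine;
# DEADNODE, /data/*, and /restored/ are placeholders.
dsmc restore -virtualnodename=DEADNODE "/data/*" /restored/ -subdir=yes
```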


-Original Message-
From: Hope Zaleski [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 28, 2003 11:40 AM
To: [EMAIL PROTECTED]
Subject: Restoring files off one server to another


Hello Fellow TSMr's

I have a server which has died and we were wondering if it
is possible to take selected files that were backed up
from this dead server and put them onto another server
of the same brand. Is this possible and if so, where can I
find the info that will tell me how it's done?




Hope Zaleski
Network Assistant/Faculty-Staff Support Coordinator
Carthage College
Kenosha Wisconsin
262-551-5748



Re: 3494 and dsmserv restore db

2003-01-27 Thread Cook, Dwight E
I can say that from an RS/6000 AIX server using a 3494-L12 (with 3590-B1A's)
that the restore db works JUST FINE !
You don't have to do anything funny...

Dwight



-Original Message-
From: Steve Roder [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 1:05 PM
To: [EMAIL PROTECTED]
Subject: 3494 and dsmserv restore db


Hi All,

 According to the doc., dsmserv restore db only supports manual and
scsi libraries.  Has anyone on this list restored a db via a 3590 drive
inside a 3494?  Since the 3494 is not libtype=scsi, I am thinking that I
will have to put my 3494 into pause or manual mode, and fake TSM into
thinking the 3590 is standalone, and then manually insert the dbbackup
volume into the correct drive.

Has anyone done this?

Thanks,

Steve Roder, University at Buffalo
HOD Service Coordinator
VM Systems Programmer
UNIX Systems Administrator (Solaris and AIX)
TSM/ADSM Administrator
([EMAIL PROTECTED] | (716)645-3564)



Re: archiving questions

2003-01-23 Thread Cook, Dwight E
Also, how often will they perform the archives ???
how many files make up the data ???
etc...
You want to take such things into consideration because they will have an
impact on your environment in general (in the form of tsm server data base
growth, tsm db performance issues, tape library capacity issues, etc...).

Anymore I look at data that is to be kept for over 1 year as stagnant data
(that I don't want in the environments).
When long term retention is required, I generally register a node by the
name of existing node name_exp, have them push any data they require kept
to the tsm server using that node name, then I export to two sets of tapes
and send to offsite locations.

Also, the way you may use relative dates during an import, if they need to
extend the retention, all I do is extend the time I keep the tapes.

just some food for thought...

Dwight



-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 23, 2003 7:30 AM
To: [EMAIL PROTECTED]
Subject: Re: archiving questions


Hi Michelle,

You will need a management class with an archive copy group that retains
the data for 3 years. Therefore you will need to:

1) Create a new management class in the desired domain/policy set. (See
DEFINE MGMTCLASS in the Administrator's Reference.)

2) Create an archive copygroup in the new management class with the
desired retention. (See DEFINE COPYGROUP in the Administrator's
Reference.)

3) Activate the policy set. (See ACTIVATE POLICYSET in the Administrator's
Reference.)

If you are not the TSM server administrator, then you will need to have
that person perform these tasks. Once the new management class is
available, you can run the archive and bind the files to that new
management class.
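Andy's three steps can be sketched as an admin session. The domain and policy-set names (BAKKERLAND / BAK_POLICY) are taken from the q domain output later in this thread; the class name ARCH3YR, the 1095-day (3-year) retention, and the destination pool are illustrative assumptions:

```shell
# Hypothetical dsmadmc session for the three steps above; names and
# values other than the domain/policy set are assumptions.
dsmadmc -id=admin -password=secret "define mgmtclass bakkerland bak_policy arch3yr"
dsmadmc -id=admin -password=secret "define copygroup bakkerland bak_policy arch3yr type=archive destination=tapepool retver=1095"
dsmadmc -id=admin -password=secret "activate policyset bakkerland bak_policy"
```

After activation, the client archives with `-archmc=ARCH3YR` (or the class is picked via include statements) so the copies bind to the 3-year class.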

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Michelle Wiedeman [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01/23/2003 06:03
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:archiving questions



Hi all,

A client of mine requests that 220GB of databases be archived to tape
and kept for 3 years.
I've been looking for a way to do this, but the manuals only speak of
how the archive client command works, not really how things work on the
server side. I assume I'll have to make another management class which
defines how long to keep the files etc., but I'm not sure how to go
about this. Below are the specifics on our current mgmt classes; the
domain I'm speaking of is the one called progress.

Does anyone have some ideas on how to do this?

thnx a lot,
michelle

**
*** --- Q DOMAIN
**


Policy      Activated   Activated         Number of    Description
Domain      Policy      Default           Registered
Name        Set         Mgmt Class        Nodes
----------  ----------  ----------------  -----------  ------------------------
BAKKERLAND  BAK_POLICY  BAK_MNGMNTCLASS   4            domain for progress
STANDARD    STANDARD    STANDARD          34           Installed default policy
                                                       domain.




**
*** --- Q MGMTCLASS
**


Policy      Policy       Mgmt              Default   Description
Domain      Set Name     Class             Mgmt
Name                     Name              Class?
----------  -----------  ----------------  --------  ------------------------
BAKKERLAND  ACTIVE       BAK_MNGMNTCLASS   Yes       progress managementclass
BAKKERLAND  BAK_POLICY   BAK_MNGMNTCLASS   Yes       progress managementclass
STANDARD    ACTIVE       STANDARD          Yes       Installed default
                                                     management class.
STANDARD    STANDARD     STANDARD          Yes       Installed default
                                                     management class.




**
*** --- Q COPYGROUP * * * STANDARD TYPE=BACKUP
**


Policy     Policy     Mgmt      Copy      Versions  Versions  Retain    Retain
Domain     Set Name   Class     Group     Data      Data      Extra     Only
Name                  Name      Name      Exists    Deleted   Versions  Version
---------  ---------  --------  --------  --------  --------  --------  -------

Re: extending archive retentions

2003-01-22 Thread Cook, Dwight E
NO but YES, sort of...
NO, you can't alter the management class (and thus the retention period) of
archived files

BUT you could do something like export the node (or as little data as
possible but still including the data you need);
then you could save those export tapes for 5 years...
When you import a node, you can request that it use relative dates, so
archived data will still be available for the same number of remaining days
as when it was exported. (that was about as clear as mud...)

Dwight
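A sketch of the export/import pair described above (the node name, device class, and tape volume are placeholders; DATES=RELATIVE on the import is what preserves the remaining retention days):

```shell
# Hypothetical export of one node's archive data to dedicated tapes:
dsmadmc -id=admin -password=secret \
  "export node somenode filedata=archive devclass=tapeclass volumenames=VOL001"

# Years later, pull it back in with relative dates so each file keeps
# the same number of remaining retention days it had at export time:
dsmadmc -id=admin -password=secret \
  "import node somenode filedata=archive devclass=tapeclass volumenames=VOL001 dates=relative"
```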


-Original Message-
From: Glass, Peter [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: extending archive retentions


I have some clients who have archived files with 1-year retentions. Now they
say these files need to be retained for 5 years.
Is there a way we can extend these retentions without having to retrieve and
re-archive these files?
If so, how?
Thanks, in advance.

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
 * [EMAIL PROTECTED]




Re: extending archive retentions

2003-01-22 Thread Cook, Dwight E
It will use the default (If you later change or
replace the default management class, the server uses the updated default
management class to manage the archive copy)... here is the doc; it is from
the TSM publications, the Server Guide.


Archive Copies
Archive copies are never rebound because each archive operation creates a
different archive copy. Archive copies remain bound to the management class
name specified when the user archived them.

If the management class to which an archive copy is bound no longer exists
or no longer contains an archive copy group, the server uses the default
management class. If you later change or replace the default management
class, the server uses the updated default management class to manage the
archive copy.

If the default management class does not contain an archive copy group, the
server uses the archive retention grace period specified for the policy
domain.


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 12:25 PM
To: [EMAIL PROTECTED]
Subject: Re: extending archive retentions


Have you actually tested that ?
My understanding (from long ago) was that internally, archives are/were
stored with an ~expires on date~ or ~expires after so many days~.  A
~security~ feature... once an archive was created, that was it, no changing
it... because if you could extend a retention period, you could also shorten
it.  (but then again, an admin with sys auth would just delete the
filespace)

I might have to test that...

Dwight


-Original Message-
From: Miller, Ryan [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 12:13 PM
To: [EMAIL PROTECTED]
Subject: Re: extending archive retentions


Actually, you can't assign a different management class to an already
performed archive, but you CAN alter the management class retention period
itself and effectively alter the retention period for all archives that used
that management class, so therein lies the possible problem.  If other
archives have used this management class, they will also be retained for the
longer period.  But if the number of archives that have used this management
class is low, this may be your best and easiest solution.  I have done that
before to save myself considerable time and effort when things like this
need to be done.  Once the 5 years is up, change the management class
retention time back and all will be normal again.

Ryan Miller

Principal Financial Group

Tivoli Certified Consultant
Tivoli Storage Manager v4.1


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 11:59 AM
To: [EMAIL PROTECTED]
Subject: Re: extending archive retentions


NO but YES, sort of...
NO, you can't alter the management class (and thus the retention period) of
archived files

BUT you could do something like export the node (or as little data as
possible but still including the data you need)
then you could save those export tapes for 5 years...
When you import a node, you can request that it use relative dates, so
archived data will still be available for the same number of remaining days
as when it was exported. (that was about as clear as mud...)

Dwight


-Original Message-
From: Glass, Peter [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: extending archive retentions


I have some clients who have archived files with 1-year retentions. Now they
say these files need to be retained for 5 years.
Is there a way we can extend these retentions without having to retrieve and
re-archive these files?
If so, how?
Thanks, in advance.

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
 * [EMAIL PROTECTED]




Re: Can a backup be stopped if not completed by a certain time?

2003-01-21 Thread Cook, Dwight E
You can look into two server options that can cancel a session if it isn't
getting X amount of data transferred after Y period of time.
THROUGHPUTDATATHRESHOLD specifies a throughput threshold that a client
session must reach to avoid being cancelled after the time threshold is
reached.
THROUGHPUTTIMETHRESHOLD specifies the time threshold for a session after
which it may be cancelled for low throughput.

Say you have a 120 GB data base, and you expect 40 GB/hr transfer rate thus
it should finish in 3 hours...
throughputtimethreshold 180(minutes)
throughputdatathreshold 11650 (KB/sec)

So if my math is right, the above would cancel ANY client session after 3
hours IF it wasn't running at 40 GB/hr.

NOTE the ANY client session...
I don't know if there is anything on the client side to do the same
sort of thing.

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109
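The conversion above (40 GB/hr into the KB/sec value the option expects, taking GB = 1024*1024 KB) is easy to sanity-check:

```shell
# 40 GB/hr in KB/sec: 40 * 1024 * 1024 KB spread over 3600 seconds.
# Integer division gives the 11650 KB/sec figure used above.
echo $(( 40 * 1024 * 1024 / 3600 ))   # prints 11650
```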



-Original Message-
From: David Browne [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 21, 2003 7:33 AM
To: [EMAIL PROTECTED]
Subject: Can a backup be stopped if not completed by a certain time?


I run a pre and post command to stop and then restart applications on an NT
server in order to back it up.
The applications have to be brought up by a certain time in the morning.
Is there any way I can do a time check during  the backups or set a time
for the applications be restarted even if the backup has not completed?



Re: AIX, TSM, IP Routing...

2003-01-17 Thread Cook, Dwight E
Daniel,
this is all just ~network routing~.
We have, for example, an AIX 4.3.3 ML09~ish box (I don't know if any maint.
has gone on lately) with 3 NICs (two 100Mb and one Gb).
They are each on a different subnet.
~default~ routing takes subnet traffic out the interface attached to that
subnet and uses the default gateway for traffic not on any subnet
attached directly to the server.
a.b.129.x traffic will go out an a.b.129.z subnet interface
a.b.55.x traffic will go out an a.b.55.z subnet interface
a.b.12.x traffic will go out an a.b.12.z subnet interface

Now, I would guess you probably don't have a whole bunch of different
subnets within your organization (or at least served by your tsm server) so
you may set up routes on your tsm server like...
Say you have 6 subnets, with 3 being as mentioned above and the other 3
being a.b.C, a.b.D, and a.b.E.
So all traffic on the same subnets as your tsm server is OK already.
Now just put subnet routes on your tsm server to direct traffic as required.
(I believe this would be the proper syntax)
if a.b.C nodes have the best route out the a.b.129.z interface, issue
route -v add -net a.b.C a.b.129.z
if a.b.D nodes have the best route out the a.b.55.z interface, issue
route -v add -net a.b.D a.b.55.z
and if a.b.E nodes have the best route out the a.b.12.z interface, issue
route -v add -net a.b.E a.b.12.z

hope this helps...

Dwight
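Written out with the fuller AIX syntax (the a.b.* subnet placeholders are from the note above; the explicit /24 netmask is an assumption, and the "gateway" argument is the local interface address on the subnet that should carry the traffic):

```shell
# Hypothetical AIX static routes, one per remote subnet; all addresses
# are placeholders carried over from the note above (/24 assumed).
route add -net a.b.C -netmask 255.255.255.0 a.b.129.z
route add -net a.b.D -netmask 255.255.255.0 a.b.55.z
route add -net a.b.E -netmask 255.255.255.0 a.b.12.z
netstat -rn    # verify the resulting routing table
```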



-Original Message-
From: Daniel Sparrman [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 12:24 AM
To: [EMAIL PROTECTED]
Subject:


Hi

This is perhaps more of a communication question rather than a storage
management question, but I will still ask, considering some of you out
there should have encountered this problem.

One of our customers has a P-Series 610 running AIX 4.3.3 ML09. It's
equipped with two 100Mb Ethernet adapters and one Gb adapter. Previously,
they were all located on the same subnet. Now, we've moved the Gb adapter
into a separate subnet.

The configuration looks like this:

1st 100Mbs Ethernet = 192.168.1.1/24 default gateway 192.168.1.254
2nd 100Mbs Ethernet = 192.168.1.2/24
1st Gigabit Ethernet = 192.168.5.1/24

This has become a problem. Every time a client connects to the TSM server,
it responds from the adapter which has the default gateway set. The
customer is running spanning tree, which means it won't allow the TSM
server to receive data on one adapter and respond from another. However, I
haven't found a way to have the TSM server respond from the same adapter on
which the client connected. In my opinion, this can only be solved in two
ways: either having a default gateway for each adapter, or setting some
kind of option telling the TSM server to always respond on the same
adapter on which it received the data.

BUT, the first alternative doesn't work. AIX 4.3.3.09, in contrast to 5L,
won't let me set multiple default gateways, specifying one for each adapter.
In AIX 5L, I have the option, when setting static routes, of binding the
static route to a specific adapter. When trying to add a second default
gateway on AIX 4.3.3.09, it only tells me that there is already a default
gateway.

The second alternative, however, seems theoretically impossible. The AIX
server shouldn't be able to respond to a client connecting from a different
subnet from the adapter on which it received the initial client session.
This is because that adapter doesn't have a default gateway, and therefore
shouldn't be able to find its way back to the client, located on a different
subnet.

This problem also existed when we had all adapters running on the same IP
subnet. The GB adapter could receive data initiated from the clients, but
the responses always went through the first 100Mb Ethernet adapter, which
had the default gateway.

Is there anybody out there running an AIX server with multiple adapters on
multiple subnets who has been able to have AIX respond to the client on the
same adapter on which the client initially started the IP session?

Perhaps I'm not a communication expert, but to me, this seems like a fairly
simple problem.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 Hägernäs
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51



Re: database reorganisation

2003-01-17 Thread Cook, Dwight E
If you want to remove a DB volume because you aren't using that much space
(based on the Pct. Util.) but you can't, because the max reduction isn't
large enough.

Be it good or bad, I can't say, but we've had 10 TSM servers (most are going
on 7 years old now) and the only place I've ever unloaded & loaded the TSM
database has been in our test environment... just to see what goes on...
(our TSM DBs are between 8 GB and 32 GB)

Dwight 



-Original Message-
From: Francois Chevallier [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 4:31 AM
To: [EMAIL PROTECTED]
Subject: database reorganisation


What are the best criteria to know if it's time to reorganize the TSM
database (by unloaddb/loaddb)?
Sincerely,

François Chevallier
Parc Club du Moulin à Vent
33 av G Levy
69200 - Vénissieux - France
tél : 04 37 90 40 56



Re: Missed Backup

2003-01-17 Thread Cook, Dwight E
It is a double check type situation...
A schedule won't run unless the client node has had at least ONE opportunity
to see that it is going to occur.
Say schedmode is prompted and your q sched period is 24 (and, for example,
the query happens at 12:00 noon-ish).
OK, if I were a ~bad admin~ I might try to schedule an event on a client
that would run the command
rm -R *
not a pretty sight!
So that client node's admin checks his log at 13:00, sees things are
normal...
I sneak in and put my schedule on the TSM server at 14:00 to run at 15:00...

see the problem !

Now, to correct the situation of a node not picking up a schedule, just
bounce the scheduler once the alterations to the schedules have taken place.
The first thing the client scheduler does upon starting is to ask the server
~what is going on, when...~
At that time the server will respond with the currently scheduled tasks
(which will include your recent alterations), and thus things will be OK...

*** JUST ANOTHER WAY TSM PROTECTS THINGS A WHOLE LOT BETTER THAN OTHER
PRODUCTS ***

but I know I'm preaching to the choir

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 9:10 AM
To: [EMAIL PROTECTED]
Subject: Missed Backup


TSM Server AIX 4.3.3
TSM Server Software 5.1.5.2

Why is it that anytime a client is removed from a schedule and added to a
different one, the backup is missed? This always happens, from my
recollection; no matter what server version I've been on, it's happened.
The only way I've seen around this is to stop and start the scheduler on the
client after the client has been assigned to the new schedule.
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: mtlib output question

2003-01-16 Thread Cook, Dwight E
the last 3 are state, class, type.

values ???
State: I like 00 ... anything other than that is a problem...
Class: Looks like 10 is 3590 1/2 inch cartridge tape
Type: Looks like 00 is HPCT 320m nominal length
  and 01 is EHPCT extended length


EA0025 03EA 00 10 01
zdec23@tsmutl01/home/zdec23  mtlib -l/dev/lmcp5 -qV -VEA0002
Volume Data:
   volume state.00
   logical volume...No
   volume class.3590 1/2 inch cartridge tape
   volume type..EHPCT extended length
   volser...EA0002
   category.03EA
   subsystem affinity...01 02 03 04 05 06 07 08
09 0A 0B 0C 0D 0E 0F 10
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
zdec23@tsmutl01/home/zdec23 
AAA044 0190 00 10 00
zdec23@tsmutl01/home/zdec23  mtlib -l/dev/lmcp1 -qV -VAAA006
Volume Data:
   volume state.00
   logical volume...No
   volume class.3590 1/2 inch cartridge tape
   volume type..HPCT 320m nominal length
   volser...AAA006
   category.0190
   subsystem affinity...07 08 09 0A 0B 0C 0D 0E
0F 10 03 04 05 06 01 02
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
zdec23@tsmutl01/home/zdec23 
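The field layout can be decoded with a short script (a sketch only; the lookup tables below hold just the values mentioned in this thread, and the real tables are larger):

```python
# Known class/type codes, taken from the mtlib -qV examples above.
CLASS = {"10": "3590 1/2 inch cartridge tape"}
TYPE = {"00": "HPCT 320m nominal length", "01": "EHPCT extended length"}

def parse_inventory_line(line):
    """Split one `mtlib -qI` line into its five fields and decode them."""
    volser, category, state, klass, vtype = line.split()
    return {
        "volser": volser,
        "category": category,
        "state": state,  # "00" is normal; anything else warrants a look
        "class": CLASS.get(klass, "unknown (" + klass + ")"),
        "type": TYPE.get(vtype, "unknown (" + vtype + ")"),
    }
```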

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Justin Bleistein [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 10:58 AM
To: [EMAIL PROTECTED]
Subject: mtlib output question


Hey Fellow AIXers and TSMers,

  The following is the output of the mtlib -l/dev/lmcp0 -qI command
(output truncated). As you know, mtlib is a command which uses the atldd
AIX device driver to communicate with the IBM 3494's database, telling you
which tapes are physically in the library. I understand the first field of
this output is obviously the volser of the tape and the second field is the
category. I'm confused as to what the last three fields mean: 00, 10, 01.
Any assistance as to what these fields indicate about a particular volume
would be appreciated. Thanks!

PR3516 012C 00 10 01
PR3517 012C 00 10 01
PR3518 012C 00 10 01
PR3519 012C 00 10 01
PR3520 012C 00 10 01
PR3521 012C 00 10 01
PR3524 012C 00 10 01
PR3525 012C 00 10 01
PR3526 012C 00 10 01
PR3527 012C 00 10 01
PR3528 012C 00 10 01
PR3529 012C 00 10 01
PR3530 012C 00 10 01
PR3532 012C 00 10 01
PR3533 012C 00 10 01
PR3534 012C 00 10 01
PR3538 012C 00 10 01
PR3548 012C 00 10 01

--Justin Richard Bleistein
Unix Systems Administrator (Sungard eSourcing)
Desk: (856) 866 - 4017
Cell:(856) 912 - 0861
Email: [EMAIL PROTECTED]



Re: 3494 and tapes question.

2003-01-15 Thread Cook, Dwight E
that becomes a real nightmare fast!
I've had to do that in the past and I can say that it really REALLY isn't
what you want to do!
How big is your library? I'd look into adding a storage expansion frame,
OR look into drive upgrades from B's to E's... that alone doubles your
capacity, and then if you get K tapes you double again (so you end up with 4x
the capacity)

Last time I did it, I was just trying to make things last until we moved an
atl, at which time the added capacity would be added.  We had to bring in
tape racks, spend hours (generally a full day) checking tapes out, putting
them into the racks (making sure to keep them in order), putting scratch
tapes in, checking them into TSM.
Now, generally about 3 days later we would have to do all this again... going
to a new section in the tape rack to put the next group of volumes in so we
could keep things in volser order (no, our racks weren't large enough to
have absolute slots for all possible volsers). OH, and make sure and plan on
working at least one weekend day, and maybe two, because every now and again
you will have to take all the tapes out of the rack, sequence them all, and
put them back in the rack...
Now, after the HATE calls about how a restore of some critical data failed
due to media not being in the atl at 03:00 AM (remember, it will take TSM one
hour for the ~insert media into the atl~ request to time out), you get up,
drive in, and explain at the ~calling Jesus to the cross~ meetings why an
automated tape library requires a person to drive in at 03:00 to have a tape
mounted, and why the restore then failed again because the data set spanned
tapes and you happened to randomly eject the 2nd tape to make room for the
first, so as soon as you got home you got a page to return.
Now, besides doing your regular work and all the previously mentioned stuff,
you get to spend time figuring out which tapes outside the atl have now
gone scratch and may be checked back in... Oh, and yes, you have to figure
out some way to easily generate a list of some 100? 200? 300? random,
out-of-sequence volsers to build checkin commands around...

get the picture ? ? ?

Shoot me now, PLEASE !

just my 2 cents worth...

Dwight


-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 15, 2003 10:47 AM
To: [EMAIL PROTECTED]
Subject: 3494 and tapes question.


Hi TSMers

I am looking at what to do in the next year or so when our tape library
becomes full. Bearing in mind that we don't do much in the way of restores
(most backups are for DR), is it feasible to remove tapes from the library
and just keep them on site in case they are requested for a restore,
reclamation etc.? Does anyone out there do this kind of thing, and how does
it work for you?

Thanks all

Farren Minns - John Wiley & Sons Ltd



Re: Unable to restore entire directory structure

2003-01-15 Thread Cook, Dwight E
Sure... probably on your new server those don't exist as filesystems.
Only filesystems may be used in domains, so the source server is treating
them as filesystems.
The filesystem is part of a key internally in TSM...
on your ~test~ box you probably only have /var as the filesystem...
use something like
restore {/var/opt/oracle/lawtest/arch}/* -subdir=yes
to force TSM to use the source node's filespace name instead of keying off
the local filesystem /var
this can also bite you if you do something like change subdirectories into
independent filesystems and go looking for backups from when they were just
subdirectories...

I believe I got the syntax right with the {}'s, but you might want to double
check the manual...
mainly the {}'s are to force the classification of what tsm should think is
the filesystem name, so use them to outline the filesystem name as it is
thought to exist within tsm for the source client node... use q filespace
sourcenode from an admin session to double check what tsm thinks the
proper names are.

hope this helps...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Sonya Gilliland [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 15, 2003 12:26 PM
To: [EMAIL PROTECTED]
Subject: Unable to restore entire directory structure


We are currently using TSM version 5.1.5.2 to back up our N-class servers
running HP-UX 11.00. In order to test restores, we are attempting to
restore backups to a virtual node and have found that we are unable to
restore the entire directory structure. For
example, on the system being backed up, the directory
/var/opt/oracle/lawtest exists with several subdirectories and files. When
attempting to restore the entire directory structure to another node,
only a subdirectory will be restored, with no files, and not all
subdirectories will be restored, even though each
directory is selected in the restore. The directories to be backed up are
specified in the dsm.opt file as domain /var/opt/oracle/lawtest/arch,
domain /var/opt/oracle/lawtest/data1, domain
/var/opt/oracle/lawtest/index1 ... etc. Does anyone have any suggestions
on restoring the entire directory structure?



Re: To Collate or Not to Collate? - UPDATE

2003-01-15 Thread Cook, Dwight E
Allen makes a critically important implied statement: YOU MUST TEST YOUR
RECOVERY PLAN!
If not, and you have to use it and it doesn't go smoothly...
Personally, I classify myself as probably being at the lowest place on the
earth... and even if something were to initially miss me, it would
undoubtedly/eventually settle on top of me!!!
So... you request the test and state its requirement as being to ensure
functionality.
If management says no, make sure and put together a few notes on what
~might~ go wrong, like potentially 36 hours of nothing but tape mounts &
dismounts to restore just a single server (as in Allen's case, where 800
tapes were required).
Then make sure and save all the e-mails (print them out and lock them in a
fireproof safe) so you can mount a defense later ;-)

Dwight

-Original Message-
From: Allen Barth [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 15, 2003 12:31 PM
To: [EMAIL PROTECTED]
Subject: Re: To Collate or Not to Collate? - UPDATE


Glad you found it!

However, regarding the collocation=no issue on copypools

Having done TSM DR recoveries at an offsite location a few times, let me
share my experiences.   When I started with TSM I had one copypool with
colloc=no.  We then did our first DR recovery test, where my goal was to
recover the TSM server and ONE other AIX client.  The base OS (AIX) was to
be restored via sysback, user filesystems via TSM.   Originally this required
the ENTIRE copypool (800+ tapes back then) to be sent from the vault to
the DR location, as TSM provides no command to see which vols would be
needed for a client node restore (hint, hint, IBM).   Since then I've been
able to put an SQL query together to get that info, but it takes quite a
while to execute.  This trims down the number of tapes, but the number of
tapes was still quite large (100+).  Furthermore, the number of tape mount
requests during the restore was astronomical, as tapes were requested
multiple times.  After re-thinking TSM and DR needs, I now have a
separated stgpool tree for unix data.   Collocation is enabled for both
primary and copypools.  At the last DR test, the number of tapes needed
from the vault was further reduced to around 40, and the restore process
took significantly less time.   Let's not forget to factor in the time
required for physical tape processing (mount delay, seek time, rewind,
unload).   This can add up to significant wall time.

Regards,
Al



Re: Why my library doesn't compress data?

2003-01-14 Thread Cook, Dwight E
So to me that indicates that your clients are probably compressing the data.
Here are examples from some of my environments...
Volume Name   Storage  Device  EstimatedPct   Volume
  Pool NameClass Name   Capacity   Util   Status
(MB)
  ---  --  -  -

AAA025  PR3590       3590DEVC   9,262.5  100.0     Full
AAA031  3590P1       3590DEVC   9,150.4  100.0     Full
AAA042  PR3590       3590DEVC   9,170.1  100.0     Full
AAA049  3590P1       3590DEVC   9,194.0  100.0     Full
AAA055  3590P1       3590DEVC   9,080.6  100.0     Full
AAA063  3590P1       3590DEVC   9,142.9  100.0     Full
AAA066  3590P1       3590DEVC   9,210.3  100.0     Full
...
AAA143  3590P1       3590DEVC  15,914.5   95.7     Full
AAA455  10YRARCH     3590DEVC  40,000.0    2.8  Filling
AAB047  10YRARCH     3590DEVC  20,000.0    6.3  Filling
AAB078  3590P1       3590DEVC  10,240.0   43.5  Filling
AAB128  10YRARCH     3590DEVC  40,000.0    4.7  Filling
AAB152  3590P1       3590DEVC  10,240.0   11.9  Filling
AAB918  ARCHLOGTPEC  3590DEVC  10,240.0   72.3  Filling
AAC836  PR3590       3590DEVC  10,240.0   74.4  Filling
AAF275  3590P1       3590DEVC  22,485.5  100.0  Filling
AAF359  10YRARCHCP   3590DEVC  10,000.0   74.1  Filling

OK, now this might look odd but I'll explain where things come from
I used to never assign an Est/Max Capacity (MB) in my environments.
Back with older 3590 tape drivers, the driver estimated the capacity at
10,000MB, as is seen with volume AAF359.
When new technology came out (and an associated driver; remember, newer
drivers support the older devices), the default estimated capacity changed
to 20,000MB, as seen in volume AAB047.
Once again, new stuff came out and new drivers were installed, and the
default capacity changed to 40,000MB, as seen in volume AAA455.
Now, since this environment was still the old 3590-B1A tape drives with old
J series tapes, the base capacity was still only 10 GB (aka 10,000MB).
Once I became tired of seeing all these 20,000MBs & 40,000MBs in an
environment that was actually 10,000MB, I assigned a value of 10,240 MB to
the Est/Max Capacity (MB) for device class 3590DEVC (see volumes like
AAB078).
Now, that estimated capacity value is assigned when a scratch volume becomes
private AND doesn't change until the volume either becomes full (see AAA143)
OR the amount of data written to it exceeds the estimated capacity (see
AAF275).
So with all this said, what I can say about the above tapes is...
Tapes like AAF359 were first written to back long ago (about 2 tape driver
levels ago).
Tapes like AAB047 were first written to back sort of long ago (about 1 tape
driver level ago).
Tapes like AAB128 were first written recently (on the current tape driver
level) but prior to me setting the Est/Max Capacity (MB) for the device
class 3590devc.
Tapes like AAC836 were first written very recently (on the current tape
driver level) after I got tired of seeing false information being reported
and set the Est/Max Capacity (MB) to 10,240 MB.
Tapes like AAA066 probably contain ONLY client data that was compressed at
the client... because when the tape drive attempts to compress already
compressed data, it actually grows; thus these tapes don't hold their
reported 10 GB (the 10 GB read from disk grows when written to tape, but
the value reported by TSM is the amount that was pulled from disk to put
on the tape).
Tapes like AAA143 let me know that 1 or more clients aren't doing as they
have been requested and are NOT using TSM client compression... because the
compression at the tape drive actually reduced the data's size (and we know
already compressed data grows with attempted further compression).
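That rule of thumb for FULL volumes can be sketched in a few lines (illustrative only; the 10,000MB native figure fits these B1A/J-tape examples, and the 5% tolerance is an assumption):

```python
NATIVE_MB = 10_000  # native capacity assumed for 3590-B1A drives with J tapes

def classify_full_volume(reported_mb, tolerance=0.05):
    """Guess at a FULL volume's contents from its reported capacity."""
    if reported_mb < NATIVE_MB * (1 - tolerance):
        return "client-compressed data (grew at the drive)"
    if reported_mb > NATIVE_MB * (1 + tolerance):
        return "uncompressed client data (drive compression helped)"
    return "roughly native capacity"
```

Applied to the table above: AAA025 (9,262.5 MB full) looks client-compressed, while AAA143 (15,914.5 MB full) shows clients ignoring the client-compression request.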


Dwight



-Original Message-
From: Elio Vannelli [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 2:27 AM
To: [EMAIL PROTECTED]
Subject: Re: Why my library doesn't compress data?


Thanks for your answer.
The problem is that TSM marks the tape as 1MB while the tape is
FILLING. When the volume is FULL TSM says 6500MB. I don't have only
*.zip in my 250GB file server, but a lot of *.pdf, *.doc, images, CAD
files, ...
None of the 40 volumes marked as FULL reaches 7GB.

Thanks in advance

Elio Vannelli



Re: TSM migration problem recovery log question???

2003-01-14 Thread Cook, Dwight E
Because of internal locks that aren't being readily resolved.
Look for a client (or clients) that have been connected for a long time (say
4-6+ hours).
Also look for other things like expiration, reclamation, migration, etc...
processes that might be running as well.
It might only be client sessions, though...
Anyway, generally (I've discovered) the log won't clear until those long-
running clients either finish normally OR I cancel them.
We have had to increase some of our log sizes to 6 GB (6144 MB) in order to
allow things to run to completion and not have our log fill up and cause
sessions to die off due to log space being exhausted.

Now, media inaccessible... how is your library attached?
I've noticed (in the past) that if the library manager can't talk to TSM (or
TSM can't talk to the library manager) AND TSM requires a tape mount (which
fails because of the lack of communication), then even after I restored
communications to the library, I would still have to bounce TSM to get it to
start talking to the library again...
Is there something that is causing a loss of communications with your
library prior to this media-not-available situation?
When things fail due to media not available, does it report a specific
volser?
When things are bounced and start working again, does the problem volser (if
identified) become accessible?

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 7:23 AM
To: [EMAIL PROTECTED]
Subject: TSM migration problem  recovery log question???


The recovery log on TSM is 95% full.  I do a full DB backup every day, which
I thought would reset the log to 0%.  It is in normal mode, not
rollforward.  Why would it be so high when I just did a full backup last
night that ended at 9 PM?  The log right now is 4.6 GB with 4.2 GB
utilized.  Any suggestions?

We have TSM 4.1.3 on OS/390.  My other problem is that our disk storage
pool filled to 100%, getting message anr1021w: media inaccessible.  When
trying to force migration (which should've occurred automatically) the
processes still failed with the media inaccessible messages.  We were not
in tape drive allocation and plenty of scratch volumes were available.  No
messages in the logs even pointed to a specific tape that was being called
in.  Bouncing the tsm server triggered one of four possible migration
processes.  Around 8 hours after bouncing the server 4 migration processes
finally did kick off after hitting the set threshold.  Any ideas or insight
as to what the problem could be???

Thanks for any help,

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: (logical/physical volumes)

2003-01-13 Thread Cook, Dwight E
What I have noticed over the years (in AIX environments) is that TSM spreads
the load out across its direct access volumes.
Say you have 9 logical volumes across 3 physical ones and you have 3 inbound
client sessions...
TSM will spread that inbound traffic across 3 of its (logical) volumes.
If you have 3 logical volumes per physical volume, the inbound work might
all be going to the same physical volume.
So there is a possibility of a slight performance problem in having multiple
logical volumes per physical volume; I say slight because I very rarely see
small numbers of inbound sessions (or activity) on the tsm servers.
If you resize to a 1 logical per physical, you might see a slight increase
in performance...
just my 2 cents worth based on general observations over the past 7 years.
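The effect described above can be sketched with a toy model (an assumption-laden illustration: it supposes sessions pick logical volumes round-robin and that logical volumes are laid out contiguously on the physical disks):

```python
def physical_targets(n_sessions, logical_per_physical, n_physical):
    """Map each inbound session to the physical disk its logical volume sits on."""
    logicals = n_physical * logical_per_physical
    # session i -> logical volume (i mod logicals) -> that logical's physical disk
    return [(i % logicals) // logical_per_physical for i in range(n_sessions)]
```

With 3 logical volumes per physical disk, 3 sessions can all land on physical disk 0; with 1 logical per physical, the same 3 sessions spread across all 3 disks.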

Dwight


-Original Message-
From: David Browne [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 8:19 AM
To: [EMAIL PROTECTED]
Subject:


We are running TSM 4.2.2.0 on OS390/2.10 and we are in the process of
replacing our mod 3 dasd with mod 9's.

Currently I have a disk storagepool with a mixture of mod 3 and mod 9 dasd
volumes, however when I define my dasd storage pool volumes I sized  them
as mod 3.
 I also have my database volumes broken in to mod 3 size volumes if they
reside  on a mod 9.

My question is will I see a performance issue if I resize all of my storage
pool volumes  and database volumes to a full mod 9 in size?



Re: /VAR skipped during incremental

2003-01-13 Thread Cook, Dwight E
Are you sure it skips /var ?
Could it be that /var is just a subdirectory under the filesystem / ?
(and you don't see ~...processing file system /var...~ ? )
I'd also double check the include/exclude list specified in the dsm.sys file
for the tsm server being used.
I'd also check the restart date of the scheduler against the last time the
dsm.sys file or the include/exclude list file was changed... if either of
those files is newer than the last start time of the scheduler, I'd bounce
the scheduler and see what happens.
Not that anyone would ever do this BUT someone might set an include/exclude
list to
exclude /.../*
bounce the scheduler
set the include/exclude list back to normal (to make things ~look~ OK).
(sigh...)
WOW ! gives me an idea...
a daily job that reports to me any clients that have save dates for
those files which are newer than the last bounce date of the scheduler
task...

Dwight



-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 8:36 AM
To: [EMAIL PROTECTED]
Subject: /VAR skipped during incremental


Hi *SM-ers!
I have got one TSM client (4.2.1.0) which skips /VAR during an incremental.
dsmc i /var works fine, there are no errors in the dsmerror.log.
Does anybody know what could be the cause of this?
Thanks in advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain confidential
and privileged material intended for the addressee only. If you are not the
addressee, you are notified that no part of the e-mail or any attachment may
be disclosed, copied or distributed, and that any other action related to
this e-mail or attachment is strictly prohibited, and may be unlawful. If
you have received this e-mail by error, please notify the sender immediately
by return e-mail, and delete this message. Koninklijke Luchtvaart
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be
liable for the incorrect or incomplete transmission of this e-mail or any
attachments, nor responsible for any delay in receipt.
**



Re: Why my library doesn't compress data?

2003-01-13 Thread Cook, Dwight E
OK... TSM will estimate the capacity of a device/media pair based on what
you set with the device class definition; if nothing is specified there, it
will be based on what the ~driver~ reports. THIS IS WHILE A VOLUME IS
FILLING.
Now, what is actually shown as the capacity once the volume is FULL is what
has been written to the tape.
What does all of this mean?
I'd say the 6500MB is close to your stated 7 GB native capacity... if you do
a q devclass f=d and see Est/Max Capacity (MB) as blank, the 6500 is being
provided by the device driver, and that is where it is coming from...
Now, the 21 GB is estimated at 3:1 compression, and if your full volumes
show only about 12-13 GB, that means that either you have uncompressed
client data that doesn't compress very well OR you have a mixture of
compressed & uncompressed (at the client) client data.
Basically when a tape's EndOfTape marker is reached, TSM assigns a capacity
equal to what it has written to that specific tape.
If you front end things with a diskpool, the amount reported on a tape is
what was read from the disk and put on the tape.
Client-compressed data doesn't reduce at the tape drive with tape drive
compression, SO if all your client data came into your tsm server as ~client
compressed~, I'd expect the capacity of FULL tapes to be about 7 GB.
If all your data came in from your clients uncompressed AND it were all
Oracle DB file type data I'd expect your FULL volumes to be about 21-22 GB.
If any of the data is something like a .zip file or other type of already
compressed data outside tsm client compression, it won't compress at the
drive either and that can cause variations in what the capacity is reported
on FULL volumes.

whew, my explanation isn't very pretty but does it clear things up a bit ???

Dwight



-Original Message-
From: Elio Vannelli [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 10:06 AM
To: [EMAIL PROTECTED]
Subject: Why my library doesn't compress data?


HI all,

This is a newby question.

I have an IBM 3575 that can store 7GB of native data per volume, or up to
21GB per volume compressed.
My file server stores 250GB. When I have a full backup of my file server
(under Windows 2000), I have 40 volumes, and every volume has
EST_CAPACITY_MB between 6200MB and 6500MB. Other servers can store
up to 13200MB, but no more.

May you help me, please?

Thanks in advance.

Elio Vannelli



Inactive version expiration, just a note...

2003-01-13 Thread Cook, Dwight E
With the recent Windows virus that zeros out files, blah blah blah...
I tested the behavior of TSM expiration (on an AIX 4.3.3, TSM 4.2.2.0
server).
What did I double check ? ? ?
With retain extra, does that mean
A) X days from the time a file goes inactive
or
B) X days from the creation date of an inactive file

In other words,
if I have an active version that is basically 100 days old
and if my retain extra value is 35
and if I get a new ~active version~ today
thus making the first ~inactive version~ actually 100 days old in TSM
but it has only been 0 days in an inactive status
DOES EXPIRATION wipe the file today or 36 days from now? ? ?

Answer in my environment is B)
TSM doesn't cut off an inactive version until it has been inactive for X
number of days.

So for all those folks out there with my retention example spreadsheet
from 1998 that helps explain how tsm processes file expiration and such...
you might add a note internally, such as I did.
*
Note: inactive versions expire # days after they went inactive, not if the
inactive version's create date is older than # days.
*
When I was initially testing things back then, I performed daily alterations
thus making the full understanding a little unclear because the age of the
file was within 1 day of when it went inactive.

In re-reading the manual on retainextra of define copygroup, it still isn't
totally clear, SOOO nothing like actually testing and seeing what happens
:-)

Just thought folks might like to know (for sure)...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: Inactive version expiration, just a note...(addition)

2003-01-13 Thread Cook, Dwight E
One of these days I'll learn to THINK!!!

I said B) and then gave the statement of A) (sigh, can I go home yet???)

It is A) all the way!!!

sorry if anyone had heart flutters...

Inactive files don't get purged until they have been inactive for retain
extra number of days.

Dwight



-Original Message-
From: Cook, Dwight E
Sent: Monday, January 13, 2003 11:38 AM
To: '[EMAIL PROTECTED]'
Subject: Inactive version expiration, just a note...


With the recent Windows virus that zeros out files, blah blah blah...
I tested the behavior of TSM expiration (on an AIX 4.3.3, TSM 4.2.2.0
server)
What did I double check ? ? ?
With retain extra, does that mean
A) X days from the time a file goes inactive
or
B) X days from the creation date of an inactive file

In other words,
if I have an active version that is basically 100 days old
and if my retain extra value is 35
and if I get a new ~active version~ today
thus making the first ~inactive version~ actually 100 days old in TSM
but it has only been 0 days in an inactive status
DOES EXPIRATION wipe the file today or 36 days from now? ? ?

Answer in my environment is B)
TSM doesn't cut off an inactive version until it has been inactive for X
number of days.

So for all those folks out there with my retention example spreadsheet
from 1998 that helps explain how tsm processes file expiration and such...
you might add a note internally, such as I did.
*
Note: inactive versions expire # days after they went inactive, not if the
inactive version's create date is older than # days.
*
When I was initially testing things back then, I performed daily alterations
thus making the full understanding a little unclear because the age of the
file was within 1 day of when it went inactive.

In re-reading the manual on retainextra of define copygroup, it still isn't
totally clear, SOOO nothing like actually testing and seeing what happens
:-)

Just thought folks might like to know (for sure)...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: label/checkin libvolume

2003-01-13 Thread Cook, Dwight E
Well, TSM doesn't want to label them while they are PRIVATE and TSM doesn't
really know what is going on...
You might be able to do a del vol against them to get them back to scratch,
then do your label libvol
I believe you can use the volrange if you specify search=yes BUT I'd make
sure and use overwrite=no to be safe.
You might be able to get your label libvol command to run even with them as
private if you used overwrite=yes but I never like using that especially if
I specify a volrange... creates room for errors and your
vols would still be in a private status.
Since you have 220 tapes you need to correct...
if you have excel and a unix box with vi, do this...
in excel, in 1,A  stick your initial volser (like AAA001)
then do the little expand down the column and let excel fill
you will notice that excel will increment the numbers :-)
now cut that out of the excel spreadsheet and over in a unix window, vi a
file, call it MYMACRO
then go into insert mode and paste all your volsers.
Now go into cursor movement mode and do a
:g/^/s//del vol /
that will stick
del vol
at the front of all your volsers, now save that and call it as a macro from
an  admin session.
takes less than a minute...
Other helpful VI edit commands if you have a list of volsers already in a
file and you wish to do something like build checkin commands...
:g/^/s//checkin libvol mylibrname /
followed by
:g/$/s// checklabel=no status=scratch dev=mydevtype/
and now you just wq the file and call it from an admin session...
you can do checkouts the same way...
hope this helps.
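
If there's no Excel handy, the same macro can be generated with a plain shell
loop; a minimal sketch (the volser prefix AAA and the range are made-up
examples):

```shell
#!/bin/sh
# Build a TSM admin macro of "del vol" commands for volsers AAA001..AAA005.
i=1
while [ "$i" -le 5 ]; do
  printf 'del vol AAA%03d\n' "$i"
  i=$((i + 1))
done > MYMACRO
cat MYMACRO   # shows: del vol AAA001 ... del vol AAA005
```

The resulting MYMACRO file can then be called from an admin session just like
the vi-built one.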

Dwight



-Original Message-
From: Peter Ford [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 1:32 PM
To: [EMAIL PROTECTED]
Subject: label/checkin libvolume


Last Friday, we upgraded our 3584 tape library with an expansion frame.
After the expansion was installed, I loaded 220 new LTO tapes and ran a
checkin libvol ...  TSM tried to use these volumes over the weekend and
couldn't read the labels (because they aren't labelled).  Well, I should
have run a label libvol ..., and now all of the new tapes have their
status set to Private thanks to TSM.  I tried to run label libvol ...
this morning when I discovered the issue, but it didn't work due to the fact
they are already in the library.

My question is, how do I retroactively label these volumes?  Is there some
command that I could issue to update them all at one (ie: volrange=XXX,XXX)?


Thanks in advance!
Peter

Peter Ford
System Engineer


Stentor, Inc.
 5000 Marina Blvd,
 Brisbane, CA 94005-1811
 Main Phone: 650-228-
 Fax: 650 228-5566
 http://www.stentor.com
 [EMAIL PROTECTED]



Re: Objects compressed by

2003-01-13 Thread Cook, Dwight E
I recall being told or reading (a long time ago) that the algorithm used by
3590 tape drives is the same as used by the adsm/tsm client.
Either at the client or at the drive, it is just running data through a
program...
but this would tend to support different possibilities based on what piece
of hardware and how that manufacturer/engineer designed things.

Dwight



-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 2:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Objects compressed by


On Thu, 2003-01-09 at 07:11, Zlatko Krastev/ACIT wrote:
 Yes, usually TSM node-compression gets better compression ratio than tape
 drive compression.

I'd like to see figures on this. My experience has been that
hardware-based compression is both faster and more efficient than
software-based compression.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: TSM backup command line

2003-01-09 Thread Cook, Dwight E
I take it you have a specific reason why you don't want him running
(completing) a night's backup ?
If it is only due to files that change frequently being backed up a second
time within a day, you could look into setting the backup copy group's
frequency to 1.  (all depends on specifically what you are wanting to stop)
You could do something like have an admin schedule that would lock the node
in the morning and unlock it just prior to its next backup (but that can
bite you)
If the admin is really causing problems (as I know they can) try explaining
why they shouldn't be doing what they are doing and how it impacts things
overall...
If they don't see the light, then go down the ~lock the node out~ road (and
make sure and save some of the mail messages where you tried to resolve
things in a friendly way).

Dwight


-Original Message-
From: Bernard Rosenbloom [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 3:07 PM
To: [EMAIL PROTECTED]
Subject: TSM backup command line


We have an end-user that accesses the TSM backup client command line on
our TSM INTEL clients (4.2.1.30) and runs the dsmc i command to start
manual backups on servers that failed scheduled backups the previous
night. He is the System Administrator on the INTEL boxes so we can't
prevent him from accessing the TSM command line.
We don't want the incremental backup to run. Anybody have any
ideas on what TSM measures we can take to prevent the backup from running
or kill the session ?

AIX(4.3.3) TSM (4.2.1.11) server
Thanks
Bernard



Re: Retention policy for inactive files

2003-01-09 Thread Cook, Dwight E
That would be a combination of the retain extra, versions data exists, and
versions data deleted options of the backup copy group.

If management says I want everything for the last 60 days but nothing older
then that
vde:unlimited
vdd:unlimited
re:60
(retain only:60 also)

Now if by date, they mean June 10th 2002 or something specific like that...
tell them good luck
or you could update (and activate...) all your backup copy groups on a daily
basis :-(


Dwight



-Original Message-
From: Kleynerman, Arthur [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 10:29 PM
To: [EMAIL PROTECTED]
Subject: Retention policy for inactive files


Hello all,

I am faced with a situation where I need to delete all inactive files past a
certain date, but keep all inactive versions prior to that same date. Is
there any way that this can be accomplished?

I have a TSM server V4.2.3 running on AIX 4.3.3.

Thanks,
Arthur


---

The information contained in this e-mail message, and any attachment
thereto, is confidential and may not be disclosed without our express
permission.  If you are not the intended recipient or an employee or agent
responsible for delivering this message to the intended recipient, you are
hereby notified that you have received this message in error and that any
review, dissemination, distribution or copying of this message, or any
attachment thereto, in whole or in part, is strictly prohibited.  If you
have received this message in error, please immediately notify us by
telephone, fax or e-mail and delete the message and all of its attachments.
Thank you.

Every effort is made to keep our network free from viruses.  You should,
however, review this e-mail message, as well as any attachment thereto, for
viruses.  We take no responsibility and have no liability for any computer
virus which may be transferred via this e-mail message.



FW: Archives are much slower than backup

2003-01-09 Thread Cook, Dwight E
I sent this yesterday and attached the doc/text on the cleanup archdir
command but it didn't post because it was too big.
If someone needs the info on cleanup archdir I could send it directly to
them.

Dwight



-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 2:27 PM
To: 'ADSM: Dist Stor Manager'
Subject: RE: Archives are much slower than backup


has the environment been around for a long time ?
lots of archives taken on a regular basis ?

look into the cleanup archdir command
I've attached the info on it...


Dwight



-Original Message-
From: Kempadasiah, Umesh [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 2:13 PM
To: [EMAIL PROTECTED]
Subject: Archives are much slower than backup


Hi ALL,
 I have TSM server and client ver 5.1.5
The archives are very much slower than the selective or incremental backups.
Both are backed up to tapes directly
Is there anything I can tune or modify at the server or client level to
improve the speed of archives?


Thanks and regards
Umesh K



How to find zero byte files that are backed up ?

2003-01-09 Thread Cook, Dwight E
OK, I've had a question put to me and so far I just don't know...
With such viruses as the ones that zero out files under Windows, is there
an easy way to see if TSM is holding any zero-byte files ? ? ?  (ie. check
from the TSM server for potentially infected client nodes...)

I tried testing under Unix (AIX) with a zero byte file and...
I thought that I might be able to look in the adsm.contents table for
file_size=0
but what I found is (two things)...
1) due to aggregates, the file size listed for most files is the
size of the aggregate
2) if you have a zero byte file, you won't even have an entry for it
in the adsm.contents table
The output of a show version doesn't list anything about filesize...

Anyone have any tricks to do such a discovery ? (of zero byte backed up
files)

Dwight



Re: How to check utilization of your storagepools automaticly

2003-01-09 Thread Cook, Dwight E
It is found in the admin guide manual for the server for your platform...
Unix layout for TSM 4.2 is... (as found in the IBM/Tivoli manual)
Field Contents
1 Product version
2 Product sublevel
3 Product name, 'ADSM',
4 Date of accounting (mm/dd/)
5 Time of accounting (hh:mm:ss)
6 Node name of TSM client
7 Client owner name (UNIX)
8 Client Platform
9 Authentication method used
10 Communication method used for the session
11 Normal server termination indicator (Normal=X'01', Abnormal=X'00')
12 Number of archive store transactions requested during the session
13 Amount of archived files, in kilobytes, sent by the client to the server
14 Number of archive retrieve transactions requested during the session
15 Amount of space, in kilobytes, retrieved by archived objects
16 Number of backup store transactions requested during the session
17 Amount of backup files, in kilobytes, sent by the client to the server
18 Number of backup retrieve transactions requested during the session
19 Amount of space, in kilobytes, retrieved by backed up objects
20 Amount of data, in kilobytes, communicated between the client node and
the server
during the session
21 Duration of the session, in seconds
22 Amount of idle wait time during the session, in seconds
23 Amount of communications wait time during the session, in seconds
24 Amount of media wait time during the session, in seconds
25 Client session type. A value of 1 or 4 indicates a general client
session. A value of 5
indicates a client session that is running a schedule.
26 Number of space-managed store transactions requested during the session
27 Amount of space-managed data, in kilobytes, sent by the client to the
server
28 Number of space-managed retrieve transactions requested during the
session
29 Amount of space, in kilobytes, retrieved by space-managed objects
30 Product release
31 Product level


Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Macmurray, Andrea (CAG-CC)
[mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 09, 2003 9:54 AM
To: [EMAIL PROTECTED]
Subject: Re: How to check utilization of your storagepools automaticly


Does anybody know where I can find a description of the data found in the
dsmaccnt.log. I do have a feeling that I found what I was looking for for a
long time. (statistics of what my SQL nodes are backing up).

Thanks


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 09, 2003 8:57 AM
To: [EMAIL PROTECTED]
Subject: Re: How to check utilization of your storagepools automaticly


That isn't easy to answer...

here is my 2 cents worth...
For tape pools, you really need to track utilization over a period of time.
Look for how much data is added daily, how much might be going away, how
much expires off each tape, do tapes readily become available via
reclamation or do you end up with lots of tapes at 60% util for extended
periods of time.
I set up little jobs to provide me information like
Thu Jan 9 06:05:17 CST 2003 You have 199 SCRATCH and 1256 PRIVATE volumes of
which 15 are filling, the rest are full.
on a daily basis for all the TSM servers here...
In a unix environment this is just an echo command with a bunch of
dsmadmc -id=blah -pass=blah q libvol | grep Scratch | wc -l
dsmadmc -id=blah -pass=blah q libvol | grep Private | wc -l
dsmadmc -id=blah -pass=blah q vol stat=filling | grep Filling | wc -l

After you track things for a while you can determine at what rate tapes are
consumed and when you will need more.
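
A minimal sketch of that daily report, run here against a canned q libvol
listing so it is self-contained (the library name, volsers, and the dsmadmc
invocation in the comment are all illustrative):

```shell
#!/bin/sh
# In real use the listing would come from something like:
#   dsmadmc -id=blah -pass=blah q libvol > libvol.out
cat > libvol.out <<'EOF'
LIB3584   AAA001   Scratch
LIB3584   AAA002   Private
LIB3584   AAA003   Scratch
EOF
SCR=$(grep -c Scratch libvol.out)
PRI=$(grep -c Private libvol.out)
echo "$(date) You have $SCR SCRATCH and $PRI PRIVATE volumes."
```

Run daily from cron and appended to a log, this gives the consumption trend
over time.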

Same sort of thing applies to the diskpool(s).
If you push to diskpools and then migrate to tape, in general you will want
at least as much total diskpool space as you take in data on a nightly
basis... dsmaccnt.log records are good for this sort of information.

So I don't believe one can totally automate things BUT I do have a lot of
info sent to me automatically for review on a daily or monthly basis...

Dwight


-Original Message-
From: Nilsson Niklas [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 09, 2003 8:39 AM
To: [EMAIL PROTECTED]
Subject: How to check utilization of your storagepools automaticly


Hi



I need a way to check my tapepools to know when to add more scratch and my
diskpools to know when to add more disks.



/Regards Niklas



Re: Client password problem

2003-01-07 Thread Cook, Dwight E
When using passwordaccess generate, you, as an admin, will first need to
connect (using a dsmc session) and do anything to force a connection (like a
q sched), at which time you will be prompted for the password as was set
during the registration of the node to the new tsm server... after that, a
generated password will be used.
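
For reference, a client dsm.sys stanza using this setup might look like the
sketch below (the server name and address are made up; the PASSWORDACCESS
line is the point here):

```
SErvername        tsmsrv1
   COMMMethod         TCPip
   TCPServeraddress   tsmsrv1.example.com
   PASSWORDAccess     generate
```

After the first manual dsmc session supplies the registered password, the
client stores and refreshes a generated one on its own.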

Dwight



-Original Message-
From: Hunley, Ike [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 07, 2003 1:29 PM
To: [EMAIL PROTECTED]
Subject: Client password problem


I moved server backups to a different MVS TSM.  Now each time the server
attempts to access TSM I get the dreaded 137 authentication failure.

I've added password access generate to DSM.SYS and recycled the TSM
scheduler several times.

What else can I do to correct this?





Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
affiliate companies are not responsible for errors or omissions in this
e-mail message. Any personal comments made in this e-mail do not reflect the
views of Blue Cross Blue Shield of Florida, Inc.



Re: $$ACTIVE$$

2002-12-31 Thread Cook, Dwight E
Correct, if you have NEVER gone through the ~validate policy~ and ~activate
policy~ steps you will see an active one listed as $$ACTIVE$$, just a reminder
that you are running with something YOU haven't looked into.

Also if you import a policy from another server, and that policy had an
ACTIVE policy set over on the other server, it will import and have an
ACTIVE policy set BUT it will be listed as $$ACTIVE$$, once again, to let
you know you don't really know for sure what is running because YOU haven't
validated it and/or activated it.

hope this helps...
later,

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Oliver Martin [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 31, 2002 5:25 AM
To: [EMAIL PROTECTED]
Subject: $$ACTIVE$$


I'm running TSM Version 3.7.4 on an NT Server 4.0 SP6, and when I do a q
domain I get $$ACTIVE$$ as the active policy set. But I haven't defined a
policy set with this name.

Thanks for help.

Oli



3590 (E or H) drives per FC interface (6227) ???

2002-12-31 Thread Cook, Dwight E
I've looked through the archives and can't find much.
I've waded through IBM's web sites and can't find specifics...

Has anyone found any recommendations on the max number of 3590's (FC attached)
per FC card on a host ?

Environments I'm  looking at are 7017-S70's that have a max of 4 FC cards
(6227's) per system.
I'm already driving my ESS storage with 2, that would leave me with 2 cards
to drive 8 tape drives.
(or maybe mix everything together and let the device drivers balance the
loads )

So is anybody driving 4 3590's off a single FC adapter and if so how do
your data transfer rates look with 1, 2, 3, and all 4 drives running ?
Math says
1Gb (1024Mb/sec) FC card is 128 MB/sec or 450 GB/hr
100 MB/sec burst data transfer rate of 3590 is 351 GB/hr
so really 1 card per 1 drive but 4 cards could drive 5 drives...
NOW...
With 3590-B1A's and E1A's that are SCSI attached (one drive per one scsi
adapter) I see migrations run at
B1A's 22-ish GB/hr
E1A's 32-ish GB/hr
Now SCSI attached drives have a burst rate of 40 MB/sec (140 GB/hr)
So on my E1A's I see migrations run at 23% of the burst rate

So if I expect the same from FC, migrations across FC would be at 23% of 100
MB/sec or 23% of 351 GB/hr or 80 GB/hr if I have one drive per one FC
card...

So with all that said, a single FC adapter should be able to drive five and
a half (5.5-ish) FC 3590-E1A's or H1A's and run at least as well as SCSI
attached at a one-to-one attachment... right???
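
For what it's worth, the MB/sec-to-GB/hr conversions above check out with
plain shell arithmetic (a 1024 divisor reproduces the 140, 351, and 450 GB/hr
figures quoted):

```shell
#!/bin/sh
# Rough conversion used in the note above: GB/hr = MB/sec * 3600 / 1024
for mbps in 40 100 128; do
  echo "${mbps} MB/sec is about $(( mbps * 3600 / 1024 )) GB/hr"
done
```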

Anyone have any thoughts...
(other than I need a vacation)

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: URGENT:ANS9455E dsmwatchd: Unable to join the local failover group with rc=3!

2002-12-27 Thread Cook, Dwight E
Humm.
Might help to provide some more configuration information.
I'm getting the picture of SP nodes, two set up in a ~high availability~
configuration ???

For example purposes, lets say there are 3 boxes, box1, box2, box3
box1's hostname is TSM
it has only one schedule (or does it have more?) with its tsm server
(which is itself) 
it always gets a Missed for its scheduled event
box2 always has Completed event(s)
box3 always has Completed event(s)

Is box2 or box3 a failover node of box1 ?  

I'm not familiar with High Availability configurations but it seems that
maybe the TSM Software either thinks you are running HA but you're not OR
the TSM Software is trying to link up to the failover node and can't.

Might be that TSM is trying to tell you something elsewhere isn't quite
right...

AND are you running TSM's HSM on BOX1 ? ? ?  (message hints of HSM running)
Just my own thoughts but IF you have a node that is a tsm client of itself,
it IS NOT WISE to have HSM active on it !
What if HSM migrates your DEVCONFIG file ? ? ?  or your DSMSERV.OPT file ? ? ?
just to name a couple...
(mistakes are possible when configuring HSM and someone somewhere
might include something you don't want included)

I think this problem might be out of my ballpark but I'll always offer my 2
cents worth...

Dwight


-Original Message-
From: rachida elouaraini [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 5:59 AM
To: [EMAIL PROTECTED]
Subject: URGENT:ANS9455E dsmwatchd: Unable to join the local failover
group with rc=3!


Hi all,
Your help is much needed.
The status of the CLIENT.BACKUP schedule is Missed ONLY for ONE TSM client,
that is, the TSM server itself (the status for the other TSM clients is
completed).
I have others schedules related to this node (TSM server), the status is
always
MISSED.
I have done what Dwight told me (verify that the scheduler is running,
stopping
it and restarting it) but in vain.
The following message is written EVERY 6 minuts in the dsmerror.log file :

ANS9455E dsmwatchd: Unable to join the local failover group with rc=3!

I find again messages like :

12/26/02   20:23:08 TcpRead(): recv(): errno = 73
12/26/02   20:23:08 sessRecvVerb: Error -50 from call to 'readRtn'.
12/26/02   20:23:08 ANS1809E Session is lost; initializing session reopen
procedure.
12/26/02   20:23:09 ANS1809E Session is lost; initializing session reopen
procedure.
12/26/02   20:23:23 ANS1810E TSM session has been reestablished.
12/27/02   09:51:47 ANS9433E dsmmigfs: dm_send_msg failed with errno 22.
12/27/02   09:51:47 ANS9402E dsmmigfs: Cannot notify dsmwatchd to recover
HSM
operations on a failure node.
12/27/02   09:51:47 ANS9425E dsmmigfs: It was not possible to notify the
dsmwatchd in order to distribute a message within the failover group. The
data
of the current operation may get lost.

What do these messages mean?
Any help is very appreciated.


__
Mailbox - Caramail - http://www.caramail.com



Re: How much data is being backed up per client?

2002-12-27 Thread Cook, Dwight E
A q stg somediskpool f=d will show
...
  Migration in Progress?: Yes
Amount Migrated (MB): 276,894.60
Elapsed Migration Time (seconds): 10,881
...

and once the migration process(es) finish, that should show what was
migrated between when migrations first kicked off and when the last one
finished (or current time if they are still running as in my example above).

Best source of info on client traffic is the dsmaccnt.log file in the tsm
server install directory.
If you, from an admin session, issue the set accounting on it will start
writing accounting records for each client session to the dsmaccnt.log
file.  Make sure you have plenty of space in the filesystem (depending on
how many clients you have and how many sessions they initiate).  Then just
write some little script/pgm that will sum up the info in the records for
the different nodes.  Since we have 10 TSM servers, there are some things I
do only on a monthly basis...

I do things like generate overall traffic numbers like
12/21/2002 tsmsrvz 1218113360 KB 3574432 sec
12/22/2002 tsmsrvz 451869328 KB 3452613 sec
12/23/2002 tsmsrvz 332773700 KB 1675172 sec
12/24/2002 tsmsrvz 1404730729 KB 2841210 sec
12/25/2002 tsmsrvz 285941973 KB 1306029 sec
12/26/2002 tsmsrvz 292468155 KB 1213964 sec

Others that show breakdown of types of traffic by tsm client by tsm server
(below are ArchCnt, ArchKB, RetrCnt, RetrKB, BkupCnt, BkupKB, RestCnt,
RestKB, CommKB, CommSec)
11/27/2002 tsmsrvx a_client 69 233419808 0 0 0 0 0 0 238684872 34567
11/27/2002 tsmsrvx a_client 5110 4275779270 0 0 0 0 0 0 4276434480
720268
11/28/2002 tsmsrvx a_client 331 1095677328 0 0 0 0 0 0 1114648820 145480
11/29/2002 tsmsrvx a_client 5112 4277868551 0 0 0 0 0 0 4278524072
737904
11/30/2002 tsmsrvx a_client 52 195162800 0 0 0 0 0 0 200597228 29377

others that are totals by server of the different types of traffic
(below shows arch KB, bkup KB, retr KB, rest KB, Inbound KB, Outbound KB)
11/27/2002 tsmsrvx 4509199078 0 0 0 4509199078 0
11/28/2002 tsmsrvx 1095677328 0 0 0 1095677328 0
11/29/2002 tsmsrvx 4277868551 0 0 0 4277868551 0
11/30/2002 tsmsrvx 195162800 0 0 0 195162800 0

Dwight


-Original Message-
From: Nelson Kane [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 8:59 AM
To: [EMAIL PROTECTED]
Subject: How much data is being backed up per client?


Season's Greetings Tsm'ers,
1) Can someone tell me how to check the amount of data being backed up per
client, and how to obtain the same info for previous days.
2) How much data was in my Disk pool before I migrated it all!
Thanks
-Kane



[no subject]

2002-12-27 Thread Cook, Dwight E
Could be the reusedelay for the storage pool they did belong to...
(I use 0 days so I can't say how they look once all the data expires and
they are in a ~waiting to be reused~ state)

Other than that, there was a little bug a while back (sometime prior to
4.2.2.0) that would leave hanging pointers somewhere and such volumes would
require a delete vol volser to be issued against them to clear things
up.

Dwight 


-Original Message-
From: Markus Veit [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 9:50 AM
To: [EMAIL PROTECTED]
Subject: 


Hi TSMers,
does anybody have an idea why volumes in a library have a status of Private
with last use Data
but do not belong to a storagepool? Volumes with a status of Private with no
last use are defective, ie can't read the label.
volume_name   LIBRARY_NAME   STATUS    LAST_USE   ACCESS
41L1          ADIC100        Private   Data
01L1          ADIC100        Private   Data
68L1          ADIC100        Private   Data
61L1          ADIC100        Private   Data
51L1          ADIC100        Private   Data
47L1          ADIC100        Private   Data
45L1          ADIC100        Private   Data
79L1          ADIC100        Private   Data
08L1          ADIC100        Private   Data
02L1          ADIC100        Private   Data
000135L1      ADIC100        Private   Data
06L1          ADIC100        Private   Data
09L1          ADIC100        Private   Data
22L1          ADIC100        Private   Data
TSM Server 4.2.1.9
W2k  SP3

Mit freundlichen Grüßen / Best Regards

Markus Veit



Re: Slow AIX Client backup

2002-12-20 Thread Cook, Dwight E
try watching the interface on the tsm server, if your site in general
doesn't run client compression, your server's interface might be flooded by
other traffic.  (heck, even if your clients run compression you still might
flood the interface)

#!/bin/ksh
# my_ent_stats
# if run hourly, will give output to track hourly traffic...
#   output files are
# /home/dsmadmin/ent#.stats
# /home/dsmadmin/ent#.stats.Dow  where Dow is 3 letter abbr of weekday
# THIS MUST BE RUN BY ROOT IN ORDER TO RESET THE INTERFACE STATISTICS
#
for INTERFACE in $(netstat -in | grep ^e | cut -d' ' -f1 | sort -u )
do
echo $(date +%a-%b-%d-%Y-%H:%M) $(entstat $INTERFACE | grep ^Bytes | sed 's/  / /g' | sed 's/Bytes:/ /g') >> /home/dsmadmin/$INTERFACE.stats
echo $(date +%a-%b-%d-%Y-%H:%M) >> /home/dsmadmin/$INTERFACE.stats.$(date +%a)
entstat $INTERFACE >> /home/dsmadmin/$INTERFACE.stats.$(date +%a)
entstat -r $INTERFACE 1>/dev/null 2>&1
done
chmod ugo+rw /home/dsmadmin/en*
exit


Dwight



-Original Message-
From: Anderson, Michael - HMIS [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 19, 2002 3:29 PM
To: [EMAIL PROTECTED]
Subject: Slow AIX Client backup


I have an AIX client that is backing up very slow. It is an IBM J50
running an application called Softlab. It used to take
4 1/2 hours per night going to tape. When we put it on TSM for testing
it was doing 4 GB in about 4 hours, which I said
was slow compared to our other AIX boxes. We checked the NIC and the
switch settings; both are at 100/full. They had
no problem with this because it was still a half hour quicker than what
they had. Now we have put it into the rotation and
they are screaming it has doubled to like 8 hours. I, only being the
administrator and not knowing much about the network
group or the AIX systems group, do not know what else to check. Does
anybody have any clues for me? I am running
TSM server ver 4.2.3 (on AIX 4.3.3) and client ver 4.2.2.1

 Thanks
 Mike Anderson
 [EMAIL PROTECTED]


CONFIDENTIALITY NOTICE: This e-mail message, including any attachments,
is for the sole use of the intended recipient(s) and may contain
confidential
and privileged information. Any unauthorized review, use, disclosure or
distribution is prohibited. If you are not the intended recipient, please
contact the sender by reply e-mail and destroy all copies of the original
message.



Re: server access through different network card (ipadres)

2002-12-19 Thread Cook, Dwight E
Nope,
well, TSM uses standard routing (from the client back to the TSM
server)
The TSM server uses standard routing to reach the client AS SPECIFIED IN THE
TCPCLIENTADDRESS for scheduled events and the packet return address for
non-scheduled activity.

So just set a route on the client that says, if going to the IP address of
the tsm server, use this route and specify the ~other~ nic.

We run this way for all our production systems (actually we just have an
entire secondary network that is a backup network only, well primarily)
been doing it for 7 years now :-)

Dwight



-Original Message-
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 19, 2002 5:25 AM
To: [EMAIL PROTECTED]
Subject: server access through different network card (ipadres)
Importance: High


hi all,


server adsm 3.1 on AIX 4.3.3 ML 10
client tsm4.1 on AIX 4.3.3 ML 10

This client has been having problems with its network. Every night during
the backup the customers cannot access their databases because the network
card can't handle the load.
Now the idea is to use a second network card solely for TSM. I can't find
anywhere how I can make TSM understand to access through the other
network card, as it always accesses the client by client name and not IP
address.
And as far as I've seen TSM accesses through the primary IP address.


Does anyone know how to do this? And is it actually possible?
I must have it solved by tonight.

michelle



Re: SELECT statement to determine if any drives available

2002-12-19 Thread Cook, Dwight E
If you are just needing to know if reclamation is running, you might just
look at the adsm.processes table...

tsm: TSMSRV02> select * from processes

PROCESS_NUM: 2193
PROCESS: Space Reclamation
 START_TIME: 2002-12-19 07:31:00.00
FILES_PROCESSED: 5831
BYTES_PROCESSED: 4798075822
 STATUS: Volume AAD615 (storage pool 3590P1), Moved Files: 6098,
 Moved Bytes: 4,798,478,585, Unreadable Files: 0,
 Unreadable Bytes: 0. Current Physical File (bytes): None

  Current input volume: AAD615.

  Current output volume: AAE514.




Dwight


-Original Message-
From: Christian Sandfeld [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 19, 2002 6:34 AM
To: [EMAIL PROTECTED]
Subject: SELECT statement to determine if any drives available


Hi list,

Does anybody know of a SELECT statement that will show if any drives are
available for use (not in use by another process or session)?

I need this for scripting purposes as I have only two drives in my library,
and while doing reclamation both drives are in use.


Kind regards,

Christian



Re: Expiring Data / Versioning

2002-12-18 Thread Cook, Dwight E
Uhmmm, Lawrie, I don't think you will get what you want...
Look at TSM as a big file cabinet with copies of files from other servers.
A file in TSM is ACTIVE if it is the way it looked on the client during
the last incremental (or selective)
A file in TSM is INACTIVE if it isn't the way it looked on the client
during the last incr/sel (which includes having been deleted from the
client)
Now, Versions Data Exists says how many copies of that file are kept in the
TSM file cabinet as long as it still exists on the client.
Versions Data Deleted says how many copies to immediately reduce down to as
soon as the file is found to have been deleted from the client.
Retain Extra says how long to keep INACTIVE copies (from the time they go
inactive)
Retain Only says how long to keep the LAST copy of a file that has been
deleted from a client

If you want to keep no more than 7 days you will need
A) Versions Data Exists of 7 so if a file changes daily, you will have 7;
on the 8th day the 1st day's copy will expire due to VDE 7
B) Retain Extra 7 so if a file only changes weekly,
VDE won't expire because on day 8 you will only have 2 copies
BUT the old copy will be 8 days old and will go away due to RE 7
C) Versions Data Deleted 7, to always have at least 7 copies/days
D) Retain Only 30 to keep the last copy of a deleted file for 30 days (past
its deletion)

Personally I always make my Versions Data Deleted greater than one BECAUSE
what if a disk is going bad and nobody notices.
Say one night it hoses up some of a file in a way that is viewed as a change
to that file ??? So TSM backs it up, resulting in the active copy being bad...
Now more errors on the disk make the file totally go away, a backup runs,
and the file is marked deleted, and during expiration TSM cuts down to the
last copy, that BAD one made the night before  ! ! !  YIKES !!!

Personally I use VDE-7, VDD-3, RE-35, RO-95 which has kept the user
community pretty happy over the years
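
(If anyone wants to try those numbers, here is a rough sketch of the admin
commands; the domain and policy set names are just placeholders:)

   update copygroup mydomain mypolicyset standard standard type=backup -
      verexists=7 verdeleted=3 retextra=35 retonly=95
   activate policyset mydomain mypolicyset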


Dwight





-Original Message-
From: Lawrie Scott - Persetel Q Vector
[mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 4:22 AM
To: [EMAIL PROTECTED]
Subject: Expiring Data / Versioning


Hi All

Thanx in advance, new to this and not sure I've set this up correctly.

I originally wanted to keep 7 versions and delete after 30 days any deleted
files.

I run TSM Server 5.1 and I'm trying to expire all my data down to only one
copy of the data. My versions were set to 7 and the rest as shown; I changed
it down to 3 and ran an inventory expire, but this made no changes. Have I
set up my copy group incorrectly? I wish to finally set up the copy group to
keep seven days (not versions) of backups and to only keep deleted files for
a max of 30 days.

Copy Group Name STANDARD
Versions Data Exists 3
Versions Data Deleted 1
Retain Extra Versions 0
Retain Only Version 30
Copy Mode MODIFIED
Copy Serialization DYNAMIC
Copy Frequency 0

Kind Regards
LAWRIE SCOTT
For:
Persetel/Q Vector KZN
Company Registration Number: 1993/003683/07
Tel: +27 (0) 31 5609222
Fax: +27 (0) 31 5609495
Cell: +27 (0) 835568488
E-mail:  [EMAIL PROTECTED]

Notice: This message and any attachments are confidential and intended
solely for the addressee. If any have received this message in error, please
notify Lawrie Scott at Comparex Africa (Pty) Ltd immediately, telephone
number +27 (0) 31 5609222. Any unauthorised use, alteration or dissemination
is prohibited. Comparex Africa (Pty) Ltd accepts no liability whatsoever for
any loss whether it be direct, indirect or consequential, arising from
information made available and actions resulting there from.



Re: Is there a tsm client that will run on a sequent box DYNIX/ptx(R) V4.4.4

2002-12-18 Thread Cook, Dwight E
Found it !
Thanks !
that's what I get for only looking at 5.1 stuff :-(

Dwight



-Original Message-
From: Radoslav Bendik [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 7:56 AM
To: [EMAIL PROTECTED]
Subject: Re: Is there a tsm client that will run on a sequent box
DYNIX/ptx(R) V4.4.4


On Wed, 18 Dec 2002, Sias Dealy wrote:

> No, your not going blind.

Maybe you are both looking in the wrong direction :)

> There is no TSM client that will run on DYNIX/ptx.

There is.

> Here is the URL that shows what platforms are supported by TSM
>
> http://www.tivoli.com/products/index/storage-mgr/platforms.html

Here are the urls with TSM client for Dynix/Numaq:

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/
client/v4r2/NUMAQ-PTX/
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/
client/v4r1/NUMAQ-PTX/
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/
client/v3r7/Sequent45/
ftp://service.boulder.ibm.com/storage/adsm/fixes/v3r1/sequent/


>> On, Cook, Dwight E ([EMAIL PROTECTED]) wrote:
>>
>> I didn't see anything but I'm going blind in my old age...
>> Is there any tsm client that will run on a Sequent box with DYNIX/ptx(R)
>> V4.4.4


The ADSM version 3.1 client should work with this Dynix version. The only
question is interoperability with current server versions. TSM versions
3.7 and above work only on Dynix/NumaQ 4.5.x (according to README).


--
rado b
Why Did You Reboot That Machine?



Is there a tsm client that will run on a sequent box DYNIX/ptx(R) V4.4.4

2002-12-17 Thread Cook, Dwight E
I didn't see anything but I'm going blind in my old age...
Is there any tsm client that will run on a Sequent box with DYNIX/ptx(R)
V4.4.4

Dwight



Re: 3494 libray lifespan??

2002-12-16 Thread Cook, Dwight E
Read ALL the fine print on LTO drives & associated media !
They are a far cry from 3590's !
check the life expectancy in mounts, full reel reads/writes, etc...

Dwight



-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 16, 2002 12:23 PM
To: [EMAIL PROTECTED]
Subject: Re: 3494 libray lifespan??


I think it is more of a question when a VTS replacement would become
available and 3590 is significantly slower than other technologies.  There
is still development going on for the 3494 library itself.  There will be a
replacement 3590 drive soon.  It is not clear if that drive will be in the
3494 library or in a LTO type library.  My guess is both, but I have not
seen a product direction, so anything is possible.

At this point, I would review my data requirements, install new capacity in
LTO for open systems if it is more cost effective, and seek an answer from
IBM.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 1:16 PM
To: [EMAIL PROTECTED]
Subject: 3494 libray lifespan??


Does anybody have an idea what is reasonable to expect for the lifespan of a
3494 library?

My manager wants to know when we should budget for a new one. But I've never
heard of anybody replacing a 3494 because of age - just upgrading to new
drives!



Re: filespaces not deleted

2002-12-13 Thread Cook, Dwight E
When a file system just ~up & goes away~  TSM has no way of knowing WHY ???
might have died due to the media on which it resides died
might have gone away because someone just unmounted it
might have gone away because someone deleted it (for whatever reason)

So TSM freezes that entire filesystem until
it comes back and resumes normal incremental processing
OR
you purge it from an admin session...
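
(a sketch of that admin session purge; the node and filespace names here
are made up:)

   q filespace mynode
   delete filespace mynode /old_filesystem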



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Macmurray, Andrea (CAG-CC)
[mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 10:05 AM
To: [EMAIL PROTECTED]
Subject: filespaces not deleted


Hello TSM'ers

With the danger that some of you might think I am not the sharpest tool in
the shed, there is one thing about filespaces I do not understand. Why are
there filespaces which will not expire?? If I rename a node and existing
files go under a new directory structure, I would have expected that TSM
looks at the old filespaces as deleted and applies the normal retention
periods as specified in the Mgmtclasses for a deleted file. Is there
somebody out there who would be able to give me a short explanation of why
this is not happening, and how I can identify all my dead filespaces out
there? I am pretty sure I do have tons of them out there.


Thanks
Andrea Mac Murray



Re: 3494 libray lifespan??

2002-12-13 Thread Cook, Dwight E
We have 7 that are currently 6 or 7 years old and still going strong !
Two were old VTS libraries from our old MVS environment (L14's I believe is
their technical #)
just throw away a few parts and you've got an L-12 :-)
I think all started in the area of 4 frames and most are 8 frames now.
Only the old VTS are HA libraries; all the rest have just a single accessor
(but dual grippers).
To be truthful, they will probably only be purged from the market by lack of
parts & increased cost of service, and only after IBM has a new line of
ATL's that they want to push out to the market; and even then, other support
firms will buy old ones for parts and continue to offer service contracts.


Dwight


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 12:16 PM
To: [EMAIL PROTECTED]
Subject: 3494 libray lifespan??


Does anybody have an idea what is reasonable to expect for the lifespan of a
3494 library?

My manager wants to know when we should budget for a new one.
But I've never heard of anybody replacing a 3494 because of age - just
upgrading to new drives!



Re: Management Class and TDP for R/3

2002-12-12 Thread Cook, Dwight E
BUT tdp/r3 doesn't use the backup copy group of the management class; it
uses the archive copy group.
The reason for 4 (or more) is so you can go straight to tape drives... like
this...
Storage  Device   EstimatedPctPct  High  Low  Next Stora-
Pool NameClass NameCapacity   Util   Migr   Mig  Mig  ge Pool
   (MB) Pct  Pct
---  --  --  -  -    ---  ---
PR3590A  3590B1A0.00.00.090   70
PR3590B  3590B1A0.00.00.090   70
PR3590C  3590B1A0.00.00.090   70
PR3590D  3590B1A0.00.00.090   70

PolicyPolicyMgmt  CopyRetain
DomainSet Name  Class Group  Version
NameName  Name
- - - - 
SAP   ACTIVEB7.3_R35- STANDARD10
 .95_A10-
 _PR_A
SAP   ACTIVEB7.3_R35- STANDARD10
 .95_A10-
 _PR_B
SAP   ACTIVEB7.3_R35- STANDARD10
 .95_A10-
 _PR_C
SAP   ACTIVEB7.3_R35- STANDARD10
 .95_A10-
 _PR_D

then the copy destinations for each of the above mentioned management
classes goes to the corresponding storage pool listed above.
This is a test server, which is why there is no data in the storage pools
(in case anyone catches that)
Make as many different management class ~sets~ as you might ever want to
keep data for,
that is, I have a set for 10 days, one for 35 days, one for 185 days, 370
days, etc
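
(a rough sketch of defining one member of such a set; the policy set name
is a placeholder, the class and pool names are from the output above:)

   define mgmtclass sap mypolicyset B7.3_R35.95_A10_PR_A
   define copygroup sap mypolicyset B7.3_R35.95_A10_PR_A type=archive -
      destination=PR3590A retver=10
   activate policyset sap mypolicyset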

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 2:06 PM
To: [EMAIL PROTECTED]
Subject: Management Class and TDP for R/3


Hello,

Server AIX 5.1, TSM 4.2.2.8. Client AIX 4.3, TSM 4.2.1.25, and TDP for R/3
3.2.0.11 (Oracle)

Question about TDP.

We wanted to use the Management Class on the TSM-server like this:
Versions Data ExistsNOLIMIT
Versions Data Deleted   3
Retain Extra Versions   21
Retain Only Version 30

In the guide Tivoli Data Protection for R/3 Installation & User's Guide for
Oracle it is mentioned to define 4 different Management Classes. I was
wondering how other people have defined their Management Classes. Support
told me to use 4 different Management Classes because it's in the manual, so
they must mean something by it (maybe for the future), but what is the
advantage of 4 different Management Classes? We have 8 TDP-clients. The
manual says every client should have its own 4 different Management Classes.
It's not easy to manage that way.

Besides, you have to define an archive copy group. We don't want to use
versioning in TDP, but use expiration via TSM client/server. For how many
days do I have to define the archive copy group? I think it has to be 21
days. I guess that when using versioning in TDP you have to set these
parameters on . Am I right?

Is there somebody who want to share the definitons used for TDP for
Management Classes and Copy Groups?

Thanx,

Brian.





_
Chatten met je online vrienden via MSN Messenger. http://messenger.msn.nl/



Re: K-cartridges besides J-cartridges

2002-12-12 Thread Cook, Dwight E
and if you've ever had a notch tab fall out and get lost...in a pinch you
can (for a K tape) get the bottom one from a J tape and the top one from a
cleaning tape.  The colors will be all out of whack but the slots will be
proper :-)

Dwight



-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 11:47 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


And, if you look at a J and a K tape on the notches you will notice that
they have a different configuration.  So, if you put a K tape in a J only
drive, it will kick it right back out.  I thought that all B to E upgrade
kits were K capable.  I did not think that kit came out until the E upgrade
kit for K was available.  If your upgrade kits were installed after July
2001, they just about have to be K capable.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 9:16 AM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


Should see a "2X" sticker on the back of your 3590 drive if it is set to
deal with double length tapes. Also the internal tape spool is green (along
with one or two other internal parts, I seem to recall) so you might be able
to look in through the cooling holes in the top of the case to double check
(if you don't find a "2X" sticker on the drive)


Dwight



-Original Message-
From: Jim Sporer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:25 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


There is a difference between the J and K tapes and the 3590 drives need to
support the extended length tapes.  If the drives are new there is no
problem but older 3590E drives may need an upgrade to support the extended
length tapes.  Check with your CE.
Jim Sporer


At 11:12 PM 12/10/2002 +0100, you wrote:
Hello,

AIX 5.1, TSM 4.2.1.19, IBM 3494 library, 3590E-drives

At the moment we use about 900 J-cartridges in our library. Is it
possible to check in K-cartridges and use them alongside the J-cartridges?

During upgrade from our drives from 3590B to 3590E we changed status of
our volumes to read-only because of different tracks used on tape.

As far as I know, the only difference between J and K is the length of
the tape, so I guess there will be no problem.

I'm wondering if someone is using both J and K cartridges and if they
encountered any problems.

Thanx,

Brian.






_
Chatten met je online vrienden via MSN Messenger.
http://messenger.msn.nl/



Re: Help on TDP for SAP R/3 Oracle

2002-12-11 Thread Cook, Dwight E
Well,
I'll reference two things here; folks using TDP/R3 (threaded version) and
TSM client compression might want to read ALL of this.


PMR93135 was from where tdp/r3 3.2.0.8 ignored the setting of
client determined compression.
This was fixed in 3.2.0.11.
Now, TDP/R3  ~RLCOMPRESSION~  sucks... all it does is take out free space
basically.
(that is my own personal opinion)
I like setting client compression ON and using that because we see about
3.8/1 compression !
So, which can be sent between two boxes the fastest, 1 TB or 270 GB ?
(duh...)
BUT you will drive the heck out of your SAP DB servers !
Oh, and if you send to cached diskpools, ya know the old deal of
~compression on~ & ~compressalways no~
In working my PMR93135 problem, IBM Germany took things one step better...
tdp/r3 from 3.2.0.11 (and beyond) preallocates tsm server storage at
110% of actual file size,
if a file grows with compression it won't ~seem~ to grow and thus
resends are eliminated
IF you happen to still see resends due to data growing with
compression,
there is now an environment variable XINT_TSMOFFSET
pre-allocations will occur at a +10% by default
OR what ever value is specified in the environment variable
XINT_TSMOFFSET

COOL !

NO
PMR04127 - when using client compression and multiple sessions (like
10, 15, or 20)
TDP/R3 just WILL NOT USE CPU CYCLES !!
It is from a thread option they were using when creating
those threads.
I currently have a fix-test executable for 3.2.0.x
The fix won't be out until end of 1st qtr 2003 in 5.1 TDP/R3
(I know, ((()

IF you are running with client compression and your CPU just isn't being
utilized AND you are on a TDP/R3 that uses threads AND you need more
performance, you might look into getting the fixed executable.
That will probably take opening a problem, making reference to pmr04127,
etc.

BUT back to your main question...
We set client compression ON in the dsm.sys file in the SErver entry
COMPRESSION YES
Then, when using TDP/R3 3.2.0.11 & later, in the dsm.opt file we set
COMPRESSALWAYS YES
We send the data to a cached diskpool during the night & migrate to tape
during the day.
If we experience a SAP PR failure, we restore from the cached diskpool copy.
(generally in record time)
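
(as a sketch, the relevant entries would look something like this; the
server stanza name and address are made up:)

   dsm.sys:
      SErver  tsmsrv01
         COMMmethod         tcpip
         TCPServeraddress   tsmsrv01.example.com
         COMPRESSION        YES

   dsm.opt:
      COMPRESSALWAYS  YES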

hope this helps :-)


Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: Help on TDP for SAP R/3 Oracle


Hello,

Server AIX 5.1, TSM 4.2.2.8. Client AIX 4.3, TSM 4.2.1.35, and TDP 3.2.0.11

Question about compression.

We forced compression for this client at the server side, by setting the
Compression option in the client node's options on the server. In the client
option file there is no compression option.

We thought that this would be enough to also compress TDP backup and archive
data. We checked and there is no compression. Tivoli Support said first that
TDP always compresses the files, that it is a feature of TDP. And later they
said we have to activate this in TDP on the client.

I'm wondering how other sites set compression in this situation. On the
server, in TDP, on the client, ...

Thanx,

Brian.





_
Chatten met je online vrienden via MSN Messenger. http://messenger.msn.nl/



Re: K-cartridges besides J-cartridges

2002-12-11 Thread Cook, Dwight E
Should see a "2X" sticker on the back of your 3590 drive if it is set to
deal with double length tapes.
Also the internal tape spool is green (along with one or two other internal
parts, I seem to recall) so you might be able to look in through the cooling
holes in the top of the case to double check (if you don't find a "2X"
sticker on the drive)


Dwight



-Original Message-
From: Jim Sporer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:25 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


There is a difference between the J and K tapes and the 3590 drives need to
support the extended length tapes.  If the drives are new there is no
problem but older 3590E drives may need an upgrade to support the extended
length tapes.  Check with your CE.
Jim Sporer


At 11:12 PM 12/10/2002 +0100, you wrote:
Hello,

AIX 5.1, TSM 4.2.1.19, IBM 3494 library, 3590E-drives

At the moment we use about 900 J-cartridges in our library. Is it possible
to check in K-cartridges and use them alongside the J-cartridges?

During upgrade from our drives from 3590B to 3590E we changed status of our
volumes to read-only because of different tracks used on tape.

As far as I know, the only difference between J and K is the length of the
tape, so I guess there will be no problem.

I'm wondering if someone is using both J and K cartridges and if they
encountered any problems.

Thanx,

Brian.






_
Chatten met je online vrienden via MSN Messenger. http://messenger.msn.nl/



Re: Client backup verification

2002-12-10 Thread Cook, Dwight E
A lot of people will say a lot of different things in response to your
question BUT...

If you schedule a nightly incremental backup (and if you retain event
status long enough) you can do things like
q event * * begind=-7 endd=today
from a dsmadmc admin session and see the results (as reported by the
clients) for all the scheduled events over the last week.  A missed is a
big problem but oftentimes a failed can be OK; it depends on what exactly
failed... (a file might have changed during backup and/or been deleted
between when tsm built its list of files to back up and actually got around
to backing up that specific file...)

BUT I like:

select node_name, filespace_name as "Filespace Name",
cast((backup_end) as varchar(10)) as "Date" from adsm.filespaces where
(cast((current_timestamp-backup_end)day as decimal(18,0))>3)

The above query command, if issued from a dsmadmc session, will show you
each file space for each client that hasn't ~backed up~ in the last 3 days
(you may adjust the # of days via that last ...3)

Now what does this really tell you ?
It points out file spaces/systems that have been removed from a client.
(tsm won't EVER automatically purge that data, you must with a ~del
file node filespace~ command)
It points out a lot of things that might not be pointed out in other
places...
like when, for some odd reason, all other things report successes
YET there is still some sort of failure ???

just my 2 cents worth...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Spearman, Wayne [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 12:51 PM
To: [EMAIL PROTECTED]
Subject: Client backup verification


Hi all,
New to TSM. Would like to know how others are verifying client backups are
ending normally. Running on AIX. Please share macros, commands, and/or
scripts.

Thanks in advance

Wayne



This message and any included attachments are from NOVANT HEALTH INC.
and are intended only for the addressee(s). The information contained
herein may include trade secrets or privileged or otherwise confidential
information. Unauthorized review, forwarding, printing, copying,
distributing, or using such information is strictly prohibited
and may be unlawful. If you received this message in error, or have
reason to believe you are not authorized to receive it, please promptly
delete this message and notify the sender by e-mail.

Thank you.



Re: Import node tape mount problem !

2002-12-05 Thread Cook, Dwight E
Do a q req to look for the request # then issue a reply to that request.
Syntax

-REPLY--request_number--++---
  '-LABEL--=--volume_label-'

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: chris rees [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 05, 2002 3:19 AM
To: [EMAIL PROTECTED]
Subject: Import node tape mount problem !


All

I'm having a problem with export/import of a node from 1 TSM server to
another.  Both servers are at 5.1.5.0 and have a scsi attached LTO drive
attached.

The client filespaces that I have exported are unicode. The export to tape
seemed to work successfully going by what was in the actlog.

It's on the import that I'm having a problem.  I kick off the import process
telling it the nodename, giving it a * for unicode filespace name and also
entering the volume name the export was done to.

The process kicks off and asks for the tape to be mounted. I mount the tape;
in the actlog it shows that the label has been verified and that the tape is
mounted. All fine so far.  However the process itself just sits there, still
saying it's waiting for the tape to be mounted.

Anyone got any ideas?  Am I doing something wrong somewhere?

Regards

Chris










_
Help STOP SPAM with the new MSN 8 and get 2 months FREE*
http://join.msn.com/?page=features/junkmail



Re: Permanent retention

2002-12-02 Thread Cook, Dwight E
How I do long term retention is...
I register a node by a new name of old_name_exp  ex. dbserver01_exp
then, on that box, I create a special SErver entry in the dsm.sys file that
uses that ~_exp node name
ex.
SErver export_srv
NODE dsmserver01_exp
rest of normal stuff
Then I tell whoever it is to archive what they want
I then produce two exports of that node and send them off-site
I have a sample set of tapes I keep locally to verify media stability and
ability to import after server upgrades.
Best way I've come up with...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Dorothy LIM Kay Choo [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 02, 2002 12:48 AM
To: [EMAIL PROTECTED]
Subject: Permanent retention


Hi,

I would like to seek advice on how to use TSM to keep my information
permanently.

Before the application executes the end-of-day processing, a set of database
dumps is taken.

After the end-of-day processing, there is another database dump which
will overwrite the previous one.

I need to keep both sets for a retention of 7 years.

Any advice ?
Regards,
Dorothy Lim
Systems & Network Division, United Overseas Bank Limited
mailing address: 396 Alexandra Road, #04-00, BP Tower Singapore
119954
e-mail:  [EMAIL PROTECTED]   // tel: 371 5813   // fax: 270
3944

Privileged/Confidential information may be contained in this
communication (which includes any attachment(s)).  If you are not an
intended recipient, you must not use, copy, disclose, distribute or retain
this communication or any part of it.  Instead, please delete all copies of
this communication from your computer system and notify the sender
immediately by reply email.  Thank you.



Re: Permanent retention

2002-12-02 Thread Cook, Dwight E
do the directory archives still go to the management class with the longest
retention ?
I put a 10 year archive management class in systems I built 6 years ago and
as people performed thousands of archives a day (for each of hundreds of
registered clients) the tsm databases grew out of controll...
I believe such issues are currently resolved (one way or another) but I
still don't make long term archive management classes available... I deal
with those needs in other ways (export of the data/node)

Dwight



-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 02, 2002 11:39 AM
To: [EMAIL PROTECTED]
Subject: Re: Permanent retention


I agree with Zlatko; I don't think the archives will grow your data base all
that much.

But even if your DB got too big over time, you could use a second instance
of the TSM server and use just THAT server for your archives, to split the
DB into more manageable pieces.

But you shouldn't have to do that for a long time.



-Original Message-
From: Dorothy LIM Kay Choo [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 02, 2002 4:43 AM
To: [EMAIL PROTECTED]
Subject: Re: Permanent retention


Hi Zlatko,

Thks for the suggestion. I understand that the archive function will result
in a huge database. I believe there is a potential problem if the database
grows too big.

Because of the above, I tried to use backupsets, which were recommended
as a better option.

Regards,
Dorothy Lim
Systems & Network Division, United Overseas Bank Limited
mailing address: 396 Alexandra Road, #04-00, BP Tower Singapore
119954
e-mail:  [EMAIL PROTECTED]   // tel: 371 5813   // fax: 270
3944



-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 02, 2002 4:01 PM
To: [EMAIL PROTECTED]
Subject: Re: Permanent retention


Look at the Archive function of the TSM client. It can be invoked from the
CLI by using dsmc archive with the desired objects and options, or from the
GUI/Web by pressing the Archive button instead of Backup.
With appropriate settings in the archive copygroup (which is different from
the backup copygroup) this is very easy to achieve. Look for the def copy
... t=a server command.

Zlatko Krastev
IT Consultant






Dorothy LIM Kay Choo [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
02.12.2002 08:47
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Permanent retention


Hi,

I would like to seek advice on how to use TSM to keep my information
permanently.

Before the application executes the end-of-day processing, a set of
database dumps is taken.

After the end-of-day processing, there is another database dump which
will overwrite the previous one.

I need to keep both sets for a retention of 7 years.

Any advice ?
Regards,
Dorothy Lim
Systems & Network Division, United Overseas Bank Limited
mailing address: 396 Alexandra Road, #04-00, BP Tower Singapore
119954
e-mail:  [EMAIL PROTECTED]   // tel: 371 5813   // fax:
270
3944




Re: select statement

2002-11-27 Thread Cook, Dwight E
You can do the math in the statement

tsm: TSMSRVxx> select avg(total_mb/1024) as "Average Total GB" from
auditocc

Average Total GB
----------------
              26   (small tsm server)

OR
tsm: TSMSRVxx> select avg(total_mb/1024) as "Average Total GB" from
auditocc

Average Total GB
----------------
            7106 (a bigger tsm server)

Dwight



-Original Message-
From: Jamshid Akhter [mailto:[EMAIL PROTECTED]]
Sent: Saturday, November 23, 2002 5:55 AM
To: [EMAIL PROTECTED]
Subject: Re: select statement


select sum(total_mb) datamgr from auditocc
Example result: 13 222 225, which is approximately 13 TB
I don't see a GB column in this table.

Hope that will help



- Original Message -
From: Ruksana Siddiqui [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 21, 2002 11:08 PM
Subject: select statement


 Hi there,

 What is the select statement I can use to grab the avg_total in GB of each
 client the TSM server is backing ..?

 With regards,

 CAUTION - This message may contain privileged and confidential information
intended only for the use of the addressee named above. If you are not the
intended recipient of this message you are hereby notified that any use,
dissemination, distribution or reproduction of this message is prohibited.
If you have received this message in error please notify AMCOR immediately.
Any views expressed in this message are those of the individual sender and
may not necessarily reflect the views of AMCOR.




Re: volhist file

2002-11-27 Thread Cook, Dwight E
the internal copy in the data base gets updated/created on the fly (as a
volume's status changes)
BUT to cut the external file use the backup volhist command from either an
admin session or an admin schedule.
The backup volhist command will write the internal table to the file
specified in the dsmserv.opt (or to a file or files specified via the
optional parameter of the backup volhist command)
How often/when it gets written to the external file automatically, I don't
know off the top of my head.
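
(to cut it on demand, something like the following; the path is made up:)

   backup volhistory filenames=/tsm/config/volhist.out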

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Hussein Abdirahman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 27, 2002 7:43 AM
To: [EMAIL PROTECTED]
Subject: volhist file


Hi TSMers;

Perhaps a tough question from a newbie, but I would like
to know when the volume history file gets created/updated.


thanks in advance

Hussein M. Abdirahman
AIX administrator
IrwinToy Canada



Re: Include Backup and Archive data to different mgmtclasses

2002-11-25 Thread Cook, Dwight E
To bind archive data to a different management class, look at the
-archmc=blah option of the archive command.
Maybe that will work for you...

dsmc archive -pass=blah -archmc=MC2 -subdir=yes c:\temp\*


Dwight



-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 25, 2002 10:18 AM
To: [EMAIL PROTECTED]
Subject: Include Backup and Archive data to different mgmtclasses


I am trying to bind a subset of files to a mgmtclass of MC1 for backup data
and MC2 for archive data.  Is this possible?

My option file has:
Include.backup c:\temp\* MC1
Include.archive c:\temp\* MC2

Data backed up is bound to the proper MC (MC1) but the archived data is also
bound to MC1.  I'm running 5.1.1.3 win32 client.

Thanks,

Tim Rushforth
City of Winnipeg



Re: Archive Schedule on Windows Systems

2002-11-25 Thread Cook, Dwight E
Use the archive command and don't look at items backed up, look at items
archived...

create a MYARCHIVE.BAT with the following stuff...
C:\PROGRA~1\Tivoli\TSM\baclient\dsmc.exe archive -pass=blah -subdir=yes
D:\mypath\*
C:\PROGRA~1\Tivoli\TSM\baclient\dsmc.exe archive -pass=blah -subdir=yes
D:\mypath2\*
C:\PROGRA~1\Tivoli\TSM\baclient\dsmc.exe archive -pass=blah -subdir=yes
D:\mypath3\*
and just set that bat file up in your task scheduler to run when you
desire...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Manuel Schweiger [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 25, 2002 10:03 AM
To: [EMAIL PROTECTED]
Subject: Archive Schedule on Windows Systems


Hi folks,

Here's a really dumb question, but how the hell can I set up a scheduled job
to archive some objects on a Windows 2000 client on a regular basis? No
matter what I try, my scheduled job always says it has checked 2 items (even
if I give it 10) and has backed up 0.
Any suggestions?

Systemdata:
TSM Server 4.2.1.15
Client: W2K Server sp3, TSM Client 4.2.1.32

regards, Manuel

- Original Message -
From: Paolo Nasca [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, November 23, 2002 9:05 PM
Subject: Re: TDP Exchange v5.1.5


 I will start to study brick level backup and restore in a few days.
 Regards
 Paolo Nasca
 Datasys Informatica
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]

 - Original Message -
 From: Bruce Kamp [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Wednesday, November 13, 2002 6:49 PM
 Subject: TDP Exchange v5.1.5


  Has anybody started using v5.1.5 of TDP for Exchange?  I received the CD and
  noticed it has an option for brick level backups!
 
  --
  Bruce Kamp
  Midrange Systems Analyst II
  Memorial Healthcare System
  E: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
  P: (954) 987-2020 x4597
  F: (954) 985-1404
  ---



Re: 3590 Tape Drives

2002-11-21 Thread Cook, Dwight E
Make sure IBM has removed the cleaner blocks from the drives.
If tapes are being eaten, I'd say it is probably poorly adjusted pressures
in the drive somewhere or something isn't in proper alignment BUT I'm NOT an
IBM technician so I really shouldn't say...
In 7 years across 7 ATL's with 7 year old tapes (for the most part) we are
still going strong !
In all we've only lost somewhere between 10-15 tapes to total destruction
(so 1-2 a year out of some 15,000)


Dwight



-Original Message-
From: Martina Sawatzki [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 21, 2002 5:30 AM
To: [EMAIL PROTECTED]
Subject: Re: 3590 Tape Drives


We have all four drives chewing up tapes at regular intervals. To make a
rough estimate I would say that this happens  once or sometimes twice  a
month and all drives are affected in the same way. I have no idea if this
is the normal wear  due to lots of mounts.
We always check out the tape which caused the problem and in most cases we
have a technician change parts of the drive or even the whole drive.
The age of our tapes is different. We exchanged a lot of them to new ones,
but also still  have  tapes which are 3, maybe 4 years old.



Re: tape-to-tape performance

2002-11-21 Thread Cook, Dwight E
As much of a performance hit as you want !
With about every TSM task (straight backups via resourceutilization or
with tdp/r3 with the number of concurrent sessions, etc...) you can tune how
busy TSM will keep your system.
We have our production boxes at one site and our disaster recovery center
miles away, our TSM server(s) are located at the DR site, we have to
compress all client data to send it across the OC12 between the sites.  For
example, we can crank up TSM activities to suck up 90% of 20 processors (out
of 32) on a Sun E10K by using client compression...
So an OC12 is 622 Mb/sec or roughly 273 GB/hr, on a nightly basis we see it
running somewhere between 175-200 GB/hr so we still have about 1/3 capacity
BUT we see 3.8/1 compression of most data base data so we are actually
moving somewhere around 760 GB/hr of client filespace and I'd hate to be
pay'n for an OC24 (or more) between the sites (if we could even get it) !
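The arithmetic above can be sanity-checked in a couple of lines, using the figures quoted here (622 Mb/sec wire speed, ~200 GB/hr observed, 3.8:1 compression):

```shell
# Sanity-check the link math: OC-12 wire speed in GB/hr, and the effective
# client-data rate at the observed 3.8:1 compression on ~200 GB/hr of traffic.
wire_gb_hr=$(awk 'BEGIN { printf "%.0f", 622 / 8 * 3600 / 1024 }')  # ~273 GB/hr
effective=$(awk 'BEGIN { printf "%.0f", 200 * 3.8 }')               # 760 GB/hr
echo "OC-12 capacity: ${wire_gb_hr} GB/hr; effective moved: ${effective} GB/hr"
```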

One view is that if you beef up the processors to assist in backup, the end
result to the user community is increased performance during their
production use ;-) It's a (god I hate to say this) WIN ! WIN ! situation...
Just remind management that today they are more than ever likely to have to
depend on their DR !


Dwight



-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 20, 2002 2:35 PM
To: [EMAIL PROTECTED]
Subject: Re: tape-to-tape performance


thanks for that insight. i forgot to mention we are running fibre channel
and not scsi drives. yes, the data is must have... it is all production data
and is being copied for DRM.

how much of a performance hit will it take to turn compression on at the
client level?

-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 20, 2002 3:22 PM
To: [EMAIL PROTECTED]
Subject: Re: tape-to-tape performance


Is 1.6 TB the amount of must have/critical information ?
and is that already compressed ?
Knowing each tape mount runs about 90 seconds, any way to reduce tape mounts
will speed up copies.
If you have collocation on, turning it off ~could~ help... and collocation
can/is set on both primary pools and copy pools.
Now if the data isn't compressed by the client the data is uncompressed at
the drive, moved through the processor as ~full size~ data, then
recompressed at the destination drive.  If your clients have the horsepower,
turn on client compression, that way not so much data is moved across the
processor's bus.
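Client compression is a one-line change in the client options; a minimal sketch follows (the stanza name and server address are examples; on a Windows client the same COMPRESSION option goes in dsm.opt):

```
* dsm.sys fragment (AIX/UNIX client) -- names and address are examples
SErvername  tsmprod
   COMMmethod        tcpip
   TCPServeraddress  tsm.example.com
   COMPRESSIon       yes
```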
Don't daisy chain any of your 3590's if they are older scsi.
You might be able to reduce your data down if you force the application
owners to review their required CRITICAL/MUST HAVE data and send it to
isolated pools for copies to take offsite.

just some thoughts...

Dwight

-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 20, 2002 1:26 PM
To: [EMAIL PROTECTED]
Subject: tape-to-tape performance


we run a 3494 tape library with 3590 tape drives. after our backup are
finished, we run a backup stgpool to a copy pool on about 1.6TB of data. as
you can imagine, even with 8 drives this takes some time. are there any
server, device or other parameters we can tune to improve the tape-to-tape
performance?


steve



Re: Help Urgently needed

2002-11-20 Thread Cook, Dwight E
Also might want to check your ~PATH~ and if TSMUTIL1.DLL exists somewhere in
that path.
Might be it can't find the entry point because it can't find the file to
begin with :-O

Dwight

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Amini, Mehdi
Sent: Wednesday, November 20, 2002 7:27 AM
To: [EMAIL PROTECTED]
Subject: Help Urgently needed


When I try to create a scheduler service on a NT4 running TSM Version
4.2.31 I get this error:


The procedure Entry Point TSMEnumDependentServiceEnd Could not be
located in the Dynamic Link Library TMSUTIL1.DLL

Please help

Thanks



Mehdi Amini
LAN/WAN Engineer
ValueOptions
12369 Sunrise Valley Drive
Suite C
Reston, VA 20191
Phone: 703-390-6855
Fax: 703-390-2581



**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify the
sender by email, delete and destroy this message and its attachments.


**



Re: tape-to-tape performance

2002-11-20 Thread Cook, Dwight E
Is 1.6 TB the amount of must have/critical information ?
and is that already compressed ?
Knowing each tape mount runs about 90 seconds, any way to reduce tape mounts
will speed up copies.
If you have collocation on, turning it off ~could~ help... and collocation
can/is set on both primary pools and copy pools.
Now if the data isn't compressed by the client the data is uncompressed at
the drive, moved through the processor as ~full size~ data, then
recompressed at the destination drive.  If your clients have the horsepower,
turn on client compression, that way not so much data is moved across the
processor's bus.
Don't daisy chain any of your 3590's if they are older scsi.
You might be able to reduce your data down if you force the application
owners to review their required CRITICAL/MUST HAVE data and send it to
isolated pools for copies to take offsite.

just some thoughts...

Dwight

-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 20, 2002 1:26 PM
To: [EMAIL PROTECTED]
Subject: tape-to-tape performance


we run a 3494 tape library with 3590 tape drives. after our backup are
finished, we run a backup stgpool to a copy pool on about 1.6TB of data. as
you can imagine, even with 8 drives this takes some time. are there any
server, device or other parameters we can tune to improve the tape-to-tape
performance?


steve



Re: Where did the online manuals go?

2002-11-12 Thread Cook, Dwight E
http://www.tivoli.com/support/public/Prodman/public_manuals/td/TD_PROD_LIST.
html

just worked for me but it wasn't working at about 10:00 CDT

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Alexander Verkooijen [mailto:alexander;sara.nl]
Sent: Tuesday, November 12, 2002 9:40 AM
To: [EMAIL PROTECTED]
Subject: Where did the online manuals go?


Hello,

Does anybody know where the online manuals are?

They used to be at

http://www.tivoli.com/support/public/Prodman/public_manuals/td/TD_PROD_LIST.
html
#S

but that link is dead now.

There is a notice on the Tivoli site telling
me that the manuals have been moved:

http://www.tivoli.com/support/public/Prodman/public_manuals/info_center/redi
rect
_info.html

And after a few seconds it re-directs me to the
dead link above!

I've tried the new (IBM) support site,
but it seems that registration is required
to access it.

We used to give our customers the URL to the
manuals so they could solve most of their own
problems. Since we can't expect our customers
to register on the IBM site our only
alternative would be to maintain a repository
of manuals on our own website. Keeping such
a repository up to date would be very
time consuming.

Regards,

Alexander
---
Alexander Verkooijen([EMAIL PROTECTED])
Senior Systems Programmer
SARA High Performance Computing



Re: How to write script to automate TSM 5.11

2002-11-12 Thread Cook, Dwight E
You can do a
upd vol * acc=readw whereacc=unavail
or
upd vol * acc=readw whereacc=reado
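If you'd rather update only the volumes that are actually unavailable (closer to what the original question asked), the admin CLI output can be turned into individual commands. This sketch parses captured sample output; the dsmadmc invocation in the comment uses placeholder credentials:

```shell
# Sketch: build "upd vol" commands from "q vol" output.  In real use the
# sample text would come from something like:
#   dsmadmc -id=admin -password=xxxxx -dataonly=yes "q vol * access=unavailable"
sample='VOL001 DISKPOOL DISK Unavailable
VOL002 TAPEPOOL 3590 Unavailable'

cmds=$(printf '%s\n' "$sample" | awk '{ printf "upd vol %s access=readwrite\n", $1 }')
printf '%s\n' "$cmds"   # review these, then feed them back through dsmadmc
```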


Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Ron Lochhead [mailto:RLochhead;CSE-INSURANCE.COM]
Sent: Tuesday, November 12, 2002 2:52 PM
To: [EMAIL PROTECTED]
Subject: How to write script to automate TSM 5.11


I am attempting to find out how to write scripts for TSM 5.11 in an Win 2k
environment to automate the various commands during a backup.  Currently, I
am using the command line to access the TSM server thru my workstation.
What I want to do is write a script for instance, q vol *
access=unavailable which an operator could type one command and get
results of which vols need to be updated.  Then that output file would be
the input for the upd vol [volname] access=readwrite.

If anyone has written that or knows where I can go to find out that info, I
would appreciate it.

Thanks,
Ron Lochhead



TSM cost cutting measures...

2002-11-11 Thread Cook, Dwight E
OK, one would need to verify with Tivoli/IBM but I seem to recall hearing
TSM servers being licensed by physical machine.
IF that is the case then multiple virtual machines under VMware opens up a
whole new world of possibilities as far as cost savings go!
AND where you could run multiple windows based servers on virtual machines,
you could cut costs even more with multiple virtual Linux servers.
just some thoughts

Dwight



Re: Diskpool volume mirroring

2002-11-06 Thread Cook, Dwight E
Good reason for IBM ESS Storage ;-)

for diskpool volume mirroring just use AIX mirroring...

Dwight



-Original Message-
From: Loon, E.J. van - SPLXM [mailto:Eric-van.Loon;KLM.COM]
Sent: Wednesday, November 06, 2002 8:07 AM
To: [EMAIL PROTECTED]
Subject: Diskpool volume mirroring


Hi *SM-ers!
Our AIX guys have added 6 SSA disk to our TSM server for mirroring and now I
want to add them to TSM.
However, I cannot find a way to define a mirror volume for diskpool volumes.
Now my guess is that TSM mirroring is only available for database and log
volumes and not for diskpool volumes. I'm I right?
If so, the only alternative is using AIX mirroring for diskpool volumes?
Thanks in advance for any reply!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain confidential
and privileged material intended for the addressee only. If you are not the
addressee, you are notified that no part of the e-mail or any attachment may
be disclosed, copied or distributed, and that any other action related to
this e-mail or attachment is strictly prohibited, and may be unlawful. If
you have received this e-mail by error, please notify the sender immediately
by return e-mail, and delete this message. Koninklijke Luchtvaart
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be
liable for the incorrect or incomplete transmission of this e-mail or any
attachments, nor responsible for any delay in receipt.
**



Re: Moving the database..

2002-11-04 Thread Cook, Dwight E
Just simply dsmfmt the new ones on the new volumes with new names,
then define them to tsm BUT don't extend the db or log,
then delete the old ones.
TSM will ~move~ the data to the new ones and all is well.
OR
Just simply dsmfmt the new ones on the new volumes with new names,
then define them as mirrored copies of the existing ones
then delete the old/original ones.

either way works...
can be done while other activity is going on but best to do in slow times.
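A sketch of the first approach with made-up file names and sizes (dsmfmt runs from the AIX shell; the define/delete run from an admin session):

```
dsmfmt -m -db /newdisk/db01.dsm 1024   /* format a 1024 MB db file        */
define dbvolume /newdisk/db01.dsm      /* add it to the database          */
delete dbvolume /oldraid/db01.dsm      /* data migrates off, volume drops */
```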

Dwight



-Original Message-
From: Cahill, Ricky [mailto:Ricky.Cahill;EQUITAS.CO.UK]
Sent: Monday, November 04, 2002 2:29 AM
To: [EMAIL PROTECTED]
Subject: Moving the database..


I have a need to move the log files and database as currently they are on a
raid 5 set sharing it with the diskpools.

I'd like to move the database and log files onto separate mirrored pairs to
increase the performance and better use the disk space we have.

I've hunted around in the manuals and cannot find how I can do this.

Can anyone point me in the right direction please??

Thanks in advance.

Rikk




Equitas Limited, 33 St Mary Axe, London EC3A 8LL, UK
NOTICE: This message is intended only for use by the named addressee
and may contain privileged and/or confidential information.  If you are
not the named addressee you should not disseminate, copy or take any
action in reliance on it.  If you have received this message in error
please notify [EMAIL PROTECTED] and delete the message and any
attachments accompanying it immediately.

Equitas reserve the right to monitor and/or record emails, (including the
contents thereof) sent and received via its network for any lawful business
purpose to the extent permitted by applicable law

Registered in England: Registered no. 3173352 Registered address above





Re: Repeat after me...

2002-10-31 Thread Cook, Dwight E
I always just send mail to [EMAIL PROTECTED] and do a
set adsm-l nomail
that way the msgs don't come in while I'm away
then when I return I send [EMAIL PROTECTED] a msg
set adsm-l mail
to turn things on again...

Dwight


-Original Message-
From: Hamish Marson [mailto:hamish;TRAVELLINGKIWI.COM]
Sent: Thursday, October 31, 2002 12:19 PM
To: [EMAIL PROTECTED]
Subject: Repeat after me...


I will not configure my out of office spamming software to reply to list
traffic!


--

I don't suffer from Insanity... | Linux User #16396
I enjoy every minute of it...   |
|
http://www.travellingkiwi.com/  |



Re: www.tivoli.com gone???

2002-10-31 Thread Cook, Dwight E
Gone to me also...
I noticed it was gone earlier when I needed to access the TDP/Oracle manuals.
Finally I just went and dug up the install CD (sigh).

Dwight 



-Original Message-
From: Andrew Raibeck [mailto:storman;US.IBM.COM]
Sent: Thursday, October 31, 2002 12:00 PM
To: [EMAIL PROTECTED]
Subject: Re: www.tivoli.com gone???


I can still get to it, and though I work for IBM, I am not aware of any 
magic powers that enable only IBMers to look at it.

Are you trying to go to a page within www.tivoli.com, or did you actually 
try the main www.tivoli.com page? Try the main page to see if you can get 
to it. If that doesn't work, then try flushing your web browser cache, 
restart the browse, and try the main page again.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Dirk Billerbeck [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/31/2002 10:02
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:www.tivoli.com gone???

 

Hello,

is anybody able to connect to the Tivoli website??

The connection attempt times out and the error message says something like
www.tivoli.com has no DNS entry???

What is going on there by Tivoli? I need the knowledge base and the device
support list. It's urgent...

Mit freundlichen Grüßen,
Met vriendelijke groeten,
With best regards,
Bien amicalement,

CU/2,
Dirk Billerbeck


Dirk Billerbeck
GE CompuNet Kiel
Enterprise Computing Solutions
Am Jaegersberg 20, 24161 Altenholz (Kiel), Germany
Phone: +49 (0) 431 / 3609 - 117, Fax: +49 (0) 431 / 3609 - 190,
Internet: dirk.billerbeck @ gecits-eu.com


This email is confidential. If you are not the intended recipient,
you must not disclose or use the information contained in it.
If you have received this mail in error, please tell us
immediately by return email and delete the document.



Re: www.tivoli.com gone??? (new site)

2002-10-31 Thread Cook, Dwight E
Uh... Sept 25 to Oct 31 is about 36 days... and a note I received back
then stated that the Tivoli site would only be around for another 37 days, so
it looks like it is all Officially GONE !
http://www.ibm.com/software/sysmgmt/products/support/
is what I was given for the new support page (took a while, but I found the
msg...)

here it is (below)
later ,
Dwight


September 25, 2002

To:  All Registered Tivoli Customers World Wide

Subject: It's time to move to the new Tivoli web site!

Impact: Tivoli product support will soon only be available from IBM Online
Software Support - Only 37 days to go!

It's time!    We are now asking all Tivoli Web Support users to begin the
process of registration on the new IBM Online Support Site.    During the
last 30 days, we have received a few emails about our planned move.
Some concerns and some good questions were asked about the steps involved
in moving to this new site.    The concerns primarily have been about 'Ask
Tivoli' and the need to have a simple cross product search available on
IBM.    If you have not visited our new site yet, all information is
segmented into individual  product pages.   Each page is really  a complete
product reference for getting all available information for one product,
but they don't support searching across the solutions for several Tivoli
products.

 In response to this feedback, we have updated the home page for Tivoli
support, with a new simple search bar.    To give it a try, here is the
URL:
 http://www.ibm.com/software/sysmgmt/products/support/   As you will see,
this search bar will ask you to provide key words and select a
topic.    You can select the high level topic, like 'Systems Management' or
a specific discipline within a category.    This will help you do broader
searches without getting records from the whole portfolio of IBM Software
Products.   This is really just a specific use of some significant new
search engine and data base indexing enhancements IBM.com deployed 3 weeks
ago.   We hope you like it as this is a more accurate level of search
isolation than we provided in Tivoli's support site.

To help with the special questions for registration, we are providing
attachments from this memo you can print off.  The Lotus and Microsoft
files both contain the same detail.  Please review these closely.
(See attached file: Registration Instructions MS.doc)(See attached file:
Registration Instructions WP.lwp)

If you do continue to use the www.tivoli.com/support site, you will begin
to see other changes.
-We will be removing the 'Ask Tivoli' feature during the month of
October.
-We have migrated most content to IBM Online Software Support, and are
continuing to add hundreds of new items every month to the IBM site.
-The extensive reference materials for the IBM Tivoli Storage products are
all available from the IBM site with special references on the product
specific     pages.   The data on Tivoli.com/support for Storage products
is no longer being updated.
-The Tivoli problem submission tool will be removed by November 1st, or
shortly after.   This is replaced by ESR on IBM Online Software Support.
-Final content migrations are being addressed for the older downloads data,
but new downloads are being delivered to the IBM site.
-As we near the closing of the Tivoli.com/Support site, the URL's for all
the key areas will be transferred through redirects to the IBM site.

Thank you for all the feedback provided so far.     We are very interested
in your input and it continues to affect our current and future enhancement
plans for your online support options!   We know you placed a lot of value
in the Tivoli.com/support site and have appreciated all the positive survey
feedback provided in the past.    We know moving to a new information
layout presents some challenges, but once you've gained some familiarity,
we hope you find this a better total solution.    We also have a number of
common links always available in the 'Left' and 'Right' navigation bars, so
take a few minutes to get familiar with those options as well.    They will
help you shortcut your way to specific information, without needing to
retrace your steps to the home page.

As changes are delivered to both the Tivoli and IBM web site online support
locations, additional communications will be sent to you.   We also want to
hear from you as you find ways to improve this site.   Our Online Software
Support is not only getting a new look, but a much more significant focus
on continuing improvements and finding out how we can serve you better
online!

As you do access the new Tivoli support location for IBM Online Software
Support, please note that a feedback option is always available in the left
navigation bar.

We value your business and are working to make this migration a positive
improvement in the level of online support you can expect from IBM.

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 

Re: Encryption

2002-10-22 Thread Cook, Dwight E
Been a while and I'd have to double check but...
You might not want to use compression if you use encryption...
I believe it encrypts first then tries to compress and encrypted data
doesn't compress (much).
Something to double check.

Dwight



-Original Message-
From: J D Gable [mailto:josh.gable;eds.com]
Sent: Monday, October 21, 2002 4:13 PM
To: [EMAIL PROTECTED]
Subject: Encryption


Does anybody have any evidence/research as to what kind of additional
overhead encryption puts on a client when processing a backup (CPU,
Memory, etc.)?  I am running some tests myself, but the numbers are
staggering (we're seeing up to a 300% increase in the backup time in
some cases).  I understand that it is largely based on the horsepower of
the node, but I was wondering if anyone else is seeing the same, or if
anyone knew a place I could get some additional info on encryption.

Thanks in advance for your time,
Josh



Re: How to merge filespaces

2002-10-22 Thread Cook, Dwight E
What you will find (last time I checked...) 
Now, was the old filespace name eliminated totally ???
If so, TSM doesn't purge any of that data.
TSM doesn't know that the file system was removed, it only knows it isn't
available (maybe just not mounted...).
Existing inactive versions will expire naturally BUT the currently active
versions won't go away (ever), you will have to eventually purge them
manually.
AND based on the way your filespace names changed, you might need to look
into using {} to identify file systems...
Say you used to have a filesystem /oracle and under that you had
/oracle/d110  /oracle/d120  /oracle/d130 subdirectories but now you've
changed these to individual filesystems... if you had data saved under the
file system /oracle but under the .../d110/ subdirectory, say
/oracle/d110/myoraclefile.dbf  to find this you might have to do
query backup {/oracle}/d110/myoraclefile.dbf
to identify /oracle as the filesystem, if /oracle doesn't exist anymore and
they are all now /oracle/d110 etc...

probably about as clear as mud if you haven't had to deal with items like
this in the past...

hope this helps, 

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Michael Heiermann [mailto:Michael.Heiermann;LINDE-MH.DE]
Sent: Tuesday, October 22, 2002 4:23 AM
To: [EMAIL PROTECTED]
Subject: How to merge filespaces


is it possible to merge filespaces ?
our problem: we changed the filespace name when we moved to a new storage box.
new backups are directed to the new filespace, the old filespace still 
exists containing some old versions
we might need in the future.

thanks and regards

Michael Heiermann
OD1 Systembetreuung 
LINDE AG  Material Handling
Schweinheimer Straße 34
D-63743 Aschaffenburg
Tel.:   ++49 6021 99-1293
Fax.:   ++49 6021 99-6293 
e-mail:   [EMAIL PROTECTED]

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient (or have received this e-mail 
in error) please notify the sender immediately and destroy this e-mail. 
Any unauthorised copying, disclosure or distribution of the material 
in this e-mail is strictly forbidden.
Any views expressed in this message are those of  the  individual
sender,  except  where  the sender specifically states them to be
the views of Linde Material Handling.

Since January 2002 we use the e-mail domain linde-mh.de instead
of linde-fh.de.

This mail has been swept for the presence of computerviruses.



Re: Audit Library question.

2002-10-22 Thread Cook, Dwight E
All depends on your type of library !

IBM 3494-L12

To get the atl to actually scan the barcodes of the tapes you must go to the
operator console and do a
command, inventory, inventory update full
or something close to that (changes across levels of the library manager
code)
then inside tsm an audit library checklabel=barcode only checks against the
library manager's data base.

Dwight


-Original Message-
From: KEN HORACEK [mailto:KHORACEK;INCSYSTEM.COM]
Sent: Tuesday, October 22, 2002 1:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.


Not true...
With checklabel=barcode, all of the barcodes are read.  This is then checked
with the internal memory of the library as to what the library's inventory
says is where.  The tape is mounted, only if the barcode is mis-read.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if checkl=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

-
This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.
-
GWIASIG 0.07



Re: dsmserv.dsk

2002-10-11 Thread Cook, Dwight E

Is this a new/fresh install ???
if so, run
dsmserv format 1 yourlogfile  1 yourdbfile
the log file and db file should be ones you FIRST format with the operating
system command
dsmfmt -m -db yourdbfile <size>
where <size> is the number of MB the file should be
and
dsmfmt -m -log yourlogfile <size>
the dsmfmt command should reside in your install directory.



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Jacque Mergens [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 11, 2002 9:42 AM
To: [EMAIL PROTECTED]
Subject: dsmserv.dsk


I have just installed 5.1 and the latest patches on an AIX server but cannot
get my server to run because of an error reading the dsmserv.dsk file.

This file was not generated.  How do I generate this on the fly?

Jacque



Re: Slow (1 MB a minute) brrestore from LTO

2002-10-09 Thread Cook, Dwight E

What did you have your multiplexing set at ???
If you are only using a single tape drive you will want to really experiment
with setting the multiplexing up from one.
AND I'd make sure and use some form of compression, either RL_COMPRESSION or
straight client compression, once again, experiment.
How big is your data base ??? If I were you I'd really look into more drives
if your DB is very big.
I never count on a single session with the TSM server to move more than
about 2 GB/hr of data, now if you use client compression that 2 GB could be
as much as 7.6 GB of actual db space... the we get our rates by cranking up
multiple concurrent sessions, up to 20-25 to move 40+ GB/hr of compressed
data or 152 GB/hr of DB space.
Well, that was with the old backint 2.4.25.6, currently I'm working with IBM
on a little slow down problem we are seeing with the 3.2.0.11 code... it
just refuses to perform as well as the old code :-(
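For reference, the knobs mentioned above live in the TDP for R/3 profile (init<SID>.utl). Parameter names below are from memory of that manual, and the values are examples to experiment with, not recommendations:

```
# init<SID>.utl fragment -- example values only
MAX_SESSIONS    4      # concurrent TSM sessions for brbackup/brrestore
MULTIPLEXING    4      # files interleaved into each session
RL_COMPRESSION  YES    # compress null blocks before sending
```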

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Eric Winters [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 09, 2002 3:34 AM
To: [EMAIL PROTECTED]
Subject: Slow (1 MB a minute) brrestore from LTO


Can anyone help speed up my brrestore?

TSM Server 4.2.1.7 on AIX 4.3

The SAP server backed up using TDP for R/3 3.2.0.1 over a Gigabit network
to an AIX 4.3 TSM Server with a single SCSI LTO tape drive, with a throughput
of about 1 GB a minute. (The SAP server and the TSM Server are both on AIX
4.3).

I'm now restoring in a DR test. The TSM Server has been installed on the
same system as the SAP Server. The TSM database restored successfully and
the filesystems were rapidly restored too. The COMMMETHOD was set to TCPIP.
We are now trying to perform a brrestore. It works but is very slow, at
approx 1-2 MB a minute. I've upgraded the TDP to 3.2.0.12 but this made no
difference. I've changed the commmethod to sharedmem but it's made no
difference - indeed the session shown in 'q ses' shows a commmethod of
TCPIP so perhaps this didn't kick in after all. (although commmethod
SHAREDMEM is present in dsmserv.opt - and system restarted with this).

Both the backup client and TDP for R/3 use the same dsm.sys file.

The performance of the LTO drive was fine for restoring filesystems. It's
just the brrestore performance that's shocking. It would take several
months to restore at this rate. What can it be?
I backed up originally with just one session and I'm restoring with just
one session. There is only one session displayed in 'q ses' and it's
running - but so so slow.

The dsm.sys is very ordinary but here it is for good measure.

SErvername  tsm1
   COMMmethod sharedmem
   TCPServeraddress   10.1.1.9
   passwordaccess generate
   inclexcl   /usr/tivoli/tsm/client/ba/bin/dsm.inx
   schedlogname   /usr/tivoli/tsm/client/ba/bin/dsmsched.log
   schedlogretention  7
   passworddir/usr/tivoli/tsm/client/ba/bin

I'd be very pleased to hear of any ideas to get a reasonable restore rate -
at this rate it will apparently finish in January 2003!


Thanks,

Eric Winters
Sydney Australia



Re: space reclamation when using server-to-server communication

2002-10-08 Thread Cook, Dwight E

Where do your virtual volumes go ??? (straight to tape or first to a buffer
diskpool ???)
If they go straight to tape there is the possibility that your second
virtual volume to be reclaimed ends up being on the physical volume to which
your first reclamation process opened its output volume.  (thus the output
volume gets closed on the remote server because the tape is rewound to find
the second input/reclamation volume)

What is the mount retention of your device class for your virtual volumes
???

Generally virtual volumes are only used for a unit of work (or until their
estimated capacity is reached), since your reclamation processes are
actually TWO different processes I would expect TSM to close that output
virtual volume at the end of the first process and thus terminate its use.
I would not expect the second process to pick that back up and try to use
it... and this might be what is going on in that the virtual volume is
closed upon completion of the first reclamation and then the second process
tries to use it but it is closed...

Try this, if your mount retention for your device class for your virtual
volumes isn't zero, try setting it to zero.  This way upon completion of the
first reclamation every one / all routines involved will agree to close out
that virtual volume,  then when your second reclamation initiates it will
call for a new scratch.
NOW for the flip side of my twisted thinking... if your mount retention IS
currently zero, try setting it up to 1 or 2, IF currently zero it ~might~ be
triggering an early close of the volume even though the one environment
knows it has additional work to go to it.

BUT generally I'd expect the normal flow of things to be: upon completion of
the first reclamation task, the output virtual volume to be closed (since
that is a completed unit of work) and upon the initiation of the second
reclamation task, a new output virtual volume to be opened...
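Both experiments are one-line admin commands; a sketch, with VVOL_CLASS standing in for your actual SERVER-type device class name:

```shell
# From an administrative client (dsmadmc), check the current value first
query devclass VVOL_CLASS format=detailed

# If mount retention is non-zero, try closing virtual volumes right away...
update devclass VVOL_CLASS mountretention=0

# ...or, if it is already zero, give them a short grace period instead
update devclass VVOL_CLASS mountretention=2
```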


Dwight


-Original Message-
From: Sascha Braeuning
[mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 08, 2002 2:35 AM
To: [EMAIL PROTECTED]
Subject: space reclamation when using server-to-server communication


Hello TSMers,

I've got a problem with my space reclamation for virtual volumes. We use
two TSM Servers Version 4.2.2.0 (OS Sun Solaris 2.7). They do
server-to-server communication. When space reclamation starts for the first
reclaimable virtual volume in a storage pool, everything looks fine. Then
the second reclaimable virtual volume starts its reclamation process and
I always get an ANRD error.

Here is what TSM reports:

ANR8340I  SERVER volume TBS.BFS.032957914 mounted.
ANR8340I  SERVER volume TBS.BFS.033992020 mounted.
ANR1340I  Scratch volume TBS.BFS.033992020 is now defined in storage pool
REMOTE_UNIX.
ANR0986I  Process 360 for SPACE RECLAMATION running in the background
processed 34 items for a total of 1.997.933
  bytes with a completion state of SUCCESS at 14:05:16.
ANR1041I  Space reclamation ended for volume TBS.BFS.032957914.
ANR0984I  Process 361 for SPACE RECLAMATION started in the background at
14:05:16.
ANR1040I  Space reclamation started for volume TBS.BFS.033704013 storage
pool REMOTE_UNIX (process number 361).
ANR1044I  Removable volume TBS.BFS.033704013 is required for space
reclamation.
ANR8340I  SERVER volume TBS.BFS.033704013 mounted.
ANR1341I  Scratch volume TBS.BFS.032957914 has been deleted from storage
pool REMOTE_UNIX.
ANR8213E  Session 93 aborted due to send error; error 32.
ANRD  pvrserv.c(918): ThreadId 43 ServWrite: Error writing SERVER
volume TBS.BFS.033992020 rc=30.
  
ANR1411W  Access mode for volume TBS.BFS.033992020 now set to read-only due
to write error.


When I move data from TBS.BFS.033992020, no problems occured. Can anybody
explain, what happened at the server?


MfG/regards
Sascha Braeuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1634

Mailadr.:  [EMAIL PROTECTED]



Re: Renaming a TSM server.

2002-10-08 Thread Cook, Dwight E

Sure, it is easy and stable...
We've moved 8 servers in the past.

Once all done, use

    SET SERVername server_name

to change the name of your running TSM server...

Actually we just had new equipment at the new location and transported a
copy of the DB.
Backed it up at the old location, drove it to the new location, did a
restore db at the new location, pointed all the tsm clients to the new tsm
server and bounced their schedulers.
We basically moved each environment in less than 90 minutes (that was db
backup, drive, and db restore time).
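As a sketch, the move boils down to three commands; the admin credentials, device class, and new server name below are all hypothetical:

```shell
# Old site: take a full database backup to transportable media
dsmadmc -id=admin -password=secret "backup db devclass=LTOCLASS type=full"

# New site, server halted: restore the database from that backup
dsmserv restore db

# New server up: rename it to match the new naming standard
dsmadmc -id=admin -password=secret "set servername NEWNAME"
```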

Dwight
-Original Message-
From: Ochs, Duane [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 08, 2002 10:35 AM
To: [EMAIL PROTECTED]
Subject: Renaming a TSM server.


Good Morning everyone,
 Due to some reallocating of resources it has become necessary to relocated
one of my TSM servers and Jukebox. Our company has historically prefixed
system names with the  geographical location of the installation.

I would like to change the name of the TSM server to comply with this naming
structure.

My plan:

Change the TSM server name and IP address to comply with the corporate
standard in the new plant.

Update my DNS to reflect the new system.

Change every client's server identification to point to the new IP.

Restart all schedulers and remote web services on all clients using this new
IP.


Questions:

Is this possible without damaging the DB ?

What other steps should be added ?

Has anyone done this before ?


Duane Ochs
Systems Administration
Quad/Graphics Inc.
414.566.2375



FW: Important Update on the Tivoli Support Web Site Migration to IBM

2002-10-02 Thread Cook, Dwight E

If you use the Tivoli web site, it is changing.
Here is some info. I received Monday.
just passing it along, 
later, 
Dwight


-Original Message-
From: Customer Support WEB Registration [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 30, 2002 8:56 PM
To: [EMAIL PROTECTED]
Subject: Important Update on the Tivoli Support Web Site Migration to
IBM 


September 25, 2002

To:  All Registered Tivoli Customers World Wide

Subject: It's time to move to the new Tivoli web site!

Impact: Tivoli product support will soon only be available from IBM Online
Software Support - Only 37 days to go!

It's time!    We are now asking all Tivoli Web Support users to begin the
process of registration on the new IBM Online Support Site.    During the
last 30 days, we have received a few emails about our planned move.
Some concerns and some good questions were asked about the steps involved
in moving to this new site.    The concerns primarily have been about 'Ask
Tivoli' and the need to have a simple cross product search available on
IBM.    If you have not visited our new site yet, all information is
segmented into individual  product pages.   Each page is really  a complete
product reference for getting all available information for one product,
but they don't support searching across the solutions for several Tivoli
products.

 In response to this feedback, we have updated the home page for Tivoli
support, with a new simple search bar.    To give it a try, here is the
URL:
 http://www.ibm.com/software/sysmgmt/products/support/   As you will see,
this search bar will ask you to provide key words and select a
topic.    You can select the high level topic, like 'Systems Management' or
a specific discipline within a category.    This will help you do broader
searches without getting records from the whole portfolio of IBM Software
Products.   This is really just a specific use of some significant new
search engine and data base indexing enhancements IBM.com deployed 3 weeks
ago.   We hope you like it as this is a more accurate level of search
isolation than we provided in Tivoli's support site.

To help with the special questions for registration, we are providing
attachments from this memo you can print off.  The Lotus and Microsoft
files both contain the same detail.  Please review these closely.
(See attached file: Registration Instructions MS.doc)(See attached file:
Registration Instructions WP.lwp)

If you do continue to use the www.tivoli.com/support site, you will begin
to see other changes.
-We will be removing the 'Ask Tivoli' feature during the month of
October.
-We have migrated most content to IBM Online Software Support, and are
continuing to add hundreds of new items every month to the IBM site.
-The extensive reference materials for the IBM Tivoli Storage products are
all available from the IBM site with special references on the
product-specific pages.  The data on Tivoli.com/support for Storage products
is no longer being updated.
-The Tivoli problem submission tool will be removed by November 1st, or
shortly after.   This is replaced by ESR on IBM Online Software Support.
-Final content migrations are being addressed for the older downloads data,
but new downloads are being delivered to the IBM site.
-As we near the closing of the Tivoli.com/Support site, the URL's for all
the key areas will be transferred through redirects to the IBM site.

Thank you for all the feedback provided so far.     We are very interested
in your input and it continues to affect our current and future enhancement
plans for your online support options!   We know you placed a lot of value
in the Tivoli.com/support site and have appreciated all the positive survey
feedback provided in the past.    We know moving to a new information
layout presents some challenges, but once you've gained some familiarity,
we hope you find this a better total solution.    We also have a number of
common links always available in the 'Left' and 'Right' navigation bars, so
take a few minutes to get familiar with those options as well.    They will
help you shortcut your way to specific information, without needing to
retrace your steps to the home page.

As changes are delivered to both the Tivoli and IBM web site online support
locations, additional communications will be sent to you.   We also want to
hear from you as you find ways to improve this site.   Our Online Software
Support is not only getting a new look, but a much more significant focus
on continuing improvements and finding out how we can serve you better
online!

As you do access the new Tivoli support location for IBM Online Software
Support, please note that a feedback option is always available in the left
navigation bar.

We value your business and are working to make this migration a positive
improvement in the level of online support you can expect from IBM.



Re: Help! Problems restoring files of a dead AIX client on another system (adsm 3.1 2.90)

2002-10-02 Thread Cook, Dwight E

What client level was Barney ?  (and what level is Fred ?)
If Barney was higher than the code on Fred then yes, natrually Fred can't
see anything for Barney (from Barney)...
Also ADSM isn't rated for AIX 4.3.3... (have to get in the standard IBM
answer there ;-) )

I would try going into the dsm.sys file and making a SErver entry of
SErver BarneyAsFred
tcpserveraddress FRED
node BARNEY
rest of stuff...

then do a (if you must use the GUI, I'd use the command line though)

dsm -serv=barneyasfred

that should work, but if it doesn't try the command line (dsmc)
in the old days the GUI had lots of odd little quirks that made me never,
ever, use it !

Dwight

-Original Message-
From: J P [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 02, 2002 12:20 AM
To: [EMAIL PROTECTED]
Subject: Help! Problems restoring files of a dead AIX client on another
system (adsm 3.1 2.90)


OK I'm having some bang-head-here moments with this
old version of adsm.

Saga:
Server fred is running ADSM 3.1 2.90 on AIX 4.3.3
Client barney has been backing up to fred's tape
drives.
Barney is now dead (Barney was at the same OS level)

Problem: When dsm is brought up on fred only fred's
files & filesystems can be seen in the restore client.

What we know:
-
Barney is dead. There is stuff that was on barney's
drives that we need.
We copy barney's dsmsched.log files to fred every
morning, so we know he has been backing up files.  Using
NT adsm admin client we can list files in each tape
volume.  Barney's files are on tape.

What we've tried

Bring up dsm...

1. Restore - only sees fred's directories & files

2. Utilities -> Access another user -> enter node:
barney user:root
   - Only sees barney's base directory structure. No
files.

3. Utilities -> Access another user -> enter node:
barney user:admin
   - same result
   - File -> Connection information says node fred/user
accessing as
 node barney/user root

4. Utilities -> Properties -> General -> Node Name
   Changed Node name field from fred to barney and
clicked apply,
   OK and closed the client. Restarted dsm.
   - Same results.
   - File -> Connection information says node fred/user
accessing as
 node fred/user root

5. Repeat 1 through 4 again after restarting adsm
server & scheduler.
   - same results
 What am I missing??

Help!!!


__
Do you Yahoo!?
New DSL Internet Access from SBC & Yahoo!
http://sbc.yahoo.com



Re: retention question (may be trivial)

2002-10-01 Thread Cook, Dwight E

If a client quits backing up to a server, the server never has anything to
base the status of an active file on.
In other words, it never knows if it has turned inactive.
Since active files are there as long as the node is, if a box never backs up
again, its active files never go away.

Dwight


-Original Message-
From: Chetan H. Ravnikar [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 30, 2002 3:47 PM
To: [EMAIL PROTECTED]
Subject: retention question (may be trivial)


Hello respected people

forgive me if this question has been asked a million times :)

We currently have a server in production *A* (DLT)

we plan to replace the server *A* with a new beefier server *B* (AIT III).
On the day of putting *B* into production, we plan to rename *B* as *A* and
of course rename *A* to *A-old*.

This helps us make as few as zero changes on the 50 or so clients getting backed
up on to *A*

Since the media are different, we don't plan on exporting data, etc.

Our intent is to migrate no data from the old system to the new system,
but keep only the backupsets created on the old system for audit or legal
issues, where as let the data expire through its normal retention.

Assuming we want to keep the old system up for a while:

The question I have is: how is retention handled with respect to the last
copy of files (for all nodes) backed up on the old system?

will they expire or stay forever, since the clients are no longer backed
up to the old-system.

Is there a value I can look at (other than what I have set for *Retain only
version*, 120 days) that says they will expire after *x* days, or will they
not expire at all?


thanks for your input
Chetan



Re: Disk-Pool-Volume delete not possible

2002-09-30 Thread Cook, Dwight E

This is one I had a long time ago... hanging pointers; here is how to solve it.

make sure all data is migrated from disk pools to tape pools
do q vol dev=disk to capture all info on what disks are where
write macros to delete all disk volumes (except the problem one) and define
them back
then when you have an available window
backup your db
run your macro do delete the disk vols
(this saves hours with the audit by deleting all other disk volumes)
shutdown tsm
perform a dsmserv auditdb diskstorage fix=yes  to audit only the disk
storage portion of the data base
(I believe that is the proper syntax, I can't find my note from the
last time I did that and the adsm.org web site seems to be down currently...
there will be notes from me from last year that detail this)
start tsm back up
delete the problem volume
run your macro to redefine all your disk volumes back to your storage pools
have a beer...
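The two macros are plain text files of admin commands, run with 'macro <file>' from a dsmadmc session; a sketch with hypothetical pool and volume names:

```shell
/* delvols.mac -- delete every healthy disk volume (run after migration) */
delete volume /TSM1/stgc/stg1.dsm
delete volume /TSM1/stgc/stg2.dsm

/* defvols.mac -- recreate them afterwards; dsmfmt the files at the */
/* OS level first, then define them back into the pool              */
define volume DISKPOOL /TSM1/stgc/stg1.dsm
define volume DISKPOOL /TSM1/stgc/stg2.dsm
```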



Dwight


-Original Message-
From: Christoph Pilgram
[mailto:[EMAIL PROTECTED]]
Sent: Monday, September 30, 2002 7:31 AM
To: [EMAIL PROTECTED]
Subject: Disk-Pool-Volume delete not possible


Hi all,

on one of my TSM-Servers ( TSM-Vers : 4.1.4   / AIX 4.3.3 ) I have a problem
with one volume of my disk-pool :

I wanted to reorganize my pool-volumes, so I deleted all existing
disk-volumes of that storage-pool. All but one volume could be deleted.
For this volume the Message came :ANR2406E DELETE VOLUME: Volume
/TSM1/stgc/stg6.dsm still contains data.
I made a q content but no data were shown. I tried to delete with
discarddata=yes the reply was the same  :ANR2406E DELETE VOLUME: Volume
/TSM1/stgc/stg6.dsm still contains data.
Then I changed the status to offline.
I changed my volume-groups on AIX (by doing this, all files from that
disk-stgpool were deleted).
Now I have built all new volumes.
But how can I get rid of this one offline disk-volume? (I generated a new
one with the same size but another name and renamed it to the old name; setting
it online caused an error message: ANR1315E Vary-on failed for disk volume
/TSM1/stgc/stg6.dsm - invalid label block.)

Thanks for any help

Chris



Re: archiving with different management classes

2002-09-26 Thread Cook, Dwight E

Yep, don't complicate things...
just use the -archmc=blah to bind a specific archive run's data to a
specific management class...
just add an additional option of
-archmc=MGMT2_SWS


Dwight


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 26, 2002 7:29 AM
To: [EMAIL PROTECTED]
Subject: archiving with different management classes


Hi,

I've got to take every month a monthly archive and once a year a yearly
archive. Of course the protection of the latter archive is longer.

The default management class has the correct archive copy group for the
monthly archive (management class MGMT_SWS). I've created a second
management class (MGMT2_SWS) for the yearly archive.

The default dsm.opt client options file binds the archive to the correct
management class for the monthly archive. I've created a second client
options file dsm2.opt with an include.archive statement towards the
management class MGMT2_SWS

If I launch the archive from the command prompt as:

dsmc archive -optfile=c:\temp\dsm2.opt -subdir=yes c:\*.* d:\*.*

the archive of the files is indeed done with the management class MGMT2_SWS

If I specify a schedule:

Policy Domain Name POL_SWS
Schedule Name TEST4
Description -
Action ARCHIVE
Options -optfile=c:\temp\dsm2.opt -subdir=yes
Objects c:\hp\* d:\windows\winzip81\*

the archives are bound to the management class MGMT_SWS and not to
the management class MGMT2_SWS. The scheduler is bound to the dsm.opt file,
but if I specify another client options file in the options, this should
override the default settings.

Am I missing something?

Kurt


