Cannot back up NDS with NetWare client 5.1.5

2002-10-22 Thread Tim Brown
I have seen this problem reported recently; just throwing in my complaint.
NetWare 5.1 server, Service Pack 4, with TSM client 5.1.5

Unable to specify NDS in the DOMAIN statement:

DOMAIN ALL-LOCAL NDS
also tried
DOMAIN ALL-LOCAL DIR

ANS1036S Invalid option 'DOMAIN' found in options file


Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284 South Avenue
Poughkeepsie, NY 12601
 
Phone: 845-486-5643
Fax: 845-486-5921
Pager: 845-455-6985
 
[EMAIL PROTECTED]



How to merge filespaces

2002-10-22 Thread Michael Heiermann
Is it possible to merge filespaces?
Our problem: we changed the filespace name when we moved to a new storage box.
New backups are directed to the new filespace; the old filespace still
exists, containing some old versions we might need in the future.

thanks and regards

Michael Heiermann
OD1 Systembetreuung 
LINDE AG  Material Handling
Schweinheimer Straße 34
D-63743 Aschaffenburg
Tel.:   ++49 6021 99-1293
Fax.:   ++49 6021 99-6293 
e-mail:   [EMAIL PROTECTED]

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient (or have received this e-mail 
in error) please notify the sender immediately and destroy this e-mail. 
Any unauthorised copying, disclosure or distribution of the material 
in this e-mail is strictly forbidden.
Any views expressed in this message are those of  the  individual
sender,  except  where  the sender specifically states them to be
the views of Linde Material Handling.

Since January 2002 we use the e-mail domain linde-mh.de instead
of linde-fh.de.

This mail has been swept for the presence of computerviruses.



Re: Encryption

2002-10-22 Thread Cook, Dwight E
Been a while and I'd have to double-check, but...
you might not want to use compression if you use encryption.
I believe it encrypts first, then tries to compress, and encrypted data
doesn't compress (much).
Something to double-check.

Dwight
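For reference, the client-side options involved look roughly like this in dsm.opt (a sketch; option names as in the 5.x Backup-Archive client, and the include pattern is invented, so check both against your client level):

```
* dsm.opt fragment: encrypt one subtree, leave compression off
COMPRESSION       NO
ENCRYPTKEY        SAVE
INCLUDE.ENCRYPT   C:\data\...\*
```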



-Original Message-
From: J D Gable [mailto:josh.gable;eds.com]
Sent: Monday, October 21, 2002 4:13 PM
To: [EMAIL PROTECTED]
Subject: Encryption


Does anybody have any evidence/research as to what kind of additional
overhead encryption puts on a client when processing a backup (CPU,
Memory, etc.)?  I am running some tests myself, but the numbers are
staggering (we're seeing up to a 300% increase in the backup time in
some cases).  I understand that it is largely based on the horsepower of
the node, but I was wondering if anyone else is seeing the same, or if
anyone knew a place I could get some additional info on encryption.

Thanks in advance for your time,
Josh



Sony Library LIB-162

2002-10-22 Thread Sascha Braeuning
Hello all,

I've got a question about a Sony library. Does anybody have any experience
(good or bad) with the Sony StoreStation AIT Library LIB-162? What is your
opinion of AIT cartridges?


MfG
Sascha Bräuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1634

Mailadr.:  [EMAIL PROTECTED]



dismiss dsmadmc header output

2002-10-22 Thread Michael Kindermann
Hello,
I found this question once in the list, but didn't find any answer.
Is there a way, something like a switch or an option, to influence the
dsmadmc output so that it gives only the interesting result and no overhead?

I am trying to script some tasks in a shell script, and I am a little
annoyed, because it is not very difficult to get some output from the dsm
server, but it is hard to reuse the information in the script.
For example:
I want to remove a node, so I first have to delete its filespaces. I also
have to delete the association. I am afraid to use wildcards like 'del filespace
node_name *' in a script, so I need the filespace names.
I run dsmadmc -id=... -pa=... q filespace node_name *, or select
filespace_name from filespaces.
All I need is the name, but I get a lot of server information:

Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 1, Level 2.0
(C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

Session established with server ADSM: AIX-RS/6000
  Server Version 4, Release 2, Level 2.7
  Server date/time: 10/22/2002 11:34:44  Last access: 10/22/2002 11:26:01

ANS8000I Server command: 'q node TSTW2K'

Node Name Platform Policy Domain  Days Since
Days Since Locked?
   Name   Last Acce-
  Password
  ss
   Set
-  -- --
-- ---
TSTW2KWinNTSTANDARD  277
   278   No

ANS8002I Highest return code was 0.

Greetings

Michael Kindermann
Wuerzburg / Germany
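Lacking a built-in switch on this client level, one workaround is to filter the output in the shell. A sketch (the sed range is an assumption about the message format shown above; depending on your client level, dsmadmc may also accept delimited-output options such as -COMMAdelimited or -TABdelimited, which make parsing easier):

```shell
# filter_dsmadmc: keep only the data rows of a dsmadmc session,
# i.e. the lines between the ANS8000I command echo and the ANS8002I
# return-code line, minus the ANS messages and blank lines themselves
filter_dsmadmc() {
  sed -n '/^ANS8000I/,/^ANS8002I/{/^ANS/d;/^$/d;p;}'
}

# canned sample of a dsmadmc session, for illustration only;
# real use: dsmadmc -id=... -pa=... "query ..." | filter_dsmadmc
sample="Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 1, Level 2.0

ANS8000I Server command: 'q filespace TSTW2K'

/export/home
/opt/app

ANS8002I Highest return code was 0."

printf '%s\n' "$sample" | filter_dsmadmc
```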






Re: Transaction Log Restores

2002-10-22 Thread Del Hoobler
> Question is, can I still archive these log files so they can be
> applied to the previous backup?
>
> No.  You cannot. That is because Domino no longer tracks them.
>
> Would it be possible to manually copy the 'missing' log files back to the
> restore area before doing the restore? In that way, maybe the client
> would find the logs at the time when they were needed in the process?
>
> Alternatively, could you do a restore to a PIT just before the missing
> logs, without an activate, then in some way apply the missing logs, and
> return to the TDP to apply logs and activate after the missing ones?

Richard,

Your first idea might work and is the one you should try.

As for your second idea... as you have described,
I really don't know how you could accomplish it.
How do you propose to apply the missing logs?
The only two ways I know to apply logs is through
TDP for Domino and through a Domino server recovery.

Del



RAID5 in TSM

2002-10-22 Thread Raghu S
Hi,

There was a lot of discussion on this topic before, but I am asking the TSM
gurus to give their comments again.

The set up is like this.

TSM Server : Windows NT 4.0 SP6, TSM 5.1.0.0

392 MB memory, P III

  Adaptec Ultra SCSI

Hard Disk :  Internal   Hardware RAID 5:

 array A : 8.678GB * 3 : 17.356GB data and 8.678 GB
parity

 array B : 35.003 GB * 3 : 70.006GB data and 35.003
GB parity.


Both array A and array B are connected to the same channel.

OS and TSM 5.1 are installed on array A

TSM data base, recovery log and Disk storage pool are installed in array B.

Database : 2GB+2GB = 4 GB  and mirrored at TSM level on the same array

Recovery Log : 500MB + 500 MB = 1 GB and mirrored at TSM level on the same
array

Disk Storage pool : 10GB+10GB+10GB+10GB+5GB=45GB on array B


TSM client: 4.1.2.12 (Tivoli says 4.1.2.12 is not supported with the 5.1
server, but I could do backup, archive and restore with this
combination)

Number of Clients : 55, all are windows

Incremental backup : 1GB/ client/day.

backup window : 9 AM to 6 PM with 50% randomization (all are in polling
mode)

LAN : 100Mbps

At the end of the day only 10 clients could finish the backup. The rest are
all missed, '?' (in progress), or failed.

Throughout the entire backup window the CPU load is 100%, with dsmsvc.exe
holding 98%.

I tested with various options. I stopped the scheduler and fired 3 client
backups manually at the same time. Each client has 1 GB of incremental data.
It took three hours to finish the backup. While backing up I observed there
were a lot of idle-timeouts of sessions.

There is no network choke; I checked this with FTP.

What's the bottleneck here? Is RAID 5 creating problems (DB, log and
storage pool are all on the RAID 5)? I asked the customer to arrange a test
machine without any RAID; I will be getting that in two days. Before going
on to the testing I would like to know your comments on this.



Regards

Raghu S Nivas
Consultant - TSM
DCM Data Systems Ltd
New Delhi
India.

e-mail: [EMAIL PROTECTED],[EMAIL PROTECTED]
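One generic thing worth checking before the no-RAID test (an assumption about where the time goes, not a diagnosis): whether the database buffer cache is too small, since RAID-5 makes the many small DB writes expensive. From an admin session:

```
q db f=d
```

In the detailed output, look at "Cache Hit Pct."; a value much below roughly 98% suggests raising BUFPOOLSIZE in dsmserv.opt, although with 392 MB of RAM on this box there is little headroom.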



Re: DiskXtender 2000 High CPU Usage after install

2002-10-22 Thread Justin Case
Yes, I have been running DiskXtender 2000 for about 18 months, and the dxspy
daemon runs continuously checking for files to update, so this is a common
issue. It is one that we have to live with...
Justin




Niklas Lundstrom <[EMAIL PROTECTED]>
on 10/21/2002 05:27:34 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To: [EMAIL PROTECTED]
cc:
Subject: DiskXtender 2000 High CPU Usage after install


Hello

I have installed DX2000 on our big NT file server (NT4 SP6a, approx. 250 GB
and 3 million files).
Now, after the install, the system process is running very high, about
50-100% CPU usage.
Has anyone else had this problem?

Regards
Niklas Lundström
 Swedbank







Re: How to merge filespaces

2002-10-22 Thread Cook, Dwight E
What you will find (last time I checked...):
Now, was the old filespace name eliminated totally?
If so, TSM doesn't purge any of that data.
TSM doesn't know that the file system was removed; it only knows it isn't
available (maybe just not mounted...).
Existing inactive versions will expire naturally, BUT the currently active
versions won't go away (ever); you will eventually have to purge them
manually.
AND based on the way your filespace names changed, you might need to look
into using {} to identify file systems...
Say you used to have a filesystem /oracle, and under that you had
/oracle/d110, /oracle/d120 and /oracle/d130 subdirectories, but now you've
changed these to individual filesystems... if you had data saved under the
file system /oracle but under the .../d110/ subdirectory, say
/oracle/d110/myoraclefile.dbf, then to find it you might have to do
query backup {/oracle}/d110/myoraclefile.dbf
to identify /oracle as the filesystem, since /oracle doesn't exist anymore and
they are all now /oracle/d110 etc...

probably about as clear as mud if you haven't had to deal with items like
this in the past...
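The eventual manual purge boils down to deleting the orphaned filespace once you are sure nothing in it is still needed; the node and filespace names here are hypothetical:

```
query filespace MYNODE
query occupancy MYNODE
delete filespace MYNODE /oracle
```

delete filespace is irreversible, so checking the query output carefully first is wise.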

hope this helps, 

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Michael Heiermann [mailto:Michael.Heiermann;LINDE-MH.DE]
Sent: Tuesday, October 22, 2002 4:23 AM
To: [EMAIL PROTECTED]
Subject: How to merge filespaces


is it possible to merge filespaces ?
our problem: we changed filespacename when we moved to a new storagebox.
new backups are directed to the new filespace, the old filespace still 
exists containing some old versions
we might need in the future.

thanks and regards

Michael Heiermann
OD1 Systembetreuung 
LINDE AG  Material Handling
Schweinheimer Straße 34
D-63743 Aschaffenburg
Tel.:   ++49 6021 99-1293
Fax.:   ++49 6021 99-6293 
e-mail:   [EMAIL PROTECTED]




Re: RAID5 in TSM

2002-10-22 Thread Lawrence Clark
Even though there may be a slight performance hit on writes, I've placed
the TSM DB on RAID-5 to ensure availability and no down time in case of
a disk loss.

 [EMAIL PROTECTED] 10/22/02 08:03AM 
Hi,

There was a lot of discussion on this topic before.But i am requesting
TSM
gurus give their comments again.

The set up is like this.

TSM Server : Windows NT 4.0 SP6, TSM 5.1.0.0

392 MB memory, P III

  Adaptech Ultra SCSI

Hard Disk :  Internal   Hardware RAID 5:

 array A : 8.678GB * 3 : 17.356GB data and
8.678 GB
parity

 array B : 35.003 GB * 3 : 70.006GB data and
35.003
GB parity.


Both array A and array B are connected to the same channel.

OS and TSM 5.1 are installed on array A

TSM data base, recovery log and Disk storage pool are installed in
array B.

Database : 2GB+2GB = 4 GB  and mirrored at TSM level on the same array

Recovery Log : 500MB + 500 MB = 1 GB and mirrored at TSM level on the
same
array

Disk Storage pool : 10GB+10GB+10GB+10GB+5GB=45GB on array B


TSM client: 4.1.2.12 ( Tivoli says 4.1.2.12 is not supported with 5.1
Server. But i could take the backup,archive and restore with this
combination )

Number of Clients : 55, all are windows

Incremental backup : 1GB/ client/day.

backup window : 9AM to 6PM with 50% randamization ( all are in polling
mode
)

LAN : 100Mbps

End of the day only 10 clients could finish the backup.Remaining all
are
missing or ? ( in progress ) or failed.

Through the entire backup window the CPU load is 100% with dsmsvc.exe
holding 98%

I tested with various options. I stopped the schedular and fired 3
clients
backup manually at the same time.Each client has 1 GB of incremental
data.
It took three hours to finish the backup. While backing up i observed
there
was lot of idletime outs of sessions.

Network choke is not there. I checked this with FTP.

Whats the bottleneck here? Is RAID 5 is creating problems ( DB,log and
storage pool all are on the RAID 5 )? I asked the customer to arrange
a
testing machine without any RAID. I will be getting that in two
days.Before
going on to the testing i like to know your comments on this.



Regards

Raghu S Nivas
Consultant - TSM
DCM Data Systems Ltd
New Delhi
India.



new tapedrive in 3575 library

2002-10-22 Thread Michelle Wiedeman
Hi all,

first of all,
aix 4.3.2.0
adsm 3.1 on NSM
Magstar 3575 with 3 3570 C-drive tape units, attached to 1 wide SCSI
differential controller.
The addresses already in use by the units and the media changer are
20-60-00-0,1, 20-60-00-0,0, 20-60-00-1,0 and 20-60-00-2,0
(the total package is referred to by IBM as the IBM 3466 Network Storage Manager)

I want to add another tape unit to the library. In the server there is
another SCSI adapter available.
My idea is to break the chain from the one adapter to the 3 tape units, hook
the third unit to the second SCSI adapter, and chain it to the new one.
(Still with me?)

Now I want to know if there are any known problems, issues to consider,
things to do in advance, etc.
In other companies I've worked for, hardware issues were always a big no,
and everything had to be done by an IBM engineer.

I hope someone can help me out.

Thnx,\
michelle



Re: Co-location

2002-10-22 Thread Matt Simpson
At 3:53 PM -0400 10/21/02, Thach, Kevin said:

The person that installed our environment basically set up 6 Policy Domains:
Colodom, Exchange, Lanfree, MSSQL, Nocodom, and Oracle.

99% of the clients are in the Nocodom (non-collocated) domain, which has one
policy set, and one management class which has one backup copy group with
retention policies set to NOLIMIT, 3, 60, 60.


This is away from the topic of Kevin's question, but his background
info led to a question that we're looking at right now.

We currently have one big disk pool for all our backups, which
migrates to one tape pool, which we copy to another tape pool for
offsite.

We turned on co-location on the onsite tape pool a couple of weeks
ago, because we just started using the SQL TDP, and the doc
recommended colocation.  We turned it back off this morning, because
we were running out of tapes and had a lot of them that were only 5%
full.

We would like to do what Kevin says he's doing: specify colocation
for a small number of our clients and leave it off for a bunch of
them.  But, if I understand correctly, colocation isn't specified
directly in the management  class.  It's specified on the tape
storage pool definition. So specifying colocation for some  clients
but not all would require multiple tape storage pools, which wouldn't
really be a problem.  But it looks like that would also require
multiple disk storage pools, because, as far as I can tell, the only
way to get a client into a different tape pool is to have it in a
different disk pool.

We'd really like to avoid carving up our disk space into more smaller
pools.  But, as far as I can tell, that's the only way to use
colocation selectively.  Am I missing something, or is that the way
it works?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.
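That is also my reading: collocation hangs off the storage pool, and a client reaches a pool only via the copy-group destination, so selective collocation means a second disk pool feeding a second, collocated tape pool. A sketch of the server commands (all names and sizes invented; the device class must match your library):

```
define stgpool coll_tape tapeclass maxscratch=50 collocate=yes
define stgpool coll_disk disk nextstgpool=coll_tape
define volume coll_disk /tsm/coll_disk01.dsm formatsize=10240
/* point the chosen clients' management class at the new disk pool */
update copygroup mydomain myset collmc standard type=backup destination=coll_disk
activate policyset mydomain myset
```

Clients bound to the other management classes keep landing in the original disk pool, so only the collocated nodes pay the extra-tape cost.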



Re: Monitoring oracle Backups

2002-10-22 Thread Lawrie Scott
HI Joe

I do an Oracle backup and SAP backups; the Unix staff have done a whole lot
of fancy scripting for me and extract data to a log file, which is then
e-mailed to all the relevant people each day. Possibly you could get some
scripting done for this. I have included a sample output below from our daily
SAP backups.

If you require the script used, maybe I can arrange a copy.

*===
*Date: Mon Oct 21 20:00:18 SAST 2002
*===
**
*   Starting backup of SPP   *
**
BKI1215I: Average transmission rate was 40.181 GB/h (11.429 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 21:35:49 2002 .
BKI0021I: Elapsed time: 35 min 24 sec .
BKI0024I: Return code is: 0.
BKI1215I: Average transmission rate was 0.019 GB/h (0.005 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 21:36:22 2002 .
BKI0021I: Elapsed time: 29 sec .
BKI0024I: Return code is: 0.
**
*   Backup of SPP complete   *
**
**
*Starting backup of redo logs*
**
BKI1215I: Average transmission rate was 12.054 GB/h (3.429 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 21:37:04 2002 .
BKI0021I: Elapsed time: 40 sec .
BKI0024I: Return code is: 0.
BKI1215I: Average transmission rate was 11.124 GB/h (3.164 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 21:37:35 2002 .
BKI0021I: Elapsed time: 30 sec .
BKI0024I: Return code is: 0.
BKI1215I: Average transmission rate was 0.082 GB/h (0.023 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 21:38:13 2002 .
BKI0021I: Elapsed time: 38 sec .
BKI0024I: Return code is: 0.
*===
*Date: Mon Oct 21 20:38:15 SAST 2002
*===
*===
*Commence SPD backup at 20:38:15
*===
**
* Shutdown Appl  db for SPD *
**
**
*   Starting backup of SPD   *
**
BKI1215I: Average transmission rate was 40.049 GB/h (11.392 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 22:19:55 2002 .
BKI0021I: Elapsed time: 40 min 43 sec .
BKI0024I: Return code is: 0.
BKI1215I: Average transmission rate was 0.016 GB/h (0.004 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 22:20:54 2002 .
BKI0021I: Elapsed time: 35 sec .
BKI0024I: Return code is: 0.
**
*   Backup of SPD complete   *
**
**
*Start up SAP for SPD*
**
*===
*Commence SPQ backup at 21:22:20
*===
**
* Shutdown Appl  db for SPQ *
**
**
*   Starting backup of SPQ   *
**
BKI1215I: Average transmission rate was 49.056 GB/h (13.954 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 23:00:59 2002 .
BKI0021I: Elapsed time: 37 min 39 sec .
BKI0024I: Return code is: 0.
BKI1215I: Average transmission rate was 0.020 GB/h (0.006 MB/sec).
BKI0020I: End of backint program at: Mon Oct 21 23:01:43 2002 .
BKI0021I: Elapsed time: 29 sec .
BKI0024I: Return code is: 0.
**
*   Backup of SPQ complete   *
**
**
*Start up SAP for SPQ*
**
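As a building block for the paging side (a sketch, not the actual script mentioned above): the backint log lines carry a "Return code is: N." marker per step, so a small filter can flag any failed step.

```shell
# check_backint_log: read a backint-style log on stdin; exit non-zero
# if any "Return code is: N." line reports a non-zero N
check_backint_log() {
  awk '/Return code is:/ { if ($NF + 0 != 0) bad = 1 } END { exit bad }'
}

# example: one step failed, so the alert fires
printf 'BKI0024I: Return code is: 0.\nBKI0024I: Return code is: 8.\n' \
  | check_backint_log || echo 'backup failed - page the DBAs'
```

From cron this could run right after the backup and call whatever paging gateway is in place; the paging mechanism itself is outside TSM.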

- Original Message -
From: Wholey, Joseph (TGA\MLOL) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 3:26 PM
Subject: Monitoring oracle Backups


Recently installed TDP for Oracle.  The DBAs would like to be paged in real
time if a database backup fails.  Aside from the paging part, does anyone
know where I can pull the status of an Oracle database backup as soon as it
completes?  I don't want to extract from the client logs (when would I start
looking?), and the activity log does not supply all that much useful
information.  Any help would be greatly appreciated.

thx.  -joe-



Re: Testing the day of week in a script ....

2002-10-22 Thread Lawrie Scott
Hi

Where can one find this manual.

Lawrie

- Original Message -
From: Steve Harris [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 1:56 AM
Subject: Re: Testing the day of week in a script 


Arnaud,

TSM comes with a rich set of SQL functions.  Get a hold of Andy Raibeck's
Using the SQL Interface manual, it is very useful (search the archives to
find where it is)

Date functions include

Day  - Day of month
dayname
dayofweek - 1..7 Sunday..Saturday
dayofyear
daysinmonth
daysinyear
Monthname

I've just been reviewing the manual and there are some neat ones that I'm
going to try

coalesce(value,value,value...)   returns the first non-null in the list of
values
nullif(expression1,expression2) returns null if the expressions are equal or
expression1 otherwise

Andy, your manual is great, but it's a bit out of date.  Any chance of an up
to date version or even better making it a part of the shipped manual set?

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia
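On the original performance question: since the select only exists to set a return code, one cheap trick (an untested sketch; the STATUS column name is an assumption) is to test the day name against a tiny table instead of scanning the activity log:

```
select server_name from status where dayname(current_timestamp)='Tuesday'
if (rc_ok) goto tuesday
if (rc_notfound) goto restofweek
```

The STATUS table has only a handful of rows, so the WHERE clause is evaluated instantly and the script branches the same way as before.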

 [EMAIL PROTECTED] 21/10/2002 19:39:12 
Hi *SM'ers,

I'm looking for an SQL command to check the current day of week, before
starting the conditional excution of a subsequent script. I already found
something that seems to work, looking like :

Script blah ...
select date_time, message from actlog where
cast((current_timestamp-date_time) minutes as decimal(8,0)) < 1 and
dayname(date_time) = 'Tuesday'
if(rc_ok) goto tuesday
If (rc_notfound) goto restofweek
tuesday :
run script1
exit
restofweek :
run script2
exit

Unfortunately this kind of select statement takes ages to complete, because
it has to go through the whole activity log to find the last lines written, as
we keep one month of it!
Does anybody have a faster way of doing this kind of query/test? (Without
calling an AIX script.)
Thanks in advance.
Cheers.

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



**
This e-mail, including any attachments sent with it, is confidential
and for the sole use of the intended recipient(s). This confidentiality
is not waived or lost if you receive it and you are not the intended
recipient(s), or if it is transmitted/ received in error.

Any unauthorised use, alteration, disclosure, distribution or review
of this e-mail is prohibited.  It may be subject to a statutory duty of
confidentiality if it relates to health service matters.

If you are not the intended recipient(s), or if you have received this
e-mail in error, you are asked to immediately notify the sender by
telephone or by return e-mail.  You should also delete this e-mail
message and destroy any hard copies produced.
**



Re: new tapedrive in 3575 library

2002-10-22 Thread Gene Greenberg
I'm using 6 3570's in a magstar 3575 with the following addresses on two
adapters:

rmt1  10-68-00-3,0
rmt2  10-68-00-4,0
rmt3  10-68-00-5,0
rmt4  20-60-00-0,0
rmt5  20-60-00-1,0
rmt6  20-60-00-2,0

I set these up with no problems and looked up the device addresses for the
3575 library on the IBM website.

Gene Greenberg Jr.
Lead, System Administrator
Round Rock ISD
1311 Round Rock Ave.
Round Rock, TX 78681




Michelle Wiedeman <michelle.wiedeman@MULTRIX.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

10/22/02 04:33 AM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: new tapedrive in 3575 library






Hi all,

first of all,
aix 4.3.2.0
adsm 3.1 on NSM
magstar 3575 with 3 3570 c drive tapeunits. attached to 1 wide scsi
differential controller.
The adresses already in use by the units and the mediachanger are
20-60-00-0,1, 20-60-00-0,0, 20-60-00-1,0 and 20-60-00-2,0
(total package is reffered to by ibm as IBM 3466 Network Storage Manager)

I want to add another tapeunit to the library, In the server there is
another scsi adapter availeble.
My idea is to break the chain of the one adapter to the 3 tapeunits and
hook
the third unit to the second scsi adapeter and chain it to the new
one.(still with me?)

Now i want to know if there are any known problems, issues to consider,
things to do in advance etc.
In other company's ive worked for hardware issues where always a big no,
and
everything had to be done by an IBM engineer.

I hope someone can help me out.

Thnx,\
michelle



Re: Co-location

2002-10-22 Thread Guillaume Gilbert
Since we use 9840 tapes, we didn't want clients with 1-2 GB of data to use a
whole tape, so we did just what you described. Over 50% of our clients are in
this situation. We put the limit at 10-12 GB (a 9840 holds 20 GB). Sure, you
have to carve up your disk pool, but the small clients don't require a lot of
disk: their pool is only 10 GB, and it can hold a night's worth of backups
easily.

Guillaume Gilbert
CGI Canada




Matt Simpson <[EMAIL PROTECTED]> on 2002-10-22 09:46:13

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Co-location

At 3:53 PM -0400 10/21/02, Thach, Kevin said:
The person that installed our environment basically set up 6 Policy Domains:
Colodom, Exchange, Lanfree, MSSQL, Nocodom, and Oracle.

99% of the clients are in the Nocodom (non-collocated) domain, which has one
policy set, and one management class which has one backup copy group with
retention policies set to NOLIMIT, 3, 60, 60.

This is away from the topic of Kevin's question, but his background
info led to a question that we're looking at right now.

We currently have one big disk pool for all our backups, which
migrates to one tape pool, which we copy to another tape pool for
offsite.

We turned on co-location on the onsite tape pool a couple of weeks
ago, because we just started using the SQL TDP, and the doc
recommended colocation.  We turned it back off this morning, because
we were running out of tapes and had a lot of them that were only 5%
full.

We would like to do what Kevin says he's doing: specify colocation
for a small number of our clients and leave it off for a bunch of
them.  But, if I understand correctly, colocation isn't specified
directly in the management  class.  It's specified on the tape
storage pool definition. So specifying colocation for some  clients
but not all would require multiple tape storage pools, which wouldn't
really be a problem.  But it looks like that would also require
multiple disk storage pools, because, as far as I can tell, the only
way to get a client into a different tape pool is to have it in a
different disk pool.

We'd really like to avoid carving up our disk space into more smaller
pools.  But, as far as I can tell, that's the only way to use
colocation selectively.  Am I missing something, or is that the way
it works?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.







Re: Co-location

2002-10-22 Thread Jane Bamberger
HI,

I think your problem might be the NOLIMIT option. It saves a copy of all
files, whether or not they are deleted, so it doesn't clear space off your
disks during expiration. We had the same problem until I removed NOLIMIT
from our Novell policy.

Jane
%%
Jane Bamberger
IS Department
Bassett Healthcare
607-547-4750
- Original Message -
From: Matt Simpson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 9:46 AM
Subject: Re: Co-location


 At 3:53 PM -0400 10/21/02, Thach, Kevin said:
 The person that installed our environment basically set up 6 Policy
Domains:
 Colodom, Exchange, Lanfree, MSSQL, Nocodom, and Oracle.
 
 99% of the clients are in the Nocodom (non-collocated) domain, which has
one
 policy set, and one management class which has one backup copy group with
 retention policies set to NOLIMIT, 3, 60, 60.

 This is away from the topic of Kevin's question, but his background
 info led to a question that we're looking at right now.

 We currently have one big disk pool for all our backups, which
 migrates to one tape pool, which we copy to another tape pool for
 offsite.

 We turned on co-location on the onsite tape pool a couple of weeks
 ago, because we just started using the SQL TDP, and the doc
 recommended colocation.  We turned it back off this morning, because
 we were running out of tapes and had a lot of them that were only 5%
 full.

 We would like to do what Kevin says he's doing: specify colocation
 for a small number of our clients and leave it off for a bunch of
 them.  But, if I understand correctly, colocation isn't specified
 directly in the management  class.  It's specified on the tape
 storage pool definition. So specifying colocation for some  clients
 but not all would require multiple tape storage pools, which wouldn't
 really be a problem.  But it looks like that would also require
 multiple disk storage pools, because, as far as I can tell, the only
 way to get a client into a different tape pool is to have it in a
 different disk pool.

 We'd really like to avoid carving up our disk space into more smaller
 pools.  But, as far as I can tell, that's the only way to use
 colocation selectively.  Am I missing something, or is that the way
 it works?
 --


 Matt Simpson --  OS/390 Support
 219 McVey Hall  -- (859) 257-2900 x300
 University Of Kentucky, Lexington, KY 40506
 mailto:msimpson;uky.edu
 mainframe --   An obsolete device still used by thousands of obsolete
 companies serving billions of obsolete customers and making huge obsolete
 profits for their obsolete shareholders.  And this year's run twice as
fast
 as last year's.




Re: Audit Library question.

2002-10-22 Thread Matt Simpson
At 10:41 AM -0500 10/18/02, Todd Lundstedt said:

How long should the command
Audit Library LibName CheckLabel=BARCODE
take to process.  I have less than 200 tapes in the library.  I have done
this before and it took less than 5 mins.  This one has been running for
over 40 minutes now.


Our situation is just the opposite.  We have a 3584 library connected
to TSM 4.2.2.0 on Solaris.
Audit Library LibName CheckLabel=BARCODE
completes in a few seconds, with no movement of the library robotics.
It doesn't appear to be checking any barcodes at all. I haven't tried
it with CHecklabel=Yes yet.
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Transferring data from an old server to a new one.

2002-10-22 Thread HEMPSTEAD, Tim
All,

We are (finally) doing some upgrades to our TSM system to bring it up to
date (and onto a supported version of the software).  We are planning to go
from 3.7.4.0 to 5.1.5.x.

Now we will be implementing the new TSM server on a new physical server and
using new tape hardware (IBM 3584). Then we can migrate our client systems
across from the old server to the new one.

Our problem is what to do with the data currently in the old system.  We
need to be able to restore from any point in the last 5 weeks.

Now, I'm sure people have come across this before, what is the best way of
doing this?
Our initial thoughts were:

1). transfer the data across to the new systems in some way, (server to
server connection) ... but we aren't really sure how to do this, (I've done
something like it but that was on a course 12 months ago and in a much
smaller scale).

2). repoint the client, if a restore is needed, to the old server ... but
then we get into issues with software-level compatibilities between
different client and server releases, (e.g. a 5.1.x client restoring off of
a 3.7.4 server) and complications due to using TDP's for Oracle and Domino.

Has anyone else been in this situation and what method did they use to get
around it?

Regards

Tim

--
Tim Hempstead, [EMAIL PROTECTED]
Unix Technical Specialist
SchlumbergerSema


_
This email is confidential and intended solely for the use of the
individual to whom it is addressed. Any views or opinions presented are
solely those of the author and do not necessarily represent those of
SchlumbergerSema.
If you are not the intended recipient, be advised that you have received
this email in error and that any use, dissemination, forwarding, printing,
or copying of this email is strictly prohibited.

If you have received this email in error please notify the
SchlumbergerSema Helpdesk by telephone on +44 (0) 121 627 5600.
_



Re: Monitoring oracle Backups

2002-10-22 Thread Cowperthwaite, Eric
Joe,

Our DBA's have their backup scripts send an email with success or failure
messages. The sys admins turn on sendmail on the Solaris servers and have it
forward to our mail servers. We then build distribution lists in sendmail
for the various messaging needs. Here's a sample of the email notification
piece of the RMAN scripts.

###
#  Check for errors returned from RMAN
###
if [ $? -gt 0 ];
then
   opendb
   CURTIME=`date '+%m-%d-%Y %H:%M'`
   echo "" >> $LogFile
   echo "Oracle full backup failed for instance \"${TARGET_SID}\"" >> $LogFile
   echo "due to bad return code from RMAN at \"${CURTIME}\"" >> $LogFile
   mailx -s "${TARGET_SID} full backup failed at: $CURTIME" \
rman_backups@sasmcd40 \
 < /dev/null
   exit 2
else
   CURTIME=`date '+%m-%d-%Y %H:%M'`
   echo "" >> $LogFile
   echo "Full backup of instance \"${TARGET_SID}\"" >> $LogFile
   echo "completed successfully at \"${CURTIME}\"" >> $LogFile
   mailx -s "${TARGET_SID} full backup succeeded at: $CURTIME" \
rman_backups@sasmcd40 \
 < /dev/null
fi

Eric Cowperthwaite
EDS

 -Original Message-
 From: Wholey, Joseph (TGA\MLOL) [mailto:JWholey;EXCHANGE.ML.COM]
 Sent: Tuesday, October 22, 2002 6:26 AM
 To: [EMAIL PROTECTED]
 Subject: Monitoring oracle Backups


 Recently installed TDP for Oracle.  DBA's would like to be
 paged real time if a database backup fails.  Aside from the
 paging part, does anyone know where I can pull the status of
 an Oracle database
 backup as soon as it completes.  I don't want to extract from
 the client logs (when would I start looking?)  And the
 Activity log does not supply all that much useful
 information.  Any help would be
 greatly appreciated.

 thx.  -joe-




Oracle 8.1.7 backup failure

2002-10-22 Thread Neil Rasmussen
Tim,

TDP Oracle 2.1.0.9 is a pretty old version of TDP Oracle - there was no
really robust logging in that version. A few questions: Is this your only
backup that is failing? How about TSM API logging? Some of TDP Oracle log
events get logged to the API log file.

In the meantime, try adding this to the PARMS string of your RMAN channel allocation: 'ENV=(DSMO_DEBUG=49)';

This will cause TDP Oracle to trace to the sbtio.log file located in the
Oracle target db's 'bdump' directory. You can try to analyze this trace
file or email it to me and I will take a look at it. In fact we can take
this offline from the ListServ and then post our findings at the end.
Thanks.
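For reference, the variable rides on the RMAN channel allocation; a sketch
(the channel name and the rest of the script are illustrative):

```
run {
  # hypothetical channel; only the ENV parameter matters here
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(DSMO_DEBUG=49)';
  backup database;
  release channel t1;
}
```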

-Original Message-
From: HEMPSTEAD, Tim [mailto:Tim.HEMPSTEAD;READING.SEMA.SLB.COM]
Sent: Friday, October 18, 2002 5:06 AM
To: [EMAIL PROTECTED]
Subject: Oracle 8.1.7 backup failure


People,

Anybody have any ideas on how we can fix the following.  We have Oracle
8.1.7.0 backing up to TSM 3.7.4 using TDP for Oracle 2.1.0.9 under AIX
4.3.3
ML8.

Backup fails with the error message as below after approximately 2hrs and
errpt indicates that Oracle core dumps at this time as well.

Thanks

Tim


The error obtained is as follows:
RMAN-00571: ===
RMAN-00569: === ERROR MESSAGE STACK FOLLOWS ===
RMAN-00571: ===
RMAN-03015: error occurred in stored script backup_db_level_0
RMAN-03006: non-retryable error occurred during execution of command:
backup
RMAN-07004: unhandled exception during command execution on channel tape1
RMAN-10035: exception raised in RPC: ORA-00447: fatal error in background
process
RMAN-10031: ORA-19583 occurred during call to
DBMS_BACKUP_RESTORE.BACKUPPIECECREATE

--
Tim Hempstead, [EMAIL PROTECTED]
Unix Technical Specialist
SchlumbergerSema


Regards,

Neil Rasmussen
Software Development
TDP for Oracle
[EMAIL PROTECTED]



TSM 4.2.3 on OS/390 2.10

2002-10-22 Thread Brian L. Nick
Good morning all,

 We are currently running TSM 4.2.1.9 on OS/390 2.10 and are having some
issues. It has been suggested to us that we upgrade the TSM server to 4.2.3
and I wanted to know if anyone is running that level? I have been told that
a database upgrade is not required to move to this release, which is a good
thing. If anyone has any experience with this release I'd appreciate
comments, suggestions, problems or any information that you can provide.

 I understand that the only way to get to this release currently is via the
fixtest but I am told a PTF will be available soon.

 Thanks,
Brian


Brian L. Nick
Systems Technician - Storage Solutions
The Phoenix Companies Inc.
100 Bright Meadow Blvd
Enfield CT. 06082-1900

E-MAIL:  [EMAIL PROTECTED]
PHONE:   (860)403-2281



Re: Transferring data from an old server to a new one.

2002-10-22 Thread Davidson, Becky
Tim
From what kind of server to what kind of server?  What kind of disk are you
on and how is it attached?  Can you attach your old tape hardware to the new
one?

We moved servers too, but we were moving AIX to AIX and could just disconnect
the disk TSM was on and then reattach it to the new server.  Another time we
did the move, we did an export and import of the data.  Are you trying to do
a new rebuild?  Can you start the new server and then just shut down the old
one after the 5 weeks?  A 5.1 client shouldn't have a problem restoring from a
3.7 server; it is only a problem when a 5.1 client backs data up and a
lower-level client tries to restore it.

Good luck
Becky

-Original Message-
From: HEMPSTEAD, Tim [mailto:Tim.HEMPSTEAD;READING.SEMA.SLB.COM]
Sent: Tuesday, October 22, 2002 9:35 AM
To: [EMAIL PROTECTED]
Subject: Transferring data from an old server to a new one.


All,

We are (finally) doing some upgrades to our TSM system to bring it up to
date (and onto a support version of the software).  We are planning to go
from 3.7.4.0 to 5.1.5.x.

Now we will be implementing the new TSM server on a new physical server and
using new tape hardware (IBM 3584). Then we can migrate our client systems
across from the old server to the new one.

Our problem is what to do with the data currently in the old system.  We
need to be able to restore from any point in the last 5 weeks.

Now, I'm sure people have come across this before, what is the best way of
doing this?
Our initial thoughts were:

1). transfer the data across to the new systems in some way, (server to
server connection) ... but we aren't really sure how to do this, (I've done
something like it but that was on a course 12 months ago and in a much
smaller scale).

2). repoint the client, if a restore is needed, to the old server ... but
then we get into issues with software-level compatibilities between
different client and server releases, (e.g. a 5.1.x client restoring off of
a 3.7.4 server) and complications due to using TDP's for Oracle and Domino.

Has anyone else been in this situation and what method did they use to get
around it?

Regards

Tim

--
Tim Hempstead, [EMAIL PROTECTED]
Unix Technical Specialist
SchlumbergerSema


_
This email is confidential and intended solely for the use of the
individual to whom it is addressed. Any views or opinions presented are
solely those of the author and do not necessarily represent those of
SchlumbergerSema.
If you are not the intended recipient, be advised that you have received
this email in error and that any use, dissemination, forwarding, printing,
or copying of this email is strictly prohibited.

If you have received this email in error please notify the
SchlumbergerSema Helpdesk by telephone on +44 (0) 121 627 5600.
_



Migrations

2002-10-22 Thread Gill, Geoffrey L.
I have a disk pool that is set Migration Continue=no, High Migration=80, Low
Migration=60. This disk pool is 390GB in size and every morning when I come
in migration is running, and would run till it is empty, I've seen it. I've
verified the settings during the migration and I see no reason that this
should be happening. The disk fills with SAP R/3 data each evening and some
other regular stuff.

Has anyone else come across this? I haven't seen the other disk pools I have
do this.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



system object problem

2002-10-22 Thread Michelle DeVault
I've heard a great deal of discussion about the
system object problem.  What server versions is it a
problem in?  I haven't noticed it so far, but am
planning an upgrade and don't want to get myself into
unnecessary trouble.

__
Do you Yahoo!?
Y! Web Hosting - Let the expert host your web site
http://webhosting.yahoo.com/



Re: Transferring data from an old server to a new one.

2002-10-22 Thread Bill Fitzgerald
One possibility is to connect your old tape drives or the equivalent to the new 
system, define them as a separate library if that is what you are using, and then 
restore the database to the new server.

Once they are connected, you can use the MOVE DATA or MOVE NODEDATA command to transfer
the data from the old tapes to the new tapes.

Then, when all the data is moved, remove the old drives from the system.

Drawbacks:
- the time necessary to do the moves may exceed the 5-week time frame
- the cost of connecting the old drives to the new server.
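A single-volume move might look like this from the admin client (the volume
and pool names are illustrative):

```
/* repeat for each old-library volume that still holds data */
move data OLD001 stgpool=newtapepool
```

On a 5.1 server, MOVE NODEDATA can do the equivalent per node instead of per
volume.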


 [EMAIL PROTECTED] 10/22/02 10:35AM 
All,

We are (finally) doing some upgrades to our TSM system to bring it up to
date (and onto a support version of the software).  We are planning to go
from 3.7.4.0 to 5.1.5.x.

Now we will be implementing the new TSM server on a new physical server and
using new tape hardware (IBM 3584). Then we can migrate our client systems
across from the old server to the new one.

Our problem is what to do with the data currently in the old system.  We
need to be able to restore from any point in the last 5 weeks.

Now, I'm sure people have come across this before, what is the best way of
doing this?
Our initial thoughts were:

1). transfer the data across to the new systems in some way, (server to
server connection) ... but we aren't really sure how to do this, (I've done
something like it but that was on a course 12 months ago and in a much
smaller scale).

2). repoint the client, if a restore is needed, to the old server ... but
then we get into issues with software-level compatibilities between
different client and server releases, (e.g. a 5.1.x client restoring off of
a 3.7.4 server) and complications due to using TDP's for Oracle and Domino.

Has anyone else been in this situation and what method did they use to get
around it?

Regards

Tim

--
Tim Hempstead, [EMAIL PROTECTED] 
Unix Technical Specialist
SchlumbergerSema


_
This email is confidential and intended solely for the use of the
individual to whom it is addressed. Any views or opinions presented are
solely those of the author and do not necessarily represent those of
SchlumbergerSema.
If you are not the intended recipient, be advised that you have received
this email in error and that any use, dissemination, forwarding, printing,
or copying of this email is strictly prohibited.

If you have received this email in error please notify the
SchlumbergerSema Helpdesk by telephone on +44 (0) 121 627 5600.
_



Oracle and TDP issues

2002-10-22 Thread Zoltan Forray
We just purchased the TSM TDP for Oracle on NT/2K.

We installed it, only to realize it won't work since there is no sign of
RMAN.EXE on this machine?

So, the owner of this box/package contacts the vendor. Their response
was "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
app."  She said the person she spoke to did not seem to know what RMAN
was.

So, how does one backup an Oracle app/database, using the TDP without RMAN
?

I am not an Oracle person but even I know that RMAN is the utility to do
database backup/restore/maintenance !

Suggestions, anyone ?



Re: Reclamation Setting Survey

2002-10-22 Thread J M
Just out of curiosity- what are you using for reclamation settings for
primary tape pool data?

Currently we have over 100 WinNT platforms (all in one policy domain) backing
up data (1+ TB incremental) to large primary disk pool, which migrates to
primary tape, tape copy, etc... The data is a mix of filesystem incrementals
and TDP backup objects (database/exchange). Currently we have our
reclamation threshold set to 60, but we're curious to know what other
similar environments are successfully using?

_
Get faster connections -- switch to MSN Internet Access!
http://resourcecenter.msn.com/access/plans/default.asp



AW: Oracle and TDP issues

2002-10-22 Thread Rupp Thomas (Illwerke)
AFAIK: the tool to back up an Oracle 7 db is called EBU (Enterprise Backup
Utility) and seems to be a bit different from RMAN.
But you can use TDP for Oracle to back up version 7 dbs as well (see the
Installation Guide, page 3).

HTH

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG




-----Original Message-----
From: Zoltan Forray [mailto:zforray;VCU.EDU]
Sent: Tuesday, 22 October 2002 16:59
To: [EMAIL PROTECTED]
Subject: Oracle and TDP issues


We just purchased the TSM TDP for Oracle on NT/2K.

We installed it, only to realize it won't work since there is no sign of
RMAN.EXE on this machine?

So, the owner of this box/package contacts the vendor. Their response
was "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
app."  She said the person she spoke to did not seem to know what RMAN
was.

So, how does one backup an Oracle app/database, using the TDP without RMAN
?

I am not an Oracle person but even I know that RMAN is the utility to do
database backup/restore/maintenance !

Suggestions, anyone ?


--
This email has been checked for viruses.

Vorarlberger Illwerke AG
--



Réf. : Re: Reclamation Setting Survey

2002-10-22 Thread Guillaume Gilbert
On our servers using 9840 tapes, we set it to 40 since the tapes are very fast and we
can do a lot in one day. On LTO and DLT it's usually 50 since reclaiming tapes takes a
very long time.

Guillaume Gilbert
CGI Canada




J M [EMAIL PROTECTED]@VM.MARIST.EDU on 2002-10-22 11:09:12

Please reply to: ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To: [EMAIL PROTECTED]
cc:
Subject: Re: Reclamation Setting Survey

Just out of curiosity- what are you using for reclamation settings for
primary tape pool data?

Currently we have over 100 WinNT platforms (all in one policy domain)backing
up data (1+ TB incremental) to large primary disk pool, which migrates to
primary tape, tape copy, etc... The data is a mix of filesystem incrementals
and TDP backup objects (database/exchange). Currently we have our
reclamation threshold set to 60, but we're curious to know what other
similar environments are successfully using?

_
Get faster connections -- switch to MSN Internet Access!
http://resourcecenter.msn.com/access/plans/default.asp







Re: Reclamation Setting Survey

2002-10-22 Thread Prather, Wanda
I have multiple TSM environments, and the reclamation thresholds for various
onsite and off-site tape pools vary from 50 - 80.

You set the reclamation limits based on how many scratch tapes you need to
get back, the natural turnover rate due to version expiration, or the number
of available slots you have in the tape library, or all the above.

If you don't need more scratch tapes, why bother to set reclamation lower,
etc.

So the answer, as usual:  it depends on your environment
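Concretely, the threshold is just a storage pool attribute adjusted per pool
(the pool names here are illustrative):

```
update stgpool onsitetape reclaim=60
update stgpool offsitecopy reclaim=70
```

Setting RECLAIM back to 100 effectively turns reclamation off for a pool.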


Original Message-
From: J M [mailto:jm_seattle;HOTMAIL.COM]
Sent: Tuesday, October 22, 2002 11:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Reclamation Setting Survey


Just out of curiosity- what are you using for reclamation settings for
primary tape pool data?

Currently we have over 100 WinNT platforms (all in one policy domain)backing
up data (1+ TB incremental) to large primary disk pool, which migrates to
primary tape, tape copy, etc... The data is a mix of filesystem incrementals
and TDP backup objects (database/exchange). Currently we have our
reclamation threshold set to 60, but we're curious to know what other
similar environments are successfully using?

_
Get faster connections -- switch to MSN Internet Access!
http://resourcecenter.msn.com/access/plans/default.asp



Re: Oracle and TDP issues

2002-10-22 Thread J D Gable
Our DBAs use BMC software called SQL Backtrack.

Josh

Zoltan Forray wrote:

 We just purchased the TSM TDP for Oracle on NT/2K.

 We installed it, only to realize it won't work since there is no sign of
 RMAN.EXE on this machine?

 So, the owner of this box/package contacts the vendor. Their response
 was "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
 app."  She said the person she spoke to did not seem to know what RMAN
 was.

 So, how does one backup an Oracle app/database, using the TDP without RMAN
 ?

 I am not an Oracle person but even I know that RMAN is the utility to do
 database backup/restore/maintenance !

 Suggestions, anyone ?




Re: Oracle and TDP issues

2002-10-22 Thread Rafael Mendez
Hi Zoltan,
I am not an Oracle expert either, but I think I can give you a clue.
As you may know, in Oracle 8i RMAN appeared, replacing EBU (Enterprise Backup
Utility). So, in your case, you have to install EBU (if my memory still works, I think
the last EBU version was 2.2), which is included with Oracle 7.x.
Actually, TDP for Oracle does not support Oracle 7.x versions and, as far as I know,
EBU is also out of support on Oracle's side.

Anyway, I suggest you register on the Oracle site and search for EBU support.

Regards,
Rafael
-- Original message --
to: Zoltan Forray [EMAIL PROTECTED]
cc:
date: 10/22/2002 11:06:52 AM
subject: Oracle and TDP issues



 We just purchased the TSM TDP for Oracle on NT/2K.

 We installed it, only to realize it won't work since there is no sign of
 RMAN.EXE on this machine?

 So, the owner of this box/package contacts the vendor. Their response
 was "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
 app."  She said the person she spoke to did not seem to know what RMAN
 was.

 So, how does one backup an Oracle app/database, using the TDP without RMAN
 ?

 I am not an Oracle person but even I know that RMAN is the utility to do
 database backup/restore/maintenance !

 Suggestions, anyone ?




___
Get your free StarMedia Email account. Register today!
http://www.starmedia.com/email



Re: Migrations

2002-10-22 Thread David Longo
I suspect what is happening is that, as you say, the SAP data fills the
disk pool.  When migration starts, even after the pool gets below the
lowmig point it will basically keep going until it has migrated all the
data for the node it was working on when it hit the lowmig point.
So if that node held a large amount of data, migration would essentially
empty the disk pool.


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/02 10:00AM 
I have a disk pool that is set Migration Continue=no, High Migration=80,
Low Migration=60. This disk pool is 390GB in size and every morning when I
come in migration is running, and would run till it is empty, I've seen it.
I've verified the settings during the migration and I see no reason that
this should be happening. The disk fills with SAP R/3 data each evening and
some other regular stuff.

Has anyone else come across this? I haven't seen the other disk pools I
have do this.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


MMS health-first.org made the following
 annotations on 10/22/2002 11:28:48 AM
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Re: Migrations

2002-10-22 Thread Prather, Wanda
Yep, this tends to happen when you have only a small number of large clients
in a disk pool.

When the disk pool fills above the HIGHMIG value and triggers a migration,
TSM picks the LARGEST CLIENT (or maybe the largest filespace, I forget) in
the pool and migrates that WHOLE chunk out.  Then TSM checks to see if it's
below LOWMIG yet.  If not, it picks the next largest client and migrates
that one.

So if all the remaining data in the disk pool belongs to one client, it can
go down to 0.

Working as designed.

-Original Message-
From: Gill, Geoffrey L. [mailto:GEOFFREY.L.GILL;SAIC.COM]
Sent: Tuesday, October 22, 2002 10:01 AM
To: [EMAIL PROTECTED]
Subject: Migrations


I have a disk pool that is set Migration Continue=no, High Migration=80, Low
Migration=60. This disk pool is 390GB in size and every morning when I come
in migration is running, and would run till it is empty, I've seen it. I've
verified the settings during the migration and I see no reason that this
should be happening. The disk fills with SAP R/3 data each evening and some
other regular stuff.

Has anyone else come across this? I haven't seen the other disk pools I have
do this.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Oracle and TDP issues

2002-10-22 Thread Laura Buckley
Zoltan,

You can't back up Oracle using the TDP without RMAN.  There are many people
who write their own scripts to back up Oracle.  You can back up one
tablespace at a time, by putting the tablespace in backup mode and then
running a SELECTIVE backup on the tablespace's datafile.  You could write a
script to do this, one tablespace at a time.  The biggest problem we have
encountered doing this is that when a new tablespace is added to the
database, you must be sure to modify your backup script accordingly.
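A pseudocode sketch of that per-tablespace loop (the credentials, the $ts
tablespace variable, and the $f datafile variable are all hypothetical; a
real script would derive the tablespace-to-datafile mapping from
DBA_DATA_FILES):

```
# for each tablespace $ts and each of its datafiles $f:
sqlplus -s "system/***" <<EOF
alter tablespace $ts begin backup;
EOF

dsmc selective "$f"        # ordinary b/a client selective backup

sqlplus -s "system/***" <<EOF
alter tablespace $ts end backup;
EOF
```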

Laura Buckley
STORServer, Inc.
[EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU] On Behalf Of
Zoltan Forray
Sent: Tuesday, October 22, 2002 8:59 AM
To: [EMAIL PROTECTED]
Subject: Oracle and TDP issues


We just purchased the TSM TDP for Oracle on NT/2K.

We installed it, only to realize it won't work since there is no sign of
RMAN.EXE on this machine?

So, the owner of this box/package contacts the vendor. Their response
was "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
app."  She said the person she spoke to did not seem to know what RMAN
was.

So, how does one backup an Oracle app/database, using the TDP without
RMAN ?

I am not an Oracle person but even I know that RMAN is the utility to do
database backup/restore/maintenance !

Suggestions, anyone ?



Re: Audit Library question.

2002-10-22 Thread David Longo
With checklabel=barcode, TSM reads the library's internal memory to see
what the library's inventory says is where, so generally that won't take
long.  A drive still needs to be available for the case where the library
had a problem reading a barcode label; that tape can then be mounted in a
tape drive to verify it - even when using checkl=b.



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/02 08:27AM 
At 10:41 AM -0500 10/18/02, Todd Lundstedt said:
How long should the command
Audit Library LibName CheckLabel=BARCODE
take to process.  I have less than 200 tapes in the library.  I have done
this before and it took less than 5 mins.  This one has been running for
over 40 minutes now.

Our situation is just the opposite.  We have a 3584 library connected
to TSM 4.2.2.0 on Solaris.
Audit Library LibName CheckLabel=BARCODE
completes in a few seconds, with no movement of the library robotics.
It doesn't appear to be checking any barcodes at all. I haven't tried
it with CHecklabel=Yes yet.
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge
obsolete
profits for their obsolete shareholders.  And this year's run twice as
fast
as last year's.





Re: System Object expiration with 4.2.30

2002-10-22 Thread Seay, Paul
Have you done cleanup backupgroups?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jolliff, Dale [mailto:xjolliff;TI.COM]
Sent: Monday, October 21, 2002 4:46 PM
To: [EMAIL PROTECTED]
Subject: System Object expiration with 4.2.30


Has anyone seen positive confirmation of successful SYSTEM OBJECT expiration
after upgrading to 4.2.3.0?

I applied 4.2.3.0 to a netstor box this morning, and two expire inventory
runs have completed successfully, but I have not seen any substantial change
in our SYSTEM OBJECT problem.



Re: Encryption

2002-10-22 Thread Jim Smith
Dwight,

You are correct that encrypted data would not compress well, since TSM
uses a compression algorithm that works well with redundant data (and
encryption kills redundancy) - but the TSM b/a client does in fact
compress first and then encrypt the data to avoid this.   So go ahead and
use both encryption and compression if you would like.

I don't have any performance data from development on the b/a client using
encryption.  What we did test in development was the b/a client vs. plain
DES 56-bit encryption algorithms to make sure the client was not adding
unnecessary overhead.  DES is a lot of number crunching, and it is by
nature very CPU intensive and will noticeably slow the b/a client down.
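The effect of the ordering is easy to see with stand-in tools (a toy sketch -
gzip and openssl here merely stand in for the client's compression and DES
encryption; the file names are arbitrary):

```shell
# Highly compressible sample data: 100 KB of zeros
head -c 100000 /dev/zero > plain.dat

# Compress first, then encrypt: the result stays tiny
gzip -c plain.dat | openssl enc -aes-128-cbc -pass pass:demo > c_then_e.dat

# Encrypt first, then compress: ciphertext looks random, so gzip cannot shrink it
openssl enc -aes-128-cbc -pass pass:demo -in plain.dat | gzip -c > e_then_c.dat

ls -l c_then_e.dat e_then_c.dat
```

On a typical system c_then_e.dat comes out at a few hundred bytes, while
e_then_c.dat stays at roughly the full 100 KB.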

Thanks,
Jim Smith
TSM Client Development


Been a while and I'd have to double check but...
You might not want to use compression if you use encryption...
I believe it encrypts first then tries to compress and encrypted data
doesn't compress (much).
Something to double check.

Dwight



-Original Message-
From: J D Gable [mailto:josh.gable;eds.com]
Sent: Monday, October 21, 2002 4:13 PM
To: [EMAIL PROTECTED]
Subject: Encryption


Does anybody have any evidence/research as to what kind of additional
overhead encryption puts on a client when processing a backup (CPU,
Memory, etc.)?  I am running some tests myself, but the numbers are
staggering (we're seeing up to a 300% increase in the backup time in
some cases).  I understand that it is largely based on the horsepower of
the node, but I was wondering if anyone else is seeing the same, or if
anyone knew a place I could get some additional info on encryption.

Thanks in advance for your time,
Josh



Re: System Object expiration with 4.2.30

2002-10-22 Thread Jolliff, Dale
We haven't - level two asked me not to.
He asked me to open another PMR about it this morning, and said he has seen
some other customers with similar conditions.



-Original Message-
From: Seay, Paul [mailto:seay_pd;NAPTHEON.COM]
Sent: Monday, October 21, 2002 9:35 PM
To: [EMAIL PROTECTED]
Subject: Re: System Object expiration with 4.2.30


Have you done cleanup backupgroups?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jolliff, Dale [mailto:xjolliff;TI.COM]
Sent: Monday, October 21, 2002 4:46 PM
To: [EMAIL PROTECTED]
Subject: System Object expiration with 4.2.30


Has anyone seen positive confirmation of successful SYSTEM OBJECT expiration
after upgrading to 4.2.3.0?

I applied 4.2.3.0 to a netstor box this morning, and two expire inventory
runs have completed successfully, but I have not seen any substantial change
in our SYSTEM OBJECT problem.



Re: can not backup nds with netware client 5.1.5

2002-10-22 Thread Jim Kirkman
I have an ETR open with IBM on this, but just had to bump it from sev 3 to
sev 2 because no one has responded since I opened it on Thurs. a.m.

You can run a manual incremental and it works: tsm i nds

Tim Brown wrote:

 have seen this problem reported recently, just throwing in my complaint
 netware 5.1 server, service pack4 with tsm client 5.1.5

 unable to specify nds in domain statement

 DOMAIN ALL-LOCAL NDS
 also tried
 DOMAIN ALL-LOCAL DIR

 ANS1036S Invalid option 'DOMAIN' found in options file

 Tim Brown
 Systems Specialist
 Central Hudson Gas  Electric
 284 South Avenue
 Poughkeepsie, NY 12601

 Phone: 845-486-5643
 Fax: 845-486-5921
 Pager: 845-455-6985

 [EMAIL PROTECTED]

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884



Re: dismiss dsmadmc header output

2002-10-22 Thread Mark Stapleton
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Michael Kindermann
 Is there a way, something like a switch or an option, to influence the
 dsmadmc-output, to give only the interesting result and no overhead ?

 Trying to script some tasks in a shell script. And I am a little annoyed,
 because it is
 not very difficult to get some output from the dsmserver. But it is hard to
 reuse the information in the script.

Lo, and the great god UNIX made grep, and it was good. And grep begat awk,
and sed. And sed lived 376 years, and begat Perl, and Python. And it was
good and there was joy in all the land.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: Oracle and TDP issues

2002-10-22 Thread Zoltan Forray/AC/VCU
We do too, on an AIX system.





J D Gable [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/22/2002 11:20 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Oracle and TDP issues


Our DBAs use BMC software called SQL Backtrack.

Josh

Zoltan Forray wrote:

 We just purchased the TSM TDP for Oracle on NT/2K.

 We installed it, only to realize it won't work since there is no sign of
 RMAN.EXE on this machine?

 So, the owner of this box/package contacts the vendor. Their response
 was: Oracle is version 7.3.4, not 8i or 9i, which will not run with their
 app. She said the person she spoke to did not seem to know what RMAN
 was.

 So, how does one backup an Oracle app/database, using the TDP without
RMAN
 ?

 I am not an Oracle person but even I know that RMAN is the utility to do
 database backup/restore/maintenance !

 Suggestions, anyone ?





josh.gable.vcf
Description: Binary data


Re: Migrations

2002-10-22 Thread Mark D. Rodriguez
Gill, Geoffrey L. wrote:


I have a disk pool that is set Migration Continue=no, High Migration=80, Low
Migration=60. This disk pool is 390GB in size and every morning when I come
in migration is running, and would run till it is empty, I've seen it. I've
verified the settings during the migration and I see no reason that this
should be happening. The disk fills with SAP R/3 data each evening and some
other regular stuff.

Has anyone else come across this? I haven't seen the other disk pools I have
do this.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Geoff,

Based on the information you are giving me, I might offer this
explanation.  First of all, MIGContinue only has meaning if you have
MIGDelay set to something other than 0, but I don't think that is a
factor here.  The way migration works is as follows:

  1. Once stg_pool utilization exceeds the HIghmig value, a migration
     process (or processes) is started.
  2. ITSM will pick the largest single filespace backup (or archive
     package) in the stg_pool and move it AND all of the other
     filespaces for that node.
  3. Once all data for that node has been moved, ITSM checks whether
     stg_pool utilization is below the LOwmig value; if not, it goes
     back to step 2.
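That selection loop can be sketched as a toy shell model (pool size, thresholds, and node sizes below are made-up illustration values, not anything TSM reports):

```shell
#!/bin/sh
# Toy model of the migration loop.  All numbers in GB and invented.
pool=390          # total pool size
low=60            # LOwmig threshold, percent
used=350          # current occupancy, ~89% (above HIghmig=80)

# Nodes ordered by largest single filespace, biggest first.
for node in "sapnode 200" "fileserver 90" "mailserver 60"; do
  set -- $node
  util=$(( used * 100 / pool ))
  [ "$util" -le "$low" ] && break        # step 3: low threshold reached
  echo "migrating $1 ($2 GB), pool at ${util}%"
  used=$(( used - $2 ))                  # step 2: move ALL data for node
done
echo "final utilization: $(( used * 100 / pool ))%"
```

With a 200 GB node in a 390 GB pool, one pass takes utilization from 89% straight down to 38%, far past the 60% low threshold - the overshoot described above.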

There is a bit of a difference in this process if you use MIGDelay and
MIGContinue; I have posted on that process several times before, so I
won't go through it all again.

I am guessing that since this stg_pool is for your SAP data, which is
probably quite large, what you are seeing is a single node whose data
makes up a very large portion of the stg_pool space.  Therefore, once
ITSM starts to migrate its data it will continue way past the low
threshold.  As you can see above, ITSM does not check LOwmig until
after it moves all data for a given node.  I believe this is what you
are seeing.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Re: Audit Library question.

2002-10-22 Thread Matt Simpson
At 11:29 AM -0400 10/22/02, David Longo said:

With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.


So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there's a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).


So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.


But how can it have a problem reading the barcode label if checkl=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: dismiss dsmadmc header output

2002-10-22 Thread Seay, Paul
My favorite way is to use a keyword at the beginning of each output line,
and then either use grep, read, or a string match in Perl to pick only the
lines that have the goodies.

The following select is an example:

Select 'keyout', node_name, filespace_name, capacity, pct_util from
filespaces

That generates all the usual header garbage, plus output lines like this:

keyout  N07139  \\n07139\c$  28.2
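With the keyword in column one, a wrapper script only has to keep the lines that start with it. A minimal sketch - the dsmadmc output below is faked with a here-document, and the node names, paths, and numbers are invented:

```shell
#!/bin/sh
# Stand-in for dsmadmc output: banner, blank lines, and two keyed rows.
cat <<'EOF' > out.txt
IBM Tivoli Storage Manager
Command Line Administrative Interface

Session established with server TSM1

keyout  N07139  \\n07139\c$   24.0  28.2
keyout  N07140  \\n07140\d$   36.0  71.5
EOF

# Keep only the keyed rows; pull out node_name and pct_util.
grep '^keyout' out.txt | awk '{print $2, $NF}'
```

which prints just the node and utilization pairs, with no banner to strip.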

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Mark Stapleton [mailto:stapleto;BERBEE.COM]
Sent: Tuesday, October 22, 2002 12:07 PM
To: [EMAIL PROTECTED]
Subject: Re: dismiss dsmadmc header output


From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Michael Kindermann
 Is there a way, something like a switch or an option, to influence the
 dsmadmc-output, to give only the interesting result and no overhead ?

 Trying to script some tasks in a shell script. And I am a little
 annoyed, because it is not very difficult to get some output from the
 dsmserver. But it is hard to reuse the information in the script.

Lo, and the great god UNIX made grep, and it was good. And grep begat awk,
and sed. And sed lived 376 years, and begat Perl, and Python. And it was
good and there was joy in all the land.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: dismiss dsmadmc header output

2002-10-22 Thread Fred Johanson
And while the great god UNIX was doing that, some programmer at ADSM coded

dsmadmc ... -tab -outfile=whatever

and

dsmadmc ... -comma -outfile=whatever

Both can also be used with > filename redirection instead of the -outfile.
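The -comma output still carries the banner and trailing message lines, but the data rows split cleanly on the delimiter. A sketch (the file contents are faked and the node names invented; a real call would be something like dsmadmc -id=admin -password=secret -comma "select node_name, filespace_name, pct_util from filespaces" > fs.csv):

```shell
#!/bin/sh
# Stand-in for a -comma output file: banner and message lines contain no
# commas, so only the real data rows have three fields.
cat <<'EOF' > fs.csv
IBM Tivoli Storage Manager

NODE1,\\node1\c$,10.5
NODE2,\\node2\d$,87.3

ANS8002I Highest return code was 0.
EOF

# Keep rows with exactly three comma-separated fields.
awk -F',' 'NF == 3 {print $1, $3}' fs.csv
```

Selecting on the field count sidesteps the header and trailer lines without needing a keyword column in the select.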


At 11:07 AM 10/22/2002 -0500, you wrote:

From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Michael Kindermann
 Is there a way, something like a switch or an option, to influence the
 dsmadmc-output, to give only the interesting result and no overhead ?

 Trying to script some tasks in a shell script. And I am a little annoyed,
 because it is
 not very difficult to get some output from the dsmserver. But it is hard to
 reuse the information in the script.

Lo, and the great god UNIX made grep, and it was good. And grep begat awk,
and sed. And sed lived 376 years, and begat Perl, and Python. And it was
good and there was joy in all the land.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: RAID5 in TSM

2002-10-22 Thread Matt Simpson
At 9:27 AM -0400 10/22/02, Lawrence Clark said:

Even though there may be a slight performance hit on writes, I've placed
the TSM DB on RAID-5 to ensure availability and no down time in case of
a disk loss.


With RAID 5, is there any point in software mirroring (dual copies of
database)?

Our DB is on RAID5 (Shark).  We also have 2 copies of it, except for
one extent that we added in a crunch when it filled up.  Are the dual
copies overkill on RAID 5?  I know that even RAID is not totally
infallible, and we could have a potential disaster that wipes out the
whole Shark.  But that's why we have backups.  The chances of that
are pretty slim, and I can't imagine any scenario where we could have
a RAID failure that wouldn't leave us so dead that we'd have to
restore anyway.
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: Audit Library question.

2002-10-22 Thread David Longo
I imagine checkl=barcode was introduced to shorten the audit; without
it you would have to mount every tape in the library - which would take
considerable time with some libraries!  What you are doing is checking
the barcode label in library memory as opposed to checking the magnetic
tape label header.

The ideal short way is to have the library do its inventory, which reads
the barcodes and is quick, then do the audit with checkl=barcode.  The
whole process shouldn't take more than a few minutes - there may be some
library units that take longer.  This complete process should take care
of anything that has gotten out of sync.  I have had a few cases where
there was still something out of sync and had to do a detailed
examination to correct it.

It can have a problem reading the barcode if the laser scanner couldn't
read the label.  That can happen sometimes - especially if you don't use
the original manufacturer's labels.  If you have an AIX server and use
tapeutil with the inventory action, it will show the slot status for
tapes like these as abnormal.  When the audit with checkl=barcode runs,
it finds no barcode label for that slot and mounts the tape in that slot
to read the magnetic label and update TSM's inventory.

A brief overview as I have seen it in action many times.



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/02 01:44PM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if check-=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge
obsolete
profits for their obsolete shareholders.  And this year's run twice as
fast
as last year's.


MMS health-first.org made the following
 annotations on 10/22/2002 02:04:16 PM
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Re: Audit Library question.

2002-10-22 Thread KEN HORACEK
Not true...
With checklabel=barcode, all of the barcodes are read.  This is then checked
against the internal memory of the library as to what the library's inventory
says is where.  The tape is mounted only if the barcode is mis-read.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if check-=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

-
This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.
-
GWIASIG 0.07



Re: RAID5 in TSM

2002-10-22 Thread Seay, Paul
Are you running compression?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Raghu S [mailto:raghu;COSMOS.DCMDS.CO.IN]
Sent: Tuesday, October 22, 2002 8:03 AM
To: [EMAIL PROTECTED]
Subject: RAID5 in TSM


Hi,

There was a lot of discussion on this topic before.But i am requesting TSM
gurus give their comments again.

The set up is like this.

TSM Server : Windows NT 4.0 SP6, TSM 5.1.0.0

392 MB memory, P III

Adaptec Ultra SCSI

Hard Disk : internal hardware RAID 5:

  array A : 8.678GB * 3 : 17.356GB data and 8.678GB parity
  array B : 35.003GB * 3 : 70.006GB data and 35.003GB parity

Both array A and array B are connected to the same channel.

OS and TSM 5.1 are installed on array A.

TSM database, recovery log and disk storage pool are on array B.

Database : 2GB + 2GB = 4GB, mirrored at TSM level on the same array

Recovery Log : 500MB + 500MB = 1GB, mirrored at TSM level on the same array

Disk Storage pool : 10GB + 10GB + 10GB + 10GB + 5GB = 45GB on array B


TSM client : 4.1.2.12 ( Tivoli says 4.1.2.12 is not supported with a 5.1
server, but I could do backup, archive and restore with this combination )

Number of Clients : 55, all are Windows

Incremental backup : 1GB/client/day

Backup window : 9AM to 6PM with 50% randomization ( all are in polling mode )

LAN : 100Mbps

By the end of the day only 10 clients have finished their backups; the rest
are all missed, "?" ( in progress ), or failed.

Throughout the entire backup window the CPU load is 100%, with dsmsvc.exe
holding 98%.

I tested with various options. I stopped the scheduler and fired off three
client backups manually at the same time; each client had 1GB of incremental
data. It took three hours to finish the backups, and while they ran I
observed a lot of idle timeouts of sessions.

The network is not the bottleneck; I checked that with FTP.

What's the bottleneck here? Is RAID 5 creating the problem ( DB, log and
storage pool are all on RAID 5 )? I have asked the customer to arrange a
test machine without any RAID; I will be getting it in two days. Before
going on to that testing I would like to know your comments.



Regards

Raghu S Nivas
Consultant - TSM
DCM Data Systems Ltd
New Delhi
India.



Re: Reclamation Setting Survey

2002-10-22 Thread Tab Trepagnier
TSM 4.1.5 on AIX 4.3.3

We set primary pools to reclaim at 50%.  That's 3570 and LTO media.  Our
DLT copypool reclaims at 60% during our weekly maintenance window.

Tab Trepagnier
TSM Administrator
Laitram Corporation






J M [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/22/2002 10:09 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Reclamation Setting Survey


Just out of curiosity- what are you using for reclamation settings for
primary tape pool data?



_
Get faster connections -- switch to MSN Internet Access!
http://resourcecenter.msn.com/access/plans/default.asp



Re: os/2 a supported client?

2002-10-22 Thread Dirk Billerbeck
Sorry, OS/2 (BTW: IBM's OWN operating system!!) is no longer supported! :-((
The latest client version is v3.7.2.27; you can find it here:

ftp://index.storsys.ibm.com/tivoli-storage-management/patches/client/v3r7/OS2/

This client works only up to a v4.2.x TSM server, NOT with v5.1 or higher!
At least this is what we found out when we first tried to connect a v3.7
OS/2 client to a v5.1 TSM server for Windows. This is very sad because we
still have dozens of customers with hundreds of OS/2 systems that can't
move to TSM v5.1...

Mit freundlichen Grüßen,
Met vriendelijke groeten,
With best regards,
Bien amicalement,

CU/2,
Dirk Billerbeck


Dirk Billerbeck
GE CompuNet Kiel
Enterprise Computing Solutions
Am Jaegersberg 20, 24161 Altenholz (Kiel), Germany
Phone: +49 (0) 431 / 3609 - 117, Fax: +49 (0) 431 / 3609 - 190,
Internet: dirk.billerbeck @ gecits-eu.com


This email is confidential. If you are not the intended recipient,
you must not disclose or use the information contained in it.
If you have received this mail in error, please tell us
immediately by return email and delete the document.





[EMAIL PROTECTED]@VM.MARIST.EDU on 21.10.2002 17:48:45

Please respond to [EMAIL PROTECTED]

Sent by: [EMAIL PROTECTED]


To:  mailbox.dekelnsm
cc:
Subject: os/2 a supported client?


 -- 



Is OS/2 a supported TSM client?  I can't find info on
the web site to say that, but have found several old
posts in the ADSM-L archives to suggest that it may
be.


M.

__
Do you Yahoo!?
Y! Web Hosting - Let the expert host your web site
http://webhosting.yahoo.com/




Re: Audit Library question.

2002-10-22 Thread Todd Lundstedt
The reason I started the audit was because TSM was not reporting the tape
in the library, yet the library knew the tape was inserted.  I could see
the tape in the library (with my own eyes).  Using the manual operations |
move tape functions from the LCD display on the library, the library was
able to move the tape out and back into the library.  But Query LIBVolume
did not show the tape in the library.
I thought Audit Library with checklabel=barcode should fix it, but after 2
hours, the process hadn't ended.  So I cancelled it.
What I ended up doing was manually removing the tape (via the move tape
functions from the LCD panel of the library) and then doing a Checkin
process for the tapes in the Bulk I/O slots.  After that, the
query libvolume command reported the tape in the library.
This tells me that there is/was no problem with the barcode, or the reader,
and possibly even the library memory (since the library knew it had the
tape all along).  Something funky going on with the Audit Library process,
for sure.



David Longo <David.Longo@HEALTH-FIRST.ORG>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
10/22/2002 01:02 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.




I imagine the checkl=barocde was introduced to shorten audit, without
it you would have to mount every tape in library - which would take
some considerable time with some libraries!  What you are doing is
checkinbg the barcode label in library memory as opposed to checking
the
magnetic tape label header.

The ideal short way is to have the library do it's inventory, which
reads
barcodes and is quick, then do audit with checkl=barcode.  Whole
process shouldn't take more than a few minutes - there may be some
library units that take longer.  This complete process should take
care
of anything that has gotten out of sync.  I have had a few cases where
there was still something out of sync and had to do detailed
examination
to correct.

It can have  a problem reading the barcode if the laser scanner
couldn't
read the label.  That can happen some times - especially if you don't
use
original manufacturers labels.  If you have AIX server and use
tapeutil
with inventory action, it will show the slot status for tapes like
these
in abnormal status.  When the audit with checkl=barcode runs it
finds
this and no barcode label for that slot and mounts the tape in that
slot
to read the magnetic label and update TSM's inventory.

A brief overview as I have seen it in action many times.



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/02 01:44PM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem 

Re: Audit Library question.

2002-10-22 Thread Todd Lundstedt
Not true on my library, Ken...
I have run several audits using checklabel=barcode before with success.
The arm has never moved in the library with an audit using checkl=b.



KEN HORACEK <KHORACEK@INCSYSTEM.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
10/22/2002 01:03 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.




Not true...
With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the library's
inventory says is where.  The tape is mounted, only if the barcode is
mis-read.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if check-=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

-
This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.
-
GWIASIG 0.07



TSM OS390 jump from 4.1.4 to 4.2.2.13 concerns

2002-10-22 Thread Zoltan Forray/AC/VCU
As you recall, I just jumped my OS390 server from  4.1.4 to 4.2.2.13.

Well, this weekend was the first EXPIRE INVENTORY run since the upgrade.

Now, I am finding hundreds of tapes needing recycling. In fact, I am going
to need to bump my reclamation threshold, or reclamation will run until
doomsday and monopolize my tape drives!

Was there a major bug fixed somewhere in this version-span-upgrade or
should I be concerned ?

I went back and checked my logs for the EXPIRE INVENTORY finished message
over the last two weeks, and the number of items expired hasn't changed
greatly (around 2-3M).

The database %UTILIZED hasn't changed, greatly, either.



Re: Audit Library question.

2002-10-22 Thread Cook, Dwight E
All depends on your type of library !

IBM 3494-L12

To get the ATL to actually scan the barcodes of the tapes, you must go to
the operator console and do a Command / Inventory / Inventory Update Full
(or something close to that; it changes across levels of the library
manager code).  Then, inside TSM, an audit library checklabel=barcode
only checks against the library manager's database.
Dwight


-Original Message-
From: KEN HORACEK [mailto:KHORACEK;INCSYSTEM.COM]
Sent: Tuesday, October 22, 2002 1:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.


Not true...
With checklabel=barcode, all of the barcodes are read.  This is then checked
with the internal memory of the library as to what the library's inventory
says is where.  The tape is mounted, only if the barcode is mis-read.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there'e a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if check-=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

-
This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.
-
GWIASIG 0.07



Re: 3590 Firmware

2002-10-22 Thread Allen Barth
A little off base?  Yes and no.  My CE will gladly do FMR microcode
updates (for a fee).  You can download them from
index.storsys.ibm.com/3590/code3590 (well, this is where I get them for
AIX).  Be aware that you may also need a compatible level of the Atape
driver too (see index.storsys.ibm.com/devdrvr/AIX/).  You will need to
use the tapeutil program to load the microcode to your drives one at a
time and then reset them.







Lloyd Dieter [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/20/02 08:31 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:3590 Firmware


All,

We recently replaced a set of 3590B drives with H's in a 3494.  During the
upgrade, I inquired of the CEs regarding what the latest firmware was, and
if they could verify that the upgraded drives had the latest code.

The reply I received was that the procedures for 3590 firmware updates had
changed, and that the customer is now responsible for obtaining the FMR
tape.

Anytime in the past I have gone through firmware updates on 3590 drives,
the CE always provided/created the FMR tape.

A quick check around the web site doesn't seem to indicate anything new
along these lines, nor does a visit to index.storsys.

When I asked one of the CEs for the procedure for getting the code, he
provided me with an e-mail address of someone within IBM.  I fired one off
to him, along with the information I was told I required (Co. name, drive
SN, etc.), but have received no reply.

Has anyone else heard this, or are these guys a little off base?

-Lloyd

--
-
Lloyd Dieter-   Senior Technology Consultant
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-



Space Reclamation Failing

2002-10-22 Thread prasanna S ghanekar
Hi All,
Running TSM V5.1.0 on a Windows 2000 server with SP2.  Started running a Space Reclamation
process on the tape volumes.  The process runs for a while, moves some data, and then
just stops.  Tried to cancel the process; it remains pending but doesn't stop.

What could possibly be wrong??

Any help is greatly appreciated.

Thanks,

Prasanna Ghanekar
EDS








Re: RAID5 in TSM

2002-10-22 Thread Suad Musovich
Realistically, if array B dies, you lose the database and recovery
log.  The mirroring isn't giving you protection from that.
I would separate the recovery log onto array A and lose the mirroring on
both DB and log (or maybe mirror the log between arrays).
Restoration of a broken DB should only mean a couple of hours' outage, in
your case.
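Suad's layout change might be sketched as a short server macro.  The volume paths below are hypothetical, and the DEFINE LOGVOLUME / DEFINE LOGCOPY syntax should be checked against the Administrator's Reference for your server level:

```
/* Hypothetical macro sketch: add a recovery-log volume on array A, */
/* then mirror it at the TSM level onto array B.                    */
define logvolume c:\tsmlog\log1.dsm formatsize=500
define logcopy   c:\tsmlog\log1.dsm e:\tsmlogmir\log1.dsm
```

The point of the design is that a single array failure then costs you at most one copy of the log, not the log itself.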


On Wed, 2002-10-23 at 01:03, Raghu S wrote:
 Hi,

 There was a lot of discussion on this topic before, but I am requesting that the TSM
 gurus give their comments again.

 The set up is like this.

 TSM Server : Windows NT 4.0 SP6, TSM 5.1.0.0

 392 MB memory, P III

   Adaptech Ultra SCSI

 Hard Disk :  Internal   Hardware RAID 5:

  array A : 8.678GB * 3 : 17.356GB data and 8.678 GB
 parity

  array B : 35.003 GB * 3 : 70.006GB data and 35.003
 GB parity.


 Both array A and array B are connected to the same channel.

 OS and TSM 5.1 are installed on array A

 TSM data base, recovery log and Disk storage pool are installed in array B.

 Database : 2GB+2GB = 4 GB  and mirrored at TSM level on the same array

 Recovery Log : 500MB + 500 MB = 1 GB and mirrored at TSM level on the same
 array

 Disk Storage pool : 10GB+10GB+10GB+10GB+5GB=45GB on array B


 TSM client: 4.1.2.12 (Tivoli says 4.1.2.12 is not supported with a 5.1
 server, but I could run backup, archive and restore with this
 combination)

 Number of Clients : 55, all are windows

 Incremental backup : 1GB/ client/day.

 backup window : 9AM to 6PM with 50% randomization (all are in polling mode)

 LAN : 100Mbps

 At the end of the day only 10 clients could finish the backup.  The rest are
 missed, ? (in progress), or failed.

 Through the entire backup window the CPU load is 100% with dsmsvc.exe
 holding 98%

 I tested with various options.  I stopped the scheduler and fired off backups
 for 3 clients manually at the same time.  Each client has 1 GB of incremental data.
 It took three hours to finish the backup.  While backing up I observed there
 were a lot of idle-timeouts of sessions.

 There is no network bottleneck; I checked this with FTP.

 What's the bottleneck here?  Is RAID 5 creating problems (DB, log and
 storage pool are all on the RAID 5)?  I asked the customer to arrange a
 test machine without any RAID.  I will be getting that in two days.  Before
 going on to the testing I would like to know your comments on this.



 Regards

 Raghu S Nivas
 Consultant - TSM
 DCM Data Systems Ltd
 New Delhi
 India.



Re: Audit Library question.

2002-10-22 Thread Murray, Jim
Not so here with an ATL 7100 checklabel=barcode and it Always moves up and
down the rows, can only assume it is reading as it goes.

Jim Murray
Senior Systems Engineer
Liberty Bank
860.638.2919
[EMAIL PROTECTED]
I hear and I forget.
I see and I remember.
I do and I understand.
 -Confucius



-Original Message-
From: Todd Lundstedt [mailto:Todd_Lundstedt;VIA-CHRISTI.ORG]
Sent: Tuesday, October 22, 2002 14:28
To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.


Not true on my library, Ken...
I have run several audits using checklabel=barcode before with success.
The arm has never moved in the library with an audit using checkl=b.



From: KEN HORACEK <KHORACEK@INCSYSTEM.COM>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]>
Date: 10/22/2002 01:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Audit Library question.




Not true...
With checklabel=barcode, all of the barcodes are read.  This is then
checked against the internal memory of the library as to what the library's
inventory says is where.  The tape is mounted only if the barcode is
misread.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there's a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if checkl=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.




The information transmitted is intended only for the person
or entity to which it is addressed and may contain confidential
and/or privileged material. If you are not the intended
recipient of this message you are hereby notified that any use,
review, retransmission, dissemination, distribution, reproduction
or any action taken in reliance upon this message is prohibited.
If you received this in error, please contact the sender and
delete the material from any computer.  Any views expressed
in this message are those of the individual sender and may
not necessarily reflect the views of the company.




Re: Audit Library question.

2002-10-22 Thread Matt Simpson
At 11:03 AM -0700 10/22/02, KEN HORACEK said:

With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the
library's inventory says is where.


That's not what I'm seeing, and that's not what I think I'm reading
from others here.
When I execute the audit checklabel=barcode for our 3584 library, it
completes almost instantaneously with no movement of the library
robotics.  I don't see how it could possibly be reading the barcode
labels.  I suspect it's doing what I think others have suggested:
remembering the barcode labels that it has read previously.

At 2:02 PM -0400 10/22/02, David Longo wrote:

I imagine the checkl=barcode was introduced to shorten the audit; without
it you would have to mount every tape in the library - which would take
some considerable time with some libraries!


I understand that reading the barcode is a good alternative to
mounting the tape and reading the internal label.  But what I'm
saying is that it doesn't appear to be reading the barcodes when the
audit is executed.


 What you are doing is
checking the barcode label in library memory as opposed to checking
the
magnetic tape label header.


That's what I thought .. checking the barcode label in library
memory.  But in my interpretation, checklabel=barcode should mean
read the barcode now, not tell me what it thinks it is based on its
memory of the last time  it read it.



The ideal short way is to have the library do it's inventory, which
reads
barcodes and is quick, then do audit with checkl=barcode.


OK .. that makes sense.  The library inventory physically reads the
barcode labels and updates the internal memory if necessary, and then
the TSM audit checklabel=barcode causes the library's memory to be
synced with TSM.  In my opinion, the ideal short way  would be to
have the TSM audit checklabel=barcode command really tell the library
to read the barcodes, eliminating the need to do the library
inventory in a previous step.   When I say checklabel=barcode, I mean
checklabel=barcode, I don't mean check your internal memory. But I
don't know if that's a limitation in the library or TSM.

--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: os/2 a supported client? (other platforms?)

2002-10-22 Thread Suad Musovich
How about other older versions/platforms?

Has anyone tested older 3.1/7 clients with the 5.1 server? 


On Wed, 2002-10-23 at 08:22, Dirk Billerbeck wrote:
 Sorry, OS/2 (BTW: IBM's OWN operating system!!) is no longer supported!
 :-((  The latest client version is v3.7.2.27; you can find it here:

 ftp://index.storsys.ibm.com/tivoli-storage-management/patches/client/v3r7/OS2/

 This client works only up to a v4.2.x TSM server, NOT with v5.1 or higher!
 At least this is what we found out when we first tried to connect a v3.7
 OS/2 client to a v5.1 TSM server for Windows.  This is very sad because we
 still have dozens of customers with hundreds of OS/2 systems that can't
 move to TSM v5.1...
 
 Mit freundlichen Grüßen,
 Met vriendelijke groeten,
 With best regards,
 Bien amicalement,
 
 CU/2,
 Dirk Billerbeck
 
 
 Dirk Billerbeck
 GE CompuNet Kiel
 Enterprise Computing Solutions
 Am Jaegersberg 20, 24161 Altenholz (Kiel), Germany
 Phone: +49 (0) 431 / 3609 - 117, Fax: +49 (0) 431 / 3609 - 190,
 Internet: dirk.billerbeck @ gecits-eu.com
 
 
 This email is confidential. If you are not the intended recipient,
 you must not disclose or use the information contained in it.
 If you have received this mail in error, please tell us
 immediately by return email and delete the document.
 
 
 
 
 
 [EMAIL PROTECTED]@VM.MARIST.EDU on 21.10.2002 17:48:45
 
 Please respond to [EMAIL PROTECTED]
 
 Sent by: [EMAIL PROTECTED]
 
 
 To:  mailbox.dekelnsm
 cc:
 Subject: os/2 a supported client?
 
 
  -- 
 
 
 
 Is OS/2 a supported TSM client?  I can't find info on
 the web site to say that, but have found several old
 posts in the ADSM-L archives to suggest that it may
 be.
 
 
 M.
 
 



TSM NT 4.0 Client 5.1.5.0 restore problem

2002-10-22 Thread David Longo
I have an NT 4.0 SP6 client with a C drive as NTFS.  4GB disk 92% full.
I had a restore problem with this machine when I had 4.2.2.0 client,
which was APAR IC33683 that was fixed with new 5.1.5.0.  I rebuilt
the machine with NT and installed 5.1.5.0 client.  It bombed out with
restore being out of disk space.  Per the q ses on the 4.2.2.10
server
I had only restored about 2.7GB.

Output from dsmerror.log below.  (I was wondering if the fact that I
installed a 5.1.5.0 client and then tried a full restore of a unit that
includes a 4.2.2.0 client is part of the problem??)  The files involved are
just log files, about 20MB in size each, from an application.
---
10/17/2002 17:36:15 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log': disk
full condition
10/17/2002 17:41:41 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log': disk
full condition
10/17/2002 17:44:05 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0922-1_HL7MGR_PROD-1.log': disk
full condition
10/17/2002 17:46:09 ANS1005E TCP/IP read error on socket = 304, errno =
10054, reason : 'Unknown error'.
10/17/2002 17:46:09 ANS1301E Server detected system error
--

Thanks,


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


MMS health-first.org made the following
 annotations on 10/22/2002 02:58:28 PM
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Re: can not backup nds with netware client 5.1.5

2002-10-22 Thread Jim Kirkman
Jim,

Thanks. It looks like it's the all-local parm that is the problem. It
seems that as long as nds is first in the domain statement things work
OK. I have the following in an opt file and both nds and sys backed up:

domain nds: sys:

no quotes.
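For anyone hitting the same 5.1.5 NetWare bug, the working options-file form described in this thread can be sketched as a hypothetical dsm.opt fragment (the ALL-LOCAL parsing APAR was still open at the time, so verify against your fix level):

```
* dsm.opt sketch for the NetWare 5.1.5 client DOMAIN parsing bug:
* list NDS first with its trailing colon, and avoid ALL-LOCAL
DOMAIN NDS: SYS:
```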

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884


Jim Smith wrote:


 Jim,

 This looks like a bug that was introduced in TSM 5.1.5 b/a client.
 For now, it looks like you should be able to code DOMAIN NDS: (with
 the trailing colon) and it will work.  Please instruct level-2 to open
 an APAR on this to get this fixed.

 Thanks,
 Jim Smith
 TSM Client Development

 I have an ETR open with IBM on this, but just had to bump it from sev
 3 to
 sev 2 because no one has responded since I opened it on Thurs. a.m.

 You can run a manual backup and it works: tsm i nds

 Tim Brown wrote:

  have seen this problem reported recently, just throwing in my
 complaint
  netware 5.1 server, service pack4 with tsm client 5.1.5
 
  unable to specify nds in domain statement
 
  DOMAIN ALL-LOCAL NDS
  also tried
  DOMAIN ALL-LOCAL DIR
 
  ANS1036S Invalid option 'DOMAIN' found in options file
 
  Tim Brown
  Systems Specialist
  Central Hudson Gas  Electric
  284 South Avenue
  Poughkeepsie, NY 12601
 
  Phone: 845-486-5643
  Fax: 845-486-5921
  Pager: 845-455-6985
 
  [EMAIL PROTECTED]

 --
 Jim Kirkman
 AIS - Systems
 UNC-Chapel Hill
 966-5884




Re: Audit Library question.

2002-10-22 Thread Kauffman, Tom
This seems to be library dependent. We've had both an STK 9710 and an IBM
3584. Both of these libraries, as a function of library startup (or after
the door had been opened/closed) would scan all slots and build an in-memory
list of volumes and slots in the controller.

Any TSM library audit with checklabel=barcode just gets back the volser and
slot from the library; if the library has an internal memory, it should come
from there. Without an internal data store of some kind, it will need to
scan the tapes.


Tom Kauffman
NIBCO, Inc

 -Original Message-
 From: Murray, Jim [mailto:JMurray;LIBERTY-BANK.COM]
 Sent: Tuesday, October 22, 2002 1:46 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Audit Library question.


 Not so here with an ATL 7100 checklabel=barcode and it Always
 moves up and
 down the rows, can only assume it is reading as it goes.

 Jim Murray
 Senior Systems Engineer
 Liberty Bank
 860.638.2919
 [EMAIL PROTECTED]
 I hear and I forget.
 I see and I remember.
 I do and I understand.
  -Confucius



 -Original Message-
 From: Todd Lundstedt [mailto:Todd_Lundstedt;VIA-CHRISTI.ORG]
 Sent: Tuesday, October 22, 2002 14:28
 To: [EMAIL PROTECTED]
 Subject: Re: Audit Library question.


 Not true on my library, Ken...
 I have run several audits using checklabel=barcode before
 with success.
 The arm has never moved in the library with an audit using checkl=b.



 From: KEN HORACEK <KHORACEK@INCSYSTEM.COM>
 Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]>
 Date: 10/22/2002 01:03 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Audit Library question.




 Not true...
 With checklabel=barcode, all of the barcodes are read.  This is then
 checked with the internal memory of the library as to what
 the library's
 inventory says is where.  The tape is mounted, only if the barcode is
 mis-read.

 Ken
 [EMAIL PROTECTED]


  [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
 At 11:29 AM -0400 10/22/02, David Longo said:
 With checklabel=barcode, what happens is that TSM reads the internal
 memory of the library as to what the library's inventory says is
 where.

 So checklabel=barcode doesn't really mean read the barcodes?  It just
 means check the library's internal memory?  I guess that's still
 useful in some circumstances, if there's a possibility that TSM and
 the library have gotten out of sync.
 But it would be nice if things mean what they say.  Suppose I really
 want it to read the barcodes?  Suppose I think the library's internal
 memory has gotten confused somehow, and I  want to do a physical
 audit of barcode locations to compare with the internal memory?  Is
 this possible? Or is it a function of the library (which I guess
 might  make more sense).

 So generally that won't take long.  And a drive needs to be available
 for
 the case where library had a problem reading a barcode label, that
 tape
 can be mounted in a tape drive to verify - even if using checkl=b.

 But how can it have a problem reading the barcode label if checkl=b
 doesn't even try to read the labels?



 --


 Matt Simpson --  OS/390 Support
 219 McVey Hall  -- (859) 257-2900 x300
 University Of Kentucky, Lexington, KY 40506
 mailto:msimpson;uky.edu
 mainframe --   An obsolete device still used by thousands of obsolete
 companies serving billions of obsolete customers and making
 huge obsolete
 profits for their obsolete shareholders.  And this year's run
 twice as fast
 as last year's.



 

Re: Audit Library question.

2002-10-22 Thread KEN HORACEK
We too have a 3584.  When an AUDIT LIBRARY checklabel=barcode (run daily) occurs, the
arm moves over the entire library of tapes, reading the barcodes.  I have witnessed
this.  If there is something that was done at installation time of our LTO to effect
this, I couldn't say.  Maybe someone out there, intimate with the hardware, can let us
all know.


Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 11:53:54 AM 
At 11:03 AM -0700 10/22/02, KEN HORACEK said:
With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the
library's inventory says is where.

That's not what I'm seeing, and that's not what I think I'm reading
from others here.
When I execute the audit checklabel=barcode for our 3584 library, it
completes almost instantaneously with no movement of the library
robotics.  I don't see how it could possibly be reading the barcode
labels.  I suspect it's doing what I think others have suggested:
remembering the barcode labels that it has read previously.

At 2:02 PM -0400 10/22/02, David Longo wrote:
I imagine the checkl=barcode was introduced to shorten audit, without
it you would have to mount every tape in library - which would take
some considerable time with some libraries!

I understand that reading the barcode is a good alternative to
mounting the tape and reading the internal label.  But what I'm
saying is that it doesn't appear to be reading the barcodes when the
audit is executed.

  What you are doing is
checking the barcode label in library memory as opposed to checking
the
magnetic tape label header.

That's what I thought .. checking the barcode label in library
memory.  But in my interpretation, checklabel=barcode should mean
read the barcode now, not tell me what it thinks it is based on its
memory of the last time  it read it.


The ideal short way is to have the library do it's inventory, which
reads
barcodes and is quick, then do audit with checkl=barcode.

OK .. that makes sense.  The library inventory physically reads the
barcode labels and updates the internal memory if necessary, and then
the TSM audit checklabel=barcode causes the library's memory to be
synced with TSM.  In my opinion, the ideal short way  would be to
have the TSM audit checklabel=barcode command really tell the library
to read the barcodes, eliminating the need to do the library
inventory in a previous step.   When I say checklabel=barcode, I mean
checklabel=barcode, I don't mean check your internal memory. But I
don't know if that's a limitation in the library or TSM.

--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.




Re: TSM NT 4.0 Client 5.1.5.0 restore problem

2002-10-22 Thread Rushforth, Tim
Was the data backed up using compression?  If so, the q ses shows compressed
data.

-Original Message-
From: David Longo [mailto:David.Longo;HEALTH-FIRST.ORG]
Sent: October 22, 2002 1:57 PM
To: [EMAIL PROTECTED]
Subject: TSM NT 4.0 Client 5.1.5.0 restore problem

I have an NT 4.0 SP6 client with a C drive as NTFS.  4GB disk 92% full.
I had a restore problem with this machine when I had 4.2.2.0 client,
which was APAR IC33683 that was fixed with new 5.1.5.0.  I rebuilt
the machine with NT and installed 5.1.5.0 client.  It bombed out with
restore being out of disk space.  Per the q ses on the 4.2.2.10
server
I had only restored about 2.7GB.

Output from dsmerror.log below.  (I was wondering if the fact that I
install
a 5.1.5.0 client and then try a full restore of unit that includes a
4.2.2.0
client is part of the problem??)Just log files that are about 20MB in
size
from an application.
---
10/17/2002 17:36:15 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log': disk
full condition
10/17/2002 17:41:41 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log': disk
full condition
10/17/2002 17:44:05 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0922-1_HL7MGR_PROD-1.log': disk
full condition
10/17/2002 17:46:09 ANS1005E TCP/IP read error on socket = 304, errno =
10054, reason : 'Unknown error'.
10/17/2002 17:46:09 ANS1301E Server detected system error
--

Thanks,


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]





Re: Audit Library question.

2002-10-22 Thread David Longo
Following the discussion here (and in past months), part of the
problem is the way libraries respond.  I believe (Tivoli could maybe
chime in and give the fine points on this) that when audit library
checkl=barcode is issued, a certain SCSI command is sent to the
library.
With the varied libraries out there supported, from simple to complex,
each library interprets the command slightly differently in what it is
requested to do.  So, as they say on TV, Your results may not be the
same - or is that what your stock broker says?

On the things-getting-out-of-sync question: I just looked at help
audit library on a TSM 4.2.2.10 server.  Notice the wording
carefully:

TSM deletes missing volumes and updates moved volume locations. TSM
does not automatically add new volumes; you must check in new volumes
with the CHECKIN LIBVOLUME command.

I think the implication is that it does nothing else.  That is, if you
manually put a tape in the library without the checkin command and then
run inventory/audit, TSM will not pick it up.

Also, for instance, in cases I have personally seen: if you do a
checkout and the bulk I/O is full, the prompt to operators is to remove
tape xxx from slot yyy.  If the operator isn't paying attention and
replies to this, then that tape is removed from TSM's inventory but is
STILL in the library.  An audit and inventory will not put it back into
TSM's inventory.  There are a few other slight variations on this that
I have seen, but you get the idea.  I believe these circumstances would
be the same for all automated libraries, but am not completely sure.

How do you find these tapes then?  Send me $5.95 plus shipping
and handling and I'll send you instructions on how!  But seriously
folks, that's what they pay us the BIG bucks for.  In a sentence
or two: run a q libvol from TSM and, in very close time sequence,
run an inventory of your library with, say, tapeutil.  (Tapeutil
will show ALL tapes, no matter how they got into the library.)  Then do
a comparison.  For some libraries you may have to open the door and do
a slot-by-slot comparison.

As an alternative to q libvol which puts tapes in volser order use this
select
which puts them in element order:

select volume_name,home_element from libvolumes order by home_element
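The comparison step David describes can be sketched with standard tools.  The two volser lists below are hard-coded stand-ins so the logic is visible; in practice they would come from dsmadmc and tapeutil output (device file, admin id, and grep pattern are assumptions to adapt to your setup):

```shell
# In practice the lists would be produced by something like:
#   dsmadmc -id=admin -password=xxx -dataonly=yes \
#       "select volume_name from libvolumes" > tsm_vols.txt
#   tapeutil -f /dev/smc0 inventory | grep 'Volume Tag' > lib_vols.txt
# Hard-coded stand-in data for illustration:
printf 'A00001\nA00002\nA00003\n' | sort > tsm_vols.txt
printf 'A00001\nA00003\nA00004\n' | sort > lib_vols.txt

echo "In TSM's inventory but not in the library:"
comm -23 tsm_vols.txt lib_vols.txt     # prints A00002

echo "Physically in the library but unknown to TSM:"
comm -13 tsm_vols.txt lib_vols.txt     # prints A00004
```

The second list is exactly the checked-out-but-still-inside case described above; those tapes can then be checked back in.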


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/02 02:53PM 
At 11:03 AM -0700 10/22/02, KEN HORACEK said:
With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the
library's inventory says is where.

That's not what I'm seeing, and that's not what I think I'm reading
from others here.
When I execute the audit checklabel=barcode for our 3584 library, it
completes almost instantaneously with no movement of the library
robotics.  I don't see how it could possibly be reading the barcode
labels.  I suspect it's doing what I think others have suggested:
remembering the barcode labels that it has read previously.

At 2:02 PM -0400 10/22/02, David Longo wrote:
I imagine the checkl=barcode was introduced to shorten audit, without
it you would have to mount every tape in library - which would take
some considerable time with some libraries!

I understand that reading the barcode is a good alternative to
mounting the tape and reading the internal label.  But what I'm
saying is that it doesn't appear to be reading the barcodes when the
audit is executed.

  What you are doing is
checking the barcode label in library memory as opposed to checking
the
magnetic tape label header.

That's what I thought .. checking the barcode label in library
memory.  But in my interpretation, checklabel=barcode should mean
read the barcode now, not tell me what it thinks it is based on its
memory of the last time  it read it.


The ideal short way is to have the library do it's inventory, which
reads
barcodes and is quick, then do audit with checkl=barcode.

OK .. that makes sense.  The library inventory physically reads the
barcode labels and updates the internal memory if necessary, and then
the TSM audit checklabel=barcode causes the library's memory to be
synced with TSM.  In my opinion, the ideal short way  would be to
have the TSM audit checklabel=barcode command really tell the library
to read the barcodes, eliminating the need to do the library
inventory in a previous step.   When I say checklabel=barcode, I mean
checklabel=barcode, I don't mean check your internal memory. But I
don't know if that's a limitation in the library or TSM.

--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

TSM backing up raw volumes

2002-10-22 Thread Geoff Raymer
Does anyone know if TSM Version 5.1.1.6 supports backing up raw veritas
volumes?

For instance, if I put in an include statement that said include
/dev/vx/rdsk/datadg/b01d00s09.  I want it to back up the actual disks and
not just the links.

Geoff

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Jim Kirkman
Sent: Tuesday, October 22, 2002 1:43 PM
To: [EMAIL PROTECTED]
Subject: Re: can not backup nds with netware client 5.1.5


Jim,

Thanks. It looks like it's the all-local parm that is the problem. It
seems that as long as nds is first in the domain statement things work
OK. I have the following in an opt file and both nds and sys backed up:

domain nds: sys:

no quotes.

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884


Jim Smith wrote:


 Jim,

 This looks like a bug that was introduced in TSM 5.1.5 b/a client.
 For now, it looks like you should be able to code DOMAIN NDS: with
 the : and it will work.  Pls. instruct level-2 to take an APAR on
 this to get this fixed.

 Thanks,
 Jim Smith
 TSM Client Development

 I have an ETR open with IBM on this, but just had to bump it from sev
 3 to
 sev 2 because no one has responded since I opened it on Thurs. a.m.

 You can run a manual and it works, tsm i nds

 Tim Brown wrote:

  have seen this problem reported recently, just throwing in my
 complaint
  netware 5.1 server, service pack4 with tsm client 5.1.5
 
  unable to specify nds in domain statement
 
  DOMAIN ALL-LOCAL NDS
  also tried
  DOMAIN ALL-LOCAL DIR
 
  ANS1036S Invalid option 'DOMAIN' found in options file
 
  Tim Brown
  Systems Specialist
  Central Hudson Gas  Electric
  284 South Avenue
  Poughkeepsie, NY 12601
 
  Phone: 845-486-5643
  Fax: 845-486-5921
  Pager: 845-455-6985
 
  [EMAIL PROTECTED]

 --
 Jim Kirkman
 AIS - Systems
 UNC-Chapel Hill
 966-5884




Re: dismiss dsmadmc header output

2002-10-22 Thread Mr. Lindsay Morris
See http://www.servergraph.com/techtip.shtml
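If you just need a shell-side workaround, here is a sketch of one way to do it. The ANS8000I/ANS8002I markers are what a 4.x dsmadmc emits around the command output, but verify them at your client level, and the canned demo input below is made up for illustration:

```shell
# Filter that keeps only the lines between dsmadmc's ANS8000I command
# echo and its ANS8002I return-code line, dropping the banner and the
# trailing status.  Blank lines inside the body are dropped too.
strip_dsmadmc_header() {
    awk '/^ANS8002I/ { inbody = 0 }
         inbody && NF { print }
         /^ANS8000I/ { inbody = 1 }'
}

# Demo with canned dsmadmc output (in real use you would pipe
# "dsmadmc -id=... -pa=... <command>" into the function):
strip_dsmadmc_header <<'EOF'
Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 1, Level 2.0
(C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

Session established with server ADSM: AIX-RS/6000

ANS8000I Server command: 'select filespace_name from filespaces'

/usr
/home

ANS8002I Highest return code was 0.
EOF
# prints "/usr" then "/home"
```

Newer client levels may also offer output-formatting options of their own; the message-based filter above is just the lowest-common-denominator approach.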

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Michael Kindermann
 Sent: Tuesday, October 22, 2002 6:33 AM
 To: [EMAIL PROTECTED]
 Subject: dismiss dsmadmc header output


 Hello,
 I found this question once in the list, but didn't find an answer.
 Is there a way, something like a switch or an option, to influence the
 dsmadmc output to give only the interesting result and no overhead?

 Trying to script some tasks in a shell script, and I am a little annoyed
 because it is not very difficult to get some output from the dsm server,
 but it is difficult to reuse the information in the script.
 For example:
 I want to remove a node, so I first have to delete the filespaces. I also
 have to delete the association. I am afraid to use wildcards like
 'del filespace node_name *' in a script, so I need the filespace names.
 I run dsmadmc -id=... -pa=...  q filespace node_name * or select
 filespace_name from filespaces.
 All I need is the name, but I get a lot of server information:

 Tivoli Storage Manager
 Command Line Administrative Interface - Version 4, Release 1, Level 2.0
 (C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

 Session established with server ADSM: AIX-RS/6000
   Server Version 4, Release 2, Level 2.7
   Server date/time: 10/22/2002 11:34:44  Last access: 10/22/2002 11:26:01

 ANS8000I Server command: 'q node TSTW2K'

 Node Name   Platform   Policy Domain   Days Since    Days Since     Locked?
                        Name            Last Access   Password Set
 ---------   --------   -------------   -----------   ------------   -------
 TSTW2K      WinNT      STANDARD        277           278            No

 ANS8002I Highest return code was 0.

 Greetings

 Michael Kindermann
 Wuerzburg / Germany



 --
 +++ GMX - Mail, Messaging & more  http://www.gmx.net +++
 NEW: With GMX on the Internet. Surf around the clock for 1 ct/min!




Re: Audit Library question.

2002-10-22 Thread KEN HORACEK
Todd,
Apparently, not all LTO(s) are created equally.  We have a 3584.



Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 11:28:28 AM 
Not true on my library, Ken...
I have run several audits using checklabel=barcode before with success.
The arm has never moved in the library with an audit using checkl=b.



From: KEN HORACEK KHORACEK@INCSYSTEM.COM
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 10/22/2002 01:03 PM
Please respond to: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Fax to:
Subject: Re: Audit Library question.




Not true...
With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the library's
inventory says is where.  The tape is mounted, only if the barcode is
mis-read.

Ken
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/22/2002 10:44:50 AM 
At 11:29 AM -0400 10/22/02, David Longo said:
With checklabel=barcode, what happens is that TSM reads the internal
memory of the library as to what the library's inventory says is
where.

So checklabel=barcode doesn't really mean read the barcodes?  It just
means check the library's internal memory?  I guess that's still
useful in some circumstances, if there's a possibility that TSM and
the library have gotten out of sync.
But it would be nice if things mean what they say.  Suppose I really
want it to read the barcodes?  Suppose I think the library's internal
memory has gotten confused somehow, and I  want to do a physical
audit of barcode locations to compare with the internal memory?  Is
this possible? Or is it a function of the library (which I guess
might  make more sense).

So generally that won't take long.  And a drive needs to be available
for
the case where library had a problem reading a barcode label, that
tape
can be mounted in a tape drive to verify - even if using checkl=b.

But how can it have a problem reading the barcode label if checkl=b
doesn't even try to read the labels?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

-
This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity to
which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.
-
GWIASIG 0.07



Re: TSM NT 4.0 Client 5.1.5.0 restore problem

2002-10-22 Thread David Longo
No compression used on clients.

David Longo

 [EMAIL PROTECTED] 10/22/02 03:49PM 
Was the data backed up using compression?  If so, the q ses shows
compressed
data.

-Original Message-
From: David Longo [mailto:David.Longo;HEALTH-FIRST.ORG]
Sent: October 22, 2002 1:57 PM
To: [EMAIL PROTECTED]
Subject: TSM NT 4.0 Client 5.1.5.0 restore problem

I have an NT 4.0 SP6 client with a C drive as NTFS.  4GB disk 92%
full.
I had a restore problem with this machine when I had 4.2.2.0 client,
which was APAR IC33683 that was fixed with new 5.1.5.0.  I rebuilt
the machine with NT and installed 5.1.5.0 client.  It bombed out with
restore being out of disk space.  Per the q ses on the 4.2.2.10
server
I had only restored about 2.7GB.

Output from dsmerror.log below.  (I was wondering if the fact that I
installed a 5.1.5.0 client and then tried a full restore of a unit that
includes a 4.2.2.0 client is part of the problem??)  The files in question
are just application log files, about 20MB in size.
---
10/17/2002 17:36:15 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log':
disk
full condition
10/17/2002 17:41:41 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0921-2_HL7MGR_PROD-2.log':
disk
full condition
10/17/2002 17:44:05 ANS4009E Error processing
'\\hfscmhl7mpd\c$\SCMLogs\Backload_logs\0922-1_HL7MGR_PROD-1.log':
disk
full condition
10/17/2002 17:46:09 ANS1005E TCP/IP read error on socket = 304, errno
=
10054, reason : 'Unknown error'.
10/17/2002 17:46:09 ANS1301E Server detected system error
--

Thanks,


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


MMS health-first.org made the following
 annotations on 10/22/2002 02:58:28 PM

--
This message is for the named person's use only.  It may contain
confidential, proprietary, or legally privileged information.  No
confidentiality or privilege is waived or lost by any mistransmission.
If
you receive this message in error, please immediately delete it and
all
copies of it from your system, destroy any hard copies of it, and
notify the
sender.  You must not, directly or indirectly, use, disclose,
distribute,
print, or copy any part of this message if you are not the intended
recipient.  Health First reserves the right to monitor all e-mail
communications through its networks.  Any views or opinions expressed
in
this message are solely those of the individual sender, except (1)
where the
message states such views or opinions are on behalf of a particular
entity;
and (2) the sender is authorized by the entity to give such views or
opinions.


==





Offsite Vaulting with Locked Canisters

2002-10-22 Thread Joshua Bassi
All,

I am looking for feedback on using locked canisters for offsite
vaulting.  I have setup DRM for several accounts who used open
containers and I have setup poor man's tape rotation without DRM with
open canisters, but I have never implemented a locked canister solution.

The problem as I see it is that not all the data on the tapes will
expire at the same time.  So what will happen is that some of the tapes
in the canister are available for reclamation and then return to the
data center, but other tapes will not be available at the same time,
this will cause the tapes in the canister to be out of sync.

I have thought of 2 ways to potentially deal with this problem:

1) Every week we can create a brand new copy storage pool and backup the
primary pools to the offsite pool.  After 3 weeks all the data in the
pool can be deleted and the tapes brought back onsite.  This is not
utilizing TSM's incremental backup storage pool feature, but would
guarantee that a complete set of data was taken offsite each week.

2) Use either import/export or generate backupset to create fresh tapes
every week.

--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant - ADSM/TSM
eServer Systems Expert -pSeries HACMP
AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]



Re: Offsite Vaulting with Locked Canisters

2002-10-22 Thread Mark Stapleton
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Joshua Bassi
 I am looking for feedback on using locked canisters for offsite
 vaulting.  I have setup DRM for several accounts who used open
 containers and I have setup poor man's tape rotation without DRM with
 open canisters, but I have never implemented a locked canister solution.

 The problem as I see it is that not all the data on the tapes will
 expire at the same time.  So what will happen is that some of the tapes
 in the canister are available for reclamation and then return to the
 data center, but other tapes will not be available at the same time,
 this will cause the tapes in the canister to be out of sync.

 I have thought of 2 ways to potentially deal with this problem:

 1) Every week we can create a brand new copy storage pool and backup the
 primary pools to the offsite pool.  After 3 weeks all the data in the
 pool can be deleted and the tapes brought back onsite.  This is not
 utilizing TSM's incremental backup storage pool feature, but would
 guarantee that a complete set of data was taken offsite each week.

 2) Use either import/export or generate backupset to create fresh tapes
 every week.

You're trying to fit a round peg into a square hole. Why do you want to
implement locked canister mode? Security?

A backupset *can* be read without a db backup, as long as you're dealing
with like OS platforms. They're much less secure than an offsite pool.
Exports are similarly insecure.

Data from TSM's offsite tape pools cannot be read without a current database
backup and other required TSM server metadata.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: Oracle and TDP issues

2002-10-22 Thread Davidson, Becky
We used to for our SAP databases, but our databases grow too fast to support
the constant size-change license purchases for SQL BackTrack.  We switched
to TDP, and it has helped because we lost the finger-pointing between BMC and
Tivoli that we had whenever we hit a BackTrack problem.
Becky

-Original Message-
From: Zoltan Forray/AC/VCU [mailto:zforray;VCU.EDU]
Sent: Tuesday, October 22, 2002 11:30 AM
To: [EMAIL PROTECTED]
Subject: Re: Oracle and TDP issues


We do too, on an AIX system.





J D Gable [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/22/2002 11:20 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Oracle and TDP issues


Our DBAs use BMC software called SQL Backtrack.

Josh

Zoltan Forray wrote:

 We just purchased the TSM TDP for Oracle on NT/2K.

 We installed it, only to realize it won't work since there is no sign of
 RMAN.EXE on this machine?

So, the owner of this box/package contacts the vendor. Their response
was: Oracle is version 7.3.4, not 8i or 9i, which will not run with their
app.  She said the person she spoke to did not seem to know what RMAN
was.

 So, how does one backup an Oracle app/database, using the TDP without
RMAN
 ?

 I am not an Oracle person but even I know that RMAN is the utility to do
 database backup/restore/maintenance !

 Suggestions, anyone ?



Re: Offsite Vaulting with Locked Canisters

2002-10-22 Thread Seay, Paul
Mark,
Think for a second: if reclamation supported a days option versus the
current percent-utilized option, you could set that number of days about
7 days less than when the closed boxes are due to return and achieve the
goal.

Well, I have written the code to do just that via the MOVE DATA vv
RECONSTRUCT=YES based on the change date in the drmedia table.  We have been
doing this for about 9 months now, very successfully.

I still use DRM for its management functionality, but I would not require it
to achieve the goal.

Yes, I have quite a few scripts and some elaborate processing to do this,
but it works very successfully.
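As a rough illustration of the idea (this is not Paul's actual code; the "volume, change-date" export, column layout, and cutoff handling are all assumptions), one can generate the MOVE DATA commands from a drmedia-style listing and then feed them back to dsmadmc:

```shell
# gen_move_data: read "<volume> <iso-change-date>" lines (e.g. exported
# from the drmedia table with a select) and emit a MOVE DATA command for
# every volume whose change date is on or before the cutoff.  ISO dates
# compare correctly as plain strings, so awk needs no date arithmetic.
gen_move_data() {
    cutoff=$1
    awk -v cutoff="$cutoff" \
        '$2 <= cutoff { printf "move data %s reconstruct=yes\n", $1 }'
}

# Demo with made-up volumes and dates:
gen_move_data 2002-10-01 <<'EOF'
A00001 2002-09-15
A00002 2002-10-20
A00003 2002-09-30
EOF
# prints:
#   move data A00001 reconstruct=yes
#   move data A00003 reconstruct=yes
```

In real use the generated commands would be piped into an administrative session; running them one at a time also gives you a natural throttle on drive usage.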

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Mark Stapleton [mailto:stapleto;BERBEE.COM]
Sent: Tuesday, October 22, 2002 4:55 PM
To: [EMAIL PROTECTED]
Subject: Re: Offsite Vaulting with Locked Canisters


From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Joshua Bassi
 I am looking for feedback on using locked canisters for offsite
 vaulting.  I have setup DRM for several accounts who used open
 containers and I have setup poor man's tape rotation without DRM with
 open canisters, but I have never implemented a locked canister
 solution.

 The problem as I see it is that not all the data on the tapes will
 expire at the same time.  So what will happen is that some of the
 tapes in the canister are available for reclamation and then return to
 the data center, but other tapes will not be available at the same
 time, this will cause the tapes in the canister to be out of sync.

 I have thought of 2 ways to potentially deal with this problem:

 1) Every week we can create a brand new copy storage pool and backup
 the primary pools to the offsite pool.  After 3 weeks all the data in
 the pool can be deleted and the tapes brought back onsite.  This is
 not utilizing TSM's incremental backup storage pool feature, but would
 guarantee that a complete set of data was taken offsite each week.

 2) Use either import/export or generate backupset to create fresh
 tapes every week.

You're trying to fit a round peg into a square hole. Why do you want to
implement locked canister mode? Security?

A backupset *can* be read without a db backup, as long as you're dealing
with like OS platforms. They're much less secure than an offsite pool.
Exports are similarly insecure.

Data from TSM's offsite tape pools cannot be read without a current database
backup and other required TSM server metadata.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: Offsite Vaulting with Locked Canisters

2002-10-22 Thread Alex Paschal
Hey, Josh.

You could write a Perl script (or pick a language) that manages these
canisters.  It can track which volumes are in each canister, check all their
%util and %recl and generate a canister %util/%recl.  Then you can have your
script issue delete vols or move datas when a canister is sufficiently
empty.  Of course, if you do this, you won't be able to use the stgpool's
pct_reclaim to do reclamation, but that's OK because your script mimics the
reclamation.  It would be like an offsite pool with superlarge tapes, so
_hopefully_ you'll get volume usage similar to what you're seeing now.  It
should also be pretty simple to write.
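A minimal sketch of that aggregation step, assuming you maintain your own canister-to-volume mapping file (TSM itself has no canister concept) and join in per-volume utilization from something like "select volume_name, pct_utilized from volumes":

```shell
# canisters_to_recall: read "canister volume pct_utilized" lines and
# print each canister whose average volume utilization has fallen below
# the threshold -- candidates for a delete volume / move data pass.
canisters_to_recall() {
    thresh=$1
    awk -v t="$thresh" '
        { sum[$1] += $3; n[$1]++ }
        END { for (c in sum) if (sum[c] / n[c] < t) print c }' | sort
}

# Demo with a made-up two-canister inventory:
canisters_to_recall 30 <<'EOF'
CAN01 A00001 10
CAN01 A00002 20
CAN02 A00003 80
CAN02 A00004 90
EOF
# prints "CAN01"
```

Whether average or maximum utilization is the right trigger depends on how full you let individual tapes get before a canister ships; the script structure is the same either way.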

Personally, I don't like either the brand new copypool or the
import/export/backupset options.  They seem like they'll add too much drive
utilization.

Alex Paschal
Storage Administrator
Freightliner, LLC
(503) 745-6850 phone/vmail


-Original Message-
From: Joshua Bassi [mailto:jbassi;IHWY.COM]
Sent: Tuesday, October 22, 2002 1:21 PM
To: [EMAIL PROTECTED]
Subject: Offsite Vaulting with Locked Canisters


All,

I am looking for feedback on using locked canisters for offsite
vaulting.  I have setup DRM for several accounts who used open
containers and I have setup poor man's tape rotation without DRM with
open canisters, but I have never implemented a locked canister solution.

The problem as I see it is that not all the data on the tapes will
expire at the same time.  So what will happen is that some of the tapes
in the canister are available for reclamation and then return to the
data center, but other tapes will not be available at the same time,
this will cause the tapes in the canister to be out of sync.

I have thought of 2 ways to potentially deal with this problem:

1) Every week we can create a brand new copy storage pool and backup the
primary pools to the offsite pool.  After 3 weeks all the data in the
pool can be deleted and the tapes brought back onsite.  This is not
utilizing TSM's incremental backup storage pool feature, but would
guarantee that a complete set of data was taken offsite each week.

2) Use either import/export or generate backupset to create fresh tapes
every week.

--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant - ADSM/TSM
eServer Systems Expert -pSeries HACMP
AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]



Re: RAID5 in TSM

2002-10-22 Thread Seay, Paul
We run on the ESS.  No mirroring!  You are paying for a solution with
such high availability numbers that mirroring makes no sense.  If you want
to improve your recovery, run more incremental backups so you can roll to
the most current, do a volhistory dump, and put that information on the
root disk (not on the ESS) or a different array in the ESS on OS/390.

There is also a mirroring bug below 4.2.2.8 that can cause horrible database
backup performance.

You are correct, it is overkill for the database.  The only thing I could
possibly justify is mirroring the log on different SSA loops in the ESS so
that if the log died I could recover from the last backup.

There have been a handful of ESS array failures.  For example, a microcode
glitch that occurred if a DDM failed and a parity rebuild was in process
because of a SSA adapter special condition.  That was fixed in G5+1 of the
ESS microcode about 8 months ago.  Only one customer ever saw that problem
to my knowledge.  There was also a bad vintage of drives that has been
cleaned up.  But otherwise, things have been solid and we should trust this
hardware.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Matt Simpson [mailto:msimpson;UKY.EDU]
Sent: Tuesday, October 22, 2002 1:52 PM
To: [EMAIL PROTECTED]
Subject: Re: RAID5 in TSM


At 9:27 AM -0400 10/22/02, Lawrence Clark said:
Even though there may be a slight performance hit on writes, I've
placed the TSM DB on RAID-5 to ensure availability and no down time in
case of a disk loss.

With RAID 5, is there any point in software mirroring (dual copies of
database)?

Our DB is on RAID5 (Shark).  We also have 2 copies of it, except for one
extent that we added in a crunch when it filled up.  Are the dual copies
overkill on RAID 5?  I know that even RAID is not totally infallible, and we
could have a potential disaster that wipes out the whole Shark.  But that's
why we have backups.  The chances of that are pretty slim, and I can't
imagine any scenario where we could have a RAID failure that wouldn't leave
us so dead that we'd have to restore anyway.
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506 mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



NT 4.0 100% CPU Usage

2002-10-22 Thread Gallerson,Charles,GLENDALE,Information Technology
All  (HELP)


When backups are triggered via a batch file on NT 4.0 Clients they use 100%
of CPU and the dsmc process continues to use 100% of CPU even after the
backup shows complete and successful in TSM.  The process then has to be
killed manually in order to release the CPU cycles.

AIX 5.1
TSM 5.1



Re: Audit Library question.

2002-10-22 Thread Tab Trepagnier
Matt,

What I see on my system depends on the library.  I currently have a 3583,
two 3575s (an L12 and an L18), and an HP 4/40 DLT.  In the past I've used
a smaller 3575 (an L06) and a 3570.  All have/had barcode readers.

The two largest 3575s perform barcode audits without moving anything. They
seem to simply read the library's internal inventory back to TSM.  If the
library's inventory is suspect, there is a front panel function to init
element status that causes the robot to scan all the bar codes and update
the library's inventory.

The 3570, 3575-L06, and HP 4/40 all send/sent the robot scanning the tapes
in their slots and drives.  That updated info was then reported to TSM.

I don't know what the 3583 does because it doesn't have any windows to
permit viewing the internals while in operation.

Tab Trepagnier
TSM Administrator
Laitram Corporation







Matt Simpson [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/22/2002 01:53 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Audit Library question.


At 11:03 AM -0700 10/22/02, KEN HORACEK said:
With checklabel=barcode, all of the barcodes are read.  This is then
checked with the internal memory of the library as to what the
library's inventory says is where.

That's not what I'm seeing, and that's not what I think I'm reading
from others here.
When I execute the audit checklabel=barcode for our 3584 library, it
completes almost instantaneously with no movement of the library
robotics.  I don't see how it could possibly be reading the barcode
labels.  I suspect it's doing what I think others have suggested:
remembering the barcode labels that it has read previously.

At 2:02 PM -0400 10/22/02, David Longo wrote:
I imagine the checkl=barcode was introduced to shorten audit, without
it you would have to mount every tape in library - which would take
some considerable time with some libraries!

I understand that reading the barcode is a good alternative to
mounting the tape and reading the internal label.  But what I'm
saying is that it doesn't appear to be reading the barcodes when the
audit is executed.

  What you are doing is
checking the barcode label in library memory as opposed to checking
the magnetic tape label header.

That's what I thought .. checking the barcode label in library
memory.  But in my interpretation, checklabel=barcode should mean
read the barcode now, not tell me what it thinks it is based on its
memory of the last time  it read it.


The ideal short way is to have the library do it's inventory, which
reads
barcodes and is quick, then do audit with checkl=barcode.

OK .. that makes sense.  The library inventory physically reads the
barcode labels and updates the internal memory if necessary, and then
the TSM audit checklabel=barcode causes the library's memory to be
synced with TSM.  In my opinion, the ideal short way would be to
have the TSM audit checklabel=barcode command really tell the library
to read the barcodes, eliminating the need to do the library
inventory in a previous step.  When I say checklabel=barcode, I mean
checklabel=barcode, I don't mean check your internal memory.  But I
don't know if that's a limitation in the library or TSM.

--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as
fast
as last year's.



Re: dismiss dsmadmc header output

2002-10-22 Thread Graham Trigge
Michael,

The only way I can think of to get only the information you require is
through SQL queries straight to the TSM database. This will give you
precise information, and only the information you require.

Regards,

--

Graham Trigge
Senior Administrator
Telstra Enterprise Services Pty Ltd

Phone: (02) 9882 5831
Mobile: 0409 654 434
Fax: (02) 9882 5987
Email: [EMAIL PROTECTED]




From: Michael Kindermann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 22/10/2002 20:32
Please respond to: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: dismiss dsmadmc header output






Hello,
I found this question once in the list, but didn't find an answer.
Is there a way, something like a switch or an option, to influence the
dsmadmc output to give only the interesting result and no overhead?

Trying to script some tasks in a shell script, and I am a little annoyed
because it is not very difficult to get some output from the dsm server,
but it is difficult to reuse the information in the script.
For example:
I want to remove a node, so I first have to delete the filespaces. I also
have to delete the association. I am afraid to use wildcards like
'del filespace node_name *' in a script, so I need the filespace names.
I run dsmadmc -id=... -pa=...  q filespace node_name * or select
filespace_name from filespaces.
All I need is the name, but I get a lot of server information:

Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 1, Level 2.0
(C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

Session established with server ADSM: AIX-RS/6000
  Server Version 4, Release 2, Level 2.7
  Server date/time: 10/22/2002 11:34:44  Last access: 10/22/2002 11:26:01

ANS8000I Server command: 'q node TSTW2K'

Node Name   Platform   Policy Domain   Days Since    Days Since     Locked?
                       Name            Last Access   Password Set
---------   --------   -------------   -----------   ------------   -------
TSTW2K      WinNT      STANDARD        277           278            No

ANS8002I Highest return code was 0.

Greetings

Michael Kindermann
Wuerzburg / Germany



--
+++ GMX - Mail, Messaging & more  http://www.gmx.net +++
NEW: With GMX on the Internet. Surf around the clock for 1 ct/min!



Re: 3590 Firmware

2002-10-22 Thread Seay, Paul
To my knowledge, if you are on maintenance, the CE will apply the code to
the drives.  However, it is a lot easier if you just get the code and use
tapeutil to load it from your host, no tape required.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Lloyd Dieter [mailto:dieter;SNRGY.COM]
Sent: Sunday, October 20, 2002 9:31 AM
To: [EMAIL PROTECTED]
Subject: 3590 Firmware


All,

We recently replaced a set of 3590B drives with H's in a 3494.  During the
upgrade, I inquired of the CEs regarding what the latest firmware was, and
if they could verify that the upgraded drives had the latest code.

The reply I received was that the procedures for 3590 firmware updates had
changed, and that the customer is now responsible for obtaining the FMR
tape.

Anytime in the past I have gone through firmware updates on 3590 drives, the
CE always provided/created the FMR tape.

A quick check around the web site doesn't seem to indicate anything new
along these lines, nor does a visit to index.storsys.

When I asked one of the CEs for the procedure for getting the code, he
provided me with an e-mail address of someone within IBM.  I fired one off
to him, along with the information I was told I required (Co. name, drive
SN, etc.), but have received no reply.

Has anyone else heard this, or are these guys a little off base?

-Lloyd

--
-
Lloyd Dieter-   Senior Technology Consultant
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-



Re: Oracle and TDP issues

2002-10-22 Thread Tailor, Steve
There is a program supplied with Oracle 7 called EBU.
What we are doing is to use EBU to backup the database to disk and then do
an archive of the backup.
It all works OK as long as you have the disk space.
We also have to do this with our Oracle 9 databases as TDP for that version
is not yet available.
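The EBU-to-disk-then-archive approach above might be scripted roughly as follows. The staging path, EBU command file, and management class name are all hypothetical; only dsmc archive with -subdir and -archmc is standard BA client syntax. The sketch prints the commands instead of running them.

```shell
STAGE=/backup/oracle   # hypothetical staging filesystem with enough free space
ARCHMC=ORACLE_MC       # hypothetical archive management class

# 1) EBU writes the database backup to disk (command file name is made up).
echo "ebu oracle_to_disk.ebu"
# 2) Archive the staged backup files with the TSM BA client.
echo "dsmc archive -subdir=yes -archmc=$ARCHMC $STAGE/"
```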

Regards

Steve Tailor
Technical Consultant


-Original Message-
From: Zoltan Forray [mailto:zforray;VCU.EDU]
Sent: 22 October, 2002 10:59 PM
To: [EMAIL PROTECTED]
Subject: Oracle and TDP issues


We just purchased the TSM TDP for Oracle on NT/2K.

We installed it, only to realize it won't work since there is no sign of
RMAN.EXE on this machine.

So, the owner of this box/package contacts the vendor. Their response
was: "Oracle is version 7.3.4, not 8i or 9i, which will not run with their
app." She said the person she spoke to did not seem to know what RMAN
was.

So, how does one back up an Oracle app/database using the TDP without RMAN?

I am not an Oracle person, but even I know that RMAN is the utility to do
database backup/restore/maintenance!

Suggestions, anyone?



Re: Oracle and TDP issues

2002-10-22 Thread Zoltan Forray
Thanks to everyone for all the helpful information about Oracle, RMAN and
EBU.

Will dig in to see if they installed EBU on this server and dig up some
scripts/bat/cmd files to do the dumping/backup.








NT4.0 IE 5 Requirements

2002-10-22 Thread Gallerson,Charles,GLENDALE,Information Technology
Is IE 5 required for the TSM BA client to run properly on NT 4.0 with Service
Pack 6?  Are there any special DLL requirements on NT 4.0 for TSM client
5.1 to work properly?  When IE 5 is installed on the client, the backup works;
if it is not installed, it hangs the CPU at 100%.



TDP for Domino Retention

2002-10-22 Thread Gill, Geoffrey L.
Would someone please explain to me what these settings will mean to me as they
relate to TDP for Domino backups? The schedule is below. Are there going
to be 45 versions of each type of backup, or what? I'm told they want to be able
to go back 45 days. I'm not sure how this should be set up, though. And I'm
having a hell of a time satisfying anyone on this.

Domain       Schedule Name        Action  Start Date/Time    Duration  Period  Day
----------   ------------------   ------  -----------------  --------  ------  ---
DOMINO_DOM   DOMARCH_M-F          CMD     02/14/02 06:00:00  1 H       6 H     WD
DOMINO_DOM   DOMINCR_M-F          CMD     04/09/02 19:00:00  1 H       1 D     WD
DOMINO_DOM   DOMSELECTIVE_SUNDAY  CMD     04/14/02 19:00:00  1 H       1 D     Sun





Policy Domain Name:     DOMINO_DOM
Policy Set Name:        DOMINO_POL
Mgmt Class Name:        DOMINO_MGT
Copy Group Name:        STANDARD
Versions Data Exists:   45
Versions Data Deleted:  3
Retain Extra Versions:  45
Retain Only Version:    45
Copy Mode:              MODIFIED
Copy Serialization:     DYNAMIC
Copy Frequency:         0
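For what it's worth, the copy group above corresponds to an admin command along these lines (a sketch built only from the values shown; verify against your own policy set before activating):

```
define copygroup DOMINO_DOM DOMINO_POL DOMINO_MGT STANDARD type=backup -
  verexists=45 verdeleted=3 retextra=45 retonly=45 -
  mode=modified serialization=dynamic frequency=0
activate policyset DOMINO_DOM DOMINO_POL
```

With one incremental per day, 45 versions retained for 45 extra days should give roughly the 45-day lookback they're asking for; note, though, that the 6-hourly DOMARCH runs are archives and are governed by the archive copy group's RETVER, not by this backup copy group.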
Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: RAID5 in TSM

2002-10-22 Thread Raghu S
I tested with compression and without compression. Not much difference in
performance.



From: Seay, Paul seay_pd@NAPTHEON.COM
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:
Subject: Re: RAID5 in TSM
Date: 10/22/2002 06:29 AM
Please respond to ADSM: Dist Stor Manager

Are you running compression?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Raghu S [mailto:raghu;COSMOS.DCMDS.CO.IN]
Sent: Tuesday, October 22, 2002 8:03 AM
To: [EMAIL PROTECTED]
Subject: RAID5 in TSM


Hi,

There was a lot of discussion on this topic before, but I am requesting the
TSM gurus to give their comments again.

The set up is like this.

TSM Server : Windows NT 4.0 SP6, TSM 5.1.0.0

392 MB memory, P III

  Adaptec Ultra SCSI

Hard Disk :  Internal   Hardware RAID 5:

 array A : 8.678GB * 3 : 17.356GB data and 8.678 GB
parity

 array B : 35.003 GB * 3 : 70.006GB data and 35.003
GB parity.


Both array A and array B are connected to the same channel.

OS and TSM 5.1 are installed on array A

TSM data base, recovery log and Disk storage pool are installed in array B.

Database : 2GB+2GB = 4 GB  and mirrored at TSM level on the same array

Recovery Log : 500MB + 500 MB = 1 GB and mirrored at TSM level on the same
array

Disk Storage pool : 10GB+10GB+10GB+10GB+5GB=45GB on array B


TSM client: 4.1.2.12 (Tivoli says 4.1.2.12 is not supported with the 5.1
server, but I could do backup, archive and restore with this
combination)

Number of Clients : 55, all are windows

Incremental backup : 1GB/ client/day.

backup window: 9 AM to 6 PM with 50% randomization (all are in polling
mode)

LAN : 100Mbps

At the end of the day only 10 clients could finish the backup. The rest are
all missing, ? (in progress), or failed.

Throughout the entire backup window the CPU load is 100%, with dsmsvc.exe
holding 98%.

I tested with various options. I stopped the scheduler and fired 3 clients'
backups manually at the same time. Each client has 1 GB of incremental data.
It took three hours to finish the backup. While backing up I observed there
were a lot of idle timeouts of sessions.

The network is not the choke point; I checked this with FTP.

What's the bottleneck here? Is RAID 5 creating problems (DB, log and
storage pool are all on the RAID 5)? I asked the customer to arrange a
test machine without any RAID. I will be getting that in two days. Before
going on to the testing, I would like to know your comments on this.



Regards

Raghu S Nivas
Consultant - TSM
DCM Data Systems Ltd
New Delhi
India.
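As a side note on the 50% randomization: in polling mode each client picks its start time within the first half of the window, so a 9:00-18:00 window concentrates every start between 9:00 and 13:30. A rough sketch of the arithmetic (my reading of SET RANDOMIZE, not the server's exact algorithm):

```shell
WINDOW_START=540    # 9:00 AM, in minutes since midnight
WINDOW_LEN=540      # 9-hour window
RANDOMIZE_PCT=50

SPAN=$(( WINDOW_LEN * RANDOMIZE_PCT / 100 ))     # clients start within this many minutes
OFFSET=$(( ${RANDOM:-135} % (SPAN + 1) ))        # this client's random offset
START=$(( WINDOW_START + OFFSET ))
printf 'client start: %d:%02d\n' $(( START / 60 )) $(( START % 60 ))
```

So with 55 clients, all of them pile into the first 4.5 hours; lowering the randomization percentage or using server-prompted mode spreads the load differently.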



Re: RAID5 in TSM

2002-10-22 Thread Raghu S
Here I am not worried about the protection; I am worried about the
performance. Most of my client backups failed.



From: Suad Musovich s.musovich@AUCKLAND.AC.NZ
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:
Subject: Re: RAID5 in TSM
Date: 10/23/2002 12:49 AM
Please respond to ADSM: Dist Stor Manager

Realistically, if Array B dies, you lose the database and recovery
log. The mirroring ain't giving you protection from that.
I would separate the recovery log to array A and lose the mirroring on
both DB and log (maybe mirror the log between arrays).
Restoration of a broken DB should only mean a couple of hours' outage, in
your case.





Re: RAID5 in TSM

2002-10-22 Thread Raghu S
Paul,


does keeping the TSM database, log and disk storage pool on RAID 5 degrade
the performance?

Regards

Raghu






Re: Co-location

2002-10-22 Thread Chris Gibes
Matt,

You are absolutely correct.  Co-location is by storage pool, not by
management class.  So yes, you would need to carve your disk up into
multiple storage pools to selectively use co-location, or you could set
up a tape pool that was co-located and go directly to tape, but
caution: you would need enough drives to accomplish this.  I
would also add the old "disk is so cheap" line, but my guess is that it's
not viable to add more disk (or you're on one of those platforms where
disk is not cheap...).

I guess one thing to consider is that while you may be carving your disk
up into smaller pools, the total amount of disk and the total amount
being backed up are going to be the same regardless of how many pools
you have, so carving one big pool up, shouldn't be that big of an issue,
as long as you put some planning into the size of the disk pools.
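A sketch of that carve-up, with a collocated tape pool behind a per-group disk pool (the pool, device class, and migration thresholds are made-up names; COLLOCATE=YES on DEFINE STGPOOL is the real knob):

```
define stgpool COLO_TAPE 3590CLASS maxscratch=50 collocate=yes
define stgpool GROUPA_DISK disk nextstgpool=COLO_TAPE highmig=80 lowmig=20
```

Each smaller disk pool then migrates into the same collocated tape pool, so only the nodes that need collocation have to live in pools that feed it.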

Chris Gibes
Tivoli Certified Consultant
IBM Certified System Administrator
[EMAIL PROTECTED]



We'd really like to avoid carving up our disk space into more smaller
pools.  But, as far as I can tell, that's the only way to use
colocation selectively.  Am I missing something, or is that the way
it works?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:msimpson;uky.edu
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge
obsolete
profits for their obsolete shareholders.  And this year's run twice as
fast
as last year's.



Re: RAID5 in TSM

2002-10-22 Thread Suad
Then remove the mirroring.

On Wed, 2002-10-23 at 16:43, Raghu S wrote:
 Here i am not worried about the protection.I am worried about the
 performace. Most of my client backups are failed.






Re: New Features??

2002-10-22 Thread Abelo Kaj-Flemming
Hello Mahesh

I've reported this problem to Tivoli.
APAR IC34754 has been created for this problem.

Kaj

 -Original Message-
 From: Mahesh Tailor [mailto:MTailor;CARILION.COM]
 Sent: 21. oktober 2002 20:11
 To: [EMAIL PROTECTED]
 Subject: New Features??


 Hello, all!

 I just installed the TSM 5.1.5.1 patch on an AIX 4.3.3.10 system.  No
 problems yet.  But, when I q pro I get the following.  Looks normal
 enough, except for the ~'s and \.  I get the \ for all processes.

 Offsite Volume(s) (storage pool COPYPOOL), Moved
 Files: 9, Moved Bytes: 15,743,970,368,
 Unreadable Files: 2, Unreadable Bytes: 0.
 Current Physical File (bytes):
 1,249,432,856~Current input volume:
 T10524.~Current output volume: T10130.\

 Has anyone else seen this? And, can something be done about it?

 TIA

 Mahesh




Re: RAID5 in TSM

2002-10-22 Thread Roger Deschner
Your database is I/O-bound. You have, essentially, duplicated your
protection at different levels. You need to choose between 1) RAID-5 or
2) TSM Software mirroring, but you should not use both. Either one will
protect you from a disk drive failure, and either one will allow the
server to stay up and running when a failure occurs.

I believe that the best way is to use TSM Software mirroring on JBOD
(i.e. plain, raw) disk drives for the database and log, and RAID-5 for
the online disk storage pools. But if you are stuck using RAID-5 for
some reason, then do not use TSM Server mirroring as well; it is
redundant.

When you allocate your mirror copies, get the two copies as far away
from each other as possible. At minimum, they must be on separate
physical disk drives. Better to have them on separate I/O channels (SCSI
bus...). This separation will both help performance, and improve your
ability to continue running after a disk problem.
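On TSM 5.x the software mirror Roger describes is defined per volume (volumes formatted with dsmfmt first); assuming hypothetical Windows paths on two separate physical disks, it looks like:

```
define dbvolume  e:\tsmdata\db01.dsm
define dbcopy    e:\tsmdata\db01.dsm  f:\tsmmirr\db01cpy.dsm
define logvolume e:\tsmdata\log01.dsm
define logcopy   e:\tsmdata\log01.dsm f:\tsmmirr\log01cpy.dsm
```

The point of the separate e:\ and f:\ paths is exactly the separation Roger recommends: each copy on its own spindle, ideally its own bus.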

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]






Re: RAID5 in TSM

2002-10-22 Thread Mark D. Rodriguez



Hi,

Well you have heard several answers so far so I won't repeat what they
have said, but I would like to add a couple of new things that can
improve you performance.

   * First of all, 392MB of memory isn't enough for a desktop machine
     today, let alone a server.  Windows performance is very much
     dictated by the amount of memory it has.  I would upgrade to at
     least 2GB; you might get by, or at least see improvements, with 1GB,
     but 2GB will be much better.
   * You don't mention your network config, but having multiple NICs
     will help.  Also, make sure you are using full duplex mode if at
     all possible.
   * You are using polling; I prefer server-prompted because I can
     better control the load on my system by tuning it with
     MAXSESSIONS and MAXSCHEDSESSIONS.  With server-prompted I can
     bring the server up to peak operating load rapidly and keep it at its
     peak until the client load starts to drop as client backups
     complete.  Backup windows are always considerably smaller using
     this methodology.
   * If you must stick with polling, then reduce your randomization to
     about 25%.  Also, make sure that your MAXSESSIONS and
     MAXSCHEDSESSIONS are set to accommodate the load.
   * All your drives are on one SCSI bus; this is bad!  Add more SCSI
     controllers, or at least take advantage of multiple channels on the
     same card.
   * Take the ITSM DB and LOG off of RAID 5 and just use ITSM mirroring.
   * Make sure the LOG and DB are not on the same drive(s).
   * Put the LOG and DB mirrors on a different SCSI bus from the primary.
   * You must decide whether you need to protect your storage pool with RAID
     or not.  This can be one of those philosophical debates, but you
     must decide: in the event of a disk crash, can you afford to
     lose a night's worth of backups, i.e. can it easily be retrieved
     from the client, or has the data already been changed?
   * If you don't need the protection of RAID 5 for your storage pools,
     then make one volume per drive.  Again, spread these across SCSI
     channels and controllers if possible.

There have been many posts on this list about performance tuning, so you
can refer to them if you need to.  However, I am willing to bet the
biggest problem here is the limited memory.  The next biggest problem is
the disk configuration.
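For the server-prompted setup recommended above, the relevant knobs look roughly like this (the values are illustrative, not recommendations; check your own server's capacity first):

```
* dsmserv.opt (server)
MAXSESSIONS 25

* dsmadmc (server) -- MAXSCHEDSESSIONS is a percentage of MAXSESSIONS
set maxschedsessions 80

* dsm.opt (client)
SCHEDMODE     PROMPTED
TCPCLIENTPORT 1501
```

With prompted mode the server contacts each client at its scheduled time, so the load ramps up under the server's control instead of depending on when polling clients happen to check in.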

Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE