Re: D2D on AIX

2004-09-20 Thread Zlatko Krastev/ACIT
/testfs may grow to 1 TB

there is a way to cheat:
def v filepool /test2fs/file
def v filepool /test2fs/file0001
...
So, using a script, you can manually define more volumes outside the
devclass-defined directory; their sizes are as defined in the devclass.
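
For illustration, a minimal ksh sketch of such a loop (the admin ID, password and volume count are placeholders, not taken from the original post):

 #!/bin/ksh
 # define 16 extra FILE volumes under /test2fs; their size follows the devclass MAXCAPacity
 i=1
 while [ $i -le 16 ]; do
     n=$(printf "%04d" $i)
     dsmadmc -id=admin -password=secret "def v filepool /test2fs/file$n"
     i=$((i + 1))
 done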

another approach might be hierarchy of filepools:
 def devclass fileclass1 devtype=file maxcap=64g dir=/test1fs
 def stgpool filepool1 fileclass1 pooltype=primary maxscratch=100
 def devclass fileclass2 devtype=file maxcap=64g dir=/test2fs
 def stgpool filepool2 fileclass2 pooltype=primary maxscratch=100
 upd stg filepool1 next=filepool2
 def devclass fileclass3 devtype=file maxcap=64g dir=/test3fs
 def stgpool filepool3 fileclass3 pooltype=primary maxscratch=100
 upd stg filepool2 next=filepool3
...

Take into account that the latter approach is unusual, and over the years
there have been some nasty APARs for hierarchies of more than two storage
pools (beyond the classic DISKPOOL - TAPEPOOL).

Zlatko Krastev
IT Consultant






Eliza Lau [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
20.09.2004 20:52
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc: (bcc: ADSM-L/ACIT)
Subject:Re: D2D on AIX


Mark,

Where do you define this?
I use this command to define a diskpool:
 def devclass fileclass devtype=file maxcap=64g dir=/testfs
 def stgpool filepool fileclass pooltype=primary maxscratch=100

/testfs is a JFS2 filesystem.  How big can /testfs grow to?  The documents
say 1TB.
When volumes are created in the stgpool filepool, it creates a volume of
64G, which is the max value you can specify.  Where do you define the 500GB
you said?

Eliza



 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Eliza Lau
 Our 3494 with 3590K tapes in 3 frames is getting full.
 Instead of adding another frame or upgrading to 3590H or 3592
 tapes we are looking into setting up a bunch of cheap ATA
 disks as primary storage.
 
 The FILE devclass defines a directory as its destination and
 JFS2 has a max file system size of 1TB.  Does it mean the
 largest stgpool I can define is 1TB?

 No. What it means is that the largest single volume in your diskpool can
 be 1TB. You could have, say, 30 volumes @ 500GB per volume, making a
 total storage pool size of 15TB. Every two volumes would be in their own
 filesystem.

 If you're using a disk farm as your primary storage pool, fault
 tolerance is strongly recommended. RAID0 and RAID1+0 would be more
 expensive; RAID5 might make more sense, as long as you were using a
 proper monitoring system (properly set up) to watch the health of your
 disks. Are you using CACHE=YES in your proposed disk solution?

 --
 Mark Stapleton ([EMAIL PROTECTED])
 Berbee Information Networks




Re: Informix Data Protection

2004-09-20 Thread Zlatko Krastev/ACIT
The manual you are looking for is named IBM Informix Backup and Restore
Guide. You can find it at URL
http://www-306.ibm.com/software/data/informix/pubs/library/interim/ct1slna-pdf.html


Or you can browse the whole library of Informix Dynamic Server manuals at
http://www-306.ibm.com/software/data/informix/pubs/library/ids_9.html
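
For quick orientation only - a hedged sketch, not a substitute for the guides above; the ONCONFIG library path shown is an assumption about a typical TSM/XBSA setup:

 # in $ONCONFIG, point ON-Bar at the Data Protection XBSA library, e.g.
 #   BAR_BSALIB_PATH /usr/lib/libtxbsa.a
 onbar -b -L 0      # level-0 backup of all dbspaces
 onbar -b -l        # back up full logical logs
 onbar -r           # physical + logical restore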

Zlatko Krastev
IT Consultant






Richard Mochnaczewski [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
13.09.2004 15:54
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc: (bcc: ADSM-L/ACIT)
Subject:Informix Data Protection


Hi,

Does anyone have any good documentation on how to use the Informix Data
Protection agent ? I was able to install the Data Protection agent on one
of
my test AIX servers but I'm not totally clear on how to use onbar as the
backup scheme. I'm familiar with onload/onunload and doing level 0 backups
on my other servers, but I have never used onbar. What's more, the
Informix
Data Protection agent only shows how to setup the agent and shows nothing
pertaining to the actual backup. It just says to consult the Informix
guide
for more details. I have checked the IBM site, but am having problems
finding step by step procedures on using onbar and how Tivoli fits into it
all. Any help would be greatly appreciated.

Rich


Re: More mount points thru space reclamation

2004-01-07 Thread Zlatko Krastev
Can you spare a few more minutes to report the misleading error message to
IBM? It seems you've spent nearly a whole day beating your head against it.
Save that time for the others.

Zlatko Krastev
IT Consultant






Lars-Erik Öhman [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
02.01.2004 16:24
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Re: More mount points thru space reclamation


I found the problem myself! I had to increase the Maximum Scratch
Volumes Allowed setting from the 100 volumes I had as a maximum to more!
Cryptic error message, but now it's solved!!
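
For anyone hitting the same wall, the fix amounts to something like this (pool name and limit are placeholders):

 update stgpool filepool maxscratch=200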

/Larsa 


Re: List all tapes, highlighting those in library.

2004-01-07 Thread Zlatko Krastev
There is no need to parse anything, just create a more complex query
(without marrying the DBA, as per Wanda's advice :-)):
dsmadmc select v.volume_name, -
(select library_name from libvolumes -
where libvolumes.volume_name=v.volume_name) -
as "in Library", -
(select home_element from libvolumes -
where libvolumes.volume_name=v.volume_name) -
as "at Element" -
from volumes v

Do not forget that a plain select from libvolumes will list not only data
volumes but also scratches, dbbackups, exports, backupsets, etc. (all
volhistory-tracked thingies).

Zlatko Krastev
IT Consultant






Dan Foster [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.01.2004 01:46
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: List all tapes, highlighting those in library.


Hot Diggety! Deon George was rumored to have written:
 Has anybody got an SQL, that can list all VOLUMES, and highlight (either
 showing the element number or something) those that are in the library?

 This report would be useful to see quickly if a list of tapes are
already
 in the library - or which tapes in the list are not and need to be
checked
 in.

I got a list of tables by doing:

tsm> select tabnames,remarks from tables

Then I decided to look at the table called LIBVOLUMES because your request
is essentially the SQL equivalent of 'query libvolume'.

So then I decided to see what fields (columns) were present in the table
called 'LIBVOLUMES' with:

tsm: MYSERVER> select colname from columns where tabname='LIBVOLUMES'

COLNAME
--
LIBRARY_NAME
VOLUME_NAME
STATUS
OWNER
LAST_USE
HOME_ELEMENT
CLEANINGS_LEFT

Based on that, I figured only three columns might be useful. So:

tsm> select library_name,volume_name,home_element from libvolumes

...which would produce an output like:

tsm: MYSERVER> select library_name,volume_name,home_element from libvolumes

LIBRARY_NAME   VOLUME_NAME    HOME_ELEMENT
------------   -----------    ------------
3584LIB1       MYS000         1026
3584LIB1       MYS001         1027
3584LIB1       MYS002         1028
3584LIB1       MYS003         1029
3584LIB1       MYS004         1030
3584LIB1       MYS005         1031
[...snip...]

If you have only one library, you are free to leave the library_name off
the select query.

Parse the results as you like, regardless of whether it's called via an
internal TSM server-side script or via a script that parses the output of
calling dsmadmc in batch mode. It is trivial in either case.
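
For example, a batch-mode call that is easy to post-process (admin ID and password are placeholders; check that your admin client level has the -commadelimited option):

 dsmadmc -id=admin -password=secret -commadelimited \
   "select library_name,volume_name,home_element from libvolumes" > libvols.csv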

-Dan


Re: IBM Website down?

2004-01-06 Thread Zlatko Krastev
I have been accusing IBM of wrongdoing with M$ many times, but this one
is far from the truth. I am successfully browsing the site using good old
Netscape Communicator 4.76/Win & 4.78/AIX, Konqueror/Linux, etc.
Sometimes even such a mighty site can get overloaded. I still remember
(eons ago) the limit of 200 simultaneous FTP users on it.
Another issue with many FTP sites might be the "Enable folder view for
FTP sites" setting of Internet Exploder.

Zlatko Krastev
IT Consultant






Dwight McCann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.12.2003 07:41
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:IBM Website down?


David,

It is my experience that this occurs only when you attempt to download
via web browser that is other than IE!  Strangely, IBM is only Microsoft
friendly.  Try IE or an FTP client.

Dwight McCann


Re: Multiple TSM Servers/Informix restore

2003-12-14 Thread Zlatko Krastev
I would say the advice was incomplete and your feelings are quite
correct.
Restore of a TSM server DB is the first step in the operational recovery
process. The next (though not so lengthy) step is to configure the
libraries, drives, paths, and ... storage agents + polled clients.

Imagine the mess when the restored SAPPROD contacts a node in polling
mode and convinces it to perform a backup. Execution of
preschedule/postschedule scripts in the middle of the day might become a
big headache.

The problem wouldn't be resolved by simply renaming the restored server to
SAPQAS, as the library manager knows that SAPPROD is the owner of those
libvolumes. The mess can get deeper and deeper.
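
One possible precaution - sketched here only as an idea, not something described in the original advice - is to start the restored instance with client sessions disabled and give it a different server name before anything connects:

 disable sessions
 set servername SAPQAS_RESTORED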


What are you actually trying to accomplish?

Zlatko Krastev
IT Consultant






Willem Roos [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11.12.2003 13:23
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Multiple TSM Servers/Informix restore


Hi list,

May i pick your collective brain?

We have multiple TSM Servers (called TSM_SERVER, SAPPROD and SAPQAS).
TSM_SERVER owns the 3584, SAPPROD and SAPQAS are library clients.

When we wish to restore an Informix database backed up on SAPPROD to
SAPQAS, our local friendly TSM support company has advised us to

1) backup tsm db on SAPPROD,
2) restore SAPPROD's tsm db to SAPQAS
3) copy ixbar file
4) and restore

However, i have this strange feeling that TSM_SERVER is now getting
terribly confused between SAPPROD (on sapprod) and SAPPROD (on sapqas)
and starts messing up tape [dis]mount requests etc.

What do you think?

Many thanks in advance,

---
  Willem Roos - (+27) 21 980 4941
  Per sercas vi malkovri


Re: Admin authority class

2003-12-12 Thread Zlatko Krastev
owner!

Zlatko Krastev
IT Consultant






David E Ehresman [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11.12.2003 18:46
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Admin authority class


What is the minimum admin authority class needed for an admin id to
change a node password?


Re: Admin authority class

2003-12-12 Thread Zlatko Krastev
Sorry,

quick fingers and slow brain, I really need a vacation :-((

The previous answer was wrong and misleading. The correct answer is:
you need at least policy authority over the policy domain the node is in.
Other options are unrestricted policy authority or system authority.
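
For example (admin ID, domain and node names are placeholders), granting restricted policy authority and then resetting the node's password:

 grant authority helpdesk1 classes=policy domains=standard
 update node somenode newpassword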

Zlatko Krastev
IT Consultant






David E Ehresman [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
12.12.2003 18:28
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Admin authority class


Does Owner admin authority class allow you to reset the password for a
node for which you do not know the password?  If so, how would one do
this?

David

 [EMAIL PROTECTED] 12/12/2003 11:13:55 AM 
owner!

Zlatko Krastev
IT Consultant






David E Ehresman [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11.12.2003 18:46
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Admin authority class


What is the minimum admin authority class needed for an admin id to
change a node password?


Re: Auditing Offsite volume

2003-12-12 Thread Zlatko Krastev
TSM prevents you from deleting empty copypool volumes while they are offsite
just to ensure you will bring them back onsite one day. When you bring the
volume back and let the TSM server know the fact, the volume gets deleted
at once and becomes scratch.

If this is your problem, the method to resolve it is outlined above. If
there is another problem and the volume is indeed empty, bring it onsite
and try to relabel it.
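
In practice that means something like this (volume name is a placeholder):

 update volume ABC123 access=readwrite location=""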

Zlatko Krastev
IT Consultant






Bill Fitzgerald [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
12.12.2003 16:56
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Auditing Offsite volume


I have a volume in my copypool that is a problem

it is access=offsite
location=vault

it will not reclaim and if I try to delete it, it say it is empty offsite
and can not be delete

I know that there is a way to Audit this volume while it is still offsite

how do I do that?

Bill

William Fitzgerald
Software Programmer
TSM Administrator
Munson Medical Center
[EMAIL PROTECTED]


Re: SQL TDP configurations

2003-12-10 Thread Zlatko Krastev
I personally prefer the mixed scenario:
- using the include/exclude list, the API client metadata is sent to a
dedicated disk pool (which *does not* migrate to tape)
- full and differential backups go directly to tape
- transaction log backups go to another disk pool which in turn migrates
to tape

Like any other solution this has its drawbacks, but I like it and it works
(knock on wood, we have been hit once by a bug. Thank you for helping us to
resolve it, Del).
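
A rough server-side sketch of that layout (all names are invented; the client-side include patterns that bind the DP for SQL meta objects are deliberately omitted - check the Data Protection for SQL manual for the exact object-name syntax):

 /* metadata pool has no nextstgpool, so it never migrates */
 def stgpool sqlmeta_disk disk
 def stgpool sqllog_disk disk nextstgpool=sqltape
 def mgmtclass sqldomain standard sqlmeta_mc
 def copygroup sqldomain standard sqlmeta_mc type=backup destination=sqlmeta_disk
 def mgmtclass sqldomain standard sqlfull_mc
 def copygroup sqldomain standard sqlfull_mc type=backup destination=sqltape
 assign defmgmtclass sqldomain standard sqlfull_mc
 activate policyset sqldomain standard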

Zlatko Krastev
IT Consultant






Del Hoobler [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09.12.2003 19:55
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: SQL TDP configurations


Dave,

I find this to be a mix. That is, there are some customers
backing their SQL databases to disk first, then migrating
them to tape. There are other customers backing them up
directly to tape. There is no correct answer, because it
depends on your service level agreements, your TSM Server
infrastructure, whether you are using Data Protection
for SQL backup striping, colocation settings, backup strategy, etc.
Maybe others on this list can share the techniques
that work best for them.

Thanks,

Del



 It looks like our particular problem is with the machine itself.

 I guess the origin of the question start because, is backing up the SQL
 databases straight to tape the correct thing to do?  What is everyone
 else doing?


Re: Replacement of Tape Library

2003-12-10 Thread Zlatko Krastev
Let me first ask a question before answering yours: why are you going to
*downgrade*?? Are you going to use the 3583 somewhere else? It can be
upgraded with Ultrium 2 drives.

You can read from and even write to existing tapes without paying
attention to the name of the new library. You can delete the old drives
and library, and define new ones under the same name. You can also define a
new library and use "UPDate DEVclass <your tape class> LIBRary=<new 3582
name>". The process is simple and very straightforward.
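
A rough outline of the delete-and-redefine approach on a 5.1+ server (server, library, drive names and device special files are placeholders):

 def library 3582lib libtype=scsi
 def path tsmsrv1 3582lib srctype=server desttype=library device=/dev/smc0
 def drive 3582lib drive1
 def path tsmsrv1 drive1 srctype=server desttype=drive library=3582lib device=/dev/rmt1
 upd devclass ltoclass library=3582lib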

Zlatko Krastev
IT Consultant






Crawford, Lindy [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08.12.2003 10:17
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Replacement of Tape Library


HI TSMers,



We are replacing our existing 3583 tape library with a new 3582
library. Is there anything I should consider when installing this
library?

If I keep the library name the same as the other library, will I still be
able to see / read / restore the data from the existing tapes?

Thank you for your help in advance.



Lindy Crawford
Information Technology
Nedbank Corporate - Property and Asset Finance
Tel : 031 - 3642185
Fax : 031 - 3642946
Cell: 083 632 5982
Email   : [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 







This email and any accompanying attachments may contain confidential and
proprietary information.  This information is private and protected by law
and accordingly if you are not the intended recipient you are requested to
delete this entire communication immediately and are notified that any
disclosure copying or distribution of or taking any action based on this
information is prohibited.

Emails cannot be guaranteed to be secure or free of errors or viruses. The
sender does not accept any liability or responsibility for any
interception
corruption destruction loss late arrival or incompleteness of or tampering
or interference with any of the information contained in this email or for
its incorrect delivery or non-delivery for whatsoever reason or for its
effect on any electronic device of the recipient.

If verification of this email or any attachment is required please request
a
hard-copy version


Re: Backsets To Removable Media

2003-12-10 Thread Zlatko Krastev
mkisofs + cdrecord
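
A minimal sketch of that route (file names and the SCSI address are placeholders, and whether cdrecord can drive your particular DVD-RAM unit on AIX is something to verify):

 mkisofs -R -o /backupsets/node1.iso /backupsets/node1.ost
 cdrecord dev=0,1,0 /backupsets/node1.iso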

Zlatko Krastev
IT Consultant






Renee Davis [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09.12.2003 23:11
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Backsets To Removable Media


We are testing the feature on TSM that allows you to generate a backset
and
copy it to removable media. Our server (7026-6H1) has a 4.7 GB SCSI-2
DVD-RAM drive. We wish to cut the backsets to DVDs. The Admin Guide says
that once the backup set is generated, you must use 3rd-party software to
label and copy the backset to your media. Any suggestions of software or
strategies to accomplish this on an AIX box?

OS System: AIX 5.2.0.1
TSM Level: 5.2.0.2

--
Renee Davis
University of Houston


Re: Unixware client error

2003-12-10 Thread Zlatko Krastev
Like any other v4.1 client, this one is out of support.
Even if it were supported, I would expect some change after SCO's attempt
to squeeze some money from IBM :-)

Zlatko Krastev
IT Consultant






Gill, Geoffrey L. [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09.12.2003 07:41
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Unixware client error


Is anyone using the latest Unixware client without errors? For some reason
the admin of this client is getting this message when he tries to start
the
scheduler.



12/05/03   11:42:19 sigwait failed in pkWaitshutdown.



I think this is the latest for that guy. Are they not supporting it any
longer?



ftp://ftp.software.ibm.com/storage/tivoli-storage-management/patches/client/
v4r1/UnixWare/v411/
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/patches/client
/v4r1/UnixWare/v411/



Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


Re: Full diskpool clients to tape, but when the diskpool is migrated the clients still go to tape

2003-12-01 Thread Zlatko Krastev
Yes, it is normal.
1. The diskpool gets full
2. Some sessions request backups and end up with mount requests
3. The diskpool is drained and there is some space in it
4. New sessions back up to the diskpool while the old sessions still wait
for their requests to be satisfied

A possible workaround for this issue is to set maxnummp=0 for the offending
nodes, but their backups will then tend to fail due to lack of server
storage space.
The real solution is to scale your diskpool when your nodes scale!!!
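
The workaround mentioned above would look like this (node name is a placeholder):

 update node node1 maxnummp=0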

Zlatko Krastev
IT Consultant






Niklas Lundstrom [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01.12.2003 10:33
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Full diskpool clients to tape, but when the diskpool is migrated the
clients still go to tape


Hello,

I guess the subject line says it all.
When my diskpool gets full the new sessions go directly to the tapepool,
but when the diskpool is migrated and gets free space the sessions still
wait for a mountpoint in my tapepool.
Is it normal behaviour??
 
Best regards,
Niklas Lundström
Föreningssparbanken IT
08-5859 5164
 




Re: TSM from AIX 4.3.3 to AIX 5.2

2003-12-01 Thread Zlatko Krastev
Your scenario ought to work flawlessly.
As a precaution you can drain your disk pool(s) and take a DB backup.
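
For the precaution, something along these lines (pool and device class names are placeholders):

 upd stgpool diskpool highmig=0 lowmig=0
 /* wait for migration to finish, then */
 backup db devclass=3590class type=full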

Zlatko Krastev
IT Consultant






Steven Bridge [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
27.11.2003 21:44
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM from AIX 4.3.3 to AIX 5.2


I am planning to 'migrate' a TSM server from an F40 running AIX 4.3.3
to a p615 running AIX 5.2.0.

We are running TSM version 5.1.8.0 .
This has been installed on the new machine.

I was intending to physically re-attach the disks and tape library
to the new machine. The TSM data is all in TSM specific volume groups,
so I plan to exportvg and importvg these.

I would then copy across the dsmserv.dsk and dsmserv.opt files to
the new machine, add tape & library devices again and hopefully
restart TSM on the new machine.

Does this sound like it should work or have I forgotten anything ?

The new machine is running in 64 bit mode so the code base won't be
exactly the same - but I can't see that this should make any
difference. Am I correct in this assumption ?

+--+
 Steven Bridge Systems Group, Information Systems, EISD
  University College London


Re: data in wrong stgpool

2003-12-01 Thread Zlatko Krastev
A single move nodedata ought to be enough for a single node. Depending on
the number of nodes, several moves can be less time consuming than
collocation of the whole stgpool.
clean archdir and clean backupgroup are intended for totally different
purposes and the only surprise would be if they did what you wanted.

Natural expiration of the data is a good option and might save you some
tape drive load if you do the move nodedata later. But your course of action
is correct and you can do it even right now.
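
For reference, the move itself is a one-liner per node (all names are placeholders):

 move nodedata node1 fromstgpool=wrongpool tostgpool=rightpool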

Zlatko Krastev
IT Consultant






Kurt A Rybczyk [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
25.11.2003 21:51
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:data in wrong stgpool


I recently discoverd that someone setup a copygroup wrong and data was
being sent to the wrong storage pool. My problem is that the stgpool it
was being sent to also has a copystgpool. I did a move nodedata node
from=source to=dest and that ran okay. It moved from one primary pool
to another. Now, I'd like to get the data in the copystgpool removed too.
I've tried expire i and backup stg. I've also tried clean archdir and
clean backupgroup and that hasn't worked. The only other thing I can come
up with is to collocate the offsite pool and do a bunch of move nodedata
to isolate that data on specific offsite volumes, then doing a del v
discard=y on them.

Any thoughts or suggestions would be much appreciated.  Thanks in advance.

kr

--
Kurt Rybczyk


Re: Antw: Please help: Question concerning cleanup backupgroups

2003-12-01 Thread Zlatko Krastev
This is misleading!!!
The whole idea behind the clean backupgroups command is to be able to run it
in parts, i.e. do some work and cancel the process. Committed transactions
will be confirmed and will stay. Only the last transaction will be rolled
back! That's why the completion state is 'cancelled' - the process is
aware of the cancellation and will continue next time!
Reverting to logmode=normal exposes server operations to an
unnecessary risk.

The log-filling issue can be resolved by using a DB backup trigger (look at
the h def dbb command), or by running the clean backupgroups over several
days and cancelling it before the log gets filled.
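
A DB backup trigger along the lines hinted at above (device class name and thresholds are placeholders):

 define dbbackuptrigger devclass=dbbackclass logfullpct=60 numincremental=6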

Zlatko Krastev
IT Consultant






Bernd Wiedmann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
25.11.2003 18:33
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Antw: Please help: Question concerning cleanup backupgroups


Is your log running in roll-forward mode???

If yes, turn it to normal (only pending transactions will be logged);
that should do it.

When you restart the process, it has to do all the work again, I think
... 8)

cu
Bernd Wiedmann


Re: ANR9999D TSM Server 4.2.2.2 and Client 5.1.5.15

2003-11-29 Thread Zlatko Krastev
Verify the nametype of the node's filespaces in the TSM server. Maybe the 
client expects the filespaces to be unicode but the server's node 
definition does not allow automatic filespace rename. Output of `q no 
c010191b f=d` and `q fi c010191b f=d` might help.
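
If it does turn out to be the automatic filespace rename, the Windows client controls it with the AUTOFSRENAME option; a hypothetical dsm.opt fragment:

 * let the Unicode-enabled client rename existing non-Unicode filespaces
 AUTOFSRENAME Yes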

Zlatko Krastev
IT Consultant






Sascha Braeuning [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
28.11.2003 15:13
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:ANR9999D TSM Server 4.2.2.2 and Client 5.1.5.15


Hello TSMers,

I've got a problem with an updated client. We updated the Windows NT client
from version 3.1 to version 5.1.5.15. Our TSM server is running under z/OS
and TSM version 4.2.2.2. I tested backup and restore behavior and
everything looks fine. The services started without a problem and the
client got its backup time from the server (we use prompted mode).

Now I am getting this terrible error message on the TSM server:

28.11.2003 13:46:00   ANR9999D SMEXEC(1627): ThreadId<139004> Session 78272
                      with client C010191B (WinNT) rejected - server does not
                      support the UNICODE mode that the client was requesting.

When I check the clienterrorlogs, there seems to be everything alright. No
errors! Can anybody help?

With kind regards

Sascha Bräuning
DB/DC C/S-Systeme/Überwachungsverfahren FE/KA

Sparkassen Informatik GmbH  Co.KG
Wilhelm-Pfitzer-Straße 1
70736 Fellbach
Telefon: 0711/5722-2144
Telefax: 0711/5722-2147
Notes: Sascha Bräuning/6322/FE/SI/[EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED]


Re: encryption: 56 to 128

2003-11-29 Thread Zlatko Krastev
Joe,

I understand you pretty well as I am in your shoes - I am meeting our
customers every day. The only thing I can do is to stress improved
network security if data in transit is the main concern, and to stress
application protection for data that is sensitive in the long term. One of
the arguments is that TSM cannot cover everything better than the
applications themselves can.

But the lack of serious encryption is a drawback of TSM, so hopefully sooner
or later IBM will realize that. The effort to brute-force even 56-bit
encrypted data is usually indeed greater than the effort to gather the same
data with social engineering. Anyway, sometimes people are just fond of
modern/fashionable thingies without actually needing them. So whenever they
ask for a 128-bit something and we cannot provide it, the people
cross the street (as per the beloved IBM ATM example) and go to another
vendor.

How to open an enhancement request with IBM? Hard question!
I've asked my local IBM sales rep several times without success. So I've
escalated the issue to our regional Central and Eastern Europe salesperson
for Tivoli. The answer was to collect some cases and send the data to him
to open a request. At which level and with which person (!!) you should do
it in U.S.A. is up to you to investigate.

Zlatko Krastev
IT Consultant






Joe Crnjanski [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
27.11.2003 20:01
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: encryption: 56 to 128


Thank you for understanding me Zlatko.

The reason I'm asking for this is what we hear from our prospect customers
on initial sales meeting. When question comes to security and encryption,
if we can we try to avoid exact answer. Usually we say data is encrypted
with industry standard encryption. If we have somebody that knows little
bit more about this and he/she asks what kind of encryption, and we say 56
bit they all look to each other.
You have to remember that we are backing up data off-site over the
Internet and encryption becomes big issue for us.
And we all know that 56bit is very old technology(5-8 years; not sure).
You can not do internet banking without 128bit for at least 3-4 years.
128bit is standard on win2000, even nt4 had 128bit encryption with one of
the service packs.

Back to the GIVE US A CHANCE

How do we open an enhancement request with IBM?!!

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com



-Original Message-
From: Zlatko Krastev [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 27, 2003 12:08 PM
To: [EMAIL PROTECTED]
Subject: Re: encryption: 56 to 128


-- ... without the TSM database, a TSM tape is worthless...

This is not completely correct. Some data can be read from the tapes but
you will not know whether it is the latest version or from which time period
it is. There was such a tool in the past. Look in the list archives for
adsmtape.

-- ... any data that transits on the network is encrypted. Usually it's
not.

Fully agree ... but the initial question was what to do if we want to be
secure. If I am Mr. IT Manager and see two security issues, insecure
network traffic and insecure backups, and want to resolve them?!? We can
protect the network, and when the time comes to backups do what?
If the network is still insecure (but can be secured), it is not an excuse
not to have protection on backups!
Some companies/organizations prefer to *lose* some data instead of
revealing that same data to a competitor/enemy!!!

Back to the topic - GIVE US A CHANCE
The appropriate method is to open an enhancement request with IBM.
Support it with the cumulative revenue IBM lost because of lacking the
feature - sum licenses, services and maintenance for 3 or 5 years, for all
projects you've lost and SHOW THEM THE MONEY!!!


Zlatko Krastev
IT Consultant






Guillaume Gilbert [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
27.11.2003 18:00
Please respond to guillaume.gilbert


To: [EMAIL PROTECTED]
cc:
Subject:Re: encryption: 56 to 128


I always ask if any data that transits on the network is encrypted.
Usually it's not. So why would the backups be? Unlike NetBackup, TSM does
not use tar to write on tapes. It uses its own proprietary method. And
without the TSM database, a TSM tape is worthless...

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Remco Post
 Sent: Thursday, November 27, 2003 10:58 AM
 To: [EMAIL PROTECTED]
 Subject: Re: encryption: 56 to 128


 On Thu, 27 Nov 2003 09:41:34 -0500
 Joe Crnjanski [EMAIL PROTECTED] wrote:

  Hi All,
 
  Does anybody know if IBM is planning to upgrade their
 famous encryption
  from 56 to 128 bit at least. Not to mention that today on
 market 512 bit

Re: encryption: 56 to 128

2003-11-27 Thread Zlatko Krastev
-- ... without the TSM database, a TSM tape is worthless...

This is not completely correct. Some data can be read from the tapes but
you will not know whether it is the latest version or from which time period
it is. There was such a tool in the past. Look in the list archives for
adsmtape.

-- ... any data that transits on the network is encrypted. Usually it's not.

Fully agree ... but the initial question was what to do if we want to be
secure. If I am Mr. IT Manager and see two security issues, insecure
network traffic and insecure backups, and want to resolve them?!? We can
protect the network, and when the time comes to backups do what?
If the network is still insecure (but can be secured), it is not an excuse
not to have protection on backups!
Some companies/organizations prefer to *lose* some data instead of
revealing that same data to a competitor/enemy!!!

Back to the topic - GIVE US A CHANCE
The appropriate method is to open an enhancement request with IBM.
Support it with the cumulative revenue IBM lost because of lacking the
feature - sum licenses, services and maintenance for 3 or 5 years, for all
projects you've lost and SHOW THEM THE MONEY!!!


Zlatko Krastev
IT Consultant






Guillaume Gilbert [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
27.11.2003 18:00
Please respond to guillaume.gilbert


To: [EMAIL PROTECTED]
cc:
Subject:Re: encryption: 56 to 128


I always ask if any data that transits on the network is encrypted.
Usually it's not. So why would the backups be? Unlike NetBackup, TSM does
not use tar to write on tapes. It uses its own proprietary method. And
without the TSM database, a TSM tape is worthless...

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Remco Post
 Sent: Thursday, November 27, 2003 10:58 AM
 To: [EMAIL PROTECTED]
 Subject: Re: encryption: 56 to 128


 On Thu, 27 Nov 2003 09:41:34 -0500
 Joe Crnjanski [EMAIL PROTECTED] wrote:

  Hi All,
 
  Does anybody know if IBM is planning to upgrade their
 famous encryption
  from 56 to 128 bit at least. Not to mention that today on
 market 512 bit
  is not very difficult to find in other softwares.
 
  We lost couple of customers because they requested at least 128 bit
  encryption.
 
  I know that IBM's argument is the effect on the speed of backup,
  but GIVE US A CHANCE to choose and we can decide when to use 56, 128 or
  1024 bit.
 

 IBM also argues, rightfully, that if you need stronger encryption, you'll
 probably need to encrypt the files while they are stored on your disk as
 well. After some thought, I think I'll have to agree.
 Remember that even 56-bit DES currently cannot that easily be cracked by
 anyone who is not in the business of cracking strong encryption for a
 living.


  Joe Crnjanski
  Infinity Network Solutions Inc.
  Phone: 416-235-0931 x26
  Fax: 416-235-0265
  Web:  www.infinitynetwork.com


 --
With kind regards,

 Remco Post

 SARA - Reken- en Netwerkdiensten
 http://www.sara.nl
 High Performance Computing  Tel. +31 20
 592 8008Fax. +31 20 668 3167

 I really didn't foresee the Internet. But then, neither did
 the computer
 industry. Not that that tells us very much of course - the
 computer industry
 didn't even foresee that the century was going to end. --
 Douglas Adams



Re: How to remove a 'Destroyed' volume

2003-11-24 Thread Zlatko Krastev
If this is a primary pool volume, it is *highly* advisable not to do so!!!
By performing delete volume on a primary pool volume you are deleting
*all copies* of the objects stored on the volume - primary, on-site and
off-site copypool.
When restore volume finishes with success, it will delete the volume
automatically. If query content shows some objects still residing on the
volume, the restore was not complete/successful.

While you still have some time (that is, while you still have a copy of the
database taken before the del v was done), you can check what data was
deleted/lost.
Restore the old DB on a test/DR server and look at the q cont x output!!!
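
The checks described above map to commands like these (volume name is a placeholder):

 query content ABC123 count=50
 restore volume ABC123 preview=yes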

Zlatko Krastev
IT Consultant






[EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
19.11.2003 01:22
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: How to remove a 'Destroyed' volume


Hi,
I was just able to delete that bad volume by doing delete volume x
discarddata=yes.
Thank you.

Quoting James Choate [EMAIL PROTECTED]:

 Did you restore the volume by update the volume and setting the
 access=destroyed, and then performing a restore volume?

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, November 18, 2003 1:03 PM
 To: [EMAIL PROTECTED]
 Subject: How to remove a 'Destroyed' volume


 TSM 5.1.6.2 running on AIX 5.1

 Hi,

 I recently had a bad tape.  I was able to restore the volume
successfully
 from
 the offsite-copy tapes.  The bad tape was marked 'Destroyed'
subsequently.
 From what I read, I thought this tape would be removed by TSM after the
 data
 was restored to other volumes, but it did not.  Everyday TSM still trys
to
 access this tape and complaining that the tape was 'Destroyed'.  Do I
need
 to
 delete it manually? And how to do it?  I already tried audit volume
xx
 fix=yes, but it did not work.

 Thank you in advance.



Re: SAP R/3

2003-11-24 Thread Zlatko Krastev
-- I was told that Veritas does this. What would be the benefit of doing
things this way?

Teach yourself to avoid the marketing scrap without being bullied. This
functionality is useful in only one case - when your backup clients (or
their network connectivity) are heavily bottlenecked and tape drive
outperforms them.
I would hardly accept that your R/3 server is a box which is unable to
read from disks faster than a single tape drive!


Example 1: your nodes A, B, C are able to drive their backup streams up to
5 MB/s and the drive is capable of 15 MB/s. By multiplexing these slow
clients you would be able to stream the tape drive at full speed.

Example 2: your nodes A, B, C are able to stream at 20 MB/s (20+20+20=60
MB/s) but you multiplex them again. As result the backup data on tape will
look like ABCACCBAA... and your restore of client A will be read, skip,
skip, read, skip, skip, skip, read, read, ...

Example 3: Your single node A is splitting data in 3 streams A1, A2, A3
which in turn got multiplexed. The result might be again something like
A1A2A3A1A3A3A2A1A1.
But what if your restore becomes read A1, skip, skip, read A1, skip,
skip, skip, read A1, read A1, rewind, skip, read A2, skip, skip, skip,
skip, read A2, ...?!?

Zlatko Krastev
IT Consultant






Gill, Geoffrey L. [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
21.11.2003 16:52
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:SAP R/3


Can those of familiar with the SAP R/3 TDP please answer a question? I
know
that by running multiple sessions we can direct those same sessions each
to
a tape drive. Can the R3 TDP be configured to send multiple sessions to a
single tape drive simultaneously? I was told that Veritas does this. What
would be the benefit of doing things this way?



I was also wondering if someone would be willing to contact me, or I
contact
you, regarding the design of a TSM system that would basically be doing
these same TDP R/3 backups and nothing else. I'm looking for other
opinions
to help in designing this system given the requirements I've been sent. If
you would like to call me my number is below. If you would like me to call
you, you can email me directly, I would be will to do that also.



Thanks for the help,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


Re: Library sharing from a 5.1 AIX 4.3.3.11 TSM server to a 5.2 server on AIX 5.1ML4-ANR9999D smlshare.c

2003-11-17 Thread Zlatko Krastev
You can do it pretty easy with 3494 - just set both servers to use
different categories. Then they will be treated as separate applications
and will only iteract with the library manager.

Zlatko Krastev
IT Consultant






Lisa Laughlin [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.11.2003 20:03
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Library sharing from a 5.1 AIX 4.3.3.11 TSM server to a
5.2 server on AIX 5.1ML4-ANR9999D smlshare.c


Rejean,

Thanks for the info-- it's what I was afraid of.  I'll have to figure out
some other way to get drives to the test server.

Have a nice weekend!

lisa




Rejean Larivee
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11/13/2003 05:05 PM
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject: Re: Library sharing from a 5.1 AIX 4.3.3.11 TSM server to a 5.2 server on AIX 5.1ML4-ANR9999D smlshare.c






Hello,
since you specify that you do library sharing between a TSM 5.1 server and
a TSM 5.2 server then this is a problem.  As the TSM 5.2 server readme
says
:




* Library Sharing and LAN-Free Upgrade Considerations *




Compatibility
-------------
Version 5.2 and above of the Server and Storage Agent are not backwards
compatible with version 5.1 and below of the Server and Storage Agent when
in a Library Sharing or LAN-Free environment.  All Servers and Storage
Agents in a Library Sharing or LAN-Free environment must be upgraded to 5.2
in order to function properly.

So, in other words, you must upgrade the TSM 5.1 server to TSM 5.2 so that
both library client and library manager are at TSM 5.2.
Later,
-
Rejean Larivee
IBM TSM Level 2 Support

...


Re: different backup policy on single node?

2003-11-17 Thread Zlatko Krastev
You can make one manual backup without the quiet option. Then you will see
many messages about files being rebound to the new class. It can also be
seen in the backup session summary in dsmsched.log / dsmaccnt.log / ActLog
as "Total number of objects rebound".

If you prefer to do the backup via the scheduler (or have done it this way) -
DO NOT FORGET to bounce the scheduler!!! Otherwise it will not pick up the
changes in dsm.sys and therefore will not recognize changes to the
include/exclude list!
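
A quick client-side way to check the binding (path is a placeholder): dsmc query backup prints the management class of each backed-up object, and dsmc query inclexcl shows which include rules are in effect.

 dsmc query inclexcl
 dsmc query backup -subdir=yes "/mnt/macclientmount/*"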

Zlatko Krastev
IT Consultant






Alexander Lazarevich [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.11.2003 17:54
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: different backup policy on single node?


Cool, thanks. Made the change, backed up the filespace. But how can I
verify that the include statement has put that filespace into a new
management class? Nothing in the actlog about management class. A 'q file
xxx xxx format=detail' doesn't tell me either.

I could verify by deleting temp files on the filespace and seeing if they
get blown away from the server according to the new management class, but
there's got to be a better way to tell?

Thanks!

Alex

On Fri, 14 Nov 2003, David McClelland wrote:

 Alexander,

 How about using the include exclude list on the Linux client to specify
 a different management class for the filespec in which the OSX clients
 have their filespaces mounted?

 e.g. include /mnt/macclientmount/.../* MAC_MGMTCLASS

 Where MAC_MGMTCLASS as defined on the server might have the policy that
 you wish for your Mac files.

 Rgds,

 David McClelland
 Global Management Systems, Reuters Ltd., London

 -Original Message-
 From: Alexander Lazarevich [mailto:[EMAIL PROTECTED]
 Sent: 14 November 2003 15:04
 To: [EMAIL PROTECTED]
 Subject: different backup policy on single node?


 TSM 5.1 on Windows 2K server with Overland Neo 4100 LTO2. Windows, unix,
 mac clients.

 We nfs mount OS X workspaces onto our Linux fileserver, and back them up
 from there. We do that because, frankly, the TSM OS X scheduler is
 terrible. And since there is no command line for the TSM OS X client, we
 can't run the scheduler on OS X with cron. (what is IBM thinking?)

 Anyway, we now want different policies for the OS X nfs mounts and the
 other filesystems on the linux client. But I don't see any way of
 getting this done in TSM, it just wasn't designed that way.

 But is there any backdoor way to accomplish that? I just need a way to
 have different filespaces on a single client belong to different
 policies?

 Or is there any version of the OS X TSM client that actually can run via
 command line?

 Thanks in advance,

 Alex


 -
 Visit our Internet site at http://www.reuters.com

 Get closer to the financial markets with Reuters Messaging - for more
 information and to register, visit http://www.reuters.com/messaging

 Any views expressed in this message are those of  the  individual
 sender,  except  where  the sender specifically states them to be
 the views of Reuters Ltd.



Re: TDP Policy Change

2003-11-17 Thread Zlatko Krastev
The expiration time and load mainly depend on number of objects and not on
their total size. If you have to expire several thousands Domino databases
it will go pretty quick. But when the time comes to the mountain of
transaction logs of the Domino server, things may get worse.

For that reason TSM is having the CANcel EXPiration command. You can
schedule EXPire INVentory to start say at 15:00 and schedule at 19:00
expiration cancellation. Thus expiration would have only four hours each
day and when everything got expired it should finish before being
cancelled.
Put your own periods and schedules!

Be aware that if your expiration needs outgrow your expiration window, you
will sonner or later run out of scratches!!! Usual expiration should
finish on time and should not need to be cancelled.
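
For example, a pair of administrative schedules along those lines (names and times are placeholders):

 def schedule expire_start type=administrative cmd="expire inventory" active=yes starttime=15:00 period=1 perunits=days
 def schedule expire_stop type=administrative cmd="cancel expiration" active=yes starttime=19:00 period=1 perunits=days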

Zlatko Krastev
IT Consultant






Douglas Currell [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.11.2003 11:00
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TDP Policy Change


Currently, Domino data is being kept longer than is required by our SLA
and TDP is in use. I hope to update the domain and copygroups to reflect
our lesser needs. This would then allow terabytes of data to be expired.
Is it necessary to control the expiration? My fear is that expiration
could carry on for a very long time and have performance implications. Any
ideas on how to approach this would be appreciated.



-
Post your free ad now! Yahoo! Canada Personals


Re: client restore fails trying to replace files that don't exist

2003-11-17 Thread Zlatko Krastev
Just a warning derived from another bad luck story.
We had once a restore attempt for a server with many Microsoft blah-blah
directories under C:\Program Files. Despite the fact it was Windows 2000
Advanced Server with MS SQL 2000, it simply disregarded long names. For
some unknown reasons M$ installer put in the registry the SQL path values
as C:\Progra~1\Micros~3\... ?!?
At restore time TSM was promptly restoring directories Microsoft A,
Microsoft B, Microsoft C, Microsoft D. It happened that in
alphabetical order the Microsoft SQL Server directory got to be fourth,
and was restored with 8.3 equivalent Micros~4. Opening a Command Prompt
window and doing a cd progra~1\micros~3 led us to the Office directory
and SQL Server did not worked. The admin of that box had to reinstall it
to get it working!
So the 8.3 problem is still alive beyond any DOS and Win 3.1/9x/Me. Good
example of design limitation which is carried on for backward
compatibility!

Bottom line: even when everything seems to be restored, you still may have
an inconsistent restore. So test and verify, and again test-test-test and
verify!

Zlatko Krastev
IT Consultant






Roger Deschner [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
12.11.2003 23:29
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: client restore fails trying to replace files that don't 
exist


Hmmm. Very interesting! Thanks, Wanda - this actually gives me
something to go on. All WinME systems use 8.3 filenames deep down
inside, because WinME was still built on an MS/DOS kernel. (Last of that
species!)

This might also explain why a circumvention we tried worked: Restore to
C:\abacadabra didn't work. So I tried a restore to a different location
C:\temp which worked. Then to my surprise, I was able to successfully
rename C:\temp to C:\abacadabra. The reason it worked was that rename
correctly chose a different 8.3 name for C:\abacadabra than for possibly
existing directory C:\abacadabrashazam. Now I need to go check this
client and see if he also has C:\abacadabrashazam on his computer, and
look and see what 8.3 names already exist that might cause a collision.

If this is true, then a full bare metal restore should also work OK.
Then all files are restored at once, and even if the new 8.3 names are
different, at least there will not be collisions because each long name
will map to a unique 8.3 name.

The problem arises when the client does piecemeal restores - and then
goes back in for more. I have always regarded the piecemeal restore
method to be the worst for resynchronization issues, and now I have
another reason to discourage it.

I don't know if this business of collision in the 8.3 namespace will be
it, but this certainly gives me something to go on to try to find it.

Thanks a lot!

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Wed, 12 Nov 2003, Prather, Wanda wrote:

Roger,

I don't know anything about Windows ME.  You may have a bug.

But I have seen something like this on Win2K; the symptom is that you get
prompted for permission to overwrite a file, even when you are restoring
into a previously empty directory.

When I have seen this on Win2K, it has been a problem with 8.3 filenames.
When TSM does restores, it recreates every file, so each long file name gets
a reconstructed 8.3 filename, which may be DIFFERENT from its original 8.3
filename.

If the system has 8.3 filenames turned on, there is a Windows rule that
says
how the 8.3 filename gets constructed.

And, you can have clashes, where two files with different long file names
map back into the same 8.3 filename.  The first one restores OK; the second
one TSM tries to restore results in a prompt to overwrite.  It isn't obvious
what is happening, because TSM gives you the error on the LONG file name,
not the 8.3 one.

It is most likely to happen with long filenames that begin with identical
chars and end in very similar characters, for example:

very.long. my file name is a mess.urk
very.long. my file name is a [EMAIL PROTECTED]
very.long. my file name is a mess.xurk


The fix for this (again on Win2K) is a registry hack that TURNS OFF the
creation of the 8.3 filename - you can find it in Microsoft's knowledge
base
(search on 8.3).

Turn off the 8.3 filenames, finish the restore, then turn them back on
again.
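
For reference, the usual Win2K registry value for this (flip it back to 0 after the restore):

 HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
   NtfsDisable8dot3NameCreation (REG_DWORD) = 1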

There is no TSM fix that prevents this behavior, because there is no
Windows
fix for it.  You can reproduce the same behavior with XCOPY, not just
with
TSM.

Again, this may not be your problem.
But it's something to look at.

Wanda





-Original Message-
From: Roger Deschner [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2003 6:29 PM
To: [EMAIL PROTECTED]
Subject: client restore fails trying to replace files that don't exist


I'm having a client problem that has gotten me stumped. (And that takes
some doing.)

The client node is Windows ME. The hard drive crashed

Re: DB2 backups with multiple DB's on one host

2003-11-17 Thread Zlatko Krastev
Looking at DB2 Administration Guide:
Tivoli Storage Manager Node Name (tsm_nodename)
...
This parameter is used to override the default setting for the node name
associated with the Tivoli Storage Manager (TSM) product. The node name is
needed to allow you to restore a database that was backed up to TSM from
another node.

The default is that you can only restore a database from TSM on the same
node from which you did the backup. It is possible for the tsm_nodename to
be overridden during a backup done through DB2 (for example, with the
BACKUP DATABASE command).

So the parameter is for emergency recovery and is not intended for
day-to-day operations.


What you are trying to accomplish can be done in two ways:
1. Using the include/exclude list, each database can be bound to a different
TSM management class with different destination pools
2. Separate each database into its own DB2 instance and set each instance
owner's DSMI_CONFIG environment variable accordingly. Only this way can you
achieve your two-TSM-nodes goal (see the sketch below).
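
A hypothetical sketch of option 2 (paths, stanza and node names are invented): each instance owner's profile points DSMI_CONFIG at its own dsm.opt, and each dsm.opt selects a different server stanza / node name.

 # ~db2inst1/.profile
 export DSMI_CONFIG=/home/db2inst1/tsm/dsm.opt   # dsm.opt: SERVERNAME tsm1_tivanai
 # ~db2inst2/.profile
 export DSMI_CONFIG=/home/db2inst2/tsm/dsm.opt   # dsm.opt: SERVERNAME tsm1_tivassi

The matching stanzas then carry NODENAME TIVANAI and NODENAME TIVASSI with PASSWORDACCESS GENERATE, and the stored API password is set per instance with the dsmapipw utility shipped with DB2.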

Do not forget that the database configuration in DB2 is used only for
full/incremental backups. How transaction logs are handled depends on the
user exit. You can find in the same DB2 Guide:
"Only one user exit program can be invoked within a database manager
instance. ..."

Discuss the options with your DB2 DBA to select the best fit.

Zlatko Krastev
IT Consultant






French, Michael [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.11.2003 02:05
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:DB2 backups with multiple DB's on one host


I have a customer who has two DB2 databases on one server and I
need to back them up with TSM.  Is it possible to do this with two
separate node names to keep them apart?  If so, how would I do this? Since
you have to define 3 environment variables specific to DB2 for TSM and one
of them points to the dsm.opt file, can I specify two different node
entries inside of the opt file?  Would it look something like this:

SERVERNAME TSM1
COMMMETHOD tcpip
TCPBUFFSIZE 512
TCPWINDOWSIZE 128
TCPNODELAY yes
TCPSERVERADDRESS 10.82.96.21
NODENAME TIVANAI
PASSWORDACCESS generate

SERVERNAME TSM1
COMMMETHOD tcpip
TCPBUFFSIZE 512
TCPWINDOWSIZE 128
TCPNODELAY yes
TCPSERVERADDRESS 10.82.96.21
NODENAME TIVASSI
PASSWORDACCESS generate

If I do this, how do I specify inside of DB2 which node to use?  I
know that there is a parameter called TSM_NODENAME that is set inside DB2,
but I don't know how to generate the encrypted password since it only
grabs the first entry out of the opt file.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile


Re: Granularity of server to server

2003-11-10 Thread Zlatko Krastev
As I cannot see an answer to this, I will give it a shot.

If you are speaking about virtual volumes for primary stgpools: yes, they
are sequential volumes just like tapes. If migration is from a diskpool to
the vvpool, many migration processes can run simultaneously. If migration is
from a tape pool - only one process per pool.

I think you are asking about copypools. In that case you can utilize the
MAXPRocess parameter of the ba stg command, and establish several sessions
with the DR site.
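
For instance (pool names and process count are placeholders):

 backup stgpool tapepool offsite_copypool maxprocess=4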

Zlatko Krastev
IT Consultant






Steve Harris [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
23.09.2003 07:14
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Granularity of server to server


Hi List. 

Heaven preserve us all from penny-pinching managers.

I've designed a really neat TSM configuration, with ½ my diskpools at my 
prime site, ½ at second site.  Primary tape is a local 3494 copypools are 
on the offsite 3494.  All this is tied together by logical SAN and LAN 
links that encompass both datacentres.

The plan was for a second fibre to be installed between the sites by a 
diverse route.  Problem is we have the money to purchase the second link, 
but not to run it. Ah the joys of working for government.

So, I've been asked to look at a config that uses server to server instead 
of the SAN to tie the two sites together.

I know that with normal data, migrations are done by node, largest node 
first, so that it all your data is from one node you can only use one tape 
drive to migrate.  Is this the same with server to server?

i.e. if my offsite TSM is only a receiver for copypool data from the 
primary TSM, will I be restricted to one migration process?  What if there 
is more than one offsite pool defined on the offsite TSM? 

Any other gotchas?

Would I be better off to split the workload between the two instances and 
have them back-up each other?

Thanks

Steve.


Steve Harris
TSM Administrator 
Queensland Health, Brisbane Australia 

 

 




***
This email, including any attachments sent with it, is confidential and 
for the sole use of the intended recipients(s).  This confidentiality is 
not waived or lost, if you receive it and you are not the intended 
recipient(s), or if it is transmitted/received in error.

Any unauthorised use, alteration, disclosure, distribution or review of 
this email is prohibited.  It may be subject to a statutory duty of 
confidentiality if it relates to health service matters.

If you are not the intended recipients(s), or if you have received this 
e-mail in error, you are asked to immediately notify the sender by 
telephone or by return e-mail.  You should also delete this e-mail message 
and destroy any hard copies produced.
***


Re: TSM and multiple libraries

2003-11-10 Thread Zlatko Krastev
Bear in mind that in such a scenario the 3592 writing speed cannot be higher
than the reading rate of the 3590 drives.
For better performance you can define a separate copypool over a devclass of
the 3592s and read/write at higher speed. You can do this at least for onsite
copypools if your library at the DR site has only 3590 drives.
Copying between 3592 drives can run at 3x 40 MB/s = 432 GB/hour (3 drives
reading, 3 writing), while copying from 3592 to 3590 is limited to 6x 14 MB/s
= 302.4 GB/hour (6 3592 drives reading, 6 3590 drives writing + 2 idle 3590).
Even if you add 2 more 3592 drives later you can reach 8x 14 MB/s = 403.2
GB/hour (but in that case you could have 4x 40 MB/s = 576 GB/hour).

Bottom line: put your priority systems into the 3494 with Jaguar drives and
leave the less important ones in the 3494/3590.
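
The library choice falls out of the device class behind each pool; a sketch with invented names (the 3592 device type requires a server level that supports the new drives):

 def devclass class_3592 devtype=3592 format=drive library=lib3592
 def stgpool copypool02 class_3592 pooltype=copy maxscratch=200
 backup stgpool jxn_tapepool copypool02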

Zlatko Krastev
IT Consultant






Mike Cantrell [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10.11.2003 17:02
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM and multiple libraries


Hello
I have a question concerning using a single TSM Server and two libraries.
1. Single TSM Server
2. first 3494 with 8 3500E drives
3. second 3494 with 6 of the new jaguar drives
4 TSM Server will see both 3494's
5. first 3494 will be used for backups
6 second 3494 will be used to do duplexing only
Can we actually back up to one 3494 and then duplex to another 3494
library?
I do not believe we can assign a copypool a particular tape range. Since
the command 'ba stg jxn_tapepool copypool02' will back up the tapepool to a
copypool, how can you tell TSM which library to use??


Thanks
Mike


Re: Idle Sessions

2003-11-10 Thread Zlatko Krastev
A similar problem was submitted to this list in the past but I cannot
recall for which version. As you are running base, unpatched v4.2.0, the
chances that this is fixed after an upgrade to the latest v4.2 maintenance level (if
you cannot afford an upgrade to v5.x) are high.
I have no time to dig through the list archives or the APAR database but you can. This
would reveal to you at which level the problem appeared and probably also
at which level it was resolved.

Zlatko Krastev
IT Consultant






Nazeer Parak [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10.11.2003 11:25
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Idle Sessions


Hi

i wonder if anyone could possibly be of assistance to me regarding the
following:

firstly i am running TSM 4.2.0 on OS 390
The problem I am having, only recently, is that every time a client node
polls the server, the polled session remains as an idle session
indefinitely.
This results in pages of idle sessions at any given time. Eventually
actual backup sessions are refused, server response becomes incredibly
slow, and at times TSM even abends.
The only way to get rid of the idle poll sessions is to cancel them
manually with a 'can se' or cycle the server when it becomes too
deadlocked. This is a new problem as I have never had it before; in fact,
previously client polls happened so fast you didn't even see the session (the
way it should be).

If anyone has had something like this, or if anyone knows of anything I may
be able to do to fix this problem, I would greatly appreciate it.
Also, if you do reply to this, please could you also copy your reply to me
personally off list at [EMAIL PROTECTED], as I may not receive it
otherwise.

thanks in advance.


Nazeer Parak
Systems Engineer
Arivia.kom

* Phone:   203-6062
* Cell:   084-447-7771
 E-Mail:   mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]


NOTICE: Please note that this eMail, and the contents thereof, is subject
to the standard arivia.kom email disclaimer which may be found at:  
http://www.arivia.co.za/disclaimer.htm.  If you cannot access the disclaimer through 
the URL attached, and you
wish to receive a copy thereof, please send an eMail to
[EMAIL PROTECTED] or call (011) 233-0800. You will receive the
disclaimer by return email or fax.


Re: Option file

2003-11-10 Thread Zlatko Krastev
This usually goes on the DON'T DO list!!! The standard recommendation is to
use a separate node name for the B/A client and for each TDP.
It is fine for the links to point to the same dsm.sys file, allowing a single point of
control. But that file should then contain more than one server stanza, with a
different node name in each, plus separate dsm.opt files.
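
For illustration, a minimal dsm.sys with two stanzas (server names, node names and the address are made up):

 SErvername  tsm_ba
    COMMMethod         TCPip
    TCPServeraddress   tsmserver.example.com
    NODename           mynode
    PASSWORDAccess     generate

 SErvername  tsm_oracle
    COMMMethod         TCPip
    TCPServeraddress   tsmserver.example.com
    NODename           mynode_oracle
    PASSWORDAccess     generate

The B/A client's dsm.opt would then contain 'SErvername tsm_ba' and the TDP's dsm.opt 'SErvername tsm_oracle'.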

Zlatko Krastev
IT Consultant






Nazeer Parak [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.09.2003 07:07
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Option file


Hello Anyone,

Has anyone used TSM to back up the filesystems, the R/3 and BW databases, and the redo
log files?

Can you tell me how the option files can be shared between the backup/archive client and TDP
for mySAP.com, with the PASSWORDACCESS GENERATE parameter, so that scheduled
filesystem backups can run without a password?


eg.

/usr/tivoli/tsm/client/ba/bin
lrwxrwxrwx   1 root system   32 Sep 11 10:46 inclexcl.list -
/local/etc/tsm/bin/inclexcl.list
lrwxrwxrwx   1 root system   26 Sep 11 09:27 dsm.opt -
/local/etc/tsm/bin/dsm.opt
lrwxrwxrwx   1 root system   26 Sep 11 09:27 dsm.sys -
/local/etc/tsm/bin/dsm.sys

/usr/tivoli/tsm/tdp_r3/ora64
lrwxrwxrwx   1 root system   26 Sep 04 17:38 dsm.opt -
/local/etc/tsm/bin/dsm.opt
lrwxrwxrwx   1 root system   26 Sep 04 17:38 dsm.sys -
/local/etc/tsm/bin/dsm.sys

/usr/tivoli/tsm/client/api/bin64/
lrwxrwxrwx   1 root system   26 Sep 11 09:40 dsm.sys -
/local/etc/tsm/bin/dsm.sys
lrwxrwxrwx   1 root system   26 Sep 11 09:40 dsm.opt -
/local/etc/tsm/bin/dsm.opt



Regards,
zaini


Re: TSM Client Query

2003-11-10 Thread Zlatko Krastev
This has been asked several times but the answer is still mostly negative
- you have information about what resides on those volumes in the CONTENTS
table, but the information about the date/time of the data is held in the BACKUPS
and ARCHIVES tables. AFAIK the TSM server does not provide a method to build a
relation between them.
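
For the volume side of it, something like the following can be run per received volume (the volume name is a placeholder); as noted, the CONTENTS table will not give you the backup dates:

 select node_name, filespace_name, file_name from contents where volume_name='VOL001'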

Zlatko Krastev
IT Consultant






Gerald Wichmann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
13.09.2003 01:10
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM Client Query


We frequently get tapes from other sources and are expected to restore
them.
The problem I'm finding is the original environment sends me their TSM
database and it may have restores for a given server (say SERVERA) going
back a year (so lets say 365 backups that the TSM DB is aware of). Now
they
only send me 200 tapes and I'm faced with figuring out which of those
backups that the DB knows about for SERVERA do I *actually have*? I
realize
this is probably a perfect candidate for a select query or macro where I
pass in a list of volumes and it spits out what backup dates it knows
about.
Something I'll have to figure out how to determine.

Similarly I'm curious though whether this is doable with Agent backups
(specifically Exchange)..

Anyone develop something like this so I don't have to reinvent the wheel?
Any thoughts? Appreciate the help lately.

Thanks!
Gerald



This e-mail has been captured and archived by the ZANTAZ Digital Safe(tm)
service.  For more information, visit us at www.zantaz.com.
IMPORTANT: This electronic mail message is intended only for the use of
the
individual or entity to which it is addressed and may contain information
that is privileged, confidential or exempt from disclosure under
applicable
law.  If the reader of this message is not the intended recipient, or the
employee or agent responsible for delivering this message to the intended
recipient, you are hereby notified that any dissemination, distribution or
copying of this communication is strictly prohibited.  If you have
received
this communication in error, please notify the sender immediately by
telephone or directly reply to the original message(s) sent.  Thank you.


Re: Migrate disk pool with disk failure

2003-11-09 Thread Zlatko Krastev
It is indeed release-independent :-) But the bad news is that migration
will still start based on 90% of 90 GB :-/
This was once reported to the list - some diskpool volumes become
read-only, 'q stg' shows free space in the pool, but backups go to the next
storage pool. You may get an error, or the backup may go to the next (tape) pool,
depending on the MAXNUMMP setting of the node (if maxnummp=0, an error will be reported).
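
A quick way to spot that situation (the pool name DISKPOOL is a placeholder) is to look for diskpool volumes that are no longer read-write:

 select volume_name, access, pct_utilized from volumes where stgpool_name='DISKPOOL' and access<>'READWRITE'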

Zlatko Krastev
IT Consultant






Richard Foster [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
26.09.2003 17:27
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Migrate disk pool with disk failure


Hello list

A planning question:

I have a disk storage pool of 3 equal-sized volumes of 30 GB each and
highmig=90. How does migration get handled if a disk gets an I/O error and
is set to readonly while the pool is being filled overnight?

With 3 disks, migration starts at 90% of 90 GB, all fine and dandy. With
one disk read-only, you've only got 60 GB to play with.

Does migration now kick in at 90% of 60 GB?

And if not, what happens when you've filled the good disks (approx 60 GB,
depending on how much went on the error disk before the problem)? Does it
go over to tape automatically, or do you get an error?

Our system is TSM server 5.1.6.4 on AIX 4.3.3, but I hope the answer is
release-indepedent.

Regards
Richard Foster
Norsk Hydro asa




***
NOTICE: This e-mail transmission, and any documents, files or previous
e-mail messages attached to it, may contain confidential or privileged
information. If you are not the intended recipient, or a person
responsible for delivering it to the intended recipient, you are
hereby notified that any disclosure, copying, distribution or use of
any of the information contained in or attached to this message is
STRICTLY PROHIBITED. If you have received this transmission in error,
please immediately notify the sender and delete the e-mail and attached
documents. Thank you.
***


Re: device class mount point mixed library (DLT7000 and SDLT320)

2003-11-08 Thread Zlatko Krastev
I am afraid you might have an unsupported configuration.
Looking to ITSM v5.2 Administrator's Guide, Chapter 5 Configuring Storage
Devices, section Mixing Device Types in Libraries:
While the Tivoli Storage Manager server now allows mixed device types in
a library, the mixing of different generations of the same type of drive
is still not supported.
...
Mixing generations of the same type of drive and media technology is
generally not supported in a Tivoli Storage Manager library.
...
If you need to transition from an older generation to a newer generation
of media technology, consider the following guidelines:
- Upgrade all drives in a library to the new generation of media
technology.

For me the problem is that both 'def devclass' statements use the same
devtype=dlt!!! Thus the server does not distinguish between the drives. If you
look at the example in the manual, it is not DLT+SDLT but DLT+LTO, and
the difference is made by the devtype parameter of the device classes.

-- IBM said that the mixed drives are supported in the same library.

But not all mixtures are supported. So either you or IBM has to check
whether your configuration is supported or not!
Hope I am wrong.
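
For comparison, the supported example in the manual distinguishes the device classes by devtype, roughly like this (names and formats are only illustrative):

 def devclass dltclass devtype=dlt format=dlt35c library=mixlib
 def devclass ltoclass devtype=lto format=drive library=mixlib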

Zlatko Krastev
IT Consultant






Kurt Beyers [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.11.2003 20:54
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:device class mount point mixed library (DLT7000 and SDLT320)


Hello,

I've posted a question a few weeks ago about DLT7000 drives and SDLT320
drives in the same library (Storagetek L180). IBM said that the mixed
drives are supported in the same library. The TSM server is 5.2.1.2 on
Windows2000.

I've defined today the library as follows:

define library mixlib libtype=scsi
define path server01 mixlib srctype=server desttype=library
device=lb3.0.0.0

define drive mixlib dlt_mt4
define drive mixlib sdlt_mt5
define path server01 dlt_mt4 srctype=server desttype=drive library=mixlib
device=mt4.0.0.0
define path server01 sdlt_mt5 srctype=server desttype=drive library=mixlib
device=mt5.0.0.0
define devclass dlt_class devtype=dlt format=dlt35c library=mixlib
define devclass sdlt_class devtype=dlt format=sdlt320c library=mixlib
define stgpool dlt_pool dlt_class maxscratch=20 (was already defined of
course)
define stgpool sdlt_pool sdlt_class maxscratch=20


The first series of tests went fine: labelling of SDLT tapes and backup of
the TSM database to a DLT7000 tape.

I then started the migration from the primary DLT volumes to the SDLT
primary pool. This confuses me from time to time. The following situations
happened:

1. SDLT volume mounted in an SDLT drive, and a bit later the move failed with
the warning ANR1145W (not enough mount points available). All drives and
paths were online. A 'query mount' didn't show any other mount requests.

2. SDLT volume mounted in an SDLT drive, DLT tape in the other SDLT drive, and
the 'move data' is successful.

3. SDLT volume in an SDLT drive and DLT tape in a DLT7000 drive (that is what I
would expect). The 'move data' is successful.

If a new 'move data' is started, you typically see the message 'waiting
for a mount point in device class' sdlt_class or dlt_class. But how does
TSM know which drive belongs to which device class? Where is this link
defined (through the format dlt35c / sdlt320c in the device class?)? Can
you check which drives belong to a specific device class according to
TSM?

And why is a DLT tape mounted in a SDLT drive if the DLT drive is ready to
be used as well?

Any input that makes it clearer to me how TSM chooses a drive is
more than welcome. Thanks in advance,

Kurt


Re: TSM-client under FreeBSD?

2003-11-08 Thread Zlatko Krastev
Because you are not managing packages using rpm, it cannot find the
prerequisites. Therefore you have to check yourself whether the necessary
libraries are installed, and ask rpm to skip checking for them:
rpm -i --nodeps TIVsm-API.i386.rpm
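
As a sanity check before forcing the install, the shared objects rpm complains about should be visible under the Linux compatibility tree (the /compat/linux path assumes the stock linux_base layout and may differ on your system):

 ls /compat/linux/lib/ld-linux.so.2 /compat/linux/lib/libc.so.6 /compat/linux/lib/libstdc++*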

Zlatko Krastev
IT Consultant






Ewald Jenisch [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07.11.2003 15:12
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM-client under FreeBSD?


Hi,

I'd like to use the 5.x TSM client for Linux in order to back up a
FreeBSD system. FreeBSD comes with Linux compatibility, so the binaries
should be compatible/runnable.

When trying to install the required .rpms (TIVsm-API.i386.rpm,
TIVsm-BA.i386.rpm) I end up with:

bash-2.05b# rpm -i TIVsm-API.i386.rpm
error: failed dependencies:
/bin/sh   is needed by TIVsm-API-5.2.0-0
ld-linux.so.2 is needed by TIVsm-API-5.2.0-0
libcrypt.so.1 is needed by TIVsm-API-5.2.0-0
libcrypt.so.1(GLIBC_2.0) is needed by TIVsm-API-5.2.0-0
libc.so.6 is needed by TIVsm-API-5.2.0-0
libc.so.6(GLIBC_2.0) is needed by TIVsm-API-5.2.0-0
libc.so.6(GLIBC_2.1) is needed by TIVsm-API-5.2.0-0
libc.so.6(GLIBC_2.2) is needed by TIVsm-API-5.2.0-0
libdl.so.2 is needed by TIVsm-API-5.2.0-0
libdl.so.2(GLIBC_2.0) is needed by TIVsm-API-5.2.0-0
libdl.so.2(GLIBC_2.1) is needed by TIVsm-API-5.2.0-0
libm.so.6 is needed by TIVsm-API-5.2.0-0
libm.so.6(GLIBC_2.0) is needed by TIVsm-API-5.2.0-0
libpthread.so.0 is needed by TIVsm-API-5.2.0-0
libpthread.so.0(GLIBC_2.0) is needed by TIVsm-API-5.2.0-0
libpthread.so.0(GLIBC_2.1) is needed by TIVsm-API-5.2.0-0
libpthread.so.0(GLIBC_2.2) is needed by TIVsm-API-5.2.0-0
libstdc++-libc6.2-2.so.3 is needed by TIVsm-API-5.2.0-0
bash-2.05b#

So there are some libs missing on the target (FreeBSD) system.


Has anybody out there got the Linux-TSM client running under FreeBSD?

Thanks much in advance for any clue,
-ewald


Re: TDP for Oracle Errors

2003-11-08 Thread Zlatko Krastev
We are accustomed to accepting that Microsoft has behavior in some
products which Tivoli cannot overcome. But not only M$ does so; in
this case we are seeing similar Oracle behavior.
The current MML request is delete that object, and I will ignore the error if
it does not exist. If the API allowed a call like if this
object exists, delete it, then TDPO would be able to query the TSM server
without producing a severe error.
Do not blame IBM/Tivoli for something Oracle must fix.

Zlatko Krastev
IT Consultant






Neil Rasmussen [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.11.2003 00:29
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: TDP for Oracle Errors


Eric,

Oracle 9i introduced the concept of autobackups for the control file.
During the autobackup process, Oracle dynamically generates the
backuppiece name for the control files that are being saved.  During this
backup processing, a unique name is generated by Oracle prior to backing
it up and the TSM Server is then checked to ensure that this backuppiece
name does not exist.

When performing this check for any existing objects that might have this
name, Oracle will first try to delete this file regardless of whether it
exists or not. The return code from the deletion process not finding the
object on the TSM Server is the ANU2602E message.

During the autobackup processing, Oracle calls the Media Management Layer
(Data Protection for Oracle/TSM Server in this case).   Oracle issues the
command to attempt a deletion prior to autobackup of the control file.
Because each MML operation is a unique and distinct session, the MML has
to treat each delete the same. In other words, Oracle gives no hints as to
the type of deletion being performed, therefore Data Protection for Oracle
just attempts the delete.

From Data Protection for Oracle's point of view a file not found during a
delete could be a potentially serious error. For example:
- During the delete the user is actually using a TSM filespace name that
is different than what the objects were originally backed up under.
- The TSM node does not have the backup delete permissions
(backdel=yes|no).
-  During the deletion a different TDPO_NODE is specified than what was
used during backup.
The reason these are potentially dangerous is that when Data
Protection for Oracle detects an error during a delete (a file not
found or that cannot be deleted), Oracle specifies that the MML must not return
an error to Oracle. This creates a situation where, during a normal deletion,
Oracle would remove the backuppiece name from its catalog but the backup
could still exist on the TSM Server, thereby using up unnecessary storage
space.

Unfortunately, unless we ignore this error altogether, there is
nothing that we (development) can do. In the meantime, the only tips we
can give to work around these error messages during autobackups are the following:
use the DISABLE EVENTS command on the TSM Server
to suppress the ANU0599 ANU2602E messages so that there is no
notification when a file is not found during DP for Oracle
processing. For the tdpoerror.log, a different DSMI_LOG could be specified
(in the tdpo.opt file) for the autobackup so that the error log goes to
a different directory - in this way these logs can be monitored
separately.
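
A sketch of those two workarounds (the directory is an example; check the Administrator's Reference for the exact message-number format DISABLE EVENTS accepts):

On the TSM server:
   disable events actlog ANE4994
In the tdpo.opt used for the autobackup:
   DSMI_LOG   /oracle/tdpo/autobackup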


--

Date:Tue, 4 Nov 2003 16:28:55 +0100
From:Loon, E.J. van - SPLXM [EMAIL PROTECTED]
Subject: TDP for Oracle Errors

Hi *SM-ers!
Our Oracle guys and girls are currently testing DP for Oracle 5.2 on
Oracle
9i. They turned on a new RMAN feature for automated controlfile backup
(for
those of you interested, the command is 'configure controlfile autobackup
on;') Now, whenever a controlfile backup is running it generates the
following error in both the tdpoerror.log and the actlog:

ANE4994S (Session: 1437562, Node: KL1003VC-ORC)  TDP Oracle AIX ANU0599
ANU2602E The object /mount/appl1//c-213141136-20031104-0e was not
found
on the TSM Server

We had to make a trace to see that this error is generated because TSM
first
checks whether the object exists on the server before allocating it.
I created a PMR for this behavior and level 1 now says: works as designed,
just suppress ANE4994S from the actlog (impossible since ANE4994S is a
generic message) and build something yourself to clear your tdpoerror.log
periodically.
I find this a VERY disappointing reply from IBM
Has anybody encountered this behavior to and maybe found a better way to
solve these errors?
Maybe someone from development listening on this list has a good
suggestion?
I'm really stuck here with the Dutch level 1 support... :-((
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


Regards,

Neil Rasmussen
Software Development
Data Protection for Oracle
[EMAIL PROTECTED]


Re: Patch or upgrade

2003-11-08 Thread Zlatko Krastev
You may also search IBM site (and the list archives) about the problem
with MaximumSGList setting!

Zlatko Krastev
IT Consultant






hassan MOURTADI [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
21.10.2003 22:26
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Patch or upgrade


Hi,

I am in 4.2.1.9  / NT 4.0. I have a problem in restoring SAP DATA after
backing up using LAN-FREE (storage Agent on NT).
To resolve this, i think i have to apply patch 9 and above or to upgrade
to
5.1.

Any Idea??

Thanks


Re: Moving archives between policy domains

2003-11-08 Thread Zlatko Krastev
The TSM server should use the settings of the archive copygroup associated
with the management class of the same name in the new policy domain.
You should investigate why it shows unknown. Output from your queries
may shed some light on what is going on.
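
One way to see what the server actually has bound for the node after the move (the node name NODEA is a placeholder):

 select filespace_name, class_name, archive_date, ll_name from archives where node_name='NODEA'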

Zlatko Krastev
IT Consultant






Markus Engelhard [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
13.10.2003 17:21
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Moving archives between policy domains


Hi *sm ers,

I'm presently trying to consolidate 20-odd distributed TSM servers that
were individually managed and have very different logical setups. My aim is
to have identical policies across the country, so I thought having
standardized policy domains would be a good idea.
It's quite straightforward for backups, so I'm not so worried about those.
But what !really! happens when a node with archived data is moved
to a new policy domain isn't quite clear to me.
I used a TSM 5.1.5 server on W2K and a 5.1 client: an archived file will
show unknown for the management class after an update node moving it
from one policy domain to another, although the management class with the
matching archive copy group was defined in both domains. The default
management class doesn't apply either. So it would only be the grace period
that stops data expiring with the next expire inventory, but even that
doesn't seem to apply: I changed the grace period to one day and did an expire,
but the archive is still there.
Does anyone have a comprehensive description of what happens when moving
archives this way, or has anyone even gone through this himself? I'm not quite sure
I understand the implications described in the QuickFacts, and IBM support
didn't manage to give a convincing explanation yet.
I can't afford to lose archive data (a real no-go!) and I hate not knowing
what is up, so any help will be welcome.

Thanks,

Markus


Re: Choosing volumes for DR restores

2003-11-08 Thread Zlatko Krastev
The server searches through *available* (having status read-only or
read-write, and checked in a library) copypool volumes in alphabetical
order. You can also override which copy to be used by specifying restore
stg primary_pool copy=desired_copypool.
If one of the copies is not checked in or the volume is not in readable
status, there is no choice and server attempts restore from the other
copy.
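
For example, with placeholder pool names, previewing first and then forcing the restore to come from a particular copy pool:

 restore stgpool tapepool copystgpool=offsite_copypool preview=yes
 restore stgpool tapepool copystgpool=offsite_copypool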

Zlatko Krastev
IT Consultant






Thomas Denier [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
23.10.2003 22:08
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Choosing volumes for DR restores


We are in the process of reorganizing some of our storage pools. We
are using 'move nodedata' to move primary storage pool contents to
new storage pools. We are running 'backup stgpool' commands to create
new offsite copies in corresponding new copy storage pools. We are using
'delete volume' and 'backup stgpool' to clean up the copies left behind
in pre-existing copy storage pools. It is not clear that we will be able
to finish the copy storage pool cleanup before our next disaster recovery
test. If we don't finish by then our reconstructed server will have
access to two copies of some client backup files. How does TSM decide
which copy to use in this situation?


Re: TSM Client 5.2 is compatible with TSM Server 4.2 ?

2003-11-08 Thread Zlatko Krastev
Q1: Compatibility between ITSM client v5.2 and TSM server v4.2
A1: Officially unsupported. It will probably work, but at your own risk.
Comment1: First of all, the v4.2.2.7 server is very old, out of support, and
may have some problems with System Objects (if you have Windows nodes).
Staying at AIX 4.3.3 you can still upgrade to an ITSM v5.1.x server!

Q2: Can the v4.2 client work on AIX 5.1?
A2: Yes, it can, in both 32-bit and 64-bit mode.
Comment2: You can install the ITSM client v5.1.x on AIX 5 and this would be a
supported configuration with a v4.2 server. It would also allow you to stay in a
supported configuration if the server is later upgraded to ITSM v5.1 or v5.2.

Zlatko Krastev
IT Consultant






Juan Jose Reale [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09.10.2003 18:17
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM Client 5.2 is compatible with TSM Server 4.2 ?


   Hi everyone!

I will install the TSM Client 5.2 on the AIX 5.1 operating system, so I
am unsure about the compatibility between TSM Client 5.2 and TSM Server
4.2.2.7.   That is:  can TSM Client 5.2 work with TSM Server 4.2.2.7?


Right now I am running TSM Server 4.2.2.7 on another server with AIX 4.3.

   Another question:  can TSM Client 4.2 run on the AIX 5.1 operating
system?

   I look forward to your reply.
   Thank you.

   Juan Reale
   Data Center/AES.


This email has been scanned for all viruses by the MessageLabs service.


Re: device class mount point mixed library (DLT7000 and SDLT320)

2003-11-08 Thread Zlatko Krastev
So it seems that all three drives are acceptable for DLT operations.
Observing that TSM usually picks drives in alphabetical order and later
does some sort of round-robin, I would name them so that the DLT drive
precedes the SDLTs. This ought to give it slightly better priority for DLT
operations, but I would still expect the round-robin to mount often in the SDLT
drives :-()

Regarding the use of a drive for specific media, I would expect TSM to somehow
query the library about the media technology of the cartridge. A library
able to work with many types ought to be able to recognise them and
respond if properly queried. So the order ought to be:
- TSM receives checkin request
- library is queried about the media technology of the cartridge
- TSM selects an available drive compatible with the cartridge
- mount request is sent to the library

Zlatko Krastev
IT Consultant






Kurt Beyers [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08.11.2003 19:32
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: device class mount point mixed library (DLT7000 and 
SDLT320)


IBM guaranteed me that a mixture of DLT7000 and SDLT is supported in the
same library. I've given them the same remarks as you made during the
discussion.

It is already a bit clearer to me, but not completely.

If you do a 'query drive f=d', you see all the formats that the drive can
read and can write to. In the device class definition, the format is
specified explicitly this time (DLT35C or SDLT320C). The storage pools
have
a specific device class, so TSM can determine which drive can be used for
a
read or write operation. This explains why the SDLT320 drives are used as
well for the read operation of a DLT7000 tape. TSM doesn't have any
preference for the drive to be used; any drive that can read the tape has
the same chance of being used.

But why I get the warning ANR1145W from time to time is still not
clear to me.

And if you label a new tape and check it in as scratch, how does TSM know
which drive should be used? The tape doesn't belong to a storage pool yet.

Kurt

- Original Message -
From: Zlatko Krastev [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, November 08, 2003 5:26 PM
Subject: Re: device class mount point mixed library (DLT7000 and SDLT320)


 I am afraid you might have an unsupported configuration.
 Looking to ITSM v5.2 Administrator's Guide, Chapter 5 Configuring
Storage
 Devices, section Mixing Device Types in Libraries:
 While the Tivoli Storage Manager server now allows mixed device types
in
 a library, the mixing of different generations of the same type of drive
 is still not supported.
 ...
 Mixing generations of the same type of drive and media technology is
 generally not supported in a Tivoli Storage Manager library.
 ...
 If you need to transition from an older generation to a newer generation
 of media technology, consider the following guidelines:
 - Upgrade all drives in a library to the new generation of media
 technology.

 For me the problem is that both def dev statements are using same
 devtype=dlt!!! Thus the server does not distinguish between drives. If
you
 look in the example, it is not with DLT+SDLT but with DLT+LTO, and
 difference is made by devtype parameter of the drives.

 -- IBM said that the mixed drives are supported in the same library.

 But not all mixtures are supported. So either you or IBM had to check
will
 be your configuration supported or not!
 Hope I am wrong.

 Zlatko Krastev
 IT Consultant



Re: Controlling nodes - storage pool migrations

2003-11-08 Thread Zlatko Krastev
There is no method to convince the TSM server to skip the biggest node
(AFAIK). But the workaround is quite straightforward - define a separate
stgpool for that node and use the existing stgpool for the rest. Also, the
node with 200 GB might be a very good candidate for direct-to-tape
backups/restores. Putting it into a separate stgpool will ensure manual
collocation and good restore times.
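
A rough sketch of that workaround - a separate policy domain sending the big node straight to its own collocated tape pool (all names and the device class are hypothetical, and retention values should match your own policy):

 def stgpool bignode_tape 3590class maxscratch=60 collocate=yes
 def domain bignode_dom
 def policyset bignode_dom standard
 def mgmtclass bignode_dom standard standard
 def copygroup bignode_dom standard standard type=backup destination=bignode_tape verexists=8 retextra=180
 assign defmgmtclass bignode_dom standard standard
 activate policyset bignode_dom standard
 upd node bignode domain=bignode_dom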

-- I believe will check to see if ALL the files for a particular node have
not been touched in 30 days before migrating ALL of the node's files.

Incorrect. The files which were backed up more than 30 days ago will be
migrated, while the recently backed-up files will stay in the stgpool. If
you look at the description of the 'def stg' command, you will see that the
MIGContinue parameter is honoured when the storage pool fills with files
newer than the MIGDelay period!

As not all your assumptions are correct, and there is more than one way to
skin the cat, you can avoid adding dasd endlessly.

Zlatko Krastev
IT Consultant






Roy P Costa [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07.10.2003 17:10
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Controlling nodes - storage pool migrations


We have storage groups set up for various parts of our business, primary
pools being to dasd and migrating to tape in an automated Tape Library
(ATL).  Most of the nodes in the different storage groups are file
servers,
some holding many gigabytes of data.  As I understand from the TSM/ADSM
documentation and experience, when a dasd storage group meets the
migration
criteria (High Mig), TSM looks for the node that has the MOST data in the
storage pool and then proceeds to migrate ALL of that node's filespaces
from dasd to tape before it checks to see if the migration low threshold
has been met.  We currently have the situation that the node with the most
data has over 200G of data on dasd, causing us to run out of tapes in the
ATL (and bringing down the dasd usage to well below the low threshold).
I've tried to make the Migration Delay = 30 days, which I believe will
check to see if ALL the files for a particular node have not been touched
in 30 days before migrating ALL of the node's files.  If this is true,
then
these fileservers will never have all of their files 30 days old, since
files are updated regularly and I would need to have Migration Continue =
yes to avoid the dasd storage pools from filling totally.
If my assumptions are correct, throwing more dasd at the dasd storage
pools
will only make these migrations even larger.
Is there a way to tell TSM to not migrate the node with the MOST data or
some workaround that gives us better control of these migrations?  I'm
willing to explore and test any suggestions that you may have.


Roy Costa
IBM International Technical Support Organization


Re: For a W2K TSM 5.1.6.3 server program reload: Why won't TSM server restart as before?

2003-11-08 Thread Zlatko Krastev
After reinstalling the OS and ITSM binaries you will need two registry
keys:
HKLM\Software\IBM\ADSM\CurrentVersion\Server\Server1
HKLM\System\CurrentControlSet\Services\TSM Server1

Probably the easiest way would be:
- rename the ..\server1 directory
- create a new instance using the same directory name, without overwriting
the DB, log and stgpool files
- stop the blank instance and delete the newly created directory
- rename the old ..\server1 back
(keep in mind I have not tested it this way)

Zlatko Krastev
IT Consultant






Ken Sedlacek [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.10.2003 20:57
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:For a W2K TSM 5.1.6.3 server program reload: Why won't TSM 
server restart
as before?


Enviro:

W2K TSM 5.1.6.3 server.

Original W2K TSM 5.1.6.3 program on C drive.

Running for months with no-problems!

DB, recovery logs, and ..\server1 files on D drive.

The W2K OS on C drive needed to be reloaded to fix corrupted OS files.

The D drive was never touched and remained as it was before the C
drive OS reload.


Question:
What is the proper way to reload the TSM server program files and use the
existing DB, recovery log, and ...\server1 files to continue using TSM the
way it was before the W2K OS reload?

I do have a current DB tape and ...\server1\*.* files I can use if need be.


Ken Sedlacek
AIX/TSM/UNIX Administrator
[EMAIL PROTECTED]
Agilysys, Inc. (formerly Kyrus Corp.)

IBM eServer Certified Specialist: pSeries AIX v5 Support
IBM Certified Specialist: RS/6000 SP  PSSP 3
Tivoli Certified Consultant - Tivoli Storage Manager v4.1


Re: space reclamation on a server storage pool?

2003-11-08 Thread Zlatko Krastev
Because only the DB of TSM1 knows what is in that virtual volume. From
TSM2's perspective it is a single file, and that's it. If that file were an
Oracle data file, would you expect TSM to know what its content is?!

Zlatko Krastev
IT Consultant






John C Dury [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08.10.2003 07:31
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:space reclamation on a server storage pool?


We have 2 systems. TSM1 and TSM2. TSM2 is used solely as an offsite system
to backup our primary tape storage pool on TSM1. When I run reclamation on
the server storage pool on TSM1, the process requests a mount on TSM2 and
a
mount on TSM1. It appears to be copying data from the mount on TSM1 to
TSM2. Why wouldn't reclamation mount 2 tapes on TSM2 and copy the data
that
way instead of through the network which is going to be much slower.
Essentially it seems like it is going to copy the exact same data from
TSM1
to TSM2 two times because it gets copied once during the BACKUP STG
command, and then again during the space reclamation of the server storage
pool. This seems like a great waste of time when the exact same data
should
already be on the TSM2 system and could be moved directly from 1 tape to
another where it would be considerably faster.
Does this make sense? Why would it be done this way?
John


Re: Webclient through firewall

2003-11-08 Thread Zlatko Krastev
Many times discussed on this list:
The web-client needs connection from intranet to DMZ on port 1581 (usually
firewalls allow that direction).
The actual data transfer goes in opposite direction - from the node in DMZ
to TSM server on port 1500. This got improved in TSM v5.2 client - now the
server can also initiate the session with the client, so it can be again
from intranet to DMZ.

So what you are trying to achieve can be done either by opening port 1500
on the firewall from DMZ to intranet (your security officer might not
allow it), or by installing properly configured v5.2 client(s) on the DMZ systems.

Zlatko Krastev
IT Consultant






Geert De Pecker [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.10.2003 16:39
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Webclient through firewall


Hi,

I have a couple of tsm clients in my DMZ. The backups through
the firewall are no problem at all (port 1500).

When I want to use the http://xxx.xxx.xxx:1581, no problem to
get the applet started. However, as soon as I try to start a
backup or restore from the applet, I get a java error Connection
timed out. With the firewall open on all ports: no problem.

I found the webports setting and have put that in the dsm.sys
file (I am running redhat 8) on the client machine:

passwordaccess  generate
compression yes
Editor  yes
schedlogretention   3 D
txnbytelimit24576
httpport1581
webports1584 1585

I opened these ports on the firewall (from internal to DMZ: open
ports 1581, 1584, 1585) and still no luck.

When I shut down and restart dsmcad and retry the connection,
I can start a backup or restore once. If I try another
backup or restore, it fails with the timeout error.

Looking at the firewall sniffer, it seems tsm starts with the
ports set by webports, but also uses other ports (like 1841).

As I understand from the doc, the webports variable should
be the only thing and this behaviour seems like a bug. Does
anybody know how this can be solved?

Thanks,

Geert


Geert De Pecker - SOFICO NV
Fraterstraat 228-242, B9820 Merelbeke, Belgium
Mail: [EMAIL PROTECTED], Tel: +3292108040, Fax: +3292108041



Re: L700e library with LTO2. Definition problems on AIX

2003-11-08 Thread Zlatko Krastev
We have seen ITSM v5.2.1.0 on AIX 5.2 64-bit working fine with an L700e.
The library was defined without problems, though the drives were LTO1.
What is the output of
`lsdev -Cc library`
`lsdev -Cc adsmtape`
`lsattr -El lbX` (where lbX is the actual name of library changer device)
`lslpp -L tivoli\*`

Zlatko Krastev
IT Consultant






Ochs, Duane [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01.10.2003 16:54
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:L700e library with LTO2. Definition problems on AIX


This is a new install.
TSM 5.2.1.1 on AIX 5.2 64-bit enabled.

L700e with LTO2 fibre drives. Drives defined fine as Tivoli devices.

The robotics will not define; the lsattr comes back empty.
From the smit log:  lsattr -c library  -s 'scsi'  -t 'ADSM-SCSI-LB'  -D -O

I had to apply an lsattr fix to correct a 'not enough memory' return. Now I
just get a generic error.

I do have an open call with AIX support that was forwarded to Tivoli
support, they feel it may be a driver issue.

Anybody else have a similar issue or resolution?

 Duane Ochs
 Enterprise Computing

 Quad/Graphics

 Sussex, Wisconsin
 414-566-2375 phone
 414-917-0736 beeper
 [EMAIL PROTECTED]
 www.QG.com




Re: AIX TSM server 4.3.3 ML10 --- 5.1.0 ML3 and TSM 5.1.6.2 --- 5.1 .X.X?

2003-11-08 Thread Zlatko Krastev
-- This will require a reboot.

... which can easily be avoided!!! You will need at least one reboot
during the upgrade to AIX 5L anyway, so why not use it:
1. Remove the tivoli.tsm.devices.aix43.rte fileset. This does not unload the driver from
the running kernel.
2. Reboot and proceed with the installation of AIX 5L.
3. Install the tivoli.tsm.devices.aix5.rte fileset. No need to reboot (!!!);
just invoke cfgmgr and the devices will be detected.
4. Update the paths with the appropriate device names (the devices can be detected in a
different order). A command sketch follows below.
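
A minimal sketch of steps 1-4 above (the install device /dev/cd0, the server name and the device names are only examples):

 installp -u tivoli.tsm.devices.aix43.rte
 shutdown -Fr                     (reboot / AIX 5L migration happens here)
 installp -acgXd /dev/cd0 tivoli.tsm.devices.aix5.rte
 cfgmgr
 upd path server1 drive1 srctype=server desttype=drive library=lib3494 device=/dev/mt0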

Zlatko Krastev
IT Consultant






Dmitri Pasyutin [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.09.2003 22:03
Please respond to pasd


To: [EMAIL PROTECTED]
cc:
Subject:Re: AIX TSM server 4.3.3 ML10 --- 5.1.0 ML3 and TSM 5.1.6.2 
--- 5.1
.X.X?


On Monday 29 September 2003 15:26, Thach, Kevin G wrote:
 I will be performing an OS upgrade on our TSM server in the next week or
so
 from 4.3.3 ML10 to 5.1 ML3, and was wondering what version of TSM people
 have had good success with on AIX 5.1 (32-bit)?  We are currently at
 5.1.6.2.  Is 5.1.7.3 pretty stable?

I was running TSM 5.1.7.x on AIX 5.1 32-bit for a few months, no problems.
I am now running TSM 5.2.1.1 on AIX 5.2 64-bit.

 Also, can someone point me to a good document that describes the TSM
 upgrade procedures when an OS upgrade is also involved?  Do I just
simply
 perform the OS upgrade, install the new TSM filesets, and I'm done?  Or
do
 I need to uninstall the current version of TSM, install the new, restore
 the database, etc.?

After upgrading to AIX 5.1, you need to remove the
tivoli.tsm.devices.aix43.rte fileset and install
tivoli.tsm.devices.aix5.rte
(base level plus any updates as needed). This will require a reboot. After
rebooting, configure your TSM devices with 'smit tsm_devices' and you're
done.
No need to reinstall all TSM filesets nor restore the database.

The TSM for AIX Quick Start manual describes everything pretty well.

Cheers
Dmitri


Re: disk for library

2003-11-08 Thread Zlatko Krastev
I would say that a 40-400 GB database is a very good candidate for
direct-to-tape. Also, I tend to believe that a library with LTO2, 9940B or
3592 drives would be cheaper and faster than bulk-storage disk arrays.
If you want to keep those databases on real high-end disk storage, you can
use EMC Symmetrix BCV/SRDF or IBM ESS FlashCopy/PPRC and have TSM take the
backups from there.

Zlatko Krastev
IT Consultant






Otto Schakenbos [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01.10.2003 19:13
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:disk for library


We are researching getting a big disk array that we want to use in TSM as
a virtual library.
So we would define this disk array as a FILE-type library and use it to back up
the clients. Then we would make a copy storage pool using a normal tape
library and take these tapes off site.
We have a pretty small environment and back up (incremental) around 50 GB a
day. The total size of this backup pool would be 4 TB.
Is anyone using this kind of setup, and what issues did you bump into?
Now the real question:
We also do DB2 backups. These are around 1 TB in size every night (the
sizes of the DBs are around 40 to 400 GB).
I can't seem to figure out how I could use the disk array to speed up
the DB2 backups. I could of course back up to the file library, but then I
would need to move this data to tape (otherwise the disk array would
fill up pretty quickly, and I want to have this data off site).
This would end up with a migration process, and that would mean I can
only use one drive per node, which is not fast enough for us (for backup
it would be OK, but the restore time would take too long).
That would leave me backing up directly to tape, since it allows
the use of 2 drives (and is what we are doing at the moment).
Are there any other ways to use the file library for the DB2 backups?
Or am I missing something? Any suggestions are welcome.
(Note that the RAID array is not a NAS device.)

Regards

--
Otto Schakenbos
PC-Support

TEL: +49-7151/502 8468
FAX: +49-7151/502 8489
MOBILE: +49-172/7102715
E-MAIL: [EMAIL PROTECTED]

TFX IT-Service AG
Fronackerstrasse 33-35
71332 Waiblingen
GERMANY


Re: WAS : Windows 2003 Client Backup errors, IS NOW : Windows 2003 Client 5.1.6.7 restore impossible ?

2003-11-08 Thread Zlatko Krastev
I haven't tried it on Win2003, but on Win2000 we have avoided the BSOD by
performing a Windows repair install just before rebooting after the System
Object restore. It might be worth a try (when I find enough time for
Win2003, I will test it myself).

Zlatko Krastev
IT Consultant






PAC Brion Arnaud [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.09.2003 17:09
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:WAS : Windows 2003  Client Backup errors, IS NOW : 
Windows 2003
Client 5.1.6.7 restore impossible ?


Hi all,

 As I was informed by Andy Raibeck, you can use the 5.1.6.7 client to
backup Win2k3
 servers to a pre-5.2 TSM server with full recoverability.

Could anyone confirm this assertion? We have already had two
**UNSUCCESSFUL** disaster recovery attempts using client 5.1.6.7 on
Win2003 servers with a TSM server at 5.1.6.2. Both finished with a
wonderful BSOD at reboot :-(
Is there something special to take care of to get it working?
TIA.

Arnaud


=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 30 September, 2003 15:05
To: [EMAIL PROTECTED]
Subject: Re: Windows 2003 Client Backup errors


I just started a thread like this a couple weeks ago. Seems that the 5.2
client on Win2k3 uses a new interface to backup the System Object, and
this caused them to make a change to the protocol with the TSM server.
To fully backup a Win2k3 server with the TSM 5.2 client, you need a TSM
5.2 server also. It's actually buried deep in the client manual if you
do a search on systemobject.

As I was informed by Andy Raibeck, you can use the 5.1.6.7 client to
backup Win2k3 servers to a pre-5.2 TSM server with full recoverability.
You just don't get some of the bells and whistles that come with the 5.2
version.

Bill Boyer
DSS, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Bruce Lowrie
Sent: Tuesday, September 30, 2003 7:56 AM
To: [EMAIL PROTECTED]
Subject: Windows 2003 Client Backup errors


All,
Receiving a strange error on the TSM Server (Solaris 2.8, TSM Server
5.1.63) activity log referencing a Windows 2003 client (Version 5.2.03)
:

09/29/03   20:14:10 ANRD smnode.c(18984): ThreadId66 Session 73736
for
node FPRTCMD01 : Invalid filespace for backup group member: 2.

We receive several of these every night, yet the backups complete
successfully. Under client version 5.2.00 I did not see this error, only
after moving to 5.2.03. If anyone has any insight, please let me know.

Regards,

Bruce E. Lowrie
Sr. Systems Analyst
Information Technology Services
Storage, Output, Legacy
*E-Mail: [EMAIL PROTECTED]
*Voice: (989) 496-6404
7 Fax: (989) 496-6437
*Post: 2200 W. Salzburg Rd.
*Post: Mail: CO2111
*Post: Midland, MI 48686-0994
This e-mail transmission and any files that accompany it may contain
sensitive information belonging to the sender. The information is
intended only for the use of the individual or entity named. If you are
not the intended recipient, you are hereby notified that any disclosure,
copying, distribution, or the taking of any action in reliance on the
contents of this information is strictly prohibited. Dow Corning's
practice statement for digitally signed messages may be found at
http://www.dowcorning.com/dcps. If you have received this e-mail
transmission in error, please immediately notify the Security
Administrator at mailto:[EMAIL PROTECTED]


Re: TSM licensing again

2003-11-07 Thread Zlatko Krastev
You can find my detailed explanations on licensing in list archives.
In short - the sales team is right: you must sum the processor counts of both
the TSM server and the TSM server-type nodes (file servers, RDBMS, web/application
servers, etc.) and obtain the corresponding number of Processor licenses. For
non-server nodes (workstations not serving anything over the network) you
get a Client license regardless of the processor count.

For a 10-way server you need 10 ITSM or ITSM XE Processor licenses. For
a 6-CPU LPAR/nPar/vPar in a 10-way system you need 6 TSM licenses if the other
partition(s) do not use TSM.

Zlatko Krastev
IT Consultant






Przemysław Maciuszko [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07.11.2003 13:35
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:TSM licensing again


Hi.
I've read all the previous messages about this topic, but I still have 
some
questions.
We've been asked by IBM to count our clients' processors.
IBM says that now we will have to pay for licensing:

1. TSM Server - this is based on the number of processors used by the clients
being backed up to this server
2. TSM Clients - the number of nodes using the TSM Server

The overall cost consists of those two.

Maybe I'm missing something here, but the first part is somehow...

We have few machines, but they are highly equipped (mostly 10+ CPUs per unit), so the
fees for us are very high. Is this the _right_ way of licensing, or is the
Polish sales team missing something important in their way of thinking?


-- 
Przemysław Maciuszko
Agora SA


Re: Space reclaimation by changing retention policy

2003-11-04 Thread Zlatko Krastev
There are too many variables affecting the result. The best approach would
be to restore the database to a second instance, change the policy, let
the expiration run, ... and compare!

Zlatko Krastev
IT Consultant






Stanley, Jon [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
31.10.2003 05:01
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Space reclaimation by changing retention policy


Is there any way to quantify the amount of space that will be reclaimed
via changing a retention policy and letting expiration run until it's
done? For example, the current policy is 180 day/8 versions.  If I
change that to 30 days/3 versions, how much space would I gain?  I'm
sure it's quite substantial, but I need to be able to quantify it.

Thanks!
-Jon


Re: Upgrade LAN FREE Licensing Question

2003-11-04 Thread Zlatko Krastev
In the past there were two different clients: MgSysLAN (B/A Client only)
and MgSysSAN (B/A Client + Storage Agent). For the server you will need the
Library Sharing feature.
Since ITSM v5.1 the licensing has been modified:
the B/A Client is part of either IBM TSM or ITSM Extended Edition. To get the
Storage Agent you need an ITSM for SAN license on top of the ITSM or ITSM XE
license.
Library Sharing is now not a separately licensable feature but is part of
ITSM Extended Edition.

Thus you need ITSM XE Processor licenses for both client and server, plus ITSM for SAN
Processor licenses for the client. You cannot get a standalone lan-free client; you get
lan-free (ITSM for SAN) on top of the client (ITSM XE). Your server also
needs to be licensed for ITSM XE, and because server and client cannot
mix ITSM and ITSM XE, the client has to be ITSM XE.

Zlatko Krastev
IT Consultant






Kamp, Bruce [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
28.10.2003 16:36
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Upgrade  LAN FREE Licensing Question


I am in the process of bringing a SAN into my environment.  I know I need a LAN-free
client, but is there a new piece that I need to purchase for the TSM
server side?  Also, is the LAN-free client the only piece that I need on the
client side, or does it require a backup client as well?

Thanks,
--
Bruce Kamp
Midrange Systems Analyst II
Memorial Healthcare System
E: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
P: (954) 987-2020 x4597
F: (954) 985-1404
---


Re: TDP for Oracle works with Oracle 8.1.6 ?

2003-11-04 Thread Zlatko Krastev
It may work but is unsupported. Supported versions are 8.1.7 or any 9i!
The last version which supported Oracle 8.1.6 was TDP for Oracle 2.2.

Zlatko Krastev
IT Consultant






Juan Manuel Lopez Azañon [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
29.10.2003 11:52
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:TDP for Oracle works with Oracle 8.1.6 ?


Hi all.
I'm about to install TDP for Oracle 5.2.0 on HP-UX 11.0, so I wonder
if it works with Oracle 8.1.6.
Does anybody have experience with this software combination?

Thanks. Juanma

ING National Netherlanden
Madrid - SPAIN

Cloudy, Raining and 12 ºC


Re: maximumsglist - lanfree

2003-10-31 Thread Zlatko Krastev
Looking to mail user - you love tsm, not the news :-()

The bad news: this problem might affect any SAN tape operation!!! Be it
over LAN with TSM server writing to SAN tape, be it LAN-free with storage
agent, even with competitive product (the problem was introduced by
QLogic; IBM, Legato and Veritas were some of the affected parties)!!!

It is even harder to track and detect the problem and its impact, as you
can have different settings on different systems. For example, the TSM server
might have an old driver with the default value of 0x41, while a newly installed
storage agent might be at 0x21 and producing corrupted LAN-free backups.
If we bring change management into the equation, things get
even worse. Example: the same TSM server (old driver, 0x41) using LTO1 drives
is connected to an upgraded fabric with bigger SAN switches and the driver
gets upgraded - you may introduce a problem into a formerly working
environment.
Etc., etc...

Go and investigate:
- when was TSM installed?
- who installed it?
- which version of the driver did he/she use?
- was he/she aware of the MaximumSGList issue, and was the value increased?
- was the SAN changed?
- was a Windows Service Pack applied together with a driver update?
...

And do not forget to inform management! You will have some sleepless
nights; let them share your glory.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 17:06
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: maximumsglist - lanfree


So just for my clarity

This affects all data written to the LTO drives; it doesn't matter whether
it's data from normal LAN backups or LAN-free backups??

In effect, all my backups taken in the last year are potentially not fully
restorable.

I would love you to tell me the above is wrong.

What a nightmare this could turn out to be!!


From: Zlatko Krastev [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: maximumsglist - lanfree
Date: Thu, 30 Oct 2003 14:27:24 +0200

Chris,

Nearly two years ago (during the TSM 4.2 era) QLogic changed the default
value of MaximumSGList option from 0x41 to 0x21. This option mandates how
large your FCP packets can be. If you were receiving errors when a packet was
larger than the limit, you would at least know at backup time that there is an
inconsistency. The real issue is that the data is falsely reported as
successfully sent but is truncated.

Large blocks are used mostly for tape storage. Thus disk operations are
not affected (and no problems are reported there). But tape operations
are
affected and you can realize this usually long after the problem
happened!!

Answering to your question - increasing the value *does not* affect the
disk operations. Assume the option equivalent to Maximum Transmit Unit
(MTU) for LANs - you can send (shorter) disk frames and they will be
under
the limit. If limit is high enough, the (longer) tape frames will also be
within it, and no data will be lost/corrupted.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 12:50
Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: maximumsglist - lanfree


Zlatko

Thanks for your response... we have been running our TSM server for a
year with maximumsglist at the default value. If what you're saying is
true about restore capability, that would explain a few things... we have done
major restores and not got all the data back we expected.  Is there an IBM
document on all this somewhere other than IBM flash 10135?

Another question: our TSM server has multiple QLogic HBAs.  I believe the
maximumsglist setting will take effect on all the adapters.  However, some of those
adapters are used just for SAN disk connectivity. Will changing this
parameter have any impact on disk access/performance?

Thanks again

Chris

 From: Zlatko Krastev [EMAIL PROTECTED]
 Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Re: maximumsglist - lanfree
 Date: Thu, 30 Oct 2003 10:31:30 +0200
 
 You must set this option higher than 0x41 (I prefer 0xFF) on *any*
machine
 accessing tape over SAN - be it TSM server, TSM Storage Agent, or any
 other product.
 Afterwards you must perform the backup *again* - some times with low
 maximumsglist setting the data is reported to be written but is not
 completely. Thus your backup might be not restoreable.
 
 Zlatko Krastev
 IT Consultant
 
 
 
 
 
 
 i love tsm [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 30.10.2003 09:21
 Please respond to ADSM: Dist Stor Manager
 
 
  To: [EMAIL PROTECTED]
  cc:
  Subject:maximumsglist - lanfree
 
 
 Hi
 
 Well the quest for lan free backups continues
 
 I got past my initial problems by upgrading storage

Re: ANR0480W

2003-10-31 Thread Zlatko Krastev
Looking through Message manual:
ANR0480W Session session number for node node name (client platform)
terminated - connection with client severed.

Explanation: The specified client session is ended because the
communications link has been closed by a network error or by the client
program.

System Action: Server operation continues.

User Response: If a user breaks out of a client program, this message will
be displayed on the server as the connection is suddenly closed by the
client. A network failure can also cause this message to be displayed. If
a large number of these messages occur simultaneously, check the network
for failure and correct any problems. 


So the questions are:
- did someone kill the client's process?
- did someone power off the node's system?
- are there any network problems?


Zlatko Krastev
IT Consultant






[EMAIL PROTECTED]
31.10.2003 12:02


To: Zlatko Krastev [EMAIL PROTECTED]
cc:
Subject:ANR0480W


Hi,

I have the following problem while i am taking a backup:

 ANR0480W: Session session number for node node name (client platform)
terminated -connection with client severed.



I will appreciate for any help

Thanks
N.Savva




Re: maximumsglist - lanfree

2003-10-30 Thread Zlatko Krastev
You must set this option higher than 0x41 (I prefer 0xFF) on *any* machine
accessing tape over SAN - be it TSM server, TSM Storage Agent, or any
other product.
Afterwards you must perform the backup *again* - sometimes with a low
maximumsglist setting the data is reported to be written but is not
completely written. Thus your backup might not be restorable.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 09:21
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:maximumsglist - lanfree


Hi

Well the quest for lan free backups continues

I got past my initial problems by upgrading storage agent to 5.1.5.4 (same
level as server)

Yesterday I managed to run an image backup lan free, 150Gb in 1hr 15
minutes
so was quite happy with that.  However when I went to restore it 5 minutes
later it failed and the server actlog gave me

10/29/2003 16:39:39   ANR8939E The adapter for tape drive DRIVE2WWPK
(mt1.0.0.5)
 cannot handle  the block size needed to
use
the volume.

I then tried another image backup and it failed with the same error.
During
the time of the image backup working then stopping working nothing had
changed.

I believe the above error is to do with the registry entry for
maximumsglist.  On the storage agent client I have had maximumsglist set
to
FF for a while.

My question is do I need to set the same registry key on the TSM Server
itself?
If not then what else can be causing this

TIA



Re: maximumsglist - lanfree

2003-10-30 Thread Zlatko Krastev
Chris,

Nearly two years ago (during the TSM 4.2 era) QLogic changed the default
value of MaximumSGList option from 0x41 to 0x21. This option mandates how
large your FCP packets can be. If you were receiving errors when packet is
larger than the limit, you will now during backup time there is an
inconsistency. The real issue is that data is falsely reported as
successfully sent but is truncated.

Large blocks are used mostly for tape storage. Thus disk operations are
not affected (and no problems are reported there). But tape operations are
affected and you can realize this usually long after the problem
happened!!

Answering to your question - increasing the value *does not* affect the
disk operations. Assume the option equivalent to Maximum Transmit Unit
(MTU) for LANs - you can send (shorter) disk frames and they will be under
the limit. If limit is high enough, the (longer) tape frames will also be
within it, and no data will be lost/corrupted.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 12:50
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: maximumsglist - lanfree


Zlatko

thanks for your response...we have been running our tsm server for a
year with maximumsglist at the default value. If what you're saying is
true about restore capability, that would explain a few things: we have
done
major restores and not got all the data back we expected.  Is there an IBM
document on all this somewhere other than IBM flash 10135?

Another question, our tsm server has multiple qlogic hba's.  I believe the
maximumsglist will take effect on all the adapters.  However some of those
adapters are used just for SAN disk connectivity. Will changing this
parameter have any impact on disk access/performance?

Thanks again

Chris

From: Zlatko Krastev [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: maximumsglist - lanfree
Date: Thu, 30 Oct 2003 10:31:30 +0200

You must set this option higher than 0x41 (I prefer 0xFF) on *any*
machine
accessing tape over SAN - be it TSM server, TSM Storage Agent, or any
other product.
Afterwards you must perform the backup *again* - some times with low
maximumsglist setting the data is reported to be written but is not
completely. Thus your backup might be not restoreable.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 09:21
Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:maximumsglist - lanfree


Hi

Well the quest for lan free backups continues

I got past my initial problems by upgrading storage agent to 5.1.5.4
(same
level as server)

Yesterday I managed to run an image backup lan free, 150Gb in 1hr 15
minutes
so was quite happy with that.  However when I went to restore it 5
minutes
later it failed and the server actlog gave me

10/29/2003 16:39:39   ANR8939E The adapter for tape drive DRIVE2WWPK
(mt1.0.0.5)
  cannot handle  the block size needed to
use
the volume.

I then tried another image backup and it failed with the same error.
During
the time of the image backup working then stopping working nothing had
changed.

I believe the above error is to do with the registry entry for
maximumsglist.  On the storage agent client I have had maximumsglist set
to
FF for a while.

My question is do I need to set the same registry key on the TSM Server
itself?
If not then what else can be causing this

TIA



Re: Antwort: Re: maximumsglist - lanfree

2003-10-30 Thread Zlatko Krastev
You do not say which platform this is on.

I am not sure whether this is QLogic specific or whether owners of Emulex
adapters were hurt too. What I can say is that IBM is shipping branded QLogic adapters
(QLA2200F, QLA2300F, or QLA2340F) for Netfinity/xSeries systems. As TSM is 
getting bundled with IBM hardware very often, the issue with MaximumSGList 
on IBM Wintel platform is well known.
For RS6000/pSeries IBM is using Emulex adapters (LP7000, LP9002, or 
LP9802) and in AIX the setting is called max_xfer_size, with the value
measured in bytes. By default the value is 0x100000 (1 MB) and is
sufficient for disk operations. For LTO tapes it must be set to 0x1000000
(16 MB).
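 
As a hedged illustration (the adapter name fcs0 is just an example - check
which FC adapter your tape paths use), the AIX attribute can be inspected
and changed like this; the -P flag defers the change until the next reboot:

  lsattr -El fcs0 -a max_xfer_size                 # show the current value
  chdev -l fcs0 -a max_xfer_size=0x1000000 -P      # 16 MB, applied at next boot
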
So far I have not seen a description of the equivalent setting for other
platforms (Solaris, HP-UX, Linux) but am eager to learn it.

Zlatko Krastev
IT Consultant






Markus Veit [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 10:41
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Antwort: Re: maximumsglist - lanfree


Hi,
just an info,
when we ran some LAN free backup/restore tests using TSM 5.2.1.1 server 
and STA
5.2.1.1 it was not necessary to
set the maximumsglist at all using Emulex LP9002 HBA's. Backup and restore
operations worked fine.
We used LTO2 drives and LTO1 tape though.

Mit freundlichen Grüßen / Best Regards

Markus Veit



 
 
 
[EMAIL PROTECTED]
Received: 30.10.2003 09:32
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject:    Re: maximumsglist - lanfree
 




You must set this option higher than 0x41 (I prefer 0xFF) on *any* machine
accessing tape over SAN - be it TSM server, TSM Storage Agent, or any
other product.
Afterwards you must perform the backup *again* - some times with low
maximumsglist setting the data is reported to be written but is not
completely. Thus your backup might be not restoreable.

Zlatko Krastev
IT Consultant






i love tsm [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.10.2003 09:21
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:maximumsglist - lanfree


Hi

Well the quest for lan free backups continues

I got past my initial problems by upgrading storage agent to 5.1.5.4 (same
level as server)

Yesterday I managed to run an image backup lan free, 150Gb in 1hr 15
minutes
so was quite happy with that.  However when I went to restore it 5 minutes
later it failed and the server actlog gave me

10/29/2003 16:39:39   ANR8939E The adapter for tape drive DRIVE2WWPK
(mt1.0.0.5)
 cannot handle  the block size needed to
use
the volume.

I then tried another image backup and it failed with the same error.
During
the time of the image backup working then stopping working nothing had
changed.

I believe the above error is to do with the registry entry for
maximumsglist.  On the storage agent client I have had maximumsglist set
to
FF for a while.

My question is do I need to set the same registry key on the TSM Server
itself?
If not then what else can be causing this

TIA



Re: Client for HP-UX on Itanium

2003-10-28 Thread Zlatko Krastev
Client for HP-UX/PA-RISC works on Itanium though unsupported by IBM.
Regarding the server or storage agent I cannot assure you (not tested
yet).

Zlatko Krastev
IT Consultant






Christoph Pilgram [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
28.10.2003 16:56
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Client for HP-UX on Itanium


Hi all,

Can anybody tell me, if there is a TSM-Client available for a HP-UX
Itanium
server ? I couldn't find anything about it.
In the middle of the year I was told that a client was planned for the end of
this year; now
I hear that there is no client planned?

What is the fact ??


Chris


Re: Antwort: Re: TDP for Lotus Domino 5.1.5.1

2003-10-28 Thread Zlatko Krastev
There are many ways to skin a cat.
So for partitioned servers I would prefer to have a separate TSM node for
each Domino partition (see the sketch after this list):
- each partition/node can run its schedules independently from others -
both parallelism without losing the result codes
- you can very easily split one or more partitions to another server if
the load increases
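 
A minimal sketch of what this can look like in dsm.sys on AIX - the stanza,
node and host names below are made up, and each Domino partition's TDP
instance would then point to its own stanza from its options file:

  * /usr/tivoli/tsm/client/ba/bin/dsm.sys
  SErvername         tsm_dompart1
    COMMMethod       TCPip
    TCPServeraddress tsmserver.example.com
    NODename         DOMINO_PART1

  SErvername         tsm_dompart2
    COMMMethod       TCPip
    TCPServeraddress tsmserver.example.com
    NODename         DOMINO_PART2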

Zlatko Krastev
IT Consultant






Richard Foster [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
28.10.2003 13:29
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Antwort: Re: TDP for Lotus Domino 5.1.5.1



Hi Rainer

 I will ask IBM/Tivoli support for a solution to suppress all these
 messages being sent to the TSM activity log. Maybe they have an answer. If
they
 have and you are interested I will forward that to you.

Yes, I would be interested. If you can be bothered, please send me a copy
of the answer. But my experience with Tivoli support is that this will
take
time, so I won't hold my breath. Good luck.

With regard to your comment:
 In general it seems I have to ask them what's more important -
for us there is no question about it. We *have* to clear the disk log
areas
to TSM, otherwise Domino (or SAP, or Oracle) just stops. And with our
largest Domino backup taking 30 hours (1+ TB), we *cannot* afford so long
a
stop in the scheduler.

Another good reason to put the jobs in the background is to get them to
run
in parallel. The TSM scheduler only does 1 backup at a time, but we run
many Domino partitions, and have lots of tape drives. It's much more
efficient to run several backups in parallel (we do up to 8 at a time). In
fact, in our installation we cannot get through the night's backups
without
running in parallel.

Another solution is to put some of the schedule jobs into crontab, but
it's
much easier to keep an overview if all jobs are in the TSM scheduler, and
you still lose the return code to TSM.

But as I said, we run scripts and do the return code checking ourselves.
It's the only way to go for us.

Regards
Richard Foster






Re: Database fragmentation formula (was Re: Online DB Reorg)

2003-10-27 Thread Zlatko Krastev
As any other monitoring measurement, it matters only for the value it
measures - fragmentation.
For example it shows high fragmentation numbers for Wayne's database. The
latter leads to a conclusion that he would really benefit from doing
unload/load. But all others having the database less than 20-30%
fragmented, the typical answer would apply - even if database is
defragmented the result will be temporary.
If the fragmentation value is 50% (or more), some space can be conserved.
And for 80% and beyond, the space savings can be significant.

So it is only one aspect of TSM server health-check. As with any complex
product, the overall picture consists of many such throttles, bells and
whistles :-)

Zlatko Krastev
IT Consultant






Roger Deschner [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
21.10.2003 18:42
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Database fragmentation formula (was Re: Online DB Reorg)


I have to ask one thing about all this higher math - Does it matter?

That is, what can you do about it, or what should you do about it? I
think the answer is that, even if these competing formulae are correct
and you discover your database is fragmented, there is practically
nothing you can or should actually do. Except in the case where a TSM
system is being deliberately shrunk, such as by removing a bunch of
nodes, there is nothing you should attempt to do about database
fragmentation.

Fragmentation occurs at three levels, as far as I can tell. But any
speculation on my part as to what those three levels are is just that -
speculation. And conjecture from external observation.

I do know that there is fragmentation within TSM's storage units, and
then there is fragmentaiton of TSM's units within the OS file system's
storage units. Measuring or fixing the former involves the lengthly and
risky unload/reload procedure which I do not ever recommend. I suspect I
can see the latter, by comparing the amount of free space shown by Q
DBVOL F=D, and the amount of free space shown by Q DB. I could correct
fragmentation at that level with DELETE DBVOL, which is easier than
unload/reload, but it still won't achieve much and the effect still
won't last.

So all this mathematics still does not give me much to go on, in terms
of how to make my TSM system run better in the long term. I ran that
first SELECT published in this thread on my system and came up with
-0.04% fragmented. Obviously a flawed formula. I know my database is
fragmented, simply because it is old and big.

But what you can do, that will help, is to just give it enough room and
let it spread itself out far enough that it can usually get contiguous
space, at all levels including the physical level, when it wants to
write something that is large enough to span multiple units, whatever
those units are. A full, fragmented database will defragment itself to a
degree after it has been run with additional space for a while. Throw
more disk drives at the problem. An 80% full database of any kind WILL
run faster than a 98% full database. Of that I am very, very certain.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
Have you ever, like, tried to put together a bicycle in public? Or a
grill? Astronauts David Wolf and Piers Sellers, explaining the
difficulties encountered in attaching equipment to the Space Station


Re: write speed for LTO2 drive using LTO1 tape

2003-10-27 Thread Zlatko Krastev
As the LTO2 cartridges differ from LTO1, their speeds also differ. However 
according to IBM's specs (I have not looked this info for HP or Seagate) - 
an LTO2 drive can read/write LTO1 cartridge at 20 MB/s native. This is to 
be considered 33% improvement and your 26 MB/s may simply mean you are 
getting 1,3:1 compression ratio. With same compression LTO1 drive ought to 
give you 1,3 x 15 MB ~= 19,5 MB/s. If you use LTO2 cartridges, you may get 
1,3 x 35 (or 30 for HP/Seagate) ~= 45 MB/s.

Zlatko Krastev
IT Consultant






Markus Veit [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
24.10.2003 14:42
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:write speed for LTO2 drive using LTO1 tape


Hi,
has anyone tried backing up an image to a LTO2 drive using LTO1 tapes, if 
yes
what throughput did you get?

we tried LAN free backups using an EMC Clariion CX400 and pathlight SAN 
gateway
with 1 G switches,
the max throughput was only 26 MB/sec suggesting that LTO2 drives can only 
write
at LTO1 drive max write speed. (specs 30MB/s compressed)

Any confirmation of this would be appreciated.

TSM Server 5.2.1.1,  STA 5.2.1.1, TSM client 5.2.0.3
W2k environment

Mit freundlichen Grüßen / Best Regards

Markus Veit



Re: Move nodedata - what is moved first

2003-10-26 Thread Zlatko Krastev
If the data was only on the disk, I would say your disaster recovery
preparation needs a serious fix!
The idea is that you backup the whole storage pool *hierarchy* to a copy
pool. For example hierarchy DISKPOOL - TAPEPOOL must be backed up with
1. ba stg diskpool copypool
2. ba stg tapepool copypool
3. ba db t=dbs

Without backing up the diskpool to an onsite copy pool you are exposed to
data loss in case of disk storage failure (even if there is no site-wide
disaster).
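 
As an illustrative, hedged sketch only (the schedule names, the device class
tapeclass and the start times are invented), the same sequence can be
automated with administrative schedules on the server:

  def sched stg_disk type=administrative cmd="backup stgpool diskpool copypool wait=yes" active=yes starttime=06:00
  def sched stg_tape type=administrative cmd="backup stgpool tapepool copypool wait=yes" active=yes starttime=07:00
  def sched db_snap  type=administrative cmd="backup db devclass=tapeclass type=dbsnapshot" active=yes starttime=08:00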

Zlatko Krastev
IT Consultant






Marc Levitan [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
24.10.2003 16:12
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Move nodedata - what is moved first


What would happen if there was a site disaster and the data was only on
the
disk which is no longer available to perform restores?
I guess what I am asking is, without sending DIRMC off-site, can you
recover from a site disaster?





|-+---
| |   Deon George |
| |   [EMAIL PROTECTED]|
| |   .COM   |
| |   Sent by: ADSM: |
| |   Dist Stor   |
| |   Manager|
| |   [EMAIL PROTECTED]|
| |   T.EDU  |
| |   |
| |   |
| |   10/23/2003 08:07|
| |   PM  |
| |   Please respond  |
| |   to ADSM: Dist  |
| |   Stor Manager   |
| |   |
|-+---

---|
  ||
  |To:  [EMAIL PROTECTED]   |
  |cc:|
  |Subject: Re: Move nodedata - what is moved first  |

---|



Peter,

 servers . Currently, our main file server has data on over 200 3590
 tapes therefore a directory restore can potentially have hours added to
 the process directly related to tape mounts.

Is the directory information you referring about related to Windows
systems? You should use the DIRMC client option to store all your
directory information in a DISK based storage pool (DISK or FILE), so that
it remains on faster quicker access media for restore purposes. (Dont let
that stuff go to tape for the reasons you have outlined below.)

The DIRMC client option is not really required for Unix based systems, as
the database has enough space to store that information.

...deon
---
Have you looked at the A/NZ Tivoli User Group website?
http://www.tuganz.org

Deon George, IBM Tivoli Software Engineer, IBM Australia
Office: +61 3 9626 6058, Fax: +61 3 9626 6622, Mobile: +61 412 366 816,
IVPN +70 66058
mailto:[EMAIL PROTECTED], http://www.ibm.com/tivoli


Re: Cleaning Machine for 3590 cartridges

2003-10-24 Thread Zlatko Krastev
You did not call it IBM, but I assumed it was, being unaware of the device.
Thanks to Richard's clarification I was able to figure it out.

Now to the topic:
if you really need to use such machine, it would certainly mean your
server room environment is *dirty*! If it indeed is, the risk of losing
data would be rather close to the risk when you are not doing backups at
all. I can only quote a sentence not invented by me:
Poor security is worse than no security at all, as it provides a false
sense of security!

If your library environment is clean enough (through using appropriate
filters in the air conditioners and cleaning them regularly) and your operators are
handling tapes only through the library I/O station - you should not need such
a device. At that point I personally would react nearly as Tom suggested.

Zlatko Krastev
IT Consultant






Pearson, Dave [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
23.10.2003 17:26
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Cleaning Machine for 3590 cartridges


I apologise for calling the 3599 model an IBM. My mistake.

I was just giving one model of cartridge cleaning machine called 3599 (yes,
there are cartridge cleaning machines out there.)

There is another cleaning machine called.. STAR 3590 cartridge cleaning
machine.
I'm sure there are many other 'brand' of machine that clean
tapes/cartridge.

Zlatko, does your company clean their cartridges on this type of machine?
If so, is it worth it?
If not, why don't you use them?

Thanks again Zlatko.

Dave Pearson

 -Original Message-
 From: Zlatko Krastev [SMTP:[EMAIL PROTECTED]
 Sent: Thursday, October 23, 2003 5:13 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Cleaning Machine for 3590 cartridges
 IBM 3599 is not a machine, but so called machine type / model
for
ordering any Magstar cartridges.
3599-001, -002, -003 are 3590 J cartridges (10/20/30 GB)
3599-004, -005, -006 are 3590 K cartridges (20/40/60 GB)
3599-007 is 3590 cleaning cartridge (what probably you are looking
for!!!)
3599-011, -012, -013 are 3592 cartridges (300 GB)
3599-017 is 3592 cleaning cartridge.

This is a second method to order cartridges through IBM Storage
channel.
The first is as components of 3590 drives. Same is for LTO - they
can be
ordered as part of 358x unit or separately as 3589-xxx media.
If you need short confirm: Yes, this is working/supported cleaning
media
for any IBM 3590 drives (standalone, within IBM 3494, or within
StorageTek
silo)!

Machine type does not mean automatically hardware. For example
before
joining Passport Advantage, TSM was machine type/model 5697-TSM
and later
5698-TSM!

Zlatko Krastev
IT Consultant


Re: dsmserv loadformat failure

2003-10-23 Thread Zlatko Krastev
-- Does it expect those files to already have been created via dsmfmt?

Yes, it does. Moreover, it *does not* expect to have the DB/Log volume
size(s) provided to LOADFORMAT as can be seen on the syntax diagram (can
be found in Appendix A of Administrator's Reference, GC32-0769-00 for ITSM
v5.1.5).
The values 5000 and 3 should be entered as dsmfmt parameters.
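 
A hedged sketch of the order of operations (the sizes are in MB because of the
-m flag and, like the volume count, are placeholders only - adjust them to
your real layout):

  dsmfmt -m -log /tsm/tsmlog.001 5000
  dsmfmt -m -db /tsm/tsmdb.001 3000
  dsmfmt -m -db /tsm/tsmdb.002 3000
  dsmfmt -m -db /tsm/tsmdb.003 3000
  dsmserv loadformat 1 /tsm/tsmlog.001 3 /tsm/tsmdb.001 /tsm/tsmdb.002 /tsm/tsmdb.003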

Zlatko Krastev
IT Consultant






Gerald Wichmann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
22.10.2003 21:45
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:dsmserv loadformat failure


Anyone know what errno=2 means below? The /tsm partition exists and has
777
permissions so I'm thinking it's not a permissions problem. Does it expect
those files to already have been created via dsmfmt? Currently the files
do
not exist.

[EMAIL PROTECTED]:/usr/tivoli/tsm/server/bin# dsmserv loadformat 1
/tsm/tsmlog.001 5000 3 /tsm/tsmdb.001 3 /tsm/tsmdb.002 3
/tsm/tsmdb.003 3
ANR7800I DSMSERV generated at 12:37:06 on Aug 21 2002.

Tivoli Storage Manager for AIX-RS/6000
Version 5, Release 1, Level 5.0

Licensed Materials - Property of IBM

5698-ISE (C) Copyright IBM Corporation 1999,2002. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR0900I Processing options file dsmserv.opt.
ANR0921I Tracing is now active to file /tsm/trace.out.
ANR7811I Direct I/O will be used for all eligible disk files.
Error opening file /tsm/tsmlog.001, errno = 2



Thanks,
Gerald






Re: Cleaning Machine for 3590 cartridges

2003-10-23 Thread Zlatko Krastev
IBM 3599 is not a machine, but a so-called machine type / model for
ordering any Magstar cartridges.
3599-001, -002, -003 are 3590 J cartridges (10/20/30 GB)
3599-004, -005, -006 are 3590 K cartridges (20/40/60 GB)
3599-007 is 3590 cleaning cartridge (what probably you are looking for!!!)
3599-011, -012, -013 are 3592 cartridges (300 GB)
3599-017 is 3592 cleaning cartridge.

This is a second method to order cartridges through IBM Storage channel.
The first is as components of 3590 drives. Same is for LTO - they can be
ordered as part of 358x unit or separately as 3589-xxx media.
If you need a short confirmation: yes, this is working/supported cleaning media
for any IBM 3590 drives (standalone, within IBM 3494, or within StorageTek
silo)!

A machine type does not automatically mean hardware. For example, before
joining Passport Advantage, TSM was machine type/model 5697-TSM and later
5698-TSM!

Zlatko Krastev
IT Consultant






Pearson, Dave [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
22.10.2003 18:17
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Cleaning Machine for 3590 cartridges


Hello Everyone,

Do any of you use the model 3599 cleaning machine to clean your 3590
cartridges?  If you do,  What do you think of it.  If you don't, Why not?

Thanks

David C. Pearson
IS Production Support Analyst
System  Network Service
Snohomish County PUD # 1
Phone: 425.347.4420
Pager:  425.290.0944
FAX: 425.267.6380
E-mail: [EMAIL PROTECTED]






Re: TSM Downward Scaleability

2003-10-23 Thread Zlatko Krastev
I would rather disagree.
1. Zero on-site skills: from our experience even a clerical end-user can
sign a Fed-Ex delivery, unplug the power and LAN cords from a failed system, and
plug them back into the replacement server.
2. Limited on-site skills: Most often available - people with limited
knowledge of Windows; some have done hard-disk replacement (or can easily
be taught how to replace a hot-swap disk). With unified hardware you can
send just a pre-built disk instead of a whole server.

The advanced part can be handed successfully to the central site:
- with very inexpensive disks a local diskpool to hold all backups is not a
problem, with the copypool over the WAN;
- enterprise-wide management products are common in big enterprises;
- TSM itself has very good enterprise management features (and I prefer
CLI which is WAN-friendly vs. WebAdmin)
- many of the restores are not BMR ones but for single file/directory;
- restores still can be performed through Web client using browser at
central site;
- TSM server can be rebuilt at the central site and sent to the satellite site
for replacement;
- BMR restore also can be performed at the central site and the
workstation/disk sent to the satellite site.

Thus if I am the IT manager, instead of hiring or outsourcing people for
tens of remote sites, I can manage this with existing staff at the central
site, or no more than one or two persons in addition to the TSM admin.
For me TSM scales down to 0 tape drives. Of course, if the total occupied
space on the remote site machines grows beyond 100 GB, an
autoloader or library is a must. But for office workstations using TSM
progressive backup, sub-file backups and having MP3/AVI/DivX/etc.
excluded, the tape-less server is fine. Such offices usually do not
utilize the WAN during the night.
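 
A hedged sketch of such a tape-less remote server (device class name, pool
name, directory and sizes are all placeholders); the remote node copy groups
would then simply point at this pool as their destination:

  def devclass remotefile devtype=file maxcap=2g dir=/tsmdata
  def stgpool remotedisk remotefile pooltype=primary maxscratch=200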

Zlatko Krastev
IT Consultant






Richard Sims [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
21.10.2003 18:50
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: TSM Downward Scaleability


The organization that I work for deploys TSM quite
sucessfully at its large main sites that serve some
+4000 nodes. It is very apparent that TSM scales
upwardly very well but I believe that scaling down is
something else. MY question is this:How can similar
services be delivered to sites where there are less
than ten nodes, limited bandwidth, no system
administrators and, most importantly, tiny budgets.

It's not realistic to have server systems of any kind at a site where
there is
no technical administration: someone has to be knowledgeable about the
systems
in order to minimally inspect them visually when there is a problem.
Clerical
people simply can't serve in that capacity.  Remote administration is a
feasible
concept, but when hardware stops working, knowledgable eyes and
experienced
hands must be at the site.  Such a responsibility might be contracted to
an
outside company, which can feasibly attend to disparate physical sites.
Consider also that while unattended backup, by various means by products
of
different scales, is not difficult, the backups are done because of the
prospect
of the need for a restoral, which can involve a full-down computer, and
that is
beyond the capabilities of clerical people to address: someone has to know
what
to do, particularly where a collection of office computers will seldom be
uniform.

TSM is an enterprise product, intended for larger installations, which is
to say
those where there are concentrated server facilities and network access.
As
Wanda suggests, backup by remote offices over a WAN is the method of
choice
where TSM or like backup/restore product are involved.  Again, the backup
is
easy, but restoral can be problematic.  Advanced planning is necessary to
cover
all aspects of backup/restoral needs, which in turn is just a part of a
company's larger disaster recovery plan.

   Richard Sims, BU


Re: Backing up SAP on Adabas

2003-10-23 Thread Zlatko Krastev
I have successfully visited that page in the past. Now it is protected
and I was unable to access it.
Maybe according to IBM's new policies we are supposed to pay to look at
their advertising and product info :-(((
Well done, web team! Congratulations to a corporation which is constantly
shooting itself in the foot!

Zlatko Krastev
IT Consultant






Clarence Beukes [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
22.10.2003 09:55
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Re: Backing up SAP on Adabas


Product Info:  http:/www.ibm.com/de/entwicklung/adint_adsm

Clarence Beukes
 Advisory IT Specialist - Tivoli Certified Consultant
 Geomar SSO Mid Range and Application Support Discipline
 Location:  IBM Park Sandton, IA2G
 Tel: +27 (0) 11 302-6622   Cell: +27 (0) 82 573 5665
 E-mail: [EMAIL PROTECTED]





Tomáš Hrouda Ing. [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/22/2003 08:43 AM
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Backing up SAP on Adabas

 

Hi all,

one of our customers requested backing up an SAP system on an Adabas database.
Does
anybody have any experience with this? Any information and solutions
will be appreciated.

Thanks
Tom




Re: Moving from NT TSM to AIX TSM

2003-10-21 Thread Zlatko Krastev
This is not going to work due to different byte ordering within the word
between Windows and AIX!!!

Zlatko Krastev
IT Consultant






Crawford, Lindy [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.10.2003 18:18
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Moving from NT TSM to AIX TSM


Hi TSMers,

We are busy trying to test our dr procedures. At the one site we have an
Window NT server with TSM 4.2 server and on the other site we have an
Aix server with TSM 5.1 server.

We want to restore the tsm database from the Windows TSM server to the
Aix tsm server. Is this possible to do...???

How can I go about this? Any ideas?

Thank you in advance.

Lindy Crawford
Information Technology
Nedbank Corporate - Property  Asset Finance
*+27-31-3642185
 +27-31-3642946
[EMAIL PROTECTED]  mailto:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] 








Re: TSM Storage Pool Hierarchy Question

2003-10-21 Thread Zlatko Krastev
1. It is possible
2. No, you need the DRM license (part of Extended Edition) to have
server-to-server virtual volumes (see the sketch after these answers)
3. To the virtual volumes device class, i.e. to a file stored in MS
server
3b. Yes, it depends on the destination setting of the MS server
copygroup
4. See the answers to q.3  3b.
999. No matter of the setup or version :-) You just need enough RS disks
and rather good pipe to MS.
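 
Regarding answers 2 and 3, a hedged sketch of the virtual-volume plumbing
(server names, password and addresses are placeholders; the Extended Edition
licensing noted above is assumed to be in place):

  (on MS)  register node RS secret type=server
  (on RS)  def server MS serverpassword=secret hladdress=ms.example.com lladdress=1500
  (on RS)  def devclass ms_virtual devtype=server servername=MS maxcapacity=2g
  (on RS)  backup db devclass=ms_virtual type=full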


Zlatko Krastev
IT Consultant






Curt Watts [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
21.10.2003 02:41
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM Storage Pool Hierarchy Question


Alright, time to ask the experts!

Essentially, I'm trying to have our remote TSM servers (satellite
locations) utilize the main TSM server as the next storage pool in its
hierarchy.

The hierarchy will eventually look like this:

Remote Server (RS) Disk Cache
  -Main Server (MS) Disk Cache
 -MS Tape Pool
 -MS Offsite Copy Pool

Questions I have:
1) Is this possible?
2) Can I do this without the DR module?
3) How does the RS back up its database - because it won't back up to
the local disk?
3b) Should the RS back up its DB directly to tape? And if so, how is it
possible to share the tape library with no NAS?
4) Should I just break down and put some tape drives out there to
handle the DBs?

Current setup:
  MS: W2k sp4, TSM v4.3.2, IBM 3853 w/ 54 slots  2 LTO drives.
  RS: W2k sp4, TSM v5.2.0

Thanks everyone.

Curt Watts

___
Curt Watts
Network Analyst, Capilano College
[EMAIL PROTECTED]


Database fragmentation formula (was Re: Online DB Reorg)

2003-10-20 Thread Zlatko Krastev
* ATTENTION *
Those who do not like the mathematics, skip to the end (search for word
select).
*

From my mathematical background this query is not showing very good
results. The formula looks like

              MAX_REDUCTION_MB * 256
      100 - --------------------------- * 100
             USABLE_PAGES - USED_PAGES

Thus the closer we are to a fully used database, the less accurate the formula
would be. Moreover, if we fill the DB to 100% the result will be a division by
0, while the database might be either fragmented or not. One of our
goals is to fully utilize our resources; at that exact moment the query
would be useless.
Also the formula is of no use if there is no legend for how to interpret the
numbers. For example our test server DB is giving PERCENT_FRAG=26.92 while
being nearly unfragmented.

So I would dare to recommend another formula (in pages):

            used - needed
  fragm_p = ------------- x 100
                used

i.e. what space is wasted from all used (in %).
The needed space (in pages) can be found if we multiply PCT_UTILIZED by
USABLE_PAGES and divide by 100 to remove percentages:

needed = PCT_UTILIZED x USABLE_PAGES / 100

while the used space (in pages) is readable from USED_PAGES column.
Therefore my final formula would be (the lines may be split by mailers):

            USED_PAGES - PCT_UTILIZED x USABLE_PAGES / 100
  fragm_p = ---------------------------------------------- x 100
                             USED_PAGES

and the final query would be:
select cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
  / USED_PAGES as decimal(9,5)) as "Page fragmentation [%]" from db

Now the percentage shows the percentage of wasted space vs. used space. 0%
would mean the database is fully populated with no holes, 100% is impossible
(as completely empty pages would not be counted), and 99+% means each page
is filled with something small just to allocate it.

PART 2.
Beyond how much space is wasted inside pages, we would also be interested
in how many empty pages we are losing due to the partition-allocation scheme.
Again the math first. Same formula (but now in MB):

            used - needed
  fragm_p = ------------- x 100
                used

Now needed space is derived from CAPACITY_MB field:

needed = PCT_UTILIZED x CAPACITY_MB / 100

while actual usage is the size to which we can reduce the DB:

used = CAPACITY_MB - MAX_REDUCTION_MB

the resulting formula would look like  (the lines may be split by
mailers):

            (CAPACITY_MB - MAX_REDUCTION_MB) - PCT_UTILIZED x CAPACITY_MB / 100
  fragm_p = -------------------------------------------------------------------- x 100
                            CAPACITY_MB - MAX_REDUCTION_MB

Division by zero cannot happen as TSM server does not allow us to reduce
the DB under one partition.
Now the query for this percentage would be:
select cast(((CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) ) -
   / (CAPACITY_MB - MAX_REDUCTION_MB) * 100 -
   as decimal(9,5)) as "Allocation waste [%]" from db

And the final big-big query would look like:
select cast(USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100 -
   as decimal (20,3)) as "Unused page parts [pages]", -
   cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
   / USED_PAGES as decimal(9,5)) as "Page fragmentation [%]", -
   cast( (CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) as decimal (10,2)) -
   as "Overallocated space [MB]", -
   cast(((CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) ) -
   / (CAPACITY_MB - MAX_REDUCTION_MB) * 100 -
   as decimal(9,5)) as "Allocation waste [%]" from db
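 
To run it from the shell you can paste the statement into a macro file (the
trailing "-" characters are the usual macro line-continuation characters) and
feed it to the administrative client; the admin ID, password and file name
below are placeholders:

  dsmadmc -id=admin -password=secret "macro dbfrag.mac"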

If someone already invented these formulae, I would congratulate him/her.
Even if I am the first who dared to do this hard work, there is no Nobel prize
for mathematics :-))

Zlatko Krastev
IT Consultant






Remco Post [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
20.10.2003 13:30
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Online DB Reorg


On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was allready looking at Eric (he probably saved my thingy somewhere
usefull, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http

Re: SQL to determine what offsite volumes are needed for move data

2003-10-20 Thread Zlatko Krastev
-- This list of tapes is used by the operators to go and fetch the tapes to
be brought onsite.

I cannot figure it out! For what reason do you prefer to add more risk to
your offsite copies by bringing them onsite? Before new copies reach the
vault you have *no* offsite copies (or fewer copies than
designed).
I can accept Richard's suggestion assuming he is doing the delete and
subsequent stgpool backup *before* retrieving the volumes onsite. But in
that scenario tracking what is to be retrieved might be an error-prone process.
Yes, TSM has rather complicated handling of offsite copies, but this
is done intentionally!!

Zlatko Krastev
IT Consultant






Richard Sims [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
16.10.2003 18:34
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: SQL to determine what offsite volumes are needed for move 
data


...
Our tape copy pools are kept offsite.  Reclamation for these pools is
handled by leaving the TSM reclamation threshhold at 100% so that he
never
does reclaims on his own.  On a regular basis, we run a job that queries
the server for copy pool tapes with a reclamation threshhold greater than
'n' percent.  This list of tapes is used by the operators to go and fetch
the tapes to be brought onsite.  They then run a job that updates those
tapes to access=readwrite, and issues a 'move data' command for each
tape.

Now the problem.  Some of these 'move data' processes treat the volume
that is being emptied as 'offsite', even though the volume has been
loaded
into the library and its access updated to readwrite.  I'm pretty sure
the
reason for this is that the volumes in question have files at the
beginning and/or end that span to other copy pool volumes which are
themselves still offsite.
...

Bill - I gather that you are not using DRM.  I started out doing much the
   same as you, in our non-DRM offsite work.  Then I realized that I
was making it much more complicated than it need be...

You can greatly simplify the task and eliminate the problems you are
experiencing with the inevitable spanning issue by just doing
 DELete Volume ... DISCARDdata=Yes
The next Backup Stgpool will automatically regenerate an offsite volume
with that data, and pack it much fuller than Move Data can.  It's a win
all around.
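 
In command form, a hedged sketch of that cycle for a single copy-pool volume
(the volume and pool names are placeholders):

  delete volume ABC123 discarddata=yes
  backup stgpool tapepool copypool maxprocess=2 wait=yes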

  Richard Sims http://people.bu.edu/rbs


Re: Storage agent different version from TSM server.

2003-10-18 Thread Zlatko Krastev
From my past knowledge of AIX the patches were called PTFs. Something like
4.2.1.1, 4.3.3.27 or 5.1.0.38 of particular package or set of packages.
OTOH when bundle of patches goes through more testing, it is considered
less temporary and is called Maintenance. Going far back to AIX 3.2 or
4.1 I recall maintenance levels also mentioned as PTFs but haven't seen
this for newer versions and assumed that such practice was abandoned.

As I have not seen formal definition what is in and what is out of the
term PTF, I can only rely on my interpretation. As I am only a human being
and to err is human I should stand corrected.

Last minute clarification: if we stick to the wording, the mixing of PTFs
might be allowed. But will the mixing of interim fixes then be allowed
and supported?
Maybe v5.2.1.0 storage agent will be tested and supported with v5.2.0.0,
while it might be not with v5.2.0.1 or v5.2.1.1 server.
Where should we draw the border?

Zlatko Krastev
IT Consultant






Andrew Raibeck [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.10.2003 19:11
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Storage agent different version from TSM server.


PTF == Program Temporary Fix
4.2.2.0, 5.2.1.0, etc., are PTFs

PTF != Patch
4.2.2.6, 5.2.1.1, etc. are patches (or interim fixes, as we are now
being asked to call them)

It is conceivable that some APAR fixes could require you to update both
server and storage agent in order to acquire the fix. Outside of those
cases, the intent of different PTF levels can be supported statement
means that you can use 5.2.0.x server with 5.2.1.x storage agent, or
5.2.0.x storage agent with 5.2.1.x server, etc.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Zlatko Krastev [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/17/2003 08:47
Please respond to ADSM: Dist Stor Manager

To: [EMAIL PROTECTED]
cc:
Subject:Re: Storage agent differrent version from TSM
server.


Quotations from IBM EMEA Announcement Letter ZP03-0215:
With V5.2, different PTF levels can be supported, in most cases, between
the storage agent and Tivoli Storage Manager server.
A Tivoli Storage Manager V5.2 SAN storage agent will not operate with a
previous version of the Tivoli Storage Manager Server.
A  Tivoli  Storage Manager V5.2 Server will not operate with a previous
version of the Tivoli Storage Manager SAN storage agent.

PTF stands for Program Temporary Fix, in other words patch. Thus the
statement says:
- mixing of neither versions (v5.2 + v4.2) nor releases (v5.1 + v5.2) is
allowed;
- you can mix patch levels but not maintenance. Example: 5.2.1.1 server
and 5.2.1.0 storage agent, but not 5.2.1.1 server and 5.2.0.0 storage
agent.
- you *can* (but without 100% promise) mix server and storage agent at
different patch levels, *in most cases* but not always! Some of the
combinations might be supported (but up to now I am not aware is the list
of proper combinations public). Guessing example: 5.2.1.1 server +
5.2.1.0 StA might be valid, while 5.2.1.0 server + 5.2.1.1 StA might still
be unsupported.

IMnsHO this clearly answers in writing (at the date of announcement - 8-th
of April) to concerns expressed.

Zlatko Krastev
IT Consultant






Bugs [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
16.10.2003 11:20
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Storage agent differrent version from TSM
server.


Hi,
For versions prior to TSM 5.2, the TSM Server and Storage Agent must be the
SAME version.
1. A new feature in TSM 5.2 is that there can be different versions for
Server and SA (but I have not tested it).
2. I do not see any information about using TSM Server 5.2 with Storage
Agent 5.1 and I think that the new feature is valid only for mixing
different versions of TSM 5.2 Server and TSM 5.2 Agent. For example - TSM
Server 5.2.1.1 and TSA 5.2.0.0, etc.

Best Regards,
Svetoslav Tolev

Phone:  +359(2)9753629
Mobile: +359(88)8817629
email:  [EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Wira
Chinwong
Sent: Thursday, October 16, 2003 5:43 AM
To: [EMAIL PROTECTED]
Subject: Storage agent differrent version from TSM server.

Who has experience using a storage agent that has a different version
from the TSM Server?
The storage agent often hangs and goes down. I plan to install a fix or
upgrade the
storage agent.
But I don't know what the impact is if I upgrade to a newer version or
install a fix only.
My environment is a TSM server 5.1.1.0

Re: Help needed using seagate STT20000A

2003-10-18 Thread Zlatko Krastev
As your drive is not supported by the TSM driver, you should *not* use that
driver against the drive.

The problem is in the command define path STEVEOIMAGE2_SERVER1 GENDRV_0.0.0
srctype=server
desttype=drive device=mt1.0.0.0  library=GENLIB1
You should use ... device=\\.\Tape0 ... instead.

When you try to address the drive as mt1.0.0.0 you are going through the TSM
device driver. If you address it as \\.\Tape0 it will go through the
native driver.
If there are no other drives, you can even avoid installing the TSM device
driver.
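 
Put together, the corrected command would look roughly like this (server,
drive and library names are taken from your output; the device name assumes
Tape0 is how Windows enumerates the drive):

  define path STEVEOIMAGE2_SERVER1 GENDRV_0.0.0 srctype=server desttype=drive library=GENLIB1 device=\\.\Tape0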

Zlatko Krastev
IT Consultant






Steve Ochani [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.10.2003 23:04
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Re: Help needed using seagate STT20000A


Hi again,

Yes \\.\Tape0 is listed

but when I try to pull up its properties I get the following error

http://www.steveo.us/tivoli/7.jpg

I don't know if that is because I'm not using IBM's driver

I don't have any other programs open at the time and the Seagate backup
software service
is off.




On 17 Oct 2003 at 15:15, alkina wrote:




 If you are using the default Windows driver, in that case give

 \\.\Tape0

 Tape0 is the device detected by Windows.


 Regards


 Alkina l

 ADSM: Dist Stor Manager wrote:



 Steve,

 In the TSM MC, under device information, were you able to see Tape0 in
 the Device Name column? Just want to make sure that the device is
 controlled by the native driver.

 Hunny Kershaw




 Steve Ochani
 Sent by: ADSM: Dist Stor Manager
 10/16/2003 02:11 PM
 Please respond to ADSM: Dist Stor Manager

 To: [EMAIL PROTECTED]
 cc:
 Subject:    Re: Help needed using seagate STT20000A






 Hello,

 On 16 Oct 2003 at 8:43, Hunny Kershaw wrote:

 > Steve,
 >
 > The TSM device driver does not support drives with device ID STT20000A.
 > You will need the Windows native driver for this device.
 gt; You will need the Windows native driver for this device.

 That's what I tried to use first. And I got the following error when 
trying
 to do the initial
 device config.

 
 Server Version 5, Release 1, Level 5.0
 Server date/time: 10/13/2003 17:27:29 Last access: 10/13/2003 17:25:23

 ANS8000I Server command: 'define library GENLIB1 libtype=manual'
 ANR8400I Library GENLIB1 defined.
 ANS8000I Server command: 'define drive GENLIB1 GENDRV_0.0.0'
 ANR8404I Drive GENDRV_0.0.0 defined in library GENLIB1.
 ANS8000I Server command: 'define path STEVEOIMAGE2_SERVER1 GENDRV_0.0.0
 srctype=server desttype=drive device=mt1.0.0.0 library=GENLIB1'
 ANR8420E DEFINE PATH: An I/O error occurred while accessing drive
 GENDRV_0.0.0.
 ANS8001I Return code 15.
 ANS8000I Server command: 'define devc GENCLASS1 devtype=GENERICTAPE
 library=GENLIB1'
 ANR2203I Device class GENCLASS1 defined.
 ANS8000I Server command: 'define stgpool GENPOOL1 GENCLASS1
 maxscratch=500 dataformat=NATIVE'
 ANR2200I Storage pool GENPOOL1 defined (device class GENCLASS1).

 ANS8002I Highest return code was 15.
 


 So how do I use the device?




 >
 > Regards,
 >
 > Hunny Kershaw
 >
 >
 > Steve Ochani
 > Sent by: ADSM: Dist Stor Manager
 > 10/15/2003 04:33 PM
 > Please respond to ADSM: Dist Stor Manager
 >
 > To: [EMAIL PROTECTED]
 > cc:
 > Subject:    Re: Help needed using seagate STT20000A
 >
 >
 > Thanks for the reply,
 >
 > I tried what you suggested, uninstalled the TSM device driver (via
 > add/remove programs). Checked the MS driver with the backup utility from
 > Seagate; backup works fine.
 >
 > Installed TSM device driver from CD.
 > Went to device manager, update drivers -- Windows could not find the IBM
 > device driver, even had it scan the CD (there is no INF file for
 > adsmscsi.sys). So I tried forcefully installing it by modifying the
 > winnt/inf/tape.inf file and put in an entry for IBM and for the
 > adsmscsi.sys file. It seemed to install fine, rebooted, checked the
 > driver properties and it seems fine, lists the IBM driver as shown here
 >
 > http://www.steveo.us/tivoli/4.jpg
 >
 > But now when I go into the TSM console and try to start the driver I get
 > the error
 >
 > Unable to start the TSM Device Driver.
 > Reason: The system cannot find the device specified.
 >
 > as shown here
 >
 > http://www.steveo.us/tivoli/5.jpg
 >
 > I then tried, in the driver properties, putting the device over to the
 > left column "Devices controlled by TSM Device Driver", rebooted, but it
 > put the device back to the column "Devices controlled by Native Device
 > Drivers"
 >
 > as shown here
 >
 > http://www.steveo.us/tivoli/6.jpg
 >
 > So how do I properly install this driver if there is no inf file for it.
 >
 > I

Re: / /OREF:CPT444C5 TSM - Too many tapes being used

2003-10-18 Thread Zlatko Krastev
There is more than one statement in your mail I cannot understand/accept:
-- I am not using collocation
If you are doing backups to *one* storage pool hierarchy (one disk pool
migrating to one tape pool) and the tape pool is not collocated, you ought
to fill no more than 3 LTO-1 tapes or 2 LTO-2 tapes with those 300 GB.
Even if we assume you have three nodes with 100.1 GB each, doing backups
in parallel, you may end up with 3 tapes filled with 100 GB and 3 tapes
with 0.1 GB. That is a total of 6 tapes, and with no collocation the
remaining nodes should append to the latter 3 tapes instead of a 7th scratch.
Another possibility might be COMPRESSAlways=No and some large
non-compressible files on the node(s). A file is written to tape compressed,
grows beyond its original size, and is re-sent uncompressed. As a result the
first write is discarded and there is wasted space on the tape.
Is the DB backup tape counted in those 7?
Output of q stg f=d might shed some light on the problem. Output of q
v for those 7 volumes also might reveal something. A SQL select might
also help - select distinct node_name,stgpool_name,volume_name from
volumeusage
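 
If COMPRESSAlways turns out to be the cause, a hedged sketch of the client
options involved (the exclude pattern is only an illustration - adapt it to
the file types that do not compress): compressalways yes keeps the first,
grown copy instead of re-sending it, and exclude.compression skips
compression for files that will not shrink.

  * client options (dsm.sys server stanza on AIX)
  COMPRESSIon         yes
  COMPRESSAlways      yes
  EXCLUDE.COMPRESSION "/.../*.gz"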

Are those nodes starting backups simultaneously? I can imagine some odd
sequence:
- all seven nodes start within a short period
- each is requesting via Storage Agent to mount a tape
- server is defining 7 scratches to the pool and is providing the names to
each storage agent
- first three mount requests are satisfied by the server and backups are
done
- the remaining four storage agents are waiting for their already designated tapes
to be mounted as the requests are already made
- when one of first three nodes finishes its backup a tape drive is freed
- mount request of another storage agent is satisfied and designated tape
is mounted instead of partially full tape from previous node
Digging the actlog when backups start might prove or reject the guess. If
it is true, it will mean each node goes to its own tape. Therefore the
tape with 0.7% utilization should contain only 700 MB from one node. That
node is a very good candidate for LAN backup vs. LAN-free. After the server
mounts the volume, the storage agent still needs to open the device.
During that time a LAN backup will have already finished. Diskpool will
further improve backup time.


-- I am not using ... a copy pool.
Aren't you afraid of media failure? Even if LTO is very reliable, its
reliability can never be 100%!


-- SANergy is not supported by IBM.
Definitely incorrect. The product is still sold by IBM and, as any other IBM
Software product, is delivered with 1 year of support.
A limited-use SANergy license is also part of the ITSM for SAN license and
allows backups to SAN-shared *file* pools. SAN-sharing of a random-access
disk pool is not possible, but it is supported for sequential file pools.

Zlatko Krastev
IT Consultant






Cecily Hewlett [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
17.10.2003 18:29
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:/ /OREF:CPT444C5 TSM - Too many tapes being used


Please help me.

I am backing up 7 nodes, running TSM 5.1 on AIX 5.1 and AIX 4.3.3.
Total data backed up = +- 300 GB.
I am using an LTO3583 library, with 3 x 3580 drives, across a SAN.

Backup times are great, but TSM is being very wasteful with tapes,
using up to 7 tapes every night; some of them have only 0.7%
utilization.

I am not using collocation, or a copy pool.
I tried to use a diskpool on my Shark and then move the data to tape,
but SANergy is not supported by IBM.

Does anyone have any suggestions?

Cecily Hewlett


Re: Storage agent differrent version from TSM server.

2003-10-17 Thread Zlatko Krastev
Quotations from IBM EMEA Announcement Letter ZP03-0215:
With V5.2, different PTF levels can be supported, in most cases, between
the storage agent and Tivoli Storage Manager server.
A Tivoli Storage Manager V5.2 SAN storage agent will not operate with a
previous version of the Tivoli Storage Manager Server.
A  Tivoli  Storage Manager V5.2 Server will not operate with a previous
version of the Tivoli Storage Manager SAN storage agent.

PTF stands for Program Temporary Fix, in other words patch. Thus the
statement says:
- mixing of neither versions (v5.2 + v4.2) nor releases (v5.1 + v5.2) is
allowed;
- you can mix patch levels but not maintenance. Example: 5.2.1.1 server
and 5.2.1.0 storage agent, but not 5.2.1.1 server and 5.2.0.0 storage
agent.
- you *can* (but without a 100% promise) mix server and storage agent at
different patch levels, *in most cases* but not always! Some of the
combinations might be supported (but up to now I am not aware whether the
list of proper combinations is public). Guessing example: 5.2.1.1 server +
5.2.1.0 StA might be valid, while 5.2.1.0 server + 5.2.1.1 StA might still
be unsupported.

IMnsHO this clearly answers, in writing (as of the announcement date, 8th
of April), the concerns expressed.

Zlatko Krastev
IT Consultant






Bugs [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
16.10.2003 11:20
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Storage agent differrent version from TSM server.


Hi,
For versions prior to TSM 5.2, the TSM Server and Storage Agent must be the
SAME version.
1. A new feature in TSM 5.2 is that the Server and SA can be at different
levels (but I haven't tested it).
2. I don't see any information about using TSM Server 5.2 with Storage
Agent 5.1, and I think the new feature is valid only for mixing different
levels of the TSM 5.2 Server and TSM 5.2 Agent. For example, TSM Server
5.2.1.1 and StA 5.2.0.0, etc.

Best Regards,
Svetoslav Tolev

Phone:  +359(2)9753629
Mobile: +359(88)8817629
email:  [EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Wira
Chinwong
Sent: Thursday, October 16, 2003 5:43 AM
To: [EMAIL PROTECTED]
Subject: Storage agent differrent version from TSM server.

Who has experience using a storage agent at a different version than the
TSM Server?
The storage agent often hangs and goes down. I plan to install a fix or
upgrade the storage agent, but I don't know what the impact is if I upgrade
to a newer version or install a fix only.
My environment is a TSM server 5.1.1.0 and Storage agent 5.1.1.0.

Thanks you for helping.



Wira Chinwong


Re: Backup copy group definition

2003-10-16 Thread Zlatko Krastev
On Day 2 your previous copy will be expired because Versions Data Exists is
limited to 1. If you need to keep versions for N days, you should set
VERE=nolimit and VERD=nolimit.
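A minimal sketch of such a change (assuming the default STANDARD
domain/policy set/management class; substitute your own names), keeping
versions by age rather than by count:
 upd copygroup STANDARD STANDARD STANDARD STANDARD type=backup verexists=nolimit verdeleted=nolimit retextra=7 retonly=7
 activate policyset STANDARD STANDARD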

Zlatko Krastev
IT Consultant






Nicolas Savva [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
13.10.2003 17:30
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Backup copy group definition


Hi to all

I have a question regarding a backup copy group definition.

Let's say that I want my incremental backups to be stored for 7 days and my
selective backups for 31 days.

I have the following settings in my TSM for a backup copy group:

1. For incremental backup:

 Copy Group Name: STANDARD
 Versions Data Exists: 1
 Versions Data Deleted: 1
 Retain Extra Versions: 7
 Retain Only Version: 7

 2. Selective backup

 Copy Group Name: STANDARD
 Versions Data Exists: 1
 Versions Data Deleted: 1
 Retain Extra Versions: 31
 Retain Only Version: 31


Do you think the above definitions are correct?

Thanx







Re: Drive Backup Rates

2003-10-16 Thread Zlatko Krastev
What about the website
http://www.storage.ibm.com/
You can find a lot of info there.

Also I would ask for what reason you intent to update 3590B - 3590E
instead of going directly to the end of the road (3590B - 3590H). This
would allow you to increase capacity three times instead of just doubling
it still allowing to use same 3590 cartridges.

Zlatko Krastev
IT Consultant






Nelson Kane [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
15.10.2003 22:34
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Drive Backup Rates


Hello All,
Does any one have a doc or know where I can get one
regarding Drive Transfer rates. I currently have 3590B drives
and thinking of upgrading to the E's and replacing the SCSI connection
with Fibre.
Thanks in advance!
Nelson Kane


Re: ASR and TSM server level required (was: Windows 2003 backup to 5.1.7 server)

2003-10-16 Thread Zlatko Krastev
Thank you very much, Mike. That was an important clarification!

Zlatko Krastev
IT Consultant






Mike Dile [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
16.10.2003 02:01
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:ASR and TSM server level required (was: Windows 2003 backup to 
5.1.7
server)


Just to clarify:

The use of TSM recovery with Windows ASR (Automated System Recovery)
delivered in the TSM v5.2.0 Windows client does *not* require a TSM v5.2.0
server.  ASR support is available for Windows XP and Windows 2003 Server
systems.

The use of Windows Volume Shadowcopy Services (VSS) to obtain system state
and services backups with the TSM v5.2.0 Windows client for Windows 2003
Server *does* require a TSM v5.2.0 server.

You can obtain system object backup of Windows 2003 Server using the
non-VSS legacy methods (just as was done in  Windows 2000) by using the
TSM v5.1.6 client to a 5.1.x server.

Regards,

Mike Dile
Tivoli Storage Manager Client Development
Tivoli Software, IBM Software Group
[EMAIL PROTECTED]


Re: Backup copy group definition

2003-10-16 Thread Zlatko Krastev
If your files have unique names, VERE=1 is equivalent to VERE=nolimit
from a policy point of view.
But then all those uniquely named files will always be ACTIVE in TSM unless
you delete them from the filesystem. Only after you delete them do the VERD
and RETO settings apply. RETE/VERE are used only when you have many versions
of the same file name.

If your DB2 DBA is keeping the backups as files for, say, 31 days and only
then deletes them: 31 days (while the file is active) + 31 days (retain
only version) = 62 days!
Maybe it would be better to convince the DBA to use the "backup ... use tsm"
command in DB2 instead of dumping to a file - DB2 has a built-in agent
for TSM.
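A minimal sketch of such a backup from the DB2 command line (SAMPLE is a
placeholder database name; an online backup assumes archive logging is
enabled on that database):
 db2 backup database SAMPLE online use tsm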

Zlatko Krastev
IT Consultant






[EMAIL PROTECTED]
16.10.2003 12:10

 
To: Zlatko Krastev [EMAIL PROTECTED]
cc: 
Subject:Re: Backup copy group definition



Hi,

My environment is based on a Win2k TSM server, with AIX clients and a NAS
client. The NAS server is mainly used for storage from other Windows
servers using a mapped-drive scheme.

One of the Windows servers is running SQL (DB2), and when an administrator
performs a backup, the backup image is dumped on a mapped drive that
resides on the NAS. The filename of the DB2 SQL backup is created based on
the date and time, for example:

/NODE/DB_INCR_BACKUP.20030918202006.1.

The next day when I make a backup, another filename will be created based
on the date and time, for example:

/NODE/DB_INCR_BACKUP.20030919212149.1

So now, as an example, we have two files with different filenames.

My question is:

How can I maintain my backups (incremental for 7 days and selective for
31 days) using backup copy group policies, since TSM policies are always
based on file VERSION?

I hope I was informative enough.

Best regards


  
Zlatko Krastev [EMAIL PROTECTED]
16/10/2003 11:57

To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
cc: Nicolas Savva [EMAIL PROTECTED]
bcc:
Subject: Re: Backup copy group definition
  
  




On Day 2 your previous copy will be expired because Versions Data Exists is
limited to 1. If you need to keep versions for N days, you should set
VERE=nolimit and VERD=nolimit.

Zlatko Krastev
IT Consultant






Nicolas Savva [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
13.10.2003 17:30
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Backup copy group definition


Hi to all

I have a question regarding a backup copy group definition.

Let's say that I want my incremental backups to be stored for 7 days and my
selective backups for 31 days.

I have the following settings in my TSM for a backup copy group:

1. For incremental backup:

 Copy Group Name: STANDARD
 Versions Data Exists: 1
 Versions Data Deleted: 1
 Retain Extra Versions: 7
 Retain Only Version: 7

 2. Selective backup

 Copy Group Name: STANDARD
 Versions Data Exists: 1
 Versions Data Deleted: 1
 Retain Extra Versions: 31
 Retain Only Version: 31


Do You think the above definitions are correct?

Thanx















Re: Windows 2003 backup to 5.1.7 server?

2003-10-14 Thread Zlatko Krastev
Eric,

you need both the v5.2 client and v5.2 server if you want to exploit the
new Automated System Recovery feature of Windows Server 2003.
If v5.1.6 (or later) client and/or v5.1.x server are used you can only
have System State/System Object backups and restores.

Zlatko Krastev
IT Consultant






Loon, E.J. van - SPLXM [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.10.2003 16:08
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Windows 2003 backup to 5.1.7 server?


Hi *SM-ers!
In the TSM client manual for Windows 5.2 I found the following line:

Your Windows Server 2003 client must be connected to a Tivoli Storage
Manager Version 5.2.0 or higher server. (page 86)

Does this mean that Windows 2003 system state recovery doesn't work when
you
are connected to a 5.1.7 server?
This is a little bit unclear because the 5.1.6 client readme file states
that Windows 2003 is supported, but the (probably older) 5.1 Windows
Backup-Archive Clients Installation and User's Guide doesn't list Windows
2003 as a supported platform and thus doesn't discuss system state
recovery.
We can't upgrade to 5.2 because of our current AIX level...
Thanks in advance for any reply!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: TSM 5.1 server on AIX 5.2 64-bit?

2003-10-14 Thread Zlatko Krastev
Both irrelevant and incorrect.

The question was about the server, not the client. In IBM Announcement
Letter ZP02-0494 (EMEA, find US/Canada equivalent) for ITSM v5.1.5 you can
find:
Additional operating system support
...
AIX 5.2 support for the Tivoli Storage Manager server, LAN and LAN-free
backup-archive client (except HSM), Lotus Domino component of Tivoli
Storage Manager for Mail, and ESS for Oracle, and ESS for DB2 components
of Tivoli Storage Manager for Hardware
...
...
The Tivoli Storage Manager server on AIX requires the following hardware
and software:
...
Operating system - IBM AIX 4.3.3 (32-bit) or later or, IBM AIX 5.1 (32-bit
or 64-bit) or later

Regarding the client - you can look at
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/AIX/AIX64bit/v516/IP22651.README.AIX51.
All TSM packages require following operating system level and components
...
or
AIX 5L 5.2 PPC

Bottom line: You can use both TSM server and client in 64-bit mode on AIX
5.2 if you are at or beyond particular maintenance level. In fact the
client is supported *only* in 64-bit mode on AIX 5.2 according to the same
Announcement Letter!

Zlatko Krastev
IT Consultant






John Monahan [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.10.2003 17:36
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: TSM 5.1 server on AIX 5.2 64-bit?


If you send an email to [EMAIL PROTECTED] with the subject of
52_Install_Tips, the following information is part of the email:

 TSM 5.1 Not Supported on AIX 5.2

Tivoli Storage Manager (TSM) 5.1 is not compatible with AIX 5.2 and will
cause a system crash if installed. TSM 5.1 is shipped in error on the AIX
5L
for POWER V5.2 Bonus Pack (LCD4-1141-00) dated 10/2002, and should not be
installed.

Once TSM 5.1 is installed on AIX 5.2, the system will crash on every
reboot.
To recover from this situation, the system will have to be booted in
maintenance mode from the AIX 5.2 installation media or a system backup
(mksysb) to uninstall the tivoli.tsm.* filesets. Alternatively, the
following line can be commented out in /etc/inittab by inserting a ':'
(colon) at the beginning of the line.

 adsmsmext:2:wait:/etc/rc.adsmhsm > /dev/console


__
John Monahan
Senior Consultant Enterprise Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.compures.com




 Dan Foster [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 10/14/2003 08:39 AM
 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 To: [EMAIL PROTECTED]
 cc:
 Subject: TSM 5.1 server on AIX 5.2 64-bit?






1. Will that combination work?

2. Is that a supported combination? Or is only TSM 5.2 server on AIX 5.2
   supported?

The docs I've seen to date haven't directly addressed TSM 5.1 server on AIX
5.2 64-bit, and I don't have a spare 64-bit 5.2 machine to test with at the
moment. (I've got plenty of 32-bit 5.2 machines; verified the client works
fine.)

Thanks!

-Dan


Re: SV: Windows Server 2003 and TSM

2003-10-14 Thread Zlatko Krastev
Misleading!
It can also be done using v5.1.6 (or later) client and/or v5.1.x server.
See my previous reply on the other thread.

Zlatko Krastev
IT Consultant






Christian Svensson [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.10.2003 14:35
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:SV: Windows Server 2003 and TSM


Hi William!
It's included in the standard ITSM client (and is not a TDP) ;)
But you must use client level 5.2.x.x and TSM Server 5.2.x.x

/Christian

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Bill
Fitzgerald
Sent: 14 October 2003 13:20
To: [EMAIL PROTECTED]
Subject: Windows Server 2003 and TSM


My company is moving to use Windows server 2003 Active directory.

Is there a TDP for backing up servers using this?

What is the process for backing up a server using Active directory on a
Win 2003 server?



William Fitzgerald
Software Programmer
Munson Medical Center
[EMAIL PROTECTED]


Re: Anyone using OTG DiskXtender?

2003-10-14 Thread Zlatko Krastev
Have in mind that LTO is not so good for *any* HSM due to long mount time.
Usually IBM 359x/STK 9x40 will provide you much faster recall.

Zlatko Krastev
IT Consultant






Bill Boyer [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.10.2003 20:39
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Anyone using OTG DiskXtender?


It will be the latest DiskXtender 2000 product and the host will be a
Windows 2000 server, TSM client 5.1.6 and TSM server 5.1.7.2 on AIX 5.2
64-bit (p630 processor), connected via SCSI to an IBM 3584 LTO-1 library
with 3 drives.

Bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Dearman, Richard
Sent: Tuesday, October 14, 2003 1:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Anyone using OTG DiskXtender?


I tried it a couple of years ago and it was terrible.  What version of
Diskextender are you running?  What os and version of TSM are you running?

Thanks

-Original Message-
From: Justin Case [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 14, 2003 12:16 PM
To: [EMAIL PROTECTED]
Subject: Re: Anyone using OTG DiskXtender?

FYI: Yes, I am running DiskXtender on a Windows-powered (W2K) NAS 300G with
11 million files, and I am not pleased at all with the response times when
we have to fetch files from tape, using Tivoli Storage Manager on the back
end. I have been running it for about 2 years.

Justin Case
Duke University
919 684-3421




Bill Boyer [EMAIL PROTECTED]@VM.MARIST.EDU on 10/14/2003 11:10:35
AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:[EMAIL PROTECTED]
cc:

Subject:Anyone using OTG DiskXtender?

I have a client that wants to install DiskXtender on a Windows2000 server.
Anyone have experience doing/running this? Pitfalls..things to look out
for???

Bill Boyer
Some days you are the bug, some days you are the windshield. - ??


Re: TSM Agent for Lotus Notes

2003-10-14 Thread Zlatko Krastev
Replication of the database to the server and backup of the server
replica.

Zlatko Krastev
IT Consultant






John Stephens [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
14.10.2003 22:18
Please respond to jws


To: [EMAIL PROTECTED]
cc:
Subject:TSM Agent for Lotus Notes


Is there still an agent or TDP for Lotus Notes version 4.x?
I am aware of the TDP for Domino, but what about Notes users - what do they
use for TSM backups?

Thanks

John Stephens
www.Storserver.com


Re: Backup Stgpool Process hangs, not cancelable

2003-10-12 Thread Zlatko Krastev
Have you looked at the value of Current Physical File? It is 55,767,181,387.
The TSM server does not increment the Bytes Backed Up value until the
transaction finishes, and that may take a lot of time (calculate yourself
how long 55 GB should take with your tape technology). However many times
you query, the result will be the same - the file is not yet successfully
backed up. And if there is some unsatisfied mount request pending, it may
even finish with a failure.
If you issue the cancel process command, the cancel stays pending and the
process still ends only after the transaction completes.

In the end, if you do not have enough patience to wait for successful
completion, the only method to force-cancel the process is the halt
command! Sorry, no better answer.
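As a rough worked example (assuming the drive streams at about 14 MB/s
uncompressed - adjust for your actual technology): 55,767,181,387 bytes /
14 MB/s is roughly 4,000 seconds, i.e. over an hour for that single
transaction before the counters move.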

Zlatko Krastev
IT Consultant






Gerd Becker [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
12.10.2003 11:23
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Backup Stgpool Process hangs, not cancelable


Hi TSM'ers,

my daily backup storagepool process had been working fine until yesterday.
The process is started by a central script. All storage pools are copied
fine within a few minutes. But the last of them, TAPE_BACKUP_001, probably
the biggest, has not finished for more than 20 hours. I queried the process
several times, but there is no change in the current physical file bytes.
So I tried to cancel the process. The command was accepted, but the process
doesn't stop.

 Process Process Description  Status
  Number
 ------- -------------------- --------------------------------------------
     455 Backup Storage Pool  Primary Pool TAPE_BACKUP_001, Copy Pool
                              COPY_BACKUP, Files Backed Up: 26036, Bytes
                              Backed Up: 1,011,460,904, Unreadable Files: 0,
                              Unreadable Bytes: 0. Current Physical File
                              (bytes): 55,767,181,387
                              Current input volume: 82.
                              Current output volume: 26.

- I tried it again:
ANR0941I CANCEL PROCESS: Cancel for process 455 is already pending.

- I put the status of the volume to unavailable - command refused:

ANR2405E UPDATE VOLUME: Volume 82 is currently in use by clients
and/or data management operations.
ANR2212I UPDATE VOLUME: No volumes updated.

- I looked for locks:
show locks:
Lock hash table contents (slots=510):

slot - 181:
LockDesc: Type=17011, NameSpace=0, SummMode=sLock, Key='+PD_MAIN_WE'
  Holder: Tsn=0:11807906, Mode=sLock
slot - 274:
LockDesc: Type=17011, NameSpace=0, SummMode=sLock, Key='+PD_BKUP_STG_ALL'
  Holder: Tsn=0:11807912, Mode=sLock
slot - 383:
LockDesc: Type=34040, NameSpace=319, SummMode=xLock, Key='11.0'
  Holder: Tsn=0:11928737, Mode=xLock
LockDesc: Type=17011, NameSpace=0, SummMode=sLock, Key='+PD_BKUP_STG'
  Holder: Tsn=0:11809139, Mode=sLock

My question: does anyone have an idea how I can stop this process without
shutting down the server?

Thank you for Help

Regards

Gerd Becker, Emprise Network Consulting


Re: licensing (again... :( ) - some formal statements

2003-10-12 Thread Zlatko Krastev
-- ... is a workstation in my shop

As we see from the discussion, salespeople prefer to treat everything as
servers in your shop :-()


I hope I am not the only one on this list who has the formal definition.

Zlatko Krastev
IT Consultant






Remco Post [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09.10.2003 18:15
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:licensing (again... :( )


Hi all,

since IBM now differentiates its TSM licensing fee based on the workstation
or server role a system has in the company, I assume that IBM has some
definition of what it thinks a server or a workstation is.
Unfortunately, I have been unable to find this definition on the IBM
website. Could anybody point me in the right direction? It's not unlikely
that I will have to buy some licenses in the near future, and I'd like to
know what I need to buy.

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing   Tel. +31 20 592 8008   Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry
didn't even foresee that the century was going to end. -- Douglas Adams


Re: WRONG Answer: dsmfmt file size

2003-10-07 Thread Zlatko Krastev
-- Dumb question, but why would you want OS mirroring when you have TSM do
the mirroring between two Raw LV's

Obviously TSM supports mirroring only of DB & Log volumes. The mirroring of
random access volumes should be done by hardware or at the OS level.
Unless in some future version LVM is used for the diskpool volumes.

-- If I wanted TSM support I would need to do TSM mirroring.

I would be happy to get this written clearly in the manuals. If there is
no such thing in the docs, the statement is incorrect.

-- You could just format a bunch of 2G storage pool volumes.  You know what
size they are and they format a lot faster.

They will format a lot faster and will run a lot slower. Your disk head(s)
will start thrashing when you need to use those volumes under certain
loads. I (personally) would prefer to do more work once and have much less
trouble later.

Zlatko Krastev
IT Consultant


Re: empting a read only tape.

2003-10-07 Thread Zlatko Krastev
There is a solution for almost *every* problem!
1. checko libv your_lib that_ugly_one checkl=no
2a. upv v that_ugly_one acc=dest (for primary)
2b. del v that_ugly_one discard=y (for copy)
3a. rest v that_ugly_one
3b. ba stg primary_pool copy_pool

Throw away the tape after step 1 and do not bother yourself with quick
typing (doing quick and dirty mistakes ;o)))
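Spelled out with full command names (your_lib, that_ugly_one and the pool
names are placeholders), the sequence above is roughly:
 checkout libvolume your_lib that_ugly_one checklabel=no
 update volume that_ugly_one access=destroyed    (primary pool volume)
 delete volume that_ugly_one discarddata=yes     (copy pool volume)
 restore volume that_ugly_one                    (primary)
 backup stgpool primary_pool copy_pool           (copy)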

Zlatko Krastev
IT Consultant






Roger Deschner [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07.10.2003 07:53
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: empting a read only tape.


Yes, this is how it works, ...and, This is a Problem.

I frequently make a tape read only if it is having enough I/O errors to
suspect the tape media. So I want to get it out of circulation, perhaps
to send it for recertification or replacement. (At USD$100/each I do not
casually discard suspect S-DLT tapes!)

But if you are careless about how you do this, it will get reused,
sometimes very quickly. So I always work deliberately - instead of
letting expiration and reclamation remove the data from the suspect
tape, I do a MOVE DATA. Then, with my REUSEDELAY set to 2 days, I have a
chance to grab the tape while it is still Pending.

The best way to remove a suspect tape that is Pending is to do a DELETE
VOL, and then ***IMMEDIATELY*** do a UPDATE LIBVOL libname volname
STATUS=PRIVATE. If you are slow, it will get reused the very next time
that a scratch tape is needed - there's some kind of logic here that
probably makes sense except in this special circumstance. Then leave it
physically in the library until your next database backup, after which
you can safely remove it.
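As a sketch (SUSPECT01 and mylib are placeholder names), that sequence
looks roughly like:
 move data SUSPECT01
 (wait for the volume to empty and go Pending)
 delete volume SUSPECT01
 update libvolume mylib SUSPECT01 status=private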

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
== What would you rather do: Conduct a thorough analysis of ===
= your finances and adjust your investments accordingly, or be =
= tied down next to an anthill and covered with grape jelly? ==
= -- Barbara Brotman, Chicago Tribune ==


On Mon, 6 Oct 2003, Alexander Verkooijen wrote:

We have read-only tapes going to pending and then scratch
all the time.
So I guess the answer to your question is Yes.

Regards,

Alexander


Alexander Verkooijen
Senior Systems Programmer
High Performance Computing
SARA Computing & Networking Services


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Bill Fitzgerald
 Sent: vrijdag 3 oktober 2003 18:51
 To: [EMAIL PROTECTED]
 Subject: empting a read only tape.


 Does anyone know if a tape in read only status becomes a
 scratch tape when it is emptied by expiration or reclamation?


 William Fitzgerald
 Software Programmer
 Munson Medical Center
 [EMAIL PROTECTED]




Re: lan free installation doc

2003-10-07 Thread Zlatko Krastev
Step 1: Upgrade your server to v5.2!!!

The *title* of IBM Announcement Letter ZP03-0216 (EMEA, find the
corresponding U.S. one):
IBM Tivoli Storage Manager, S/390 Edition V5.2 Adds LAN-free Support for
the z/OS Server

Steps 2 ... N: Follow the v5.2 manuals.

Zlatko Krastev
IT Consultant






Wholey, Joseph (IDS DMDS) [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.10.2003 20:48
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:lan free installation doc


Can anyone point me in the general direction of the steps required for a
LAN-free installation and customization?  Sun client TSM v5.2.  z/OS server
TSM v5.1.6.1.  Thanks.


Re: LAN only for UNIX

2003-10-07 Thread Zlatko Krastev
Backup via LAN for me means you do not have any tape drives connected to
the TSM client system. So there need not be any rmtX devices, and this is
normal.

Zlatko Krastev
IT Consultant






Juan Jose Reale [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06.10.2003 21:58
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:LAN only for UNIX



 Hi all TSM experts.
 Well, for me it will be the first time installing the TSM client and TDP
for SAP in a UNIX environment, so I am unsure about backup via LAN. Does
anyone know if it is possible to perform a backup via LAN only in an AIX
environment? How do I generate the rmtX devices on the AIX server if we
will use only backup via LAN? I am using a 4-drive 3584 library.

thank you... Regards.






Re: policies

2003-10-07 Thread Zlatko Krastev
The ability to have different management classes with different retentions
in them was invented for exactly that purpose. What is preventing you from
using two classes?
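A rough sketch of the two-class approach (domain, policy set, pool and
schedule names below are placeholders; adjust to your own policy domain,
then associate the nodes with def association as usual):
 def mgmtclass mydomain mypolicy mc_daily31
 def copygroup mydomain mypolicy mc_daily31 type=archive dest=archivepool retver=31
 def mgmtclass mydomain mypolicy mc_monthly365
 def copygroup mydomain mypolicy mc_monthly365 type=archive dest=archivepool retver=365
 activate policyset mydomain mypolicy
 def sched mydomain arch_daily action=archive objects="/data/*" options="-subdir=yes -archmc=mc_daily31" starttime=21:00 period=1 perunits=days
 def sched mydomain arch_monthly action=archive objects="/data/*" options="-subdir=yes -archmc=mc_monthly365" startdate=11/01/2003 starttime=21:00 period=1 perunits=months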

Zlatko Krastev
IT Consultant






Wojciech Zukowski [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01.10.2003 16:42
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:policies


Hi,

I have to define and start backups with policies:
daily archive  - keep for 31 days
and on 1st day of every month archive - keep for 1 year.

how to do this?
with copy storagepools of different schedules?
STSM server is running AIX.


regards
Wojtek


Re: Anyone have memory problems with TSM Server 5.1.7.2

2003-10-07 Thread Zlatko Krastev
-- I would also strive to configure my system with at least twice as much
virtual memory disk space as real memory capacity.

As this is the TSM server, I personally would prefer to have it not
consuming a lot of paging space. vmtune settings and a properly configured
BUFPOOLSIZE ought to achieve good results.
Increasing the size of the paging space very often just hides memory leaks
for longer periods (sometimes causing disk contention and processor
overutilization).
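As a sketch, one might check the database cache hit ratio and adjust the
buffer pool from an administrative client (the 262144 KB value below is
only an example, not a recommendation):
 q db f=d
 setopt bufpoolsize 262144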

Zlatko Krastev
IT Consultant






Richard Sims [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
03.10.2003 20:44
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Anyone have memory problems with TSM Server 5.1.7.2


I'm currently having memory problems with TSM server 5.1.7.2 (32-bit) on
AIX (64-bit). Has anyone heard of a memory leak with this version? I keep
running out of virtual memory, even though I have 4 GB real memory and 3
GB swap space. The box only runs the TSM server. AIX is 5.1 ML 4 with
patches.

Your posting doesn't indicate that system statistics point to any dsm*
process being the oinker, so definitely look into that: you may find
unexpected processes hanging around with acquired memory.
I would also strive to configure my system with at least twice as much
virtual memory disk space as real memory capacity.

If the issue is the TSM scheduler or web client, then the Client Acceptor
Daemon
is what you should look into using, as it is designed to obviate memory
retention issues.

  Richard Sims, BU


Re: p650 with TSM anyone using it?

2003-10-07 Thread Zlatko Krastev
You may consider splitting the load into two TSM servers or instances.
Depending on the type of clients/files/load you may even need to go
further and get p670/p690 to handle the burden. Or to use several
p650/p630 without LPar-ring.
Do your sizing carefully, change later might be a painful process.

Zlatko Krastev
IT Consultant






Tae Kim [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08.10.2003 03:20
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: p650 with TSM anyone using it?


Backing up 2300 servers with about 700,000,000 files. A nightmare to size -
just the sizing of the DB is driving me crazy.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Talafous, John G.
Sent: Tuesday, October 07, 2003 5:21 PM
To: [EMAIL PROTECTED]
Subject: Re: p650 with TSM anyone using it?

How many clients do you NEED to backup? We run a p650 with:

8 1.45GHZ processors
16GB memory
6 3590-H  drives
2.4TB  external disk cache
4x146GB internal disks for 2 AIX instances
8x76GB internal disks for TSM DB and Logs
2 IBM 2109 32 port Fibre switches
14 fibre HBA's

We recently moved to this hardware and are backing up close to 500
application servers (Windoze, Unix, AIX, etc) and almost 2000
workstations
(only the My Documents folder(s)). Our intent is to partition and
separate
the Application Servers from Workstations. I really want two physical
servers in different locations so that they can backup each other. This
is a
step in that direction.

The machine screams!!!

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com


Re: LOG.NSF - backup it up or not?

2003-10-04 Thread Zlatko Krastev
Though the LOG database is not necessary to recover the Domino server
functionality, it holds all the data for later investigation. Otherwise
the reason for failure might not be pinpointed and server to fail again.
IMnsHO it is up to Domino administrator to decide should this database be
backed up or not. We ought to provide the backup service.

Zlatko Krastev
IT Consultant






D. Malbrough [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
02.10.2003 15:01
Please respond to D. Malbrough


To: [EMAIL PROTECTED]
cc:
Subject:Re: LOG.NSF - backup it up or not?


Bill,

We do not back up the log.nsf files either because they are non-critical
to the
recovery of Lotus Domino. This file is often recreated during maintenance
and
that file will become either log1.nsf, log2.nsf, etc and placed in the
mail2 folder
under subfolder logs. It will be considered a redundant backup file.

Regards,

Demetrius Malbrough
TSMV5.1 Certified Consultant

   ---Original Message---
   > From: Bill Boyer <[EMAIL PROTECTED]>
   > Subject: LOG.NSF - backup it up or not?
   > Sent: 01 Oct 2003 19:25:23
   >
   >  Domino R5 and TDP Domino 5.1.5
   >
   >  The sample DSM.OPT file supplied by the TDP agent excludes the LOG.NSF
   >  file. I just had a Domino admin question why it wasn't being backed
   >  up... I had no answer to that.
   >
   >  Would backing up LOG.NSF be a good thing to start doing? Or just leave
   >  the default Exclude?
   >
   >  Bill Boyer
   >  "Some days you are the bug, some days you are the windshield." - ??
   ---Original Message---


Re: Tape Technology Comparison

2003-10-04 Thread Zlatko Krastev
Apples vs. oranges, you ought to know.
Better not to put the 9940B in this basket. LTO is an upper mid-class
drive, while the 9x40 are high-end drives. You can compare them with the
newly announced IBM 3592 drives for the 3494. Searching through the list
archives you can find a lot of discussions making the same point.
Mid-class: ADIC i2000, HPaq ESL9322, IBM 3584, StorageTek L700e/LTO-2
High-end: IBM 3494/3592, StorageTek L700e/9940B

So the comparison should be done in two phases:
- consider cheaper mid-class LTO-2 vs. heavy-duty high-end 3592/9940B. You
may take Richard's suggestion (3592/9940B for primary; LTO-2 for copy
pools) into consideration.
- select within that range the vendor best suitable for you

Zlatko Krastev
IT Consultant






Neil Schofield [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
30.09.2003 15:09
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Tape Technology Comparison


Hello

Anybody out there able to share their thoughts and experiences?

We're doing a technology refresh on some of our libraries and have a
number
of products in the frame:
IBM 3584 / LTO-2
ADIC Scalar i2000 / LTO-2
HP ESL9322 / LTO-2
StorageTek L700e / T9940B

The interesting one from my point of view is the L700e with its T9940B
drives. The performance/capacity comparisons between LTO-2 and T9940B seem
close enough to make no difference which leaves cost and reliability as
the
differentiators.

Reliability will be an important factor in our decision and the T9940B
seems to be marketed as a high duty cycle, 24 x 7 drive. Does anybody have
any real world experiences in a TSM environment which suggest the T9940B is
more (or less) reliable than LTO-2? Should I be concerned about going for
a
'proprietary' technology like 9x40 instead of an 'open' standard like LTO?

Also, any thoughts on the libraries we are considering would be gratefully
appreciated.

For info, no suppliers offered 3590 or 3592. This was probably on grounds
of cost but I'm guessing the first is looking obsolete now while the
second
is a bit too new. Also, we're upgrading from DLT7000 libraries so chances
are we'll be ecstatic with whatever we get.

Thanks
Neil Schofield
Yorkshire Water Services





Re: SQL server backups

2003-09-25 Thread Zlatko Krastev
-- This would cut down backup time (but increase restore time).

While the former statement is true, the latter is incorrect and
misleading. Restore from flat dump will require:
1. Read the file from tape volume on the (TSM) server
2. Write the file to a disk on the SQL server
3. Create SQL server data files (usually takes some time)
4. Read the data from the disk(s)
5. Write the data to SQL data files

Usage of TDP for MS SQL will eliminate steps 2 and 4, i.e. it will eliminate
a whole data read-write pass. Even for 20 GB this is going to save about
10 min. If the dump file is written on the same disks as the data files, it
will cause disk contention. If separate disks are used, the number of
spindles available to MS SQL (and consequently *both* read and write
performance) will be lower.

Bottom line: a properly tuned TDP for MS SQL (or any competitive product)
should perform better!!! This justifies the spending on such products.
Zlatko Krastev
IT Consultant






Todd Lundstedt [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
23.09.2003 20:54
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: SQL server backups


Yes, but you could stripe the full backup across multiple tape drives (if
you have more than one tape drive).  This spreads the database backup
across multiple tapes, and makes reclamation of those files less
problematic.  Note: if you stripe a backup, you *must* send the backup to
a collocate=filespace storage pool.  Otherwise, reclamation could put more
than one stripe for a particular backup on a tape, making restoration
impossible.
Additionally, TDP will allow you to take differential backups of the
database, and even transaction logs, directly to TSM storage.  This would
cut down backup time (but increase restore time).
Hope that helps.
Todd

ps.. we are backing up an 800+ GB SQL database using stripe=3 to LTO1
drives via SAN in 3-4 hours.
pps.. if you implement a FULL with DIFF backup from TDP, make sure your
database is not getting dumped to disk at all, or it invalidates the FULL
on tape, but TSM is unaware of it.




Thomas Denier [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09/23/2003 12:39 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Fax to:
Subject:SQL server backups


My company has a Windows server supporting a very large SQL Server
database.
Before the last hardware upgrade, the database was backed up with the SQL
Server Connect Agent. Since the upgrade the database has been dumped to a
flat file which is then backed up by the backup/archive client. Either way
the backup arrives at the TSM server as a single file containing about
20 gigabytes of data. A single file this size causes a number of problems.
The recovery log gets pinned for hours. Tape reclamation tends to perform
poorly. The system administrator is now considering installing TDP for
SQL Server. Would this software still send the backup as a single huge
file?


Re: Does TSM support Network Load Balanced (NLB) clients

2003-09-23 Thread Zlatko Krastev
As is written in the article you mention ("Note that there is no sharing of
data between servers running NLB"), there is no specific activity, and from
a TSM point of view these are ordinary Windows systems. So TSM ought to
work without problems.
Are you experiencing problems, or do you need assurance before deployment?

Zlatko Krastev
IT Consultant






Mastrangelo, Ralph [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
19.09.2003 12:19
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Does TSM support Network Load Balanced (NLB) clients


We need to back up some Microsoft Network Load Balanced (NLB) clients
(as opposed to Microsoft Cluster Service clustered clients) running
Microsoft SQL. Does either TSM V4.2 or V5.x support this?

If needed, here's a link (http://tinyurl.com/nusf) to an article,
Windows clustering: Microsoft Cluster Service or network load
balancing, which discusses the difference between NLB and Clustering.

Thanks for your help

Ralph


Re: tsm schedule can't handle network disk

2003-09-23 Thread Zlatko Krastev
The problem is that the mapping will not exist when you log off! The other
problem might be that the level of authorization of the TSM Scheduler
service is not enough to perform network operations.

Ensure the following is set up:
1. The scheduler is running under an account with access to the network
share. By default it runs under the local SYSTEM account and has no
privileges to mount/access network resources. Usually a domain account is
necessary.
2. Do not use the Z:\* specification but use \\host\share\* instead.
Example: a file share named Data mounted from server NAS1 will be
accessed as \\NAS1\Data\* (see also the dsm.opt sketch after the schedule
example below).
3. You have forgotten the subdirectories option. This does not cause your
backups to finish with an error but it does not back up everything. When
defining the schedule add options=-subdir=yes:
def schedule sch1-domain schedule1 startdate=09/18/2003 starttime=21:00
objects=\\NAS1\Data\* options=-subdir=yes duration=4 durunit=hours
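For example, one might instead put the UNC path and the subdir option in
the node's dsm.opt (a sketch; NAS1/Data are the names from the example
above):
 domain \\NAS1\Data
 subdir yes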

Zlatko Krastev
IT Consultant






jiangguohong1 [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
19.09.2003 09:38
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:tsm schedule can't handle network disk


hello,
I have a problem; if anyone knows about this problem, please respond to me.
My problem is as follows:

I use TSM 5.1.5 (Solaris platform) to do scheduled automatic backups.

My TSM client is on the Windows platform, at TSM version 5.1.5. My
automatic backup object is a network shared disk; I map the network shared
disk to the local machine, with the mapping letter being the Z:\ drive.
This machine's node name is "test".

I have defined a schedule and associated the schedule with the client:
tsm> def schedule sch1-domain schedule1 startdate=09/18/2003
starttime=21:00 objects=z:\* duration=4 durunit=hours
tsm> def association sch1-domain schedule1 test
On the TSM client "test", when I start the TSM scheduler using the "dsmc
schedule" command, the error information displayed is:
incremental backup volume Z:\*
ANS1228S send object Z:\* fail
ANS1063E specify path not valid
Visual verification shows that the tape library robot doesn't move and the
TSM storage pool data does not change, so the scheduled automatic backup is
not successful. I tried with the local machine disk c:\ and updated the
schedule:
tsm> def schedule sch1-domain schedule1 startdate=09/18/2003
starttime=21:00 objects=c:\* duration=4 durunit=hours
The tape library robot moves and data is backed up to the storage pool; the
scheduled automation succeeds.
The results for the local machine disk c:\ and the network shared mapped
disk Z:\ are not the same. Based on the above description, how can I solve
the problem?

thank you very much.


Re: Backup Of UDB7 Databases

2003-09-23 Thread Zlatko Krastev
It ought to be easy - set the scheduler to run under the user ID of the DB2
instance owner (or of a designated user able to perform all the backups
interactively)!
This is not a TSM issue but a DB2 security issue. DB2 protects the data
from unauthorized access, as every good product ought to. You are not able
to log on as the Windows SYSTEM account, so you cannot recreate the problem
interactively.
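A minimal sketch of changing the log-on account of an already installed
scheduler service, using the built-in Windows sc command (the service name,
domain, account and password below are placeholders for whatever the
scheduler service is actually called on that box):
 sc config "TSM Client Scheduler" obj= "MYDOMAIN\db2inst1" password= "secret"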

Zlatko Krastev
IT Consultant






Jeff White [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
23.09.2003 17:40
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Backup Of UDB7 Databases


Hello,

Perhaps someone could help here...

TSM Server is 5.1.5.2
TSM Client is 5.1.5.0 on Windows NT
UDB7 installed

I have a UDB7 database on a Windows NT server. I have performed all of the
actions required to backup the database and have backed it up several
times
from either the UDB7 Control Centre or via a batch file that executes UDB7
commands. I backup the database to TSM

My batch file looks like this:
CD\
CD SQLLIB\BIN
START /B DB2CMD DB2 BACKUP DATABASE database-name ONLINE USE TSM

I want to back this up via a TSM schedule using the command option. I
point
the client schedule to the location of the batch file that contains the
above command. I have a scheduled service on the NT box, pointing to the
correct options file. I have the options file set to prompted and the
schedule runs fine, except that I see a DB2 Command Line Client screen with
the message 'SQL0567N SYSTEM IS NOT A VALID AUTHORIZATION ID -
SQLSTATE=42602'

All of our TSM scheduled services use the SYSTEM account, including our
many TDP agents and all are fine.

I have the documentation 'Backing Up DB2 Using Tivoli Storage Manager' and
it refers specifically to UNIX and WINDOWS 2000. There is no specific
reference to NT - is it fully supported on Windows NT clients?

I am aware that there is a scheduling function within UDB7 Control Centre
and that this would schedule the backup for me. But my preference is via
TSM schedule because it gives me a greater degree of control

Thanks

Jeff White
Senior Systems Programmer
CIS
[EMAIL PROTECTED]







