Re: 5.1.6.2 Upgrade

2003-02-28 Thread Don France (TSMnews)
Hi Gretchen (and Gerhard),

Hope this finds you doing well. We've sure missed you at SHARE!

Question: Have you done anything new to protect yourself from a bad maintenance
level? Was there something special in 5.1.5 or 5.1.6 that attracted you to upgrade so
soon?

I have a customer wanting v5, and I am not anxious (yet) to go beyond 5.1.1.6 -- the
latest level I like from a server-stability perspective -- but they do not have the
5.1.0.0 CD... they have 5.1.5.0, so I am looking for hints/tips to identify a good
level beyond 5.1.5.

Thanks,

Don

==

My only complaint is the speed of the expiration - it's never fast
enough for me.

Gretchen Thiele
Princeton University


Re: Windows 2000 Server Spec for TSM 5.1

2002-05-22 Thread Don France (TSMnews)

Your backup sizing is quite small;  are you sure it's that small?  There are
Redbooks on sizing for AIX;  also, there is much material from SHARE
proceedings on performance and tuning.

A more typical arrangement (I've seen) for a single-client+server situation
might be 50 GB to start, up to 100 GB total backup occupancy, on a
file/print server that is also a TSM server (and client) -- which I had at
two sites;  we configured with 4-way Dell processors, 512MB RAM, external
RAID for the file-served data, internal drives for the TSM db, log & disk
pool (102 GB).  This was a software development & marketing site; this HW
config was more than sufficient to handle both TSM and file-server loads
with sub-second response time for normal business day users... the main
deficiency was only one tape drive, so we insisted that the primary storage
pool (for backups) stay on disk, with two copy pools to tape (one for onsite, one
for offsite) to fully protect their data -- which we limited to 60 GB of
file server storage.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Dallas Gill [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, May 20, 2002 5:19 PM
Subject: Windows 2000 Server Spec for TSM 5.1


 Can someone please tell me or point me in the right direction to find some
 documentation on how I should spec my TSM Server. I need to find out how
 many CPUs I need and also how much RAM I should have. I am looking to backup
 approx. 1Gb of data first up, then approx 600Mb of data for the incremental
 backups.  This data all resides on the TSM server itself (no other
 clients).
 Could someone please help. I am going to be running TSM Server 5.1

 Thanks.

 DJG



Re: TSM on a win2k cluster

2002-05-22 Thread Don France (TSMnews)

Cannot comment on the drive view or schedule-mode problems -- sounds like
a TSM-cluster service definition discrepancy (else, a bug; need to check the
cluster services setups and client code level, then contact Support Line or
install latest client code -- there's been a bunch of client-code activity
in cluster support this year, both for Win2K and AIX).

Regarding your speed/performance of incremental {If the EMC disks appear as
local drives, you should consider the NTFS journaling-incremental
feature.} -- alternatively, you may want to consider -incrbydate for your
weekday backups;  the speed of progressive-incremental is largely due to the
client traversing the entire file system structure to identify which files
to process -- file systems with large numbers of files (anything over a
half-million) seem to cause performance concerns.  I had a client
that decided to address this issue by limiting their file systems to 100 GB;
starting a new drive-letter when reaching that size greatly helped mitigate
the daily incrementals (AND full file system restores, their main concern).
We recently did 1.6 million file/object restore for a 320 GB file system,
achieved nearly 10 GB/Hr with parallel restore sessions, DIRMC (very
important), DIRSonly and FILESonly options, to minimize NTFS re-org
thrashing, and CLASSIC (vs. no-query) restore path, to ensure minimal tape
mounts.
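
For the weekday -incrbydate idea, a minimal sketch of the server-side schedules (domain, schedule and node names are examples, times arbitrary) might look like this:

  define schedule standard weekday_incr action=incremental options="-incrbydate" starttime=21:00 dayofweek=weekday
  define schedule standard weekend_incr action=incremental starttime=21:00 dayofweek=saturday
  define association standard weekday_incr cluster_node_1
  define association standard weekend_incr cluster_node_1

The weekend schedule runs a normal progressive incremental so that deletions and attribute-only changes (which -incrbydate does not catch) still get processed.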

Result was backups of 12-20 GB/Hr, restores 6-15 GB/Hr, depending on
collocation standards.  High priority (ie, mission critical or
high-visibility) servers get collocated.

Hope this helps.  See also a dozen other posts from the past couple of months;
search for cluster in the subject field.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Firl Debra K [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 04, 2002 2:07 PM
Subject: TSM on a win2k cluster


 How is everyone else's experience using TSM on a Win 2000 cluster?

 Here is ours.


 We have 2 win2k clusters that are just used for file/print.  All the disks
 are EMC DASD connected via fibre.  One (clusterA) has 11 group-disk
 resources, the other (clusterB) has 9.  The clusters are used in any mode:
 active-active with any drive combination, or active-passive.

 ClusterA (308g used of 572g), about 4 million files
 ClusterB (642g used of 719g), about 4 million files.

 Both clusters are just used by users to shares for file and print access.

 Are there others out there that have clusters of this capacity and using
TSM
 to backup them?


 First Attempt.
 I looked at the redbook, sg24-5742-00, Using Tivoli Storage Management in
a
 clustered Windows NT environment.
 I first configured it using the common names method.  Initial complete
 backups of volumes took a while;  we averaged 4-6g/hour.
 It worked somewhat ok, till I tried to kick off the scheduler.  When the
 cluster is in active passive mode, I could get the scheduler to work using
 sched mode prompted, and refer to one dsm.opt file that included all the
 cluster domains, e: - o:.  It would grab the last scheduler that was started,
 look at the dsm.opt file and run from there.  So one schedule ran to
 include all the drives.  NOW, when the cluster is in active\passive mode, I
 could only get it to work with sched polling, and not all the schedulers
 would kick off.  2 to maybe 4 of the 11 would kick off, and only when using a
 separate dsm.opt file per disk cluster group.
 Dealt with tech support, never got more schedulers to kick off than the 4.
 NOTICE that I had to have different configurations depending on which mode the
 cluster was in, whether active-active or active-passive.  So that was not a
 solution.

 Tech support suggested using the unique names method which is the only
 method now referenced in their newer documentation.

 Ok, I registered with TSM a separate node per cluster group.  Created a
 separate scheduler per cluster group, so I have 11 plus the quorum.  The
 initial complete backup speed was about 4-6g/hour.  Now something interesting is
 happening.  On one node, 2 of the cluster group disks can't be seen by TSM.
 The users are using them fine on the server and they are available in the
 operating system.   I tried enabling the scheduler for those 2 disks; TSM
 does not back them up because it does not see them.  I swap the disk cluster
 groups to the other node; TSM can now see them, and manual backups and
 scheduled backups work.  Weird!
 The node that does not recognize the disks thinks they are not clustered
 disks for some reason.  Noticed the error when doing a command line backup.

 The questions.

 Do other businesses have clusters like these with this amount of disk space
 and successfully use TSM to do backups?  Are others experiencing these
 types of issues?


 The issue we see is that the performance of the incrementals using the common
 names method is dreadfully slow.  

Re: Select Stmt? or Query

2002-05-21 Thread Don France (TSMnews)

Also, there *may* be other messages of interest (if b/a-client, 4961, 4959,
and others in close numeric proximity are the session statistics messages).


Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Steve Harris [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, May 07, 2002 7:26 PM
Subject: Re: Select Stmt? or Query


Try something like

select date_time, message
from actlog
where nodename='MYNODE'
and date_time > current_timestamp - 12 hours
and msgno in (1234, 5678, 9876)

Just tune the where clause to select the messages that you want.  Run this
in commadelimited mode and pipe the output through your scripting tool of
choice to give you a pretty report.
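
As a rough illustration of that pipeline (admin ID, password, node name and message numbers below are placeholders -- pick the statistics messages you actually care about, e.g. the 4959/4961 session-statistics messages mentioned above), something like this in a shell script would do:

  #!/bin/sh
  SEL="select date_time, msgno, message from actlog \
       where nodename='MYNODE' \
       and date_time > current_timestamp - 12 hours \
       and msgno in (4959, 4961)"
  # -commadelimited gives one CSV row per result; the grep keeps only the data
  # rows (they start with the year) and drops banners and headers.
  dsmadmc -id=reporter -password=secret -commadelimited "$SEL" |
    grep '^2' |
    awk -F',' '{ print $1, $3 }'

Messages that themselves contain commas would need smarter parsing, but this is enough to feed a formatting script.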

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia

 [EMAIL PROTECTED] 08/05/2002 3:35:57 
Hello All:

I have a unix client that kicks off incremental backups thru Tivoli
using an internal dump process for a SQL database.  I want to provide a
report to the group that monitors its backup.  The only information they
care about is whether a filespace backed up successfully, what files
failed, and how much data was transferred.  For the time being I am
running 'q actlog begind=-1 begint=18:00:00 sea='nodename', because we
try to schedule our backups between 6 p.m. and 6 a.m.  This is too much
information to run thru on a daily basis for them and they were hoping I
could trim down a report.  Does anybody have a select statement or query
that comes close to the needs I have?

Thanks,

Bud Brown
Information Services
Systems Administrator






Re: Tuning TSM

2002-05-17 Thread Don France (TSMnews)

Reading thru this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons:
1. The standard progressive-incremental requires a lot of work in comparing
the attributes of all files in the affected file systems, especially for a
LARGE number of files/directories (whereas archive has minimal database
overhead -- it just moves data).
2. Writes to disk are NOT as fast as tape IF the data can be delivered to
the tape device at streaming speed;  this is especially true if using
no-RAID or RAID-5 for disk pool striping (with parity)... RAID-0 might
compete if multiple paths & controllers are configured.  The big advantage
to disk pools is more concurrent backup/archive operations;  disk-migration
can then offload the data to tape at streaming speed.

So, firstly, debug fundamentals using tar and archive commands (to eliminate
db overhead comparing file system attributes to identify changed
files/objects);  once you are satisfied with the thruput for archive, allow
20-50% overhead for daily incremental. If your best incremental experience
is not satisfactory, (but archive is okay) consider other options discussed
in the performance-tuning papers -- such as, reducing the number of files
per file system, use incrbydate during the week, increase horsepower on the
client machine and/or TSM server (depending on where the incr. bottlenecks
are).
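
A quick way to stage that debugging (the file system name is an example; adjust paths for your environment):

  # 1. Raw read throughput, no TSM involved (tar to stdout forces the files to be read)
  time tar cf - /data > /dev/null
  # 2. Archive: moves the same data with minimal TSM database work
  time dsmc archive "/data/" -subdir=yes
  # 3. Progressive incremental: adds the attribute-comparison overhead
  time dsmc incremental /data

If (1) is already slow, the problem is below TSM; if (2) is much slower than (1), look at network and server tuning; if only (3) is slow, it is the file-system scan, and the incrbydate or smaller-file-system options above apply.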

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they will be in the members-only area (I was just there,
searching for other sessions).


- Original Message -
From: Ignacio Vidal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every
day...), backup jobs start about 00:15/01:15

I've monitored the nodes in disk i/o operations and network transfers,
in different moments of the day.
About cpu load/memory usage/pagination: the values are all OK, for
example:
- cpu load (usr) has an average of 5~7 during all day
- memory usage (have 7GB RAM) is not a problem (about 30~40% for
computational, the same for noncomp)
- pagination: max use may be 10~12% (mostly of the day 0.5%, peaks
during user's work time)

Viewing your results (500GB in 3:10hs), and trying to compare: we're
backing up 250GB starting 00:15 and ending 07:00... 250GB in 6:45hs
(it's not good)

Yesterday I set resource utilization = 10 too, and performance was the
same (really a bit worse).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best
from FastT500), we have from 15 to 25 MB/sec in the storage...

Then: all TSM nodes in our installation have not the same file
configuration.
I explain a bit more this: we have nodes merging a lot of files
(25) with an average size of 40KB each and a few files (1000) with
an average size of 50MB (it's Oracle Financials: the database server
keeps datafiles and files belonging to application and DB motor).

We have 4 nodes such as just described, with a total of 35~40GB for each
(average and growing...)

Well, here was a brief description.
I'm listening for new ideas.
Thanks

Ignacio


 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 17, 2002 5:29
 To: [EMAIL PROTECTED]
 Subject: Re: Tuning TSM


 Just a point
 TCPWindowsize parameter is measured in kilobytes not bytes.
 And according
 to the Administrator's Reference it must be between 0 and 2048. If not in
 range, the client complains with ANS1036S for invalid value. On the server,
 values out of range mean 0, i.e. the OS default.
 However this are side remarks. The main question is why
 client is idling.
 Have you monitored the node during to-disk and to-tape operation? Is
 migration starting during backup? Are you using DIRMC?
 You wrote client compression - what is the processor usage (user)? What is
 the disk load - is the processor I/O wait high? Is the paging space used -
 check with svmon -P.
 You should get much better results. For example, recently we've achieved
 500 GB in 3h10m - fairly good. It was similar to your config - AIX
 node & server, client compression, disk pool, GB ether. Ether was driven
 10-25 MB/s depending on achieved compression. The bottleneck was the EMC
 Symmetrix the node was reading from, but another company was dealing with
 it and we were unable to get more than 60-70 MB/s read.
 Resourceutilization was set to 10.

 Zlatko Krastev
 IT Consultant




 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc:

 Subject:Re: Tuning TSM

 Zlatko:
 Here are the answers:

  Have you tested what is the performance of FAStT 500 disks?
  Those were
  mid-class disk 

Re: mksysb or sysback to library volume

2002-05-06 Thread Don France (TSMnews)

Nice catch, Bill -- that is another essential action, or maybe, mark it
private *before* using the tape...

- Original Message -
From: Bill Mansfield [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, May 05, 2002 1:08 AM
Subject: Re: mksysb or sysback to library volume


 The only thing I would add to Don's excellent description is
 (1a) checkout the tape with option REMOVE=NO
 This will keep TSM from deciding to try to use the tape in the middle of
 the mksysb.



 _
 William Mansfield
 Senior Consultant
 Solution Technology, Inc
 630 718 4238




 Don France (TSMnews) [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 05/03/2002 05:00 PM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: mksysb or sysback to library volume


 Try using tapeutil (or some 3575 library tool, like mtlib on 3494) to
 mount
 a tape in a given drive -- you must choose a scratch tape and ensure TSM
 is
 not using the drive;  then, issue (1) make drive offline to TSM, (2) the
 cmd to mount the tape, (3) the cmd to mksysb to that device, then (4) the
 cmd to dismount the tape... then, manually remove the tape from the
 library
 (else it's still in TSM's inventory and could get used unless you also
 mark
 it private.)

 Since sys. admin. time is more expensive than operator time, we usually
 specify the AIX box to have internal 4mm tape drive, so we script the
 mksysb
 to that drive, daily;  operators rotate the tapes, report any visual
 problems (if the script fails, the tape is not ejected), the script logs
 are
 used for admin. verification.

 Regards,
 Don

 Don France
 Technical Architect - Tivoli Certified Consultant

 Professional Association of Contract Employees (P.A.C.E.)
 San Jose, CA
 (408) 257-3037
 [EMAIL PROTECTED]

 - Original Message -
 From: Martin, Jon R. [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, May 03, 2002 11:17 AM
 Subject: mksysb or sysback to library volume


  Hello,
 
  Possibly I am imagining this whole thing.  However,  I believe it
 is
  possible to backup the AIX operating system directly to a tape in the
  Library, not through TSM necessarily.
 
  What AIX device would I choose to backup to?
  rmt0 IBM Magstar MP 3570 Library Tape Drive
  smc0 IBM Magstar MP 3575 Library Medium Changer
  rmt1 IBM Magstar MP 3570 Library Tape Drive
  rmt2 IBM Magstar MP 3570 Library Tape Drive
  rmt3 IBM Magstar MP 3570 Library Tape Drive
 
  What if anything needs to be done to specify the tape cartridge(s) that
 will
  be used?
 
  If anyone is doing this, a little insight to get me started would be
 greatly
  appreciated.
 
  TSM version 3.7.2
  Operating System: AIX 4.3.3 ML9
  Server: IBM 7026-H50
  Library: 3575-L32
 
  Thank You,
  Jon Martin
  AIX/Tivoli/TSM/SAN Administrator



Re: Management Class for Directorys on Linux

2002-05-05 Thread Don France (TSMnews)

You must realize the Linux directories (likely) fit nicely within the
available space in the TSM server database... how do you know Linux clients ignore
your DIRMC?  (The only simple way I would know how to test is to create a
path greater than 160 bytes -- I think the TSM db only has room for about
150 bytes or so.)

- Original Message -
From: Andreas Woehrle [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 11, 2002 1:34 PM
Subject: Management Class for Directorys on Linux


Hi all,

I have a TSM Server Version 4.2 on WIN2K.
I have policies for WINNT, NOVELL and LINUX with a Management Class for Data
and Directories.
For WINNT and NOVELL all is OK, but the Linux clients
ignore the Management Class for the Directories. When I look in the GUI
Client all seems good.
Has someone the same problem?

Thanks
Andreas Woehrle



Re: Help Understanding Mgmt classes

2002-05-03 Thread Don France (TSMnews)

Diana,

I guess I added to your confusion... I will try to clarify.  You CAN use
the policy set trick to flip between modified and absolute;  that's about
the only option that will help for a single-node-name solution.  Any other
attributes that are changed in the copy-group could adversely affect the
desired version count subject to expiration.

So, in your example, you could (once a week, when you have the cycles to
handle) activate a policy set that sets absolute for all nodes in that
domain.  Then on Monday, re-activate the normal policy set for modified
incrementals.  Assuming you have identical ve/vd/re/ro parameters, with
ve/vd both = 30, you will have 30 versions (max) of any given file, for up
to re/ro number of days.
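
A minimal sketch of that weekly swap as two administrative schedules (domain, policy set and schedule names are examples; the two policy sets are assumed to differ only in the copy group MODE):

  define schedule abs_sunday type=administrative active=yes starttime=06:00 dayofweek=sunday -
    cmd="activate policyset nt_domain abs_set"
  define schedule mod_monday type=administrative active=yes starttime=06:00 dayofweek=monday -
    cmd="activate policyset nt_domain mod_set"

(The trailing dash is the macro continuation character; type each command on one line if you enter it interactively.)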

I hesitate to recommend this approach, because the granularity of control is
at the policy domain level.  I would (firstly) question why your customer
needs to run TSM as if it were Veritas or Legato;  full backups this often
are unnecessary under TSM, due to its progressive incremental technology.
If you must run periodic full backups, I would do it using an alternative
node-name... so you don't get hurt trying to complete the backup in a given
24-hour cycle for ALL nodes in the domain (you'd have node-level
granularity).

Hope this helps.

Don
- Original Message -
From: Diana Noble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 03, 2002 6:22 AM
Subject: Re: Help Understanding Mgmt classes


 Don -

 I think I'm more than a bit confused.  So, according to your first
paragraph, I
 cannot activate a new (different) policy set within a domain and expect
that my
 files will then be backed up according to the mgmtclass specifications in
 the new policy set?

 So what is the best way to swap back and forth between absolute and
modified
 backups, keeping a retention of 30 versions combined.  Would it be best to
 modify my existing management class backup copygroup to absolute or
modified
 depending on what should be done that day, leaving the version count the
same?
 If I change it to absolute, what does that do to the modified backups already
 taken, anything?

 Thank you for your help.

 Diana



 Quoting Don France (TSMnews) [EMAIL PROTECTED]:

  You are a bit confused.  The *ONLY* way to have TWO policies applicable
to a
  given file is to use TWO node-names for your backups;  swapping policy
sets
  *may* work for your situation, if what you want (and set) is 30 versions
of
  a given file... that piece will work.
 
  Files can be bound only to one management class at a time; if you try
  changing MC for the file, it will change ALL versions to that MC, not
just
  the next backup.  The policyset-swap trick is useful when changing from
  modified to absolute and back;  that's about the only use I've ever seen
  for
  multiple policy sets.  Hope this helps.
 
  Regards,
  Don
 
  Don France
  Technical Architect - Tivoli Certified Consultant
  Professional Association of Contract Employees (P.A.C.E.)
  San Jose, CA
  (408) 257-3037
  [EMAIL PROTECTED]
 
  - Original Message -
  From: Diana Noble [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Thursday, May 02, 2002 10:13 AM
  Subject: Help Understanding Mgmt classes
 
 
   Hi All -
  
   I believe I have my management classes all defined with a major flaw.
We
   do scheduled modified backups during the week and scheduled absolute
   backups on Sundays.  I have two management classes defined.  Both have
  the
   same retentions coded but one has absolute for the copy mode and one
  has
   modified coded.  I have a script that swaps the default management
  class
   on Sundays.  After rereading the manual and looking at the archives of
  this
   list, it seems there's no guarantee that the backup will use the
default
   Management class.  Also, if I've specified to keep 30 versions of the
  data
   in both management classes, does that mean I'm going to retain 30
  versions
   from the absolute and 30 versions of the modified?  I really want
30
   versions all together.
  
   My thought is to create multiply policy sets, and activate the policy
set
   that contains only the management class I want.  I would then specify
a
   retention of 4 versions for my policy set that contains the management
   class for absolute.  This won't delete any of my 30 versions that
were
   saved using the policy set that contains the modified management
class,
   will it?  Does this make sense, or am I still way off here?
  
   Diana
 



Re: dsadmc -consolemode

2002-05-03 Thread Don France (TSMnews)

Probably not... however, if you are using AIX, you could run the dsmulog
daemon to capture the console log to a file (much like the old SYSLOG
feature on MVS), then you could monitor that file with more or tail.
See the AIX Admin Guide for details.
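
If all that's wanted is a rough timestamp on each console message, one hedged alternative (the admin ID and log path are placeholders; any ID with console authority will do) is to run console mode through a small shell loop:

  dsmadmc -id=monitor -password=secret -consolemode |
    while read line; do
      echo "`date '+%Y-%m-%d %H:%M:%S'`  $line"
    done >> /var/log/tsm_console.log

Note this stamps the arrival time at the admin client, not the server-side time that q actlog would show.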

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Gerald Wichmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 02, 2002 3:26 PM
Subject: dsadmc -consolemode


 Normally running dsmadmc -consolemode doesn't display any date/time stamp
 with each message. Is it possible to make it do so such as what gets
 displayed when you do a q act? I don't see anything in the guide so as
far
 as I can tell no..

 Gerald Wichmann
 Sr. Systems Development Engineer
 Zantaz, Inc.
 925.598.3099 w
 408.836.9062 c




Re: Help Understanding Mgmt classes

2002-05-03 Thread Don France (TSMnews)

Michael,

The rules I described apply universally to backup objects with the same name
owned by a given node in backup storage.

With TDP products, in some cases, the backup objects are given different,
unique names every time a backup occurs -- you must review the Install 
User's Guide for the TDP you are using.  For example, TDP v1 for Exchange
creates unique names for every backup, so retention/expiration must be done
manually;  in v2 of this TDP, objects stored in backup storage are given the
same name, so their retention/expiration becomes automated like for files
backed up by the b/a-client -- and the management class rules, as I
described, will apply.

Hope this helps.

- Original Message -
From: Regelin Michael (CHA) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 03, 2002 5:17 AM
Subject: Re: Help Understanding Mgmt classes


Hi Don,

I'm not sure I understand your answer.

By the way, thanks for your answer. I'm interested in this mailing and was
not the originator.


Our Backup solution is based on tdp for domino 1.1.2. Our tsm client is
4.2.1.20 based on Windows Nt4 server.

here is our strategy:
 a full backup a week - keeping 5 version (management class=week)
 a full backup a month - keeping 13 version (management class=month)
 an incremental version a day - keeping 5 version (management class=daily)

after reading your mail, I understand that having 3 MCs for the same file
will cause the retention to change after every backup when the MC is used.
So for example:
when the week backup is finished, it will apply the MC for the file, and when
the month backup is finished, it will change all retention on every file based on
the new MC?

thanks

Mike



___
 Michael REGELIN
 Ingénieur Informatique - O.S.O.
 Groupware & Messagerie
 Centre des Technologies de l'Information (CTI)
 Route des Acacias 82
 CP149 - 1211 Genève 8
 Tél. + 41 22 3274322  -  Fax + 41 22 3275499
 Mailto:[EMAIL PROTECTED]
 http://intraCTI.etat-ge.ch/services_complementaires/messagerie.html
 __



-Original Message-
From: Don France (TSMnews) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 3, 2002 03:39
To: [EMAIL PROTECTED]
Subject: Re: Help Understanding Mgmt classes


You are a bit confused.  The *ONLY* way to have TWO policies applicable to a
given file is to use TWO node-names for your backups;  swapping policy sets
*may* work for your situation, if what you want (and set) is 30 versions of
a given file... that piece will work.

Files can be bound only to one management class at a time; if you try
changing MC for the file, it will change ALL versions to that MC, not just
the next backup.  The policyset-swap trick is useful when changing from
modified to absolute and back;  that's about the only use I've ever seen for
multiple policy sets.  Hope this helps.

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Diana Noble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 02, 2002 10:13 AM
Subject: Help Understanding Mgmt classes


 Hi All -

 I believe I have my management classes all defined with a major flaw.  We
 do scheduled modified backups during the week and scheduled absolute
 backups on Sundays.  I have two management classes defined.  Both have the
 same retentions coded but one has absolute for the copy mode and one has
 modified coded.  I have a script that swaps the default management class
 on Sundays.  After rereading the manual and looking at the archives of
this
 list, it seems there's no guarantee that the backup will use the default
 Management class.  Also, if I've specified to keep 30 versions of the data
 in both management classes, does that mean I'm going to retain 30 versions
 from the absolute and 30 versions of the modified?  I really want 30
 versions all together.

 My thought is to create multiply policy sets, and activate the policy set
 that contains only the management class I want.  I would then specify a
 retention of 4 versions for my policy set that contains the management
 class for absolute.  This won't delete any of my 30 versions that were
 saved using the policy set that contains the modified management class,
 will it?  Does this make sense, or am I still way off here?

 Diana



Re: TSM 4.2 differences

2002-05-03 Thread Don France (TSMnews)

Nice catch, Bill!

- Original Message -
From: Bill Mansfield [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 03, 2002 6:03 AM
Subject: Re: TSM 4.2 differences


 How about the TSM 4.2 Technical Guide Redbook SG24-6277?  Not powerpoint,
I know...



 _
 William Mansfield
 Senior Consultant
 Solution Technology, Inc





 Don France (TSMnews) [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 05/02/2002 02:54 PM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: TSM 4.2 differences


 Nope... I do have hardcopy (from SHARE);  you might find what you want in
 the books --- there's a good summary of changes in the preface of the
 Admin. Guide, Using xxx Clients, and Admin. Ref.

 Don France
 Technical Architect - Tivoli Certified Consultant

 Professional Association of Contract Employees (P.A.C.E.)
 San Jose, CA
 (408) 257-3037
 [EMAIL PROTECTED]


 - Original Message -
 From: Gerald Wichmann [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Thursday, May 02, 2002 10:39 AM
 Subject: TSM 4.2 differences


  Does anyone have the TSM 4.2 differences powerpoint presentation on what
  changed from 4.1 to 4.2? Or could point me in the proper place to look
 that
  up. Thanks
 
  Gerald Wichmann
  Sr. Systems Development Engineer
  Zantaz, Inc.
  925.598.3099 w
  408.836.9062 c
 



Re: Management Class problem

2002-05-03 Thread Don France (TSMnews)

This is documented in the Admin Guide about directories stored in backup
storage (and numerous APAR's to fix over the years since v1).

When directory information is stored for backups, TSM attempts to keep all
the info in the TSM database;  if there is more data than the database can
hold (due to long path names and/or ACL's), it uses backup storage so must
choose a management class -- TSM chooses the management class with the longest
retention (maybe the default, maybe not), to ensure any data that must be
restored can also recover its associated directory.  Ed stated the rule much
more succinctly.

Hope this helps.

Don

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Wholey, Joseph (TGA\MLOL) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 03, 2002 6:46 AM
Subject: Re: Management Class problem


 Can someone clarify Edgardo's response.  Particularly, when you set up
NTWCLASS with a higher retention number of versions.  Higher than what?
Is this by design?  I also have ONLY directory
 structures going to mgmtclasses that I would not suspect.  thx.  -joe-

 -Original Message-
 From: Edgardo Moso [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 26, 2002 9:37 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Management Class problem


 That happens when you set up NTWCLASS with a higher retention number of
 days or versions.   The directory backup goes to the
 mgt class with the highest retention.   In ours, we specified the directory
 backup by using the DIRMC mgt class.





 From: David Longo [EMAIL PROTECTED] on 04/26/2002 11:59 AM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 To:   [EMAIL PROTECTED]
 cc:
 Subject:   Management Class problem

 I have TSM server 4.2.1.10 on AIX 4.3.3 ML09.  I have AIX clients TSM
 4.2.1.23 and NT clients 4.2.1.20.  This is a new setup, has been running
 a few months.  I just noticed that some of the data from these clients
 is being bound to mgt class NTWCLASS and not to the default  Class.

 I double checked the ACTIVE management class and backup copy groups.
 The DEFAULTCLASS is the default and NTWCLASS is not.  (I have
 setup NTWCLASS, but not using it yet - or I thought not!!).  I do not have
 ANY
 CLIENTOPSETS defined.  I do not have these copygroups using each
 other as NEXT.  I checked the dsm.opt and dsm.sys and backup.excl
 files and I am not using this class.  Using default or other special
 classes.

 Notice I said some of the data is going to the wrong class, and some of it is
 going to the correct class.  It is not clear from the data what the pattern is
 of what's going to the wrong place.

 This data should all be bound to default.  What's the deal?



 David B. Longo
 System Administrator
 Health First, Inc.
 3300 Fiske Blvd.
 Rockledge, FL 32955-4305
 PH  321.434.5536
 Pager  321.634.8230
 Fax:321.434.5525
 [EMAIL PROTECTED]




Re: mksysb or sysback to library volume

2002-05-03 Thread Don France (TSMnews)

Try using tapeutil (or some 3575 library tool, like mtlib on 3494) to mount
a tape in a given drive -- you must choose a scratch tape and ensure TSM is
not using the drive;  then, issue (1) make drive offline to TSM, (2) the
cmd to mount the tape, (3) the cmd to mksysb to that device, then (4) the
cmd to dismount the tape... then, manually remove the tape from the library
(else it's still in TSM's inventory and could get used unless you also mark
it private.)
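
A sketch of that sequence for a SCSI-attached library (library, drive and volume names and the element addresses are examples -- list the real element addresses with tapeutil's inventory function, and the exact unload subcommand varies by tapeutil level):

  # Keep TSM from grabbing the drive or the tape
  dsmadmc -id=admin -password=secret "checkout libvolume mylib VOL001 remove=no checklabel=no"
  dsmadmc -id=admin -password=secret "update drive mylib drive1 online=no"
  # Mount: move the cartridge from storage slot 32 to drive element 4096
  tapeutil -f /dev/smc0 move 32 4096
  mksysb -i /dev/rmt1
  # Dismount: unload the drive, then move the cartridge back to its slot
  tapeutil -f /dev/rmt1 unload
  tapeutil -f /dev/smc0 move 4096 32
  dsmadmc -id=admin -password=secret "update drive mylib drive1 online=yes"

Since the checkout removed the volume from TSM's library inventory, either check it back in afterward or pull it from the library as described above.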

Since sys. admin. time is more expensive than operator time, we usually
specify the AIX box to have internal 4mm tape drive, so we script the mksysb
to that drive, daily;  operators rotate the tapes, report any visual
problems (if the script fails, the tape is not ejected), the script logs are
used for admin. verification.

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Martin, Jon R. [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 03, 2002 11:17 AM
Subject: mksysb or sysback to library volume


 Hello,

 Possibly I am imagining this whole thing.  However,  I believe it is
 possible to backup the AIX operating system directly to a tape in the
 Library, not through TSM necessarily.

 What AIX device would I choose to backup to?
 rmt0 IBM Magstar MP 3570 Library Tape Drive
 smc0 IBM Magstar MP 3575 Library Medium Changer
 rmt1 IBM Magstar MP 3570 Library Tape Drive
 rmt2 IBM Magstar MP 3570 Library Tape Drive
 rmt3 IBM Magstar MP 3570 Library Tape Drive

 What if anything needs to be done to specify the tape cartridge(s) that
will
 be used?

 If anyone is doing this, a little insight to get me started would be
greatly
 appreciated.

 TSM version 3.7.2
 Operating System: AIX 4.3.3 ML9
 Server: IBM 7026-H50
 Library: 3575-L32

 Thank You,
 Jon Martin
 AIX/Tivoli/TSM/SAN Administrator



Re: Help Understanding Mgmt classes

2002-05-02 Thread Don France (TSMnews)

You are a bit confused.  The *ONLY* way to have TWO policies applicable to a
given file is to use TWO node-names for your backups;  swapping policy sets
*may* work for your situation, if what you want (and set) is 30 versions of
a given file... that piece will work.

Files can be bound only to one management class at a time; if you try
changing MC for the file, it will change ALL versions to that MC, not just
the next backup.  The policyset-swap trick is useful when changing from
modified to absolute and back;  that's about the only use I've ever seen for
multiple policy sets.  Hope this helps.

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Diana Noble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 02, 2002 10:13 AM
Subject: Help Understanding Mgmt classes


 Hi All -

 I believe I have my management classes all defined with a major flaw.  We
 do scheduled modified backups during the week and scheduled absolute
 backups on Sundays.  I have two management classes defined.  Both have the
 same retentions coded but one has absolute for the copy mode and one has
 modified coded.  I have a script that swaps the default management class
 on Sundays.  After rereading the manual and looking at the archives of
this
 list, it seems there's no guarantee that the backup will use the default
 Management class.  Also, if I've specified to keep 30 versions of the data
 in both management classes, does that mean I'm going to retain 30 versions
 from the absolute and 30 versions of the modified?  I really want 30
 versions all together.

 My thought is to create multiply policy sets, and activate the policy set
 that contains only the management class I want.  I would then specify a
 retention of 4 versions for my policy set that contains the management
 class for absolute.  This won't delete any of my 30 versions that were
 saved using the policy set that contains the modified management class,
 will it?  Does this make sense, or am I still way off here?

 Diana



Re: copy storage pools

2002-04-30 Thread Don France (TSMnews)

I may have missed a large part of this thread;  seems that normal backup stg
works just fine (notwithstanding the courier damaging media in transit --
maybe need a closed container with padding contract, like Paul Seay is
doing).  Your concern becomes (1) the recovery plan (DRM solves this) and
(2) the time it takes to complete 50 servers;  most folks will tell you, the
business will survive if you can just identify the mission-critical servers,
and recover them first.

The *real* solution here, as anywhere, depends on how much it's worth (X
dollars) to get data back (in Y hours).  Most managers just need to
understand the cost associated with faster recovery times -- so, you
calculate the cost of filespace vs. node-based collocation for a given
example server;  use your best guess about which server situation the
business depends on the most --- OR, get the customer to classify the
service for their apps & servers, using just 3 categories (mission critical,
production, non-production).  For the mission-critical, calculate the cost
of the varying collocation settings... if you can winnow the list down to
just one or two file-servers that need collocation, you'll be okay (all the
other data can be restored, it will just take longer for some than others).

For most offsite DR's, imho, you may get away with no collocation for the
offsite tapes;  mission-critical data base servers are (generally) backed up
daily (full-online or full-snapshot or BCV's) so the data is already clumped
(no need for collocation).  It's the file servers that will bite you on a
DR;  carefully configured with high-level directories, to allow for
multi-session restore, and properly identifying/isolating the key server or
two that need offsite collocation -- this, also, means a separate onsite
storage pool, to minimize the amount of data getting collocation treatment.
And, there are (other) varying choices to be made about collocation (ie,
onsite vs. offsite, controlling number of tapes in the pool, etc.).

The question of separating active from inactive data is (essentially)
answered with backupsets and export (filedata=active);  implementing this
for the new MOVE NODEDATA got a concerned response --- to do it requires
the aggregates be re-built, which becomes very time-consuming.  Seems like
an offsite reclamation feature would be nice... try to articulate a way of
getting just the active versions reclaimed, then submit to development for
review (via SHARE it would get a good peer review and visibility with
developers).
Hey, I like the way Gerald said it:
backup stgpool somepool copypool filetype=active
This has its drawbacks, but would seem to come closer to what's desired than
the (slow) backupset or export approaches.  Alternatively, there IS the point that
most customers end up using point-in-time parameters when doing filesystem
restores.

Hope this helps.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Rob Schroeder [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 30, 2002 12:16 PM
Subject: copy storage pools


 Here is my dilemma.  I have 50 Win2k servers.  Our auditors demand a
 complete disaster recovery plan, and I only have one data center.  I have
 about 2 terabytes of data active.  There are a couple oracle servers, sql
 servers, data servers and a whole bunch of application servers.  I cannot
 duplicate 60 3590E tapes everyday with a backup storage pool command.  I
 also cannot specify 50 generate backup sets and expect my operators to do
 it right, much less promptly.  Yet, I still need to have offsite copies of
 my data.  You may say that's the cost of doing infinite incrementals, but
 tell that to the companies using TSM that worked in the WTC, or had their
 building ruined by a tornado last week, or the one that will burn to the
 ground next week from arson.  Am I supposed to gamble my billion dollar
 business on that?

 Rob Schroeder
 Famous Footwear
 [EMAIL PROTECTED]



Re: TDP R3 keeping monthly and yearly for different retentions?

2002-04-29 Thread Don France (TSMnews)

The customers I've worked with used a shell script to determine -archmc for
daily/weekly/monthly;  without TDP, the script manipulates the parameter
passed in for the -archmc value on the dsmc ar cmd... you could use a
presched command to do the same (to flip the profile name, causing TDP to
use varying management classes).
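
For the plain b/a-client case, a minimal sketch of such a wrapper (management class names, the archive path and the calendar rules are all examples):

  #!/bin/sh
  # Pick an archive management class based on the calendar.
  DOW=`date +%a`      # Mon ... Sun
  DOM=`date +%d`      # 01 ... 31
  MC=DAILY_MC
  [ "$DOW" = "Sun" ] && MC=WEEKLY_MC
  [ "$DOM" = "01" ]  && MC=MONTHLY_MC
  dsmc archive "/oracle/P01/sapbackup/" -subdir=yes -archmc=$MC

For TDP for R3 itself, the same calendar test would instead copy the appropriate profile (with its own management class settings) into place before the backup runs.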

- Original Message -
From: Paul Fielding [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 29, 2002 10:22 AM
Subject: TDP R3 keeping monthly and yearly for different retentions?


Hi all,

I did some poking around the list and didn't see anything on the subject.

Does anybody have a good method for doing Monthly and Yearly backups of an
R3 (oracle) database using the TDP for R3? I have a requirement to maintain
daily backups for 2 weeks, monthly backups for 3 months and yearly backups
for 7 years.   Superficially, It appears to be straightforward to set up
different server stanzas within the TDP profile for different days of the
week, but that's it.

I suspect that I could get extra fancy and write a script to do a flip of
the profile to an alternate profile file on the appropriate days, and have
it flip back when it's done, but that seems like a bit of a band-aid to me
and I'm wondering if anyone's come up with something better?

regards,

Paul



Re: Unix directory exclude question

2002-04-29 Thread Don France (TSMnews)

Sounds like a bug -- yes, there has been a level (or 3) that incorrectly
caused exclude list to be processed by ar cmd... it's the client code that
controls it -- try running the latest (4.2.x) client, unless you're hot to
use 5.1, then get the latest 5.1 download patch.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Mattice, David [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 29, 2002 4:44 PM
Subject: Unix directory exclude question


 We are running a scheduled incremental on an AIX 4.3.3 client.  There is
a
 need to exclude a specific directory tree, which needs to be archived
via
 another, shell script based, scheduled command.

 The initial idea was to add an exclude.dir in the client dsm.sys file.
 This caused the incremental to exclude that directory tree but, when
 performing the command line (dsmc) archive the log indicates that this
tree
 is excluded.

 Any assistance would be appreciated.

 Thanks,
 Dave

 ADT Security Services



Re: Technical comparisons

2002-04-25 Thread Don France (TSMnews)

There are a couple (new) white papers on the Tivoli site... one's pretty
good (Achieving Cost Savings...), and has no do not duplicate notices;
the other is pretty weasel-ly, was commissioned to a consulting group *and*
has do not duplicate notices on it --- AND it's not all that good, except
for high-level management that are clue-less about storage management ROI
details!!!

http://www.tivoli.com/products/solutions/storage/storage_related.html#white

I have been sharing copies of the good one with IT director types (and the
NT-platform admin types that think BrightStor is a good solution).  There's
another one I liked, also;  the Disk to Disk Data File Backup and
Restore... targets SANergy, but has an outstanding build-up from local tape
drive to network-based (and, ultimately, SAN-based) backup/restore
strategies.  It's listed with the Technical Briefs at the above link.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: Jolliff, Dale [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, April 03, 2002 6:12 AM
Subject: Technical comparisons


 Does anyone have a link to some detailed white paper sort of comparisons
 between TSM and the leading competitors in storage management?

 I have a customer specifically asking for comparisons between Veritas and
 Tivoli - and the most recent google search turned up several marketing
 pieces from Veritas and one Gartner comparison on old versions of ADSM/TSM
 (version 3.x)..

 Surely someone else has already invented this wheel?



Re: Big Restores?

2002-04-25 Thread Don France (TSMnews)

1.  Turn OFF client and admin schedules;
2. Turn OFF any real-time virus scan (on the destination client);
3. If you're doing restore to Windows platform, use  -DIRSONLY option, to
restore just the directories, first -- after first pass, then restore
the -FILESONLY -- using PIT restore options, in both cases;
4. Use command-line client, AND consider using CLASSIC restore (eg,
specify -PICK option) so the server will sort & consolidate tape mounts;
5. Run multiple restore sessions from separate high-level directories, up to
the number of tape drives available for the task.

Monitor network pipe on both ends, ensure it's full of data (remove any
bottlenecks observed, such as other apps like lfcep.exe).  Expect to get
5-10 GB per hour with large file server;  best case, maybe up to 15 GB/Hr.
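
As a rough illustration of steps 3-5 for a Windows client (drive letter, directory names and the point-in-time date are examples):

  rem Pass 1: directories only, point-in-time
  dsmc restore d:\* -subdir=yes -dirsonly -pitdate=04/19/2002
  rem Pass 2: files only; run several of these in parallel, one per high-level directory
  dsmc restore d:\users\* -subdir=yes -filesonly -pitdate=04/19/2002
  dsmc restore d:\apps\* -subdir=yes -filesonly -pitdate=04/19/2002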

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Schreiber, Roland [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, April 19, 2002 1:55 AM
Subject: Big Restores?


Hello,

how can I generally perform big restores(e.g. we have DIRMC)???

Any suggestions??


Regards,

Roland



Re: Consolidating TSM servers.

2002-04-23 Thread Don France (TSMnews)

If you stay on the same platform-OS, all you need to do is shutdown (old server) after 
a final DB backup, move the HW connections to the new server, restore DB, and you're 
finished.  Remember to copy the dsmserv.dsk (filesystems for all TSM server files -- 
db, log, disk pool vols, logical volumes, path-names), as well as volhist, devcfg and 
dsmserv.opt;  move/re-org filesystems *after* the move.  Many other posts confirm this 
approach, have personally done it on AIX since v2 days.
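
A minimal sketch of that sequence (device class, admin ID and path names are examples):

  # On the old server: final DB backup plus the control files
  dsmadmc -id=admin -password=secret "backup db devclass=LTOCLASS type=full"
  dsmadmc -id=admin -password=secret "backup volhistory filenames=/tsm/volhist.out"
  dsmadmc -id=admin -password=secret "backup devconfig filenames=/tsm/devcnfg.out"
  dsmadmc -id=admin -password=secret "halt"
  # Copy dsmserv.dsk, dsmserv.opt, volhist.out and devcnfg.out to the same paths on
  # the new box, re-attach the db/log/disk-pool volumes under the same path names, then:
  dsmserv restore db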

Regards,

Don France 
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.) 
San Jose, CA 
(408) 257-3037 
[EMAIL PROTECTED] 

  - Original Message - 
  From: Daniel Sparrman 
  To: [EMAIL PROTECTED] 
  Sent: Monday, April 22, 2002 12:59 PM
  Subject: Consolidating TSM servers.


  Hi

  Anybody out there know if it's possible to do an export server, without having to 
export all filedata, and then do an import and use the existing tapes.

  We have one STK 9310 and one IBM 3494. Each TSM server has a copypool and a primary 
tape pool in the different libraries. All equipment is SAN connected, so if I could 
first do a DB backup/restore to the new machine of one of the TSM servers, and then 
export everything except for the file data from the second TSM server to the new 
machine, it would be perfect.

  Doing an export server filedata=all would create about 1500 new tapes, according to 
an export server preview=yes (about 31TB of data), and this would probably not work at 
all.

  So, anybody done this before?

  Best Regards

  Daniel Sparrman


  ---
  Daniel Sparrman
  Exist i Stockholm AB
  Propellervägen 6B
  183 62 HÄGERNÄS
  Växel: 08 - 754 98 00
  Mobil: 070 - 399 27 51



[no subject]

2002-04-23 Thread Don France (TSMnews)

If you truly want to snapshot your current data on a weekly/monthly/yearly
basis, the current-release recommended solution would be generate
backupset -- which, at least, avoids re-sending the data across the LAN.
Alternatively, Unix has image backup capability, and 5.1 will have it for
Win2K. Either way, you get a consolidated copy of current data residing on
the associated client file systems.
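
For the backupset route, a single admin command per node does the snapshot (node, set name, filespaces, device class and retention below are examples):

  generate backupset node1 monthly_snap /home,/data devclass=LTOCLASS retention=365 description="Monthly snapshot" wait=no

The set is built on the server from the node's active versions, so nothing crosses the LAN; it can later be restored with the client's restore backupset command or read directly from a locally attached device.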

You should, however, consider the challenges posed by Petur, since they
*may* be more pertinent than achieving the technical response you requested.

Regards,

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Eduardo Martinez [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 22, 2002 10:41 AM


 Hello *SMers.

 What is the best way to achieve the following 

 I have 6 servers, each one of them must be backed up incrementally on a
 daily, weekly, monthly and yearly basis.
 I thought about configuring 4 nodes for each box on the dsm.sys file,
 lets say:

 NODE1DAILY
 NODE1WEEKLY
 NODE1MONTHLY
 NODE1YEARLY

 And also start 4 dsmc sched processes, so they can backup on the way i
 want.

 Is this correct, or is there another way to do this?

 Thanks in advance.

 =
 Do or Do Not, there is no try
 -Yoda. The Empire Strikes Back




Re: Slow restores

2002-04-23 Thread Don France (TSMnews)

Another huge contributor to this phenomenon is failure to use DIRMC on disk
(and FILE on disk for sequential migration/reclamation) storage pools.

A lot of servers and a lot of objects (more common with Windows these days)
need all the help they can get;  keeping directories on disk (using DIRMC
trick) makes a huge difference in the number of tape seeks required... see
other posts about restore times.  I had a customer with non-collocated DLT
backups for 30 servers, spread over 45 tapes;  it only took about 30 hours
to restore 316 GB, 1.6 million objects -- that's over 10 GB/Hr, faster than
most benchmarks!
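
A minimal sketch of the DIRMC trick (pool, volume, domain/policy-set and class names are examples; the client option goes in dsm.opt or dsm.sys):

  /* Server side: a random-access pool that never migrates, plus a class bound to it */
  define stgpool dirpool disk highmig=100 lowmig=100
  define volume dirpool /tsm/dirpool/vol1.dsm       /* pre-format the file with dsmfmt first */
  define mgmtclass standard standard dirmc
  define copygroup standard standard dirmc type=backup destination=dirpool verexists=nolimit verdeleted=nolimit retextra=nolimit retonly=nolimit
  validate policyset standard standard
  activate policyset standard standard

  * Client side (dsm.opt / dsm.sys): send directory entries to that class
  DIRMC dirmc

Keeping the DIRMC class retention at least as long as the longest file retention avoids directories expiring before the files under them.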

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Richard Cox [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 23, 2002 6:40 AM
Subject: Re: Slow restores


 Wieslaw,

 A typical reason for long search times is a high reclamation value and
 no collocation.  If you have a lot of servers, this can really fragment
 the data across the volumes.

 Regards,

 Richard

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
 Wieslaw Markowiak/Kra/ComputerLand/PL
 Sent: 23 April 2002 13:44
 To: [EMAIL PROTECTED]
 Subject: Slow restores

 Hi,
 After issuing a node restore command, my library spends a lot of time seeking
 for something on tape. The actual restore takes comparatively little time.
 Can anybody explain to me what is happening? And is it possible to shorten the
 seeking period?
 Thank you for your help in advance - Wieslaw



Re: TDP for NDMP

2002-04-23 Thread Don France (TSMnews)

You've just about got it all.  There is some concern about this whole thing;
currently, you are limited to image backups using NDMP;  if you want
file-level granularity, you must use an NFS mount to access the file system
from a supported b/a-client... not as pretty as you'd like, but that's the
way it is.  Then, too, you are limited to the NetApp filer;  Auspex and other
filers are not supported.

The storage pool limitation comes from the format used to store the data;
they invented a new concept/format for NDMP storage pools, so must be kept
segregated from standard storage pools;  I THINK you can still use backup
stg command -- but you cannot intermix the different types of primary
storage pools in a common copy-pool.

IBM/Tivoli is wondering how much market there is to justify expanding this
support -- (a) file-level granularity and/or (b) other filers.  IMHO, I'd
want to see file-level granularity.  Comments can be posted here...

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: Gerald Wichmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, April 17, 2002 9:47 AM
Subject: TDP for NDMP


 Has anyone used and had any experiences with NDMP backups? I'm curious
 what kind of experiences you've had using the product. Pros/cons, etc..

 I've been reading the admin guide section on it and it seems like a
 whole other beast to what I'm used to with TSM. E.g. your NDMP file
  server has to be attached to the tape drive. Seems like this would be
 undesirable if you had a lot of file servers. Would you have to buy a
 comparative # of tape drives? Or how many tape drives could you share
 between your file servers and TSM server?

  There's a comment that you cannot back up a storage pool used for NDMP
 backups. Does this mean you can't create a copypool and 2nd copy of the
 backed up data?

 Do the NDMP backups only backup the entire file server and restore the
 entire file server? I guess it's not clear to me on what level of
 granularity there is in backing up NDMP file servers.

 Seems like NDMP backups require their own TSM domain/policyset/etc
 structure as well.

 Also I understand it only works with Network Appliance NAS devices?

 Appreciate anyone with practical knowledge to comment on their
 experience and perhaps clarify some of my questions above.. thank-you



Re: Logical volume Snapshot, to enable 'online' image backups.

2002-04-23 Thread Don France (TSMnews)

Petur,

You will need v5 server, as well as client;  there are other limitations --
see the post from Anthony Wong, and go RTFM... they are now posted at

http://www.tivoli.com/support/public/Prodman/public_manuals/td/StorageManage
rforWindows5.1.html

Have fun!


- Original Message -
From: Petur Eythorsson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, April 12, 2002 6:52 AM
Subject: Logical volume Snapshot, to enable 'online' image backups.


 Hi Fellow comrades.

 I am searching for information on how to take Online Image Backups in W2k
 with TSM Client 5.1.  I know I haven't read the manual  (so don't RTFM me
:/ ).

 How do I do it?

 Do I need the TSM Server 5 for it, or can I do it with TSM Server 4?

 thanks in advance

 Kvedja/Regards
 Petur Eythorsson
 Taeknimadur/Technician
 IBM Certified Specialist - AIX
 Tivoli Storage Manager Certified Professional
 Microsoft Certified System Engineer

 [EMAIL PROTECTED]

  Nyherji Hf  Simi TEL: +354-569-7700
  Borgartun 37105 Iceland
  URL:http://www.nyherji.is



Re: Single Drive Manual Library

2002-04-22 Thread Don France (TSMnews)

Yep... it's possible to arrange a single-drive environment;  it's highly
dependent on the onsite person doing tape mounts on request, and organizing
the tape pool to satisfy your retention requirements -- if you're lucky, get
sufficient disk pool storage to allow either single-drive reclamation or
copy-pool support.

I've had two different situations;  we resolved them as follows:
1.  Manual single-drive -- a file server with an Exchange server;  we
configured incremental-forever for the file server node, and daily
incremental with a weekly full for Exchange;  we installed the admin client
on the desktops of 2 or 3 onsite supervisors, so they would watch for tape
mounts and react accordingly.  We used server-to-server across a WAN for
the database backups;  again, daily incremental, weekly full (on the
weekend, when traffic was low).  This is probably the worst scenario for
trying to use TSM -- but that's what we did;  it had been running for about a
year when I left the account and turned it over to centralized backup
personnel, who continue the operation.  This was a test to see how good or
bad it would be to manage remotely;  it was okay, but one must always take
the initiative to monitor the onsite folks performing tape mounts.  Rather
than try to do reclamation, we did once-a-month full backups (by changing the
backup copy group to mode=absolute for one cycle), which would ultimately
cause older tapes to go to Empty, since Exchange retention was one month and
file retention was 30-45 days... a poor man's reclamation trick.
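
For reference, that one-cycle switch to absolute is just ordinary copy-group
maintenance;  a minimal sketch, with the domain/policy-set/class names
invented for illustration:

   /* force one full cycle by switching the backup copy group to absolute */
   update copygroup filedom standard standard standard type=backup mode=absolute
   validate policyset filedom standard
   activate policyset filedom standard
   /* after the full has run, switch back to the normal modified mode */
   update copygroup filedom standard standard standard type=backup mode=modified
   validate policyset filedom standard
   activate policyset filedom standard

Once the policy set is re-activated with mode=absolute, the next scheduled
incremental sends everything again, changed or not.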

Since this was DLT, we minimized tape thrashing by running both daily
backups in close succession, using the same media for all the data... we used
one tape per week, accumulating approx. 15 tapes total as they recycled, and
refreshed with all new tapes once per year.  Another reason this is not quite
as cool as we'd like is that it lacks copy-pool protection against media
failure -- a real exposure for a large restore request (as in a disk drive
failure).  A simpler approach would probably be better -- weekly fulls rather
than the monthly absolute -- so, in case of media failure, you'd still have
another copy of all but the most recent week's data.

2. Single-drive library with 6 or 7 slots -- we installed sufficient disk
pool to handle single-drive reclamation, as per the Admin Guide;  it works
fine, and allowed 14-day PIT for one tape drive's worth of file-served data
(approx. 65 GB in this case).  The disk pool stored the primary copy (never
migrated), and the tape slots were used for two copy pools (one kept onsite,
the other sent offsite) plus db backups.  14 days of PIT translates to
approx. 1.7 x file-server capacity (using the 5%-per-day data
change/add/delete rule of thumb).  This is the minimal workable configuration
for TSM (imho);  the minimum recommended configuration would be a
TWO-tape-drive library with sufficient slots to hold all backup data for the
desired number of versions, plus some slots for db backups & daily copy pool
tapes (can I do daily incremental db-backups to a single output tape,
appending each day's data?!?).  We never did reclamation -- all unexpired
backup data was kept in the disk pool, never migrated... eventually, MOVE
DATA could be used to vacate the disk pool to allow for single-drive
reclamation of the onsite copy pool.
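
Keeping the primary copy on disk and never migrating it is just a matter of
the migration thresholds;  a rough sketch, with pool names invented for
illustration:

   /* leave backup data in the disk pool -- migration effectively disabled */
   update stgpool backuppool highmig=100 lowmig=99
   /* two copy pools on tape: one kept onsite, one sent offsite */
   backup stgpool backuppool copy_onsite maxprocess=1
   backup stgpool backuppool copy_offsite maxprocess=1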

Hope all this helps!

Regards,
Don

- Original Message -
From: Etienne Brodeur [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 22, 2002 7:28 AM
Subject: Re: Single Drive Manual Library


 I HAD a client like you describe: small sites (less than 40 GB
 total storage) with an LTO 3580 tape drive.  The problem is that you have
 to do reclamation on the disks (since you only have one tape drive), and
 with an LTO tape (100 to 200 GB depending on compression) that's a lot of
 space to leave empty on the server.  So my client decided NOT to use
 reclamation.  He routinely deleted the volumes manually to be able to
 reuse them.  To make sure that he had a full copy of his files I
 configured a selective backup that took everything on his machines once
 per week.  I had to set up TSM to work like the good old days of four
 incrementals and then a full backup.

 As for the DB backups, he had to change the tape in his drive
 (doing a dismount vol _volname_ first) manually before the scheduled DB
 backup at 10:00 AM (he didn't want to have to type the commands himself).
 Then he loads the tapes for tonight, checking to make sure there is enough
 space on them.  If there isn't enough space then he does a delete volume
 _volname_ discarddata=yes.

 In other words it's pretty horrible.  We added an old tape drive
 he had lying around and use that for the DB backups, which helped a little.
 I guess if you have reclamation it would help a lot, but you still have
 to manipulate the tapes a lot and check for free space on them before
 using them.

 I have set up sites with collocation in a small library (5-8 slots) and
 one drive, and that works well as long as you have 

Re: TSM on SUN

2002-04-19 Thread Don France (TSMnews)

Ben,

It's highly recommended that (on Solaris) you use raw volumes for the TSM
db & log, due to the severe performance penalties seen with filesystem-based
volumes.  Other than that, all should be okay.  There are other rules of
thumb to consider, especially on Sun boxes, regarding how much CPU power is
needed to support multiple NICs of varying speeds and bandwidth, as well as
the number of SCSI drives/ports, etc. (search the archive for recent info
about sizing the server).
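
For illustration only (the disk group and path names are invented), the
raw-volume layout on Solaris with VxVM might look like:

   /* raw VxVM volumes for the db and log avoid the filesystem overhead */
   define dbvolume /dev/vx/rdsk/tsmdg/db01
   define logvolume /dev/vx/rdsk/tsmdg/log01

with TSM mirroring layered on top (DEFINE DBCOPY / DEFINE LOGCOPY) rather
than volume-manager mirroring.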

Regards,
Don France

Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: bbullock [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 18, 2002 2:45 PM
Subject: Re: TSM on SUN


 Hey, careful, some of us love the IBM LVM and find Veritas
 cumbersome and overly confusing. :-)

 While we are on the subject, we recently brought up our first TSM
 server on Solaris (our other 10 are on AIX) and had a question as to the
 disk setup. VxFs and VxVm are available to be used on the disks, but
should
 we use raw volumes (i.e. /dev/vx/rdsk/stgdg/stg10) or mounted filesystems
 (i.e. using the dsmfmt to create the TSM devices)?

 We started using filesystems, but found the performance very poor.
 We tried tweaking some Veritas settings but could never get it to match
the
 speed of the raw volumes, so we are now just using the raw volumes.

 We are using TSM mirroring for the DB and logs, so is there any
 added risk by using the VxVm raw volumes (as opposed to a logged
 filesystem)?

 Thanks,
 Ben



 -Original Message-
 From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, April 18, 2002 4:10 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM on SUN


 Our experience with TSM on Solaris has been very good.  Installation is
very
 easy, upgrades are very easy and the code has good support from
IBM/Tivoli.
 And the added benefit of not having to learn the iLogical Volume Manager!

 If your customer has a large Solaris site already, let them use TSM on
 Solaris.  No point adding another Unix platform in this case.  Now, if you
 asked about HP the story would be completely different.

 Kelly J. Lipp
 Storage Solutions Specialists, Inc.
 PO Box 51313
 Colorado Springs, CO 80949
 [EMAIL PROTECTED] or [EMAIL PROTECTED]
 www.storsol.com or www.storserver.com
 (719)531-5926
 Fax: (240)539-7175


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Petur Eythorsson
 Sent: Thursday, April 18, 2002 9:57 AM
 To: [EMAIL PROTECTED]
 Subject: TSM on SUN


 Hi guys

 A big company here in Iceland is thinking about installing TSM on SUN;
 they will be the first one to have TSM on SUN here.


 My question is,

 Is there anything I should know about (bugs, problems) -- anything
 Sun-related that is different from the other systems?
 Or should I recommend just using AIX?


 Kvedja/Regards
 Petur Eythorsson
 Taeknimadur/Technician
 IBM Certified Specialist - AIX
 Tivoli Storage Manager Certified Professional
 Microsoft Certified System Engineer

 [EMAIL PROTECTED]

  Nyherji Hf  Simi TEL: +354-569-7700
  Borgartun 37105 Iceland
  URL:http://www.nyherji.is



Re: ACSLS connection

2002-04-15 Thread Don France (TSMnews)

Chris,

I have looked at the Gresham EDT-DT info, and am trying to understand why a
customer might want to use it in a non-LAN-free environment.

I have a client who's installing a StorageTek SN-6000, which virtualizes the
tape drives in a Powderhorn silo;  they will use two TSM servers initially --
so there does not seem to be a compelling reason to use EDT-DT.  I have
reviewed the online material and the latest install & user's guide -- except
for SAN mgmt/monitor utilities, I am still trying to find ways to sell the
client on installing it before deploying a broad, LAN-free solution.

Can you please highlight some benefits of using EDT-DT in a context where
it's not absolutely required?!?  (In my client's shop, the TSM servers are on
AIX, the ACSLS is on its own Solaris box.)

Thanx,
Don

 Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Chris Young [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 02, 2002 6:45 AM
Subject: Re: ACSLS connection


 Craig,

 If you are using a single TSM server and you do not plan to use LAN-Free
 clients (which I believe you are from the message below), then you can use
 TSM's native ACSLS communication drivers. However, if you are using
LAN-Free
 clients in addition to your TSM server, where each client will access the
 same library as the TSM server, you need to obtain Gresham's
 EDT-DistribuTAPE product to enable this functionality. There are other
 reasons that you might want Gresham's EDT-DistribuTAPE in environments that
 do not involve LAN-Free clients and employ only a single TSM server, but it
 isn't required there as it is when using LAN-Free clients. You can obtain more
 information on EDT-DistribuTAPE from
 www.gresham-software.com/storage/products/distributape.htm.
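
For the single-TSM-server case described above, the native definitions would
run roughly along these lines (the library/devclass names, device file and
ACS drive id are placeholders -- check the Admin Reference for your drive
type):

   /* library controlled by the external ACSLS server, ACS id 0 */
   define library stksilo libtype=acsls acsid=0
   define devclass stkclass devtype=ecartridge library=stksilo
   /* acsdrvid is the ACS,LSM,panel,drive address known to ACSLS */
   define drive stksilo drive1 device=/dev/rmt/0st acsdrvid=0,0,1,0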

 Chris Young

 -Original Message-
 From: Murphy, Craig [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 01, 2002 9:05 PM
 To: [EMAIL PROTECTED]
 Subject: ACSLS connection


 Hello,

 We are looking to use TSM to back up a fibre-attached client to a STK
 Powderhorn controlled by ACSLS.
 The TSM server is on a Solaris server and the ACSLS is a separate Solaris
 server. What do I need on my TSM server to be able to communicate to the
 ACSLS server? I see there is LibAttach software for Windows. What's the
 equivalent for Solaris?

 Regards
 Craig Murphy



Re: Monthly Backups, ...again!

2002-04-15 Thread Don France (TSMnews)

Bill,

Your arguments are excellent, and I have one large client dealing with the
health records issue --- they need to save patient records for 32 years.  We
like the export solution, once a year, for those nodes (in order to control
TSM db size) -- and we continue storing their data in archive storage so we
have access to all versions generated, then (AFTER THE FACT) delete the extra
versions from each month --- save 2 or 3, delete the rest from archive
storage, using a smart script in concert with the DBAs' technique of naming
the files being stored.

However, I only know ONE (simple) way to accomplish the desired, stated
results --- a month-end snapshot kept for X months or years;  ie, a backupset
of the desired file systems on specific nodes.  The limitation of this
solution is that it only captures files in backup storage;  you need a
different answer for database backups stored in archive storage -- my DBAs
love using archive storage, and I have no argument against it, as they only
care about time (they have no need or interest in counting versions, etc),
and they must save daily backups for 4-14 days, weeklies for 6 weeks,
monthlies for 6-15 months, etc.
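
The month-end snapshot itself is just a backup set;  a minimal sketch (the
node name, device class and retention below are examples only):

   /* month-end snapshot of the active backup versions, kept one year */
   generate backupset patrec01 monthend * devclass=dltclass retention=365
   query backupset patrec01 *

Note a backup set captures only the active backup versions, which is exactly
the month-end state wanted here.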

If you have a method that allows you to mark the currently active versions
of backup files with a different retention than the others, I think it would
be new news to most of us... please share.

Thanks,
Don

 Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Bill Mansfield [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 04, 2002 9:55 AM
Subject: Re: Monthly Backups, ...again!


 There is a good reason it keeps coming up: legitimate business
 requirements.

 The suits (auditors, IRS, corp counsel, HIPAA, etc) demand to be able
 to reproduce any datum at given intervals for given durations.
 Most often, that translates to restoring files that may change every day
 to month end state for somewhere between 1 and 7 years.  Sometime they
 can identify the kinds of data they want, but it is expensive to
 accurately identify the list of all files/directories required, so usually
 you get a vague wave to save everything.  And of course, it's their
 data, not yours, they have a right to keep as much as they want.  Telling
 them that TSM doesn't support their requirement just invites other
 software vendors in the door since *they* handle this particular
 requirement with ease (on paper).

 The number of days you can reasonably keep in an incremental backup
 usually doesn't extend to forever.  Archives sometimes don't cut it,
 either in their traditional form or the instant form.  You can't stand to
 move that much data or use that many tapes - that's why you went
 incremental forever in the first place.  I really just want some
 operation that marks the current active version with a longer guaranteed
 retention, without changing the retention of anything else.

 I don't want to restart the perennial discussion of truly long term
 archival storage.  It's reasonable to expect a backup system to maintain
 internal compatibility for 7 years, and there are techniques for migrating
 the data to newer media.

 Just my 5 cents worth (inflation).
 _
 William Mansfield
 Senior Consultant
 Solution Technology, Inc





 Mr. Lindsay Morris [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 04/04/2002 10:04 AM
 Please respond to lmorris


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: Monthly Backups, ...again!


 This keeps coming up.  It's the hardest thing about TSM, to sell users on
 the way it works.

 Tivoli's Storage Vision whitepaper has a comparison of the benefits you
 get
 by NOT using this Grandfather-father-son technique, but I wish somebody at
 Tivoli would come up with some better assistance to help us sell the
 incremental-forever -- ooops, progressive backup methodology -- to non-techie
 users.  (Maybe it's there and I just don't know where to find it...?)

 I think Kelly Lipp has a good article on archiving and when it's sensible
 -
 maybe he'll post that link here again.

 Also, maybe some users have specific oddball scenarios they have run into
 that require surprising policy settings. It would be interesting to hear
 about those.  Like, the user who goes on vacation for two weeks, and
 manages
 to trash her email file the day she leaves, doesn't notice it, Lotus
 touches the damaged file every day so it gets backed up again, and they
 don't keep 14 versions, so she gets back and the only good version (15
 days
 old) has rolled off (expired).

 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Marc Levitan
  Sent: Thursday, April 04, 2002 

Re: 3494 replacement (cont.)

2002-04-15 Thread Don France (TSMnews)

If replacement HW is truly the same stuff, just ensure microcode (in library
*and* the drives) is up to date;  after you re-run cfgmgr, you just re-do
the TSM DEFines for LIBRary and DRives.

BTW, checkin will skip volumes with data if you specify scratch (assuming
your TSM db is intact -- volhist is a good reference source).  When we have
a mix of scratch and private, loaded en masse, we just run the checkin for
scratch first, then check in the rest as private --- with the library's
barcode inventory and search=yes, the two passes run very fast!
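
In command form (library name and device type as in Geoff's note below), the
two passes are roughly:

   /* pass 1: volumes the TSM db knows contain data are skipped */
   checkin libvolume 3494lib search=yes status=scratch checklabel=no devtype=3590
   /* pass 2: the skipped, data-bearing volumes come back as private */
   checkin libvolume 3494lib search=yes status=private checklabel=no devtype=3590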

Regards,
Don

 Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: Gill, Geoffrey L. [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 15, 2002 12:31 PM
Subject: 3494 replacement (cont.)


 I'm in the midst of replacing the 3494 and drives now, and I thought of
 something. I'm obviously going to have to remove the drives from TSM and
 AIX, then rescan and redefine both. I'm wondering if anyone knows if I'm
 going to have any issues with the library.
 Replacement hardware is the same as what's being removed.

 As for the tape suggestions I got last week this is what I used and it
 worked to create the file and check everything out with a remove=no.

 SET SQLDISPLAYMODE WIDE

 select 'checkout libv ' || trim(library_name) || ' ' || trim(volume_name) ||
 ' checkl=no rem=no' from libvolumes > /temp/macro

 dsmadmc -id= -pass= -itemcommit macro /temp/macro

 When I check tapes back in, do I need to check them in as scratch or
 private, since they really do have data on them?

 checkin libvol 3494lib U00235 search=yes status=private checkl=no
devt=3590

 Thanks,

 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:   [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (877) 905-7154



Re: TSM network problem

2002-04-15 Thread Don France (TSMnews)

Are you sure you're using CAT-5 cable?
You say the NIC's are forced at 100/full -- how about the switch ports?
Is there adjacent noise that might be emitting across the network?
Do you have old vs. current switch HW?  Is it up to date, microcode?
VLAN's -- are you sure it's a point-to-point and not getting re-routed due
to DNS mistakes (eg, any potential router involved, multiple DNS entries
for the same hostname, local hosts file on the client, local routing table
on the client)???

Your msg got garbled when stating specifics of your client situation... is
the problem only on one (of many) clients using the same switch?

Finally, what OS platforms (and switch vendor & model) are involved?

These are buggers to solve, unless you can find some consistency -- eg, one
client fails but others run fine (typical, and helps reduce the focus to
identify the delta between good client and failing client -- for Win2K,
we've seen flaky OEM-NIC's cause this kind of problem;  also, one switch
vendor didn't work well with forced 100/full, simply insisted on
auto-negotiate.)


- Original Message -
From: Jim Healy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 15, 2002 11:25 AM
Subject: TSM network problem


 Can any of you network gurus help me out with a TSM problem?

 We currently have an isolated 100Mb ethernet network for TSM.
 We have three NICs in the TSM server, each attached to a separate V-lan
 We spread the servers backing up across the three v-lans

 We have clients on one of the vlans that intermittently get
 "session lost; re-initializing" messages in the dsmsched.log

 When we ping the clients from the TSM server we get no session or packet
 loss

 When we ping the TSM nic from the client we get intermittent packet losses

 We replaced the NIC in the TSM server
 We replaced the cable from the TSM server to the switch
 We replaced the cable from the client NIC to the switch

 We've ensured that both NICs are set to 100/full

 My network guys are out of ideas -- anybody have any suggestions?



Re: TSM V5.1

2002-04-11 Thread Don France (TSMnews)

Try running your SELECT queries;  last I heard, there were various problems
with the Summary table --- which is a key helper for sizing a current
environment.  There may be problems with other tables as well.
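
For example, a typical sizing query against the summary table -- the sort of
thing that will expose any such problems (interval and columns to taste):

   select activity, count(*), sum(bytes) from summary where start_time>current_timestamp - 24 hours group by activity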

Hope to hear more info on what works (eg, Win2000 online-image backups), and
what doesn't.

Regards,

 Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: David Longo [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 11, 2002 11:43 AM
Subject: Re: TSM V5.1


You going to have it up and running by Monday?  Make a document
for us on what works and doesn't, conversion issues etc?

David Longo

 [EMAIL PROTECTED] 04/11/02 01:54PM 
Hey everyone,

I just got my TSM V5.1 package of software. Anyone else get so lucky?

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154






Re: Monthly Backups, ...again!: The Real Issues

2002-04-10 Thread Don France (TSMnews)

Bill,

Just out of curiosity, in your restore situation -- was it an NT platform (I
see it must have been a lot of tiny files)?

If you are using NT, Netware or AIX as file server platforms, the DIRMC
option pays for itself... BIG TIME.  I recently responded to a customer --
clearly a smaller shop, but with a storage pool that had been non-collocated
for over a year, using a DLT library in a Win2K-to-Win2K context -- after a
total RAID failure of the E: drive.  I had originally engineered it with
DIRMC on disk, migrated to FILE on disk, then copy-pooled to tape.  After
about 30 hours, 1.6 million files and 316 GB of data were restored to the
point-in-time specified as the last known good;  this performance was
achieved by using two concurrent restore sessions (across 10 high-level
directories), CLASSIC restore (not the default), and restoring directories
first (dirsonly), then the data -- the only slowdown was due to tape mounts
(which were consolidated within each session), because the customer had more
tapes than slots, so they needed to respond to demand mount requests for
tapes not in the library.
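
The client side of that setup is small -- DIRMC in dsm.opt binds directory
entries to a disk-based management class (the class name here is invented),
and the restore itself runs in two passes per session, something like:

   * dsm.opt
   DIRMC DIRDATA

   dsmc restore -dirsonly  -subdir=yes -pitdate=03/25/2002 e:\data\* e:\data\
   dsmc restore -filesonly -subdir=yes -pitdate=03/25/2002 e:\data\* e:\data\

(dates and paths are examples only).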

This experience is usually a wake-up call for the customer to evaluate
RESTORE requirements;  if 5-10 GB/hr is satisfactory, then so be it.  A 2 GB
restore in 8 hours -- did you check your accounting records for media wait,
and does that reconcile with the elapsed time?  How about the number of tape
mounts (available from the summary records, or by counting the appropriate
message number in the activity log)?  Sounds a bit suspicious to me.

I've seen several ideas shared in this thread, any one of which could be the
right answer for a given context;  your 3-class system sounds interesting --
as do Bill Colwell's, Paul's, Nick's, and Alex's.  Also, with 5.1 the new
IMAGE backup would seem a good substitute for the monthly backupset.
Ultimately, I like Jim Taylor's answer best... get the dialog on RESTORE
needs going, then figure out which of the various suggestions will work for a
given customer/server/class-of-servers.

Of course, the key political question is truly to get a dialog on RESTORE
REQUIREMENTS;  focus on the business needs for that first, then benchmark
and/or collaborate for possible solutions -- ultimately, the customer needs
to take this seriously enough to pay for trial exercises a couple times a
year... else, they're just putting their heads in the sand and courting
disappointment.

Regards,

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: William Rosette [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, April 10, 2002 6:12 AM
Subject: Re: Monthly Backups, ...again!: The Real Issues


 I enjoy TSM debates, but one that has not sold me yet is this particular
 one.  Yes, I come from a different environment (than incremental-forever).
 One of the biggest drawbacks of the old ADSM was the abnormally slow
 restores, and a colleague of mine has cited this as the reason his previous
 job did not get ADSM back in the early 90's.  My suspicion was this
 incremental-forever, and here I see why.  It all relies on the collocation
 issue.  If you have numerous tapes and numerous drives then collocation will
 make incremental-forever a quick restore.  I started with 2 drives, 80
 clients and 500 tapes, leading to no collocation.  When restoring 2 GB+
 restores that were spread out over 70-100 tapes, this took almost 8 hours
 (note it took about 1 year to get to 70-100 tapes).  At our DR test we
 noticed a difference of 6 times a normal restore for no-collocation versus a
 backupset.  But on the other hand I brought that restore time down by
 splitting the restore into 5 separate restores for 5 tape drives (I
 currently have 6 drives).  So no-collocation will work if the right
 amount of tape drives are available (our 3570s are pretty expensive).  Now
 in the real world you usually have 1-2 tape drives for restores (dedicated)
 and you cannot afford to send 80 tapes off every night for collocation, so
 what we are gravitating to is a class1, class2, and class3 system.  class1
 will be our DRM critical restore with collocation onsite and offsite, class2
 our noncritical but high-restore class with collocation onsite and
 no collocation offsite, and then the good class3 with no collocation onsite
 or offsite.  The monthlies or absolutes will be used to stop the spreading
 of info over 70-100 tapes.  It may not be true, but I seem to see that the
 more reclamation, the more spreading of info, and I would have thought that
 reclamation would bring info closer together.  All it takes is one time of
 getting burned with the 70-100 tape restore spread.  We recently moved 7 GB
 of data from one server to another and used TSM to do this. If the
 manual full had not been done the night before (which has happened) then I
 would still be restoring as we speak.  There are more good things about
 fulls than meets the eye.  Now I know not 

Re: Mapping deviceclass to tape storage pool

2002-04-02 Thread Don France (TSMnews)

Actually, why not continue using the same method already in place?  There's
nothing inherently different between the two platforms that would require a
change -- only your operations preferences (and control) for managing the
inventory.  There is (currently) no feature to protect one storage pool (or
TSM server) from tapes with undesired characteristics -- like a vol-ser
filter.  See page 47 of the TSM 4.2 Admin Guide for Windows.

Instead of multiple devclasses, you could use multiple library
definitions -- one for long tapes, one for standard length;  then you'll
check in long tapes to the long-tape library, etc.  This would be *one*
solution... which uses scratch tapes.  Using this solution, the operators
are trained to be watchful of which tapes are checked into which library;
the downside is that you'll need to coordinate sharing the drives between
the two libraries -- else, undesired waits for a (common) mount point could
occur.
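
On a 3494, for example, that translates to two logical libraries
distinguished by category codes -- a rough sketch (the names, device file
and category codes are placeholders):

   /* standard-length 3590 carts */
   define library lib3590s libtype=349x device=/dev/lmcp0 privatecategory=400 scratchcategory=401
   define devclass dc3590s devtype=3590 library=lib3590s
   /* extended-length 3590 carts */
   define library lib3590e libtype=349x device=/dev/lmcp0 privatecategory=410 scratchcategory=411
   define devclass dc3590e devtype=3590 library=lib3590e

Each storage pool then points at the devclass for the cart length it should
use.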

A third solution, not yet available, would be to exploit a feature that
restricts certain vol-ser patterns to a specific pool (or server, or
application);  there is a known problem with a shared 3494 where an operator
inserts tapes into the i/o area and, before the operator can coordinate with
the intended server, another server notices the collection of tapes with the
insert category (FF00) and allows *any* application to check them into the
library... then it's up to the application to accept a given tape or not, and
TSM currently accepts *any* tape assigned the insert category.

Regards,
Don


- Original Message -
From: David E Ehresman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, March 28, 2002 7:04 AM
Subject: Mapping deviceclass to tape storage pool


 I currently run TSM 4.2 on OS390 2.10.  I use DFSMS and the data set
 name prefix set by TSM for a device class to put some storage pools on
 standard length carts and some on extended length carts in a 3494
 library.

 I am planning for a conversion to TSM on AIX sharing the 3494 library
 with MVS.  On TSM on AIX, how will I define some devclasses to use
 standard length and others to use extended length carts?

 David



Re: Tape retention after restores

2002-04-02 Thread Don France (TSMnews)

Thomas,

This sounds like a recurring bug;  did you try adjusting IDLETIMEOUT in
dsmserv.opt?

Alternatively, could you identify the session holding the tape (via q se
f=d)?  If so, you can at least cancel the offending session (it should
be in idle-wait, anyway).
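
If support has no fix, the manual workaround is quick (the session number
below is just an example):

   /* find the session still holding the drive -- the idle one with a mounted volume */
   query session format=detailed
   /* then cancel it */
   cancel session 1234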

Did Level 2 give you any help?

Regards,
Don

 Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Thomas Denier [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, March 27, 2002 1:37 PM
Subject: Tape retention after restores


 We have been seeing a recurrent problem with GUI restores under Windows
NT.
 A system administrator will start a restore and walk away from his desk.
 When the restore finishes, the session will stay open and the last tape
 used in the restore will remain mounted. We have sometimes had a tape
drive
 tied up for hours before getting the person who started the restore to
 respond to a telephone call or a page. We have occasionally seen a similar
 problem with command line restores under HP-UX. A batch mode dsmc restore
 will relinquish the tape drive when the restore ends, but a restore run
 from a loop mode dsmc session will not. Our server runs under OS/390 and
 is currently at level 4.2.1.9. The last Windows NT client to monopolize a
 tape drive was at level 4.1.2.12. The last HP-UX client to do so was at
 level 4.1.2.0. Is there any way to get TSM to take a more rational
 approach to tape drive retention in this situation?



Re: Disk Availability

2002-04-01 Thread Don France (TSMnews)

This is an *excellent* best-practices suggestion... works fine, runs a
long time!

- Original Message -
From: Francisco Molero [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 01, 2002 2:42 AM
Subject: Re: Disk Availability


 Hi,

 the best solution is 4 loops of 8 disks;  in addition,
 you can place the db, the db mirror, the log, and the log
 mirror each on one of the 4 loops.  Also you can
 define RAID 0 for the disks, and the db and TSM log
 mirrors should be TSM mirrors.
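
A rough sketch of that layout (the raw logical volume names are invented),
with each TSM mirror placed on a different loop from its primary:

   /* db and log primaries on one loop, their TSM mirrors on another */
   define dbvolume /dev/rtsmdb01
   define dbcopy /dev/rtsmdb01 /dev/rtsmdb01m
   define logvolume /dev/rtsmlog01
   define logcopy /dev/rtsmlog01 /dev/rtsmlog01m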

  --- Gill, Geoffrey L. [EMAIL PROTECTED]
 wrote:  Hi all,
 
  I'm working on installing a second SSA drawer in my
  system. I'd like any
  opinions as to how best to connect this. What I've got is
  an M80 with 2 SSA
  controllers, and now, 2 SSA drawers full of 36gb
  disks. What would be the
  best way to connect this so no matter what happens
  to the controllers all
  the disk would still be available? Is there a
  different solution I should
  use?
 
  Thanks for the help.
 
  Geoff Gill
  TSM Administrator
  NT Systems Support Engineer
  SAIC
  E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
  Phone:  (858) 826-4062
  Pager:   (877) 905-7154




Re: Purge Summary Table

2002-04-01 Thread Don France (TSMnews)

Curtis,

Try setting summaryretention to 1, stop the server, change the system date to
2 days later than the bad date, start dsmserv, and issue ACCEPT DATE -- all
of this with sessions and schedules disabled, to prevent further records;
after the purge processing, stop the server, reset the system date, start
dsmserv, and set the retention back to the desired interval, maybe 30 --- did
you already do an ACCEPT DATE cmd (after the accidental time-source problem)?
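
On the server side that whole sequence is only a handful of commands around
the two restarts -- something like:

   disable sessions
   set summaryretention 1
   /* halt, set the OS clock 2 days past the bogus record, restart dsmserv */
   accept date
   /* once the old summary rows have been pruned: halt, reset the clock, restart */
   accept date
   set summaryretention 30
   enable sessions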

This sounds like you need a debug command to clear it up;  did you get any help
from TSM Support folks?  I'd guess other folks have this problem, but might
not notice until they run an open-ended date on their query (like TDS does).

Regards,
Don

Don France
Technical Architect - Tivoli Storage Manager
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA (408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: Magura, Curtis [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, March 28, 2002 5:28 AM
Subject: Purge Summary Table


 I plan to call the support center in a bit but thought I'd throw this out
to
 the experts and see if anyone has any thoughts.

 TSM Server 4.2.1.7 on AIX 4.3.3 ML6.

 The time source that we use for our machines got set to the year 2021
 recently (that's another whole story!). As a result we have a record in the
 summary table from 2021. We are in the process of setting up TDS reporting
 and it turns out that it uses the summary table as part of the input. The
 command below gets issued when you run the TDS loader.

 03/28/2002 06:52:29  ANR2017I Administrator ISMBATCH issued command:
DEFINE
 CURSOR C3330188 SQL=SELECT * from SUMMARY where END_TIME = '2021-08-18
 06:02:19' order by END_TIME

 As a result a majority of the cubes that get built as part of TDS are
empty.
 We have been running with the summaryretention set to 30. I reset it to 0
 hoping it would reset the table. No luck. Started and stopped TSM while it
 was set to 0...still no luck.

 Looking for a way to purge the record from the database. Here's the
 offending record:

 START_TIME: 2021-08-18 06:00:06.00
 END_TIME: 2021-08-18 06:02:19.00
 ACTIVITY: STGPOOL BACKUP
 NUMBER: 440
 ENTITY: BACKUPPOOL - OFFSITE
 COMMMETH:
 ADDRESS:
 SCHEDULE_NAME: COPY_BACKUPPOOL_OFFSITE
 EXAMINED: 162
 AFFECTED: 162
 FAILED: 0
 BYTES: 28143616
 IDLE: 0
 MEDIAW: 62
 PROCESSES: 1
 SUCCESSFUL: YES
 VOLUME_NAME:
 DRIVE_NAME:
 LIBRARY_NAME:
 LAST_USE:

 Thoughts?

 Curt Magura
 Lockheed Martin EIS
 Gaithersburg, Md.
 301-240-6305