IBM 3583 LTO issue/messages

2006-01-22 Thread Ted Byrne
Zoltan,

You may very well be on the right track in suspecting hardware issues.

We recently experienced a problem with similar symptoms on a new TSM/3583
install.  Although the messages were not identical to what you posted, a bad
SCSI card manifested itself by library components disappearing
mid-operation.  We were getting messages referring to missing elements, no
devices found to match what was defined for the PATH, and medium not
present when a tape was mounted and already being used.

You might want to run diagnostics on the card, and check that there are no
issues with the cabling.

Ted


Re: 15,000,000 + files on one directory backup

2005-06-18 Thread Ted Byrne

I would second Bill's addition of poorly-architected applications to
Richard's list of issues that should be (but are often not) addressed, or
even considered.  At another customer, we and the customer's sysadmins were
bedeviled by a weblog analysis application (which shall remain nameless)
that chose to store its data on the filesystem, using the date of the log
data as a directory under which the data was stored (as well as the
associated reports, I believe).  The explanation we were given was that
they had chosen to do this for application performance reasons; it was
apparently quicker than using a DBMS.

This decision, although it made random access of data quicker, had horrible
implications for backup as the log data and reports accumulated over time;
recovery was even worse.  Aggravating the situation was the insistence by
the application owner that ALL historical log data absolutely had to be
maintained in this inside-out database format.  Just getting a count of
files and directories on this drive (via selecting Properties from the
context menu) took something on the order of 9 hours to complete.  The
volume of data, in GB, was really not that large - something on the order
of 100 GB.  All of their problems managing the data stemmed entirely from
the large number of files and directories.
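For a sense of why the Properties scan took so long: a script that walks the tree directly avoids much of the GUI overhead. A minimal Python sketch (illustrative only, not part of the original setup):

```python
import os

def count_tree(root):
    """Iteratively count files and directories under root with os.scandir,
    which is far cheaper than a GUI 'Properties' scan over millions of
    entries (no per-file dialogs, minimal stat traffic)."""
    files = dirs = 0
    stack = [root]
    while stack:
        with os.scandir(stack.pop()) as it:
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    dirs += 1
                    stack.append(entry.path)
                else:
                    files += 1
    return files, dirs
```

Even so, on a drive with 15 million entries any full walk is expensive; the real fix is not storing one file per datum in the first place.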

When the time came to replace the server hardware and upgrade the
application, they had extreme difficulty migrating the historical data from
the old server to the new.  They did finally succeed in copying the data
from the old server to the new, but it took days and days of
around-the-clock network traffic to complete.

Addressing the ramifications of this type of design decision after the fact
is difficult at best.  If at all possible, we need to prevent it from
occurring in the first place.

Ted


Re: Checked in volumes could not be mounted

2004-08-10 Thread Ted Byrne
if you've read my previous posts, I've written that I'm checking in
volumes with checklabel=barcode.  That means all volumes are barcoded, and I
think that also means there is nothing to label.  Or am I wrong?
The cartridges may well have barcode labels affixed to the outside; that
does not guarantee that the tapes are labelled internally to match.  The
error message you posted points to the possibility that there is no
internal label on the tapes.
If there is a question whether tapes are labelled, check them out of the
library and run the label libv process to attempt to write a label onto the
tape.  Activity log messages resulting from that process will shed light on
whether the labelling was successful.  Label libv (without overwrite=yes)
will fail on tapes that are already labelled, though an I/O error could
still be logged.
Ted


Re: Empty tapes not returning to scratch

2004-08-02 Thread Ted Byrne
Fred,
Were these tapes originally scratch tapes?  Or are they assigned to the
storagepool?
Posting the output from the commands
q vol tape_volser_here f=d
and
select * from volumes where volume_name='tape_volser_here'
would help us provide assistance.  Also, what sort of output/response
errors did you get when you attempted to del vol or audit vol?
Ted
At 03:21 PM 8/2/2004, you wrote:
Tried that.
At 02:54 PM 8/2/2004 -0400, you wrote:
You moved them out. Move them back.
Magic.
Dale Jolliff
Sungard Midrange Storage Administration
Office Telephone: (856) 566-5022


fred johanson [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08/02/2004 02:12 PM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject:Empty tapes not returning to scratch
A new variation on an old problem.
I've moved a lot of media to the shelf for space reasons.  A bonus to using
move as opposed to checkout is that when tapes hit empty on the shelf, they
do not disappear from the storagepool.  I restored 4 such to the tape robot
on Friday, checked them into the Library Manager, and updated them to
Mountable in Library, while Pending Reuse is set to 0.  And there they
sit, and nothing will change the status back to scratch.  I've tried Move
Data, Del vol, Aud vol, and changing their access to destroyed.  Nothing
seems to work.

Fred Johanson
ITSM Administrator
University of Chicago
773-702-8464


Re: server scripts

2004-07-27 Thread Ted Byrne
As a refinement to what Richard suggests, you might try using the option
format=macro rather than format=raw.  In this case, where you're
essentially copying your scripts en masse between servers, you would be
able to dump all of the commands scripts from the existing server to a file
with a single command, and re-create all of the scripts on the new server
by processing that file with the macro command.
Take Richard's final recommendation to heart.  Maintaining the scripts
external to TSM will be worth any additional effort required; the TSM web
interface to edit scripts leaves a lot to be desired, and there is no undo...
Ted
At 03:15 PM 7/27/2004, you wrote:
Is there any way to export or save server scripts? I am building a new
TSM server and want to use the same scripts I have on existing TSM
servers. Any way to avoid the hassle of having to recreate each script
on the new server. (I know a DB restore would work...but this is a new
server with new nodes) AIX server v5.2.2
Greg -  From http://people.bu.edu/rbs/ADSM.QuickFacts :
Scripts, move between servers:  Do 'Query SCRIPT scriptname FORMAT=RAW
OUTPUTFILE=' to a file, move the file to the other system, and then do a
'DEFine SCRIPT ... FILE=' to take that file as input.
Still, the best overall approach is to maintain your complex server scripts
external to the TSM server and re-import after editing.
  Richard Sims


Re: Tape Reclamation Failing

2004-07-26 Thread Ted Byrne
If you can reclaim 40 tapes at a threshold of 90%, you may be over the
MaxScratch value for your offsite pool.
try this command:
select stgpool_name, count(*) from volumes group by stgpool_name
Check what value is reported for the stgpool that you are trying to reclaim.
Ted
At 05:41 PM 7/26/2004, you wrote:
I'm getting the error ANR1086W Space reclamation terminated for volume
volume name - insufficient space in storage pool.  I'm trying to reclaim
the offsite copy pool (tape_backup_copy) The Onsite Primary pool
Tape_backup Has  Max Scratch of 50 the Offsite pool has a max Scr of
200.  We have 15 new Scratch in the library... We have no tapes in
unavailable status...  I'm only trying to reclaim to 90% which is
supposed to yield us some 40 Tapes.
If anyone has any other ideas???  Please???
Thanks Much!

Env Info
TSM  5.2.4
AIX 5.2ML3
3494 8x3592 FC Drives


Re: backup report?

2004-07-16 Thread Ted Byrne
Does anyone have a report that will parse something like
'q act' and generate a report on backup start, backup stop,
files inspected, files expired, etc.?

I would suggest looking at the q act command.  If you add the option
originator=client and filter on msgno, you can get pretty much all of the
information that you need.  You may need to do some math with timestamps
and elapsed time, but the information is there.  You might also want to
look at using the accounting records produced by TSM.
Alternately, TSM Operational Reporting would also be an option.
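The timestamp math can be sketched in a few lines, assuming activity-log lines shaped like 'MM/DD/YYYY HH:MM:SS ANEnnnnI ...' (client-originated messages carry ANE prefixes; the exact message numbers and layout vary by server level, so check your own q act output):

```python
from datetime import datetime

def backup_window(actlog_lines):
    """Return (first, last) timestamps of client-originated (ANE*) messages
    in activity-log output, approximating backup start and stop.
    Assumes lines shaped like 'MM/DD/YYYY HH:MM:SS ANEnnnnI ...'."""
    stamps = []
    for line in actlog_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2].startswith("ANE"):
            stamps.append(datetime.strptime("%s %s" % (parts[0], parts[1]),
                                            "%m/%d/%Y %H:%M:%S"))
    return (min(stamps), max(stamps)) if stamps else None
```

Subtracting the two gives an elapsed time to compare against the client's own reported elapsed-time message.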
Ted


Re: Tsm restore

2004-07-09 Thread Ted Byrne
Can you tell me if there is an option within TSM to restore crossing mount
points?
Rachel,
Can you clarify what you mean by crossing mount points?  An example of
what you're trying to accomplish would be helpful.
-Ted


Re: Novell server abending during incr backup

2004-07-09 Thread Ted Byrne
One thing you might want to consider:  given the number of files that TSM
is processing (as shown in the dsmsched.log excerpt you sent) it's
conceivable that the backup is taxing the amount of available memory on the
Netware box.  You could try backing up with the MEMORYEFficientbackup
option set to yes.
If that does not provide any relief, or if the backups are already running
with that option, I would recommend getting in touch with IBM support and
asking for assistance.  They may have you do some tracing that would
provide a more detailed picture of what is occurring.
-Ted
At 10:16 AM 7/9/2004, you wrote:
Ted,
We put in the verbose option in our novell tsm client's
dsm.opt file, but it only showed the following.
07/09/2004 04:01:30 ANS1898I * Processed   361,500 files *
07/09/2004 04:01:35 ANS1898I * Processed   362,000 files *
07/09/2004 04:01:39 ANS1898I * Processed   362,500 files *
07/09/2004 04:01:43 ANS1898I * Processed   363,000 files *
07/09/2004 04:01:47 ANS1898I * Processed   363,500 files *
07/09/2004 04:01:50 ANS1898I * Processed   364,000 files *
It didn't show the file that it was backing up when the Server crash occurred.
TSM novell client 5.2
Netware OS 6.5 SP2
Ted Byrne wrote:
 At 03:49 PM 7/8/2004, you wrote:
 I looked in the dsmsched.log there is nothing in there. (see below)

 If you run your scheduled backups with the -verbose option, it will record
 the files being backed up.  Based on the dsmsched.log file contents you
 posted, it looks like the option in effect for the scheduled backups is
-quiet.

 -Ted


Re: Novell server abending during incr backup

2004-07-08 Thread Ted Byrne
At 03:49 PM 7/8/2004, you wrote:
I looked in the dsmsched.log there is nothing in there. (see below)
If you run your scheduled backups with the -verbose option, it will record
the files being backed up.  Based on the dsmsched.log file contents you
posted, it looks like the option in effect for the scheduled backups is -quiet.
-Ted


Re: Node just sitting In-Progress

2004-07-02 Thread Ted Byrne
At 12:11 PM 7/2/2004, you wrote:
Timothy
This is one of my pet hates about TSM.
A scheduled backup which is actually in progress - ie actively transferring
data between client and server - shows a status of 'Started'.
A scheduled backup which started but encountered an error, dropped the
session or whatever shows a status of 'In Progress'.
Am I the only one who thinks this is the wrong way round?
FWIW, This is a pet annoyance for me as well, and I'm a big fan of
TSM.  The In progress status originally showed up as (?) which was
probably more accurate than what it was changed to.  (I would mentally read
that as 'Huh?'.)
When we originally encountered the non-exceptional (?) status, which
certainly qualifies as an exception in my book, we chose to translate that
into a status of Incomplete in the scripts we were using to report on
event status.  After it was changed to In Progress, we changed our
scripts to translate the new description to Incomplete as well.  It
seemed to convey the actual state of affairs more accurately.
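The translation we used amounted to something like this sketch (status strings as remembered, not an exhaustive list; verify against your server's q ev output):

```python
def report_status(raw):
    """Translate a TSM scheduled-event status for reporting.  Both the old
    "(?)" status and its replacement "In Progress" are reported as
    "Incomplete"; everything else passes through unchanged."""
    return "Incomplete" if raw in ("(?)", "In Progress") else raw
```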
I seem to recall that when I read the original APAR that was opened about
the (?) status, there was a  strong argument that this should be classified
as an exception when using q ev.  This is obviously not what they chose
to do.
Curiously, the APAR describing the status change to In progress (IC33373)
discusses long-running events, and uses restartable to describe them.  In
our experience, it almost always indicates a failure of the client (such as
the scheduler service/daemon freezing or dying).  I can't recall ever
seeing one of these events restarted.
-Ted


Re: Unsuccessful attempt to create a restore Test Schedule.

2004-06-24 Thread Ted Byrne
Bill,
Correct me if I'm wrong, but it looks like you are trying to use a Netware
command (load dsmc) to restore data from one Windows box to another.
Try putting the restore command (something like dsmc restore \\src\c$\*
\\tgt\d$\restore\fromsrc\ ...) you want to run in a batch/script file on
the S2401 system, then changing the schedule's OBJects option to explicitly
reference that filename, including the full path.  Don't forget to quote
the script's filename if the path contains any spaces.
Ted
 I am trying to create a Daily Schedule to restore files
from one node to another, with no success!
06/24/2004 14:45:57  ANR2017I Administrator BILL issued command: UPDATE
  SCHEDULE MISC RESTORE_S2321_S2401
DESCRIPTION=Daily
  restore of non database files ACTION=COMMAND
  OBJECTS=load dsmc restore -noden=S2321
  \\S2321\C$\BILL\*
  \\S2401\C$\BILL -REP=ALL -IFN -SU=YES
  -PASSW=nnn
  PRIORITY=5 STARTDATE=06/24/2004 STARTTIME=14:45
  DURATION=10 DURUNITS=MINUTES PERIOD=1
  PERUNITS=DAYS
  DAYOFWEEK=THURSDAY EXPIRATION=06/25/2004


Re: Rename of Client server

2004-06-17 Thread Ted Byrne
Be aware also that you are probably going to wind up with different
filespace names, since the name of the server is part of the filespace's
name on Netware (and Windows). For example, server LAXF45 has the SYS
volume recorded as LAXF45\SYS:.  So you're going to get a full backup of
the server's volumes when the backup does run.
Ted
At 01:49 PM 6/17/2004, you wrote:
Never mind, the answer hit me two seconds after I clicked the send
button.  TSM won't care what the server name is as long as the TSM client
has the same nodename in the dsm.opt.  I probably just need to reset the
password to reconnect them.  If I'm wrong let me know; otherwise thanks
again and sorry for taking up your time.
Shannon
Shannon C Bach
06/17/2004 12:38 PM
To:[EMAIL PROTECTED]
cc:
Subject:Rename of Client server
Soon most of our NW's will be migrated to a cluster.  In getting ready for
this process one of the server administrator's renamed one of his Novell
servers that I have been backing up.  Now he's called to tell me and can't
understand why TSM won't back it up anymore. I found lots of stuff on
renaming nodes but nothing on what to do if a client server is
renamed.  Any suggestions???
Thanks!   Shannon


Re: Rename of Client server

2004-06-17 Thread Ted Byrne
Any other ideas?
Shannon
From Tim Rushforth:
Although that should work, I think you are better off renaming the node on
the server and the filespaces for that node.
Tim Rushforth's suggestion was probably the best, as long as the
institutional memory retains the information that data from
OldWinServer needs to be restored from the client nodename NewWinServer.
Ted


Re: DB2 Restore Fails

2004-06-15 Thread Ted Byrne
At 03:36 AM 6/15/2004, you wrote:
Hi
Please help, has anyone had a problem when trying to restore a DB2
database from TSM?  It returns saying i/o error, inactive backup.

Please help...urgent..
It would be helpful if you could post the exact error message(s) that you
are receiving, along with any entries from the server's activity log or the
client's log files from the timeframe that the error is occurring.
Ted


Re: Odd Linux client problem

2004-06-07 Thread Ted Byrne
Take a closer look at the Unix client manuals.  Some options (Nodename is
one of them) can only be placed in the dsm.sys file.  If you want to use
something other than the hostname for the node, include it in the server
stanza.
Ted
At 05:06 PM 6/7/2004, you wrote:
Hi,
I am having a problem with the 5.2 or 5.1 client running under Redhat 9.0.
The client installs properly and seems to connect to the server.  I have
coded the dsm.sys file to point to the server.  However, any statements
I place in the dsm.opt file will cause the following error to be
displayed:
ANS1036S Invalid option 'NODENAME' found in options file '/opt/tivoli/tsm/
client/ba/bin/dsm.opt
Similar errors appear for any other valid option.
If I remove all options from the dsm.opt file and invoke the GUI and do the
configuration from there, none of the options I select in the GUI will be
placed in the dsm.opt file.
The attrib and owner rights on the dsm.opt and dsm.sys are correct.  The
two DSM_CONFIG and DSM_DIR environment variables are also set.
I am at a loss.  Has anyone seen this problem?
Thanks in advance.


Re: multiple schedules?

2004-06-02 Thread Ted Byrne
The only advantage I can think of is being able to schedule or
force a backup of a server *now* without requiring all servers
in that same group to back up.
Mike
For this, you can use DEFine CLIENTAction.  We have even set this up as a
script, so we can pass the nodename or comma-separated list of nodenames:
run test_w_client NodeX
run test_u_client NodeX,NodeY,NodeZ
I would avoid the matrix approach, if possible - maintenance could become
a real headache...
-Ted


Re: Help with clientoptions and exclude

2004-06-02 Thread Ted Byrne
At 09:00 AM 6/2/2004, you wrote:
Client v5.2.1 os win2000, server v5.1.9.0 on solaris 2.8.
Help out there.  Below is the error message I am receiving, as well as
the client option definition.  Can't seem to get this file excluded.

A little more information would be very useful - what does Query INCLEXCL
on the client show?


Re: 3583 door locked

2004-06-02 Thread Ted Byrne
At 03:41 PM 6/2/2004, you wrote:
Any ideas
here?  I'd really like to use the I/O door...otherwise i have to use the
main door, and it takes quite a while to Scan/Learn all 5 columns each
time.  thanks...

Your library's I/O station is configured as Storage Slots; you should have
it configured as Import/Export.  Reference the Setup and Operator Guide for
the 3583.  (It's on page 178 in the version of the manual that I have.)
-Ted


Re: 3583 door locked

2004-06-02 Thread Ted Byrne
The I/O station is already set up as Import/Export.  The icon changes when
it's set up as storage; I verified from the panel that it's set up as
Impt/Expt.  Any other ideas?

My apologies - I thought from the reference to the lock icon in your
original email that the padlock icon was showing as locked.
This could be a hardware problem; one of my co-workers mentioned that we
had a customer with a similar problem.  IBM wound up working on the
library.  I'm not sure what was done to correct the problem.
Ted


Re: TSM extended edition question.

2004-06-01 Thread Ted Byrne
I checked that link out.
From that page (just above the table of devices):
For IBM Tivoli Storage Manager version 5.1 and later,
to use a library with greater than 3 drives
or greater than 40 storage slots,
IBM Tivoli Storage Manager Extended Edition is required.


Re: Update Devclass from GENERICTAPE to ECARTRIDGE

2004-05-26 Thread Ted Byrne
ANR8365E UPDATE DEVCLASS: The DEVTYPE parameter cannot be changed.
ANS8001I Return code 3.

Does anyone have any suggestions on how to update the devclass.
The error message pretty much says it.
If you want data to go to a different type of tape, define a new devclass
and stgpool(s) that will use that devclass.  Once that is set up, update
your copygroup destinations to point to the new stgpool(s).
-Ted


Re: Select list of files backed up/archived on tsm?!?

2004-05-21 Thread Ted Byrne
You'll get much better results doing this from the client.
Use Query Backup with the -inactive and -su=yes switches from the root of
each filesystem/drive/volume.
Ted
At 07:32 AM 5/21/2004, you wrote:
Hey everyone!
I have a quick question.  I am trying to find the file names of all files
backed up or archived for a particular node on my tsm server.  I have
already issued
select * from backups where node_name='CHSU050', but that doesn't give me
the actual name of the file, it just gives high level and low level.  Does
anyone know of such a select statement?  Thanks for your help!  I'm kind of
in a bind for time, so any help is appreciated!

Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: Lanfree path failed: using lan, reason ?

2004-05-13 Thread Ted Byrne
I would recommend taking a look at the Activity Log of the Storage
Agent.  You can connect to the Storage Agent using a regular Admin Client,
by specifying the tcpserveraddress (and the port, if other than the default
1500) of the p660.
You may find errors logged by the Storage Agent that shed more light on the
problem that was encountered.
Ted

At 03:22 PM 5/13/2004, you wrote:
Hello all,

I would like to ask for help from more experienced TSM admins about possible
reasons for lanfree path failures, during oracle-tdpo backups.
First some info about configuration details:

Client(s):

pSeries 660 / 670, OS: Aix 5.1 ML-04 (32bit kernel), Running: Oracle 8.1.7.4

StorageAgent: 5.1.9
Application Client:  TDP Oracle AIX 32bit
TDP Version:  2.2.1.0
Compile Time API Header:  4.2.1.0
Run Time API Header:  4.2.1.0
Client Code Version:  4.2.1.25
Atape: 7.1.5.0
TSM Server:

pSeries 610, OS: Aix 5.1 ML-04 (32bit kernel), TSM Version: 5.1.9

SAN  Storage

HBA: 6228 2Gb (identical firmware on every one)
SAN Switches: IBM 2109-16
Library: IBM 3583 with two SCSI LTO-1 drives connected through san data
gateway
All backups are bound to tape storage pools.

Problem:

I've tried to configure two clients so that they would use lanfree path
method during oracle tdp backups, these two clients are very similar when it
comes to software components (identical os, tsm packages etc.), the only
difference is that the first one is pSeries 660 and the other is p670. So
far the p660 works great with lanfree backups (not even single failover to
lan path), but unfortunately the p670 is quite the opposite, because during
every oracle backup, after initial quite good san transfers, following
errors are appearing in its tdpoerror.log, which results in failover to lan:
05/13/04   16:00:09 session.cpp (1956): sessOpen: Failure in
communications open call. rc: -1
05/13/04   16:00:09 ANS9201W Lanfree path failed: using lan path.
05/13/04   16:01:38 session.cpp (1956): sessOpen: Failure in
communications open call. rc: -1
05/13/04   16:01:38 ANS9201W Lanfree path failed: using lan path.
05/13/04   16:01:47 session.cpp (1956): sessOpen: Failure in
communications open call. rc: -1
05/13/04   16:01:47 ANS9201W Lanfree path failed: using lan path.
05/13/04   16:03:11 session.cpp (1956): sessOpen: Failure in
communications open call. rc: -1
05/13/04   16:03:11 ANS9201W Lanfree path failed: using lan path.
In order to find a reason for this, I undertook the following actions:

a) checked fabric ports statistics on san switches, but found no significant
errors.
b) added traceflag api api_detail pid tid to dsm.opt. That action produced
huge trace file but again no clear errors were found, simply during first
few rman backupsets there was info that lanfree path was being used, but
suddenly at exactly the same time when tdpoerror.log reported lanfree path
failed error, in the trace file new rman session began but without info
about using (or failure of) lanfree path.
c) added tdpo_trace_flags orclevel0 orclevel1 orclevel2 to tdpo.opt used
to allocate rman channel, but despite quite detailed trace file, again
everything looked good, no errors.
d) tried to route data traffic to another 3583 library, connected to another
san switch (a few kilometers away); exactly the same scenario was observed.
Has anyone faced similar problems ?

What else can I do to investigate the problem thoroughly?

The only thing in my current configuration (that I'm aware of) that is
against IBM recommendations is the fact of mixing san disk and san tape
traffic through the same HBA adapters, but that is rather hard to overcome.
I'm especially curious about the fact that one client works ok, while the
other one (set up to my best knowledge in the same way) fails every time.
Thanks in advance for ANY hints.

Pawel Wozniczka


Re: TSM DB growing, but number of files remains the same ...

2004-05-04 Thread Ted Byrne
Arnaud,

I believe what I have to do now is to build some solid queries to explore
our activity
log...
Unless I'm mistaken, I believe that you may have misread Richard's
advice.  (Correct me if I'm wrong, Richard.) His recommendation:
I stress accounting log reporting because it is the very best, detailed handle
on what your clients are throwing at the server
Accounting log records are comma-delimited records written to a flat file
called dsmaccnt.log, assuming that you have accounting turned on.  If you
don't, I would recommend that you turn it on now...
The accounting log records give a very detailed, session-by-session picture
of the activity between the server and clients. It will be easier to parse
and process than what you can get out of the activity log.
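A sketch of that processing, splitting the comma-delimited records and totalling a numeric field per node. The field positions used here are placeholders for illustration only; the actual dsmaccnt.log record layout is documented in the Administrator's Guide and should be checked before use:

```python
import csv
from io import StringIO

# Placeholder positions -- NOT the documented dsmaccnt.log layout.
NODE_FIELD = 4
SENT_KB_FIELD = 14

def kb_sent_by_node(accounting_text):
    """Sum a 'KB sent'-style numeric field per node across comma-delimited
    accounting records (one session per line)."""
    totals = {}
    for row in csv.reader(StringIO(accounting_text)):
        if len(row) <= max(NODE_FIELD, SENT_KB_FIELD):
            continue  # skip short or malformed records
        node = row[NODE_FIELD].strip()
        totals[node] = totals.get(node, 0) + int(row[SENT_KB_FIELD])
    return totals
```

Because every session produces one flat record, this kind of per-node rollup is exactly the view that is hard to get cleanly from the activity log.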
Ted


Re: TSM DB growing, but number of files remains the same ...

2004-05-04 Thread Ted Byrne
Arnaud,
You could always make the accounting log file a softlink to a more
forgiving location...
It probably would also make sense to do some kind of regular log rotation
with the file.
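One possible rotation scheme, as a sketch only (copy-then-truncate keeps valid the file handle the server holds open, but it does not coordinate with in-flight writes; quiesce as appropriate):

```python
import os
import shutil
import time

def rotate_accounting_log(path, keep=7):
    """Copy the current log to a dated file, truncate the original in place
    (keeping the same inode the server has open), and prune dated copies
    beyond 'keep'.  A sketch only; see the caveat about in-flight writes."""
    if not os.path.exists(path):
        return
    shutil.copy2(path, "%s.%s" % (path, time.strftime("%Y%m%d%H%M%S")))
    open(path, "w").close()  # truncate in place
    folder = os.path.dirname(path) or "."
    prefix = os.path.basename(path) + "."
    copies = sorted(f for f in os.listdir(folder) if f.startswith(prefix))
    for name in copies[:-keep]:
        os.remove(os.path.join(folder, name))
```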
Ted
At 10:39 AM 5/4/2004, you wrote:
Ted,
Now I realise you were certainly right!  Unfortunately I disabled
accounting some time ago, as it led us to a server crash because /usr
was full.  At that time I did not know how to direct it anywhere other
than its standard path, and afterwards forgot to reactivate it: as
always, higher-priority tasks ...  Add to this that, unlike computers, I'm
working first in - last out, and you'll get a nice mess!
Cheers.
Arnaud
***
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
***



Re: What ever happened to Group Collocation ?

2004-05-04 Thread Ted Byrne
One confusion is why do I have so many partially filled tapes ?   I
don't have this many nodes ?
You might want to do some queries against the volumeusage table.  That data
can be massaged to see what tapes are being used.  You could also use q
content volume_name count=1 against all of the slightly used tapes, and
identify which nodes are spread across more tapes than they should be...
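Massaging the query output could look like this sketch, assuming rows of (node_name, volume_name) pairs such as a 'select node_name, volume_name from volumeusage' would return:

```python
from collections import defaultdict

def volumes_per_node(rows):
    """Given (node_name, volume_name) pairs, return (node, distinct-volume
    count) sorted worst-first, to spot nodes spread across too many tapes."""
    spread = defaultdict(set)
    for node, volume in rows:
        spread[node].add(volume)
    return sorted(((n, len(v)) for n, v in spread.items()),
                  key=lambda item: item[1], reverse=True)
```

Nodes at the top of the list with far more volumes than expected are the ones worth chasing with q content.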
Ted


Read block size discrepancy between LV image and B/A client backups

2004-04-29 Thread Ted Byrne
AIX client v. 5.1.5.5
AIX server v. 5.1.8.0
AIX version 5.2.0.0
We  have a customer who is experiencing discrepancies in performance
between large-file cold backups of Oracle data and LV image backups, both
sent over the SAN to LTO-2 tape drives.  On the lv image backups, they are
getting about 70 MB/s sustained.  For the B/A client backups, they are
getting less than 30 MB/s.
Both backups are made using the same server stanza, and data is sent to the
same stgpool on the server, so the TSM configuration should be
identical.  (See below.)  In investigating this issue, the customer
determined that the block size used when reading the data for the image
backup was 256K, where the b/a client backup was about half of that.  This
is what they reported:
   the size of disk IO for the Oracle filesystems was close to 128k,
   whereas the size of disk IO of the image backup was fixed at 256k.
   Also, the image backup had a lot less disk seek than the
   regular Oracle cold backup through SAN.
The customer believes that the read block size is contributing to, if not
entirely responsible for, the performance discrepancies.  I would expect
that the image backup would have less disk seek, since it is processing the
entire LV, but the discrepancy in read block size is puzzling.
Other systems that we have tested with similar data have achieved equally
high throughput for the LV image backup and the b/a client backup of large
files, so it's not clear why this environment (mission-critical production
of course) would get such dramatically different throughput.
I have a PMR open with IBM regarding this issue.  Is anyone aware of a way
to control the block size used for reads by the TSM client?
I'd like to do some client-side tracing to see if we can turn up the reason
for the performance differences, but I would like to be selective about the
tracing, so as to not unduly impact the client machine.  Any suggestions
regarding what traceflags would be best to use?
If anyone has any suggestions regarding how to approach this, I'm all ears.
Thanks,
Ted
   SERVERNAME LAXU30_DBCOLD
   NOdename   LAXU23_DBCOLD
   PASSWORDAccess generate
   enablelanfree  yes
   lanfreecommmethod  tcpip
   lanfreetcpport 1500
   TCPPort1500
   TCPServeraddress   LAXu30.nowhere.com
   TCPCLientport  1503
   HTTPPort   1583
   TCPBuffsize256
   TCPWindowsize  1024
   TCPNodelay Yes
   TXNBYTELIMIT   2097152
   LargeCommBuffers   No
   Inclexcl   /usr/tivoli/tsm/client/ba/bin/inclexcl.dbcold.def
   ERRORLOGR  30 D
   errorlogname   /usr/tivoli/tsm/client/ba/bin/dsmerror.dbcold.log
   SLAXDLOGR  30 D
   sLAXdlogname   /usr/tivoli/tsm/client/ba/bin/dsmsLAXd.dbcold.log
   SLAXDMode  Prompted
   ResourceUtilization  2
   DOMAIN.IMAGE   /dev/redo1lv /dev/redo2lv /dev/redo3lv /dev/redo4lv
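As a side experiment, the effect of read size can be approximated outside TSM with a crude sequential-read benchmark (a sketch only; it does not reproduce the client's actual I/O path, and OS caching will skew repeated runs):

```python
import time

def read_throughput(path, block_size):
    """Sequentially read 'path' in fixed-size blocks and return MB/s.
    A crude way to compare read sizes (e.g. 128 KB vs 256 KB)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        chunk = f.read(block_size)
        while chunk:
            total += len(chunk)
            chunk = f.read(block_size)
    elapsed = time.perf_counter() - start or 1e-9
    return total / (1024.0 * 1024.0) / elapsed
```

Running it against the same large file with 128 KB and 256 KB block sizes gives a rough, client-independent feel for how much of the discrepancy the read size alone could explain.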


Re: Please help me NOT migrate

2004-04-20 Thread Ted Byrne
Hi Eliza,

My condolences; I don't envy the position that you're in.  I'm not sure
what arguments have been laid out against staying with AIX, but perhaps you
could put some weight on the other pan of the scale by doing some testing
and extrapolating the results to approximate the real costs of migrating to
a new platform.
It's probably a pretty safe bet that your users at Tech are not willing to
abandon their historical backup data, so you are probably faced with doing
an export/import of data to get that across to whatever new platform you
will go to.  Depending on how great a hurry management is in to get AIX
and/or IBM server hardware out the door, side-by-side co-existence might
be an option.  (Not a very attractive one in my mind, but a possibility.)
Is there a medium-size representative client node that you could perform a
test of the export/import process with?  It should be one that has been
active for a while, so that the data is spread across volumes, rather than
being on one or a couple of tapes. This is assuming that you are not using
collocation.  If you are, data spread should be much less of an
issue.  If you don't have a test TSM server that can access the 3590
drives, it would be a good idea to temporarily rename the node for the
duration of the export so that you can import it to the same server.
Some factors that you might want to keep in mind as you approach this with
an eye toward presenting the cost(s) of migration:
- Export will not automatically free tapes on the source server; you'll
  need enough tapes (and slots) available to create the export and to
  import to the new server.
- How much slack time do you have on your six drives?  Will you be able
  to perform these migrations and still perform the work required to keep
  the other backups running on the old AIX system?
- Even doing these migrations one at a time, will the downtime for any
  particular system be prohibitive?  Can your users/applications tolerate
  that kind of downtime without good backups?
I don't have any hands-on with TSM on either Linux or Solaris, so I can't
speak to the pros or cons of either, but perhaps the costs of the migration
scenario will be persuasive enough to at least slow, if not entirely stop
the pressures to migrate.
Let us know how you make out with this adventure.

Ted

Ted Byrne
Blacksburg, VA
At 10:53 AM 4/20/2004, you wrote:
TSMers,

Please help me NOT migrate the server to a different platform from AIX.

server:
4way P660, 3G memory running AIX 5.1
3494 with 6 FC 3590E tape drives connected to 2 SAN switches
90G database at 60% utilized on a Shark
24T of backup data
2600 3590E tapes
460 clients: Windows, Linux, Solaris, AIX
There has been a change in management, and to put it mildly, certain people
do not like IXX and want to see it out of the machine room.
I have to put in a strong argument on why TSM has to run on AIX.  The two
platforms I am offered are Linux and Solaris.  Management wants to keep the
3494 and all its tapes because of the sunk cost.
I have been running ADSM/TSM/ITSM v2/v3/v4/v5 for the past 8 years on a
J30 and then the P660 and am perfectly happy with TSM on AIX.  The two IBM
CEs who work on our hardware are wonderful.  Migrating to a different
platform is going to be a nightmare.  How long will backup be
down to export/import 460 clients with 24T of data?
Realistically, I know I can only stall it for 2 more years.  After we
outgrow the P660, the new hardware we buy will run either Solaris or Linux.
What is your experience with TSM server running on either one?
Thanks in advance,
Eliza Lau
Virginia Tech Computing Center
Blacksburg, VA


Re: Please help me NOT migrate

2004-04-20 Thread Ted Byrne
My recommendation is to set up a new Solaris server and slowly put new clients
   on it.  Then over time we will move all clients to the new server.  But I
   know it won't fly because it will entail buying a new tape library.
Eliza,

Would it be feasible to share the 3494 between servers during the
transition if it comes to that?
I've never done any significant export/import processing, but I can't
imagine that the total downtime to do a one-shot cutover would be
acceptable given the number of nodes and volume of data that you're dealing
with.  My gut feeling is that the downtime for the cutover would be nothing
short of horrific, but that's really a hunch on my part.
Ted

Ted Byrne
Blacksburg, VA


Re: No match found...

2004-04-13 Thread Ted Byrne
A volume status of UNAVAILABLE
At 01:00 PM 4/13/2004, you wrote:
No Damaged Files on those tapes??? Can I put it to READWRITE?





Etienne Brodeur [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2004-04-13 11:35
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject:Re: No match found...
Hi Stephan,

You should not have tapes in OFFPOOL that are unavailable and
especially not in TAPEPOOL!  That means the tapes containing your data
that needs to be reclaimed is not available to TSM, since those volumes
are unavailable.  If you check your activity log you'll see messages saying
'Volume  required for space reclamation.  Volume is unavailable'
Those tapes probably have write errors, which is why they are unavailable.

You should check those 'TAPEPOOL' volumes to be sure they are in the
library and maybe set them to READONLY and then run an audit on them to
see if there really are damaged files there.
Etienne Brodeur
Serti Informatique


Stephan Dinelle [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/13/2004 10:24 AM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]
To
[EMAIL PROTECTED]
cc
Subject
Re: No match found...




Here is the result of the Q VOL ACC=UNAVAIL command

Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 2, Level 1.0
ANS8000I Server command: 'q vol acc=unavail'
Volume Name   Storage         Device       Estimated       Pct    Volume
              Pool Name       Class Name   Capacity (MB)   Util   Status
------------  --------------  -----------  --------------  -----  -------
099ACYL1      ARCHIVEPOOL     LTOCLASS          141,962.7  100.0  Full
101ACYL1      OFFPOOL         LTOCLASSOFF       190,734.0    4.4  Filling
104ACYL1      OFFPOOL         LTOCLASSOFF       190,734.0    1.1  Filling
139ACYL1      TAPEPOOL        LTOCLASS          190,734.0    0.7  Filling
142ACYL1      TAPEPOOL        LTOCLASS          162,534.4    5.7  Full
147ACYL1      TAPEPOOL        LTOCLASS          190,734.0    0.4  Filling
162ACYL1      OFFPOOL         LTOCLASSOFF       190,734.0   18.2  Filling
175ACYL1      ARCHIVESPECIAL  LTOCLASS          190,734.0    6.4  Filling
ANS8002I Highest return code was 0.



Bill Boyer [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2004-04-13 10:06
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject:Re: No match found...
Are you reclamation tasks completing successfully? Also, did anyone change
the reuse delay of the OFFPOOL storage pool? Are the tapes in a PENDING
state?
I would check to make sure your reclamation tasks are running to
completion.
Maybe check to see if you have any onsite tapes that are unavailable? Q
VOL
ACC=UNAVAIL. That would explain why reclamation wasn't completing.
Bill Boyer
DSS, Inc.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Stephan Dinelle
Sent: Tuesday, April 13, 2004 9:31 AM
To: [EMAIL PROTECTED]
Subject: No match found...
Hi,

It has been almost 2 weeks now that our TSM server (Version 5, Release 1,
Level 6.4 for Windows) has not returned any volumes matching
storage=offpool, access=offsite...
We normally had an average of 1 to 2 tapes every day that were returning
from the offsite storage.
Here is the command:

q vol stg=offpool access=offsite status=empty

Nothing was changed or modified for the last 6 months.

Any clue(s) where to check and what to check?

N.B. IBM Tape library 3583

Thanks


Re: No match found...

2004-04-13 Thread Ted Byrne
A volume status of UNAVAILABLE does not equate to damaged files.  It means
that TSM was unable to access the tape in a timely manner.  Check to see if
the tapes are actually present in your library; this might require a visual
inspection.
For the copypool tapes, it may be that the tapes were ejected without
updating the access to OFFSITE to indicate that they are no longer
available to be used.
For the primary stgpool tapes, perhaps they were removed from the library,
or there was a problem loading the tape when TSM wanted to use it.
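Once the tapes' actual locations are confirmed, the access state can be
corrected with UPDATE VOLUME; for instance (volume names hypothetical):

   /* Copypool tape that really is offsite: */
   update volume VOL001 access=offsite
   /* Primary pool tape confirmed back in the library: */
   update volume VOL002 access=readwrite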
-Ted

At 01:00 PM 4/13/2004, you wrote:
No Damaged Files on those tapes??? Can I put it to READWRITE?


Backup completion / BACKUP_END on network FileSpaces?

2004-04-08 Thread Ted Byrne
We have two customers that back up remote drives.  One client was recently
upgraded to version 5.2.2.5 (from 4.x)  and the other is running at
5.1.6.6.  Both clients are Win2K servers.
On the client that was upgraded, the backup_end field (Backup Completion
date/time) for that filespace has not changed since the client upgrade took
place (03/08).  Prior to the upgrade, the completion date/time was updated
with each backup.
On the client that has been at 5.1.6.6, there is no backup completion
date/time showing for the network filespace.
On both clients, messages indicating that the Filespace backup has
completed are logged in dsmsched.log
   04/07/2004 02:07:28 Successful incremental backup of '\\getback\q$\*'
   04/08/2004 07:47:50 Successful incremental backup of '\\gwise99\gwise\*'
Is this a known bug (or WAD behavior) in the 5.x clients?  I've searched
the APAR database, Richard's ADSM QuickFacts and adsm.org, but have not
come up with a hit.  It's always possible that I may not be using the right
search terms to get a match.
Thanks in advance,

Ted


Re: Backup completion / BACKUP_END on network FileSpaces?

2004-04-08 Thread Ted Byrne
At 09:01 AM 4/8/2004 -0600, you wrote:
When performing the backup, do not append '\*' to the end of the share
name (the messages you include in your post suggest that this is what you
are doing):
Andy,

On both of these clients, the filespaces are included using a domain
statement, and the backups are simply run with a schedule where
action=incremental, without explicitly specifying any objects.
-Ted


Re: Backup completion / BACKUP_END on network FileSpaces?

2004-04-08 Thread Ted Byrne
Thanks Andy,

We'll try your suggestions (and Richard's as well) and see what we get.

Ted

At 09:27 AM 4/8/2004 -0600, you wrote:
Hi Ted,

The '\*' is almost certainly being specified somewhere. Whether it is in
the domain statement, objects field of the schedule, file specification
from the command line, or client options set is immaterial.
On the 5.2 client that experiences the problem, what does the following
command show?
   dsmc show domain

Another way to verify would be to manually run the backup:

   dsmc i \\machinename\sharename

Does the file space get updated on the server? Do the incremental backup
of volume and successful incremental backup messages show the share
with the '\*' appended at the end?
Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]
The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


Ted Byrne [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/08/2004 09:07
Please respond to
ADSM: Dist Stor Manager
To
[EMAIL PROTECTED]
cc
Subject
Re: Backup completion / BACKUP_END on network FileSpaces?




At 09:01 AM 4/8/2004 -0600, you wrote:
When performing the backup, do not append '\*' to the end of the share
name (the messages you include in your post suggest that this is what you
are doing):
Andy,

On both of these clients, the filespaces are included using a domain
statement, and the backups are simply run with a schedule where
action=incremental, without explicitly specifying any objects.
-Ted


Re: Restoring data after deleting a volume

2004-04-05 Thread Ted Byrne
Since I should have a copy of this in the copy pool, I
should be all set. So I'm going to delete the volume with discarddata=yes
STOP!

This will delete all references to the data (in the copy stgpool as well as
the primary stgpool volume).  You will not be able to recover the data
without restoring the TSM database to before the volume deletion...
Use the restore volume command _first_.  For primary stgpool volumes, use
delete volume xxxyyy discarddata=y only as a last resort, after you've
exhausted all other options.
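In command terms, the safer sequence is roughly (volume name hypothetical):

   /* First see what would be restored, and from which copypool tapes: */
   restore volume XXXYYY preview=yes
   /* Then actually restore the volume's contents from the copy pool: */
   restore volume XXXYYY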
Ted


Re: Library sharing

2004-04-01 Thread Ted Byrne
I can see the serial numbers on the 2K3 server.  How can I get the
serial-numbers from AIX ?
Zoltan,

Try lscfg -vl <device name>.

$ lscfg -vl rmt0
  DEVICELOCATION  DESCRIPTION
  rmt0  10-58-00-4,0  IBM 3590 Tape Drive and Medium
  Changer
ManufacturerIBM
Machine Type and Model..03590E1A
Serial Number...000C2646
Device Specific.(FW)E35A
Loadable Microcode LevelA0B00E26
$ lscfg -vl rmt5
  rmt5 P2-I5/Q1-W2001006045170A1A-L2E  IBM 3580
Ultrium Tape Drive (FCP)
ManufacturerIBM
Machine Type and Model..ULT3580-TD2
Serial Number...1110089774
Device Specific.(FW)38D0
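If you need the serials for a batch of drives, a small helper can pull the
Serial Number field out of lscfg-style output (a sketch; on a live AIX host
you would pipe `lscfg -vl rmtN` into it):

```shell
#!/bin/sh
# serial_of: read "lscfg -vl" style output on stdin and print the
# value of the "Serial Number" field (the text after the dot leaders).
serial_of() {
    sed -n 's/^.*Serial Number\.*//p'
}
```

For example: `for d in rmt0 rmt5; do echo "$d $(lscfg -vl $d | serial_of)"; done`.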


Re: GroupWise backup

2004-04-01 Thread Ted Byrne
Is there any other way to backup GroupWise (hot-online backup); I don't
like the idea of shutting down e-mail system in the middle of the night,
and then pray that after the postsched command it will wake up again.
Joe,

There is no TDP/Groupwise.  I'm not sure about the mechanism for shutting
down/starting Groupwise.  I would share your concern about
stopping/restarting their mail server.  This could potentially cause
problems with other processes elsewhere in their environment that depend on
mail being available (alerting would be one).
I believe that the solution recommended by IBM is to use St. Bernard's Open
File Manager, or something like that, if there is a competing product for
the Novell platform.
One thing you want to be careful of with Groupwise is how your policies are
set up.
The last time I worked with a Groupwise server as a TSM client (several
years ago), the way that the product stored the messages was in uniquely
named files.  Periodically, Groupwise would re-org these files, deleting
the old uniquely named files and creating new ones.
At the time that we configured the policies for that organization, we were
not aware of this behavior,  and we used the same settings as the rest of
the servers for the company - which included a long retonly setting.
This wound up causing some real capacity issues - the ratio of data managed
to live data on that client went well into the hundreds-to-one range.
As I said, it's been a while since I worked with Groupwise, and the
product's behavior may have changed since then, but it's definitely worth
taking a look at...
Ted


Re: Rebinding backups of deleted files.

2004-03-30 Thread Ted Byrne
Wanda's suggestion is a good one.  There are a few things you should 
consider  if you elect to take this route:

1) Before doing anything, check whether it will be necessary to increase the
verdeleted for this MC, so as to not inadvertently expire the client's data 
prematurely.  (If verdeleted is 1 for these unique files, backing up the 
dummy file(s) will cause the real data to roll off immediately.)  If this 
change would cause problems with retention of other clients' data, you can 
copy the domain that the client is in, then update the client to belong to 
the new domain.  This effectively creates a clean room so that any 
changes made to retentions will not affect other machines' data.

2) The time involved for step 2 (execution of the script, not the writing) 
could be considerable, depending on number of files, server platform, 
connection speed, etc.  If at all possible, run the script on the client 
machine.  I once tried to do this for a Netware client running Groupwise, 
and wound up taking a different route entirely because the creation of the 
dummy files was agonizingly slow over the network.  Testing the script
locally on my laptop worked fine.

3) It might be easier to get the file list for step one by running a query 
backup from the client rather than a Select from the server.  Getting this 
type of information from the administrative client can be very 
time-consuming, particularly if your DB is large.
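The dummy-file creation in point 2 can be sketched as a small shell script
(the list-file path is hypothetical; as noted above, run it locally on the
client, not over the network):

```shell
#!/bin/sh
# make_dummies: read a list of absolute file paths (one per line) and
# create a zero-byte placeholder for each, recreating the directory
# tree as needed.  Backing these placeholders up with "dsmc incremental"
# then rebinds the stored versions to whatever management class the
# include rules now assign.
make_dummies() {
    list="$1"
    while IFS= read -r f; do
        [ -z "$f" ] && continue        # skip blank lines
        mkdir -p "$(dirname "$f")"     # recreate the directory tree
        : > "$f"                       # zero-byte placeholder file
    done < "$list"
}
```

Afterward the dummy files can be deleted, letting verdeleted/retonly
settings govern expiration.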

Ted

At 09:58 AM 3/30/2004 -0500, you wrote:
How about:

1) Pull a list of all those files from TSM with SELECT
2) write a script that creates an empty dummy file of the same name
3) backup (thus rebind)
4) delete


-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 30, 2004 9:33 AM
To: [EMAIL PROTECTED]
Subject: AW: Rebinding backups of deleted files.
1) restore,
2) backup (thus rebind)
3) delete
See it positive - as a recovery test :-)

regards
juraj
-Ursprüngliche Nachricht-
Von: Alan Davenport [mailto:[EMAIL PROTECTED]
Gesendet: Dienstag, 30. März 2004 16:07
An: [EMAIL PROTECTED]
Betreff: Rebinding backups of deleted files.
Hello Group,

 Just when you think you know how something works...

 I recently discovered that a user backup directory was not following
standard naming conventions and for this reason I was keeping 1 year's worth
of backups rather than 45 days. Their application creates unique names for
each backup then deletes old backups. I added an include option for their
backup directory for the 45 day management class and ran a backup. The
existing files in the directory rebound correctly to the 45 day class
however the TSM backups of the deleted files are still bound to the default
management class and will not go away for a year. This is causing me to
retain many many gigabytes more data than necessary. Is there any way to get
those files rebound to the 45 day class?
 Thanks,
 Al
Alan Davenport
Senior Storage Administrator
Selective Insurance Co. of America
[EMAIL PROTECTED]
(973) 948-1306


Re: Spectra Logic T950 support

2004-03-24 Thread Ted Byrne
If you have not done so already,  check the server README files for the
newer maintenance and patch levels.  If you're relying on the website for
this information, you may be missing something that is already
supported.  The device support web page, while a good idea, tends to lag
(sometimes significantly) behind what is actually supported at current
levels of server code.
-Ted

At 07:56 AM 3/24/2004 -0600, you wrote:
Could anyone comment as to when TSM for Windows might support the Spectra
Logic T950 library?  We are considering purchasing this library but it is
currently not listed on TSM's supported device list.
Thanks

Dean Winger
Information Systems Manager
School of Education
Wisconsin Center for Education Research
1025 West Johnson St
Madison, WI 53706
Phone: (608)265-3202
Cell: (608)235-1425
Email: [EMAIL PROTECTED]


Re: GIGE connectivity via TSM

2004-03-23 Thread Ted Byrne
You might also want to set up a static route on the client to the TSM
server, specifying the GigE card as the gateway address.
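On AIX, for instance, that static route might look like this (addresses
are hypothetical):

   # Send traffic for the TSM server's address via the GigE subnet's gateway:
   route add -host 192.168.50.10 192.168.60.1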
Ted

At 10:37 AM 3/23/2004 -0500, you wrote:
In your DSM.OPT on the client (this is from the book):

Tcpclientaddress
...

What is the best/recommended
way to ensure data is traversing the GIGE card (in both
directions... outbound/inbound) if it is not set up as the default NIC on
the client.  thx.


Re: TSM, 3494, manual mode...?.

2004-03-18 Thread Ted Byrne
Wanda,

Here is Richard Sims' recommendation in this situation:

- TSM may have to be told that the library is in manual mode. You cannot
achieve this via UPDate LIBRary: you have to define another instance of
your library under a new name, with LIBType=MANUAL. Then do UPDate DEVclass
to change your 3590 device class to use the library in manual mode for the
duration of the robotic outage.
-Ted

At 02:40 PM 3/18/2004 -0500, you wrote:
Any 3494 wizards out there -

Something new every minute...
Our gripper is dead (there is a part circling an airport somewhere, waiting
to land).
At first, we just switched the library into MANUAL mode, opened the doors,
and we 2-legged robots have been responding to the mounts on the library
manager console.
Then a drive also died.  TSM was waiting for the mount on that drive, which
cannot be satisfied, process will not cancel.
The only way I know out of that is to bounce TSM, which I did.
When TSM came back up, it says the library is UNAVAILABLE.

Now the library is still doing its manual-mode thing for the other system
attached to it, so I ASSUME this is because the library was not in a
normal state when TSM came back up.
So, is there anything I can do to get TSM to talk to the library again while
it is in MANUAL mode?
Thanks
Wanda


[no subject]

2004-03-11 Thread Ted Byrne
Robert,

Did the library load the cartridge in the correct drive?  Make sure that 
the element number that you've defined for the drive /dev/mt1 is the right
one for that drive's location in the library.

What you're describing sounds very much like the way TSM acts if there is a 
mismatch between device name and element number.  Drive device/element 
numbers do not necessarily stay constant when moving to new hardware.

Ted

At 07:58 AM 3/11/2004 +0200, you wrote:
Hi to all

After installing Aix version 5.2 (old version 4.3.3.1) and installing my Tsm
version 5.1.8 (before upgrading to 5.2) when I run a :
 dsmserv restore db devclass=dbbclass volumenames=42

I got an error message unable to open media /dev/mt1 error=46

This command load the correct cartridge on the tape but after a while I got
this message.
I tried changing the blocksize on my mt1 device to 1024, 512, and 0,
without any success; I get the same error.
My robot is a Scalar1000 DLT40/80.

Any suggestions will be really appreciated...

Regards Robert Ouzen
E-mail: [EMAIL PROTECTED]


Re: Access to archived files from a different node

2004-02-04 Thread Ted Byrne
Dan,

Try starting the client on the machine where you're doing the restore to
with the option -virtualnodename or -nodename and specify the source system.
You'll need to know the node's password to do this.
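For example, to retrieve the old node's archived files onto the new machine
(node name and file specs are hypothetical):

   dsmc retrieve -virtualnodename=OLDNODE "/home/user/*" /tmp/restored/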
-Ted

At 12:21 PM 2/4/2004 -0500, you wrote:
We want to be able to retrieve archived files to a different system for
cases in which the original system is no longer available (crash, DR,
etc.).  I tested doing a retrieve from a different system using the
-fromnode option, but it worked only after I did a 'set access' for the
new node from the original system.  In a real situation, I wouldn't be
able to go back to the original system to do a 'set access'.  Is there
any way to handle this scenario other than doing a preemptive 'set
access' from all systems allowing retrieves anywhere?
Thanks,

Dan Sidorick


Re: FW: checkin vol checklabel=barcode search=bulk

2004-02-04 Thread Ted Byrne
snip Here's the logic of the script I wrote to fix to the
problem:
1. Use the tapeutil command to list the tapes in the I/O convenience slots.
snip
Jeff,

Can you share the tapeutil command(s) you use to retrieve this information?

Thanks,

Ted


Re: Files bound to wrong mgtclass

2004-01-30 Thread Ted Byrne
Can anyone help me out here?  Am I on the right track?
Before proceeding too much further, I'd recommend checking the output of
the q inclexcl command on the clients where you are seeing the
problem.  This may help to identify where the assignment to the MC is
originating.
Ted


Re: End of support for 5.1

2004-01-23 Thread Ted Byrne
I posed this question to IBM a short while ago (on 1/13).  This was their
response:
An end-of-Service date has not been announced for TSM 5.1.x at this time.
The EOS dates for all IBM software can be checked at the following URL:
http://www-306.ibm.com/software/info/supportlifecycle/list/t.html
-Ted

At 10:51 AM 1/23/2004 +, you wrote:
People,
Have searched for this without any luck.
Have IBM announced any end of supoort dates for ITSM 5.1
I am particularly interested in ITSM  for z/OS
thanks,
John


**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission.
Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**


Re: TDP for MQ SQL problems

2004-01-23 Thread Ted Byrne
Tim,

You might want to take a look at the maxsize setting on the destination
stgpool for the copygroup in effect for the client's data, and also check
the capacity and maxscratch settings of the NEXT storagepool.
Ted

At 10:58 AM 1/23/2004 -0500, you wrote:
To *,

I'm getting the following error from an NT TSM client (v 5.1.6) using TDP
for MS
SQL:
01/23/2004 10:08:08 ACO5436E A failure occurred on stripe number (0), rc = 418
01/23/2004 10:08:08 ANS1311E (RC11)   Server out of data storage space
The TSM server (AIX 4.3.3 ML10, TSM 5.1.6) has sufficient primary storage pool
space (350 GB) to handle this backup request (6 GBs) but I'm still getting the
error. Also, ACO5436E is not listed in the Messages section of the TDP
for MS SQL manual.
Has anyone experienced this type of error with TDP, any thoughts

Regards, Tim
NAFTA IS Technical Operations
[EMAIL PROTECTED]


Re: SQL Query for amount to copy from tape pool to copy pool

2004-01-22 Thread Ted Byrne
How about

backup stgpool srcpool tgtpool preview=y wait=y

Note that this can take a while to return results, depending on the size of
the environment.
-Ted

At 08:13 AM 1/22/2004 -0600, you wrote:
It is possible to get a feel for how much needs to be migrated from a disk
pool to a tape pool by using a 'q stg' command.
Is there something that would give a similar number to know how much needs
to be put in a copy pool from either a
disk or tape pool?  I was guessing some SQL query might work, but don't know
for sure.
(basically, how many gig needs to be put into the copy pool from each of the
primary storage pools)
Suggestions?


LAN-Free backups - minimum file size recommendation?

2004-01-21 Thread Ted Byrne
Does anyone know what the recommended minimum size for sending backups
across the SAN is?  I'm trying to come up with a solution to integrate the
backup of DB and non-DB files on two AIX servers.  Currently, the DB files
are backed up after the db is shut down, using one node name, and the
remainder of files are backed up using a different nodename.
What I would like to do is set a maxsize on the disk stgpool that is the
copygroup destination for the default mgmtclass to cause the large files to
go to the NEXT stgpool, which would be SAN-attached tape.   I'd like to
make the best utilization of both SAN and LAN throughput as we
can.  Unfortunately, we do not have a test environment where we can
experiment.  The production AIX servers are tightly controlled, and we have
only a small maintenance window late in the night over the weekend that we
can do any work in (including getting the weekly backup done...)
One additional need - I'd like to increase the resourceutilization option,
but the backups can't go to more than one tape drive each.  Will setting
maxnummp to 1 potentially cause the backup to fail because multiple
sessions might try to mount a tape (because of the maxsize setting)?
Or are we better off sticking with the segregated scheme that we are using now?

Thanks for any suggestions,

Ted


Re: Does change in copygroup destination require StorageAgent restart?

2004-01-20 Thread Ted Byrne
Thanks for pointing this out.

Obviously, I need to spend more time keeping track of APARs...

Interesting note - although the APAR states clearly :

This situation will no longer be a problem in a future release.
A search of the 5.2.1.0 and 5.2.2.0 readme files for the storage agent does
not turn up a match for the APAR.  There is some discussion under the
heading Lanfree Particulars that might apply to this problem:
In most cases, it is no longer necessary to halt and restart a LanFree Storage
Agent to pickup changes in the Data Manager server.
But it does not specifically mention the APAR as fixed, nor is the scenario
of changing the copygroup destination mentioned.  I'm going to ask IBM for
clarification.
Thanks again,

Ted

At 09:51 AM 1/20/2004 +0100, you wrote:
Have a look at IC35801:


* USERS AFFECTED: All Tivoli Storage Manager users of  *
* versions 4.2 and 5.1.*

* PROBLEM DESCRIPTION: If a change/update is made to an*
*  existing policy for a storage pool  *
*  and management class used for lanfree   *
*  backups or archives, the storage*
*  agent will not recognize the change *
*  after the policy has been reactivated.  *

* RECOMMENDATION:  *

The problem was originally written against the documentation
lacking instructions for this problem, however the solution
was to fix the problem in the product code.  This situation
will no longer be a problem in a future release.


-Original Message-
From: Ted Byrne [mailto:[EMAIL PROTECTED]
Sent: maandag 19 januari 2004 19:57
To: [EMAIL PROTECTED]
Subject: Does change in copygroup destination require StorageAgent
restart?
We ran into a situation over the weekend that I have not seen mentioned
before, and I was hoping that someone might be able to shed some light on
what occurred.
TSM server has a 3583 library attached to it, connected via SAN.  TSM has
been running for almost a year, using LAN-free backups for two DB servers
(cold DB backups, using just the backup-archive client and
StorageAgent).  During that time, all backups have been to two LTO-1 drives.
Last week, four LTO-2 drives were added to the library, and the library was
partitioned so that the two original drives are in one library (libr 3583,
devclass 3583).  The second partition contains the 4 new LTO-2 drives (libr
3583-2, devclass LTO2)
Access to the new drives was tested  from the TSM server, and everything
was working fine.  All server-side operations were cut over to use LTO2,
without encountering any problems once the devclass and storagepools were
defined.  Testing of the LAN-free backups had to wait for a maintenance
window this past weekend.  Once the devices were visible on the systems
with the Storage Agents, the Storage Agents were stopped  and
restarted.  Messages were logged for both storage agents that the new
library was recognized:
ANR8920I (Session: 12560, Origin: APP1_PROD_STA)
Initialization and recovery has ended for shared library 3583-2.
After the Storage Agents were restarted, the copygroup for the LAN-Free
clients' policy domain was modified to point to the new LTO2 storagepools,
and the policyset was activated.  We confirmed that the activation was in
effect by querying the copygroup, and then started a backup on each of the
two clients.  In both cases, tapes were mounted to the old LTO1 drives.
We went back and double-checked all settings, and all appeared correct, but
tapes from the 3583 devclass (LTO1) were still being mounted in the LTO1
drives.
When we stopped and restarted the Storage Agents on the two clients and
attempted another backup, scratch tapes were mounted into the LTO2 drives,
and the volumes were defined in the intended LTO2 storagepool.
Is there a documented requirement that the storage agent needs to be
restarted after a copygroup destination is changed on the TSM server?  I've
searched, and did not find anything, but it's certainly possible that I
overlooked something.
Thanks,

Ted


Does change in copygroup destination require StorageAgent restart?

2004-01-19 Thread Ted Byrne
We ran into a situation over the weekend that I have not seen mentioned
before, and I was hoping that someone might be able to shed some light on
what occurred.
TSM server has a 3583 library attached to it, connected via SAN.  TSM has
been running for almost a year, using LAN-free backups for two DB servers
(cold DB backups, using just the backup-archive client and
StorageAgent).  During that time, all backups have been to two LTO-1 drives.
Last week, four LTO-2 drives were added to the library, and the library was
partitioned so that the two original drives are in one library (libr 3583,
devclass 3583).  The second partition contains the 4 new LTO-2 drives (libr
3583-2, devclass LTO2)
Access to the new drives was tested  from the TSM server, and everything
was working fine.  All server-side operations were cut over to use LTO2,
without encountering any problems once the devclass and storagepools were
defined.  Testing of the LAN-free backups had to wait for a maintenance
window this past weekend.  Once the devices were visible on the systems
with the Storage Agents, the Storage Agents were stopped  and
restarted.  Messages were logged for both storage agents that the new
library was recognized:
   ANR8920I (Session: 12560, Origin: APP1_PROD_STA)
   Initialization and recovery has ended for shared library 3583-2.
After the Storage Agents were restarted, the copygroup for the LAN-Free
clients' policy domain was modified to point to the new LTO2 storagepools,
and the policyset was activated.  We confirmed that the activation was in
effect by querying the copygroup, and then started a backup on each of the
two clients.  In both cases, tapes were mounted to the old LTO1 drives.
We went back and double-checked all settings, and all appeared correct, but
tapes from the 3583 devclass (LTO1) were still being mounted in the LTO1
drives.
When we stopped and restarted the Storage Agents on the two clients and
attempted another backup, scratch tapes were mounted into the LTO2 drives,
and the volumes were defined in the intended LTO2 storagepool.
Is there a documented requirement that the storage agent needs to be
restarted after a copygroup destination is changed on the TSM server?  I've
searched, and did not find anything, but it's certainly possible that I
overlooked something.
Thanks,

Ted


Re: TSM Connectivity issue

2004-01-15 Thread Ted Byrne
So could there be any issue with the TCP/IP port configuration? Also, the
TSM client and TSM server are each on a different network.
Mohan,

Is there any chance that a firewall got installed between the networks in
question, or that some other device is blocking port 1500?
Are there any other TSM client nodes on the same network as the system
you're having trouble with?
Ted


Re: 4.2 Server README file

2004-01-15 Thread Ted Byrne
How about looking here:

ftp://index.storsys.ibm.com/tivoli-storage-management/patches/server/Solaris/4.2.1.15/TSM42115S_Sun.README.SRV

Ted

At 08:00 PM 1/15/2004 -0600, you wrote:
I'm trying to locate the 4.2.1.15 README file from the TSM server package.
I need to find the latest supported version of IBMTape for Solaris at that
revision of TSM Server.  Support tells me they can't give me that
information.
If anyone has it or knows it, I would really appreciate it.

Thanks!


Re: Inconsistency between summary and actlog

2004-01-14 Thread Ted Byrne
Robert,

As Richard suggested, the many postings to ADSM-L regarding the summary
table's contents (or lack thereof) are very informative.
Among other things, the location of the Accounting records is
detailed.  Repeatedly. (The format is recorded in the Admin Guide.)
Before posting a question to ADSM-L, search the message archives on
adsm.org to see if the subject that's vexing you has been discussed
before.  It's an invaluable resource, and it can save you considerable time
in resolving whatever issue you are facing.  Richard's ADSM QuickFacts web
page (http://people.bu.edu/rbs/ADSM.QuickFacts) is another invaluable
resource for the TSM administrator, whether novice or experienced.
Ted

At 02:22 PM 1/14/2004 +0200, you wrote:
Richard

Thanks for the advice.  Do you know where I can find the
format/structure of the accounting records (dsmaccnt.log)?
Regards  Robert

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 14, 2004 2:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Inconsistency between summary and actlog
Robert - As you go through your professional life administering TSM, keep
one thing in mind: the Summary table is unreliable.  It has historically
been unreliable.  As testament, see the many postings from customers who
have wasted their time trying to get reliable data from it.
As we keep saying: Use the TSM accounting records as your principal source
of data for usage statistics.  That's what they are there for, and they
reliably contain solid, basic data about sessions.  Beyond that, use the
Activity Log, the Events table, and client backup logs.
  Richard Sims, BU


Re: setting max capacity for storage pools

2004-01-14 Thread Ted Byrne
Take a look at maxnummp for the node backing up the data.  If it's set to
0, your data will not go to tape.
Ted

At 10:49 AM 1/14/2004 -0500, you wrote:
i have a disk storage pool which i previously never coded a maximum size
threshold for
i changed the disk storage pool to a max of 100mb, the server started
backing up. it
was running ok till it tried a 300mb file, i thought the 300mb file would
then go to the
next storage pool directly which is a tape pool.
this mesage appeared in the dsmerror log file

ANS1312E Server media mount not possible

the device class for the disk storage pool is

   Device Class Name: DISK
Device Access Strategy: Random
Storage Pool Count: 7
Format:
   Device Type:
 Est/Max Capacity (MB):
   Mount Limit:
 Mount Retention (min):
  Mount Wait (min):
 Unit Name:
  Comp:
  Library Name:
the device class for the tape storage pool is

   Device Class Name: NT3590
Device Access Strategy: Sequential
Storage Pool Count: 1
Format: 3590B
   Device Type: 3590
 Est/Max Capacity (MB): 9,216.0
   Mount Limit: 3
 Mount Retention (min): 1
  Mount Wait (min): 60
 Unit Name: 3590-1
  Comp: Yes
  Library Name:
tim brown


Re: tsm select statement help2

2004-01-13 Thread Ted Byrne
The information that you're looking for should be getting recorded in the
SUMMARY table, but that is broken.  Search the list on adsm.org for
"summary" to get details.
You best bet to get what you're looking for may be to process the
accounting logs generated by the server.  You will need to do this outside
of TSM.  Check the Admin guide for details of the record layout (it's a
comma-delimited file).
Alternatively, you can query the activity log for the period that you are
interested in - you can use a combination of the options or=client
nodename=X and msgno= to filter for the information that you are
looking for.
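As a sketch of the outside-of-TSM accounting-log processing suggested above (in Python; the field positions used for the node name and byte count are placeholders only - check the record layout in the Admin Guide for the real offsets):

```python
import csv
from collections import defaultdict

def summarize_accounting(path, node_field=4, bytes_field=14):
    """Sum one numeric field per node from a comma-delimited TSM
    accounting log (dsmaccnt.log).  The default field indices here
    are illustrative; verify them against the documented layout."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) <= max(node_field, bytes_field):
                continue  # skip short or malformed records
            totals[row[node_field].strip()] += int(row[bytes_field])
    return dict(totals)
```

Feed the result into whatever daily report you already produce; the same loop extends naturally to objects inspected/backed up once you confirm which columns hold them.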
Ted

At 01:54 PM 1/13/2004 -0500, you wrote:
I have been banging my head againist the wall for monthes now because I
can't get the stats from the TSM server on how the client nodes have done
each night.
Each night, every node writes the following chunk of code to the schedule
log:
Total number of objects inspected:  802,019
Total number of objects backed up:   43,021
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:119
Total number of objects failed:   1
Total number of bytes transferred: 3.02 GB
Data transfer time:  145.02 sec
Network data transfer rate:21,870.55 KB/sec
Aggregate data transfer rate:392.97 KB/sec
Objects compressed by:0%
Elapsed processing time:   02:14:30
Does this info get saved to the TSM Server? and if so what query works to
get that info?
I have tried several queries looking at most of the tables on the TSM
server. I am looking for
the total inspected, total backed up, and bytes transferred on a daily basis.
Can someone PLEASE!!! help.
Olin Blodgett



Re: Copy Storage Pool.

2004-01-09 Thread Ted Byrne
I would recommend creating another copy storage pool that is dedicated to
data that will have its copy stored in the other building (maybe call the
stgpool local_vault).
Once the stgpool is defined using the devclass that is appropriate for the
media you want to use, simply back up the storage pools that you want
vaulted in the other building to the new stgpool:
backup stgpool src_pool local_vault maxpro=2 wait=y
backup stgpool another_pool local_vault maxpro=2 wait=y
Or something similar...

HTH,

Ted

At 02:51 PM 1/9/2004 -0300, you wrote:
HI, folks.

I'm having a problem doing a backup copy. I need to do a backup copy to
other tapes (copy storage pool), but I'm lost (I read a little of the
manual, but it is not so clear)...
I know that there is a option in the backup offsite (that i saw in
manuals)...I would like to do my backup and to create a tape copy to take
from the room (and take it to another building)...
Who can help me ???

thanks

George Wash





Re: New Library and Tape Drives

2003-12-18 Thread Ted Byrne
Andy,

If you are looking for the Jaguar drives to completely take over the
workload for offsite, you will need to make sure that all appropriate
backup stgpool commands that are run via TSM schedule/script or by some
external mechanism are updated to backup to the new copypool(s).
I believe that you will also need to define a new devclass for the 3592
drives (and potentially a new library - I'm a bit fuzzy on the details of
how a mixed 3590/3592 library actually works)
HTH,

Ted

At 10:54 AM 12/18/2003 -0600, you wrote:
We are installing 10 new IBM 3592 tape drives, to be used primarily as
offsite storage units.  I am wondering about the best way to incorporate
these into the system.  My thinking was:
Create new Copypool
Backup the Disk Pool to the New Copypool
Shrink the Old Copypool by Attrition
Eventually Copy the Old Copypool Tapes to the New Copypool
Am I missing anything?  Is there a better way to do it?  Thanks for any
information.
Andy Carlson|\  _,,,---,,_
Senior Technical Specialist   ZZZzz /,`.-'`'-.  ;-;;,_
BJC Health Care|,4-  ) )-,_. ,\ (  `'-'
St. Louis, Missouri   '---''(_/--'  `-'\_)
Cat Pics: http://andyc.dyndns.org/animal.html


Re: Strange tape status

2003-12-17 Thread Ted Byrne
I moved the tape with tapeutil to an I/O-slot and then tried a Label
Libvolume on it, but all TSM gave me was:
17-12-2003 14:55:20  ANR8841I Remove volume from slot 30 of library
3575LIB at your convenience.
Richard,

Is the tape write-protected?  I've seen the same log msg when
(inadvertently) attempting to label a write-protected tape in a 3583.  I
have no hands-on experience with the 3575, so I can't say that it would
respond the same, but it's worth taking a look.
-Ted


Re: Updating the license information

2003-12-10 Thread Ted Byrne
We can't remember what the file name is that the license registration is
looking for.  Can anyone help?
Look for *.lic files in the server installation directory.


Re: Fun with disaster recovery

2003-11-22 Thread Ted Byrne
So, even though I made my change a month or so ago, unless I had actually
backed up the file(s) in question they could potentially be in the wrong
mgtclass?  The rebind wouldn't occur until the file was actually touched?
Joe,

Files/data are not in a mgmtclass.  It might be helpful to remember that
the parameter of the copygroup which governs where the data gets sent
during backup or archive is called destination and not location.  The
destination parameter only affects the data when it is first sent to the
server.  Where the data goes after it gets to the server is governed by
processes like stgpool configuration, stgpool backup, migration,
reclamation, etc.
The rebind occurs on the next incremental backup after the updated
policyset is activated.  Changes in the retextra, retonly, verexists and
verdeleted parameters will take effect immediately, but data that is
already on the server does not get moved as a result of changing the
management class to one that has a different destination than the
storage pool where the data is currently stored.
To get (all of) the data where you want it to be, two options come to mind:
1) MOVE NODEDATA
(potentially a lot of time spent on the server,
depending on volume of data)
2) Rename the current nodes to client_old,
define new nodes with the same name as the original node,
and start off fresh with all of the data going to the
desired stgpool.
This potentially involves a lot of time sending data
across the network.
There may be other approaches, but I'm not coming up with anything else at
the moment.
HTH,

Ted


Inaccessible volumes in 3494 library

2003-11-21 Thread Ted Byrne
We have a discrepancy between what our 3494 library reports as inventory
and what is available to TSM.
After correlating the output of mtlib -l /dev/lmcp0 -qI and Q LIBV, we
came up with two volsers that show as present in the Library, but do not
show up in TSM.
An attempted audit of the tape with mtlib -l /dev/lmcp0 -a -V volser
resulted in the following error message:
   Audit Volume  operation Failed, ERPA code - 75,  Library VOLSER Inaccessible.

What is the recommended course of action in a scenario like this?
Can it be corrected without being physically at the library?  (We support
this library remotely)
One last question - does the library need to be quiet in order to fix
it?  We are currently running almost flat-out, and the windows where we do
not have any tapes mounted are few and very short-lived.
Any recommendations you can give are appreciated.

Thanks,

Ted


Re: TSM and EMC Celerra

2003-11-20 Thread Ted Byrne
If this is for a scheduled backup, make sure that the TSM services are
configured to run with an account that can access the device.  The SYSTEM
account (default) does not have privileges to access network resources.
Ted

At 08:04 AM 11/20/2003 -0700, you wrote:
Hi Folks,

I'm a (re)new list subscriber.  I was on the list a few years ago when I
installed ADSM at this site then moved on to other projects.  Now I'm back
supporting TSM and I need some help with a problem that we have created for
ourselves.
We have a number of nodes that created archives from a drive mapped to a
NetWare file server.  The NetWare server data was stored on an EMC
Symmetrix.  We then replaced the NetWare servers with an EMC Celerra NAS
device.  This works great.  The client nodes can see the old NetWare, now
Celerra, mapped drives and their archives on the TSM client GUI.  What they
cannot do is archive or retrieve any of the data that they can see!  They
get the following message on each and every file that TSM attempts to process:
ANS5174E A required NT privilege is not held

We cannot figure out what we need to change to make this problem go away.
Anyone out there with any experience, Ideas, suggestions or solutions?
Thanks for your help,

Ted Spendlove
ATK Thiokol Propulsion


Re: Migrating nodes between TSM servers

2003-11-19 Thread Ted Byrne
Graham,

Export/Import node might be your best bet for what you are trying to achieve.
I'm not sure if that method will carry the backupset information along with
it, but you have the option of using DEFine BACKUPSET to make the
backupsets available on the destination server.
From the online TSM help:

Use this command to define a client backup set that was previously generated
on one server and make it available to the server running this command. The
client node has the option of restoring the backup set from the server
running this command rather than the one on which the backup set was
generated.
Any backup set generated on one server can be defined to another server as
long as the servers share a common device type. The level of the server to
which the backup set is being defined must be equal to or greater than the
level of the server that generated the backup set.
Hope this helps,

Ted

At 01:03 PM 11/20/2003 +1100, you wrote:
Guys (and gals),

Is there a way to migrate node information (backup data, backupsets etc)
between TSM servers? I have two identically configured TSM servers (4.2.3.0
on AIX) on the same network backing up around 1000 servers between them. As
some of my nodes are holding more data than others, I have an uneven load
on each TSM server and want to migrate nodes from one to the other.
Registering the node on the other TSM server and pointing the node from one
to the other is easy enough to do, but I won't retain backup history or
backupset information.
Any assistance would be helpful.

Regards,

--

Graham Trigge
IT Technical Specialist
Server Support
Telstra Australia
Office:  (02) 8272 8657
Mobile: 0409 654 434





Re: what tape is in a drive?

2003-11-18 Thread Ted Byrne
How about using query mount?

-Ted

At 04:15 PM 11/18/2003 -0600, you wrote:
TSM 5.1, 2K server, overland neo 4100 with 2 X LTO2 drives.

I'm trying to find a select or query statment that will tell me what
volume is in a drive. The only thing close is the drive_state in the
drives table, but that only tells me if it is loaded or not, whereas I
need to know what volume is in there.
Any idea? Seems simple enough, but I couldn't find it.

Thanks,

Alex


Re: Getting summary data from SQL Back Track to TSM

2003-11-04 Thread Ted Byrne
You would need to script it, but I would suggest looking at the syntax for
the command ISSUE MESSAGE.
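One way to sketch that glue (Python; the dsmadmc admin ID and password are placeholders, and the exact quoting conventions should be verified against your own server):

```python
def issue_message_args(severity, text):
    """Build a dsmadmc command line that runs ISSUE MESSAGE, so an
    external job (e.g. an SQLBT post-backup step) can surface its
    summary line in the TSM activity log.  The -id/-password values
    are placeholders; severity is one of I, W, E, S."""
    return ["dsmadmc", "-id=admin", "-password=secret",
            'issue message {0} "{1}"'.format(severity, text)]
```

Your SQLBT wrapper would call this once per summary line via subprocess (or the shell equivalent), after parsing the numbers it wants from the SQLBT output.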
Ted

At 09:03 AM 11/5/2003 +1300, you wrote:
a.k.a. I don't want to reinvent the wheel.

Can anyone suggest an elegant way of feeding SQLBT summary data through to
TSM so we
can see it on the activity log?
Cheers, Suad
--


Re: How to stop tape from being ejected?

2003-11-04 Thread Ted Byrne
Anyway to configure the software to stop ejecting the tape?
Steve,

Try setting your MountRetention setting for the devclass to a high value.

Ted


Re: Include/exclude not working

2003-10-31 Thread Ted Byrne
Eric,

Since E: is not a local drive/filespace, did you add it to the domain
statement in your dsm.opt file?
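For illustration, the kind of line being suggested (a sketch only - the drive letter comes from Eric's post below, and whether a mapped drive is actually reachable also depends on the account the client runs under):

```
* dsm.opt - make sure the network drive is part of the backup domain
DOMAIN ALL-LOCAL E:
```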
-Ted

At 02:15 PM 10/31/2003 +0100, you wrote:
Hi *SM-ers!
I must be doing something wrong. My PC (WinNT, 5.1.6 client) has a network
drive E:. I want everything but E:\Inetpub (and all underlying files and
directories) on E: excluded from the backup.
My dsm.opt contains the following lines:
INCLUDE E:\Inetpub\...\*
EXCLUDE.DIR E:\...\*
Exclude E:\*
However, the GUI shows all files and directories excluded and the backup
backs up nothing.
I'm lost...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines
**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain
confidential and privileged material intended for the addressee only. If
you are not the addressee, you are notified that no part of the e-mail or
any attachment may be disclosed, copied or distributed, and that any other
action related to this e-mail or attachment is strictly prohibited, and
may be unlawful. If you have received this e-mail by error, please notify
the sender immediately by return e-mail, and delete this message.
Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its
employees shall not be liable for the incorrect or incomplete transmission
of this e-mail or any attachments, nor responsible for any delay in receipt.
**


Re: Domino backups going to wrong storagepool

2003-10-29 Thread Ted Byrne
Did you update the management class (and activate the policy domain in
question)?
Ted

At 11:13 AM 10/29/2003 -0500, you wrote:
Back a while ago, when we first started doing Domino TDP backups, due to
lack of disk space, the backups were directed straight to tape.
Since then, lots of SSA disk (can we say CHEAP.) was added and I
changed the management classes to point to the new disk storage pool.
However, the backups are still going to tape. I see lots of sessions
waiting for MEDIA.
What gives ?   Does Domino remember where it was going and keeps going
there ?
ALL of the backupcopygroups for this policy set/domain point to the disk
storage pool.
Del, your thoughts on this ?


Re: Too many DRM tapes leaving 3949 library daily.

2003-10-14 Thread Ted Byrne
Alan,

As Richard Sims suggested, Q Content can help you figure out what is
going on.
To get a better handle on what's actually getting sent off-site in this
collocated stgpool, take the list of volsers that are being sent to the
vault from today.  Do this before reclamation gets started on this stgpool...
For each tape, issue the command "q content volser count=1".  Since the
stgpool is collocated, you will be able to easily identify what node is
using each tape.  "q content" runs fairly quickly when you specify count=1,
so if you batch up the commands (maybe as a macro), it should not take too
long.
You can redirect the output to a text file, and use that as input to
another program for analysis (Excel works pretty well for Q&D sorting and
grouping).  This will also allow you to verify whether data from any
unexpected nodes is sneaking in there.
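A sketch of batching those commands up into a macro (Python; the volser list and macro file name are illustrative - run the result with dsmadmc's macro facility and redirect the output for analysis):

```python
def make_qcontent_macro(volsers, macro_path="qcontent.mac"):
    """Write a dsmadmc macro containing one 'q content <vol> count=1'
    line per offsite volume, so the owning node of each collocated
    tape can be identified in a single pass."""
    lines = ["q content {0} count=1".format(v) for v in volsers]
    with open(macro_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```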
Good luck,

Ted

At 09:38 AM 10/14/2003 -0400, you wrote:
3590H tapes which hold 60GB uncompressed. I've seen as much as 297GB fit on
a tape using the tape drive's compression and compressible data (SQL
databases). I'm backing up much less than that per night per server. The
largest critical server is only 31GB on average. NONE of them should take up
more than one tape per night each. Since the tapes leave daily almost empty
they are immediately eligible for reclamation. I would expect that the next
day backup storage pool to offsite should go to the tapes that were created
during reclamation and not to scratches. As I monitor maintenance processing
this seems to be the case. I do not understand why so many tapes should be
ejected from the library. It's taking quite some time to eject the tapes
each day and an equal amount of time to insert all the onsite retrieve tapes.
I'm hoping that someone can point to where I should look. I've verified that
the non-critical pool is NOT collocated and that THAT pool is pointed to by
the default management class. I have to set up an include statements in the
DSM.OPT files to point nodes to the critical (collocated) pool so I'm fairly
confident that nodes are not going there that do not belong there.
Take care,
Al
Alan Davenport
Senior Storage Administrator
Selective Insurance Co. of America
[EMAIL PROTECTED]
(973) 948-1306
-Original Message-
From: David E Ehresman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 14, 2003 8:56 AM
To: [EMAIL PROTECTED]
Subject: Re: Too many DRM tapes leaving 3949 library daily.
How much data are you backing up per node?  What size tapes are you
writing to?


Re: Reclaim failing on off-site copy pool

2003-10-14 Thread Ted Byrne
At 10:45 AM 10/14/2003 -0400, you wrote:
Ran it at 70
-
Karel's suggestion is good - start with a higher number.  Your offsite
volumes' data is most likely spread across many onsite volumes.  This can
cause offsite reclamation to take a very long time if your threshold is set
too low.
Also, any unavailable volumes?  That can cause problems with reclamation.

You might want to define the script shown below - it can give you a handle
on how best to set your reclamation thresholds.  Substitute the reclamation
threshold value and stgpool name that you want to check on.
HTH - Ted

Script name: q_reclaim_stg
Invoke with: run q_reclaim_stg pct stgpool_name

SELECT stgpool_name as Pool, -
   count(volume_name) as "No. Volumes", -
   cast(sum(cast((est_capacity_mb/(2**10)* -
   (pct_utilized/100)) -
   as decimal(5,1))) as decimal(5,1)) -
   as "GB Reclaimable" -
FROM   volumes -
WHERE  pct_reclaim > $1 -
AND    lower(stgpool_name)=lower('$2') -
GROUP BY stgpool_name

SELECT cast(trim(volume_name) as char(10)) -
   as Volume, -
   pct_reclaim as "Recl.", -
   pct_utilized as "Pct Used", -
   cast(status as char(12)) -
   as Status, -
   cast((est_capacity_mb/(2**10)* -
   (pct_utilized/100)) -
   as decimal(5,1)) -
   as "GB Used", -
   stgpool_name as Pool -
FROM   volumes -
WHERE  pct_reclaim > $1 -
AND    lower(stgpool_name)=lower('$2') -
ORDER BY Status, "GB Used"


Re: How to completely expire deleted files

2003-10-11 Thread Ted Byrne
Now, I have read that the new retention settings would only have taken
effect when the files were actually being incrementally backed up, and thus
the client would see that they no longer existed and would expire them with
the new retention settings.
Farren,

As your situation stands now, you are correct.  Inactive objects without a
corresponding active version can not be re-bound to a new management class.
There are a couple of possibilities to release the relocated data, but they
all have the end result of deleting historical data for the client
prematurely.  Think about how much storage is really being
hogged.What is the auditoccupancy of the node in question?  Is your
server really so constrained that you can't afford to let the data expire
naturally?
If you choose to act on any of these options, be careful of what you are
doing...
### ***   Proceed with caution*** ###
### ***   Double-check before executing   *** ###
It would theoretically be possible to create dummy (zero-length) files to
stand in for the old files, then back them up incrementally to re-bind
them to the new MC.  Depending on the number of files involved, actually
doing this would probably be prohibitive in terms of the time required.
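Purely as an illustration of that dummy-file idea (Python; heed the warnings above - double-check before doing anything like this for real, and expect it to be impractical for large file counts):

```python
import os

def create_stand_ins(relative_paths, root):
    """Create zero-length stand-in files so that the next incremental
    backup re-binds those names to the new management class.
    Illustrative only: run against a scratch area first."""
    for rel in relative_paths:
        path = os.path.join(root, rel)
        parent = os.path.dirname(path)
        if parent:
            os.makedirs(parent, exist_ok=True)
        open(path, "a").close()       # zero-length stand-in
        os.utime(path, None)          # fresh mtime so incremental sees it
```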
The other possibility that I can think of is to re-define the policies for
just that node:
1) Create a new policy domain by copying the
policy domain that the node is currently in,
2) Modify the management classes in the *new*  domain
to retain the data for however long you wish
3) Activate the updated policyset in the new domain
4) Update the node so that it is assigned to the new domain.
5) Run expiration.  The excess versions should be dropped.
6) After expiration completes, update the node
so that it is assigned back to the original policy domain
7) Ask your system admins to let you know in advance
before doing re-orgs, so that you can plan how
to handle the old data.
OK, it may not make any difference,
but at least you've spoken up ;-)
-Ted


Re: Checkin libvol

2003-10-10 Thread Ted Byrne
Hi everyone!
Everytime I try to checkin my tapes to my library I need to run q
request and reply request nr
Is it possible to checkin tape without to run a reply request nr?
Checking in the tapes through an I/O station (search=bulk) does require
that a reply be issued to a request.  However, for many of our customers,
it is not practical to have the operator who  is actually handling tapes
running a checkin manually after the Entry/Exit port is loaded with the
day's tapes (almost always less than the capacity of the entry/exit port).
For this situation, we have written a small perl script that gets scheduled
once a day,  typically at the end of each weekday.  It's defined as a
client event to run on a machine with the administrative client
installed.  The script runs, issues the checkin libv command, issues the q
request, parses the output to get the request number, and then issues the
reply.
Works well for us and our customers...
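A rough Python equivalent of that script (the dsmadmc credentials, checkin options, and especially the ANR message format in the regex are assumptions from memory - verify them against your own "q request" output before scheduling anything):

```python
import re
import subprocess

DSMADMC = ["dsmadmc", "-id=admin", "-password=secret"]  # placeholder creds

def parse_request_number(q_request_output):
    """Pull the outstanding request number out of 'q request' output,
    e.g. a line like 'ANR8306I 003: Insert ... tapes ...'.  The
    message layout is assumed; check real output first."""
    m = re.search(r"ANR\d+I\s+(\d+):", q_request_output)
    if not m:
        return None
    return m.group(1).lstrip("0") or "0"

def checkin_and_reply(library):
    """Issue the bulk checkin, find the resulting request, reply to it."""
    subprocess.run(DSMADMC + ["checkin libvolume {0} search=bulk "
                              "checklabel=yes status=scratch".format(library)],
                   check=True)
    out = subprocess.run(DSMADMC + ["q request"], check=True,
                         capture_output=True, text=True).stdout
    req = parse_request_number(out)
    if req is not None:
        subprocess.run(DSMADMC + ["reply {0}".format(req)], check=True)
```

Scheduled once a day as a client event (as described above), this removes the need for an operator to sit at a console after loading the entry/exit port.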

Ted


Re: Checkin libvol

2003-10-10 Thread Ted Byrne
why would you want to run such a script from
somewhere other than on the server itself???
Hi Andy,

Usually the checkin script does run on the server.  In some instances, we
also have a set of administrative scripts and programs running from another
system, and we group all of those together to make the administration simpler.
Ted


Re: icon disappeared from upgrading to TSM 5.2.0.3

2003-10-10 Thread Ted Byrne
Is it true that if he doesn't configure the
web client or install CAD then port 1581 is not open?
Security is on everyone's mind these days.
Hi Eliza,

Does port 1581 show up in "netstat -a" output like below?  Have you tried
connecting to the user's machine with a web browser pointed at port
1581?  Unless the client configuration has changed to run the client
acceptor service in webclient mode, I don't think that the user is exposed.

C:\usr\bin> netstat -a | find /i ":1581"
  TCP    svr0100:1581  svr0100.tsm.customer.com:0  LISTENING
Ted Byrne


Re: Using the EXPORT command

2003-10-08 Thread Ted Byrne
Ken,

Assuming you have adequate free disk online, you should be able to use a
devclass where devtype=FILE.
I'd recommend setting up a devclass for the desired target directory and an
appropriate MAXCAPacity.  (Default for MAXCAP is 4 MB.) Assuming that
you're doing an export of node data, you (probably) don't want to be
dealing with gobs and gobs of volumes.
HTH,

Ted

At 06:02 PM 10/8/2003 -0400, you wrote:
Enviro:

W2K 5.1.6.3 TSM server
DLT library
I have read over the TSM Admin. Reference manual on all the different
kinds of EXPORTS.
I know that you can EXPORT to DLT tapes and DEVclass=SERVER.

Question:
Is it possible to EXPORT to a disk file? (ie., E:\export\exportnode.txt)
I thought it was possible, except the DEVCLASS parameter cannot be = DISK.

What am I missing?



Ken Sedlacek
AIX/TSM/UNIX Administrator
[EMAIL PROTECTED]
Agilysys, Inc. (formerly Kyrus Corp.)
IBM eServer Certified Specialist: pSeries AIX v5 Support
IBM Certified Specialist: RS/6000 SP & PSSP 3
Tivoli Certified Consultant - Tivoli Storage Manager v4.1


Re: select statement syntax

2003-10-03 Thread Ted Byrne
Joni,

Using a calculated timestamp value in a query against the events table does
not work as you would expect.  It boils down to an issue with when TSM
constructs the events table to run the query against.  The default for "q
event" and "SELECT xxx,yyy,zzz FROM EVENTS..." is for the current day
*only*.
When you run your query with a calculated value to compare the
scheduled_start against, TSM builds the events table first (using the
default current day) BEFORE doing the comparison of
scheduled_start > current_timestamp - 1 day
If, on the other hand, you use a static value for the comparison, TSM will
return the results that you expect.  I've tried multiple tacks on getting
around this, including nesting the date comparison in one or more sets of
parentheses, but I was not able to come up with a way to have a server-side
TSM script that would allow us to just type "ru q_cl_status".  There may
be a way to do this, but I was not able to puzzle it out...
I had opened a PMR with IBM about this, and essentially got a "working as
designed" response from development.
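One workaround sketch along those lines: have a small wrapper substitute a literal timestamp into the SELECT before handing it to dsmadmc, so the server sees a static value (Python; the timestamp format and node-name patterns are taken from Joni's query below and may need adjusting to what your server accepts):

```python
from datetime import datetime, timedelta

def build_events_select(days_back=1):
    """Build the EVENTS query with a literal cutoff timestamp, since a
    static comparison value returns the expected rows while
    'current_timestamp - N days' does not."""
    cutoff = (datetime.now() - timedelta(days=days_back)).strftime(
        "%Y-%m-%d %H:%M:%S")
    return ("select node_name, scheduled_start, schedule_name, status "
            "from events where (node_name like 'HM%' or node_name like "
            "'PA%') and scheduled_start > '{0}' "
            "order by node_name".format(cutoff))
```

The wrapper then runs `dsmadmc ... <generated select>` on whatever schedule you like, which gets you the convenience of "ru q_cl_status" without relying on server-side script substitution.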
Ted

At 10:04 AM 10/3/2003 -0400, you wrote:
Hello everyone!

I have tried this select statement many times with no success.  I thought
that this statement had previously worked, but now I'm having problems
again...  I am looking for all nodes that begin with HM and PA for the past
day.  Does anyone have any suggestions?  This is on a mainframe TSM server
5.1.6.2 and I am trying to query NT clients.  I have also tried to create
this through the TSM Opertional Reporting tool without any luck, so I am
assuming that something is wrong with my syntax.  Thanks in advance!!!
select node_name, scheduled_start, schedule_name, status -
   from events -
   where (node_name like 'HM%' or node_name like 'PA%') -
   and scheduled_start > current_timestamp - 1 day -
   order by node_name
Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


Re: BackupSet creativity

2003-09-29 Thread Ted Byrne
Todd,

I'm guessing that you will eventually end up with an error message stating
that there was no match  found, or something to that effect.  It may take a
long time to get to that point.  Backupsets can be very slow to process;
you typically wind up reading through the entire tape before the command
completes (or errors out).
There may also be inherent client compatibility issues that will prevent
this from working at all.  For one thing, the filespace naming format is
completely different between NT and Netware.
If you did not specify a destination for the restore, there will be no
filespace/drive that matches the source on the server that you are running
the restore from.
It's always a good idea to specify a destination on a restore like this,
anyway - prevents nasty things like overwriting data on the production
server across the network, rather than restoring data to the test box that
the restore is running on.
Ted



At 10:23 AM 9/29/2003 -0500, you wrote:
Although this shouldn't matter, I am running TSM 5.1.7.1 on AIX 4.3.3.

We are moving a large number and size of files from a Netware server to an
NT server, and I am trying to come up with a way to help the process so we
don't have to use a workstation to copy files from a Netware mount to a
Windows mount (which really slows things down because the data has to
travel from the Netware server to the workstation and then to the NT
server all on one network connection).
I can't give client access rights to an NT server for Netware backups
because of the file/folder rights/properties/attributes issues.  So, my
bright idea was to generate a backupset of the Netware volume to the TSM
server, delete the backup set, define that tape volume to the NT server as
a backup set, and restore the data from TSM (my understanding is a
backupset is unencrypted backups with no rights/properties information...
just files and folders, perhaps readonly, hidden etc attributes included).
 The generate backupset went well (after I used move nodedata to place all
the files on one tape).  The delete and define had no noticeable issues.
We are trying to do the restore of the backupset to the NT server now, but
nothing is moving.  The session started only has 4.5K sent and 694 bytes
rcv'd.  It has been sitting like that for 20 minutes.  I visually checked
the tape drive, and the light is flashing as if the drive is doing
something (LTO library).  The NT admin verifies that there are no
folders/files restored.
Was this a pipe dream to begin with?


Re: BackupSet creativity

2003-09-29 Thread Ted Byrne
..., how can the restore
process on my NT server (test box) force a restore TO my Netware server
(production box) across the net work?  That entire last statement of yours
concerns me.  I didn't think that was possible, but perhaps I am
misinterpreting what you mean there.
Todd,

The scenario above is probably not going to happen if you are attempting to
restore Netware -> NT, or vice versa.  It *can* happen if you are
restoring Netware -> Netware or Wintel -> Wintel, because the original
server name is part of the filespace name, like NWPROD1\SYS: or
\\WIN2KPROD\C$.  Therefore, the original location is on the server where
the data was backed up from.
If you  are restoring to the original location (default behavior) and the
original location is visible across the network and you have the rights
to files, you can overwrite data that you may not be expecting to
overwrite.  This is why the danger for you is probably non-existent - you
won't be able to directly see a Netware filespace from an NT server.
I was simply stating that it is a good practice to always specify a
destination, as a matter of habit, to prevent just such a scenario from
occurring.
Ted


Re: BackupSet creativity

2003-09-29 Thread Ted Byrne
You're going to have to use that workstation to copy files from point A to
point B, after all.
Todd,

Rather than using a workstation and sending the data across the network twice,
could you install a Netware client (temporarily) on the NT server, map the
drive, and just do a copy from Netware to NT directly?
Ted


[no subject]

2003-08-29 Thread Ted Byrne
From: Bill Fitzgerald [EMAIL PROTECTED]
I have a client running Sun Solaris 5.8
with client version 5.1.5
TSM server is AIX 4.3.3 running TSM 5.1.6.5
Below is the  include/exclude list from the  dsm.sys
   include * shared_dynamic_60
include /adsmorc/.../* database_class
include /dev/.../* database_class
include /udb/.../* database_class
include /MOUNT/.../* database_0
include /usr1/.../* database_0
exclude /tmp/.../*
we recently add the last two includes, stopped and started the client, and
the objects were rebound.


Bill,

You might want to investigate exactly _what_ was rebound after
applying the changes to the include/exclude list.  In my experience, the
include/exclude syntax on the *nix platforms can be a bit misleading.
For example, the statement exclude /.../* would appear to
exclude all files on the client.  What it actually does is exclude all
files *only* in the / filesystem.
The first part of an include or exclude statement refers to the
filesystem, not a directory.  In the Win32 world, this is explicit, since
you have the colon separating the drive letter (or filesystem) from the
path. On *nix systems, the distinction is not readily apparent.
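To make the distinction concrete, here is a minimal illustration (this is not TSM's actual matcher, just a sketch of the "longest mounted filespace owns the file" idea described above; the mount points mirror the ones in this thread):

```python
# Sketch: a pattern like "exclude /.../*" is anchored to the "/" filespace,
# so files that live in a separately mounted filesystem are not covered.
def owning_filespace(filespaces, path):
    """Return the longest mounted filespace that contains path."""
    return max((fs for fs in filespaces
                if path == fs or path.startswith(fs.rstrip("/") + "/")),
               key=len)

mounts = ["/", "/usr1", "/usr1/GATE", "/MOUNT", "/MOUNT/MISC"]
print(owning_filespace(mounts, "/etc/hosts"))        # -> /
print(owning_filespace(mounts, "/usr1/GATE/x.dat"))  # -> /usr1/GATE
```

So an include/exclude pattern written against "/" never reaches /usr1/GATE/x.dat, which is why the per-filesystem include lines below are needed.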
From the expiration messages that you posted, you appear to have other
filesystems mounted below /usr1 and /MOUNT:
/usr1/GATE, /usr1/EXPORT, /MOUNT/MISC, /usr1/ICACHE/1

If you issue a q ba /usr1/GATE/* from the baclient command line, or browse
there with the GUI, I'm betting that you will see the managementclass as
SHARED_DYNAMIC_60
If you add these lines to your include/exclude list, you should get the
results that you are looking for.
include /usr1/GATE/.../* database_0
include /usr1/EXPORT/.../* database_0
include /usr1/ICACHE/1/.../* database_0
include /MOUNT/MISC/.../* database_0
Hope this helps,

Ted


Re: help needed include/exclude/rebind

2003-08-29 Thread Ted Byrne
Posted this w/o a subject line...
-tb
Date: Fri, 29 Aug 2003 11:24:17 -0400
To: [EMAIL PROTECTED]
From: Ted Byrne [EMAIL PROTECTED]

Ted


Wildcard limitations in filespec?

2003-08-25 Thread Ted Byrne
Greetings,

We'd like to run an incremental backup of a group of directories.  The
exact directory names vary from server to server, but they all follow the form
   /u01/app/oracle/admin/*/arch/
where the * is the name of an Oracle instance.

When we run an incremental using the filespec as shown,  the backup does
not pick up any files, even when we create a new file or touch an existing
file in one of those directories.  We've tried this with and without quotes
around the filespec.
Is there a limitation on the filespec used so that wildcards in the middle
of the string do not get processed?  These are Solaris 9 clients using
client version 5.1.6.2 or 5.2.0.0
Thanks in advance,

Ted


Re: Wildcard limitations in filespec?

2003-08-25 Thread Ted Byrne
Andy,

Thanks for pointing out the file specification syntax section in the client
reference manual.  I overlooked that when looking at the incremental syntax.
We may end up using a script to process the directories that we want to handle.
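Such a script could be as simple as the following sketch: expand the mid-string wildcard ourselves, then run one incremental per matching directory. (The dsmc invocation is only built and printed here, not executed; the pattern is the one from the original post.)

```python
import glob

# Expand /u01/app/oracle/admin/*/arch in the script, since the client
# apparently will not expand a wildcard in the middle of a filespec.
def arch_backup_cmds(pattern="/u01/app/oracle/admin/*/arch"):
    return [["dsmc", "incremental", d + "/", "-subdir=yes"]
            for d in sorted(glob.glob(pattern))]

for cmd in arch_backup_cmds():
    print(" ".join(cmd))
```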

Thanks again,

Ted


Re: MediaW

2003-06-10 Thread Ted Byrne
Any ideas why a tape is not being mounted?

This may not be the issue, but is the destination stgpool configured with
an adequate number of scratch volumes allowed (MAXSCRATCH)?
One other thing to look at - are all drives and paths online?

Ted


Re: License Pricing

2003-03-21 Thread Ted Byrne
The figures I quote are from the IBM Tivoli Price Estimator tool.

Is this tool publically available to TSM customers?

David
David and Arnaud,

The pricing tool is not available to customers.  See the following quote
from the tool:
IBM Internal Use Only
This tool authorized for use only within IBM and
by Authorized IBM Business Partners.  It is not to
be distibuted to customers or any other non-
authorized party.
Ted


Re: License Pricing

2003-03-20 Thread Ted Byrne
Doesn't the new licensing scheme result in 1) higher prices for
clients, but 2) lower prices for the server?
...
So if you plan to back up just a small number of clients, you may
actually come out cheaper?
Can anyone confirm this (possibly erroneous) opinion?
Tivoli certainly doesn't make it easy to understand.
Wanda,

Your opinion is correct.  For a small installation, TSM can actually be
less expensive to purchase than it used to be.
Speaking from the standpoint of someone working for a reseller, this can be
a good thing.
However, in my opinion, the tipping point where a TSM solution starts
costing more than it would have before is *way* too low.  I have not
done an exacting analysis on where that point actually is, but for some
customers, what was once a $40-50K software purchase would now be in the
hundreds of thousands of dollars.
The fact that licensing is now CPU-based rather than box-based
introduces a very ugly multiplier effect at customers where the typical
Wintel server is 2-way or 4-way.  What used to be a $217 client (for a
4-way Wintel machine) is now anywhere from $1400 to $2500 list (depending
on the size of the library being used) in quantities of 1.  Quantity
discounts are, of course, available...
The figures I quote are from the IBM Tivoli Price Estimator tool.  Prices
are suggested prices only and are subject to change in IBM's sole discretion.
Even if you're not adding clients, the licensing change introduces a very
stiff increase in maintenance costs, as well.
My opinion only...
(But I have shared it with folks from IBM/Tivoli when I've had the chance.)
Ted


Q BA syntax on *nix

2003-02-18 Thread Ted Byrne
Please feel free to point me to the documentation if I've overlooked
something here, but I'm running into a brick wall...

   Client version 5.1.5.5
   Client OS - AIX 5.1 ML 02
   Server version 5.1.5.4
   Server OS - AIX 5.1 ML 02

I've been asked to produce a list of all current files that are backed up
on two  particular AIX nodes.  Based on the syntax described in the manual,
when logged in as root, I enter the command

   q ba -subdir=y /*

expecting to get a list of all files from the / directory on
down.  However, what I get appears to be all of the files in the /
filespace.  When I enter another FS name in place of the /* filespec,
/u06/* for example, the files in that filesystem are displayed.

Is there a way to display all files backed up using q ba  without
explicitly listing each FS by name in a separate command (for example with
a macro)?
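One way to build that macro without typing each filesystem by hand: generate a "q ba" line per filespace. A hedged sketch (the filespace names are examples; in practice they would come from the output of "dsmc query filespace"):

```python
# Build a client macro with one "query backup" command per filespace,
# so every filesystem's current files get listed in one run.
def qba_macro(filespaces):
    lines = ['query backup "%s/*" -subdir=yes' % fs.rstrip("/")
             for fs in filespaces]
    return "\n".join(lines)

print(qba_macro(["/", "/u06", "/home"]))
```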

Thanks,

Ted



Re: Anyone tried 5.1.1.6?

2002-09-24 Thread Ted Byrne

Gretchen,

 I just took a v4.2.2.12 AIX server to v5.1.1.6 last Saturday.
 All I can say is 'not pretty'.
 This is due to the SYSTEM OBJECT problem.

Can you elaborate on the system object problem that you were dealing
with?  I've searched for an APAR that fits the description that you've
given, but I have not been able to locate a reference to the APAR.  I've
looked in the 5.1.1.6 readme and searched through Tivoli's/IBM's search
mechanisms.

Thanks,

Ted



Re: Private Volumes

2002-05-13 Thread Ted Byrne

Burak,

You can automate the return of volumes using AutoVault. (See
www.coderelief.com and search the adsm.org archives for detailed
information regarding AutoVault's capabilities.)  In comparison to
AutoVault, DRM is missing some capabilities, such as automated vaulting of
backupsets and automated generation of tape rotation reports to email and
printer.

We have been using AutoVault extensively, and are very happy with what it
enables us to do.

Ted Byrne



Server-side scripting: not supported?

2002-02-28 Thread Ted Byrne

Has anyone received a similar response from TSM support in the past?  We
are working on an issue with a process running for a very long time, and
received the following as part of the response from TSM support:

 [W]as that the
 run-time of a specific backup process, or was
 that the run-time of the entire script

 Since we do not support scripts, I need to verify
 that this problem is not your script Try running
 each command in your script manually

The specific instance was a storagepool backup that was still running a day
later, parked on a 16+ GB file.  The storagepool backup was tape to tape;
the drives are on separate, dedicated SCSI adapters.

TSM Server is Win2k,
TSM version 4.2.1.0,
IBM 3583 Library

Thanks in advance,

Ted



Date math and the Events table

2002-01-17 Thread Ted Byrne

First off, I should say that I have read (and think I understand) Andy
Raibeck's explanation from 11/23/2001 of the restrictions on queries
against the Events table.  However, in trying to craft a query that returns
records from a relative timeframe, I am running into some problems getting
the result that I would like to get.

What I am trying to do is run a query that  yields the same results as

q ev * * begind=-1 endd=today begint=08:00 endt=07:59

with an eye toward doing some calculations on the number of events with
different status conditions

This is what I came up with:

select -
schedule_name, -
time(scheduled_start) as Scheduled, -
time(actual_start) as Actual, -
status as Status, -
node_name as Client -
from events -
where node_name is not null -
and date(scheduled_start+16 hour-1 minute)=date(current_timestamp)

This returns events only from the current day
However, if I hard-code the dates as follows:

select -
schedule_name, -
date(scheduled_start),-
time(scheduled_start) as Scheduled, -
time(actual_start) as Actual, -
status as Status, -
node_name as Client -
from events -
where node_name is not null -
and scheduled_start between -
'2002-01-16 08:01:00' and -
'2002-01-17 08:00:59'

I get a lengthy listing of all events, as I do from the query event command.

I suspect that there is some voodoo happening with the relative date
calculation not occurring before the events are restricted to the current
date, but I have not been able to come up with a query that works as I
would like it to.
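To make the target window explicit (the arithmetic the server-side SQL has to reproduce), here is a small sketch of the boundaries that "begind=-1 endd=today begint=08:00 endt=07:59" describes:

```python
from datetime import datetime, timedelta

# The q ev window runs from yesterday 08:00 through today 07:59.
def event_window(now):
    today_8am = now.replace(hour=8, minute=0, second=0, microsecond=0)
    return today_8am - timedelta(days=1), today_8am - timedelta(minutes=1)

start, end = event_window(datetime(2002, 1, 17, 10, 30))
print(start)  # 2002-01-16 08:00:00
print(end)    # 2002-01-17 07:59:00
```

A WHERE clause built from both boundaries (as in the hard-coded version below) avoids the truncation-to-date problem, because the comparison happens on the full timestamp.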

Any suggestions or pointers on performing date math in SQL queries would be
greatly appreciated

Thanks,
Ted



Re: Point system has me very confused

2001-12-09 Thread Ted Byrne

  Dwight,

 Looking into things you realize that on 20 existing machines you have
 NEVER used software distribution, SO you just drop it from those
 20 machines and use the freed points to apply towards TSM client &
 TWS points for your new 12 machines.

Unfortunately, I believe that this is not correct.  According to the
discussions with Tivoli that we've had (as a reseller), the points are
transferrable only within the confines of a Product Number.  This is true
even for products that one might think of as components of the same product.

For example, you might have a stable of 10 Unix servers that you're backing
up with TDP/Oracle (Product 5698-DPO).  Being Tier-2 systems, the list
price is 200 points each client, about $62,000 total.

Your company announces an acquisition of a company, and in the course of
hashing out the details of the transition, it is decided that the merged
company will standardize on DB2 as a database platform.  In addition, they
have 100 Unix web servers that are being backed up with a product that
nobody likes.  TSM is selected as the standard backup method for the merged
company.

In our understanding, (someone from Tivoli correct me if I am wrong about
this, please!) you may not transfer the 2000 fallow points from your
existing 5698-DPO bucket to the 5698-TSM (Managed System - LAN) product
that is required to backup your web servers.  Total list price: 1500
points, about $46,500.

Again, this is how it was explained to us; if someone from Tivoli can
correct me, I'd be happy to hear otherwise.

Quoting from the Tivoli Quick Quoter tool: Tivoli Points are Product
Specific.

Ted Byrne



Re: AutoVault

2001-10-19 Thread Ted Byrne

We use AutoVault in several locations; it's performed well doing day-to-day
vaulting tasks, it has capabilities that DRM does not have (sending
backupsets to the vault is a big one), and the recovery scripts are great
to have at DR test time.

Hope this  helps,

Ted



Re: Novell / Groupwise / TSM

2001-10-04 Thread Ted Byrne

 Is there a way to script the 4 archive commands, where each
 waits for the previous one to complete?

You could schedule a macro to execute.  The contents of the macro on the
client would just be a list of the archive commands to run.  We recently
used a macro with a series of commands to gather information about the
configuration of some clients.
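A minimal sketch of such a macro (the GroupWise paths are hypothetical; the client schedule would run it with ACTION=MACRO, and each archive command completes before the next one starts):

```
archive "/groupwise/po1/*" -subdir=yes
archive "/groupwise/po2/*" -subdir=yes
archive "/groupwise/po3/*" -subdir=yes
archive "/groupwise/po4/*" -subdir=yes
```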

HTH,

Ted



SQL query - GROUP on derived value?

2001-09-24 Thread Ted Byrne

Is it possible to group query output by a value that is *not* directly
present in a column in a table?  What I'm trying to achieve is something
like producing a tally of volumes that are pending, grouped by day that
they went pending.

If I try something like this:
select count(*),cast(pending_date as date) as 'Reclaimed' -
from volumes as pending where status='PENDING' -
group by 'Reclaimed'

TSM does not care for it:
ANR2906E Unexpected SQL literal token - 'Reclaimed'.

If I change the group by to refer to the pending_date column, the query
is processed, but the pending_date is a TIMESTAMP with a time component as
well.  There might be 10 different pending_date values that fall at various
times during a particular day.

This can certainly be done using perl, but I'd like to stick to pure SQL
if I can, to make it available as a script from within TSM.

Any suggestions?

If this is an RTFM item, I'll willingly take my time in the corner...  (A
referral to a SQL reference would be helpful; perhaps I'm just not looking
at the right sections of the references that I've consulted.)

Thanks,

Ted



Re: SQL query - GROUP by derived value?

2001-09-24 Thread Ted Byrne

Andy,
Thanks for the feedback.  I may not have been clear in stating my original
question, and now that I've gone back and re-read your ADSM SQL reference,
I think what I'm looking for may not be possible within the syntax of the
SQL Query.

Just to clarify:

 But now my question becomes, why do you have
 from volumes as pending instead of just from volumes?
This was a typo on my part...

 I would just write:
 select count(*), cast(pending_date as date) as Reclaimed -
 from volumes where status='PENDING' group by pending_date

When running this query, the output looks like this:
  Unnamed[1]  Reclaimed
 --- --
   1 2001-09-17
 (more rows here)
   1 2001-09-17
 (more rows here)
   1 2001-09-24
 (more rows here)
   2 2001-09-24
 (more rows here)
   1 2001-09-24

There are multiple results with the same value in Reclaimed; the
PENDING_DATE is different because they became pending at different times
during the same calendar day.

What I'm trying to produce would be more like the following:

  Unnamed[1]  Reclaimed
 --- --
   7 2001-09-17
  10 2001-09-23
  11 2001-09-24

The syntax diagram from Using the ADSM SQL Interface seems to exclude
anything but a column name that is part of a table from being used for
grouping:

   GROUP BY [table_name.]column_name
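Outside of the server's SQL, the tally is straightforward; a client-side sketch of the same grouping (the timestamps are made-up PENDING_DATE values, truncated to their date part before counting):

```python
from collections import Counter

# Group pending volumes by the calendar day they went pending.
pending_dates = [
    "2001-09-17 03:12:44", "2001-09-17 21:05:01",
    "2001-09-24 01:00:00", "2001-09-24 13:30:00", "2001-09-24 18:45:09",
]
tally = Counter(ts.split()[0] for ts in pending_dates)
for day in sorted(tally):
    print(tally[day], day)
```

This is essentially what the perl fallback mentioned earlier would do; the trade-off is that it can't be saved as a server script.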

Thanks,

Ted



Re: Multiple TSM Servers

2001-09-05 Thread Ted Byrne

 I am worried about our operators accidentally inserting library-1
 good tapes (i.e. offsite) into the other library-2 as scratch.
 The TSM server will add them to the scratch pool.

Bill - Richard's idea about using the distinct solid colors around each
library portal is a good one.

If you're using barcode-labelled media, another alternative is to use
markedly different label formats for the human-readable portion of the
label.  For example, one set could use horizontal color-coded labels and
the other set could be labelled with vertical black & white labels.

What type of library/media will you be using?

Ted



Re: AIX device drives

2001-07-03 Thread Ted Byrne

 I found a message on the list
 (http://msgs.adsm.org/cgi-bin/get/adsm0012/34.html)
 which refers to a level 4.3.3.25. Is this an error
 or is there indeed a higher level at an obscure hidden
 place on the Internet somewhere?

Eric,

It's certainly possible that there is a(nother) obscure hidden place on the
Internet somewhere that has a higher version of the driver, but all of the
ftp sources that I've checked do not show this fileset.  Is calling AIX
SupportLine to verify if there is a more recent level of the driver an
option?

It does seem rather odd that this device driver would not have been
modified since base 4.3.3 code.

Perhaps the author of the December post you referred to (Gareth Jenkins)
could shed some light on the subject?

Ted


