magstar 3494 / 3590 / AIX question

2000-08-10 Thread Richard Cowen

Support just got back to me.  They suggest using:

Atape 5.0.2
atldd 4.0.7

under AIX 4.2.1.   I'll do that.


At 10:25 AM -0400 8/10/00, Richard Cowen wrote:
I am currently running:

AIX 4.2.1
ADSM 3.1.2.55
Atape 4.4.0.0
atldd 4.0.1.0
3590 B1A's.

I am going to upgrade the 3590's to E1A's.

The newest drivers are:

Atape 5.3.7
atldd 4.1.5

The question:

Can I stay at AIX 4.2.1 and run these newer drivers?
Smitty install doesn't complain.
I have a call in to Support, but it's currently in the software-hardware loop.
--
Richard



Re: Client Monitoring and Reporting

2000-09-06 Thread Richard Cowen

Here's a place to start:

perl scraps:

# Examine adsm activity log and:
# create html for backups/archives in the last 24 hours.
# create html for "scheduled busy's" in the last 24 hours.
# create html for "schedules missed" in the last 24 hours.
# create html for "schedules failed" in the last 24 hours.
# create html for backup error messages  in the last 24 hours.


# Build the select: interesting messages within a start/end date-time window.
# (The comparison operators were stripped in transit; reconstructed as >= and <.)
$adsmc  = 'select date_time as \"date_time--\", message as \"message---\", sessid';
$adsmc .= ' from ACTLOG where msgno in \(406,403,407,405,480,481,482,483,484,485,486,487,4961,2578,2579,2571,4953,4954,4955,4956,4959,4964,4007,4966\)';
$adsmc .= ' and date_time \\>= \\\'';
$adsmc .= "$mydate $mytime";
$adsmc .= '\\\' and date_time \\< \\\'';
$adsmc .= "$enddate $mytime";
$adsmc .= '\\\' ';


# do the select adsm command against the ACTLOG table

getlog;

# process the arrays built for backups/archives.

process_backups;
process_busys;
process_missed;
process_errors;
process_failed;

sub getlog {
    $command = "/usr/bin/dsmadmc -id=batch_admin -passw=$bapwd -outfile=$tmp1 $adsmc";
    system("$command");

...


if    ($text[0] eq "ANE4961I") { ... } # number of bytes transferred
elsif ($text[0] eq "ANE4964I") { ... } # elapsed time
elsif ($text[0] eq "ANE4966I") { ... } # network transfer rate
elsif ($text[0] eq "ANR0406I") { ... } # session start
elsif ($text[0] eq "ANR0403I") { ... } # session stop
elsif ($text[0] eq "ANR0480W") { ... } # session terminated - connection severed
elsif ($text[0] eq "ANR0481W") { ... } # session terminated - timeout
elsif ($text[0] eq "ANR0482W") { ... } # session terminated - idle
elsif ($text[0] eq "ANR0483W") { ... } # session terminated - forced
elsif ($text[0] eq "ANR0484W") { ... } # session terminated - protocol violation
elsif ($text[0] eq "ANR0485W") { ... } # session terminated - insufficient memory on server
elsif ($text[0] eq "ANR0486W") { ... } # session terminated - internal error on server
elsif ($text[0] eq "ANR0487W") { ... } # session terminated - preempted
elsif ($text[0] eq "ANE4959I") { ... } # objects failed
elsif ($text[0] eq "ANE4953I") { ... } # objects archived
elsif ($text[0] eq "ANE4954I") { ... } # objects backed up
elsif ($text[0] eq "ANE4955I") { ... } # objects restored
elsif ($text[0] eq "ANE4956I") { ... } # objects retrieved
elsif ($text[0] eq "ANE4007E") { ... } # error processing
elsif ($text[0] eq "ANR2571W") { ... } # no sessions available
elsif ($text[0] eq "ANR2578I") { ... } # missed scheduled session
elsif ($text[0] eq "ANR2579E") { ... } # scheduled session failed




--
Richard



Re: samples..........

2000-09-11 Thread Richard Cowen

I had an auditdb (fix=no) on a 10 GB database running for 40 hours.
(AIX, RS/6000, C20, SCSI-disks)

Greetings,
Pieter

"Wakefield, Nigel" schreef:

  Does anyone have example auditdb fix=yes times to compare with my
  audit, which is running on a 40 GB database?

  Does anyone know of a way of telling how many entries you have in a
  database without doing an unloaddb?



I just finished a "dsmserv auditdb fix=no".
db = 52 GB, 82% occupied.  sum(num_files) = 100,000,000.
AIX 4.3.2, 4 x 112 MHz high node, 1 GB memory, SSA disks.
TSM 3.7.3.6.
No other significant processes running on that node.
Time = 4 days, 12 hours.

Last 5 lines:

ANR4306I AUDITDB: Processed 333290150 database entries (cumulative).
ANR4306I AUDITDB: Processed 333290150 database entries (cumulative).
ANR6646I AUDITDB: Auditing disaster recovery manager definitions.
ANR4210I AUDITDB: Auditing physical volume repository definitions.
ANR4141I AUDITDB: Database audit process completed.
--
Richard



Re: Determining Capacity on ADSM?

2000-09-19 Thread Richard Cowen

How do I determine how many physical slots my 3494 library has?

mtlib -l /dev/lmcp0 -qL | grep cells
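If you want the slot count in a script, the grep line can be reduced to just the number. A minimal sketch, assuming the matched line looks roughly like the sample below (the exact mtlib -qL wording varies by code level):

```shell
# Hypothetical sample of the line grep finds; real mtlib -qL wording may differ.
line="   number of cells.............1344"
# Strip everything but the digits to get the slot count.
slots=$(printf '%s\n' "$line" | tr -dc '0-9')
echo "$slots"
```

In practice, replace the sample line with `mtlib -l /dev/lmcp0 -qL | grep cells`.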

How do I determine how many tapes are offsite?

a) select count(*) from volumes where stgpool_name='BACKUP3590_OFFSITE'
(where a storagepool is used for offsite.)
b) select count(*) from drmedia
(where DRM is in use.)
c) select count(*) from volumes where access='OFFSITE'
(where you set the access or use DRM.)

Note: tapes may be in transit to-and-from offsite.
Also, db backups are not in the volume table.

How do I determine how many tapes total I have?

select count(*) from volumes where devclass_name='MAGSTAR3590'
(where your device class name is MAGSTAR3590.)

How do I determine the average capacity per tape?


select avg(EST_CAPACITY_MB) from volumes where devclass_name='MAGSTAR3590'
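If these counts feed a script, the number has to be dug out of dsmadmc's formatted report (an "Unnamed[1]" header, a dashed rule, then the value, as in the sample replies further down the thread). A small awk sketch under that assumption:

```shell
# Dig the single numeric result out of a "select count(*)" style report:
# skip the "Unnamed[1]" header and the dashed rule, print the first bare number.
printf '%s\n' '  Unnamed[1]' '---' '  49' \
| awk '/^[ \t]*[0-9]+[ \t]*$/ { print $1; exit }'
```

In practice you would pipe the real `dsmadmc ... "select count(*) from ..."` output through the same awk.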

--
Richard



Re: Determining Capacity on ADSM?

2000-09-19 Thread Richard Cowen

tsm: TSM> select count (*) from volumes where devclass_name='stk9710'

  Unnamed[1]
---
   0

tsm: TSM> select count (*) from drmedia

  Unnamed[1]
---
  49

tsm: TSM> select avg(EST_CAPACITY_MB) from volumes where
devclass_name='stk9710'

Unnamed[1]

What's wrong with the first and third queries? Or are they not applicable to STK?
-


This is user-defined at installation/configuration.
Do a "query devclass" and see what your tapes are using.
(Case matters in select comparisons.)
Mine shows:

tsm: ADSM> q devclass

Device     Device      Storage  Device  Format  Est/Max    Mount
Class      Access      Pool     Type            Capacity   Limit
Name       Strategy    Count            (MB)
---------  ----------  -------  ------  ------  ---------  -----
DISK       Random      3
LOOPBACK   Sequential  0        SERVER          10,240.0   2
MAGSTAR3-  Sequential  7        3590    DRIVE   0.0        3
  590



on another note:

m147397@f1n01 $mtlib -?

Usage: mtlib -[acdfiklmnqstvCDV?]
Arguments: -f[filename]   device special Filename, i.e. "/dev/rmt0".
-x[number] device number, i.e. "518350".
-l[filename]   Library special filename, i.e. "/dev/lmcp0".
-q[type]   Query library information option.
   type should be one of the following:
V  Volume data.
L  Library data.
S  Statistical data.
I  Inventory data.
C  Category inventory data.
D  Device data.
E  Expanded volume data.
K  inventory volume count data.
R  Reserved Category List
A  Category Attribute List
M  All Mounted Volumes
-m Mount option.
-d Demount option.
-D List of devices in library.
-E Used with -D option for expanded device list
-c[request id] Cancel pending request option.
-n No wait mode.
-i[request id] Query request id status option.
-C Change the category of a volume.
-a Audit the specified volume.
-k[flags]  Assign a category to a device in the Library.
   flags should be the following:
O  Enable Category Order
C  Clear out ICL
G  Generate First Mount
A  Enable Auto Mount
X  Remove category assignment from drive
Valid combinations: OG, OA, GA, OGA
   Loader associated with the Library.
-r Reserve a category
-R Release a category
-S Set category attribute
-s[category]   Source or starting category.
-t[category]   Target category.
-L[list]   List of Volume serial numbers
-V[volser] Volume serial number
-N[name]   Category Name Attribute to assign to category.
-h[hostid] host id for reserve/release category
   or R/A option for query command
-u Include usage date in expanded volume data.
   Default is ISO format with period separator.
-F[flags]  Format and/or separator for volume usage date
   flags can be:
I  ISO/Japan yyyy.mm.dd
E  European dd.mm.yyyy
U  USA mm.dd.yyyy
p  period separator mm.dd.yyyy
d  dash separator mm-dd-yyyy
s  slash separator mm/dd/yyyy
-v Verbose.
-#[number] Category sequence number or number of
   categories to reserve.
-A Query library addresses and status
-? this help text.

NOTE:   The  -l argument is required.
--
Richard



Re: TSM 4.1 vs. 3.7

2000-10-11 Thread richard cowen

Interestingly enough, today I received TSM 4.1 in the mail. If IBM says I'm
licensed for this product, then I guess I'll have to believe them.

Now the next question is which way to go. I'm running on AIX 4.3.3 with a
3494 library. I have both 3.7 and 4.1 but am having problems deciding if I
should upgrade to 3.7 first and see how things work out, then migrate again,
or just go to 4.1.

I haven't seen a lot of posts on 4.1 and I'm wondering if there is anyone
with a similar setup that might be using it already. If so could you send me
some info please.


Me, too. (AIX 4.3.2).  The TSM 3.7 is still in test mode, so I will
put on 4.1 and try that.



counting tapes

2000-10-18 Thread richard cowen

# of tapes used for backups on a daily basis.

The simplest way is to do daily queries of your backup storage pool
volume complement and track the increase.

Or you could look in the actlog for msgs 1340,1341,1360,6684...
Or at the tail of your volhistory disk file:


unix> tail dsmserv.volhistory
  2000/10/17 15:48:42  STGDELETE   0  0  0  MAGSTAR3590  M00569
  2000/10/17 16:30:33  STGDELETE   0  0  0  MAGSTAR3590  M00724
  2000/10/17 17:04:37  STGDELETE   0  0  0  MAGSTAR3590  M00833
  2000/10/17 17:39:14  STGNEW      0  0  0  MAGSTAR3590  M02030
  2000/10/17 17:45:36  STGDELETE   0  0  0  MAGSTAR3590  M01467
  2000/10/17 18:16:50  STGDELETE   0  0  0  MAGSTAR3590  M01175
  2000/10/17 18:47:06  STGDELETE   0  0  0  MAGSTAR3590  M00468
  2000/10/17 20:23:20  STGDELETE   0  0  0  MAGSTAR3590  A3
  2000/10/17 20:53:12  STGDELETE   0  0  0  MAGSTAR3590  M00060
  2000/10/18 00:11:36  STGNEW      0  0  0  MAGSTAR3590  M02031
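The volhistory approach scripts easily. A minimal awk tally of STGNEW entries (a scratch tape newly taken into a pool) per day, fed here with sample lines in the same format as the tail above:

```shell
# Count scratch tapes consumed (STGNEW) per day from volhistory-style lines.
awk '/STGNEW/ { n[$1]++ } END { for (d in n) print d, n[d] }' <<'EOF' | sort
2000/10/17 17:39:14  STGNEW     0  0  0  MAGSTAR3590  M02030
2000/10/17 18:16:50  STGDELETE  0  0  0  MAGSTAR3590  M01175
2000/10/18 00:11:36  STGNEW     0  0  0  MAGSTAR3590  M02031
EOF
```

Point the awk at dsmserv.volhistory instead of the here-document for the real count.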


tsm: ADSM> q actlog begind=-1 msg=1340

Date/Time            Message
--------------------------------------------------------------------------
10/17/00   09:02:15  ANR1340I Scratch volume M02025 is now defined in
                     storage pool TSMDBB.
10/17/00   10:32:35  ANR1340I Scratch volume M02026 is now defined in
                     storage pool BACKUP3590_OFFSITE.
10/17/00   11:31:02  ANR1340I Scratch volume M02027 is now defined in
                     storage pool BACKUP3590_OFFSITE.
10/17/00   13:42:07  ANR1340I Scratch volume M02028 is now defined in
                     storage pool BACKUP3590_OFFSITE.
10/17/00   15:20:02  ANR1340I Scratch volume M02029 is now defined in
                     storage pool TAPEPOOL.
10/17/00   17:39:17  ANR1340I Scratch volume M02030 is now defined in
                     storage pool TAPEPOOL.
10/18/00   00:11:40  ANR1340I Scratch volume M02031 is now defined in
                     storage pool TAPEPOOL.

tsm: ADSM> q actlog begind=-1 msg=1360

Date/Time            Message
--------------------------------------------------------------------------
10/18/00   05:01:16  ANR1360I Output volume M02032 opened (sequence number 1).

tsm: ADSM> q actlog begind=-1 msg=1341

Date/Time            Message
--------------------------------------------------------------------------
10/17/00   15:03:06  ANR1341I Scratch volume M00031 has been deleted from
                     storage pool BACKUP3590_OFFSITE.
10/17/00   15:03:09  ANR1341I Scratch volume M00042 has been deleted from
                     storage pool BACKUP3590_OFFSITE.
10/17/00   15:48:45  ANR1341I Scratch volume M00569 has been deleted from
                     storage pool TAPEPOOL.

tsm: ADSM> q actlog begind=-1 msg=6684

Date/Time            Message
--------------------------------------------------------------------------
10/17/00   15:03:07  ANR6684I MOVE DRMEDIA: Volume M00031 was deleted.
10/17/00   15:03:09  ANR6684I MOVE DRMEDIA: Volume M00042 was deleted.
10/17/00   15:03:11  ANR6684I MOVE DRMEDIA: Volume M00092 was deleted.

--
Richard



Re: Scripts/Tools to monitor drive usage?

2000-10-18 Thread richard cowen

Here is a sample HTML report I create by querying the actlog.



Re: DSMADMC from Visual Basic Shell Function?

2001-02-06 Thread Richard Cowen

Are the shell environment variables set (DSM_DIR, DSM_CONFIG, PATH)?
I am not a VBS expert, but I can do this with cscript:

testcmd="dsmadmc -id=admin -pa=pw -outfile=test.out " + arg0
set wshshell=wscript.createobject("wscript.shell")
wshshell.run testcmd,0,True


-Original Message-
From: Fletcher, Leland D. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 11:44 AM
To: [EMAIL PROTECTED]
Subject: DSMADMC from Visual Basic Shell Function?


I'm attempting to write a VB application to monitor the daily ADSM schedule
and to issue several commands throughout the day. My problem is that when I
use the shell command nothing seems to be working. I do not get any errors
but I do not get any output either. The format of the shell command I'm
using is provided below. I have not set any environment variables and plan
on executing the VB application under Windows 2000.

The following should issue the "q fi".

Private Sub Command1_Click()
x = Shell("C:\Progra~1\IBM\ADSM\saclient\DSMADMC.EXE -id=admin -pa=x
-outfile=c:\cmdout.txt q fi")
End Sub

The function runs without error but I do not get the -outfile allocated.
Thanks in advance for any input to my problem!

Lee Fletcher
Network Project Integrator
AmerenUE Callaway Plant
[EMAIL PROTECTED]



Re: Sending messages to an email address on Windoze?

2001-02-13 Thread Richard Cowen

Is there a mailx for Windoze? From MS?
thanks



Re: 3494, multiple tsm's connected, balancing media

2001-02-14 Thread Richard Cowen

Snyder.John [[EMAIL PROTECTED]] wrote:

This configuration should allow us to "rebalance" the number of scratch
tapes between the different servers as required.   (ie: if I need to move
some tapes from server A to server B, I don't need to peel and stick any
volser labels!) So far, so good...and something I've done occasionally
in the past.

However, when we use DRM and insert tapes into the 3494 that have
returned from offsite, when we do a checkin for these returned volumes
(using "checkin libv MYLIB stat=scr search=yes devtype=3590 checklabel=no"),
all of the returned media gets checked into the one TSM server that issued
the checkin.




Or script the checkin process.  I had the script check the volids against
the volhist file to see if a volume was "ours", and only issue
checkins against those that matched.  I kept the volhist forever to keep
stats on volume usage, etc.  I used mtlib to get a list of inserted tapes.
You would also need to account for "new" scratch tapes that might need
labeling...
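A sketch of that matching step, with hypothetical file names: ours.txt holds volsers pulled from the volhist file, inserted.txt holds the volsers mtlib reports for the insert category. Only the intersection gets checked in; the leftovers are the "new" tapes that likely need labeling:

```shell
# Hypothetical volser lists; in practice these come from volhist and mtlib.
printf 'M00100\nM00200\nM00300\n' > ours.txt
printf 'M00200\nM00300\nM00999\n' > inserted.txt
# Inserted volumes that are ours -> safe to check in as scratch.
grep -Fx -f ours.txt inserted.txt > checkin.txt
# Inserted volumes we don't recognize -> probably new media needing labels.
grep -Fxv -f ours.txt inserted.txt > label.txt
cat checkin.txt
```

Each volser in checkin.txt would then drive a checkin libv command; label.txt feeds the labeling step.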



Re: Backup Sets for Long Term Storage

2002-02-13 Thread Richard Cowen

Just some thoughts:

Generating backupsets requires no client resources.
Backupsets currently cover only filesystems, not TDP data.
Backupsets cover only active data.
Backupset tapes are tracked in volhistory (along with the command that
created them).
Backupset tapes are one-per-node.
Backupset tapes can only be refreshed by restoring to a node, backing up,
and re-generating.

Exporting a node requires no client resources.
Export node does support TDP data.
Export node can do active, inactive, backup, archive or all.
Export tapes are tracked in volhistory.  (along with the command that
created it.)
Export tapes can hold more than one node.
Export tapes can only be refreshed by importing, and re-exporting. (no
client activity)
Export tapes can be dry-run imported with preview=yes.
Export tapes can be imported across O/S platforms, and thus may be more
portable.

Archiving requires client resources (cpu, network, etc.)
Archiving only covers active data.
Most TDP data is type=backup.

Since 7 year archives will not expire very often, the only ways to refresh
them are:
1) Mix them with backup data (as someone has mentioned.)
2) Use Copypools for them and re-copy them periodically.
3) Retrieve and re-archive them.
4) Use move data periodically on all the tape volumes.

Since technology changes much faster than 7 years, one assumes that any
periodic migration process will result in copying old media (say, DLT) to
new media (say, LTO.)  Hopefully, this applies to version upgrades of TSM as
well.  (Companies that offer migration services will be happy to do this for
you!)

Maybe some future TSM utility/command will support export/backupset
duplication, (I suppose a unix dd command would work if the source and
destination both fit on one physical tape.)

As has been pointed out several times, the real question is whether you can
turn that 7-year-old data into information; that is, will your applications
still run on Windows 2007 or AIX 6, and did you keep that old Pentium box to
run them on?  (I bet you will be happy all your database data was saved as
flat CSV files, including the meta information to process them, and that you
kept that DICOM display utility for that old medical image data.)

Richard Cowen
Senior Technical Specialist
CNT



Re: dsmserv restore db problem with 3494 libr

2002-02-15 Thread Richard Cowen

Matching APARs.  Note the dates and closed status. These are for 3.7 and
4.1.
Pretty confusing. Does someone have a newer APAR?


APAR= IC29523  SER=IN INCORROUT
DATABASE RESTORE WILL FAIL FOR 3494 LIBRARY IF THE LIBTYPE IS
NOT MANUAL IN THE DEVCONFIG FILE.

Status: CLOSED  Closed: 02/20/01
Apar Information:
RCOMP= 5698TSMSUTSM SUN/SOL SER RREL= R410
FCOMP=  PFREL= F TREL= T
SRLS:
Return Codes:
Applicable Component Level/SU:

Error Description:
When performing a database restore using a device class
associated with a 3494 library, the restore will fail with the
following errors:
   ANR8416E DEFINE LIBRARY: The DEVICE parameter is invalid for library type
MANUAL.
   ANRD admdbbk.c(4144): Error 272 creating device class 3590CLASS.

The error above indicates that the libtype specified in the
DEFINE LIBRARY command is manual, however, from the device
configuration file we can see that this is not accurate:
   /* Device Configuration */
   DEFINE DEVCLASS 3590CLASS DEVTYPE=3590 FORMAT=DRIVE
   MOUNTLIMIT=DRIVES MOUNTWAIT=60 MOUNTRETENTION=60 PREFIX=ADSM
   LIBRARY=3494
   DEFINE LIBRARY 3494 LIBTYPE=349X DEVICE=3494
   PRIVATECATEGORY=300 SCRATCHCATEGORY=301 SHARED=NO
   DEFINE DRIVE 3494 DRIVE1 DEVICE=//dev/rmt/0st ONLINE=YES
.
Based on the information in the device configuration file, the
ANR8416E message is clearly erroneous.  In order to successfully
restore the database using the 3494 library, the customer must
change the libtype in the DEFINE LIBRARY command to manual.  The
volume(s) required for the restore must then be mounted in the
appropriate drive via mtlib.
.

To recreate this problem:
   1. Create a devconfig file using the DEFINE DEVCLASS, DEFINE
  LIBRARY and DEFINE DRIVE commands listed above.
   2. Attempt a database restore using the 3590CLASS device
  class (i.e. dsmserv restore db devclass=3590CLASS)
.
This problem has been seen on AIX and Solaris with the TSM 3.7.x
and 4.1.x levels of Server code.


APAR= IC31194  SER=DD DOC
THE TSM SERVER MANUALS DO NOT INDICATE THAT A DATABASE RESTORE
CAN ONLY BE PERFORMED WITH A LIBTYPE OF MANUAL OR SCSI

Status: CLOSED  Closed: 10/01/01
Apar Information:
RCOMP= 5698TSMAXTSM AIX SERVER  RREL= R410
FCOMP=  PFREL= F TREL= T
SRLS:  GC35040301
Return Codes:
Applicable Component Level/SU:
Error Description:
When performing a database restore, only the libtypes of SCSI
and MANUAL are supported.  Any attempt to restore the TSM
database using the libtype of 349X will fail.  This limitation
is addressed in APAR IC29523, however, the TSM Server manuals do
not document this limitation.

INITIAL IMPACT = LOW
Local Fix:

Problem Summary:

* USERS AFFECTED: All users of TSM 4.1 and later.              *

* PROBLEM DESCRIPTION: TSM restore database can only be done   *
* on manual or SCSI libraries.                                 *

* RECOMMENDATION: Update the Administrator's Guides and        *
* References with this restriction.                            *

TSM restore database can only be done on manual or SCSI libraries.



Re: lbtest

2002-02-18 Thread Richard Cowen

I got this from Joel Fuhrman of washington.edu.  I have scripts that work,
so I know the document speaks accurately.




--

 lbtest

Caveat
==

This document describes how to use the lbtest tool shipped with
the ADSM server. This tool is shipped as a service aid for
diagnosing hardware problems for SCSI libraries used by ADSM.
This document describes an old version of the tool and is
slightly out of date. At this point in time lbtest and the
other service aids are not supported for customer use. (i.e. If
you find bugs in this document or lbtest, we are not obligated
to fix them).

--

Invoking lbtest

The tool can be invoked as a command from the command line or
from within a shell script or REXX EXEC using this syntax:

  lbtest -f input-file -o output-file -d special-file -t -p special-file
  options:

-f input-file   This specifies the input file for batch mode.
 If a file is specified, lbtest will execute in
 batch mode and read input from this file.
 The default for this file is lbtest.in.

-o output-file  Specifies the output file.
  The default for this file is lbtest.out.

-d special-file Specifies the special file value to substitute
  on the open statement in the input file.

-p special-file Specifies the special file value to substitute
  on the passthru statement in the input file.

-t  Specifies trace will be invoked


e.g. lbtest -d /dev/lb0 -o test.output -f lbtest.mt

Note: If no parameters are specified, lbtest will operate in interactive
mode.

Interactive Mode

When lbtest is invoked with no -f, it defaults to running in
interactive, or manual, mode. This allows a developer to
interactively determine the kind of testing to be done. When in
interactive mode, lbtest provides a menu of functions that can
be performed that looks like this:


Main Menu:
==

  1: Manual test
  2: Batch  test
  9: Exit lbtest

Enter selection: 1

After selecting option 1 from the main menu, a manual test menu
appears that allows individual device driver functions to be
tested:


variables settings
=
special file: /dev/lb0
return_error_when_fail 1  exit_on_unexpected_result 0

manual test menu:
=
  1: set device special file
  2: display symbols
  3: set return error when fail
  4: set exit on unexpected result
  5: set buffer address
  6: open7: close
  8: ioctl return element count
  9: ioctl return all library inventory
 10: ioctl return library inventory
 11: ioctl move medium
 12: ioctl audit 13: ioctl extend
 14: ioctl retract   15: ioctl inquiry
 16: ioctl get IOCINFO   17: ioctl return error
 18: ioctl move slot to slot 19: ioctl move slot to drive
 20: ioctl move drive to slot21: ioctl move from empty
 22: ioctl move to full  23: ioctl move slot to ee
 24: ioctl move ee to slot   25: ioctl move from bad addr
 26: ioctl move to bad addr  27: ioctl invalid ioctl
 28: ioctl position to element   29: ioctl position to slot
 30: ioctl library info
 40: execute command
 88: trace menu
 99: return to main menu

Batch Mode

If a batch input file was specified with the -f option on
invocation, lbtest will run in batch mode, rather than
interactive. Batch input files can contain these kinds of
statements:

commands
comments
exit
if
passthru
pause
set
skip
symbols
system
type

Each type of statement is described next.

Comment

Any line beginning with #, or any blank line, is a comment and will
be ignored.

command Statements

Device driver function is exercised by command entries in the
input file. Command statements must be on a single line of the
input file. The data is case sensitive, but leading or embedded
blanks are ignored.

command  command-text result-text


This statement is used to execute a library command and to test
the command completion status for an expected result.


The command-text is used to specify which tape operation to
perform. The possible values for this field are described next.

open

SYNTAX:

  open device-file
   $D

  e.g. open /dev/lb0
   open $D

If the special file $D is specified, the -d value given on the
command line will be substituted for $D.



FUNCTION Tested:

open will call the device driver ddopen entry point and
attempt to open a medium changer device special file.

close

SYNTAX:

  close

  e.g. close

FUNCTION Tested:

close will call the device driver ddclose entry point and
close the medium changer device special file previously opened.


return_elem_count

SYNTAX:

  

Re: 1 Gb. Ethernet adapter

2002-02-22 Thread Richard Cowen

Are you sure it's a gigabit adapter?
Try a lscfg -vl entN

You should see something like:

This is for a Gigabit adapter.

$ lscfg -vl ent2
  ent2  30-68  Gigabit Ethernet-SX PCI Adapter (14100401)

      Network Address.............0002559A2F2F
      Displayable Message.........Gigabit Ethernet-SX PCI Adapter (14100401)
      EC Level....................E78971
      Part Number.................41L6396
      FRU Number..................07L8918
      Device Specific.(YL)........P2-I7/E1

And you can try this to see the possible values.

This is for a 10/100 adapter.

$ lsattr -R -l ent0 -a media_speed

10_Half_Duplex
10_Full_Duplex
100_Half_Duplex
100_Full_Duplex
Auto_Negotiation

 -Original Message-
 From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
 Sent: Friday, February 22, 2002 6:51 AM
 To: [EMAIL PROTECTED]
 Subject: 1 Gb. Ethernet adapter


 Hi *SM-ers!
 We have a H70 server with a 1 Gb. Ethernet adapter. I checked the setting
 and it's set to Auto_Negotiation. I know that this is not the
 correct value for optimal performance, but I see only the following
choices:
 10_Half_Duplex
 10_Full_Duplex
 100_Half_Duplex
 100_Full_Duplex
 Auto_Negotiation
 There are no 1000_Half_Duplex and 1000_Full_Duplex options available.
 What should I select here?



Re: Idea for a TSM feature

2002-02-27 Thread Richard Cowen

I am sure others will reply, but why not use large cached primary disk
pools?  If you need to, define a separate pool for each important client,
large enough to hold a full backup.  That way, all the active files will
usually be in the pool.  (Exceptions will break this; e.g., a big file that
changes every day may flush out an old active file.)

True?

(See TSM Guide for disadvantages of cached pools...)

While I suspect that having active/inactive be in different storage pools
would break something in TSM, maybe we could get a move data node=xxx
type=active command...

 -Original Message-
 From: James Thompson [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 27, 2002 11:44 AM
 To: [EMAIL PROTECTED]
 Subject: Idea for a TSM feature


 Thought I would throw an idea I had for a TSM feature out on
 the listserv and get some thoughts on whether this would be useful or not.

 The feature that I would like to see is the ability to create
 a special disk storage pool, that would only migrate inactive versions of
 backup objects to the next storage pool.  This would keep all the active
 versions on disk storage.



Re: Idea for a TSM feature

2002-02-27 Thread Richard Cowen

 -Original Message-
 From: James Thompson [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 27, 2002 12:51 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Idea for a TSM feature
...
 This would not break anything with TSM.  It is really simple
 to get in a state where your disk pool has only active versions and your
tape next
 storage pool has only inactive versions.  Just migrate data
 to tape and run a selective backup of the filesystem.  Voila, active
versions
 on disk, only inactive on tape.

(This is fun...)
(WARNING: thinking out loud.)

But no rules were applied to achieve this state, so it would not be a
configurable option.



So let's say the development folks added a storage pool (random access)
parameter:

MigrateActive   Yes/No  (default=Yes)

Then let us set that new parameter to No.
Overnight, we back up a bunch of stuff.  As long as it fits in the pool,
it remains on disk.
Next morning, we backup that stgpool to tape (for DR.)
The next night, a bunch of stuff changes on the client, and we do another
incremental.
Now, however, the pool is full, and TSM has to decide what to do (just like
in the cached stgpool case.)

I suppose we would want an initial attempt to locate the incoming file
already in the pool, and migrate that (now inactive) file to tape, thus
freeing up space for the (now active) file.

If that fails, either because this is a new file, or it has grown, we go
to the next step.

What might that be?  We could let the current logic apply, thus temporarily
disabling the MigrateActive=No.
If so, how long does that last? Just for the backup session?  Just for this
client (assuming more than one client lives in the pool)?  Until there is
room for the next incoming file?  Do we need high/low percentage parameters
for this special case (to avoid the start/stop tape operations you mention)
?
Or do we just start writing incoming data to primary storage tape?
Do we need some way to repopulate just the active data to that disk pool,
after we have increased its size?

Since we can't easily guarantee the pool will not fill, (I suppose we could
dynamically add volumes ala db triggering...), do we still want whatever
will fit as a performance hedge (some improvement is better than none)?  If
so, why wouldn't the current cache feature suffice?  And don't forget how
long it will take for us to debug Tivoli's code...

Other issues.

Since this applies only to disk pools, SAN-type sessions are not covered
(except SANergy), since they go direct to tape.


As data replication becomes more prevalent, the fast restore of an entire
client's active dataset becomes less (but not entirely) dependent on Backup
methodologies.

I agree with the Generate Backupset and Export Node filedata=backupactive
performance concerns.  But it seems to me that some of the most important
features of TSM (forever incremental and version control), almost dictate
that some server-based background data movement will be needed, the question
is how/when it occurs.  One reason for using Backupsets is to increase the
performance of restores, at the expense of Server pre-recover processing.



Re: How to use string operators in select statement - TSM.

2002-02-28 Thread Richard Cowen

select msgno from actlog where msgno=2565

 MSGNO
--
  2565
  2565
  2565
  2565
  2565
  2565
  2565
  2565
  2565
  2565
  2565

Mostly useless as is.
Maybe:

select date(date_time) as date, time(date_time) as time, substr(message,10,2) \
 as deleted from actlog where msgno=2565 \
 and cast((current_timestamp-date_time) minutes as integer) between 0 and 1440

  DATE TIME DELETED
--  --
2002-02-28 00:05:02 0

There are a lot of choices...
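For the question quoted below (capturing just the message number such as ANR2565I), substr does it server-side; on saved output the same can be done with sed. A sketch, with placeholder message text:

```shell
# Keep only the leading message id (ANR/ANE + number + severity letter).
printf '%s\n' 'ANR2565I Some message text here.' \
| sed 's/^\(AN[RE][0-9]\{1,\}[IWESD]\).*/\1/'
```

The same expression works on a dsmadmc -outfile capture, one message per line.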

 -Original Message-
 From: Pothula S Paparao [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 28, 2002 4:15 AM
 To: [EMAIL PROTECTED]
 Subject: How to use string operators in select statement - TSM.


 Hi TSM'ers
 The below is the message from select output (select * from actlog).
 All I want to know is whether or not it is possible to capture only
 the message number (e.g. ANR2565I) using a select query. I tried using string
 operations in the select command; it didn't help much. If any select expert
 has tried this, please let me know. I'm very much interested to know about it.



Re: Server Migration - AIX to HP

2002-03-01 Thread Richard Cowen

 -Original Message-
 From: Robin Sharpe [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 01, 2002 1:40 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Server Migration - AIX to HP
...
 BTW,  export/import isn't supported either, although it may work.
...

Is this from a Support call?

The MVS, NT, AIX, and HP-UX TSM Guides at least imply that cross-platform
import is supported:

Importing Data from Sequential Media Volumes
Before you import data to a new target server, you must:
1. Install TSM on the target server. This step includes defining disk space
for the database and recovery log.
For information on installing TSM, see Quick Start.
2. Define server storage for the target server.

Because each server operating system handles devices differently, TSM does
not export
server storage definitions. Therefore, you must define initial server
storage for the target
server. TSM must at least be able to use a drive that is compatible with the
export
media. This task can include defining libraries, drives, device classes,
storage pools, and
volumes. See the Administrator's Guide that applies to the target server.



Re: Server-side scripting: not supported?

2002-03-01 Thread Richard Cowen

 -Original Message-
 From: Williams, Tim P {PBSG} [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 01, 2002 2:41 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Server-side scripting: not supported?

 OK, a little deeper...
 we have 5 groups of stg areas, basically...
 Since TSM doesn't give you a good way to control (wait=yes
 is good with move data, but there's
 nothing for, say, migration) the movement from disk to tape
 we HAVE to use move data commands...

 We have over 100 disk vols.
 We simultaneously run move datas.
 If one area completes then, it goes onto the ba stg.
 ALL move datas and ba stgs have to complete (5 general areas) prior to the
db backup, etc
 A good scheduler, will get this done with ease.
 Scripting...that would be complex.

That good scheduler would have to query the TSM server to get state
variables to be non-fragile.  For example, before issuing move data
commands, you would want to know there were sufficient tape drives available
(maybe issuing dismounts against mounted-but-idle volumes.) Also, to be
dynamic, it would have to query TSM to get a list of volumes for each disk
storage pool as candidates.  And maybe parse the Description field from the
stgpool to get the copypool name for the Backup Stgpool command.

So why not script away, anyway?



Re: 3584 slots

2002-03-07 Thread Richard Cowen

From the Planning and Operator Guide (Third Edition, February 2001):

Cleaning Cartridge
To maintain the operating efficiency of the drive, IBM supplies a specially
labeled
IBM LTO Ultrium Cleaning Cartridge with each 3584 UltraScalable Tape
Library.
Each drive in the library determines when it needs to be cleaned, and alerts
the
library. The library uses the cleaning cartridge to automatically clean the
drive. For
more information about cleaning methods, see Drive Cleaning on page 19.

...

Cleaning Cartridge
With each 3584 Tape Library, IBM supplies a specially labeled IBM LTO
Ultrium
Cleaning Cartridge to clean the drive heads. The drive itself determines
when a
head needs to be cleaned. It alerts the library and the host's application
software.
Depending on which cleaning method you choose, the drive is automatically
cleaned or you are required to select menus to initiate cleaning (for
information
about cleaning methods, see Drive Cleaning on page 19).
Note: The volume serial number (VOLSER) on the cartridge's bar code label
must
begin with CLNI or the library treats the cleaning cartridge as a data
cartridge during an inventory.

Before a drive can be cleaned, you must ensure that an IBM LTO Ultrium
Cleaning
Cartridge is loaded in the library (to determine whether one or more
cleaning
cartridges are loaded, see Removing a Cleaning Cartridge from the Library
on
page 49). You can load multiple cleaning cartridges and store them in any
cartridge
storage slot except the slot reserved for the diagnostic cartridge (see
Non-Addressable Cartridge Storage Slot on page 32).

The 3584 Tape Library monitors the usage of all cleaning cartridges that are
present in the cartridge inventory. When a cleaning cartridge has exceeded
50
cleanings, the library displays the following message (where volser equals the
cartridge's VOLSER):
Replace volser

...

Addressable Cartridge Storage Slots
Addressable storage slots have both a physical address (such as F1,C05,R19)
and
a SCSI element (logical) address (such as 1112 (X'458')). They do not include
I/O
station slots or the non-addressable slots that are reserved for the
diagnostic
cartridges (see Non-Addressable Cartridge Storage Slot on page 32). A
library
frame contains a variable number of addressable storage slots, depending on
the
quantity of drives that are installed on the drive side. To determine the
quantity of
slots available for each frame, see Table 1 and Table 2 on page 3.
The 3584 UltraScalable Tape Library stores cleaning cartridges in
addressable
cartridge storage slots and as part of the normal inventory. If the
automatic cleaning
feature is enabled, the cleaning cartridges are not accessible by the host
software.

Non-Addressable Cartridge Storage Slot
The base frame (Model L32) contains one non-addressable cartridge storage
slot
for the diagnostic cartridge, which is used during service procedures. The
non-addressable slot has a physical address of F1,C01,R01, but does not have
a
SCSI element address. There are no non-addressable slots in an expansion
frame.
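
Per the quoted guide text, the VOLSER prefix alone determines whether the 3584 inventories a cartridge as a cleaning cartridge. A trivial sketch of that rule (function name is mine, for illustration only):

```python
def cartridge_kind(volser):
    """Classify a 3584 cartridge by its bar code VOLSER, per the quoted
    Planning and Operator Guide: a VOLSER beginning with CLNI marks a
    cleaning cartridge; anything else is inventoried as data."""
    return "cleaning" if volser.upper().startswith("CLNI") else "data"

print(cartridge_kind("CLNI01"))  # cleaning
print(cartridge_kind("A00020"))  # data
```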

 -Original Message-
 From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 07, 2002 9:38 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3584 slots


 Hi Daniel!
 Aren't you confusing the 3584 with a 3494? As far as I can find in the
 doc's, there is no fixed cleaner slot in a 3584 and I don't
 even have a CE cartridge!



Re: virtual volumes?

2002-03-14 Thread Richard Cowen

I believe you need the copygroup defined as type=archive, not the default
type=backup...
(maybe not the only thing missing..)

 -Original Message-
 From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 13, 2002 9:08 PM
 To: [EMAIL PROTECTED]
 Subject: virtual volumes?


 I could really use some help on this one...

 TSMv4.1.3 server on S/390
 TSMv4.1.5 server on AIX-RS/6000

 I set up server to server communications.  All works well.
 I set up virtual volumes from the TSM RS/6000 to the TSM  S/390 server.
 All looked well until I noticed that the utilization on the tape pool I
thought I was writing to never
 changed.  Upon further investigation, I noticed I was writing to a disk
pool.  Can't
 figure out how it's happening. e.g. I direct data (backup db type=full
dev=vvol) to the
 device class vvol, which of course is devtype server, it's
 going to a storage pool that writes to disk.  Unfortunately,
 this is not how I set this up.  I suspect it has something to do with my
copygroup
 or mgmtclass... here's the output of 4 queries... I know it's
 quite a bit to look at, but I'd appreciate if someone could
 shed some light on this one.  I'm a little confused.

 tsm: ADSM-ML-WSTPq co virtual-vols virtual-policy f=d

 Policy Domain Name: VIRTUAL-VOLS
Policy Set Name: VIRTUAL-POLICY
Mgmt Class Name: VV-DEFAULT
Copy Group Name: STANDARD
Copy Group Type: Backup
=
   Versions Data Exists: 2
  Versions Data Deleted: 1
  Retain Extra Versions: 30
Retain Only Version: 60
  Copy Mode: Modified
 Copy Serialization: Shared Static
 Copy Frequency: 0
   Copy Destination: VLDB-POOL
 Last Update by (administrator): TGADSJW
  Last Update Date/Time: 11/14/2001 11:22:08
   Managing profile:
 



Re: virtual volumes?

2002-03-14 Thread Richard Cowen

 -Original Message-
 From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 14, 2002 10:41 AM
 To: [EMAIL PROTECTED]
 Subject: Re: virtual volumes?
 
 
 I don't think that's the case.  When I do a Q occ nodename,
 it displays the data as type ARCH, even though it's a backup
 coming from the source server.  This I expect, as all of
 the doc states: the target server views the data, whether it's an archive
or backup, as an archive.

Yes, I was speaking of the copygroup on the target server (ADSM-ML-WSTP is
the target server?)

From the Guide (aix 4.2), pages 348pp:

Setting Up Source and Target Servers for Virtual Volumes

In the source/target relationship, the source server is defined as a client
node of the target
server. To set up this relationship, a number of steps must be performed at
the two servers.
In the following example (illustrated in Figure 60 on page 350), the source
server is named
DELHI and the target server is named TOKYO.
¶ At DELHI:
1. Define the target server:
v TOKYO has a TCP/IP address of 9.115.3.221:1845
v Assigns to TOKYO the password CALCITE.
v Assigns DELHI as the node name by which the source server DELHI will be
known at the target server. If no node name is assigned, the server name of
the
source server is used. To see the server name, you can issue the QUERY
STATUS command.
2. Define a device class for the data to be sent to the target server. The
device type for
this device class must be SERVER, and the definition must include the name
of the target server.
¶ At TOKYO:
Register the source server as a client node. The target server can use an
existing policy
domain and storage pool for the data from the source server. However, you
can define a
separate management policy and storage pool for the source server. Doing so
can
provide more control over storage pool resources.
1. Use the REGISTER NODE command to define the source server as a node of
TYPE=SERVER. The policy domain to which the node is assigned determines
where
the data from the source server is stored. Data from the source server is
stored in the
storage pool specified in the archive copy group of the default management
class of
that domain.
2. You can set up a separate policy and storage pool for the source server.
a. Define a storage pool named SOURCEPOOL:
define stgpool sourcepool autotapeclass maxscratch=20
b. Copy an existing policy domain STANDARD to a new domain named
SOURCEDOMAIN:
copy domain standard sourcedomain
c. Assign SOURCEPOOL as the archive copy group destination in the default
management class of SOURCEDOMAIN:
update copygroup sourcedomain standard standard type=archive
destination=sourcepool  <==
After issuing these commands, ensure that you assign the source server to
the new
policy domain (UPDATE NODE) and activate the policy.
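
The target-server commands from the Guide example above can be collected into a single dsmadmc macro. A sketch that emits them, where all the names (delhi, calcite, sourcepool, sourcedomain, autotapeclass) are just the Guide's placeholders:

```python
def target_setup_macro(node, password, stgpool, domain, devclass):
    """Emit the target-server setup commands from the Guide's virtual
    volumes example as one macro. All arguments are placeholder names."""
    return "\n".join([
        f"register node {node} {password} type=server",
        f"define stgpool {stgpool} {devclass} maxscratch=20",
        f"copy domain standard {domain}",
        f"update copygroup {domain} standard standard type=archive "
        f"destination={stgpool}",
        f"activate policyset {domain} standard",
    ])

print(target_setup_macro("delhi", "calcite", "sourcepool",
                         "sourcedomain", "autotapeclass"))
```

Note the update copygroup uses type=archive, which is the piece that was missing in the original problem: virtual volume data always lands in the archive copy group destination of the source node's default management class.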



Re: Show Libr output

2002-05-10 Thread Richard Cowen

I see different output.  AIX 4.3.3, TSM 5.1.

tsm: ORCA> show libr
MMSV-libList: head=31573A28, tail=31574268

Library L3494 (type 349X):
  reference count = 0, online = 1, borrowed drives = 0, update count = 0
   basicCfgBuilt = 1, libInfoBuild = 1, definingPathToLibrary = 0
   addLibPath = 0, driveListBusy = 0
  libvol_lock_id=0, libvolLock_count=0, SeqnBase=0
  Library is shared.
  library extension at 31572A08
  autochanger list:
dev=/dev/lmcp0, busy=0, online=1

  drive list:
DRIVE_3590_D0:
  Device Class = 3590,
  RD Capabilities = 00F0,
  (3590E-C,3590E-B,3590C,3590B),
  WR Capabilities = 00C0,
  (3590E-C,3590E-B),
  Drive state: EMPTY, Drive Descriptor state = NO PENDING CHANGES,
  online = 1, polled = 0,
  Inquiry String = ,
  Device Type From Inquiry =,
  allocated to (), volume owner is (),
  priorReadFailed = 0, priorWriteFailed = 0, priorLocateFailed = 0,
  update count = 0, checking = 0,
  offline time/date: 00:00:00 1900/00/00
349X-specific fields:
  devNo=12794240, isScsi=1
  last dismounted volume

Library L3583 (type SCSI):
  reference count = 2, online = 0, borrowed drives = 0, update count = 0
   basicCfgBuilt = 1, libInfoBuild = 0, definingPathToLibrary = 0
   addLibPath = 0, driveListBusy = 0
  libvol_lock_id=0, libvolLock_count=0, SeqnBase=0
  Library is shared.
  library extension at 31572BF8
  autochanger list:
dev=/dev/smc0, busy=1, online=1

  drive list:
DRIVE_3580_D0:
  Device Class = LTO,
  RD Capabilities = 0003,
  (ULTRIUMC,ULTRIUM),
  WR Capabilities = 0003,
  (ULTRIUMC,ULTRIUM),
  Drive state: UNKNOWN, Drive Descriptor state = NO PENDING CHANGES,
  online = 0, polled = 0,
  Inquiry String = ,
  Device Type From Inquiry =,
  allocated to (), volume owner is (),
  priorReadFailed = 0, priorWriteFailed = 0, priorLocateFailed = 0,
  update count = 0, checking = 0,
  offline time/date: 00:00:00 1900/00/00
  Clean Freq(GB)=-1, Bytes Proc(KB)=0, needsClean=0.
SCSI-specific fields:
  None


===

AIX 4.3.3, TSM 4.2.1.9:

tsm: ORCA> show libr
MMSV-libList: head=326BF360, tail=326C1CF0

Library L3494 (type 349X):
  reference count = 0, online = 1, borrowed drives = 0, update count = 0
   driveListBusy = 0
   libvol_lock_id=0, libvolLock_count=0, SeqnBase=0
  Library is shared.
  library extension at 326BE300
  autochanger list:
dev=/dev/lmcp0, busy=0, online=1

  drive list:
DRIVE_3590_D0:
  Device Class = 3590,
  RD Capabilities = 00F0,
  (3590E-C,3590E-B,3590C,3590B),
  WR Capabilities = 00C0,
  (3590E-C,3590E-B),
  Drive state: EMPTY, Drive Descriptor state = NO PENDING CHANGES,
  online = 1, polled = 0,
  allocated to (), volume owner is (),
  priorReadFailed = 0, priorWriteFailed = 0, priorLocateFailed = 0,
  update count = 0, checking = 0,
  offline time/date: 00:00:00 1900/00/00
349X-specific fields:
  devNo=12794240, isScsi=1
  last dismounted volume

Library L3583 (type SCSI):
  reference count = 0, online = 0, borrowed drives = 0, update count = 0
   driveListBusy = 0
   libvol_lock_id=0, libvolLock_count=0, SeqnBase=0
  Library is shared.
  library extension at 326BE7A0
  autochanger list:
dev=/dev/smc0, busy=0, online=0

  drive list:
DRIVE_3580_D0:
  Device Class = LTO,
  RD Capabilities = 0003,
  (ULTRIUMC,ULTRIUM),
  WR Capabilities = 0003,
  (ULTRIUMC,ULTRIUM),
  Drive state: UNKNOWN, Drive Descriptor state = NO PENDING CHANGES,
  online = 0, polled = 0,
  allocated to (), volume owner is (),
  priorReadFailed = 0, priorWriteFailed = 0, priorLocateFailed = 0,
  update count = 0, checking = 0,
  offline time/date: 00:00:00 1900/00/00
  Clean Freq(GB)=-1, Bytes Proc(KB)=0, needsClean=0.
SCSI-specific fields:
  None



Re: mtlib command

2002-05-20 Thread Richard Cowen

mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD
Device Data:
   mounted volser.none.
   device category012E
   device state...Device installed in Library.
  Device available to Library.
  ACL is installed.
  Auto Fill is enabled.
   device class...3590-E1A
   extended device status.00

$ mtlib -l /dev/lmcp0 -D
  0, 00C33980 003590E1A00
  1, 00C39160 003590E1A01

$ mtlib -l /dev/lmcp0 -D -E
 Type   Mod  Serial #   Devnum   Cuid  Device  VTS Library
003590  E1A  13-C3398  00C339801  0
003590  E1A  13-C3916  00C391602  0

$  mtlib -l /dev/lmcp0 -qM
IBM016 C39160

$ tapeutil -f /dev/rmt1 inquiry
Issuing inquiry...

Inquiry Data,  Length 56

0 1  2 3  4 5  6 7  8 9  A B  C D  E F   0123456789ABCDEF
0000 - 0180 0302 3300 1000 4942 4D20 2020 2020  [...3...IBM ]
0010 - 3033 3539 3045 3141 2020 2020 2020 2020  [03590E1A]
0020 - 4533 3731 3133 3030 3030 3030 3043   [E37113000C33]
0030 - 3938 2030 0500 0181  [98 0...]

$ lscfg -vl rmt1
  DEVICELOCATION  DESCRIPTION

  rmt1  14-08-01  IBM 3590 Tape Drive and Medium
  Changer (FCP)

ManufacturerIBM
Machine Type and Model..03590E1A
Serial Number...000C3398
Device Specific.(FW)E371
Loadable Microcode LevelA0B00E26

$ tapeutil -f /dev/rmt2 fuser
Device is currently open by process id 13474
$ ps -ef |grep dsms
root 13474 1   1 08:24:41  -  1:08 ./dsmserv
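
Matching an rmt device to a physical drive comes down to pulling the serial number out of the `lscfg -vl rmtN` output shown above and comparing it against the serial column of `mtlib -D -E`. A sketch of the parsing step (the sample text is the lscfg output from this message; a real script would loop over the devices from `lsdev -Cc tape`):

```python
# Sample `lscfg -vl rmt1` detail section, as shown above.
SAMPLE_LSCFG = """\
ManufacturerIBM
Machine Type and Model..03590E1A
Serial Number...000C3398
Device Specific.(FW)E371
"""

def serial_number(lscfg_output):
    """Pull the serial number out of `lscfg -vl rmtN` output, tolerating
    the dot padding AIX inserts between field name and value."""
    for line in lscfg_output.splitlines():
        if line.startswith("Serial Number"):
            return line[len("Serial Number"):].lstrip(".")
    return None

print(serial_number(SAMPLE_LSCFG))  # 000C3398
```

The mtlib serial (13-C3398 above) is then a suffix match against the lscfg value, which is enough to pin rmt1 to a specific drive in the 3494.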

-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 20, 2002 10:46 AM
To: [EMAIL PROTECTED]
Subject: mtlib command


Good day all,

Somewhere I had documented an mtlib command that could be used to query
information about the drives attached to the system. With the changeover of
TSM servers it's hard to figure out what drive is what. I have identified a
few by looking in the side door of the 3494 when a tape got mounted.
Unfortunately I'll never be able to do this with the rest and have misplaced
my mtlib commands.

Can anyone send me the command that gives info about the drive itself, such as
serial number, so I can match rmt(whatever) to it. All I can find is mtlib
-l /dev/lmcp0 -qD -f/dev/rmt6, and this one doesn't do it for me. I've got a
bad drive and need to locate it this way.

Thanks,
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Problem deleting a storage pool volumes via move data

2001-08-29 Thread Richard Cowen

see apar IC29607.

fixed, supposedly, in 4.1.4.0 maintenance level.



Re: Problem with 3494 / 3590 drives only partly seen by library

2001-08-31 Thread Richard Cowen

Works here.  (same environment, but direct FC attached)
Maybe your SAN DG needs to have its scsi rescanned.  You could try delete
the drives under TSM, rmdev -dl under AIX, rescan at the SAN DG, cfgmgr,
define the drives under TSM, and try again?

# lscfg -vl rmt4
  DEVICELOCATION  DESCRIPTION

  rmt4  2A-08-01  IBM 3590 Tape Drive and Medium
  Changer (FCP)

ManufacturerIBM
Machine Type and Model..03590E1A
Serial Number...000C3398
Device Specific.(FW)E304
Loadable Microcode LevelA0B00E33

# mtlib -l /dev/lmcp0 -D -E
 Type   Mod  Serial #   Devnum   Cuid  Device  VTS Library
003590  E1A  13-C3398  00C339801  0
003590  E1A  13-C3916  00C391602  0

# mtlib -l /dev/lmcp0 -qV -V YZA456
Volume Data:
   volume state.00
   logical volume...No
   volume class.3590 1/2 inch cartridge tape
   volume type..HPCT 320m nominal length
   volser...YZA456
   category.012C
   subsystem affinity...01 02 00 00 00 00 00 00
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00

# mtlib -l /dev/lmcp0 -f /dev/rmt4 -m -V YZA456
# mtlib -l /dev/lmcp0 -qM
YZA456 C33980
# mtlib -l /dev/lmcp0 -f /dev/rmt4 -d -V YZA456
# mtlib -l /dev/lmcp0 -qM
#

-Original Message-
From: Rick Un [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 30, 2001 11:22 AM
To: [EMAIL PROTECTED]
Subject: Problem with 3494 / 3590 drives only partly seen by library


Ok, folks got a puzzle for you.   here's the environment:

H80 / AIX 4.3.3 (relatively latest patches)
3494 with 3590 type E drives  (another ADSM and TSM server are also using
this library but different drives)
3590 drives are attached via a SAN datagateway

Ok, the drives are available and working because I'm able to do the
following commands:

# mtlib -l /dev/lmcp0 -D -E
 Type   Mod  Serial #   Devnum   Cuid  Device  VTS Library
003590  E1A  13-57297  005729701  0
003590  E1A  13-56981  005698102  0
003590  E1A  13-56977  005697703  0
003590  E1A  13-56746  005674604  0
003590  E1A  13-F0336  00F033605  0
003590  E1A  13-F0550  00F055006  0
003590  E1A  13-34252  003425207  0
003590  E1A  13-34228  003422808  0   (presently testing this one)
003590  E1A  13-F2074  00F207409  0
003590  E1A  13-F1892  00F18920   10  0

#lsdev -Cc tape
rmt0  Available 40-60-00-5,0 SCSI 4mm Tape Drive
rmt1  Available 2A-08-00-0,2 IBM 3590 Tape Drive and Medium Changer
(here's the matching logical device being tested)
rmt2  Available 2A-08-00-0,4 IBM 3590 Tape Drive and Medium Changer
rmt3  Available 2A-08-00-0,6 IBM 3590 Tape Drive and Medium Changer
rmt4  Available 2A-08-00-0,8 IBM 3590 Tape Drive and Medium Changer
lmcp0 Available  LAN/TTY Library Management Control Point

(now I load a tape, write to it and unload the tape here)
#mtlib -l /dev/lmcp0 -x 342280 -m -V A00020
#tar -cvf /dev/rmt1 /etc (note: rmt1 maps to serial number 342280
checked with the lscfg -l rmt1 -v command)
#mtlib -l /dev/lmcp0 -x 342280 -d -V A00020

So, you may be a bit curious why I use the -x flag here instead of the 
-f /dev/rmt1 flag!
Well, that's what I've been wondering for the past week and trying to get
IBM to answer.

(This command doesn't work:)
#mtlib -l /dev/lmcp0 -f /dev/rmt1 -m -V A00020
 Query  operation Error - Device is not in library.

So, I can actually simplify the problem even further. Can anybody on this god
forsaken planet
tell me what the hell is wrong (or at least why the first command succeeds
but the 2nd cmd fails)?
#mtlib -l /dev/lmcp0 -x 342280 -q D
Device Data:
   mounted volser.none.
   device category01F6
   device state...Device installed in Library.
  Device available to Library.
  ACL is installed.
   device class...3590-E1A
   extended device status.Suppress unsolicited interrupts
#mtlib -l /dev/lmcp0 -f /dev/rmt1 -m -V A00020
 Query  operation Error - Device is not in library.



Re: Import node from a shared library..

2001-09-25 Thread Richard Cowen

Is this a SCSI shared library, and are you doing the import on the client TSM
server?
If so, you may want to check:

APAR= IC31078  SER=IN INCORROUT
WHEN USING A LIBTYPE=SHARED LIBRARY, YOU ARE UNABLE TO RUN AN
EXPORT, IMPORT, DB RESTORE OR RESTORE BACKUPSET.

 -Original Message-
 From: Tom Tann{s [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, September 25, 2001 9:28 AM
 To: [EMAIL PROTECTED]
 Subject: Import node from a shared library..


 Hello!

 I'm not very experienced in using shared libraries, so I
 might have missed
 something here...

 Anyway.

 Export works fine.

 Import, however, fails with the messages





Re: ibm3493

2001-10-04 Thread Richard Cowen

Page 204ff of the Operator Guide (GA32-0280-13).

Note: You can also view LAN information from the 3494 Tape Library
Specialist (see 3494 Tape Library Specialist Features and Functions on
page 255). The LAN options option allows the following operations:

v Add LAN host
v Delete LAN host
v Update LAN host information
v LM LAN information



LM LAN Information
The Library LAN Information window (Figure 139) supplies the LAN information
about the library that the host system requires to communicate with the
library.

Note: If a Model HA1 is installed, information for both Library Managers is
shown.
An asterisk (*) indicates that the item is for the local Library Manager.

Library Transaction Program Name
Specifies the name of the LAN transaction program that runs on the Library
Manager to receive data from the host.

Library Network ID Specifies the name of the remote network that the
adjacent control point (the Library Manager) resides in.

The Common Programming Interface (CPI) - Communications partner_LU_name of
the host, consists of the Library Manager network identifier and the Library
Manager location name. For example, if the Library Manager partner_LU_name
is USIBMSU.LIBMGRC3, then the Library Manager Network ID is USIBMSU.

Library Location Name Specifies the remote location name (of the 3494
Library Manager) that the host communicates with.

The Common Programming Interface (CPI) - Communications partner_LU_name of
the Library
Manager, consists of the network identifier and the location name. For
example, if the Library Manager
partner_LU_name is USIBMSU.LIBMGRC3, then the Library Manager Location Name
is LIBMGRC3.

Library Adapter Address Specifies the LAN adapter address of the remote
controller (the Library Manager). This can be the Library Manager adapter
card universally
administered address (UAA), such as 10005A8A5E75, or a locally administered
address (LAA), such as 40003494001A.

Library IP Address The Library Manager IP Address is the unique Internet
address assigned to the 3494 Library Manager.

Library Host Name The Library Name is the Hostname defined in the TCP/IP
network for the Library Manager. In Figure 139 on page 217, libmgrc3.ibm.com
is a TCP/IP Hostname.

 -Original Message-
 From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, October 04, 2001 1:48 PM
 To: [EMAIL PROTECTED]
 Subject: ibm3493


 Anyone know how to change the IP on an ibm3494 tape library? Currently
 our TSM server was on 10.100.2.1/16 and we've moved it to
 217.16.217.1/24.. I need to update the tape library and change it's IP
 to that subnet, then the TSM server to look to that new IP instead of
 the old one.

 I'm not really sure how to do that on an ibm3494..

 Gerald




Re: tsm 4.2.1 licensing

2001-10-18 Thread Richard Cowen

tsm 4.2.1.2, AIX:
10/09/01   16:04:59  ANR2017I Administrator ADMIN issued command:
REGISTER
  LICENSE file=50mgsyslan.lic
10/09/01   16:04:59  ANR2852I Current license information:
10/09/01   16:04:59  ANR2853I New license information:
10/09/01   16:04:59  ANR2827I Server is licensed to support Managed
System for
  LAN for a quantity of 50.
10/09/01   16:05:13  ANR2017I Administrator ADMIN issued command:
REGISTER
  LICENSE file=10mgsyslan.lic
10/09/01   16:05:13  ANR2852I Current license information:
10/09/01   16:05:13  ANR2827I Server is licensed to support Managed
System for
  LAN for a quantity of 50.
10/09/01   16:05:13  ANR2853I New license information:
10/09/01   16:05:13  ANR2827I Server is licensed to support Managed
System for
  LAN for a quantity of 60.
10/09/01   16:05:21  ANR2017I Administrator ADMIN issued command:
REGISTER
  LICENSE file=5mgsyslan.lic
10/09/01   16:05:21  ANR2852I Current license information:
10/09/01   16:05:21  ANR2827I Server is licensed to support Managed
System for
  LAN for a quantity of 60.
10/09/01   16:05:21  ANR2853I New license information:
10/09/01   16:05:21  ANR2827I Server is licensed to support Managed
System for
  LAN for a quantity of 65.
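
The log above shows that license registrations are cumulative: 50, then +10, then +5, with each ANR2827I message reporting the running total. A small sketch that confirms the arithmetic by taking the last reported quantity (the log text is condensed from the messages above):

```python
import re

# Condensed ANR2827I lines from the activity log excerpt above.
LOG = """\
ANR2827I Server is licensed ... for a quantity of 50.
ANR2827I Server is licensed ... for a quantity of 60.
ANR2827I Server is licensed ... for a quantity of 65.
"""

def latest_quantity(log):
    """The last ANR2827I quantity wins: registrations add up, so the
    final message reflects the running total (50 + 10 + 5 = 65 here)."""
    qty = None
    for m in re.finditer(r"quantity of (\d+)", log):
        qty = int(m.group(1))
    return qty

print(latest_quantity(LOG))  # 65
```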

 -Original Message-
 From: Jim Sporer [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, October 18, 2001 4:49 PM
 To: [EMAIL PROTECTED]
 Subject: Re: tsm 4.2.1 licensing


 I used  a 'reg lic file=50mssyslan.lic number=20'  trying to get 1000
 clients license.  I then did a 'query license' and it told me
 I only had 50
 clients licensed.   I think there is something messed up with
 the licensing
 for version 4.2.1.
 Jim Sporer

 At 12:32 PM 10/18/2001 -0700, you wrote:
 Same result..
 
 tsm: TSM> reg lic file=mgsyslan.lic number=1
 ANR2852I Current license information:
 ANR9634E REGISTER LICENSE: No license certificate files were
 found with
 the ./mgsyslan.lic specification.
 ANS8001I Return code 3.
 
 tsm: TSM> reg lic file=1mgsyslan.lic
 ANR2852I Current license information:
 ANR2853I New license information:
 
 Activity log is complaining about the license files:
 
 10/18/01   12:22:14  ANR2017I Administrator GWICHMAN
 issued command:
 REGISTER
LICENSE file=mgsyslan.lic number=1
 
 10/18/01   12:22:14  ANR2852I Current license information:
 
 10/18/01   12:22:14  ANR9634E REGISTER LICENSE: No license
 certificate files
were found with the ./mgsyslan.lic
 specification.
 10/18/01   12:22:14  ANR2017I Administrator GWICHMAN
 issued command:
 ROLLBACK
 10/18/01   12:22:46  ANR2017I Administrator GWICHMAN
 issued command:
 REGISTER
LICENSE file=1mgsyslan.lic
 
 10/18/01   12:22:46  ANR2852I Current license information:
 
 10/18/01   12:22:46  ANR9626E Invalid license certificate file:
 
./1mgsyslan.lic.
 
 10/18/01   12:22:46  ANR2853I New license information:
 
 10/18/01   12:23:29  ANR2017I Administrator GWICHMAN
 issued command:
 QUERY
 more...   (ENTER to continue, 'C' to cancel)
 
ACTLOG
 
 Perhaps I mishandled the upgrade. I went from 4.1.2 -> 4.2.0 -> 4.2.1
 
 
 
 Gerald Wichmann
 System Engineer
 StorageLink
 408-844-8893 (v)
 408-844-9801 (f)
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]
 On Behalf Of
 Joshua S. Bassi
 Sent: Thursday, October 18, 2001 1:21 PM
 To: [EMAIL PROTECTED]
 Subject: Re: tsm 4.2.1 licensing
 
 Instead of doing it the old way, licensing now works by doing:
 
 'reg lic file=mgsyslan.lic number=1' (or however many you
 are trying to
 license.
 
 
 --
 Joshua S. Bassi
 Independent IT Consultant
 IBM Certified - AIX/HACMP, SAN, Shark
 Tivoli Certified Consultant- ADSM/TSM
 Cell (408)(831) 332-4006
 [EMAIL PROTECTED]
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]
 On Behalf Of
 Gerald Wichmann
 Sent: Thursday, October 18, 2001 11:19 AM
 To: [EMAIL PROTECTED]
 Subject: tsm 4.2.1 licensing
 
 Is there a trick to licensing or why does tsm come back with
 this when I
 attempt to add a license:
 
 tsm: TSM> reg lic file=1mgsyslan.lic
 ANR2852I Current license information:
 ANR2853I New license information:
 
 tsm: TSM>
 
 And of course, no license gets added.. ?
 
 Gerald Wichmann
 System Engineer
 StorageLink
 408-844-8893 (v)
 408-844-9801 (f)




Re: 4.2.1.7 Server and DRM

2001-11-19 Thread Richard Cowen

 -Original Message-
 From: Magura, Curtis [mailto:[EMAIL PROTECTED]]
 Sent: Monday, November 19, 2001 9:51 AM
 To: [EMAIL PROTECTED]
 Subject: 4.2.1.7 Server and DRM


 We are failing on the DRM license which we really did purchase.
 Don't seem to have the DRM.lic file to correct this. We went from 4.1.x.x
 to 4.2.0 to 4.2.0.1 to 4.2.1.7. Somewhere in all of that it appears we
 have lost the file...

Known problem:

Solution: uninstall tivoli.xxx.license.xxx, then reinstall.
Worked for me.



Re: copying scripts between tsm servers

2001-11-30 Thread Richard Cowen

 -Original Message-
 From: Steve Bennett [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, November 29, 2001 6:10 PM
 To: [EMAIL PROTECTED]
 Subject: copying scripts between tsm servers

 I need to copy dozens of scripts from one TSM 4.1.3 server to another.
 Any suggestions?


source:
query script abc format=raw outputfile=abc.raw

dest:
define script abc file=abc.raw

or make a macro:
define macro QSCR query script $1 format=raw outputfile=$1.raw

create a macro to run it for each script:
select distinct 'RUN QSCR ',name from scripts > qscr.mac1

get rid of heading:

grep ^RUN qscr.mac1 > qscr.mac2

and run it:

MACRO qscr.mac2

Do something similar to load them into the new server.

Note that the output from the Query script xxx outputfile=xxx.raw is in the
tsm server directory unless you provide a complete pathname...

I would create a Perl script that did the whole thing at once, assuming you
have connectivity to both servers...
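
A sketch of what that whole-thing-at-once script could generate, in Python rather than Perl. It only builds the two macros (export on the source, define on the destination); actually running them through dsmadmc is left out, and the output directory is a hypothetical choice to avoid the server-directory pitfall noted above:

```python
def export_macro(script_names, outdir="/tmp"):
    """One QUERY SCRIPT ... FORMAT=RAW line per script, each with a full
    output pathname so the .raw files don't land in the server directory."""
    return "\n".join(
        f"query script {n} format=raw outputfile={outdir}/{n}.raw"
        for n in script_names)

def import_macro(script_names, outdir="/tmp"):
    """Matching DEFINE SCRIPT macro to run against the destination server."""
    return "\n".join(
        f"define script {n} file={outdir}/{n}.raw"
        for n in script_names)

# Script names would come from: select distinct name from scripts
print(export_macro(["DAILY_BACKUP", "DB_CHECK"]))
print(import_macro(["DAILY_BACKUP", "DB_CHECK"]))
```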



Re: Migration/storage pool backup ?'s

2002-01-04 Thread Richard Cowen

A small script to issue the appropriate MOVE DATA commands may be easier
to control than auto migration.

Foreach stgpool where device_class_name=DISK and stgpool_type=PRIMARY
foreach volume where pct_utilized > 50 and stgpool_name=$stgpool
move data $volume_name to next_stgpool

example: (on unix, grep for MOVE, munge out the extra spaces and send thru a
dsmadmc command.)

select 'MOVE DATA ', volume_name, -
'STGPOOL=', nextstgpool, 'WAIT=YES' from volumes, stgpools -
where stgpools.stgpool_name=volumes.stgpool_name and
stgpools.stgpool_name in -
(select stgpool_name from volumes -
where volume_name in -
(select volume_name -
from volumes where pct_utilized > 50 and stgpool_name
in -
(select stgpool_name from stgpools where
pooltype='PRIMARY' and devclass='DISK') -
) -
)

output:
Unnamed[1]
Unnamed[3]NEXTSTGPOOL   Unnamed[5]
--
------
MOVE DATA /opt/tivoli/tsm/server/bin/sbkup00.dsm
STGPOOL=  LTOPOOL   WAIT=YES
MOVE DATA /tsmdata1/datavol3
STGPOOL=  LTOPOOL   WAIT=YES
MOVE DATA /usr/local/tsm/datavol01.dsm
STGPOOL=  LTOPOOL   WAIT=YES
MOVE DATA /usr/local/tsm/datavol02.dsm
STGPOOL=  LTOPOOL   WAIT=YES


 -Original Message-
 From: Jonathan Aberle [mailto:[EMAIL PROTECTED]]
 Sent: Friday, January 04, 2002 1:11 PM
 To: [EMAIL PROTECTED]
 Subject: Migration/storage pool backup ?'s


 Hello,

 I am experiencing some problems with migration and storage pool backups.
My situation is that I have a small disk storage pool that is primarily used
for oracle archive logs that archive at any time during the  day.  During
the hours of 6-12, we perform offsite copies of the tape and disk  storage
pools.
 During this time, no tape drives are available for client  backups so I
need to make sure that the disk storage pool is completely  migrated to tape
prior to starting the offsite copies.  Here is what I do:



Re: Handling spikes in storage transfer

2002-01-14 Thread Richard Cowen

You could also try a query content on some of your disk storage pools,
assuming thats where the node's data is going first.  Why can't you check
the client's log?

query content volume-name node=node-name filespace=fs-name

 -Original Message-
 From: Cook, Dwight E (SAIC) [mailto:[EMAIL PROTECTED]]
 Sent: Monday, January 14, 2002 11:13 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Handling spikes in storage transfer


 Try this... but alter the 1 to fit your need... maybe need
 2 etc...

 select * from adsm.backups where (node_name='YOUR_NODE' and
 cast((current_timestamp-backup_date)day as decimal(18,0)) > 1)

 -Original Message-
 From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
 Sent: Monday, January 14, 2002 10:02 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Handling spikes in storage transfer


 I already tried that. The information it gives isn't detailed
 enough. It
 just tells me about the filespaces.

 I need to know specifics, such as the names/sizes of the
 files/objects in
 the file spaces.

 Anybody have any sql to do such ?

 Thanks, anyway !
 --
 --
 
 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807




 Cook, Dwight E (SAIC) [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 01/14/2002 10:29 AM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: Handling spikes in storage transfer


 Do a q occ nodename and look for what file systems are out on your
 diskpool in great quantity.
 That is, if you send all data first to a diskpool and then
 bleed it off to
 tape (daily).
 That will give you an idea of what file systems are sending
 the most data,
 currently.
 Then you may perform something like a show version nodename
 filespacename to see each specific item.
 you might note the filecount in the q occ listing
 to see how
 much will be displayed in the show version command.

 hope this helps...
 later,
 Dwight

 -Original Message-
 From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
 Sent: Monday, January 14, 2002 8:47 AM
 To: [EMAIL PROTECTED]
 Subject: Handling spikes in storage transfer


 I have an SGI client node that, while normally sending <= 1.5GB
 of data, is
 now sending 36GB+.

 Without accessing the client itself, how can I find out what
 is causing
 this increase in TSM traffic ?

 I have contacted the client owner, but their response is
 taking too long
 and this spike is wreaking havoc on TSM.

 --
 --
 
 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807




Re: Handling spikes in storage transfer

2002-01-15 Thread Richard Cowen

Do you have a platform-compatible box?  Set up a dsm.sys/dsm.opt with that
node name, and do something like:

dsmc query backup /filesystem-name/* -fromdate=1/1/2002 -todate=1/2/2002

Add -fromtime -totime if desired.

 -Original Message-
 From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 15, 2002 8:49 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Handling spikes in storage transfer


 Did that. TERSE is on and the log contains only the summary of what is
 going on. No file info.

 Don't have access to this system. It is an SGI IRIX box and
 the owners are very particular about who accesses it.




Re: Identifying client schedule mode.

2002-01-24 Thread Richard Cowen

As was suggested, check the activity log.  You will have to associate the
messages yourself; TSM does not do a very good job of tying messages logically
together.  The session below that has a summary record is 955.

01/23/2002 10:00:01 ANR2561I Schedule prompter contacting NORBERTSTA
(session 952) to start a scheduled operation.

01/23/2002 10:00:03 ANR0406I Session 953 started for node NORBERTSTA (SUN
SOLARIS) (Tcp/Ip 10.x.x.x (43321)).
01/23/2002 10:00:04 ANR0403I Session 953 ended for node NORBERTSTA (SUN
SOLARIS).
01/23/2002 10:00:04 ANR0406I Session 954 started for node NORBERTSTA (SUN
SOLARIS) (Tcp/Ip 10.x.x.x (43322)).
01/23/2002 10:00:05 ANR0403I Session 954 ended for node NORBERTSTA (SUN
SOLARIS).
01/23/2002 10:00:06 ANR0406I Session 955 started for node NORBERTSTA (SUN
SOLARIS) (Tcp/Ip 10.x.x.x (43323)).
01/23/2002 10:00:17 ANR0406I Session 956 started for node NORBERTSTA (SUN
SOLARIS) (Tcp/Ip 10.x.x.x (43324)).
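
As an illustrative sketch (mine, not from the thread), the session start/end
messages above can be paired mechanically by session number; the regex assumes
the exact ANR0406I/ANR0403I wording shown in the log excerpt.

```python
import re

# Matches "ANR0406I Session N started ..." and "ANR0403I Session N ended ..."
MSG = re.compile(r"(ANR040[36]I) Session (\d+) (started|ended) for node (\S+)")

def pair_sessions(actlog_lines):
    """Return {session_number: {'node': ..., 'started': ..., 'ended': ...}}."""
    sessions = {}
    for line in actlog_lines:
        m = MSG.search(line)
        if not m:
            continue
        _msgno, sess, verb, node = m.groups()
        rec = sessions.setdefault(int(sess), {"node": node})
        rec[verb] = True  # sessions with 'started' but no 'ended' are still open
    return sessions
```

A session that only has a start record (like 955 above, which carries the
summary) shows up with no "ended" key until its close message appears later
in the log.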

 -Original Message-
 From: Martin, Jon R. [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 24, 2002 7:19 AM
 To: [EMAIL PROTECTED]
 Subject: Identifying client schedule mode.


 Hello, this should be an easy one...

 The TSM Server has schedmode set to any.  What can I run from
 the server to
 see what mode a client is using?  I have been selecting from
 various tables
 but have not had any luck in finding the answer.

 Thanks,
 Jon




Re: Summary Table Problems sample

2002-12-11 Thread Richard Cowen
Server 5.1.5.4 AIX 5.1
Clients
STINGRAY1   5.1.5.5  (upgraded from 5.1.5.0 around 12:20:00)
RICHARDC2K  5.1.5.4
NORBERT 4.2.1.15


Using the Summary Table:

Session  Node        StartTime  EndTime   Objects  Bytes    Elapsed  Session   Platform     Media Wait
3        RICHARDC2K  12:36:11   13:14:44  15757    245.6MB  2313     ARCHIVE   WinNT        49
13       STINGRAY1   12:53:28   12:53:33  5        9.7KB    5        BACKUP    AIX          0
43       RICHARDC2K  14:12:07   14:12:20  7        1.5KB    13       BACKUP    WinNT        0
47       RICHARDC2K  14:14:50   14:39:07  2        591      1457     BACKUP    WinNT        0
142      NORBERT     15:55:38   15:56:52  2        1.4GB    74       ARCHIVE   SUN SOLARIS  39



From the actlog:

Session  Node        StartTime  EndTime   Objects  Bytes    Elapsed  Session   Platform     Media Wait
20       STINGRAY1   12:18:52   12:33:36  13       1.9MB    884      backup    AIX          0
3        RICHARDC2K  12:36:11   13:14:44  5900     250.9MB  2313     archive   WinNT        49
13       STINGRAY1   12:53:28   12:53:33  5        9.7KB    5        backup    AIX          0
43       RICHARDC2K  14:12:07   14:12:20  7        1.5KB    13       backup    WinNT        0
47       RICHARDC2K  14:14:50   14:39:07  2        599      1457     backup    WinNT        0
142      NORBERT     15:55:38   15:56:52  2        1.4GB    74       archive   SUN SOLARIS  39

The Norbert session actually ran for almost an hour:

12/10/2002 14:59:08  ANR0406I Session 71 started for node NORBERT (SUN
SOLARIS)
  (Tcp/Ip 10.178.90.202(38907)).
12/10/2002 14:59:08  ANR0403I Session 71 ended for node NORBERT (SUN
SOLARIS).
12/10/2002 14:59:09  ANR0406I Session 72 started for node NORBERT (SUN
SOLARIS)
  (Tcp/Ip 10.178.90.202(38908)).
12/10/2002 14:59:23  ANR0403I Session 72 ended for node NORBERT (SUN
SOLARIS).
12/10/2002 15:00:13  ANR0406I Session 73 started for node NORBERT (SUN
SOLARIS)
  (Tcp/Ip 10.178.90.202(38909)).
12/10/2002 15:00:13  ANR0406I Session 74 started for node NORBERT (SUN
SOLARIS)
  (Tcp/Ip 10.178.90.202(38910)).
12/10/2002 15:00:31  ANR0406I Session 75 started for node NORBERT (SUN
SOLARIS)
  (Tcp/Ip 10.178.90.202(38911)).
12/10/2002 15:00:31  ANR0403I Session 75 ended for node NORBERT (SUN
SOLARIS).
12/10/2002 15:16:12  ANR0482W Session 73 for node NORBERT (SUN SOLARIS)
  terminated - idle for more than 15 minutes.
12/10/2002 15:55:36  ANR0403I Session 74 ended for node NORBERT (SUN
SOLARIS).
12/10/2002 15:55:38  ANR0406I Session 142 started for node NORBERT (SUN
  SOLARIS) (Tcp/Ip 10.178.90.202(38923)).
12/10/2002 15:55:38  ANE4952I (Session: 142, Node: NORBERT)  Total
number of
  objects inspected:2
12/10/2002 15:55:38  ANE4953I (Session: 142, Node: NORBERT)  Total
number of
  objects archived: 2
12/10/2002 15:55:38  ANE4958I (Session: 142, Node: NORBERT)  Total
number of
  objects updated:  0
12/10/2002 15:55:38  ANE4960I (Session: 142, Node: NORBERT)  Total
number of
  objects rebound:  0
12/10/2002 15:55:38  ANE4957I (Session: 142, Node: NORBERT)  Total
number of
  objects deleted:  0
12/10/2002 15:55:38  ANE4970I (Session: 142, Node: NORBERT)  Total
number of
  objects expired:  0
12/10/2002 15:55:38  ANE4959I (Session: 142, Node: NORBERT)  Total
number of
  objects failed:   0
12/10/2002 15:55:38  ANE4961I (Session: 142, Node: NORBERT)  Total
number of
  bytes transferred: 1.38 GB
12/10/2002 15:55:38  ANE4963I (Session: 142, Node: NORBERT)  Data
transfer
  time:3,279.74 sec
12/10/2002 15:55:38  ANE4966I (Session: 142, Node: NORBERT)  Network
data
  transfer rate:  442.39 KB/sec
12/10/2002 15:55:38  ANE4967I (Session: 142, Node: NORBERT)  Aggregate
data
  transfer rate:436.30 KB/sec
12/10/2002 15:55:38  ANE4968I (Session: 142, Node: NORBERT)  Objects
compressed
  by:0%%
12/10/2002 15:55:38  ANE4964I (Session: 142, Node: NORBERT)  Elapsed
processing
  time:00:55:25
12/10/2002 15:56:52  ANR0403I Session 142 ended for node NORBERT (SUN
SOLARIS).



Re: tsm server monitoring products

2002-12-12 Thread Richard Cowen
Blatant self-serving advertisement follows.
===

CNT offers a TSM Reporting Tool as part of our TSM Consulting Services.
Interested parties can contact me or John Duffy for more details.

Excerpt from collateral:

Key elements of TSM Monitoring Service
The TSM Monitoring Services tool collects daily information
from the TSM server. The collected information is
analyzed, categorized and tabulated by CNT's expert system.
The resulting information (accessible only by your
administrator) provides daily pictures of your TSM
environment, including administrative events and all
exceptions, categorized by severity and presented in a
drill-down format. This helps your TSM administrator
quickly view the state of your TSM environment from
anywhere on the internet, and frees him or her from
monotonous but crucial daily monitoring. Your TSM
administrator can focus on more strategic initiatives in
the backup/recovery environment.

Richard Cowen
Senior Technical Specialist
CNT
508-293-0337

John Duffy
[EMAIL PROTECTED]
(908) 898-1100 x 3514

-Original Message-
From: Justin Bleistein [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 2:32 PM
To: [EMAIL PROTECTED]
Subject: tsm server monitoring products


Like servergraph, does anyone have a list of any more products for this
purpose?  I'm trying to compile a list of products so that I can review them
for our environment.  Thanks!

--Justin Richard Bleistein
Unix Systems Administrator (Sungard eSourcing)
Desk: (856) 866 - 4017
Cell:(856) 912-0861
Email: [EMAIL PROTECTED]



Re: 3584 library loading

2002-12-17 Thread Richard Cowen
... Is that called a paradox or a conundrum?

I would call that a Catch-22.



Re: System Object / Mgmt. Classes / Policy Sets / Backup Groups - - What's the best way??

2003-01-31 Thread Richard Cowen
tsm: ORCAV5> help def clientopt

DEFINE CLIENTOPT
DEFINE CLIENTOPT (Define an Option to an Option Set)
Use this command to add a client option to an option set.
For details about the options and the values you can specify, refer to
Backup-Archive Clients Installation and User's Guide.
Privilege Class
To issue this command, you must have system privilege or unrestricted policy
privilege.
Syntax
>>-DEFine CLIENTOpt--option_set_name--option_name--option_value-->

   .-Force--=--No--.
>--+---------------+--+----------------------+-----------------><
   '-Force--=--+-No--+-'  '-SEQnumber--=--number-'
               '-Yes-'



Parameters
  option_set_name (Required)
  Specifies the name of the option set.
  option_name (Required)
  Specifies a client option to add to the option set.
  Notes:
To add an INCLUDE or EXCLUDE statement to a client option set, the correct
syntax is:
define clientopt option_set_name inclexcl "include c:\proj\text\devel.*"

-Original Message-
From: Hart, Charles [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 12:01 PM
To: [EMAIL PROTECTED]
Subject: Re: System Object / Mgmt. Classes / Policy Sets / Backup Groups
- - What's the best way??


I've read through this thread and need to implement as our retention for
WIn2k is 180 Current and Deleted.  I have created the separate management
class and pointed it to the copy group etc.  My question is what is the
proper syntax to add the client opt to an existing client option set on the
TSM server side as it would be impossible to have our Intel platform admins
make a dsm.opt change to 200+ WIn2k boxes.

I tried the following syntax but get no where.  Someone had mentioned they
did do the Server side inclexcl statement.

upd clientopt INCLUDE.SYSTEMOBJECTS ALL SYSTEM_OBJECTS
Invalid option sequence number - SYSTEMOBJECT.

upd cliento intel INCLEXCL SYSTEMOBJECT SYSTEM_OBJECTS 0
ANR2023E UPDATE CLIENTOPT: Extraneous parameter - 0.

upd clientopt intel INCLUDE.SYSTEMOBJECTS ALL SYSTEM_OBJECTS
ANR2195E UPDATE CLIENTOPT: Invalid option sequence number - ALL.

Thank You



Rooting for Richard

2003-02-05 Thread Richard Cowen
http://www.thebostonchannel.com/mlb/1954826/detail.html

B.C. To Play B.U. For Beanpot Trophy
Eagles Have Three-Game Winning Streak Against Terriers



Re: IBM/TSM web site errors

2003-03-20 Thread Richard Cowen
It's working for me.

http://www-3.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManag
er.html

click downloads and get to:

http://www-1.ibm.com/support/search.wss?rs=663&tc=SSGSG7&dc=D400&rankprofile=8

which has links (for example, patches to server v5) and asks you to log in:

https://www-120.ibm.com/software/support/ecare/support_login.jsp

which gets you to the page you selected:

http://www-1.ibm.com/support/entdocview.wss?rs=663&context=SSGSG7&q=&uid=swg24001811&loc=en_US&cs=utf-8&lang=en


which gets you to index.storsys.ibm.com (where it always has.)


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 20, 2003 2:06 PM
To: [EMAIL PROTECTED]
Subject: IBM/TSM web site errors


As far as I can tell, the new IBM TSM web site is completely non-functional.

I can't click the DOWNLOADS or APARS link without getting an error.

I can't get far enough to log in.

Is anybody else having this problem?

And does anybody know who we should call to report the web site down?


Re: lbtest batch input file syntax

2004-09-30 Thread Richard Cowen
This is sort of old- use at your own risk.



From: ADSM: Dist Stor Manager on behalf of Paul Zarnowski
Sent: Thu 9/30/2004 3:47 PM
To: [EMAIL PROTECTED]
Subject: lbtest batch input file syntax



Has anyone figured out the syntax for the lbtest batch input file?
Thanks.
..Paul


--
Paul Zarnowski Ph: 607-255-4757
719 Rhodes Hall, Cornell UniversityFx: 607-255-8521
Ithaca, NY 14853-3801  Em: [EMAIL PROTECTED]


Re: How Many Backup Administrators to Data Backed Up?

2005-01-17 Thread Richard Cowen
/* Begin tongue-in-cheek
*/
 
I use this formula:
 
wc = windoze clients
xc = unix clients
nc = netware clients
nas = nas clients
lf = lan-free clients
lb = libraries
d = number of different tape drive technologies
s = tsm instances
u = upgrades per year (O/S, DB, TSM, etc)
pm = platform migrations per year
b = percent of capacity being used (time windows, cpu, LAN, FC, etc)
t = CNT's TSM Reporting Tool being used (shameless plug)
 
a = TSM FTE's

a = ( wc /200 + xc / 400 + nc / 50 + nas / 10 + lf /25 + lb /10 + d / 2 + s /8 
+ u / 8 + pm / 2 ) * (1+b-.6)  * (t * 0.4)
 
So if wc =400, xc = 100, nc = 5 nas = 4 lf = 5, the client contribution is 2 + 
.5 + .1 +.4 + .2 = 3.2
And if lb = 2, d = 1, s = 4, u = 2, pm = .5, b=.60, the admin contribution is 
.1 + .5 + .5 + .25 + .25 = 1.6
Preliminary total = 4.8
If busy factor is 0.8, multiply by 1.2 or 5.75.
 
if t=1, total = 2.3, which would be two good admins, one good and two medium 
admins, or seven poor admins.
 
/* End tongue-in-cheek
*/
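
Transcribed literally as an illustrative sketch (the email's own worked
example rounds loosely along the way), the formula is:

```python
def tsm_ftes(wc, xc, nc, nas, lf, lb, d, s, u, pm, b, t):
    """Tongue-in-cheek TSM admin FTE estimate, following the formula as written.

    b is the busy factor (fraction of capacity in use); t is 1 when the
    reporting tool is in use (the t*0.4 factor is the shameless plug).
    """
    workload = (wc / 200 + xc / 400 + nc / 50 + nas / 10 + lf / 25
                + lb / 10 + d / 2 + s / 8 + u / 8 + pm / 2)
    return workload * (1 + b - 0.6) * (t * 0.4)
```

With the email's example inputs (wc=400, xc=100, nc=5, nas=4, lf=5, lb=2,
d=1, s=4, u=2, pm=0.5, b=0.8, t=1) this evaluates to roughly 2.2 FTEs,
close to the 2.3 quoted above.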
 


From: ADSM: Dist Stor Manager on behalf of Prather, Wanda
Sent: Mon 1/17/2005 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How Many Backup Administrators to Data Backed Up?



A couple of years ago, a very large TSM installation (a member of our
local user group) got together with a couple of other very large TSM
installations and seriously studied this question.  I don't remember how
to find the posting, but I did write down the numbers.

What they came up with is that the number of FTE admins is related not
to your hardware, number of TSM servers, or how much data you back up,
but on the number of SCHEDULES you have running (which for most sites is
about equal to the number of clients).

Worked out to 1 FTE for each 150-200 schedules.

If you need a contact # for more info, I can probably get it for you. 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Wednesday, January 12, 2005 9:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How Many Backup Administrators to Data Backed Up?


Too true... I am finding that we spend a lot less time managing our TSM
than we have to spend managing the NetBackup environment.  ;-)

Thanks again. This is great info, now if I can find the time to
compile

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Kauffman, Tom
Sent: Wednesday, January 12, 2005 8:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How Many Backup Administrators to Data Backed Up?


Yeah, but then I'll come along and screw up the numbers <grin>.

One TSM server, 10 LTO2 drives in a 3584 library; 25 servers (18 AIX, 7
Windows). We don't do desktop systems.

We do about 2.6 TB of backup and archiving nightly and copy about 1.8 TB
to move off-site daily.

I'm the primary TSM admin -- I spend less then 2 hours per day on TSM on
a bad day, maybe 40 minutes on a good day. I have two aides that spend
an average of 20 minutes total per day on TSM. There are a few
exceptions, but for the most part we're tuned to be fully automated.

I am about to totally rework my database backup process for the backup
that goes offsite, and I expect to invest roughly 20 hours over the next
two weeks doing the coding, testing, and documentation. But on a
day-to-day basis, we treat TSM with benign neglect here.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Wednesday, January 12, 2005 8:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How Many Backup Administrators to Data Backed Up?

Thank you all for your input.  Seems like the Backup World is big
enough, you would think Gartner would publish a Basic Ratio as they do
for SAN admins to disk. (Like many places our mngt thinks Gartner is
King) but this exercise we go through annually is to get upper mngt's
buy-in).

Have a great day!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
David Nicholson
Sent: Wednesday, January 12, 2005 6:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How Many Backup Administrators to Data Backed Up?


What a great time for this post/question!!!  Thanks for asking it!!


(2) TSM servers, 2 tape Libraries (3494 w/ 15 3590's drives / 2500 cells
and a 3584 w/ 6 LTO II drives / 250 cells)
800 nodes (AIX, Windows, Linux) Every flavor of DB or app you can
imagine.
We backup 2-3TB a day.
Total stored in TSM is 90TB, combined database used is 80GB.

We invest around 4-8 hours a day on TSM stuff.  TSM, like everything
else, is understaffed... we could be investing about 3 times the amount of
people resources on TSM that we currently do.


Dave


Re: NAS Backup Summary: Help!

2005-04-21 Thread Richard Cowen
I believe you will need to check the actlog and relate the Process number to 
the summary column NUMBER.
 
ANR1064I Differential backup of NAS node ANODE, file system /filesystem1, 
started as process 1615 by administrator ANADMIN.




From: ADSM: Dist Stor Manager on behalf of Joni Moyer
Sent: Thu 4/21/2005 3:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: NAS Backup Summary: Help!



Hello Everyone!

I am trying to create a select statement that will summarize NAS backups,
but when I run this command I get the following output.  Is there a way to
find out what filesystem it had backed up?  That is the last piece of what
I need for this report and I can't figure out where that info would
come from.  Thank you in advance for your help!

select entity as "Admin Task",date(start_time) as "Date",time(start_time)
as "Start",time(end_time) as "End",cast(substr(cast(end_time-start_time as
char(20)),3,8) as char(8)) as "Duration", schedule_name as
"Schedule",examined as "Examined",affected,failed,cast(bytes/1024/1024 as
decimal(6,0)) as "MB",successful from summary where
start_time>=current_timestamp - 24 hours and activity not in ('ARCHIVE')
and activity='NAS Backup' order by entity

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:00:17
   End: 19:01:53
  Duration: 00:01:36
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 42
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:03:11
   End: 19:06:05
  Duration: 00:02:54
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 0
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:06:06
   End: 19:09:07
  Duration: 00:03:01
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 37
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:09:07
   End: 19:12:01
  Duration: 00:02:54
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 38
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:12:01
   End: 19:14:48
  Duration: 00:02:47
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 0
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:14:48
   End: 19:17:35
  Duration: 00:02:47
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 0
SUCCESSFUL: YES

Admin Task: NAS_SERVER_2
  Date: 2005-04-20
 Start: 19:17:35
   End: 19:20:30
  Duration: 00:02:55
  Schedule:
  Examined: 0
  AFFECTED: 0
FAILED: 0
MB: 0
SUCCESSFUL: YES


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: Why virtual volumes?

2007-08-22 Thread Richard Cowen
I think originally, it was because intersite SCSI/FC was impossible or too
expensive, while IP was cheap. (Or maybe one TSM server had SCSI tape
drives and a second did not.)
Virtual volumes are basically treating one TSM server as a client and
archiving to the other TSM server over IP.
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Keith Arbogast
Sent: Wednesday, August 22, 2007 4:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Why virtual volumes?

Richard,
You asked thought-provoking questions, but didn't answer mine. What is
the compelling reason to use virtual volumes? Offsite copypools and
certain restorability of the TSM database are essential. Thank you for
spotlighting those points. However, I can do those without virtual
volumes. Right? What circumstances make virtual volumes helpful,
preferable or necessary? TSM development designed and built them for a
reason. What is the reason?

This is not a rhetorical question. I am hoping someone will turn this
light bulb on for me.

With best wishes,
Keith Arbogast


Re: TSM locked me out!!!!

2008-01-03 Thread Richard Cowen
Any chance you have more than one instance of TSM running on that host?
I have this problem on my laptop, running several instances, and after
some days of not running, I try and start them up.
The console (dsmserv) mode connection is always to the same instance, no
matter what env variables I set ...
v5.4.1.2, windows xp.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Loon, E.J. van - SPLXM
Sent: Thursday, January 03, 2008 10:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM locked me out

Hi *SM-ers!
After a long time I tried logging on to my test TSM server, but I
couldn't.
I canceled the server (kill -9) and started it in the foreground. It
seemed to be related to an unexpected system date, so here is what I
did:

accept date
ANR2017I Administrator SERVER_CONSOLE issued command: ACCEPT DATE
ANR0894I Current system has been accepted as valid.
TSM:AAR1

Ok, so now I enable the sessions again:

enable sessions
ANR2017I Administrator SERVER_CONSOLE issued command: ENABLE SESSIONS
ANR2552I Server now enabled for Client access.
TSM:AAR1

And I try connecting through the Windows Admin again:

ANR0407I Session 5 started for administrator XI01EL (WinNT) (Tcp/Ip
171.21.240.138(1110)).
ANR0420W Session 5 for node XI01EL (WinNT) refused - server disabled for
user access.

HUH?!?!?!? It's an admin session, so I always should be able to get
in!
Also, the server IS enabled according to the q status:

q status
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY STATUS
Storage Management Server for AIX-RS/6000 - Version 5, Release 3, Level
5.0


Server Name: AAR1
 Server host name or IP address:
  Server TCP/IP port number: 1503
 Server URL:
..
 Subfile Backup: No
   Availability: Enabled for Client sessions
..

I'm lost! Anybody seen this before?
Thanks in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain
confidential and privileged material intended for the addressee only. If
you are not the addressee, you are notified that no part of the e-mail
or any attachment may be disclosed, copied or distributed, and that any
other action related to this e-mail or attachment is strictly
prohibited, and may be unlawful. If you have received this e-mail by
error, please notify the sender immediately by return e-mail, and delete
this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or
its employees shall not be liable for the incorrect or incomplete
transmission of this e-mail or any attachments, nor responsible for any
delay in receipt.
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
Airlines) is registered in Amstelveen, The Netherlands, with registered
number 33014286
**


Re: ANR8963E ANR1791W

2008-02-12 Thread Richard Cowen
...
 ANR1791W HBAAPI wrapper library /usr/lib/libHBAAPI.a(shr_64.o) failed
to load or is missing.

This is because after enabling SAN Discovery, you need to stop/start the
TSM instance to get the module to load.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Andy Huebner
Sent: Tuesday, February 12, 2008 10:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ANR8963E  ANR1791W

We started seeing this error after we upgraded to 5.4.2.0:
ANR8963E Unable to find path to match the serial number defined for
drive XX in library IBM3494
This error causes the path to go off-line.  We update the path with
AUTODETECT=YES and it fixes the path every time.  But the error will
happen again and again... but to a random drive.

IBM asked us to turn on SAN Discovery to resolve the error which has
generated this error:
ANR1791W HBAAPI wrapper library /usr/lib/libHBAAPI.a(shr_64.o) failed to
load or is missing.

We are running AIX 5.3 and the correct version of the API is loaded.
This whole thing is starting to drive us nuts with paths dropping every
day and with IBM not having a solution yet.

Does anyone have a solution to these errors?  We are stuck and IBM is
working on it.

AIX 5.3
TSM 5.4.2.0
IBM3494
3590-E drives
A very static tape SAN

Andy Huebner


Re: lan-free tape dismount retention

2003-07-16 Thread Richard Cowen
in dsmserv.opt of Library Client:

SHAREDLIBIDLE YES



-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 16, 2003 9:08 AM
To: [EMAIL PROTECTED]
Subject: lan-free tape dismount retention


Hello!

I'm working on the TPOC right now and I have the mount retention set to
 minutes, but as you can see below the volume is being dismounted
seconds after the archive session to the tape drive has completed for the
storage agent.  Is there a parameter that must be set on the storage agent?
Or do I need to change something under the TSM server configuration.
Thanks!


Re: Month End Strategies (TDP for Exchange)

2004-03-30 Thread Richard Cowen
You might read the manual around the backup command, looking at the COPY type of 
backup.
Note in the include/exclude configuration where you can specify a different MC for the 
different types of backup. Consider using a different MC for FULL and COPY backup 
types...


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Coats, Jack
Sent: Tuesday, March 30, 2004 3:44 PM
To: [EMAIL PROTECTED]
Subject: Re: Month End Strategies (TDP for Exchange)


I have been thinking over this option for someone else.

One option is to set up a separate tape pool (with copy pool if you want two
copies), and set up a management class for it that expires data after x
years.  Only do a monthly backup of Exchange to this pool.  And make the
copypool version if that is in our requiremets.

Just a thought. ... Let us know what you figure out!

 -Original Message-
 From: Wayne Prasek [SMTP:[EMAIL PROTECTED]
 Sent: Tuesday, March 30, 2004 11:20 AM
 To:   [EMAIL PROTECTED]
 Subject:  Month End Strategies (TDP for Exchange)

 I'm wondering what the best strategy to get a month end backup of
 Exchange going.  We would like to keep the backup for an indefinite
 amount of time (most likely a few years).  The backup tape(s) will be
 sent offsite.  Currently we have a nightly backup of Exchange which
 expires after 30 days.

 I have a couple ideas on how to accomplish this.  I want to hear what
 other people are doing.   Thanks in advance.

 ***
  Wayne Prasek
  I.T. Support Analyst (Server)
  204-985-1694
  [EMAIL PROTECTED]
 ***


Re: Taking dbbackups on remote TSM servers

2004-05-06 Thread Richard Cowen
There are several ways to do remote tape creation, and all are in use somewhere. Here 
are the most common.
 
1. Backup to a remote TSM server over IP and then create the tape locally.
2. Server to server via normal IP.
3. Tape extension works using IP, T3, DS3, OC3, WDM, etc. Many hardware vendors 
make such equipment.
 
Options 1 and 2 require a TSM Server running at the remote site with a tape drive
(library).
Option 3 just requires the tape drive (library) to be remote- although to restore 
remotely, you would still require the TSM server to be there.  Some use Option 1 and 
Option 3 bi-directionally, when they have two data centers serving as D/R sites to 
each other.
 
For more information you can contact me at [EMAIL PROTECTED]
 


Re: journaling on SUN

2004-06-23 Thread Richard Cowen
Do files live forever?  Or migrate to other storage after a year or two, or by some 
other (least-recently-touched) algorithm?
I assume the files are functionally read-only.
Are the path names date dependent?
Is there an external list of newly added filenames? (This would be a pseudo-journaled
approach.)
Does the PACS offer a way to emulate HSM?
Can you store the original files on a special document server 
(content-addressable-storage, ala EMC Centerra?)
Some have (or will have) TSM API's
 
Is there any other regularity that can help without risking a miss ?



From: ADSM: Dist Stor Manager on behalf of Joel Fuhrman
Sent: Wed 6/23/2004 1:54 PM
To: [EMAIL PROTECTED]
Subject: journaling on SUN



Does the TSM client for SUN support journaling?  If not, are there any rumors
that it's being developed?

The system administrator for a medical imaging system (GE-PACS) wants to use TSM
for backup.  The system has millions of small files which are never changed.
Only new files are added to the system.  So this seems ideal for journaling.

If journaling is not an option, I would welcome suggestions on methods which
would guarantee that all files are backed up without having to suffer the
overhead of the standard backup process.


Re: Database entry size for an object

2001-06-26 Thread Richard Cowen

From a posting of mine last year: (adsm v3.1)

From the occupancy table:

22,254,868 ARCHTAPE  archives on tape
40,030,448 BACKUP3590_OFFSITEbackup copypool on tape
40,065,503 TAPEPOOL  backups on tape

   102,350,819  entries

52,600MBDatabase assigned capacity

52,600,000,000 / 102,350,819 = 513 bytes/entry.



Re: problems with 3583 ultrium library

2001-07-24 Thread Richard Cowen

I'm having the following problem with a 3583 library with three LTO
drives:

ANR8358E Audit operation is required for library MAXIM.
ANR8834E Library volume **UNKNOWN** is still present in library MAXIM drive
LTO1 (/dev/rmt7), and must be removed manually.

Is there a message on the LCD panel of the library?  If so, clear it and
restart TSM Server. That cleared it for me



Re: New Question Re - Tape problem after moving TSM to new server

2006-03-27 Thread Richard Cowen
There is (or should be) a file IBMtape.conf in /usr/kernel/drv.
See the manual IBM SCSI Tape Drive, Medium Changer, and Library Device
Drivers. GC35-0154-05.1
Look in the Solaris chapters 26-29.
Basically you define which LUNs IBMtape will see.  There is a file in
the same directory named st.conf, which you need to consider as well,
as you don't want the same device seen by more than one driver!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Farren Minns
Sent: Monday, March 27, 2006 8:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] New Question Re - Tape problem after moving TSM to new
server

Trying to run TSM 5.1.6.2 on Solaris 2.9

IBMtape version - 4.1.2.6

Hi All

As this isn't strictly a TSM question I will understand if it gets
ignored, but does anyone know how (or if it is even possible) to get
Solaris to only use the IBMtape device drivers and not to also use the
default Solaris SCSI tape drivers. If I could get the OS to do that, I
would hope to see my two IBMtape drives assigned to 0 and 1 (0stc, 1stc)
again, and this would save me a lot of extra work.

If there are any Solaris people out there who can help me with this, it
would be awesome.

Many thanks

Farren


##
The information contained in this e-mail and any subsequent
correspondence is private and confidential and intended solely for the
named recipient(s).  If you are not a named recipient, you must not
copy, distribute, or disseminate the information, open any attachment,
or take any action in reliance on it.  If you have received the e-mail
in error, please notify the sender and delete the e-mail.

Any views or opinions expressed in this e-mail are those of the
individual sender, unless otherwise stated.  Although this e-mail has
been scanned for viruses you should rely on your own virus check, as the
sender accepts no liability for any damage arising out of any bug or
virus infection.
##

SPECIAL NOTICE

All information transmitted hereby is intended only for the use of the
addressee(s) named above and may contain confidential and privileged
information. Any unauthorized review, use, disclosure or distribution
of confidential and privileged information is prohibited. If the reader
of this message is not the intended recipient(s) or the employee or agent
responsible for delivering the message to the intended recipient, you are
hereby notified that you must not read this transmission and that disclosure,
copying, printing, distribution or use of any of the information contained
in or attached to this transmission is STRICTLY PROHIBITED.

Anyone who receives confidential and privileged information in error should
notify us immediately by telephone and mail the original message to us at
the above address and destroy all copies.  To the extent any portion of this
communication contains public information, no such restrictions apply to that
information. (gate02)


Re: Error writing volume history file

2007-03-20 Thread Richard Cowen
Some time ago (years), that error message would appear when a deadlock
of some kind prevented the volume history table from being accessed
during the write attempt.

One GB scratch volumes?  At what rate do you create/delete these during
the night?  A high rate would increase the chance of hitting a timing
window.

Just curious, why such a small volume?  Would larger volumes result in a
lower rate of creation/deletion?


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Thomas Denier
Sent: Tuesday, March 20, 2007 1:57 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Error writing volume history file

We are seeing the following pair of messages occasionally:

ANRD icvolhst.c(5267): ThreadId 47 Error Writing to output File.
ANR4510E Server could not write sequential volume history information to
/var/tsm_automation/volumehistory.

Successive occurrences of this pair of messages are typically a week to
two weeks apart. We have a 5.3.4.0 server running under mainframe Linux.
There is plenty of disk space available in the /var file system. The
Linux error logs do not report any I/O errors when these TSM messages
occur. The path at the end of the second message is the one specified
for the sequential volume history file in our server options file.

I have already contacted IBM, but I am not optimistic about getting a
resolution; the last e-mail I got from IBM requested 'ls' command output
needed to check the possibility of root not having write access to the
volume history file.
Has anyone else seen this and found a way to stop it?


Re: Remote Vaulting

2005-07-07 Thread Richard Cowen
There is equipment available to extend fibre to any distance over IP for tape.  
The library/drives look local to the TSM server.
Our customers use this solution quite a bit.  Some do cross-connections between 
datacenters (I'll do your DR, you do mine.)
 
Your housekeeping is somewhat different, as your copy pool tapes are now
accessible/readwrite rather than unavailable/vault, and your db backup tapes are
also online.  (Or, you can actually remote your primary pool volumes instead
and leave your copy pool volumes local.  There are pros and cons to each method
and sizing decisions around the bandwidth required.)
 
Small Ad:
 
(If you look at my email, you will see the company that makes the extension 
devices, and supplies the people to design, install and maintain them, if so 
desired.)



From: ADSM: Dist Stor Manager on behalf of Prather, Wanda
Sent: Thu 7/7/2005 5:01 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: remote vaulting



Depends on what you mean by remote.
We have just set up a tape library for our offsite tapes.

But we're within range to use fibre, so as far as TSM is concerned, it's
really a local tape library, even though it's physically in another
building.  Ideal situation, if the fibre limits meet your DR
requirements.

Wanda Prather
I/O, I/O, It's all about I/O  -(me)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Miller, Ryan
Sent: Wednesday, July 06, 2005 4:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: remote vaulting


What info are you looking for? We have had a remote vault for 5
years.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Pugliese, Edward
Sent: Wednesday, July 06, 2005 3:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: remote vaulting

You could use server to server communication and then the remote TSM
server is the one with the library.

Local server sends virtual volumes to remote server.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
bob molerio
Sent: Wednesday, July 06, 2005 3:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] remote vaulting

HI,

Has anyone implemented this?

TSM server with a remote tape library?

I can't seem to find any information about this anywhere.

Thanks,

Bob M



 


Re: strange incremental behavior on Windows 5.2.2.0

2005-08-09 Thread Richard Cowen
Maybe this one?


 


IC38377: Backup of Windows2003 System Object fails - ANS1950E - caused after a 
Microsoft VSS_E_PROVIDER_VETO error

 
  A fix is available
IBM Tivoli Storage Manager V5.2 Fix Pack 3 Clients and READMEs 
http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg24007407


APAR status 
Closed as program error.


Error description 
The backup of the Windows2003 System Object fails with:

ANS1950E Backup via Microsoft Volume Shadow Copy failed.  See
error log for more detail.

The error reported in the dsmerror.log is:

CreateSnapshotSet():  pAsync->QueryStatus() returns
hr=VSS_E_PROVIDER_VETO

It has also been reported that after this failure, subsequent
incremental backups of the Windows 2003 system object may fail
with:

ANS1304W Active object not found
Local fix 
Problem summary 

* USERS AFFECTED: All TSM B/A client v5.2.0 and v5.2.2 on  *
* Windows 2003.*

* PROBLEM DESCRIPTION: See ERROR DESCRIPTION. *

* RECOMMENDATION: Apply fixing PTF when available. *
* This problem is currently projected  *
* to be fixed in PTF level 5.2.3.  *
* Note that this is subject to change at   *
* the discretion of IBM.   *

If BACKUP SYSTEMSTATE or BACKUP SYSTEMSERVICES on Windows 2003
fails, subsequent incremental backups of the Windows 2003 system
state or system service may fail with the message ANS1304W
Active object not found.
Backup processing halts after the ANS1304W message is issued,
even if there are more files to process.
Problem conclusion 
The client code has been fixed so that after a system state or
system services backup failure, subsequent attempts will not
fail (barring any other conditions that might cause a failure).
Temporary fix 
Windows interim fix 5.2.2.5



Re: 1 of 3 TOR reports refuses to generate?

2005-08-11 Thread Richard Cowen
Hi Steve,

Just out of curiosity, could you tell me what TOR is giving you that TRT 
doesn't?  I am always interested in improving our product...

Thanks.


Re: TSM Operational Reporting and E-mail

2005-09-13 Thread Richard Cowen
You could try:
 
Open a DOS window.
Enter:
nslookup
> set type=MX
> super.net.pk
 
You should see something like:
 
Server:  xxx.yyy.com
Address:  111.222.222.001
Non-authoritative answer:
yyy.com  MX preference = 10, mail exchanger = mail.yyy.com
yyy.com  MX preference = 20, mail exchanger = mail2.yyy.com
mail.yyy.com   internet address = 111.222.222.010
mail2.yyy.com  internet address = 111.222.222.011
 
The names and addresses should match your environment.
If you do not resolve any MX types, check your Network settings (DNS, etc.)
 



From: ADSM: Dist Stor Manager on behalf of Sandra
Sent: Tue 9/13/2005 1:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM Operational Reporting and E-mail



Dear All,
I am running TSM 5.2.3 with maintenance applied till 5.2.6 on Windows 2003
server.

Operational reporting is working fine except that it can't find my SMTP server.
I can't find any logs where i can track down the problem.

I have previously configured this on a Win2000 machine and it worked fine. Now in my
new company I have it on Win2003. I have tried to give both the IP and the fully
qualified domain name in the SMTP address of my SMTP server. It's actually Exchange
2000 with SP3.

I can't re-install operational reporting since it would have to be installed on
another machine.

Please enlighten me as to what I am missing and where I can find logs of
this connection failure.

Regards
Sandra


Re: HOW TO SCRIPT TSM BACKUP VERIFICATION IN WINDOWS

2005-10-28 Thread Richard Cowen
Issue a query backup using the TSM client on the node that does the
backup, and compare the output to the directory in question.
You can probably do that with the windows shell, but it is better in
Perl, or some higher-level language.

dsmc query backup full_directory_path_name > mylist.txt

This will show you all active files, with backup dates/times and
sizes.




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Laura Mastandrea
Sent: Friday, October 28, 2005 2:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] HOW TO SCRIPT TSM BACKUP VERIFICATION IN WINDOWS

Can someone share with me a Windows script that will verify files in a
directory have been successfully backed up within TSM?



Re: HOW TO SCRIPT TSM BACKUP VERIFICATION IN WINDOWS

2005-10-28 Thread Richard Cowen
Oops.

Here is the perl code in-line.


#!/usr/local/bin/perl
#
# USes a query backup to verify a directory is backed up.
#
# Assumptions:
#
#   The normal TSM environmentals will need to be set
(DSM_DIR,DSM_CONFIG).
#   The dsmc.exe will need to be locatable by the PATH variable.
#   All configuration, log files, etc will be found in the working
directory.
#   The dsm.opt file has password generate set.
#
# INVOCATION: dircomp.pl -p dir_path_name
#
#
# -d debugging
# -h help
# -p pathname
# -w host is windows


$version=1.0;

#
# HISTORY:
#
# 10/28/2005Started.
#

require "getopts.pl";

Getopts('wdhp:');

if ($opt_h) {
print "Version $version\n";
print "dircomp -p <full path name to directory> -d -h -w\n";
print " -d debugging\n -h help\n -w windows\n";
exit;
}


($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst)=localtime(time);
$year+=1900;  #number of years since 1900

$ryear=$year;   # for debugging
$rmon=sprintf("%02d",$mon+1);

$tmon=sprintf("%02d",$mon+1);

$tmday=sprintf("%02d",$mday);
$thour=sprintf("%02d",$hour);
$tmin=sprintf("%02d",$min);
$tsec=sprintf("%02d",$sec);
$timestamp="$year/$tmon/$tmday $thour:$tmin:$tsec";

if ($opt_w) {
$slash="\\";
$mkdir="mkdir";
$pathsep="\\";
$PATHsep=";";
$attrib="attrib ";
$cd="chdir /d";
$dsmexec="dsmc.exe";
}
else {
$slash="/";
$mkdir="mkdir -p";
$pathsep="/";
$PATHsep=":";
$dsmexec="dsmc";
}

die "No pathname for directory specified - use the -p flag\n" if !$opt_p;

$dirpath=$opt_p;
$dsmpath=$dirpath;

if ($opt_w) {
open(CHDIR,"chdir |");$mydir=<CHDIR>;close(CHDIR);chomp $mydir;
print "Current dir: $mydir\n" if $opt_d;
}


print "DSM_CONFIG $ENV{DSM_CONFIG}\n" if $opt_d;
print "DSM_DIR $ENV{DSM_DIR}\n" if $opt_d;


undef %dirfiles;



opendir(LIST,$opt_p);
while ($filename=readdir(LIST)) {
chomp $filename;
next if $filename eq ".";
next if $filename eq "..";
$dirfiles{$filename}=1;
print "local file $filename\n" if $opt_d;
}
closedir(LIST);

undef %dsmfiles;

$command_base="$dsmexec query backup ";

if ($opt_w) {
$command=$command_base . $dsmpath . $slash . $slash;
}
else {
$command=$command_base . $dsmpath . $slash;
}

print "Will issue query backup command $command\n" if $opt_d;

# IBM Tivoli Storage Manager
# Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 3.11
# (c) Copyright by IBM Corporation and other(s) 1990, 2004. All Rights
Reserved.

# Node Name: 17668XP
# Session established with server 17668XP_SERVER2: Windows
#   Server Version 5, Release 3, Level 0.0
#   Server date/time: 10/28/2005 16:20:21  Last access: 10/28/2005
16:20:02
#
# Size  Backup DateMgmt Class A/I File
#   ----- --- 
# 0  B  10/28/2005 16:20:02STANDARDA
\\17668xp\c$\edrive\reporting\test

$skip=1;


open(CMD,"$command|") || die "Can't issue $command: $!\n";

while ($line=<CMD>) {
chomp $line;
#   print "dsmc output $line\n" if $opt_d;
if ($skip) {
if ($line=~/---/) {
$skip=0;
}
}
else {

($size,$scale,$date,$time,$mgmt,$active,$fullpath)=split(" ",$line);
if ($opt_w) {
@words=split(/\\/,$fullpath);
}
else {
@words=split(/\//,$fullpath);
}
$lastword=$words[$#words];

print "dsm fullpath $fullpath, file: $lastword\n" if $opt_d;
$dsmfiles{$lastword}=1;
}
} # while
close(CMD);

foreach $filename (sort keys %dirfiles) {
if ($dsmfiles{$filename}) {
print "$filename is backed up\n" if $opt_d;
}
else {
print "$filename not backed up\n";
}
} # foreach


exit;
---


Re: TSM Database Size Growing Since Upgrading to 5.2.4.5

2006-01-25 Thread Richard Cowen
Graph your occupancy (num_files) by domain for each server and see if it
shows uniform growth.
If not, drill down to node occupancies (num_files), and so on.
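A starting point for that, as a hedged sketch (column names per the standard TSM OCCUPANCY and NODES tables; the admin ID and password are placeholders, and -dataonly may not exist on older admin clients):

```
dsmadmc -id=admin -pa=xxx -dataonly=yes \
  "select n.domain_name, sum(o.num_files) from occupancy o, nodes n \
   where o.node_name=n.node_name group by n.domain_name"

dsmadmc -id=admin -pa=xxx -dataonly=yes \
  "select node_name, sum(num_files) from occupancy group by node_name"
```

Capture the counts daily and graph them; whichever domain's (then node's) curve bends at the upgrade date is where to dig.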

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Andrew Carlson
Sent: Wednesday, January 25, 2006 2:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Database Size Growing Since Upgrading to
5.2.4.5

I might have thought that too if it wasn't all 3 instances. I graphed my
database sizes back to 2004 (I have them in a mysql database), and the
growth curve since 11/12/2005 is striking.

Jack Coats wrote:

No unusual new clients being added?  Retaining more data?
Change of include/exclude files? ... Just grasping for straws...

-Original Message-
Subject: [ADSM-L] TSM Database Size Growing Since Upgrading to 5.2.4.5

On November 11th, we upgraded our 3 tsm instances from 5.2.1 to
5.2.4.5.  Since then, our three databases have grown at an alarming
rate.  They have grown:

56GB to 75GB (33%, 19Gb growth)
46GB to 65GB (41%, 19Gb growth)
53GB to 63GB (18%, 10Gb growth)

Anyone have problems like this?  Thanks.
Privileged and Confidential: The information contained in this e-mail
message is intended only for the personal and confidential use of the
intended recipient(s). If the reader of this message is not the intended
recipient or an agent responsible for delivering it to the intended
recipient, you are hereby notified that you have received this document
in error and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately by e-mail, and
delete the original message.







Re: upgrade from v4.2.2.5 to v5.1.0.0 failed (AIX)

2002-06-20 Thread Richard Cowen

Did you apply the latest patches to 5.1.0.0? (5.1.0.2, I believe.)

-Original Message-
From: Gretchen L. Thiele [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 3:16 PM
To: [EMAIL PROTECTED]
Subject: upgrade from v4.2.2.5 to v5.1.0.0 failed (AIX)


I am now in the process of recovering to v4.2.2.5 from an attempted
upgrade to v5.1.0.0 (from the CD). All went well from an installation
point of view, but when I started the server I got:

ANRD iminit.c(820): ThreadId 0 Error 8 converting Group.Members to
Backup.Groups.

Called it in as a sev 1 over an hour ago, but haven't gotten a
callback yet. Since this is production, I can't wait anymore and
decided to recover.

Not sure what to do now except wait for support to call back. I
really need the move nodedata command...

Gretchen Thiele
Princeton University



Re: sun equivilent of rmdev/mkdev

2002-07-16 Thread Richard Cowen

Maybe you are looking for: add_drv / rm_drv ?

Maintenance Commands  add_drv(1M)

NAME
 add_drv - add a new device driver to the system

SYNOPSIS
 add_drv   [  -b basedir  ]   [   -c class_name   ][   -i
 'identify_name...'  ]  [ -m 'permission','...'  ]  [ -n ]  [
 -f ]  [ -v ] device_driver

DESCRIPTION
 The  add_drv  command is used to  inform  the  system  about
 newly installed device drivers.


Can anyone tell me the Sun equivalent of the rmdev/mkdev commands?



Re: Redirect Output in Windows dsmadmc

2002-07-25 Thread Richard Cowen

When will we be able to do this with a select statement and use a greater
than symbol under windows?

-Original Message-
From: Andy Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 24, 2002 8:16 PM
To: [EMAIL PROTECTED]
Subject: Re: Redirect Output in Windows dsmadmc


Consider it noted.

Don't know if this will help, but if the Admin client is done in batch
mode, then the redirection will work:

   dsmadmc -id=adminid -pa=x q se > "c:\test dir\qse.txt"

Regards,

Andy



Re: ANR8925W storage agent drive timeouts

2002-07-31 Thread Richard Cowen

I had a PMR on a similar issue some time ago (Jan 2002).
PMR 28921,469 - retention timeout for storage agent
I don't know how my 20 minutes got to be your 10 minutes, though.


Richard,
   Final answer is that it's hard-coded 20 minutes.  Hopefully, your
storage agent doesn't die very often, so it shouldn't be too much of
an issue.  If this is not the case, or if you need anything further,
please let us know.
   Thank you for using Tivoli support.
.
Yours truly,
Josh Davis - TSM/TDP Support

-Original Message-
From: David E Ehresman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 31, 2002 10:18 AM
To: [EMAIL PROTECTED]
Subject: ANR8925W


Is there a parm that controls the time that tsm waits for a storage
agent to respond before issuing the ANR8925W Drive DRIVE105 in library
3494 has not been confirmed for use by server MINERVA_SA for over 600
seconds. Drive will be reclaimed for use by others message?

David



Re: Eternal Data retention brainstorming.....

2002-08-15 Thread Richard Cowen

Do you use DRM and if so, do you copy all primary pools?
If so, snapshot a DB backup and have your offsite people box up all your
current offsite tapes plus this snapshot. Very carefully mark those boxes!
Back at the TSM server, mark all these volumes as destroyed and let them
be re-created as your backup stg pools processes run.  Of course, the
downsides are:

1) Until the new set of copy pool tapes is complete, you are exposed to a
primary volume failure.
   (You could of course set up a new server with the old database and a new
library and get data back...)

2) Re-creating that new set of tapes will take some time... maybe more than
24 hrs/day?

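The "mark as destroyed" step above might look like the following (a sketch; the pool names TAPEPOOL and COPYPOOL are placeholders, and running with preview first is prudent):

```
/* preview which primary volumes would be affected */
update volume * access=destroyed wherestgpool=TAPEPOOL -
  wherestatus=full,filling preview=yes

/* then do it for real, and let backup stgpool re-create the copies */
update volume * access=destroyed wherestgpool=TAPEPOOL -
  wherestatus=full,filling
backup stgpool TAPEPOOL COPYPOOL
```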
Maybe use the exercise as an excuse to change tape technology!


-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 3:31 PM
To: [EMAIL PROTECTED]
Subject: Eternal Data retention brainstorming.


Folks,
I have a theoretical question about retaining TSM data in an unusual
way. Let me explain.

Lets say legal comes to you and says that we need to keep all TSM
data backed up to a certain date, because of some legal investigation
(NAFTA, FBI, NSA, MIB, insert your favorite govt. entity here). They want a
snapshot saved of the data in TSM on that date.

Anybody out there ever encounter that yet?

On other backup products that are not as sophisticated as TSM, you
just pull the tapes, set them aside and use new tapes. With TSM and it's
database, it's not that simple. Pulling the tapes will do nothing, as the
data will still expire from the database.

The most obvious way to do this would be to:

1. Export the data to tapes and store them in a safe location till some day.
This looks like the best way on the surface, but with over 400TB of data in
our TSM environment, it would take a long time to get done and cost a lot if
they could not come up with a list of hosts/filespaces they are interested
in.

Assuming #1 is unfeasible, I'm exploring other more complex ideas.
These are rough and perhaps not thought through all the way, so feel free to
pick them apart.

2. Turn off expire inventory until the investigation is complete. This one
is really scary as who knows how long an investigation will take, and the
TSM databases and tape usage would grow very rapidly.

3. Run some 'as-yet-unknown' expire inventory option that will only expire
data backed up ~since~ the date in question.

4. Make a copy of the TSM database and save it. Set the reuse delay on all
the storage pools to 999, so that old data on tapes will not be
overwritten.
In this case, the volume of tapes would still grow (and need to
perhaps be stored out side of the tape libraries), but the database would
remain stable because data is still expiring on the real TSM database.
To restore the data from one of those old tapes would be complex, as
I would need to restore the database to a test host, connect it to a drive
and pretend to be the real TSM server and restore the older data.

5. Create new domains on the TSM server (duplicates of the current domains).
Move all the nodes to the new domains (using the 'update node ...
-domain=..' ). Change all the retentions for data in the old domains to
never expire. I'm kind of unclear on how the data would react to this. Would
it be re-bound to the new management classes in the new domain? If the
management classes were called the same, would the data expire anyways?

Any other great ideas out there on how to accomplish this?

Thanks,
Ben



Re: Documenting a Script in TSM : Issue Message

2002-09-06 Thread Richard Cowen

I believe you want:

ISSUE MESSAGE (Issue a Message From a Server Script)
Use this command with return code processing in a script to issue a message
from a server
script to determine where the problem is with a command in the script.

Syntax   ISSUE MESSAGE message_severity message_text

Parameters
 message_severity (Required)
 Specifies the severity of the message. The message severity indicators are:
 E Error. ANR1498E is displayed in the message text.
 I Information. ANR1496I is displayed in the message text.
 S Severe. ANR1499S is displayed in the message text.
 W Warning. ANR1497W is displayed in the message text
message_text (Required)
 Specifies the description of the message.
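Applied to a morning-report script, each comment header becomes an ISSUE MESSAGE line ahead of the command it describes, e.g. (a sketch):

```
/* Morning Report */
issue message i "Copy pool tapes to go offsite:"
query volume * access=readw,reado status=full,filling stgpool=copypool
issue message i "Today's database backup tape:"
query volhist type=dbbackup begindate=today
```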

-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 06, 2002 9:06 AM
To: [EMAIL PROTECTED]
Subject: Documenting a Script in TSM


There is probably a simple answer...

I am running a script from TSM on NT with several commands
that do SQL Queries and TSM commands each morning.

How can I do the equivalent of an 'Echo' or 'Type' command
in the scripts to put a text 'header' of sorts out before each
of the commands?

Example:   I do a 'RUN MORNING' from my web browser TSM
command line.  It executes the following commands:

/* Morning Report */
/**/
/* First, Copy Pool Tapes to go off site */
query volume * access=readw,reado -
  status=full,filling stgpool=copypool
/* */
/* Second, List TODAYs DataBase backup tape */
query volhist type=dbbackup begindate=today
/* */
/* Third, List the volumes ready to come on-site */
query volume * access=offsite status=empty
/* */
/* Fourth, List all the events in the last*/
/*   work day that should have been completed */
QUERY EVENT * * BEGINDATE=TODAY-1 -
  BEGINTIME=08:00 ENDDATE=TODAY ENDTIME=08:00 -
  EXCEPTIONSONLY=NO FORMAT=STANDARD
/* */
/* Fifth, Query the journal logs and the */
/*   database for their size, 80% is needs */
/*   some attention */
query log
query db
/* Library Volumes */
query libv
/* All Volumes in Catalog */
query vol
/* Space Usage - q occupancy */
query auditoccupancy
/* Storage Group Status */
query stg

And I just want to put what is in the comments out to the
display along with the results of the commands.

Suggestions?



Re: Data in offsite pool that is not in onsite pool

2002-09-06 Thread Richard Cowen

Things to search for in actlog that can cause a volume to become
unavailable:

ANR1175W,unavail # Volume DHB003 contains files which could not be
reclaimed.
ANR1180W,unavail # Access mode for volume xxx has been set to unavailable
due to file read or integrity errors.
ANR1410W,unavail # Access mode for volume xxx now set to unavailable.
ANR4247E,unavail # Audit command: Missing information for volume volume name
cannot be created.
ANR8313E,unavail # Volume volume name is not present in library library
name.
ANR8343I,unavail # Request request number for volume xxx canceled (PERMANENT)
by administrator name.
ANR8356E,unavail # Incorrect volume xxx was mounted instead of volume
expected yyy in library
ANR8359E,unavail # Media fault detected on device www volume xxx in drive
yyy of library zzz.
ANR8782E,unavail # Volume yyy could not be accessed by library xxx.

-Original Message-
From: Steve Hicks [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 06, 2002 9:27 AM
To: [EMAIL PROTECTED]
Subject: Re: Data in offsite pool that is not in onsite pool


First off, thanks to all who responded to my e-mail. I think Ed has hit
the nail on the head here. While we haven't been cursed with that many bad
tapes (that I'm aware of), we have had a significant amount of trouble
with our tape drives. Something I noticed while attempting to troubleshoot
this problem was that the list of tapes marked Unavailable was pretty long
(32 tapes). They are a mixture of onsite and offsite tapes. Obviously, if
TSM can't access the tape, it can't use the data on it in the reclamation
process. Last Friday I went through and did an "upd vol VOLSER
access=readwrite" on the tapes marked unavailable; now the list is back.
What is putting these tapes in an unavailable state? I will try the
restore vol command today and I'll report back what kind of luck I find.
Steve Hicks
CIS System Infrastructure Lead /
AIX Administrator
Knoxville Utilities Board





Charles Anderson [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
09/04/2002 04:54 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Data in offsite pool that is not in onsite pool


Steve,

We recently had a couple of tapes that either went bad, or had copies
of files on them marked as bad.  What I would do is look at the actlog
and see what vols reclaimation isn't able to continue on, and then do a
restore vol VOLSER preview=yes on that volume. Most of the volumes
that will come up as being needed will be the ones that you see in the
actlog now as being needed, but are offsite.  We brought the tapes from
offsite, checked them in, restored vol without the preview option, and
then checked the offsite vols back out.  It took something like 80 tapes
to restore 5 onsite tapes with bad data.  It was a lot of fun!

-ed


 [EMAIL PROTECTED] 2002/09/03 08:57:30 AM 
Our offsite pool reclamation process is calling for tapes that are listed
as being in the vault (and they actually are). Obviously reclamation can
not take place for these volumes, as they are not onsite. First, what is
the best way to go about fixing this? Second, what could have gotten our
pools out of sync? Any help is much appreciated.
Thanks,
Steve Hicks
CIS System Infrastructure Lead /
AIX Administrator
Knoxville Utilities Board



Re: 3494 Utilization, and other Reporting Concerns

2002-09-09 Thread Richard Cowen

If the List doesn't mind occasional vendor self-interested posts, I would
like to offer one for CNT's TSM Reporting Service.  This is just one aspect
of a more encompassing offering of TSM consulting and implementation
services, but it may be germane to this thread.  If we get a volunteer
reviewer, I would be happy to send sample HTML's



-Original Message-
From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 09, 2002 10:45 AM
To: [EMAIL PROTECTED]
Subject: Re: 3494 Utilization


TSMManager doesn't do predictions of when you're going to run out of space.
Servergraph/TSM is better for serious capacity planning.

Should some impartial third party review the various TSM monitoring products
and send the list a write-up?
Paul Seay? Dwight Cook? Wanda Prather? Somebody?

We, of course, have detailed feature-by-feature comparisons, but can't
really claim to be impartial  ;-}
-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Cahill, Ricky
 Sent: Monday, September 09, 2002 10:01 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 Take a look at www.tsmmanager.com this will do exactly what you
 want and in
 a graphical format, you can download it and use the evaluation license to
 get the info you need. To be honest after now using this for a couple of
 months I can't see how anyone could do without it. It's especially good at
 doing nice pretty reports for the management to get them off your back and
 give them more paper to pass around in meetings ;)

.Rikk

 -Original Message-
 From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
 Sent: 09 September 2002 13:44
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 In the long run, we are attempting to quantify exactly how busy our
 ATLs/drives are over time for a number of reasons -- capacity planning,
 adjusting schedules to better utilize resources, and possibly even justify
 the purchase of new tape drives.

 At this point I have been asked to simply come up with a minutes or hours
 per 24 hour period any particular drive is in use.

 A query mount every minute might work, but it just isn't a good solution
 for two reasons -- for clients writing directly to tape, the mounted tape
 won't show up in query mount, and most of these servers already have an
 extensive number of scripts accessing them periodically for various
 monitoring and reporting functions - I hesitate to add any more to them.

 My last resort is going to be to extract the activity log once every 24
 hours and examine the logs and match the mount/dismounts by drive and
 attempt to calculate usage that way if there isn't something better.  With
 the difficulty in matching mounts to dismounts, I'm not entirely convinced
 it's worth the trouble.
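The matching exercise Dale describes can be sketched; a hedged Python example, assuming mount/dismount events have already been extracted from the actlog into (drive, event, timestamp) tuples. The event names and the pairing rule (each mount matched to the next dismount on the same drive) are assumptions, since matching mounts to dismounts is exactly the hard part he notes:

```python
from datetime import datetime

def drive_minutes(events):
    """events: chronological (drive, 'MOUNT'|'DISMOUNT', 'YYYY-MM-DD HH:MM:SS')
    tuples.  Pairs each mount with the next dismount on the same drive and
    accumulates in-use minutes per drive over the reporting period."""
    fmt = "%Y-%m-%d %H:%M:%S"
    mounted = {}   # drive -> time of the still-open mount
    minutes = {}   # drive -> accumulated in-use minutes
    for drive, event, ts in events:
        t = datetime.strptime(ts, fmt)
        if event == "MOUNT":
            mounted[drive] = t
        elif event == "DISMOUNT" and drive in mounted:
            busy = (t - mounted.pop(drive)).total_seconds() / 60.0
            minutes[drive] = minutes.get(drive, 0.0) + busy
    return minutes
```

A mount still open at the end of the 24-hour window is simply dropped here; a real report would probably close it at the window boundary instead.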


 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
 Sent: Friday, September 06, 2002 4:55 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 If you use the atape device driver, you (supposedly) can turn on logging
 within it.  Then every dismount writes a record of how many bytes were
 read/written during that mount.
 Never tried it ... if you can get it working, let me know how,
 please! We'd
 love to be able to do that.

 Right now we CAN show you library-as-a-whole data rates, just by layering
 all the tape-drive-writing tasks (migration, backup stgpool,
 backup DB, etc)
 one atop the other minute by minute.  Maybe that's enough - why
 do you need
 drive-by-drive data rates?

 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Jolliff, Dale
  Sent: Friday, September 06, 2002 2:16 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  Paul said that Servergraph has this functionality - According to our
  hardware guys, the 3494 library has some rudimentary mount statistics
  available.
 
  I'm going to be looking into both of those options.
 
  Surely someone has already invented this wheel when trying to
 justify more
  tape drives - other than pointing to the smoke coming from the
 drives and
  suggesting that they are slightly overused
 
 
 
 
 
  -Original Message-
  From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
  Sent: Friday, September 06, 2002 6:45 AM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  Good question, never actually thought about it...
  I would think that the sum of the difference between mount & dismount
  times for each drive...
  OH THANKS. now I won't be able to sleep until I code some select
  statement to do this :-(
  if I figure it out, I'll pass it along
 
  Dwight
 
 
 
  -Original Message-
  From: Jolliff, Dale 

Re: OK IBM _ ENOUGH ALREADY with the IBM/Tivoli TSM website -

2002-10-23 Thread Richard Cowen
I downloaded the AIX server v5151 yesterday, I can navigate the site today.

ftp index.storsys.ibm.com
Connected to serviceb.boulder.ibm.com.
220 serviceb.boulder.ibm.com FTP server (Version wu-2.6.2(1) Mon Dec 3
15:26:19 MST 2001) ready
User (serviceb.boulder.ibm.com:(none)): ftp
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd tivoli-storage-management
250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 88
-rw-rw-r--   1 18125700 200  490 Jun 28 2001  .message
-rw-r-x---   1 0200   71 Jan 24 2000  .profile
d--x--s--x   2 0200  512 Jan 24 2000  bin
d--x--s--x   2 0200  512 Jan 24 2000  dev
drwxrwsr-x   3 18125700 200  512 Jun 26 2001  devices
d--x--s--x   2 0200  512 Jan 24 2000  lib
drwxrwsr-x   9 18125700 200  512 Jul 31 2001  maintenance
drwxrwsr-x   9 18125700 200  512 Apr  5 2002  patches
drwxrwsr-x   2 18125700 200  512 Nov 29 2000  plus
drwxrwsr-x   2 18125700 200 8192 Oct 22 17:05 swap
226 Transfer complete.
ftp: 628 bytes received in 0.00Seconds 628000.00Kbytes/sec.
ftp> cd patches
250 CWD command successful.
ftp> cd client
250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 32
drwxrwsr-x  10 18125700 200  512 Mar 30 2001  v3r7
drwxrwsr-x  12 18125700 200  512 Jun 11 10:49 v4r1
drwxrwsr-x  13 217  200  512 May 31 06:05 v4r2
drwxrwsr-x   9 18125700 200  512 Oct 18 11:03 v5r1
226 Transfer complete.
ftp: 250 bytes received in 0.01Seconds 25.00Kbytes/sec.
ftp> cd v5r1
250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 80832
-rw-rw-r--   1 18125700 200  41166639 Oct 17 17:04 IP22546_02.exe
-rw-rw-r--   1 18125700 20091223 Oct 17 17:04
IP22546_02_READ1STC.TXT
-rw-rw-r--   1 18125700 200 2547 Oct 17 17:04 IP22546_README.FTP
226 Transfer complete.
ftp: 237 bytes received in 0.00Seconds 237000.00Kbytes/sec.


-Original Message-
From: Alan Davenport [mailto:Alan.Davenport;SELECTIVE.COM]
Sent: Wednesday, October 23, 2002 2:59 PM
To: [EMAIL PROTECTED]
Subject: Re: OK IBM _ ENOUGH ALREADY with the IBM/Tivoli TSM website -


Amen. I used to get the clients from the Boulder FTP site and even that has
not worked for a few weeks now. I'm sure glad I kept the clients I
downloaded on my local hard drive else I would not have been able to get
anything done this week!



Request for large Oracle database (2 TB or larger) running RMAN/TDP, backup throughputs

2002-10-25 Thread Richard Cowen
Full and Incremental examples would be nice.
If you decide to contribute, please include OS and platform models for
Oracle host and TSM server, with network and library specs, number of
streams, etc.
Thanks in advance for any information.



Re: Expiration on a Test server

2008-08-15 Thread Richard Cowen
I did not interview much. The install date is in the status trend report file.

-Original Message-
From: Kauffman, Tom [EMAIL PROTECTED]
Sent: Friday, August 15, 2008 11:34 AM
To: ADSM-L@VM.MARIST.EDU ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Expiration on a Test server

FWIW, we use multiple networks for TSM backups/archives - and we use DNS.

My TSM server is columbia; it also answers to columbia-pri (our 'private' 
network for SAP only); columbia-adm (administrative); columbia-bu1, 
columbia-bu2, columbia-bu3, and columbia-bu4 (gigabit dedicated backup 
networks). In addition we use CNAMES in our DNS that point to the columbia 
names. All our DSM.SYS entries reference the cnames, so we can move to a new 
TSM server any time we want just by changing the cname entries in DNS.

The primary network uses the 10.x.y.z address range; all the others are from 
192.168.y.z.

Tom

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Friday, August 15, 2008 11:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Expiration on a Test server

We don't use DNS references. We use specific IP addresses to force
communications onto a private connection/subnet between the servers.

Sounds like I need to go with the "unplug from the network" method before
I start the server up and run the expire.

Thanks for all the feedback.  I am glad I asked.



Richard Rhodes [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
08/15/2008 10:54 AM
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Expiration on a Test server






This is exactly what I do when testing a new TSM upgrade.  I restore a DB
to our test server, but before
starting it I put bogus entries in the /etc/hosts file for each of our
production tsm instances and AIX servers, including
the library manager (we have dns aliases for each TSM instance).  That way
the test system is completely
 isolated with no chance of trying to  contact a production server.

TEST BY TRYING TO PING the production servers/instances!
We use AIX.  One thing to watch out for is that the default lookup order
is DNS first, hosts file second (at least that's how our systems were).  I
had to change the search order in the /etc/netsvc.conf file to do local
search first, then DNS.
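A hedged sketch of the netsvc.conf change described above (keyword spelling can vary by AIX level, so treat this as an example rather than the exact line used):

```
# /etc/netsvc.conf -- AIX name-resolution order
# "local" = consult /etc/hosts first, "bind" = fall back to DNS
hosts = local, bind
```

With this in place, the bogus /etc/hosts entries win even when DNS is reachable.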

Rick

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 08/15/2008
10:37:46 AM:

 When we did our upgrade testing we did not allow the test server to talk
 to the production server by creating a false entry in the hosts file.
 We also did not allow our test server to connect to the libraries.  This
 may be the paranoid approach, but we did not break anything.



-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.
CONFIDENTIALITY NOTICE:  This email and any attachments are for the 
exclusive and confidential use of the intended recipient.  If you are not
the intended recipient, please do not read, distribute or take action in 
reliance upon this message. If you have received this in error, please 
notify us immediately by return email and promptly delete this message 
and its attachments from your computer system. We do not waive  
attorney-client or work product privilege by the transmission of this
message.


Re: TDP for Databases 6.3 -- Documentation

2011-10-25 Thread Richard Cowen
It had not been working, and just now started working again.
Richard Cowen

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Sims
Sent: Tuesday, October 25, 2011 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for Databases 6.3 -- Documentation

On Oct 25, 2011, at 7:26 AM, Sascha Askani wrote:

 Del,
 
 thanks for your reply. The link you provided does not work for me, though...
 
 Bad Gateway
 The proxy server received an invalid response from an upstream server.
 

Indeed...  Going to
  
http://www.ibm.com/developerworks/wikis/display/tivolidoccentral/Tivoli+Storage+Manager
and clicking on either the 6.2 or 6.3 Information Center links yields that 
error.
The IBM Web people aren't testing what they put into production.

 Richard Sims


Re: Dev Class type FILE with multiple directories

2011-11-11 Thread Richard Cowen
A colleague just sent me this link:

See DCF 1497567:
Allocation and use of scratch and predefined FILE storage pool volumes
http://www-01.ibm.com/support/docview.wss?uid=swg21497567

By the way, keep in mind that at least on some appliances, the NFS-mounted
filespace used/remaining space attributes apply to the entire appliance
filesystem, no matter which directory you query.  As the platforms improve,
some sort of quota system will probably be provided, although whether it
would apply to pre-compressed/deduped or post-compressed/deduped numbers is
a good question.

How to allocate directories among devclasses is becoming a more interesting 
topic as NFS-mounted dedup appliances become more prevalent.  Some resources 
are tied to devclasses, such as mountpoints, that would require some 
consideration.

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Rhodes
Sent: Friday, November 11, 2011 3:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Dev Class type FILE with multiple directories

If you have a dev class of type FILE with multiple directories, how does
TSM determine which dir to create a new volumes in?

I've checked everywhere I can think of and the most I can find is comments
that TSM is free to  pick any dir to create a new scratch vol in. Assuming
lots of free space in multiple directories specified for one dev class
type file,  how does TSM pick which dir to create a file dev in?

What I'm interested in is one of those what if questions.  If you have a
file dev class (NFS) on a dedup box and its getting full.  So you purchase
a new box, set it up, and add it to the devclass as an extra directory.
You now have two directories.  If one is full TSM will use the other, but
if both have freespace I'm curious how TSM decides which to use.

Just curious . . .

Rick




Re: UN-mixing LTO-4 and LTO-5

2012-06-12 Thread Richard Cowen
I will be out-of-office from June 12th to June 15th.
I will reply to emails when I can.

Richard Cowen
CPP Associates
Cell 508-873-9468
Land 978-456-9968


Re: TSM for Virtual Environments - Reporting

2012-09-28 Thread Richard Cowen
Shawn,

Are you seeing these messages?

ANE4146I Starting Full VM backup of Virtual Machine 'vmname'
ANE4147I Successful Full VM backup of Virtual Machine 'vmname'
ANE4148E Full VM backup of Virtual Machine 'vmname' failed with RC rc

Do they have session/process numbers?

There are a host of other VM related messages that should help describe the
status...

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Shawn Drew
Sent: Friday, September 28, 2012 4:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM for Virtual Environments - Reporting

Quick question, may be a long answer.
How is everyone reporting on success/failures for Full VM backups using TSM
for VE.  My reports are grabbing lines from the actlog and it is uugly.

2012-09-25  11:40:27ANE4142I   virtual machine NODE1 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4142I   virtual machine NODE2 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4143I Total number of virtual machines
failed: 0



Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for the
addressees and is confidential. If you receive this message in error, please
delete it and immediately notify the sender. Any use not in accord with its
purpose, any dissemination or disclosure, either whole or partial, is
prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that
certain functions and services for BNP Paribas may be performed by BNP
Paribas RCC, Inc.


Re: Monitor open volumes for a file deviceclass session

2012-12-05 Thread Richard Cowen
Keith,

To do this accurately, it is necessary to build a kind of state map of
sessions and processes and file/tape opens and closes.
In the actlog, file/tape opens and closes are recorded with the volume name
and linked to the session/process number that originated the request.  (This
session/process number can be tricky to trace back to a common node or
process command. Proxies, LanFree, NDMP, etc. all add complexity.)

Look for ANR0510I and ANR0511I, ANR0513I and ANR0514I messages, selecting an
appropriate period (24 hours?)
Keep the records sorted by session/process number and date and time.
Every time a session/process has a volume open, increment the open count and
check for a new high-water mark.
Every time a session/process has a volume close, decrement the open count.
I suggest Perl and csv formatted actlog selects.
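A minimal sketch of that state map (Python for illustration, though the post suggests Perl; the event tuples stand in for csv-formatted actlog rows, and treating ANR0510I/ANR0513I as opens and ANR0511I/ANR0514I as closes follows the message numbers named above):

```python
from collections import defaultdict

OPEN_MSGS  = {"ANR0510I", "ANR0513I"}   # volume opened by a session/process
CLOSE_MSGS = {"ANR0511I", "ANR0514I"}   # volume closed

def peak_open_volumes(events):
    """events: (date_time, session_or_process, msgno) tuples, sorted by
    session/process number and date/time.  Returns the highest number of
    volumes each session/process had open at once."""
    open_count = defaultdict(int)
    peak = defaultdict(int)
    for _when, owner, msgno in events:
        if msgno in OPEN_MSGS:
            open_count[owner] += 1
            peak[owner] = max(peak[owner], open_count[owner])
        elif msgno in CLOSE_MSGS:
            open_count[owner] = max(0, open_count[owner] - 1)
    return dict(peak)
```

The per-owner peak is exactly the number to compare against NUMOPENVOLSALLOWED.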

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Arbogast, Warren K
Sent: Tuesday, December 04, 2012 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Monitor open volumes for a file deviceclass session

To estimate the needed value for the 'numopenvolumesallowed' option for a
deduplicated file deviceclass pool, the 6.3 Server Admin Reference Guide
says to "Monitor client sessions and server processes. Note the highest
number of volumes open for a single session or process. Increase the setting
of NUMOPENVOLSALLOWED if the highest number of open volumes is equal to the
value specified by NUMOPENVOLSALLOWED." (page 1388)

How does one see and correlate open volumes with sessions or processes on
such a pool from within TSM? We're at server version 6.3.2 running on RHEL
5.

Thank you,
Keith Arbogast


Re: ANR8939E The adapter for tape drive (\\.\TapeXX) cannot handle the block size needed to use the volume

2013-03-21 Thread Richard Cowen
Charles,

Maybe this link will help - it suggests 7.0.0.8 HBA firmware (a little newer
than yours).

http://www-01.ibm.com/support/docview.wss?uid=swg21500578

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Welton, Charles
Sent: Wednesday, March 20, 2013 8:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ANR8939E The adapter for tape drive (\\.\TapeXX) cannot 
handle the block size needed to use the volume

Hello:

Before I begin, here are the hardware specs of our small TSM environment:

TSM SERVER VERSION: 5.4.2.0
TSM SERVER OS VERSION: Windows Server 2003 R2 Standard Edition
TSM SERVER TYPE: IBM System x3650 (MEMORY: 4096 MB; CPU: Quad 2.33 GHz Intel Xeon)
PHYSICAL TAPE LIBRARY: IBM 3576 w/ four tape drives (SCSI attached to TSM server)
VIRTUAL TAPE LIBRARY: DL210 (fiber attached to TSM server)
SCSI ADAPTER: Adaptec SCSI Card 29320LPE - Ultra320 SCSI (driver version 7.0.0.6)

I'm not sure what else needs to be mentioned from a hardware configuration 
perspective.  But, today we had someone replace some bad memory in our TSM 
server.  When the system was back up and TSM was up, our standard operating 
procedure is to test both virtual and physical tape mounts by running simple 
MOVE DATA commands.  When we tested the virtual tape library, no problems at 
all, but when we tested the physical tape library, we received and continue to 
receive the following error for all four tape drives:

03/20/2013 18:18:20   ANR8939E The adapter for tape drive KSFS0203-DRIVE03
   (\\.\Tape67) cannot handle the block size needed to use
   the volume. (SESSION: 341, PROCESS: 3)

HOWEVER, when we run an incremental TSM database backup, which uses a new 
scratch tape, it works without issue.  Absolutely NOTHING has changed from a 
software/firmware perspective in our TSM environment.  In troubleshooting the 
problem, we completely removed the physical tape library and tape drives from 
an OS and TSM perspective, reconfigured them, but still continue to receive the 
errors.

Has anyone run into this issue before?  Does anyone have any suggestions?

Thanks...


Charles Welton
This email contains information which may be PROPRIETARY IN NATURE OR OTHERWISE 
PROTECTED BY LAW FROM DISCLOSURE and is intended only for the use of the 
addresses(s) named above.  If you have received this email in error, please 
contact the sender immediately.


Re: Collocation anomaly report

2013-04-16 Thread Richard Cowen
Grant,

If you care to script, do daily:

Select new volumes assigned to pools in last 24 hours.
(Maybe search actlog for ANR1340I messages.)

Filter for volumes in pools with collocate (NODE or GROUP).

For each volume check volumeusage for any nodes that do not belong.
Assume the first node belongs.

Report those volumes as candidates for move nodedata.
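The daily check above might be sketched like this (a hedged Python outline; the input dictionaries stand in for parsed output of the selects and the ANR1340I search, and "the first node belongs" is the assumption stated above):

```python
def collocation_candidates(new_volumes, pool_collocation, volume_usage, node_group):
    """new_volumes: volumes assigned to pools in the last 24 hours.
    pool_collocation: {volume: 'NODE' | 'GROUP' | None} from its pool.
    volume_usage: {volume: [node, ...]} in order of first appearance.
    node_group: {node: collocation group name or None}.
    Returns volumes holding data from nodes that do not belong."""
    candidates = []
    for vol in new_volumes:
        mode = pool_collocation.get(vol)
        nodes = volume_usage.get(vol, [])
        if not nodes or mode not in ("NODE", "GROUP"):
            continue
        first = nodes[0]                     # assume the first node belongs
        if mode == "NODE":
            stray = any(n != first for n in nodes[1:])
        else:                                # GROUP: all nodes share one group
            g = node_group.get(first)
            stray = any(node_group.get(n) != g for n in nodes[1:])
        if stray:
            candidates.append(vol)           # candidate for move nodedata
    return candidates
```

Anything this returns is a volume where collocation fell back to "best effort" and a move nodedata would restore the segmentation.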

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Grant 
Street
Sent: Tuesday, April 16, 2013 7:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Collocation anomaly report

Hello

We use collocation to segment data into collocation groups and nodes, but 
recently found that collocation is on a best efforts basis and will use any 
tape if there is not enough space.

I understand the theory behind this but it does not help with compliance
requirements. I know that we should make sure that there are always enough
free tapes, but without a way to detect violations we have no proof that we
are in compliance.

I have created an RFE
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=33537
Please vote if you agree :-)

While I wait a more than two years for this to be implemented, I was wondering 
if anyone had a way to report on any Collocation anomalies?
I created the following but still not complete enough

select volume_name, count(volume_name) as Nodes_per_volume from (select
unique volume_name, volumeusage.node_name from volumeusage, nodes where
nodes.node_name = volumeusage.node_name and nodes.collocgroup_name is null)
group by (volume_name) having count(volume_name) > 1

and

select unique volume_name, count(volume_name) as Groups_per_volume
from (select unique volume_name, collocgroup_name from volumeusage, nodes
where nodes.node_name = volumeusage.node_name) group by (volume_name)
having count(volume_name) > 1

Thanks in advance

Grant


Re: TDP Oracle issue: thousands of near simultaneous sessions per node

2013-05-29 Thread Richard Cowen
Rick,

I have seen something similar recently.  More than 1 million client sessions in 
24 hours from a single DP Oracle node.  My 32-bit perl script ran out of 
memory- a 64-bit version handled it...
I haven't investigated any further, just a lot of ANR0406I/ANR0403I pairs. A 
few hundred sessions actually sent data.
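For what it's worth, the memory blow-up can be avoided by streaming the exported actlog line by line instead of slurping it; a hedged Python sketch (the one-message-per-line format with the msgno as the first token is an assumption about the export):

```python
def count_session_messages(lines):
    """Stream actlog lines and tally ANR0406I (session started) versus
    ANR0403I (session ended) without holding the whole log in memory."""
    starts = ends = 0
    for line in lines:
        token = line.split(None, 1)[0] if line.strip() else ""
        if token == "ANR0406I":
            starts += 1
        elif token == "ANR0403I":
            ends += 1
    return starts, ends
```

Because `lines` can be an open file handle, a million-session day costs a few counters rather than a process-sized array.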

Richard


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick 
Adamson
Sent: Wednesday, May 29, 2013 2:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP Oracle issue: thousands of near simultaneous sessions 
per node

Ricky,
Obviously there are multiple scripts each scheduled at different times, but 
appears that each one allocates one channel.
The TSM server has max sessions set at 300, and there have been no 
modifications to the defaults for resource utilization on the client.

Thanks,
~Rick


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Wednesday, May 29, 2013 1:03 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP Oracle issue: thousands of near simultaneous sessions 
per node

Rick,
How many channels do the DBAs have set in RMAN to use for the backups? Also,
how many max sessions are set up on the server node, and what is
resourceutilization set to in the dsm.sys file on the client?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick 
Adamson
Sent: Wednesday, May 29, 2013 11:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TDP Oracle issue: thousands of near simultaneous sessions per 
node

Looking for some direction...

I recently inherited a TSM server along with many TDP for Oracle nodes.
The TSM server is 5.4 (upgrade on the horizon).
The Oracle clients run on mostly AIX, but some Suse Linux, the TDP client is 
ver.5.5.x and the BA clients are primarily 6.2.

All of the database and log backups are initiated by CRON, and when they
start, a single Oracle node will start and stop literally thousands of
sessions on the TSM server within a minute or two.

Oracle and RMAN are new to me and I have talked with our DBA team regarding the 
problem but to no surprise they imply it is a TSM issue.
When this occurs the obvious impact to the TSM server is devastating, 
performance is crushed and the console churn is a blur.

In reading the Oracle and TDP documentation I am suspicious of the RMAN scripts 
as the cause, but before I go testing changes or opening a support case, I 
thought I would ask the group.

The RMAN scripts do perform several crosschecks during the process and I 
suspect that each backup piece it queries for starts and stops a client session 
on the TSM server.

Has anyone seen this issue before? Is there any way to get the TDP client to 
perform all operations during a single, or minimal,  sessions?
I have included the RMAN script I suspect below.

All feedback welcome,
~Rick

===

rman target backup/sqbkup3 catalog rman10/sqbkup3@rcat1_catalog.world <<EOF > /opt/oracle/log/rman_full_csp1.log 2>&1
run { allocate channel tdp1 type 'sbt_tape'
parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.csp1.opt)';
setlimit channel tdp1 kbytes 1073741824 maxopenfiles 32 readrate 200;
sql 'alter system archive log current';
resync catalog;
change archivelog all validate;
backup database include current controlfile format 'df_%t_%s_%p';
backup archivelog all delete input format 'al_%t_%s_%p';
release channel tdp1; }
EOF

rman <<EOF > /opt/oracle/log/backup_delete_csp1.log 2>&1
connect target backup/sqbkup3
connect catalog rman10/sqbkup3@rcat1_catalog.world
allocate channel for delete type 'sbt_tape'
parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.csp1.opt)';
configure retention policy to recovery window of 14 days;
list backup of database;
list backup of database summary;
list backup of archivelog all summary;
crosscheck backup;
delete noprompt obsolete;
delete noprompt expired backup;
crosscheck backup;
release channel;
allocate channel for delete type disk;
crosscheck backup;
delete noprompt obsolete;
delete noprompt expired backup;
crosscheck backup;
list backup of database;
EOF

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
CONFIDENTIALITY NOTICE: If you have received this email in error, please 
immediately notify the sender by e-mail at the address shown.This email 
transmission may contain confidential information.This information is intended 
only for the use of the individual(s) or entity to whom it is intended even if 
addressed incorrectly. Please delete it from your files if you are not the 
intended recipient. Thank you for your compliance.


Re: So long, and thank you...

2015-04-04 Thread Richard Cowen
Wanda,
I thought you were going to say, So long, and thanks for all the fish.
It is good to hear there is a next non-IT-support chapter in life.
The collected wisdom of the list has just taken a big hit.
I don't think it's a requirement to be working to stay subscribed to the list, 
in case you can't  shake the habit.
May your days be bug-free, and all your releases be clean!

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Friday, April 03, 2015 5:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] So long, and thank you...

This is my last day at ICF, and the first day of my retirement!

I'm moving on to the next non-IT-support chapter in life.


I can't speak highly enough of the people who give of their time and expertise 
on this list.

I've learned most of what I know about TSM here.


You all are an amazing group, and it has been a  wonderful experience in 
world-wide collaboration.


Thank you all!


Best wishes,

Wanda


Re: Bring back TSM Administrator's Guide

2015-12-14 Thread Richard Cowen
I, too, would ask whoever made this decision to rethink it.
I have Guides I have used since 1999:
v37
v51
v52
v53
v54
v55
v61
v62
v63
v71

Why stop publishing them (particularly in pdf form?)
Richard

PS. Kudos to Adobe for creating a truly "portable" format and retaining
backward compatibility all these years.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Erwann 
SIMON
Sent: Monday, December 14, 2015 4:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Bring back TSM Administrator's Guide

I remember the time when Andy Raibeck was advising to read it cover to cover...

Yes, please, bring the Admin Guide back.

--
Best regards / Cordialement / مع تحياتي
Erwann SIMON

- Mail original -
De: "Roger Deschner" 
À: ADSM-L@VM.MARIST.EDU
Envoyé: Samedi 12 Décembre 2015 02:39:28
Objet: [ADSM-L] Bring back TSM Administrator's Guide

A great book is gone. The TSM Administrator's Guide has been obsoleted as of 
v7.1.3. Its priceless collection of how-to information has been scattered to 
the winds, which basically means it is lost. A pity, because this book had been 
a model of complete, well-organized, documentation.

The "Solution Guides" are suitable only for planning a new TSM installation, 
and do not address existing TSM sites. Furthermore, they are much too narrow, 
and do not cover real-world existing TSM customers.
For instance, we use a blended disk and tape solution (D2D2D2T) to create a 
good working compromise between faster restore and storage cost.

Following links to topics in the online Information Center is a haphazard 
process at best, which is never repeatable. There is no Index or Table of 
Contents for the online doc - so you cannot even see what information is there. 
Unless you actually log in, there is no way to even leave a "trail of 
breadcrumbs". Browser bookmarks are useless here, due to IBM's habit of 
changing URLs frequently. This is an extremely inefficient use of my time in 
finding out how to do something in TSM.

Search is not an acceptable replacement for good organization. Search is 
necessary, but it cannot stand alone.

Building a "collection" is not an answer to this requirement. It still lacks a 
coherent Index or Table of Contents, so once my collection gets sizeable, it is 
also unuseable. And with each successive version, I will be required to rebuild 
my collection from scratch all over again.

Despite the fact that it had become fairly large, I humbly ask that the 
Administrator's Guide be published again, as a single PDF, in v7.1.4.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Select from Occupancy table is blank for Directory pool

2015-12-23 Thread Richard Cowen
Try REPORTING_MB.
Also

select colname from syscat.columns where tabname='OCCUPANCY' and
tabschema='TSMDB1'
Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Arbogast, Warren K
Sent: Wednesday, December 23, 2015 3:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Select from Occupancy table is blank for Directory pool

Hi Stephen,
Thank you for sharing that idea. It didn’t quite reveal the mystery of the 
Occupancy table yet. I wonder whether new tables or new attributes have been 
added for directory container pools?

tsm: TSMBL01>select sum(logical_mb) from occupancy where node_name='ESAPPPJ10'


Unnamed[1]
--
  


tsm: TSMBL01>







On 12/23/15, 15:01, "ADSM: Dist Stor Manager on behalf of Stefan Folkerts" 
 wrote:

>Hi Keith,
>Try this:
>select sum(logical_mb) from occupancy where node_name='ESAPPJ10'
>
>On Wed, Dec 23, 2015 at 8:50 PM, Arbogast, Warren K  wrote:
>
>> We are running TSM/EE 7.1.4 on RHEL6, and are actively migrating 
>> backups on other primary pools to directory container pools, which 
>> are named ‘DEDUP'.
>>
>> I need to count MB in the DEDUP pools by node for all pertinent nodes.
>>
>> When I run ‘q occupancy ’ from an Admin command line, the 
>> logical_mb in the DEDUP pool for that node is displayed, like this:
>>
>>
>> tsm: TSMBL01>q occ esappj10
>>
>>
>> Node Name   Type  Filespace   FSID  Storage   Number of Physical
>> Logical
>>
>>   Name  Pool Name FilesSpace
>>   Space
>>
>> Occupied
>>Occupied
>>
>> (MB)
>>(MB)
>>
>> --    --    --  ---  
>> ---
>> ---
>>
>> ESAPPJ10Bkup  /  3  DEDUP19,011-
>>  799.10
>>
>> ESAPPJ10Bkup  /boot  2  DEDUP64-
>>  244.57
>>
>> ESAPPJ10Bkup  /home  4  DEDUP10,462-
>>  415.37
>>
>> ESAPPJ10Bkup  /opt   5  DEDUP27,215-
>>7,545.83
>>
>> ESAPPJ10Bkup  /usr   6  DEDUP86,980-
>>5,377.92
>>
>> ESAPPJ10Bkup  /var   1  DEDUP10,042-
>>4,102.47
>>
>>
>>
>> But, when I run a select statement on the Occupancy table for the 
>> same node, the output is blank, like this:
>>
>>
>> tsm: TSMBL01>select logical_mb from occupancy where node_name='ESAPPJ10'
>>
>>
>>LOGICAL_MB
>>
>> -
>>
>>
>>
>> Surely there is a way to do this. Can someone tell me how to get that 
>> data from the Occupancy table, or another table, if I am looking in 
>> the wrong place?
>>
>> Thank you,
>> Keith Arbogast
>> Indiana University
>>
>>
>>
>>
>>


Re: Bring back TSM Administrator's Guide

2016-02-03 Thread Richard Cowen
I could not have expressed my own opinions better- thank you Roger.
I still remember being skeptical going from the IBM Redbooks (real paper) to 
PDFs.
But it's not the same leap going to online "information centers".  With each 
new release, I try again, but I am always disappointed.  Maybe it's an "age 
thing."

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 03, 2016 9:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Bring back TSM Administrator's Guide

I'm glad that the last PDFs of this valuable book at Version 7.1.1 are being 
made more widely available. I have already downloaded that PDF to the USB thumb 
drive I carry around everywhere, for access regardless of the network 
condition. This could be in a disaster recovery scenario, or at a client node 
site where I might need to create a new account and password just to access the 
net and read the doc. The fact that it was deemed necessary to make this PDF 
more widely available points out the problem.

If you are hinting that you are considering publishing an updated 7.1.5 
Administrator's Guide, that would be great! An important thing to update would 
be container storage pools. When to use them, how they compare to other types 
of storage pools (disk, file, tape, VTL...), and how to set them up and manage 
them. Perhaps include them in a new column in Table 5 on Page 77 of the 7.1.1 
Admin Guide. Another would be deduplication, which has a lot of new powerful 
features in 7.1.4, which will make it attractive to users who have not 
deduplicated before.

I rarely use the online Information Center. I find its hierarchical 
micro-partitioning of information to be way too narrowly focused, leading me to 
overlook obvious solutions to the problem at hand.
Especially new solutions such as container storage pools and the new features 
of deduplication. The Information Center is also especially poor in the client 
area. I always go to the PDFs first, even when online with fast network access. 
Therefore I can't give feedback in the way you suggest.

I tried the Custom PDF file creation facility, and I found it disappointing. 
The resulting PDF has only topics I could think of, and omits the ones I really 
need to solve a problem. It requires clairvoyance to make a useful PDF 
containing the answer to a problem that I haven't had yet, and it has no index.

So I am going to reiterate what I said before: Update the Administrator's Guide 
for 7.1.5, and keep it up-to-date going forward as a single book. This book is 
a key asset for the entire Spectrum Protect product, especially as existing 
customers seek to utilize features they haven't used before, and as new 
customers explore the incredible depth of features in the product.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 27 Jan 2016, Clare Byrne wrote:

>Following up on this thread, the 7.1.1 Administrator's Guide PDFs are 
>now available in IBM Knowledge Center at 
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.ts
>m.doc/r_pdfs_tsm.html This is in addition to where they were available 
>before, in the .zip file and on the Tivoli Storage Manager wiki. We 
>hope this makes it easier to find and access the 7.1.1 guides.
>
>A reminder: We are still very much interested in hearing about what the 
>most important subjects from the 7.1.1 Administrator's Guide are for us 
>to consider for updates. Knowing more specifics about your priorities 
>is very important to us to set priorities.
>
>One way to give such feedback privately: Go to any topic (in any 
>version) for Tivoli Storage Manager in IBM Knowledge Center and click 
>the Feedback link in the menu bar at the very bottom. If your comments 
>exceed the 1000-character limit in the feedback form, you can include 
>your email address in the feedback and ask us to contact you.
>
>Clare Byrne
>Information Development for IBM Spectrum Protect Tucson, AZ  US
>
>
>__
>
>Please be assured that we in IBM development are listening and are 
>discussing future actions regarding the Administrator's Guide content 
>since the feedback about the 7.1.3 release that did not include that book.
>I gave some information in a response to that earlier feedback:
>http://www.mail-archive.com/adsm-l@vm.marist.edu/msg99286.html
>
>We are working on a plan to publish a solution guide for the 
>disk-to-tape architecture next. This should fill in some gaps left by 
>the removal of the Administrator's Guide. Timing of the publication is 
>yet to be determined.
>
>We are also discussing how to fill the information needs for those who 
>will not be using one of the four documented architectures. We already 
>point to the 7.1.1 guides from newer releases but we realize this is 
>not ideal. Possible actions include creating 

Re: Migration should preempt reclamation

2016-02-18 Thread Richard Cowen
A similar script could use MOVE DATA instead of RECLAIM, which has the 
advantage of checking as each MOVE ends to see if there are drive resources, 
and intelligently picking a volume, before starting a new MOVE.  It also can 
check an external file for a "pause" or "halt" command, or parse the actlog(s) 
for ANR1496I messages for similar "commands", and re-read the configuration 
file so you can affect it without stopping/starting.
Richard



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Thursday, February 18, 2016 9:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration should preempt reclamation

This is a tough one.  On the one hand we want Reclamation to use as many tape 
drives as possible, but not consume them all.  We also have multiple TSM 
instances wanting library resources.  The TSM instances are blind to each 
other's needs.  This _IS_ difficult to control.

The _current_ solution controls reclamation completely manually from a set of 
scripts. 
It works something like this:

- we run a library sharing environment across a bunch of TSM instances
- reclamation is set to never run automatically - all stgpools are set 
to not run reclamation automatically (reclamation pct = 100)
- define the max number of drives reclamation can use in a library
   (reclamation can use up to this number)
- define the number of drives in a library that MUST be UNUSED before 
   another reclamation is started
   (there are always some number of unused drives available for non-reclamation 
jobs to start)
- define on stgpools the number of reclamation processes allowed - we set it to 1 
   (one reclamation process equals 2 tape drives)

Late morning we kick in the script

- Crawls through all our TSM instances and gets a count of tapes per stgpool
that could be reclaimed (above some rec pct).
- Sorts the list of stgpools/counts by the count
- Script loops.  
On each loop it will start a new stgpool reclamation if:
  - max number of drives allowed for reclamation hasn't been hit 
  - required number of unused drives are still unused

Later in the day we kill this script, letting running reclamation jobs run to 
completion.
If by the next morning (when migrations want to run) we still have 
reclamations running, they get nuked!

. . .repeat each day . . . .
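The start-another-reclamation test in the loop above can be sketched as pure logic; the drive counts would come from the library manager (e.g. QUERY DRIVE / QUERY MOUNT), and the three limits are site-policy placeholders:

```python
MAX_RECLAIM_DRIVES = 6   # max drives reclamation may consume (site policy)
MIN_UNUSED_DRIVES = 2    # drives that must remain free for other work
DRIVES_PER_RECLAIM = 2   # one reclamation process uses an input + output drive

def can_start_reclaim(total_drives: int, drives_in_use: int,
                      drives_used_by_reclaim: int) -> bool:
    """Decide whether the script may start one more reclamation process."""
    unused = total_drives - drives_in_use
    if drives_used_by_reclaim + DRIVES_PER_RECLAIM > MAX_RECLAIM_DRIVES:
        return False  # would exceed the reclamation drive cap
    if unused - DRIVES_PER_RECLAIM < MIN_UNUSED_DRIVES:
        return False  # would eat into the drives reserved for other work
    return True
```

Because both conditions are re-checked on every loop pass, reclamation backs off automatically as soon as other sessions grab drives.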



The result: at a gross level we keep some number of drives open for other 
sessions/jobs to use, yet allow reclamation to use up to the defined limit 
of drives if no other processes are using them.  

It has major flaws, but it has really smoothed out our tape drive contention and 
the resources used for reclamation.  The one thing I really like is that it lets 
the stgpool with the most reclaimable tapes, in whatever TSM instance, run the 
longest.

One core overall issue - no amount of playing around like this can make up for 
not having the resources you need to drive TSM.  If you don't have the drives 
to process the work, nothing will really help. 

Rick





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 17, 2016 8:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Migration should preempt reclamation

I was under the impression that higher priority tasks could preempt lower 
priority tasks. That is, migration should be able to preempt reclamation. But 
it doesn't. A very careful reading of Administrator's Guide tells me that it 
does not.

We're having a problem with a large client backup that fails, due to a disk 
stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape drive, 
due to their all being used for reclamation. This also prevents the client 
backup from getting a tape drive directly. Does anybody have a way for 
migration to get resources (drives, volumes, etc) when a storage pool reaches 
its high migration threshold, and reclamation is using those resources? 
"Careful scheduling" is the usual answer, but you can't always schedule what 
client nodes do. Back on TSM 5.4 I built a Unix cron job to look for this 
condition and cancel reclamation processes, but it was a real Rube Goldberg 
contraption, so I'm reluctant to revive it now in the TSM 6+ era. Anybody have 
a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer this. I 
searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right away. 
We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



Re: Script to register node using Curl and Rest API

2016-04-01 Thread Richard Cowen
That referenced page indicates a POST command to register a node.
Does anyone know if the TSM server itself has (or is planned to support) the 
ability to listen to REST requests?
Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Anderson Douglas
Sent: Friday, April 01, 2016 1:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Script to register node using Curl and Rest API

Hello



We have started new tests using the REST API through the Operations Center, 
and adopted curl to send the commands.

Some commands, like those below, run normally:



curl -X GET -H "Accept: application/json" -H "OC-API-Version: 1.0" -k --user 
aaa:bbb https://xx.xx.xx.xx:11090/oc/api/help

curl -X PUT -H "Accept: application/json" -H "OC-API-Version: 1.0" -k --user 
aaa:bbb https://xx.xx.xx.xx:11090/oc/api/servers//clients//unlock
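For scripting, the same calls can be made without curl; a minimal Python sketch, with the host, port (11090), headers, and the /oc/api/help path taken from the commands above (everything else is illustrative):

```python
import base64
import urllib.request

def build_oc_request(host: str, user: str, password: str,
                     path: str, port: int = 11090) -> urllib.request.Request:
    """Build a request mirroring the curl flags above (-H headers, --user)."""
    url = f"https://{host}:{port}/oc/api/{path.lstrip('/')}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Accept": "application/json",
        "OC-API-Version": "1.0",
        "Authorization": f"Basic {token}",
    })

# Sending would use an unverified SSL context (curl's -k), e.g.:
#   urllib.request.urlopen(req, context=ssl._create_unverified_context())
```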



Now we need to implement a REST command to register a node in TSM, but we have 
not been able to. Does anyone know how to do this?



Here is a link to the supported REST API commands:

http://www-01.ibm.com/support/docview.wss?uid=swg21973011





Anderson Douglas da Silva


Re: Drive preference in TSM

2016-05-23 Thread Richard Cowen
Kumar,

To use LAN-Free, you configure a TSM Storage Agent.
On the TSM instance playing the role of Library Manager, you define all the 
drives and paths to the drives for the Library Manager  to use.

You also define paths for each of the Library Clients, using the appropriate 
local devices.
The Storage Agent is a Library Client, and thus will have its drive paths 
defined on the Library Manager.

You can vary drive paths offline by Library Client, and so in that manner 
reserve drives for any given Library Client (all other Library Client drive 
paths for those drives would be offline.)

The offline/online can be dynamic during the day, as needed (UPDATE PATH.)
You can also simply define paths to drives you want to be exclusively used by a 
single Library Client, but this limits your flexibility.

This is the only method I am aware of to "reserve" drives for a specific 
purpose.

Richard
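The offline/online switching can be generated as a dsmadmc macro; a sketch in which the storage agent, library, and drive names are placeholders, and the UPDATE PATH parameters should be checked against your server's help text:

```python
def path_macro(client: str, library: str, drives: list[str],
               online: bool) -> str:
    """Emit one UPDATE PATH line per drive, taking the paths for a given
    library client (e.g. a storage agent) online or offline."""
    state = "yes" if online else "no"
    return "\n".join(
        f"update path {client} {drive} srctype=server desttype=drive "
        f"library={library} online={state}"
        for drive in drives
    )

# Hypothetical names: reserve DRIVE01/DRIVE02 away from storage agent STGAGT1.
macro = path_macro("STGAGT1", "LIB3584", ["DRIVE01", "DRIVE02"], online=False)
```

Running the emitted lines as a macro at different times of day gives the dynamic reservation described above.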

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of S Kumar
Sent: Monday, May 23, 2016 1:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Drive preference in TSM

Hi Maurice,

Thanks for pointing out at basic level.

But as I wrote in my mail, I am asking specifically about a drive-preference 
feature in TSM, not about a disk pool or any other alternative.

I am not looking for an alternative in my current setup. If I have plenty of 
drives, why would I use a disk pool? I also have sufficient SAN Agent licenses.


Somewhere I read that TSM is fully capable of backing up directly to tape, so 
why should we not use that?

The drawback of a disk/file pool is when the pool and your data live on the 
same storage. If that storage fails and is not recoverable, the backup is no 
use: you cannot restore data from a disk pool that sat on the same storage.

So it is wise not to use the same storage for both backup and data.

And most customers do not have multiple storage systems.

Two features I am missing in TSM and am looking for, if they are available:

1. Drive preference for a node.
2. A copy of node data on the same TSM server for safety (not a replica), so 
that we have two physical media for the same node's data.

Regards,




On Sun, May 22, 2016 at 10:03 PM, Maurice van 't Loo 
wrote:

> Hello Kumar,
>
> Best is to start reading about the TSM basics, or better: do the basic 
> TSM training.
>
> In your case I would use a disk pool to receive the archive log data, then 
> migrate to tape. That way you can also use the benefits of collocation.
>
> Good luck,
> Maurice
>
> 2016-05-16 12:06 GMT+02:00 S Kumar :
>
> > Hi,
> >
> > I came to a situation where a customer wants database log backups on 
> > tape every two hours. The setup has more than 10 SAP production 
> > servers plus other database servers, so they want the logs of all the 
> > production and database servers backed up to tape.
> >
> >
> > They have plenty of tape drives, so tape drives are not a 
> > constraint for them.
> >
> > TSM has been configured with the SAN agent, and a lot of time is spent 
> > mounting and dismounting for the various database nodes.
> >
> > If a drive preference were available for a node in TSM, we could define 
> > a node for log backup and attach the preferred tape drive to those 
> > nodes, and group those nodes into a single collocation group, so that 
> > all log backups go to one drive and a single cartridge and the 
> > cartridge seek time is avoided.
> >
> > Is this type of feature available in TSM?
> >
> >
> > Regards,
> >
>


This email has been scanned by BullGuard antivirus protection.
For more info visit www.bullguard.com



Re: query occupancy command - file names

2016-08-07 Thread Richard Cowen
Paul,

First, get a list of volumes that have data from that filespace in that stgpool.

(use the table volumeusage)

Then for each volume (say, volid), issue the SELECT with volume_name=volid.
This will break the select into manageable chunks.
The output can still be very large.
One goal of proper storage pool design is to keep the number of volumes with 
data from a given node/filespace to a reasonable number (for recovery 
efficiency.)

Richard
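The two-step approach can be sketched as SQL generation; node, pool, and volume names here are placeholders, and each generated statement would be run through dsmadmc one at a time:

```python
def volume_list_sql(node: str, stgpool: str) -> str:
    # Step 1: volumes in the pool holding data for this node (VOLUMEUSAGE).
    return ("select distinct volume_name from volumeusage "
            f"where node_name='{node}' and stgpool_name='{stgpool}'")

def per_volume_sql(volumes):
    # Step 2: one small CONTENTS select per volume keeps each result
    # set manageable instead of one huge IN-subquery.
    for vol in volumes:
        yield f"select file_name from contents where volume_name='{vol}'"

# Hypothetical volume names returned by step 1:
queries = list(per_volume_sql(["VOL001", "VOL002"]))
```

This avoids the server-side DB2 bufferpool exhaustion seen with the single large IN-subquery.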

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Sunday, August 07, 2016 7:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query occupancy command - file names

Hi,

Tried that but it failed after 18 hours with the following memory failure:

ANS8000I Server command: 'select file_name from contents where volume_name in 
(select volume_name from volumes where stgpool_name='BKPLTO6POOL' )'
ANR0163E tbnsql.c(789): Database insufficient memory detected on 47:1.
ANR0162W Supplemental database diagnostic information:  -1:57011:-1218 
([IBM][CLI Driver][DB2/NT64] SQL1218N  There are no pages currently available 
in bufferpool "1".  SQLSTATE=57011 ).
ANR0516E SQL processing for statement select file_name from contents where 
volume_name in ( select volume_name from volumes where stgpool_name = 
'BKPLTO6POOL' )  failed.


Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ullrich Mänz
Sent: Thursday, 4 August 2016 6:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] AW: query occupancy command - file names

Hi Paul,

try out that:

select file_name from contents where volume_name in (select volume_name from 
volumes where stgpool_name='BKPLTO6POOL')

Because data is taken out of contents table, this query may run very long.

Regards
Ulli Maenz

-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] Im Auftrag von 
Paul_Dudley
Gesendet: Donnerstag, 4. August 2016 02:11
An: ADSM-L@VM.MARIST.EDU
Betreff: [ADSM-L] query occupancy command - file names

The query occupancy command below shows 283 files in the bkplto6pool storage 
pool for this filespace. Is it possible to get a list of the file names?



tsm: TSMANL>query occ ANLDBPROD_SQL stgp=bkplto6pool type=backup



Node Name  Type Filespace  FSID Storage  Number of  
  Physical Logical

NamePool NameFiles  
 Space   Space


  OccupiedOccupied


  (MB)(MB)

--  --  -- --- 
--- ---

DBPROD-Bkup DBPROD-   17   BKPLTO6PO- 283 
9,771,429.0 9,771,429.0

_SQL\data\00-   OL  
0   0

 01







Thanks & Regards

Paul





Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au

61-3-8842-5603








ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly prohibited. 
If you received this e-mail in error, please immediately notify the sender by 
return e-mail from your system. Please do not copy, use or make reference to it 
for any purpose, or disclose its contents to any person.












Re: Some DISK volumes will not mount.

2017-02-28 Thread Richard Cowen
Do you know why the volumes seem to be missing on the DD?
Do other volumes in that mtree seem to exist? (maybe it's a mount issue?)
Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Tuesday, February 28, 2017 11:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Some DISK volumes will not mount.

Richard,

You are correct.

The db2 database within TSM sees them but they are not there on /mnt/ddstgpool3 
mount.

The good part is that I now know what the problem is; the bad part is that I'm 
going to have to find out how to remove those volumes that don't exist.


Thanks for the help, I really appreciate it.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Cowen
Sent: Monday, February 27, 2017 4:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Some DISK volumes will not mount.

Ricky,
Can you see the volume from the linux server?
Maybe the /mnt/ddstgpool3 isn't mounted?
Can you see the file on the DD?
Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Monday, February 27, 2017 4:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Some DISK volumes will not mount.

Has anybody had this problem, I hope someone has an answer, I'm dying here 
trying to figure this out.


ANR1893E Process 288 for SPACE RECLAMATION completed with a completion state of 
FAILURE.
ANR1401W Mount request denied for volume /mnt/ddstgpool3/DA46.BFS - mount 
failed.

My TSM server is running Linux OS/TSM 7.1.1.0 and the attached storage is a DD 
4500.

For some reason it will not reclaim all the volumes. It sees the volumes if you 
perform a Q VOL but it will not MOVE the data, it will not AUDIT the volume, it 
will not DELETE the volume.

tsm: PROD-TSM01-VM>q vol  /mnt/ddstgpool3/DA46.BFS f=d

   Volume Name: /mnt/ddstgpool3/DA46.BFS
 Storage Pool Name: DDSTGPOOL4500
 Device Class Name: DDFILE1
Estimated Capacity: 99.7 G
   Scaled Capacity Applied:
  Pct Util: 1.7
 Volume Status: Full
Access: Read/Write
Pct. Reclaimable Space: 99.1
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 142
 Write Pass Number: 1
 Approx. Date Last Written: 12/26/2016 01:33:33
Approx. Date Last Read: 01/09/2017 23:08:46
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Volume is MVS Lanfree Capable : No
Last Update by (administrator): ADMIN
 Last Update Date/Time: 02/27/2017 13:40:56
  Begin Reclaim Period:
End Reclaim Period:
  Drive Encryption Key Manager:
   Logical Block Protected: No

Also, one last error hint that does me no good: there may be another volume 
involved in the move process that is not even in the storage pool. So, if you 
look up the volume that is also called on to perform the reclaim, it tells you 
the following.

tsm: PROD-TSM01-VM>q vol  /mnt/ddstgpool4/EF66.BFS.
ANR2034E QUERY VOLUME: No match found using this criteria.


PLEASE SOMEBODY GIVE ME A HAND, and I don't mean clapping your hands.

Thanks!






_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.




Re: Some DISK volumes will not mount.

2017-02-27 Thread Richard Cowen
Ricky,
Can you see the volume from the linux server?
Maybe the /mnt/ddstgpool3 isn't mounted?
Can you see the file on the DD?
Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Monday, February 27, 2017 4:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Some DISK volumes will not mount.

Has anybody had this problem, I hope someone has an answer, I'm dying here 
trying to figure this out.


ANR1893E Process 288 for SPACE RECLAMATION completed with a completion state of 
FAILURE.
ANR1401W Mount request denied for volume /mnt/ddstgpool3/DA46.BFS - mount 
failed.

My TSM server is running Linux OS/TSM 7.1.1.0 and the attached storage is a DD 
4500.

For some reason it will not reclaim all the volumes. It sees the volumes if you 
perform a Q VOL but it will not MOVE the data, it will not AUDIT the volume, it 
will not DELETE the volume.

tsm: PROD-TSM01-VM>q vol  /mnt/ddstgpool3/DA46.BFS f=d

   Volume Name: /mnt/ddstgpool3/DA46.BFS
 Storage Pool Name: DDSTGPOOL4500
 Device Class Name: DDFILE1
Estimated Capacity: 99.7 G
   Scaled Capacity Applied:
  Pct Util: 1.7
 Volume Status: Full
Access: Read/Write
Pct. Reclaimable Space: 99.1
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 142
 Write Pass Number: 1
 Approx. Date Last Written: 12/26/2016 01:33:33
Approx. Date Last Read: 01/09/2017 23:08:46
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Volume is MVS Lanfree Capable : No
Last Update by (administrator): ADMIN
 Last Update Date/Time: 02/27/2017 13:40:56
  Begin Reclaim Period:
End Reclaim Period:
  Drive Encryption Key Manager:
   Logical Block Protected: No

Also, one last error hint that does me no good: there may be another volume 
involved in the move process that is not even in the storage pool. So, if you 
look up the volume that is also called on to perform the reclaim, it tells you 
the following.

tsm: PROD-TSM01-VM>q vol  /mnt/ddstgpool4/EF66.BFS.
ANR2034E QUERY VOLUME: No match found using this criteria.


PLEASE SOMEBODY GIVE ME A HAND, and I don't mean clapping your hands.

Thanks!











Re: cancelling replication for a nodegroup

2017-04-19 Thread Richard Cowen
Jeannie,

You could try looking in the processes table:

Select PROCESS_NUM, PROCESS, START_TIME, FILES_PROCESSED, BYTES_PROCESSED, 
BYTES_TO_PROCESS, STATUS from processes where process='Replicate Node'

Something like this comes back.

32701,Replicate Node,2017-04-02 04:00:25.00,572409,139571931,,Replicating 
node(s) N12; N13; N120; N121. File spaces complete: 75. File spaces identifying 
and replicating: 0. File spaces replicating: 9. File spaces not started: 0. 
Files current: 543;961. Files replicated: 12;038 of 3;835;217. Files updated: 
16;410 of 16;410. Files deleted: 9;306 of 24;724. Amount replicated: 133 MB of 
556 GB. Amount transferred: 133 MB. Elapsed time: 0 Days; 3 Hours; 2 Minutes.

Find the one with the desired node name.
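Pulling the process number for a given node out of that output can be sketched as a small parser, assuming comma-delimited rows (e.g. dsmadmc -comma -dataonly=yes); the sample row is the one above:

```python
import re

def find_replication_process(rows, node_name):
    """Return the process number of the Replicate Node process that covers
    node_name, or None if no such process is running."""
    for row in rows:
        num, name, rest = row.split(",", 2)
        if name != "Replicate Node":
            continue
        # The STATUS text lists the nodes as "Replicating node(s) A; B; C."
        match = re.search(r"node\(s\)\s+([^.]+)\.", rest)
        nodes = [n.strip() for n in match.group(1).split(";")] if match else []
        if node_name in nodes:
            return int(num)
    return None

# The sample row from the SELECT output above:
sample = ("32701,Replicate Node,2017-04-02 04:00:25.00,572409,139571931,,"
          "Replicating node(s) N12; N13; N120; N121. File spaces complete: 75.")
```

The returned number can then be fed to CANCEL PROCESS from the same script.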

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Jeannie Bruno
Sent: Wednesday, April 19, 2017 10:05 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] cancelling replication for a nodegroup

I thought one could use a 'select' statement and 'cast' the process number as a 
variable... but I might be remembering incorrectly.

-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [OITS]

Sent: Wednesday, April 19, 2017 10:00 AM

To: ADSM-L@VM.MARIST.EDU

Subject: Re: [ADSM-L] cancelling replication for a nodegroup


** THIS IS AN EXTERNAL EMAIL ** Use caution before opening links / attachments. 
Never supply UserID/PASSWORD information.


I agree, middle of the night is a problem.

I don't know of a way to cancel without knowing the PROCESS number.

-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Jeannie Bruno

Sent: Wednesday, April 19, 2017 7:52 AM

To: ADSM-L@VM.MARIST.EDU

Subject: Re: [ADSM-L] cancelling replication for a nodegroup

Hello.  Thanks.  But if this were in the middle of the night and I didn't know 
the process number, and I wanted to create and schedule a script to cancel that 
same node group at a specific time on a specific day, how do I find out what the 
process number is?


-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [OITS]

Sent: Wednesday, April 19, 2017 8:42 AM

To: ADSM-L@VM.MARIST.EDU

Subject: Re: [ADSM-L] cancelling replication for a nodegroup


** THIS IS AN EXTERNAL EMAIL ** Use caution before opening links / attachments. 
Never supply UserID/PASSWORD information.


I use CAN PRO nnn.

I believe the intent for CAN REP is to stop all of them, regardless of how many.



-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Jeannie Bruno

Sent: Wednesday, April 19, 2017 7:24 AM

To: ADSM-L@VM.MARIST.EDU

Subject: [ADSM-L] cancelling replication for a nodegroup

Hello.  I see there is a 'cancel replication' command, but it doesn't look like 
there is an option for a specific node or node group. What if many node groups 
are replicating and I want to cancel just one of them?

Anyone know of script or 'select' statement I can use to cancel a specific 
nodegroup name?

Do I need the process number first?




Jeannie Bruno

Senior Systems Analyst

jbr...@cenhud.com

Central Hudson Gas & Electric

(845) 486-5780
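One way to script the lookup Jeannie asks about is a hedged sketch along these lines: query the server's PROCESSES table (e.g. `select process_num, process from processes` via dsmadmc), match the replication process for the nodes in the group, then issue CANCEL PROCESS. The exact description text ("Replicate Node") and table columns are assumptions to verify against your own server's output; the Python below only shows the matching logic.

```python
def find_replication_processes(process_rows, node_names):
    """Given (process_num, description) tuples from a
    'select process_num, process from processes' query, return the
    process numbers whose description looks like node replication
    and mentions one of the given node names."""
    hits = []
    for num, desc in process_rows:
        if "Replicate Node" in desc and any(n in desc for n in node_names):
            hits.append(num)
    return hits

def cancel_commands(process_nums):
    # Commands you would feed back to dsmadmc (or a server script).
    return ["cancel process %d" % n for n in process_nums]

# Invented sample rows standing in for real dsmadmc output.
rows = [
    (42, "Replicate Node: NODE_A, NODE_B ..."),
    (43, "Migration"),
    (44, "Replicate Node: NODE_C ..."),
]
print(cancel_commands(find_replication_processes(rows, ["NODE_C"])))
```

Scheduled as an administrative script, this would let the cancel run unattended at a set time without knowing the process number in advance.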




This email has been scanned by BullGuard antivirus protection.
For more info visit 
www.bullguard.com


Re: No space available in storage pool failure but there is plenty of space

2017-07-18 Thread Richard Cowen
Do you have lib volumes available (scratch/empty)?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Tuesday, July 18, 2017 11:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] No space available in storage pool failure but there is 
plenty of space

I already considered that.  The tape pool is defined at 1500 scratch and only 
920 are used:

11:22:33 AM   PROCESSOR : show sspool
  -> Pool ARCHIVEPOOL(2): Strategy=10, ClassId=0, ClassName=DISK,
Next=6, ReclaimPool=0, HighMig=90, LowMig=70, MigProcess=1, Access=0,
MaxSize=0, Cache=0, Collocate=0, Reclaim=60, MaxScratch=0,
ReuseDelay=0, crcData=False, verifyData=True,
ReclaimProcess=1, OffsiteReclaimLimit=NoLimit, ReclamationType=0
Index=0, OpenCount=0, CreatePending=False, DeletePending=False
CopyPoolCount=0, CopyPoolIdList=, CopyContinue=Yes
deduplicate=False, identifyProcess=1
Shreddable=False, shredCount=0
AutoCopy=Client, Encrypted=0, Compression=0
DS Extension: VolListSize=16, VolCount=1, VolLast=0
  -> Pool BACKUPPOOL(1): Strategy=10, ClassId=0, ClassName=DISK,
Next=6, ReclaimPool=0, HighMig=95, LowMig=90, MigProcess=2, Access=0,
MaxSize=0, Cache=0, Collocate=0, Reclaim=60, MaxScratch=0,
ReuseDelay=0, crcData=False, verifyData=True,
ReclaimProcess=1, OffsiteReclaimLimit=NoLimit, ReclamationType=0
Index=1, OpenCount=0, CreatePending=False, DeletePending=False
CopyPoolCount=0, CopyPoolIdList=, CopyContinue=Yes
deduplicate=False, identifyProcess=1
Shreddable=False, shredCount=0
AutoCopy=Client, Encrypted=0, Compression=0
DS Extension: VolListSize=24, VolCount=18, VolLast=2
  -> Pool COPYPOOL-OFFSITE(-2): Strategy=30, ClassId=4, ClassName=IBM3494-SUN,
Next=0, ReclaimPool=0, HighMig=90, LowMig=70, MigProcess=1, Access=0,
MaxSize=0, Cache=0, Collocate=0, Reclaim=66, MaxScratch=1500,
ReuseDelay=0, crcData=False, verifyData=True,
ReclaimProcess=1, OffsiteReclaimLimit=NoLimit, ReclamationType=0
Index=2, OpenCount=0, CreatePending=False, DeletePending=False
CopyPoolCount=0, CopyPoolIdList=, CopyContinue=Yes
deduplicate=False, identifyProcess=0
Shreddable=False, shredCount=0
AutoCopy=Client, Encrypted=0, Compression=0
AS Extension: NumDefVols=770, NumEmptyVols=0,
NumScratchVols=770, NumRsvdScratch=0
  -> Pool PRIMARY-ONSITE(6): Strategy=30, ClassId=3, ClassName=IBM3494,
Next=0, ReclaimPool=0, HighMig=90, LowMig=70, MigProcess=1, Access=0,
MaxSize=0, Cache=0, Collocate=0, Reclaim=63, MaxScratch=1500,
ReuseDelay=0, crcData=False, verifyData=True,
ReclaimProcess=1, OffsiteReclaimLimit=NoLimit, ReclamationType=0
Index=3, OpenCount=0, CreatePending=False, DeletePending=False
CopyPoolCount=0, CopyPoolIdList=, CopyContinue=Yes
deduplicate=False, identifyProcess=1
Shreddable=False, shredCount=0
AutoCopy=Client, Encrypted=0, Compression=0
AS Extension: NumDefVols=920, NumEmptyVols=0,
NumScratchVols=920, NumRsvdScratch=0


On Tue, Jul 18, 2017 at 11:11 AM, Loon, Eric van (ITOPT3) - KLM < 
eric-van.l...@klm.com> wrote:

> Hi Zoltan!
> Try this one: issue a "show sspool" command. If the sum of 
> NumScratchVols and NumRsvdScratch equals your total amount of scratch 
> tapes, you're probably hit by this APAR: 
> http://www-01.ibm.com/support/
> docview.wss?uid=swg1IC77685
> I have been there; either raise your maxscratch value or bounce the 
> server to reset the NumRsvdScratch value. Although the APAR is old and 
> states it's fixed in 6.2, I still get hit by it on my 6.3 servers. I 
> raised the maxscratch to 100,000 to get rid of it. These reserved 
> scratches are (in my
> case) mainly caused by failed storage agent scratch mounts.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Zoltan Forray
> Sent: dinsdag 18 juli 2017 14:49
> To: ADSM-L@VM.MARIST.EDU
> Subject: No space available in storage pool failure but there is 
> plenty of space
>
> TSM Linux server 7.1.6.3.  Client is Linux 7.1.6.4.
>
> This morning at 6am is the second time I have had this "failure" when 
> it isn't true.
>
> ANR0522W Transaction failed for session 19482 for node 
> VCU-GS1.CHPC.VCU.EDU (Linux x86-64) - no space available in storage 
> pool BACKUPPOOL and all successor pools.
>
> But it isn't true.  The BACKUPPOOL pool is only 92% used (of 9TB) and 
> the hi/low triggers are 95/90.  I checked activity logs and there 
> haven't been any recent migrations.
>
> The NEXTPOOL is tape and all 9-drives are free and the MAXSCRATCH 
> count hasn't been hit.
>
> The backup only transferred 60GB (of 108TB examined) before dying due 
> to this erroneous error.
>
> No 
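Eric's check above (whether NumScratchVols plus NumRsvdScratch has reached the pool's scratch ceiling) can be sketched against `show sspool` text like the output in this thread. The line format parsed here is an assumption taken from that sample:

```python
import re

def scratch_exhausted(show_sspool_text, max_scratch):
    """Sum NumScratchVols and NumRsvdScratch from 'show sspool' output
    and compare against the pool's MAXSCRATCH ceiling: when the sum
    reaches the ceiling, the APAR's reserved-scratch symptom applies."""
    scratch = sum(int(v) for v in
                  re.findall(r"NumScratchVols=(\d+)", show_sspool_text))
    rsvd = sum(int(v) for v in
               re.findall(r"NumRsvdScratch=(\d+)", show_sspool_text))
    return scratch + rsvd >= max_scratch

sample = """
AS Extension: NumDefVols=920, NumEmptyVols=0,
NumScratchVols=920, NumRsvdScratch=580
"""
print(scratch_exhausted(sample, 1500))  # 920 + 580 >= 1500 -> True
```

In Zoltan's output NumRsvdScratch is 0, so this check would come back False and point away from the APAR.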

Re: No space available in storage pool failure but there is plenty of space

2017-07-18 Thread Richard Cowen
Node has mount points?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Tuesday, July 18, 2017 11:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] No space available in storage pool failure but there is 
plenty of space

Yep - 100+ scratch tapes in both tape libraries.  As I mentioned, there is no 
sign of any attempt to mount a tape and all drives were available at that time.

On Tue, Jul 18, 2017 at 11:29 AM, Richard Cowen <rco...@cppassociates.com>
wrote:

> Do you have lib volumes available (scratch/empty)?
>

Re: Farewell

2017-07-20 Thread Richard Cowen
And thank you Paul for your contributions, and good luck to you.

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Wednesday, July 19, 2017 10:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Farewell

I seem to be having trouble with my email client, sending this email out.  
Sorry about any partial sends!

I guess it's time for me to say goodbye to this list, and to this wonderful 
ADSM community.  Thank you to Marist college for hosting this list -- it has 
been a wonderful resource for me over the years.  I started with ADSM v1.2, 
sometime around 1994.  I retired from Cornell a few months ago.  I have enjoyed 
working with many of you, and with many IBM developers.  Thank you.  It is nice 
to see TSM continue to be developed, and I will be following it for some time 
to come.  Good luck to you all.

..Paul





Re: Select from occupancy numfiles aberration

2017-12-13 Thread Richard Cowen
Has anyone seen a case where a select from occupancy returns rows in which the 
NUM_FILES column is acting as an accumulator?
SP 7.1.7.100

The command:

select 
NODE_NAME,TYPE,FILESPACE_NAME,STGPOOL_NAME,NUM_FILES,PHYSICAL_MB,LOGICAL_MB,REPORTING_MB,FILESPACE_ID
 from occupancy

Here is a sample output.

NODE_NAME,TYPE,FILESPACE_NAME,STGPOOL_NAME,NUM_FILES
SRV1UPPCSXODB01,Bkup,/backup,POOL1,1
SRV1UPWSEXODB10,Bkup,/tmp/SECUPD,POOL1,1
SRV1UPWSEXODB20,Bkup,/tmp/SECUPD,POOL1,1
SRV1UPPCSXODB01,Bkup,/admin,POOL1,1
SRV1UPPCSXODB01,Bkup,/db_backup,POOL1,1
SRV1UP451XAPP20,Bkup,/vol,POOL1,1
SRV1WPSPVEAPP12_DM01,Bkup,\VMFULL-srv1uppgukweb04,POOL1,1
SRV2_DC,Bkup,\VMFULL-provtest,POOL1,1
SRV1UPDADXODB01,Bkup,/usr4,POOL1,1
SRV1UPDADXODB01,Bkup,/usr3,POOL1,1
SRV1UPPGUKODB01,Bkup,/usr2,POOL1,1
SRV1UPINFMODB03,Bkup,/opt/InformBak,POOL1,2
SRV1UPWSEXODB21,Bkup,/tmp/SECUPD,POOL1,2
SRV2UCDADXODB-1,Bkup,/usr2,POOL1,3
SRV1UDSIMBODB01,Bkup,/usr2,POOL1,3
SRV1UPINFMODB04,Bkup,/opt/InformBak,POOL1,3
DELETED,Bkup,DELETED,POOL1,4
.
SRV2UPCLMODB002,Bkup,/data1,POOL1,423068
SRV2UPPGUKODB01,Bkup,/usr1,POOL1,435623
SRV1UCWCMXODB01,Bkup,/usr1,POOL1,443582
SRV2UCWCMXODB01,Bkup,/usr1,POOL1,459086
SRV1UDEDIXAPP02,Bkup,/appdev,POOL1,489339
SRV1UDRASXODB01,Bkup,/usr1,POOL1,573502
SRV1UDEDIXAPP01,Bkup,/appdev,POOL1,635843
SRV2UDSIGEODB01,Bkup,/usr1,POOL1,743718
SRV2UCSIGEODB01,Bkup,/usr1,POOL1,800956
SRV2UCSIMBODB-4,Bkup,/usr1,POOL1,862419
SRV1UDRASXAPP01,Bkup,/usr1,POOL1,901929
SRV2UPSIGEODB01,Bkup,/usr1,POOL1,950103
SRV2UPSIMBODB-4,Bkup,/usr1,POOL1,981091
SRV1UPWSEXODB21,Bkup,/u01,POOL1,1076243
SRV2UPSIBPODB-1,Bkup,/usr1,POOL1,1106236
SRV1UPWSEXODB20,Bkup,/u01,POOL1,1119196

Thanks for any ideas...

Richard
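A quick way to test the accumulator suspicion: if NUM_FILES were genuinely accumulating, the column would never decrease in output order. A sketch over CSV rows shaped like the sample above (the NUM_FILES value is assumed to be the last field):

```python
def num_files_monotonic(csv_lines):
    """Return True if the NUM_FILES column (last comma-separated field)
    never decreases from row to row, which would support the
    accumulator suspicion rather than ordinary per-filespace counts."""
    values = [int(line.rsplit(",", 1)[1]) for line in csv_lines]
    return all(a <= b for a, b in zip(values, values[1:]))

rows = [
    "SRV1UPPCSXODB01,Bkup,/backup,POOL1,1",
    "SRV1UPINFMODB03,Bkup,/opt/InformBak,POOL1,2",
    "SRV2UCDADXODB-1,Bkup,/usr2,POOL1,3",
    "SRV2UPCLMODB002,Bkup,/data1,POOL1,423068",
]
print(num_files_monotonic(rows))  # True for the sample shown
```

Of course, a monotonic column could also just mean the server sorted the result by NUM_FILES, so this only narrows the question rather than answering it.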


Re: Monthly backups of VMs

2017-11-16 Thread Richard Cowen
Can you use backupsets, or export nodes to real tape (no client impact)?
Or do full restores to a dummy node and then archive those to real tape (once a 
month), again with no direct client impact.
Can the "monthlies" be spread over 30 days?

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: Thursday, November 16, 2017 4:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Monthly backups of VMs

HI All

Environment is
TSM 7.1.1 server on AIX. 7.1.1 Storage agents on Linux,  7.1.1  BA clients, 
7.1.1 VE clients,  VMWare 5.5.  The VMware backups are via the SAN to a 
Protectier VTL.

My Client is an international financial organization so we have lots of 
regulatory requirements including SARBOX.  All of these require a monthly 
backup retained 7 years.  Recent trends in application design have resulted in 
multiple large MSSQL databases - up to 10 TB that never delete their data.  
Never mind the logic, the hard requirement is that these be backed up monthly 
and kept for 7 years, and that no variation will be made to the application 
design.

Standard process has been a daily VE incremental backup to a daily node  and 
monthly full to a separate node.  The fulls are becoming untenable on several 
grounds.  The VBS Servers need to run a scsi rescan on weekdays to pick up any 
changed disk allocations, and this interrupts any running backups.  The 
individual throughput of the Virtual tape drives is limited so sessions run for 
a long time and there is not enough real tape to use that.   Long running 
backups cause issues with the storage on the back end because the snapshots are 
held so long.

Does anyone have any practical alternate approaches for taking a monthly VMware 
backup for long term retention?

Thanks

Steve

Steven Harris

TSM Admin/Consultant
Canberra Australia



Re: Huge differences in file count after exporting node to a different server

2017-10-25 Thread Richard Cowen
If you saved or still have the actlog from the time of the export check for 
messages like:

ANR0635I EXPORT NODE: Processing node RCOWEN in domain STANDARD.
ANR0627I EXPORT NODE: Copied 2 file space 0 archive files, 23420 backup files, 
and 0 space managed files.
ANR0629I EXPORT NODE: Copied 299457501 bytes of data.

And similar IMPORT messages.

Richard
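A sketch for tallying those ANR0627I counts from a saved actlog, for comparison with the occupancy totals on the target server. The regex is an assumption modeled on the example message above:

```python
import re

ANR0627 = re.compile(
    r"ANR0627I EXPORT NODE: Copied (\d+) file spaces? (\d+) archive files, "
    r"(\d+) backup files, and (\d+) space managed files"
)

def total_exported_files(actlog_text):
    """Sum archive + backup + space-managed file counts across all
    ANR0627I lines, to compare with the imported node's file count."""
    total = 0
    for m in ANR0627.finditer(actlog_text):
        _filespaces, archive, backup, spacemgd = map(int, m.groups())
        total += archive + backup + spacemgd
    return total

log = ("ANR0627I EXPORT NODE: Copied 2 file space 0 archive files, "
       "23420 backup files, and 0 space managed files.")
print(total_exported_files(log))  # 23420
```

If the exported total matches the target server's occupancy count but not the source's, the gap predates the export rather than being caused by it.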
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, October 25, 2017 4:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Huge differences in file count after exporting node to a 
different server

There is a small difference, but not nearly enough to explain that big a gap in 
the file/object count while leaving the occupancy unchanged.

On Wed, Oct 25, 2017 at 3:43 PM, Sasa Drnjevic 
wrote:

> > Any thoughts?
>
> Directories vs files?
>
> Are you sure the mgmt classes are 100% same?
>
>
> Regards.
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr
>
>
>
>
>
> On 2017-10-25 21:35, Zoltan Forray wrote:
> > I am curious if anyone has seen anything like this.
> >
> > A node was exported (filedata=all) from one server to another (all
> servers
> > are 7.1.7.300 RHEL)
> >
> > After successful completion (took a week due to 6TB+ to process) and 
> > copypool backups on the new server, the Total Occupancy counts are 
> > the
> same
> > (13.52TB).  However, the file counts are waaay off 
> > (original=17,561,816
> vs
> > copy=12,471,862)
> >
> > There haven't been any backups performed to either the original 
> > (since
> the
> > export) or new node. Policies are the same on both servers and even 
> > if
> they
> > weren't, that wouldn't explain the same occupancy size/total.
> >
> > Neither server runs dedup (DISK based storage volumes).
> >
> > Any thoughts?
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator 
> > Xymon Monitor Administrator VMware Administrator Virginia 
> > Commonwealth University UCC/Office of Technology Services 
> > www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be a phishing 
> > victim - VCU and other reputable organizations will never use email 
> > to request that you reply with your password, social security number 
> > or confidential personal information. For more details visit 
> > http://phishing.vcu.edu/
> >
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be 
a phishing victim - VCU and other reputable organizations will never use email 
to request that you reply with your password, social security number or 
confidential personal information. For more details visit 
http://phishing.vcu.edu/





Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-19 Thread Richard Cowen
Canary! I like it!
Richard

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Skylar 
Thompson
Sent: Thursday, July 19, 2018 10:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for suggestions to deal with large backups not 
completing in 24-hours

There's a couple ways we've gotten around this problem:

1. For NFS backups, we don't let TSM do partial incremental backups, even if we 
have the filesystem split up. Instead, we mount sub-directories of the 
filesystem root on our proxy nodes. This has the double advantage of letting us 
break up the filesystem into multiple TSM filespaces (giving us directory-level 
backup status reporting, and parallelism in TSM when we have 
COLLOCG=FILESPACE), and also parallelism at the NFS level when there are 
multiple NFS targets we can talk to (as in the case with Isilon).

2. For GPFS backups, in some cases we can setup independent filesets and let 
mmbackup process each as a separate filesystem, though we have some instances 
where the end users want an entire GPFS filesystem to have one inode space so 
they can do atomic moves as renames. In either case, though, mmbackup does its 
own "incremental" backups with filelists passed to "dsmc selective", which 
don't update the last-backup time on the TSM filespace. Our workaround has been 
to run mmbackup via a preschedule command, and have the actual TSM incremental 
backup be of an empty directory (I call them canary directories in our 
documentation) that's set as a virtual mountpoint. dsmc will only run the 
backup portion of its scheduled task if the preschedule command succeeds, so if 
mmbackup fails, the canary never gets backed up, and will raise an alert.
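The preschedule/canary pattern above can be sketched as a small wrapper: dsmc runs the scheduled backup of the canary virtual mountpoint only when the preschedule command exits 0, so a failed mmbackup leaves the canary's last-backup time stale. The mmbackup path and arguments are site-specific placeholders; the demo uses `true` in place of the real command:

```python
import subprocess

def run_preschedule(cmd):
    """Run the real backup command (e.g. mmbackup) and propagate its
    exit status. dsmc only performs the scheduled backup when the
    preschedule command succeeds, so a nonzero status here means the
    canary directory never gets backed up and an alert can fire on
    its stale last-backup timestamp."""
    return subprocess.run(cmd).returncode

# Demo with /bin/true standing in for something like
# ["/usr/lpp/mmfs/bin/mmbackup", "/gpfs/fs1", "-t", "incremental"].
print(run_preschedule(["true"]))  # 0
```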

On Wed, Jul 18, 2018 at 03:07:16PM +0200, Lars Henningsen wrote:
> @All
> 
> possibly the biggest issue when backing up massive file systems in parallel 
> with multiple dsmc processes is expiration. Once you back up a directory with 
> "subdir no", a no longer existing directory object on that level is 
> expired properly and becomes inactive. However everything underneath that 
> remains active and doesn't expire (ever) unless you run a "full" 
> incremental on the level above (with "subdir yes") - and that kind of 
> defeats the purpose of parallelisation. Other pitfalls include avoiding 
> swapping, keeping log files consistent (dsmc doesn't do thread awareness 
> when logging - it assumes being alone), handling the local dedup cache, 
> updating backup timestamps for a file space on the server, distributing load 
> evenly across multiple nodes on a scale-out filer, backing up from snapshots, 
> chunking file systems up into even parts automatically so you don't end up 
> with lots of small jobs and one big one, dynamically distributing load across 
> multiple "proxies" if one isn't enough, handling exceptions, handling 
> directories with characters you can't parse to dsmc via the command line, 
> consolidating results in a single, comprehensible overview similar to the 
> summary of a regular incremental, being able to do it all in reverse for a 
> massively parallel restore... the list is quite long.
> 
> We developed MAGS (as mentioned by Del) to cope with all that - and more. I 
> can only recommend trying it out for free.
> 
> Regards
> 
> Lars Henningsen
> General Storage

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
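One of the pitfalls Lars lists, chunking a file system into even parts automatically, can be sketched as a greedy balance of top-level directories across dsmc workers: assign each directory, largest first, to the currently lightest worker. The directory names and sizes below are invented for illustration:

```python
import heapq

def chunk_dirs(dir_sizes, n_workers):
    """Greedy bin-packing: assign each directory (largest first) to the
    currently lightest worker, so no single dsmc job ends up with all
    the big directories while the rest finish early."""
    heap = [(0, i, []) for i in range(n_workers)]  # (load, index, dirs)
    heapq.heapify(heap)
    for name, size in sorted(dir_sizes.items(), key=lambda kv: -kv[1]):
        load, i, dirs = heapq.heappop(heap)
        dirs.append(name)
        heapq.heappush(heap, (load + size, i, dirs))
    return [dirs for _, _, dirs in sorted(heap, key=lambda t: t[1])]

sizes = {"/fs/a": 900, "/fs/b": 500, "/fs/c": 400, "/fs/d": 100}
print(chunk_dirs(sizes, 2))
```

Each returned list could then be fed to a separate dsmc invocation; the harder problems Lars names (expiration, logging, consolidation) are exactly what this simple split does not solve.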

