What are you seeing for tape speeds?

2011-05-11 Thread Gary Bowers

Can anyone comment on what kind of speeds you are seeing on LTO-5 tape
drives?  Any information is much appreciated.

Gary


Re: licensing question

2011-05-11 Thread Gary Bowers

In general, the way I understand this works is that if you are using
server-to-server communication to send data to the remote TSM server,
then you will need to license it.  If it is just sitting there waiting
for a DB restore and a shipment of tapes, then it does not need to be
licensed.  It comes down to the active/passive versus active/active
distinction, similar to clustering the TSM server.  If both servers are
active at the same time, then you need licenses for both.

Hope this helps.

Gary Bowers
Itrus Technologies




On May 11, 2011, at 9:57 AM, Thomas Denier wrote:


-David Tyree wrote: -


  We are thinking about adding an additional TSM server
to our environment to use for an offsite DR.  Basically we would
duplicate our onsite data to the new offsite TSM server.
  How would we go about licensing for the additional
server? Or does it matter?


There doesn't seem to be any source for authoritative answers to most
questions about TSM licensing. However, my best guess is as follows:

You will need value units for the system hosting the new offsite
server, unless you already have that system licensed for some reason.
You will need the same number of value units you would need if you were
going to install TSM client software on the same hardware and have it
send backups to an existing TSM server.

You will not need any additional value units for the client systems
that have backups stored on one TSM server now and will have backups
stored on two TSM servers in the future.


Re: LTO5 performance

2011-05-03 Thread Gary Bowers

Thanks Charles.  Yes, these are issues to consider.  I was really just
looking to see what other people were getting.  The HBAs are dedicated,
four drives per HBA on 8 Gbit cards.  I have a feeling it is a device
driver issue, or an issue with the IO cards in the Quantum.  Since they
are HP drives, I have to use the TSM device driver and not Atape.  Just
checking to see if anyone else has experienced LTO5 not living up to
published speeds.

Also, I can read off the disk subsystem at 1000+ MB/s, so I'm confident
that the system is able to push data fast enough.  I am getting
proportional speed increases with the number of drives, i.e. 2 drives
give 100 MB/s and 8 give 400 MB/s over 1-2 hours.

Thanks all.



On May 3, 2011, at 8:22 AM, Hart, Charles A wrote:


A few things to consider:

1) Are the HBAs for the LTO5 drives dedicated, or do they have other
devices on them?

2) How many LTO drives per HBA?

3) Speed of the fibre: 2/4/8 Gb?



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
Behalf Of
Gary Bowers
Sent: Monday, May 02, 2011 9:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] LTO5 performance

Just a quick poll of what people are seeing for LTO-5 performance.  Is
it anywhere close to the rated 140 MB/s native, or much lower?  I ask
because I have a customer that is only getting about 40-50 MB/s from HP
LTO5 drives in a Quantum library.  This number seems really low to me,
but before I jump to making recommendations, I would like to have some
real-world numbers to go off of.  You'd think Google would find some,
but my searches turned up nil.

As an aside, has anyone had performance problems with Quantum libraries
and their built-in fibre switches?

The OS is AIX 6, and we see the same performance from both the TSM
server and from AIX and Solaris 10 LAN-free clients.

Thanks for sharing :)



Re: LTO5 performance

2011-05-03 Thread Gary Bowers

Thanks all for the responses.  I agree with all the statements made.  I
am just trying to determine whether anyone is getting the published
specs on LTO5, or what the real-world transfer rates are.

I am running archives of large Oracle database files, so I don't think
that file size is an issue.
I am running on new P770 equipment, so it should be able to push the
data.  I can write to /dev/null at 1000+ MB/s.

Before I make recommendations to fix the LTO5 performance, I just want
to be sure that it is possible.  Have the vendors overstated the
performance metrics, or is this environment an exception?  It seems to
be writing at the minimum sync speed for LTO5.

Thanks again.

Gary


On May 3, 2011, at 9:55 AM, Richard Sims wrote:


A general advisory...  Commonly overlooked in throughput reviews is
the computer's system bus.  Sites may be implementing new technology
with ever-increasing data rates on the same computer they've been
using for years, and then wonder why they aren't seeing the
throughput they expect.  LTO5's specs include a 280 MB/s data rate
(with 2:1 compression).  The capacity of a PCI bus is 266 MB/s - and
that bus has to be able to handle traffic other than what's going to
one tape drive.  Be sure to examine the system in total when sizing
for certain data rates, whether a traditional computer system or a
SAN-based storage solution.

Richard Sims  at Boston University


LTO5 performance

2011-05-02 Thread Gary Bowers

Just a quick poll of what people are seeing for LTO-5 performance.  Is
it anywhere close to the rated 140 MB/s native, or much lower?  I ask
because I have a customer that is only getting about 40-50 MB/s from
HP LTO5 drives in a Quantum library.  This number seems really low to
me, but before I jump to making recommendations, I would like to have
some real-world numbers to go off of.  You'd think Google would find
some, but my searches turned up nil.

As an aside, has anyone had performance problems with Quantum
libraries and their built-in fibre switches?

The OS is AIX 6, and we see the same performance from both the TSM
server and from AIX and Solaris 10 LAN-free clients.

Thanks for sharing :)


Re: Tsm backing up mysql databases

2011-02-24 Thread Gary Bowers

I ran into the same issue.  It seemed like the only way to do it was
to dump to a file and then back that up using dsmc.  This was fine,
except that we had a client with a 2 TB MySQL DB.  (No comments
necessary.)  I used the adsmpipe command to back this up to TSM with
great success.  We were able to back up and restore using the command.
I believe that I had to write a script that purged the old data,
though.  It's been a while since I set it up.  There really are no
options other than these two: if the DB is small, export to a file; if
it is huge, use adsmpipe.
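
For the dump-to-file case, a minimal shell sketch of what that looks
like (the database name appdb, the dump path, and the cleanup step are
placeholders/assumptions; mysqldump and the TSM client must already be
configured on that node):

  #!/bin/sh
  # Dump the database to a file, then send the dump to TSM as a selective backup.
  DUMPFILE=/backup/mysql/appdb-$(date +%Y%m%d).sql
  mysqldump --single-transaction appdb > "$DUMPFILE" || exit 1
  dsmc selective "$DUMPFILE"
  # Optionally remove the local dump once it has been sent to the server.
  rm -f "$DUMPFILE"

For the adsmpipe route, the dump is piped through the TSM API instead
of landing on disk first; check the adsmpipe documentation that ships
with the samples for the exact flags on your level, since they vary.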

Gary Bowers
Itrus Technologies




On Feb 24, 2011, at 7:51 AM, Lee, Gary D. wrote:


We now have a request to back up a server running some library app
that uses mysql for its databases.

The only guidance I have seen so far searching the internet is to
use adsmpipe.

Are any of you doing mysql backups, if so how?



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310



Re: LTO-5 Experience?

2011-01-19 Thread Gary Bowers

I have a customer that is running 6 LTO-5 drives in a 3310 using IBM
LTO5 media.  They have had many issues with LTO5.  In the past 6
months, they have replaced 4 drives mostly due to stuck tapes.
Several engineering firmwares have been added, and they are now
finally able to eject tapes more consistently.

I would say that this customer is not happy with the transition at
this point (LTO3 previously).  They run an HSM environment, and tapes
getting stuck in the drives prevents restores of their HSM data.  This
makes unhappy doctors.

I know that they sent at least 2 of their drives to IBM with a tape
still stuck in it to perform diagnostics.  I really have never seen
anything like this in the 10+ years I've been doing TSM.

If you have not upgraded to the latest firmware, make sure you do so.

Gary Bowers
Itrus Technologies




On Jan 19, 2011, at 5:51 AM, Christian Svensson wrote:


Hi,
Does anyone run LTO-5 libraries today?
What is your experience with LTO-5?

I have only installed one library, and we have had a lot of problems
with those drives.  I am just wondering whether this is a one-off, or
if many others also have problems with LTO-5.

I don't care what TSM version you are running.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson
Supported Platform for CPU2TSM:: 
http://www.cristie.se/cpu2tsm-supported-platforms


Re: TSM 6.2 and compatible AIX versions

2010-08-30 Thread Gary Bowers

Not sure what release we were at, but we had issues with APAR
IZ74508.  Basically the IOCP subsystem was not functional, and about
every week the TSM server would hang, and processes could not be
killed.  Even the TSM server could not be shut down.  We had to kill
the processes.

We had difficulty determining where this was actually fixed, and not
just an efix.  We installed the latest fixes as of 2 weeks ago, and
the problem has gone away.

Since then the TSM server has been stable.

Gary Bowers
Itrus Technologies




On Aug 30, 2010, at 9:02 AM, Melly, Timothy wrote:


To All,

Is anyone running TSM 6.2 on AIX 6.1 TL4 SP3?  Any known issues?  Is
there a recommended stable AIX 6.1 level?

Regards, Tim


Getting backup duration in TSM 6.2 select statement

2010-06-21 Thread Gary Bowers

I must be missing something.  It used to be that we could use the
following select statement to get event durations from the summary
table:

select event, (end_time - start_time) seconds from summary

I am keeping this simple for illustrative purposes.

I verified that this works as expected in 5.5.  It used to return the
total number of seconds that an event like a backup or migration ran.
Now it returns just the seconds component.  For instance:

If the process took 1 hour, 20 minutes and 30 seconds, the command
should return 4830 seconds.  Instead it just returns 30, the number of
seconds in the timestamp field.

If I run the same select statement for minutes, I get 20 instead of
80... etc.

This seems to only be a problem with the summary table, as running a
select from the processes table works as expected.  Does anyone else
see this?

I am running TSM 6.2.1.0 on AIX 6.1.  I am having to rewrite all kinds
of scripts in order to accommodate this.  I know that we are supposed
to cast the timestamp as an integer, but I have not had any luck with
that either.  That just helps me do math with it, like when
calculating backup speeds.
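
One workaround commonly suggested for the 6.x (DB2-based) server is to
have DB2 compute the elapsed time rather than subtracting the
timestamps directly.  This is only a sketch (the activity filter and
column list are illustrative, and timestampdiff returns an estimate,
which is fine at these durations), so verify it against your own
server:

  select activity, entity, start_time, end_time,
         timestampdiff(2, char(end_time - start_time)) as elapsed_seconds
    from summary
   where activity='BACKUP'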

Any help is appreciated.


Re: Any way to avoid full backup of data relocated on TSM client file system??

2010-05-31 Thread Gary Bowers

Rick,

If memory serves (and that is always questionable), that is exactly
what used to happen at one point.  I believe that Unix systems still
do this, but I would have to test it again.  The metadata for a file
is stored in the TSM database.  At least it was until Active Directory
made the object too large to fit in the DB any more; then it became
bound by the DIRMC.  I believe that Unix systems' metadata is still
small enough to fit completely in the DB and not require a DIRMC
location, and that changes to groups and owners cause only the
metadata to be backed up.  Granted, the last time I tested this was
about 5 years ago.

I agree that just changing the ACL should not cause the whole file to
back up, but then, I guess there are trade-offs.  You would need logic
in the backup code to make sure that this was the ONLY thing that
changed.  That sounds like extra overhead, and consequently longer
backups.  Just my 2 cents.

Gary
Itrus Technologies
On May 26, 2010, at 2:06 PM, Rick Adamson wrote:


Technically the actual data (file) does not change in these
situations, only the associated metadata.  It would be nice if this
were identified during backup processing and only the metadata were
updated.  Thanks for all the feedback.


~Rick


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
Michael Green
Sent: Wednesday, May 26, 2010 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Any way to avoid full backup of data relocated
on
TSM client file system??

I'm wondering if there is any other backup software that is designed
and actually capable of handling such situations.

I bet there is not.
--
Warm regards,
Michael Green


Re: disk / file pool on SAN without SANergy or GPFS?

2010-05-05 Thread Gary Bowers

In short, no.  SANergy is what allows you to share the filesystem
among multiple servers.  GPFS is an alternative to SANergy that also
provides the file and block locking necessary to share a physical
device among multiple hosts.

I may be wrong, but I believe that the only two supported LAN-free
protocols are GPFS and SANergy.  If TSM supported CIFS or NFS shares
as stgpools it might be able to do something similar with a filer, but
I'm pretty confident that is not supported.  You definitely cannot do
it with iSCSI, because that is a block device, not a file share.

Hope this helps.  I have not done a SANergy install in 4 years, so
things may have changed.

Gary
Itrus Technologies

On May 5, 2010, at 5:30 PM, WHEDA TSM wrote:


Hello... we will be implementing an iSCSI SAN in the near future.
The TSM
server needs more disk storage pool space.  I want to build new
storage
pools (FILE type, but perhaps also DISK type) on SAN storage.  I don't
need SANergy for this... correct?

Some of the TSM clients that will reside on the SAN would benefit from
LAN-free backups.  All TSM clients and the TSM server are Windows
2003 /
2008, we have no other client platforms.

Without using SANergy or GPFS, can I create a FILE type storage pool
on
SAN storage and define a shared FILE type library so that the TSM
server
and the TSM SAN storage agents can both write to the pool?

Thanks... Ken


Re: How force removal of data of deleted folder.

2010-04-12 Thread Gary Bowers

Enable backdelete for the node, then use the client GUI to delete
any data that you do not want to keep.
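
For reference, a rough sketch of the two steps (the node name and the
file specification are placeholders, and the deletion can also be done
from the client command line instead of the GUI):

  On the server:   update node mynode backdelete=yes
  On the client:   dsmc delete backup "/data/oldfolder/*" -subdir=yes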

Gary

On Apr 12, 2010, at 2:42 PM, yoda woya wrote:


I removed a folder from a node.  The domain policy is:
Versions Data Deleted = 60 versions
Retain Only Version = 60 days

Without changing policy, how can I wipe out that data?

I do not want to completely remove the filespace, just a particular
folder in the filespace.


Re: Virtual TSM server - using disk only

2010-03-11 Thread Gary Bowers

My experience with direct-connected iSCSI storage on a TSM server is
that it gets abysmal performance unless you turn off direct I/O in
TSM; see other posts for that.  It is technically possible, but with
the iSCSI limitation you might not want to use RDM (Raw Device
Mapping) in VMware.  I am not sure on this, but it makes sense given
what I have seen and read about here.  By the way, NFS and CIFS were
equally bad performers for disk pools with direct I/O turned on.  They
seem to really need the filesystem caching.  I'm guessing that putting
the disks in a VMFS would help buffer the writes and give you decent
performance.

It is something that would need to be tested first.  I'm confident
that it would be much faster than a WAN connection back to the States.
Yuck.

Good luck,

Gary Bowers
Itrus Technologies

On Mar 11, 2010, at 1:18 PM, Ochs, Duane wrote:


Good day everyone,
Has anyone explored running a TSM server (Windows) on a VM using iSCSI
storage?  There is no library requirement at this time.
I have multiple European sites within close proximity of each other,
and they have outgrown the WAN coming back to the States.
The only storage available there is iSCSI, and they have a substantial
VMware implementation which would allow us to ride on a VM if
feasible/functional.

Thoughts?

Thanks,
Duane


Re: Changing attributes

2010-03-09 Thread Gary Bowers

Sounds like you've got the same client name on two different servers.
Each time they check in with TSM, they are updating their IP and
GUID.  Go check the dsm.opt on both those servers.

Gary
On Mar 9, 2010, at 2:02 PM, Fred Johanson wrote:


I've got a client that generates this 5-6 times a night:

03/09/10 03:29:03 ANR1639I Attributes changed for node NIROLO: TCP Name from OLORIN to NIROLO,
TCP Address from 128.135.19.3 to 128.135.19.5, GUID from
fd.5f.d0.81.cd.7b.11.de.a8.c7.00.25.64.90.25.15 to
9a.9c.1b.81.31.8b.11.dd.a6.6f.00.19.b9.46.ae.e1. (SESSION: 103251)

It goes back and forth, causing great confusion to TSM.

I've asked the owner what he's doing and he asks me the same.
Anyone with an explanation?


Re: Devtype FILE on NFS performance problem

2010-03-02 Thread Gary Bowers

Yep, had the same problem with iSCSI volumes.  Try turning off
directio with the following undocumented dsmserv.opt option.

directio no

Gary Bowers
Storage Architect
Itrus Technologies

On Mar 1, 2010, at 10:24 PM, Roger Deschner wrote:


We have a devtype=file stgpool that is on NAS disk, accessed via NFS,
and we're getting very slow performance with TSM reading or writing it.
In a test, the Unix cp command moved data about 5 times faster than TSM
migration.

I have been adjusting the number of migration processes up and down,
while watching network data flow numbers with topas, trying to get
clues.  There comes a point where processor wait time goes over 90%,
and then it hits a wall.  The maximum seems to be about 10 Mbytes/sec
on a point-to-point network consisting of three trunked GigE
connections.  A single Unix cp command could write about 48 Mbytes/sec
to the same NFS filesystem, on the same NFS server, across the same
network.

Anybody else faced this kind of issue?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
Academic Computing & Communications Center


NFS as a storage pool volume

2009-12-29 Thread Gary Bowers

I have a strange performance issue that I am trying to work out
involving network attached storage being used for TSM stgpool volumes.

The TSM server is AIX 5.3, and the network is a dedicated Gbit.  We
started out using iSCSI for the storage pool volumes creating 10 X
250GB volumes and placing a single logical volume per 250GB physical
volume, and letting TSM do the load balancing.  We are a small shop,
and the 30-40 MB/s performance that we were seeing in backups was
acceptable.  That is until we had to audit some volumes.

For the audit, we are seeing abysmal performance.  Approximately 3-5
MB/s per volume.  Adding volumes increases the total throughput, but
the performance per volume remains around 3-5 MB/s.

From the command line we can get 30-60 MB/s using dd to and from an
iSCSI volume.  So we did some testing on NFS.

Using NFS, we are able to get 60-80 MB/s from the AIX OS using dd both
read and write.  So we decided to create volumes on the NFS mount.
The define vol command ran for 10+ hours on a 100 GB volume at 2 MB/s.

Thinking it was just the def vol that was a problem, I ran a move data
to the new volume, and it ran at 3-5 MB/s.  I then ran an audit on the
volume, and again I am only getting 3-5 MB/s.

This seems like a tuning issue inside TSM, but I could not tell you
what parameter would cause such a slow down.  I have done my homework
on this, and have not found any relevant posts.  If anyone has some
suggestions, I would love to hear them.

As a side note, defining a volume on the local root drive runs at
16-20 MB/s.

tsm: TSM> q option

Server Option             Option Setting      Server Option             Option Setting
------------------------  ------------------  ------------------------  ------------------
CommTimeOut               60                  IdleTimeOut               15
BufPoolSize               32768               LogPoolSize               512
MessageFormat             1                   Language                  AMENG
Alias Halt                HALT                MaxSessions               25
ExpInterval               24                  ExpQuiet                  No
EventServer               Yes                 ReportRetrieve            No
DISPLAYLFINFO             No                  MirrorRead DB             Normal
MirrorRead LOG            Normal              MirrorWrite DB            Sequential
MirrorWrite LOG           Parallel            TxnGroupMax               256
MoveBatchSize             1000                MoveSizeThresh            2048
RestoreInterval           1,440               DisableScheds             No
NOBUFPREfetch             No                  AuditStorage              Yes
REQSYSauthoutfile         Yes                 SELFTUNEBUFpoolsize       No
DBPAGEShadow              No                  DBPAGESHADOWFile          dbpgshdw.bdt
MsgStackTrace             On                  QueryAuth                 None
LogWarnFullPerCent        90                  ThroughPutDataThreshold   0
ThroughPutTimeThreshold   0                   NOPREEMPT                 ( No )
Resource Timeout          60                  TEC UTF8 Events           No
AdminOnClientPort         Yes                 NORETRIEVEDATE            No
IMPORTMERGEUsed           Yes                 DNSLOOKUP                 Yes
NDMPControlPort           10,000              NDMPPortRange             0,0
SHREDding                 Automatic           SanRefreshTime            0
TCPPort                   1500                TcpAdminport              1500
HTTPPort                  1580                TCPWindowsize             64512
TCPBufsize                32768               TCPNoDelay                Yes
CommMethod                TCPIP               MsgInterval               1
ShmPort                   1510                FileExit
UserExit                                      FileTextExit
AssistVCRRecovery         Yes                 AcsAccessId
AcsTimeoutX               1                   AcsLockDrive              No
AcsQuickInit              Yes                 SNMPSubagentPort          1521
SNMPSubagentHost          127.0.0.1           SNMPHeartBeatInt          5
TECHost                                       TECPort                   0
UNIQUETECevents           No                  UNIQUETDPTECevents        No
Async I/O                 No                  SHAREDLIBIDLE             No
3494Shared                No                  CheckTrailerOnFree        On
SANdiscovery              On                  SSLTCPPort
SSLTCPADMINPort                               SANDISCOVERYTIMEOUT       15

                    Server Name:
 Server host name or IP address:
      Server TCP/IP port number: 1500
                     Crossdefine: On
             Server Password Set: Yes
  Server Installation Date/Time: 11/11/08   15:01:10
        Server Restart Date/Time: 12/28/09   11:50:39
                  Authentication: On
                        Password

Re: NFS as a storage pool volume SOLVED!!!!

2009-12-29 Thread Gary Bowers

Ok, so when searching for solutions, I ran across a similar problem on
GPFS.  Turns out that DIRECTIO in TSM causes severe degradation for
GPFS and NFS and iSCSI volumes.

I added the undocumented DIRECTIO no parameter to dsmserv.opt, and
audit is running at 60+ MB/s as expected.  Hope this helps someone out
there.  There is a downside though.  Database performance seems to
suffer, as would be expected.  Startup time for TSM doubled.

Gary
Itrus Technologies

On Dec 29, 2009, at 11:17 AM, Wanda Prather wrote:


You don't say what kind of beast this network attached storage
hardware
actually is - are we talking Netapp, EMC, other?

You need to run the performance tools that are available with it and
look at
how busy its NIC card is and what kind of performance you are
getting from
its cache.

I have seen this kind of behavior before from network attached
storage when
the backstore is SATA disk (relatively slow), and the cache is not
large
enough or cannot keep up with the demands on it.

Especially with SATA disk, it makes a very big difference whether you
are reading/writing relatively small blocks that get a large
percentage of cache hits, or long sequential streams that have to read
every byte from a single disk with none of the data coming from cache.

And given it's ISCSI, you'll also have to look and see if there is any
strange behavior on whatever switches the I/O is going through.

Summary:  The performance problem may very well be outboard, rather
than in
TSM.

W






Re: Querying status of a finished process

2009-10-15 Thread Gary Bowers

This is probably one of the biggest gripes that I have against TSM.
Having dealt with other backup vendors, Tivoli is very good at
batching things, but it is difficult to follow a single process, or a
backup for that matter, through to completion.  You may be used to
other tools that allow you to see completion codes and event output
sorted by job or process.  Unfortunately TSM does not do this.  You
are stuck querying the actlog, running SQL, or using any of the fine
reporting tools out there.  Most of us have created shortcut scripts
for this.  I don't have mine readily available, but if you ask nicely,
or search the archives, I'm sure you'll find what you need.
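
As a starting point, two of the usual quick checks from dsmadmc (the
process number 123 and the date range are placeholders, and the SUMMARY
column names should be verified on your own level):

  query actlog begindate=today search="process 123"

  select activity, number, entity, start_time, end_time, successful
    from summary
   where number=123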

Gary Bowers
Itrus Technologies
On Oct 14, 2009, at 7:55 PM, Tribe wrote:


Hello,

I'm a beginner with TSM and this question might be very basic.
However, I wasn't able to find the answer in the documentation, so
here's my question:

I'm using TSM 5.5 and want to run all commands through the dsmadmc
command line.  I'm backing up and restoring NAS nodes.

I found ways to start backups and query running processes ("query
process ID"), but I don't know how to query the status of finished
processes.  I just want a simple way to figure out if a backup /
restore was successful.  If I use "query process ID" after the job has
finished, it just tells me the process cannot be found.

There must be a simple way to do that, right?  I know that I can query
the actlog, but is there a better / easier way to do this, given a
process id?

Thanks,
Jan



Re: What is the best why to identify what tape volumes a backup object is residing on.

2009-10-07 Thread Gary Bowers

I don't have the manual pulled up, but I believe that there is a way
to do a preview restore from the client that will prompt for tapes
that are required to do the restore.  I would tackle it that way.

Gary
Itrus Technologies

On Oct 7, 2009, at 1:18 PM, Brian G. Kunst wrote:


For security reasons, we're trying to identify what tapes a file is
residing on.  I've found the object_id of the file we're looking for,
1524734887, and started a search of the CONTENTS table using the
following SQL command:

SELECT * FROM CONTENTS WHERE NODE_NAME='UCSFSCL1_2' AND
OBJECT_ID=1524734887 > /home/bkunst/qcontents.out

This search has been running for over 24 hours now.  Is there a
quicker/better way to search for the volume_id for this file?

Thanks,

--
Brian Kunst
Storage Administrator
UW Technology


Re: What is the best why to identify what tape volumes a backup object is residing on.

2009-10-07 Thread Gary Bowers

OK, so I went and looked it up.  The way we used to do this was to
kick off the restore with the tapeprompt option.  This causes the
client to pause before each tape is mounted and prompt whether you
want to mount it.  If you select no, it does not restore anything from
that tape and goes on to the next one.  Just pick no on each one and
write down the tapes it asks for.

The sho bfo approach works too, but it can be annoying if you want to
restore a whole directory.
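
In other words, something like this (the file specification is a
placeholder); answer no at each prompt and note the volume names, and
nothing is actually restored:

  dsmc restore "/home/user/somedir/*" -subdir=yes -tapeprompt=yes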

Hope someone finds this useful.  :)

Gary
Itrus Technologies

On Oct 7, 2009, at 1:44 PM, Remco Post wrote:


On 7 okt 2009, at 20:28, Gary Bowers wrote:


I don't have the manual pulled up, but I believe that there is a way


I was just browsing the publib...


to do a preview restore from the client that will prompt for tapes
that are required to do the restore.  I would tackle it that way.



I was expecting the same, but I can't find it, so maybe that was
wishful thinking... Maybe in a next release. Of course, one could
quite easily code this using the api, if you really wanted to


Gary
Itrus Technologies



--
Met vriendelijke groeten,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: Expiration of Exchange

2009-10-07 Thread Gary Bowers

My guess is that you are mounting up the filesystem and backing up the
files directly.  The log files in Exchange are probably getting a new
name as they are truncated, which means that there are no versions of
those files, only a single version that goes from active to inactive
when it is deleted from the server.  Check the Retain Only Version
parameter of the TSM Exchange management class.  Make sure that this
is set to 30 days and not 365 or NOLIMIT.  This should delete the
older files, but only from the date they are marked inactive.
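
For example, to check and adjust it (the domain, policy set, and
management class names are placeholders; the policy set has to be
re-activated for the change to take effect):

  query copygroup exchdom standard exchmc type=backup format=detailed
  update copygroup exchdom standard exchmc standard type=backup retonly=30
  validate policyset exchdom standard
  activate policyset exchdom standard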

Gary
Itrus Technologies


On Oct 7, 2009, at 8:33 PM, Fred Johanson wrote:


We use Copy Services instead of the TDP for Exchange.  We want to keep
backups for 30 days, which does work.  The .edb files are marked
inactive daily and roll off as expected.  But an examination by Mr.
Exchange shows that there are .log files which are never marked as
inactive, and thus are immortal, so far to the sum of 50 TB on site
(and the same offsite).  We obviously missed something in
configuration, but what?

To complicate matters, we tried to modify the client to allow deletion
of backups (Mr. Exchange discovered on his own that "del ba *log
todate=current_date-30" will get rid of the unwanted files) but keep
getting "the client is accessing the server" messages on an empty
machine.  While waiting to figure this out, we could do "del vol xxx
discarddata=yes" on all those volumes more than 5 weeks old, but there
must be some way to prevent this in the future.



Re: Age-old licensing question

2009-09-28 Thread Gary Bowers

It is all based on PVU.  For virtual machines you are paying for the
number of processors of the ESX server.

Gary
On Sep 28, 2009, at 4:28 AM, Minns, Farren - Chichester wrote:


Hi all

I know I started this in a reply to my "backing up of virtual
machines" thread, but I thought it best to start a new one.

My simple question is... how do I find out how much licensing
costs? :-)

I know it's not that simple though.

My basic questions are these:

1) Do I need to use the PVU calculations to work out how much a
standard BA client license will cost, or can I just pay for a standard
client license?
2) Do I use the PVU calculations to back up virtual machines?
3) Are standard BA client licenses CPU based (or PVU)?

Thanks in advance

Farren










Re: Erro - backup of the client Linux

2009-09-11 Thread Gary Bowers
Is it actually backing up data, though?  It will scan the entire drive
and back up the directory structure, even though you have excluded the
files.  If you are seeing that the schedlog shows it going into other
directories, that is normal.  What you have set up looks correct to
me.  If you don't want it to scan the other directories at all, you
would need to use exclude.dir.
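
For example, to back up only /home/ifwiset and skip traversing other
large trees entirely, the dsm.sys include-exclude section could look
something like this (the exclude.dir targets are only illustrative):

  exclude /.../*
  include /home/ifwiset/.../*
  exclude.dir /var
  exclude.dir /opt
  exclude.dir /usr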


Gary

On Sep 11, 2009, at 1:05 PM, Bruno Oliveira wrote:


No space. Command 'dsmc q inclexcl':

tsm> q inclexcl
*** FILE INCLUDE/EXCLUDE ***
Mode     Function   Pattern (match from top down)   Source File
----     --------   -----------------------------   -----------
No exclude filespace statements defined.
Excl     Directory  /.../.TsmCacheDir               TSM
Include  All        /home/ifwiset/.../*             /opt/tivoli/tsm/client/ba/bin/dsm.sys
Exclude  All        /.../*                          /opt/tivoli/tsm/client/ba/bin/dsm.sys
No DFS include/exclude statements defined.
tsm>

2009/9/11 Schneider, Jim jschnei...@ussco.com


You have a space between ifwiset and /.../*
If that's present in your dsm.sys file, try removing it and running
'dsmc q inclexcl' again.

Jim

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On  
Behalf Of

Bruno Oliveira
Sent: Friday, September 11, 2009 12:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Erro - backup of the client Linux

I need to back up only the directory /home/ifwiset.  The dsm.sys file
is as follows:

exclude *
include /home/ifwiset /.../*

When the scheduled backup runs, it does a full backup of /.


2009/9/10 Andrew Raibeck stor...@us.ibm.com

I don't quite understand the problem being described, but a couple  
of

things to add to what Richard said:

- If you do not exclude a file or directory from backup, then it is
implicitly included. If you want to include some files or directories
for backup, and exclude everything else, then you need to specify
something like this:

 exclude *
 include /dir1/.../*
 include /dir2/.../*
 include /home/andy/.../*

- The exclude statement excludes file objects from backup, but

directory

objects are still backed up. For example:

 exclude /home/andy/.../*

excludes all files in /home/andy, but backs up directory entries
under /home/andy such as /home/andy/dir1, /home/andy/dir2,
and /home/andy/temp

To prevent the backup of everything in /home/andy, including  
directory

entries, do this:

 exclude.dir /home/andy

Best regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development
Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Hartford/i...@ibmus
Internet e-mail: stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:



http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html



The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU> wrote on 2009-09-10 18:15:09
(Bruno Oliveira, "Erro - backup of the client Linux"):

I set up a TSM server 5.5 to perform backups of a Linux (RHEL 5)
client and made the required settings, but when the schedule starts,
the backup includes directories that are not covered by the
include.backup statements.

Does anyone know why?
--
abs,

Bruno Oliveira
Beagá - MG
(31) 9342-4111






--
abs,

Bruno Oliveira
Beagá - Minas Gerais - Brazil
55 31 9342 4111





--
abs,

Bruno Oliveira
Beagá - Minas Gerais - Brazil
55 31 9342 4111


Re: Help on TDP_R3 Oracel - AIX

2009-08-21 Thread Gary Bowers

The alter tablespace command is an Oracle command, not a TSM command.
If you were getting errors from the library manager (TSM), you would
see problems with allocating a channel.  I would get with your DBA to
investigate this tablespace.  The BRBACKUP command just issues Oracle
commands under the covers.

Gary
Itrus Technologies

On Aug 21, 2009, at 3:10 AM, Yudi Darmadi wrote:


Dear TSM'ers

I have TDP for R/3 (Oracle) on AIX, V5.5.0.0.  I think this is not a
TSM issue, but I just want to know: has anyone experienced this case?


BR0280I BRCONNECT time stamp: 2009-08-21 08.43.17
BR0301E SQL error -600 at location db_file_switch-11
ORA-00600: internal error code, arguments: [3664], [3], [1], [4], [0],
[1954909839], [0], [1954909839]
BR0316E 'Alter tablespace PSAPWPR begin backup' failed
BR0280I BRCONNECT time stamp: 2009-08-21 08.43.17


Best Regards,


Yudi Darmadi
PT Niagaprima Paramitra
Jl. KH Ahmad Dahlan No.25  Kebayoran Baru, Jakarta Selatan 12130
Phone: 021-72799949; Fax: 021-72799950; Mobile: 081905530830
http://www.niagaprima.com


Re: ANR9999D error - need to extend LOG

2009-08-21 Thread Gary Bowers

From the TSM command line, define a new recovery log volume and then
extend the log:

def logvol vol_name formatsize=size
extend log size
Gary
Itrus Technologies

On Aug 21, 2009, at 11:03 AM, Mario Behring wrote:


Hi list,

I have a TSM 5.5 running on a Windows 2003 box.

Recovery log is full and I am getting the following message at  
startup:


ANR1635I The server machine GUID, 52.1c.f3.41.5e.70.11.dd.8e.7d.00.0e.0c.64.ba.74, has initialized.
ANR2997W The server log is 99 percent full. The server will delay transactions by 300 milliseconds.
ANR9999D_2860420907 (adminit.c:1597) Thread<0>: Insufficient log space to update table Administrative.Attributes.
ANR9999D Thread<0> issued message from:
ANR9999D Thread<0>  10646B52 OutDiagToCons()+e2
ANR9999D Thread<0>  10640CD8 outDiagf()+98
ANR9999D Thread<0>  10065235 admInitPart2()+2075
ANR9999D Thread<0>  2E74696E Unknown

I am trying to extend the LOG using the DSMSERV EXTEND LOG command,
but I need to create the volume before running this command.  On
Unix/Linux I would use the DSMFMT command, but I can't find this
utility anywhere on the TSM Windows server...

Is it missing, or does it not exist?  If so, how can I create a volume
so I can extend the LOG?  I tried DSMSERV FORMAT, but it looks like
this command is only for fresh installations, as it obliges me to
create a DB as well...


Any help is appreciated.

Mario






Re: ANR9999D error - need to extend LOG

2009-08-21 Thread Gary Bowers
I don't have a windows server at my disposal, but I believe that you  
create additional db and log volumes using the management console on  
that platform.  After creating a new log volume, you can then run the  
dsmserv extend log command.
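
Once the new log volume exists, the offline extend looks roughly like
this (the volume path and size in MB are placeholders; check the V5
Administrator's Reference for the exact syntax at your level):

  dsmserv extend log e:\tsmdata\log2.dsm 1024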


Gary

On Aug 21, 2009, at 12:00 PM, Mario Behring wrote:

Sorry all... I didn't make myself clear... there is no TSM command
line at this point... after the error message, the server shuts down
and returns to the OS prompt.


Mario
















Re: Need some how-to assistance please

2009-08-19 Thread Gary Bowers

Bill,

From what I read on your thread you are only keeping 30 days worth of
backups.  You may already know this, but the fromdate and todate on
the export pertain to when the backup was taken, not the timestamp of
the file.  I'm sure you know this, but I want to be sure.

That said, your current command will not really export much, because
it is trying to export everything older than 30 days.  I don't suspect
that you will get much, if anything, exported with a retention of 30
days.  To be fair, I can't remember if it was 30 days, or 30 versions,
but either way, I would set the export to run for something like 20
days ago vs 30 to be sure that you capture the range that you are
looking for.

Also, you don't have to specify a fromdate.  By default it will get
everything.  Therefore, I would do just todate=-20.
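
In other words, something along these lines, run with preview=yes
first to see how much data it would actually pull (mirroring the
command quoted further down in this thread):

  export node * filespace=* domain=nobel filedata=all dev=lto todate=-20 preview=yes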

Are you actively restoring data, or is this just data on hold for
future restores?  The reason I ask, is that it might make sense to go
ahead and restore this data somewhere if it is going to be recalled
anyway.

Hope this helps,

Gary
Itrus Technologies

On Aug 19, 2009, at 11:49 AM, Bill Boyer wrote:


What I came up with was to EXPORT the data.  Since their original
retention was 30 days, I figure that if I exported all the nodes' data
older than 30 days I would be covered.  This is the EXPORT I came up
with:

export node * filespace=* domain=nobel filedata=all dev=lto
fromdate=12/17/2007 todate=-30 preview=no

The FROMDATE is the day the TSM server was installed. Don't know
when we
started backing up data for this client, but that date should cover
any time
period. Basically from inception of the TSM server instance to 30-
days ago.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
Ochs, Duane
Sent: Tuesday, August 18, 2009 4:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Need some how-to assistance please

You only had a 30-day retention policy...  If the data was deleted you
could only go back 30 days.

This doesn't seem too difficult an issue.  For that much data and that
many nodes, just run an archive on each system.

If they must have the data that is already saved... I'd say the best
bet is backup sets.
I don't think backup sets can be created for SQL TDP clients, though.




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
Bill Boyer
Sent: Tuesday, August 18, 2009 3:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Need some how-to assistance please

Current primary storagepool occupancy is 8.9TB.

32 nodes in the domain with 8 of them being TDP SQL agent nodenames.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
Ochs, Duane
Sent: Tuesday, August 18, 2009 3:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Need some how-to assistance please

How much data are you talking about ?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
Bill Boyer
Sent: Tuesday, August 18, 2009 2:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Need some how-to assistance please

History... We have a client backing up with a 30-day retention policy
(vere=nolimit rete=30), and last week they came and requested that the
retention be changed to No Limit across the board.  Keep everything.
Lawyers involved.  Now they feel that the resource requirements for
doing that for an indefinite period are more than they want to take
on.  So they asked that the retention be set back to 30 days, but...
and here's the fun part... they want the tapes with the oldest backup
data to be kept.  You can see they have no concept of TSM and are
thinking of keeping the oldest full backup tapes around so they could
be re-cataloged if a restore is needed.

So my problem/question is how do I accomplish the same thing?  I was
thinking EXPORT NODE, but what date range to use?  Backupsets (no, I'm
not on 6.1! :)) aren't what I want either.



Any suggestions?



Bill Boyer

He who laughs last probably made a back-up. Murphy's law of
computing


Re: Dealloc prohibited - transaction failed

2009-08-19 Thread Gary Bowers

Hate to say it, but this looks like DB corruption IMHO.  If you have
the downtime, or another test server, I would run an audit against the
DB.

Also, since the error seems to be on the BFDESTROY, you might try
setting the trace flag for SHRED.  Maybe this will give more
information.

Gary
Itrus Technologies

On Aug 19, 2009, at 2:33 PM, Clark, Margaret wrote:


Did you try deleting by FSID?  Laborious, but it works when other
methods fail...   e.g. (for filespace 1):  DELETE FILESPACE
KL10143J  1 NAMETYPE=FSID
- Margaret Clark

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of Loon, EJ van - SPLXM
Sent: Wednesday, August 19, 2009 12:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Dealloc prohibited - transaction failed

I was hoping to be able to solve it myself, since the server is
running
an unsupported TSM level (5.3.4.0), so I cannot open a PMR.
Thanks anyway.
Kind regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
Behalf Of
km
Sent: dinsdag 18 augustus 2009 18:34
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Dealloc prohibited - transaction failed

On 18/08, Loon, EJ van - SPLXM wrote:

Hi TSM-ers!
I have two nodes which I cannot delete. When I issue a DELETE FILESPACE
* for one of these nodes, I see the following errors in the log:

ANR0984I Process 59041 for DELETE FILESPACE started in the BACKGROUND at 15:30:40.
ANR0800I DELETE FILESPACE * for node KL10143J started as process 59041.
ANR0802I DELETE FILESPACE * (backup/archive data) for node KL10143J started.
ANR9999D ssalloc.c(1532): ThreadId<51> Dealloc prohibited - transaction failed.
ANR9999D ThreadId<51> issued message from:
ANR9999D ThreadId<51>  0x000100017f78 outDiagf
ANR9999D ThreadId<51>  0x00010032d480 ssDealloc
ANR9999D ThreadId<51>  0x000100365534 AfDeallocSegments
ANR9999D ThreadId<51>  0x000100365e68 AfDeleteBitfileFromPool
ANR9999D ThreadId<51>  0x0001003663c8 AfDestroyAll
ANR9999D ThreadId<51>  0x0001003608a4 bfDestroy
ANR9999D ThreadId<51>  0x0001001816fc ImDeleteBitfile
ANR9999D ThreadId<51>  0x00010018a510 imDeleteObject
ANR9999D ThreadId<51>  0x0001003d3800 DeleteBackups
ANR9999D ThreadId<51>  0x0001003d4c80 imFSDeletionThread
ANR9999D ThreadId<51>  0x000163c8 StartThread
ANR9999D ThreadId<51>  0x0953650c _pthread_body
ANR9999D imutil.c(7001): ThreadId<51> unexpected rc=87 from bfDestroy for objId=0.1066965103
ANR9999D ThreadId<51> issued message from:
ANR9999D ThreadId<51>  0x000100017f78 outDiagf
ANR9999D ThreadId<51>  0x000100181740 ImDeleteBitfile
ANR9999D ThreadId<51>  0x00010018a510 imDeleteObject
ANR9999D ThreadId<51>  0x0001003d3800 DeleteBackups
ANR9999D ThreadId<51>  0x0001003d4c80 imFSDeletionThread
ANR9999D ThreadId<51>  0x000163c8 StartThread
ANR9999D ThreadId<51>  0x0953650c _pthread_body
ANR9999D imfsdel.c(1847): ThreadId<51> IM not able to delete object 0.1066965103, rc: 19
ANR9999D ThreadId<51> issued message from:
ANR9999D ThreadId<51>  0x000100017f78 outDiagf
ANR9999D ThreadId<51>  0x0001003d3840 DeleteBackups
ANR9999D ThreadId<51>  0x0001003d4c80 imFSDeletionThread
ANR9999D ThreadId<51>  0x000163c8 StartThread
ANR9999D ThreadId<51>  0x0953650c _pthread_body
ANR0987I Process 59041 for DELETE FILESPACE running in the BACKGROUND processed 25 items with a completion state of FAILURE at 15:30:40.

I cannot find either of the return codes 19 and 87 in the list of known
return codes for TSM...
It's reproducible. Anybody seen this before?
Thank you very much for any reply in advance!
Kind regards,
Eric van Loon



Looks like an IBM support case, probably a missing object or
something.

-km


Re: Dealloc prohibited - transaction failed

2009-08-19 Thread Gary Bowers

One more thing.  Are any of your database volumes marked stale or
offline?  It could be possible that a bad DBMIRROR would cause this.

Gary



Re: Need some how-to assistance please

2009-08-18 Thread Gary Bowers

One way would be to move the nodes to another domain with unlimited
retention, and then rename them to node_archive.  You would then need
to recreate the nodes in the current domain for backups to continue,
as sketched below.
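
A rough sketch of that first approach (the domain, node, and password
names are all placeholders, and the long-retention domain must already
exist with the desired copy group settings):

  update node prodnode1 domain=legal_hold
  rename node prodnode1 prodnode1_archive
  register node prodnode1 newpassword domain=production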

This will cause the data to be backed up again for all nodes, but at
least it will only be one copy.

Another solution is to back up the stgpool to another copy stgpool and
then keep it along with a copy of the TSM database.  You would need to
set up a separate TSM instance if they are actively restoring the data.

I don't envy your situation.  It would be best if you could get the
lawyers to agree on what data needs to be retained, rather than
keeping everything.  This is where e-discovery and content management
really start saving people money.  It's too late in your case, but it
makes a great argument for the future.

Good Luck,

Gary Bowers
Itrus Technologies

On Aug 18, 2009, at 2:29 PM, Bill Boyer wrote:


History... We have a client backing up with a 30-day retention policy
(vere=nolimit rete=30), and last week they came and requested that the
retention be changed to No Limit across the board.  Keep everything.
Lawyers involved.  Now they feel that the resource requirements for
doing that for an indefinite period are more than they want to take
on.  So they asked that the retention be set back to 30 days, but...
and here's the fun part... they want the tapes with the oldest backup
data to be kept.  You can see they have no concept of TSM and are
thinking of keeping the oldest full backup tapes around so they could
be re-cataloged if a restore is needed.

So my problem/question is how do I accomplish the same thing?  I was
thinking EXPORT NODE, but what date range to use?  Backupsets (no, I'm
not on 6.1! :)) aren't what I want either.



Any suggestions?



Bill Boyer

He who laughs last probably made a back-up. Murphy's law of
computing


Re: windows client 5.5.2.2 schedules using dsmcad and prompt do not start

2009-08-10 Thread Gary Bowers

Without the error log, it is difficult to diagnose.  Try starting the
scheduler in the foreground first: open a Windows command line and run
"dsmc sched".  This will give you a good idea of what is going wrong.
Also, look at the dsmwebcl.log and dsmsched.log.  It could be a simple
password issue.

To answer your question, I have not had any problems with upgrades of
clients after applying the 5.5.2.2 fix patch.  I have on occasion seen
issues where a registry setting was lost and the TSM services had to
be removed and re-added, if that helps at all.

Gary

On Aug 10, 2009, at 12:17 PM, TSM wrote:


Hello,

Windows client schedules do not start.
We use dsmcad and schedmode=prompt.
There is no special error message.
With client version 5.5.2.1 the schedules work successfully.

I'm wondering if there is a problem with TSM client 5.5.2.2?
Has anybody else seen the same problem?

with best regards
andreas


Re: ADSM / TSM Service Companies

2000-09-01 Thread Gary Bowers

Orin,

This is just a follow-up note from the conversation that we had
earlier.  Itrus is a Premier IBM Business Partner based out of Dallas,
TX that specializes in TSM/ADSM, HACMP, RS/6000 SP, storage, and
general AIX installation, maintenance, and development issues.  I just
wanted to forward my contact information along to you again.  I hope
to hear from you soon.

Gary Bowers
Itrus Technologies Inc.
AIX, HACMP, Storage, ADSM Consultant
(972) 365-4962
(817) 491-7145 fax
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Orin Rehorst
Sent: Friday, September 01, 2000 11:01 AM
To: [EMAIL PROTECTED]
Subject: ADSM / TSM Service Companies


I need contact information for a company that can do ADSM / TSM maintenance
work in the Houston area.

Regards,
Orin

Orin Rehorst
Port of Houston Authority
(Largest U.S. port in foreign tonnage)
e-mail:  [EMAIL PROTECTED]
Phone:  (713)670-2443
Fax:  (713)670-2457
TOPAS web site: www.homestead.com/topas/topas.html
"I managed good, but they played bad." Coach Rocky Bridges