I mean, it's formally in a whitepaper from 2018,
https://www.ibm.com/support/pages/repair-occupancy-repair-reporting-occupancy-storage-pool
and dsmserv was updated in 2021 to allow this to run on a container pool.
https://www.ibm.com/support/pages/apar/IT15373
It's not normal maintenance, but
Did you try with both RECONSTRUCT=YES and RECONSTRUCT=NO ?
Sometimes I have had to do that to get stubborn aggregates to move.
Also, what's the reusedelay on the pool? Sometimes resetting this can help.
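For example (pool and volume names hypothetical), something like this can help shake loose stuck aggregates:
MOVE DATA VOL001 STG=FOOPOOL RECONSTRUCT=YES WAIT=YES
UPD STG FOOPOOL REUSEDELAY=0
Remember to set reusedelay back afterward so your DB restore window stays covered.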
Lastly, I had a lot of stuck data that wouldn't go away, and 8.1.17.100
seemed to allow
Anyone else running into connection latency on 8.1.18?
We were at 8.1.17.008 and 8.1.17.100, and came to 8.1.18.0 due to some
rollup locking patches (vs going to 8.1.17.015).
Now, we have pretty substantial hangs for client and dsmadmc connections.
During our normal backup window, it gets bad
For windows systems, the filespaces are not stored as C: and D:. They are
stored as UNC names such as \\hostname\c$.
The drawback here is that if the hostname ever got updated, you could have
C: backed up two or more times. Q OCC will tell you what to expect.
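As a quick sketch (node name hypothetical), either of these will list the filespace names so you can spot duplicates:
Q OCC MYNODE
Q FILESPACE MYNODE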
When you run an incremental, it
Hey, all. Just to resurrect this old thread, we ran into all of these same
issues, plus more.
POSSIBLY FIXED #1, My 8.1.13.100 servers do not seem to leave hung stgrule
target processes around.
Also, it looks like 8.1.13.012 has some additional fixes that help this.
012 patches may
Matt,
I didn't see further replies to this so I thought I'd add my experience
recently.
I have two servers I've been chasing locks/hangs on, mostly related to
REPAIR STGVOL.
One server would hang up every other day.
I waited for a month to get a patch, which was against 5.3.2.4; however,
in
Incrementals and Images are not the same type of objects inside TSM.
I don't see anything in the docs that indicate whether rebinding can occur
for image backups.
If rebinding an image snapshot is possible, it will only happen when you
take a new snapshot.
-Josh
On 06.04.05 at 09:44 [EMAIL
Mario,
Google and the IBM FTP site both have info.
Here's the last release of the Tru64 client:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/Tru64UNIX/v517
This client is no longer under development.
From what I understand, you'll still get howto/usage
Correction:
In the SHOW NODE against the subkeys, the KEY is the NODE_NAME. Field 1
is still the node number and field2 is PLATFORM_NAME.
Also, beware of using SHOW NODE on wrong or random pages.
On 06.04.20 at 22:58 [EMAIL PROTECTED] wrote:
Date: Thu, 20 Apr 2006 22:58:34 -0500
From: Josh
for the newer 1/2-inch tapes (LTO included), cleaning frequency is pretty
rare, except in cases where you have genuinely dirty tapes.
We have 36 drives, and they clean about once every year or two.
I've forced cleaning #2 on several drives because they got I/O errors and
our vault returns tapes with
If the library is hooked up to ethernet, you could lynx in and pull the
cleaning info from there.
On 06.04.20 at 18:38 [EMAIL PROTECTED] wrote:
Date: Thu, 20 Apr 2006 18:38:52 -0400
From: Jim Zajkowski [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To:
I've never heard of a 100million file limitation to TSM.
The limits are 13.5GB for the log and 512GB for the TSM DB.
It's not DB2 Lite; rather, it's more like a port of the 1980s version of
DB2 that was part of MVS.
Dave Cannon in a 2003 TSM symposium said they were considering decoupling
the
This could cause a call-home.
From the front panel, or from the web gui of the library, you should be
able to have it do a library inventory. It takes about 60 seconds per
frame.
On 06.04.20 at 15:11 [EMAIL PROTECTED] wrote:
Date: Thu, 20 Apr 2006 15:11:21 -0400
From: David E Ehresman
Directories are owned by the nodes they belong to.
So, if you're using a DIRMC pool with long retention,
and you implement collocation or colloc groups,
then the directories will be collocated the same as files.
On 06.04.20 at 13:29 [EMAIL PROTECTED] wrote:
Date: Thu, 20 Apr 2006 13:29:04
Schedule Randomization.
Q STAT and it's in the middle
On 06.04.18 at 09:19 [EMAIL PROTECTED] wrote:
Date: Tue, 18 Apr 2006 09:19:56 -0600
From: Andrew Raibeck [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Schedule start delayed
PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject: Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE
What do these commands do?
--- Josh-Daniel Davis [EMAIL PROTECTED] wrote:
Does anyone know how to tell how big the expiration table is?
The reason
Plus, there's reusedelay on a storage pool, which prevents 100% expired
volumes from becoming scratch for X number of days.
On 06.04.07 at 11:44 [EMAIL PROTECTED] wrote:
Date: Fri, 7 Apr 2006 11:44:20 -0500
From: Rajesh Oak [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager
The biggest problem is that TSM doesn't sort reclaimable tapes prior to
assigning them to threads. Your best efficiency is to reclaim the most
empty tapes first.
You could use something like this:
UPD STG FOOBAR1 RECLAIM=95 RECLAIMPR=5
Wait a while
UPD STG FOOBAR1 RECLAIM=90 RECLAIMPR=5
Wait a
?
Thanks for any assistance.
-Josh-Daniel Davis
If it's by hostname, then it should be vaguely round-robin, if your DNS
has multiple address records. All connections for the client would use the
IP chosen by that client's resolver libraries.
You could also try using etherchannel. If set up correctly, you would
simply have two NICs with the
OK, I finally figured it out.
It's processing by NODES.REG_TIME, then by FSID.
I have 75 of 479 nodes left.
I'll look in my occupancy extracts to see how much more there is.
On 06.04.06 at 16:55 [EMAIL PROTECTED] wrote:
Date: Thu, 6 Apr 2006 16:55:50 -0500
From: Josh-Daniel Davis [EMAIL
The delay should have just been a block and not an abort.
The "data transfer interrupted" message supposedly means there was an
error or abort while trying to write to the storage media.
I'd look in the operating system error logs, and for more context in the
actlog.
On 06.03.09 at 09:28 [EMAIL
TSM Manager is pretty lightweight. Most of its actual load will be
incurred on the TSM server as queries are made. I'd think a $250 CompUSA
special would do the trick nicely.
Admin Center's biggest issue is that it runs on the Integrated Service
Console, which is a WebSphere implementation.
I want to clarify.
You said from one policy to another?
MOVE NODEDATA moves the data between storage pools only.
This will not affect your retention policy of those files.
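A minimal sketch of the move itself (node and pool names hypothetical):
MOVE NODEDATA MYNODE FROMSTG=OLDPOOL TOSTG=NEWPOOL WAIT=YES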
To change the policy domain:
UPD NODE nodename DOM=newdomainname
If you're just looking to move the data, but not change policy:
I would recommend creating a second copygroup and a private storage pool
for that node. You'd probably want to give it a DB snapshot of its own
also.
If you NEED backupsets, then you definitely need the source pool to be
collocated. If you have to, it could be by group with all of your other
Are they all to the same server? I find that operational reporting tends
to be pretty resource intensive on the server. I've run into lock issues
that required killing sessions to free up.
If you have several TSM servers, you might try disabling specific reports
to see if things are OK on all
These are pulled by the server during startup and stored in temporary
tables that are inaccessible by SQL commands.
You can get the WWN from SHOW LIBR.
-Josh
On 06.03.10 at 07:56 [EMAIL PROTECTED] wrote:
Date: Fri, 10 Mar 2006 07:56:52 -0800
From: T. Lists [EMAIL PROTECTED]
Reply-To: ADSM:
If you have access to the server, you should be able to pull
Q ACT BEGINT= ENDT= BEGIND= ENDD= SEARCH=
The message numbers to search on start with ANE49xxI where xx is one of
these:
03/10/06 18:00:11 ANE4952I (Session: 40949, Node: DEADBEEF) Total
number of objects
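As a concrete sketch (the date window and message number are illustrative):
Q ACT BEGIND=TODAY-1 BEGINT=18:00 ENDD=TODAY ENDT=06:00 SEARCH=ANE4952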
Oops, I completely disregarded TDP.
If you can get the admin to grant you SQL authority (ANALYST I think), you
can select from SUMMARY which will show start/end times plus bytes
received. Then you could divide that by the bytes sent to get your
compression ratio.
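A sketch of that select, using columns from the standard SUMMARY table (the 24-hour window is illustrative):
SELECT ENTITY, ACTIVITY, START_TIME, END_TIME, BYTES FROM SUMMARY WHERE ACTIVITY='BACKUP' AND START_TIME>CURRENT_TIMESTAMP-24 HOURS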
There's nowhere to get network
It came from the sites that have 1200 node clusters,
and the sites with 16 Regatta-H and Squadrons-H systems with their
multiple ESS arrays and all of the 1000 different products IBM sells.
Each group of customers has a few loud proponents for 1-4 admins
being able to manage an entire enterprise
Mar 2006 22:59:02 -0600
From: Josh-Daniel Davis [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject: Re: dsmserv process hung.
Date: Fri, 3 Mar 2006 14:51:52 -0800
From: Larry Peifer [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L
, then import it again, making sure
that the copygroup destinations pointed to a disk pool.
-Josh
On 06.03.03 at 23:02 [EMAIL PROTECTED] wrote:
Date: Fri, 3 Mar 2006 23:02:02 -0600 (CST)
From: Josh-Daniel Davis [EMAIL PROTECTED]
To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Subject: Re: move
If you don't know the tape the most current data is on,
and don't feel like pulling the list of tapes (or can't per the length
of query)...
You could also use MOVE NODEDATA, but it would be more than just the last
version.
-Josh
On 06.03.03 at 16:53 [EMAIL PROTECTED] wrote:
Date: Fri, 3 Mar
mysqldump sent through adsmpipe is common and free.
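A minimal sketch, assuming the flags documented in the adsmpipe README (check your copy: -c creates a backup object, -f names the file as stored in TSM, -s gives a size estimate):
mysqldump --all-databases | adsmpipe -c -f /mysql/alldb.sql -s 1000000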
On 06.02.28 at 09:38 [EMAIL PROTECTED] wrote:
Date: Tue, 28 Feb 2006 09:38:36 +1100
From: Paul Ripke [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backing up a MySQL database
I've run into the same issue a lot. It's just a TSM limitation.
TSM is not smart enough to reorder its queue based on tape availability.
I.e., if proc 1 is using tape 1,
and proc 2 needs data from tapes 1-5,
TSM won't do 2-5 while waiting for access to 1.
It'll simply go into media wait.
happens
It's still there. Here's what I wrote up about it in 2001:
FLUSH
This causes the database buffer writers to flush, which can be used to
test for buffer-writer starvation. Use this command when SHOW BUFVARS has
a dpDirty (dirty pages) value in the thousands. Then check log
utilization to see
Today, I was reminded why to be careful with SHOW commands during
production workloads and thought I'd share (and archive across the
Internet at large) by posting here:
03/04/06 10:44:46 ANR4391I Expiration processing node NODE, filespace
/oracle, fsId 141, domain STANDARD, and management
This happens when 2 threads start to back up the system object, and the
second one starts sending data before the first one is able to create the
group leader, which is the anchor for management and expiration of the
entire system object as a single entity even though it's made of multiple
Mark,
Thanks. Yes, I was hoping that maybe the ADSM.ORG folks would notice.
Eventually, one of the messages showed up post-dated, so I can go from
there.
-Josh
On 06.02.15 at 09:31 [EMAIL PROTECTED] wrote:
Date: Wed, 15 Feb 2006 09:31:44 -0600
From: Mark Stapleton [EMAIL PROTECTED]
domains.
On yahoo, nothing shows up to inbox or bulk.
To home server, nothing hits the exim4 logs.
Sorry to bother and thanks for your time.
-Josh-Daniel Davis
copypool maxpr=4 wait=yes
SERIAL
-Josh-Daniel Davis
On 06.02.14 at 09:05 [EMAIL PROTECTED] wrote:
Date: Tue, 14 Feb 2006 09:05:42 -0500
From: Timothy Hughes [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject: Re: AW: [ADSM-L] Automating server
Other options include:
*use WAIT=YES and it should dump the relevant actlog messages to
your admin session, including which tapes were mounted
*specify VOLumes=m1,m2, etc
On 06.02.15 at 07:26 [EMAIL PROTECTED] wrote:
Date: Wed, 15 Feb 2006 07:26:42 +1100
From:
Have you seen this?
--
Problem
Procedure to add a new 3592 tape drive on 3494 library
Solution
This can be necessary when 3590E drives are already correctly defined on
both AIX and TSM, but new 3592 tape drives are to be added to the 3494
library. These are configured in the
If you're also backing up with TDPMSEXC then shouldn't you be excluding
these files?
-josh
On 06.02.13 at 15:56 [EMAIL PROTECTED] wrote:
Date: Mon, 13 Feb 2006 15:56:16 +0100
From: goc [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
To: ADSM-L@VM.MARIST.EDU
Subject:
Depends entirely on your server. There's no chart based on the overall
load incurred; however, the extra I/O is all actlog. If your DB
performance has room to spare, then it shouldn't be a problem.
You could always turn it on for a day and compare your expiration
performance the next day with:
Dwight,
You can't stack mksysb's on a tape, but you can stack Sysbacks.
The first tape file on a bootable tape is in 512 byte blocks and is the
actual boot program. This is basically the kernel and a backup file
used to populate the ramfs as defined by the proto files.
On a mksysb, the second,
Rajesh,
3A/00 means media not present.
Is this a new library? Has it ever worked? If not, there may be a
drive cabling or definition problem such that the tape is being loaded
into one drive, but TSM is trying to read from a different drive
Is the tape you're trying to label in the slot that TSM
Joe,
TCP/IP is always routed based on the IP addresses used. If you want
your traffic to go over the gigabit card:
If it's on a machine that will be a server for transaction X, then
specify to your client the host name or IP address of the gigabit card.
If it's on a machine that will be a
AIX 5.2 can run in either 32- or 64-bit kernel mode.
64-bit kernel mode should have better performance on bulk I/O, a larger
memory model, etc.
This is definitely worth it if you are using Ultra160 or Ultra320 adapters,
or 64-bit Fibre Channel (6228 and 6239) cards.
I would think that for an M80, any LPAR
Dale,
Did you check the basics of, as oracle, or your tdpo user:
# env | grep DSM
Make sure the DSMI variables point to the right locations, then verify
those files are readable by your user.
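For reference, the standard TSM API variables look something like this (paths illustrative; TDPO installs may point DSMI_CONFIG at tdpo.opt):
export DSMI_DIR=/usr/tivoli/tsm/client/api/bin64
export DSMI_CONFIG=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt
export DSMI_LOG=/home/oracle/tdpo
Also check that dsm.sys under DSMI_DIR has a stanza matching the servername in the options file.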
If that all checks out, let us know what version of
oracle, tdpo and tsmc you
I don't know if you can boot fibre tape. If not, this would most likely
be a firmware limitation. If any system would allow it, the p650 would.
The boot media, whether it be mksysb-CD, NIM or SCSI tape, will need to
have Atape and the fibre drivers in it. Make sure you're at current
firmware