Rick,
ask L1/L2 about how to make DB2 on AIX use TCP/IP to communicate with dsmserv.
AIX has a problem handling the massive amount of IPC processing which DB2
generates.
I learned of this recently when the rhel 6.6 kernel bug bit us.
The switch to tcpip is available on all platforms. TSM will run
manual
clean up like this just to keep things healthy seems inconsistent with
how the rest of TSM functions.
Has anyone ever reported this as a bug?
Martha
On 10/22/2014 2:38 PM, Colwell, William F. wrote:
Hi Martha,
I see this situation occur when a filesystem gets almost completely full.
Do 'q dirsp dev-class-name' to check for nearly full filesystems.
The server doesn't fence off a filesystem like this, instead it keeps
hammering on it, allocating new volumes. When it tries to write to a
Hi Angela,
I downloaded the kc.zip file to a Linux server. After unzipping, the /bin
directory doesn't have any .sh files, only .bat files, so I can't
run the kc as a local server.
Thanks,
Bill Colwell
Draper lab
-Original Message-
From: ADSM: Dist Stor Manager
IBM supplies a perl script to measure the cost of dedup.
See http://www-01.ibm.com/support/docview.wss?uid=swg21596944
I just ran it in an instance with an 800 GB db, here are the final summary
lines -
Final Dedup and Database Impact Report
Are these tapes by any chance ejected from the library with the 'move media'
command?
When I have tapes go empty which are racked out of the library, I run a script
to get the media tracking to forget about them and make them scratch.
the script -
tsm: LM2> run qscr mmi
Description
. I expect it will slow down to my normal rates when it hits
the next volume, but I can already see it's helping:
DB backup rate graph
The DB backup rate started out nearly twice as fast today.
On 3/26/2014 13:04, Colwell, William F. wrote:
When I had TSM databases on Netapp - both v5 and v6 - I had to do frequent netapp
'reallocate' commands to get the physical order in the netapp to match
the logical order of db2.
The db2 backup is reading the database sequentially, but within the netapp it
is completely out of order.
Try doing '
I have 2 policysets in each domain. They are identical except for the
copygroup destination
parameter.
This is the design I came up with to implement 6.1 with dedup. Backups come
into an ingest
pool on hi-speed disk (bkp_1a) while policyset set_a is active. At 5 am, a
schedule/script
Wanda,
I tried deduping them and got 50% savings. I expected much more, thinking
that from
one day to the next, a pst should be 99% the same. I suspect that outlook makes
little updates all over the file which makes it hard for tsm to find duplicate
chunks.
Since I only keep 3 versions, and
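A toy illustration of that effect, assuming fixed-size chunking and made-up sizes (TSM's real dedup chunking is more sophisticated): small edits scattered through a file invalidate a large share of its chunks, so two nearly identical PSTs share far fewer chunks than you'd expect.

```python
import hashlib
import random

def chunk_hashes(data: bytes, size: int = 4096) -> list[str]:
    # Hash each fixed-size chunk; dedup only saves space for chunks
    # whose hash has been seen before.
    return [hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

random.seed(0)
day1 = bytes(random.randrange(256) for _ in range(64 * 1024))  # 16 chunks

# Simulate Outlook touching a few bytes scattered through the file.
day2 = bytearray(day1)
for off in range(0, len(day2), 8192):
    day2[off] ^= 0xFF

shared = set(chunk_hashes(day1)) & set(chunk_hashes(bytes(day2)))
# Half the chunks no longer match, even though almost no bytes changed.
```

Here edits land in every other 4 KB chunk, so only 8 of 16 chunks are still duplicates.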
Hello,
I don't know what the baseline for the claimed 10x improvement is. I hope there
is an ATS webinar soon to explain it.
I did serious amounts of dedup on 6.1 servers. They are now at 6.3.4.2+ and I
don't
remember a big improvement from the upgrade.
This url,
Hi Wanda,
some quick rambling thoughts about dereferenced chunk cleanup.
Do you know about the 'show banner' command? If IBM sends you an e-fix, this
will tell you what it is fixing.
tsm: x> show banner
* EFIX Cumulative
Hi Eric,
the timestampdiff function will do what you need. This works -
select node_name, platform_name, date(lastacc_time) -
from nodes -
where cast(timestampdiff(16, current_timestamp - lastacc_time) as
decimal(4,1)) > 2
The first number in timestampdiff can be -
1 Fractions of a second
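(In DB2, interval code 16 means days, so the select keeps nodes not accessed for more than 2 days.) A rough Python equivalent of the same arithmetic, with hypothetical node names and timestamps:

```python
from datetime import datetime

def days_between(newer: datetime, older: datetime) -> float:
    # Same quantity as timestampdiff(16, ...): difference in days.
    return (newer - older).total_seconds() / 86400.0

# Hypothetical last-access times keyed by node name.
lastacc = {
    "WS-ALPHA": datetime(2014, 10, 19, 8, 0),
    "WS-BETA":  datetime(2014, 10, 22, 7, 30),
}
now = datetime(2014, 10, 22, 12, 0)
stale = sorted(n for n, t in lastacc.items() if days_between(now, t) > 2)
```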
size on the
primary pool an issue? I.e., does TSM optimize output tape mounts?
Thanks.
..Paul
At 05:48 PM 11/14/2013, Colwell, William F. wrote:
Paul,
I am using 4 GB volumes on the 15k disks (aka ingest pool). Since each disk
is ~576 GiB
and there are 16 disks assigned to this server, that's
you've put a lot of thought into this, but I'm not sure I'm getting
everything you did, and why.
..Paul
At 10:24 AM 11/18/2013, Colwell, William F. wrote:
Paul,
I describe my copypool setup in a previous reply, last Friday.
If you lost it somehow, it is on adsm.org.
But quickly
Hi Sergio,
my first server started at 6.1 so it was all server side dedup. I have not let
any
of its clients do client side. The separation based on maxsize is working as
designed.
My 2nd server started at 6.3 and I do use client side. The clients do not
react well
when a file bigger than
Hi Sergio,
I faced the same questions 3 years ago and settled on the products from Nexsan
(now owned by Imation) for
massive bulk storage.
You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and
later attach 2 60 drive expansion
units to it (the E60X model).
I have 3
Can I ask what size volumes you use for the ingest pool (on 15k disks) and also
on your 4TB sata pool? I assume you are pre-allocating volumes and not using
scratch?
Thanks.
..Paul
At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,
I faced the same questions 3 years ago and settled
Hi Nick,
there is a webinar next week from the Tivoli User Community.
What's New in Tivoli Storage Manager V7.1
November 7, 2013 at 11:00 AM, ET USA
Join Ian T. Smith, Director of IBM Storage Software, to learn more about how
Tivoli Storage Manager V7.1
dramatically increases scalability
Hi Zoltan,
when I went to 6.3.4.0 from 6.3.somewhere-lower, an index reorg started in all
my servers.
The reorg was of a big table involved in dedup. It caused the active log
to fill up and all the servers crashed more than once.
I opened a pmr; IBM was aware of the problem, see
Hi Norman,
that is incorrect. IBM doesn't care what the hardware is when measuring used
capacity
in the Suite for Unified Recovery licensing model.
A description of the measurement process and the sql to do it is at
http://www-01.ibm.com/support/docview.wss?uid=swg21500482
Thanks,
Bill
Hi Zoltan,
regarding the upgrade of the 6.1 servers, if you are doing dedup, pay close
attention to apar IC90488 -
http://www-01.ibm.com/support/docview.wss?uid=swg1IC90488
If you upgrade to 6.3.4.0, the problem is fixed, otherwise you will need to
build
an index manually.
Bill Colwell
Draper
Hi Grant,
I used to track collocation group spill overs when my servers were version 5
and used tapes. Now I am on v6 and almost all disk, so I don't do that anymore.
Anyway, I used a mysql database on my desktop system. I would dump data from
the tsm servers and load it into mysql where I
IBM has an apar open to fix a performance issue with mmbackup; see IC86976.
We are seriously looking at gpfs to replace our current file server on Netapp.
Prior to the v6 snapshot enhancements it would take 4 days to do a backup
of the fileserver via the b/a client over cifs. With the snapshot
Hi Geoff,
The messages manual says "Ensure that the MAXNUMMP (maximum number of mount
points) defined on the server for this node is greater than 0."
What is the maxnummp for the node?
In version 6, I set it for all nodes to 6.
Bill Colwell
Draper Lab
-Original Message-
From: ADSM:
Hi David,
last month IBM withdrew the tsm for SharePoint product.
- - -
Software withdrawal and support discontinuance: IBM Tivoli Storage Manager for
Microsoft SharePoint V6.x
http://www.ibm.com/vrm/newsletter_10577_10362_232814_email_DYN_1IN/BColwell13712838
At the same time, they
Hi Geoff,
there isn't one command to do this, but a select and then 1 or 2 show commands
will
find the volume name. Here is an example.
tsm: WIN2> select object_id from backups where node_name = 'A-NODE-NAME' and
ll_name = 'OUTLOOK.PST'
OBJECT_ID
-
Hi Geoff,
are you aware of the new command in 6.3, perform libaction? I haven't run it
yet, but the help
seems to be saying that if you have san discovery running, then just define the
library and then
run the command and it will create all the drives and paths.
I have 2 scripts which run in
Hi Sergio,
I ran the fix up procedure on 2 small 6.3.1 instances and it went well, no
problems.
I didn't have to run anything more than the directions.
If you plan to do a lot of dedup, running this is a good idea before your
instances get too big.
I will not be running it on my 6.1 servers
Zoltan,
occupancy numbers were made incorrect by various bugs in early 6.1 code,
see apar ic73005. There is a special utility to fix the numbers, repair
occupancy.
It was supposed to be in 6.1.5.10 but isn't, you need an e-fix for 6.1.5.102.
Of course, you can ignore the errors unless you are
Allen,
after the pending big backup is done, and if the copygroup keeps
enough versions, you can delete the active backups using the client.
This action will promote the most recent inactive backup back up to
the active state.
See the b/a client guide, 'delete backup', especially the note
under
I agree with Zoltan. I have 2 very large instances at 6.1.5.10 in production
doing large amounts of dedup processing. I am aware of the reorg issues but it
doesn't bother me, I am not interested in reorging the tables. In any case
6.3 doesn't solve all the reorg issues, see apar ic81261 and
Harold,
After recreating the optionset, remember to update the nodes to use it.
When you deleted it, the server implicitly updated the nodes to not use any
optionset.
Bill Colwell
Draper lab
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Wanda,
when id dup finds duplicate chunks in the same storagepool, it will
raise the pct_reclaim
value for the volume it is working on. If the pct_reclaim isn't going
up, that means there
are no duplicate chunks being found. Id dup is still chunking the
backups up (watch your database grow!)
but
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE
-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Colwell, William F. bcolw...@draper.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 20:43
Hi Daniel,
I remember hearing about a 6 TB limit for dedup in a webinar or conference call,
but what I recall is that that was a daily thruput limit. In the same section
of the
redbook as you quote is this paragraph -
Experienced administrators already know that Tivoli Storage Manager
Hi Dave,
I can't comment on your error messages, but you asked how I schedule
snapdiff backups.
The schedule invokes a command on the client. Here is a shortened
version of the command file.
echo on
for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set
date=%%a-%%b-%%c)
echo %date%
Hi Harold,
I am running 6.1 with dedup and have coded scripts to check the id dup
processes before proceeding.
Here is a snippet -
upd scr start_migration 'select count(*) from processes where
substr(process,1,1)=''I'' -'
upd scr start_migration ' and
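The gating idea in that snippet - only proceed with migration when no Identify Duplicates process is running - can be sketched outside TSM script syntax; the process names below are hypothetical:

```python
def ok_to_migrate(process_names: list[str]) -> bool:
    # Mirrors substr(process,1,1) = 'I': any running process whose name
    # starts with 'I' (e.g. "Identify Duplicates") blocks the next step.
    return not any(name.startswith("I") for name in process_names)

# Hypothetical 'query process' output, reduced to the process names:
blocked = not ok_to_migrate(["Identify Duplicates", "Expiration"])
```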
, 2011 11:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Identify Duplicates Idle vs Active state?
What does the %1A. in like ''%1A.%active%'' having count(*) > 1 '
test for?
Colwell, William F. bcolw...@draper.com 5/25/2011 11:17 AM
Hi Harold,
I am running 6.1 with dedup and have coded scripts
Hi,
I used ext3 for the first storage attached to the server, but I switched to
ext4 for the second
storage purchase. Both file systems work fine, but the documentation for ext4
say it is designed
to support large files better than ext3. Scratch volumes delete much faster
from the ext4
Hi Gary,
in v6 expiration puts a row in the summary table for every node, plus a
summary
row for the whole process. Here is output from a script which displays
rows from summary.
As you can see, some of the elapsed times are 0 -
Activity          Target          Start Time          End Time
Zoltan,
you will also need to run expiration on the target server to delete what
the
server thinks are archive files.
Regards,
Bill Colwell
Draper Lab
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
J. Pohlmann
Sent: Tuesday, December 21,
Hi Henrik,
I have 2 TSM (6.1.4.2) instances on one server. One instance db size
(the size of the full db backup) is
558 GB, the other is 1,448 GB.
The server (IBM x3850 m2, running RHEL 5.5) started with 16 GB of ram, I
bumped it to 40 GB and then max'ed it out
with 128 GB. I can't say I did a
] On Behalf Of
Colwell, William F.
Sent: Monday, 15 November 2010 16:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: De-dup ratio's
Hi David,
I am doing dedup with v6, no appliance involved.
On a server for windows systems, I am getting 3 to 1 savings. The 'q
stg f=d' command
shows the savings
, and are still figuring out the best way to deal with PST
files.
At 10:51 AM 11/16/2010, Colwell, William F. wrote:
But I won't start deduping PST's again because they are backed up every
day and I only keep 3 versions
so why do all the dedup effort only to have to go thru the chunk
deletion effort 3
Hi David,
I am doing dedup with v6, no appliance involved.
On a server for windows systems, I am getting 3 to 1 savings. The 'q
stg f=d' command
shows the savings -
Duplicate Data Not Stored: 77,638 G (67%)
I exclude pst files and any other file larger than 1 GB from dedup.
On
-supported-platforms
From: Colwell, William F. [bcolw...@draper.com]
Sent: 28 October 2010 22:08
To: ADSM-L@VM.MARIST.EDU
Subject: Linux ext4 filesystems - is anyone using them for devt=file storage?
Hi,
I am running 2 6.1 servers on rhel 5.5. I am doing
Hi,
I am running 2 6.1 servers on rhel 5.5. I am doing a lot of
dedup. All primary storagepools are
devicetype file. Currently I have 10 16TB ext3 filesystems on raid 6
Sata. All volumes are
scratch allocations.
I have another 96TB ready to go. I haven't made the filesystems
Hi Andy,
there are 2 sources for this information. A column in the stgpools table has
the MB saved -
tsm: select cast(stgpool_name as char(20)) as Name, -
cast(space_saved_mb / 1024.0 / 1024.0 as decimal(6,2)) as "T Saved" -
from stgpools
Name                   T Saved
Hi Allen,
yes, I am seeing sessions hang like this. The sending server is version
6.
The receiver is 5.5. I am making the copypools for the v6 servers on
virtual
volumes. I get hanging sessions like this when doing backup stgpool and
also doing
tsm db backups to the same 5.5 server. I monitor
Grigori,
I assume the sql*plus feature you use is the break statement which
by default does outlines on break columns.
Besides submitting sql and retrieving results sets, Sql*plus includes
a lot of report writer functions which are not strictly SQL.
So I don't know any way to do
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Colwell, William F.
Sent: Wednesday, 26 August 2009 17:55
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Windows TSM server 6.1.2.0 after clean install :
ANR2968E Database backup terminated. DB2 sqlcode: -2033
Roger,
I have a script called 'qdr' to show the drives and the tapes on each.
The script also
executes scripts in the library manager client servers to get details of
the usage.
The scripts and sample output are in the attached file.
Hope this helps,
Bill Colwell
Draper Lab
-Original
Hi Sam,
I went thru this about two years ago, moving db volumes from raw volumes
on Solaris to
a netapp. My experience was that it got faster as the number of db
volumes decreased.
The number decreased because the old volumes were 6GB and the netapp volumes
were 20GB.
What I think the problem is, is that
I have been configuring a new TSM server since last November. At first
I wanted a VTL. But when I learned from the Oxford symposium
presentations
that TSM would have its own dedup in version 6,
and considering the cost of the vtl, I ditched it and ordered a lot more
of SATA arrays for less
@VM.MARIST.EDU
Subject: Re: TSM being abandoned?
Is Version 6 going to be released this Year or Next?
regards
Colwell, William F. wrote:
I have been configuring a new TSM server since last November. At first
I wanted a VTL. But when I learned from the Oxford symposium
presentations
that TSM would
We backup complete end user desktops. Ever since the advent of
TSM - actually adsm 1.1 - some people, mostly managers, have
asked how many copies of any file, for example winword.exe, are
stored in tsm. When I tell them 1,200, I can see they are thinking
'what a waste, what's wrong with tsm'. So
Hi Paul,
the subfile cache doesn't hold the whole file, it holds signatures of
some
sort, probably a checksum for each page. I use subfile for pst files.
My pst is
75 meg but the folder is 5 meg.
We went thru a conversion to outlook 2 years ago. First, you're lucky
to be involved
before the
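That per-page-signature idea can be modeled in a few lines (a sketch only; the client's actual signature format isn't documented here, and the page size is a guess):

```python
import hashlib

PAGE = 4096

def page_signatures(data: bytes) -> list[str]:
    # One checksum per fixed-size page, standing in for whatever
    # signature scheme the subfile cache really keeps.
    return [hashlib.md5(data[i:i + PAGE]).hexdigest()
            for i in range(0, len(data), PAGE)]

def changed_pages(cached: list[str], data: bytes) -> list[int]:
    # Pages whose checksum no longer matches the cache; only these
    # would need to be sent in a subfile (delta) backup.
    return [i for i, sig in enumerate(page_signatures(data))
            if i >= len(cached) or sig != cached[i]]

base = bytes(PAGE) * 4          # four zero-filled pages
cache = page_signatures(base)
edited = bytearray(base)
edited[PAGE] = 1                # touch one byte in page 1
delta = changed_pages(cache, bytes(edited))
```

This also shows why the cache can stay tiny relative to the file: it holds one small signature per page, not the pages themselves.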
Curtis,
the Oxford 2007 presentations are available at
http://tsm-symposium.oucs.ox.ac.uk/2007contributions.html
Review the ones by Dave Cannon and Freddy Saldana, they are very good
with lots of information about possible future tsm features.
Bill Colwell
-Original Message-
From:
Hi Dirk,
the backup node admin command has a mgmtclas parameter. You could
put the full backups to a separate mgmtclass which keeps it long enough.
For scheduling fulls I suggest you prefix the vfs mapping name with some
indicator of a cycle like cycle01/filespace. Then a script could check
the
Keith,
Did you put the node name in upper case? The only way you can
get no rows returned from the query is if the node name is cased wrong or
misspelled.
- bill
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of Keith Arbogast
Sent: Friday,
Keith,
I'm not sure what you mean by 'current nodes', but if they
backed up before you last changed management classes, and if there
are files that have never changed, then they are still in the active set
and will still have the surprising mgmtclass.
I am just finishing up a ppt for managers to
Hi Shawn,
use this sql to find the oldest backup date -
select min(backup_date) from backups
where node_name = 'NODE' and filespace_name = '\\node\c$'
It will return just one line of output. The filespace criteria is
optional.
I use it because we have nodes with old filespaces like
Allen,
use the query nodedata command which was added in 5.3(?)
as part of the collocation group feature. for example -
tsm: serverx> q nodedata * vol=84
Node Name          Volume Name          Storage Pool
Physical
Hi,
one of my Solaris admins had to restore the boot disk recently. It
didn't
go well! Everything was restored eventually but it took a long time
because
the mount points of symbolic links were never backed up. He had
to make them manually and restart the restore. The client os is
Solaris 8,
Debbie,
I have a suggestion to give you 6 months on disk, then push to tape.
This will work especially well with file disk. Make multiple stgpools,
for example, march, april, etc. Using a script or schedule, update the archive
destination
to use the current month stgpool. Using another script
Chip,
I would check first for volume leaks. If this select returns anything
it is bad -
select volume_name from libvolumes where status = 'Private' and owner is
null
I have also had a different kind of leak, where I have too many filling
tapes.
If you aren't collocating (!) then you should
Hi,
I did a little test of active-only pools and they do have inactive files
in them.
The way they differ from ordinary pools is that during reclaim all
inactive
versions will be squeezed out, whereas with ordinary pools only expired
versions
are. Except at the very start of the AOP
Of
Helder Garcia
Sent: Friday, February 16, 2007 5:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Active Only Storage Pools for DR
Which command did you use? copy activedata or backup stgpool?
On 2/16/07, Colwell, William F. [EMAIL PROTECTED] wrote:
Hi,
I did a little test of active-only pools
Gary,
I have this sql in a script I am using right now to move the database to
new volumes.
upd scr add-dbcopy 'select ''ok'' from status where hour(current_time) -
7 > 0'
upd scr add-dbcopy 'if(rc_notfound) goto do_moves'
To do what you want I suggest -
upd scr add-dbcopy 'select ''ok'' from
Hi Matt,
I run this command in a script to check free cells
tsm: LIBRARY_MANAGER> select 678 - count(*) as "Free cells" from
libvolumes
Free cells
---
44
'678' is the number of cells in my library. Plug in the number of cells
in
a 3584.
Bill Colwell
Draper Lab
-Original
Hi,
message anr1639i provides details about node changes. I report on them
every day with the TSM OR.
tsm: xxx> help anr1639i
---
ANR1639I Attributes changed for node nodeName: changed attribute list.
Explanation: The
Hi James,
I went thru this 6 months ago, going from 8 lto2 to 4 lto3 and 4 lto2.
I can't say this is a complete procedure, but here are some key points.
Go to version 5.3.3.* for the server and device driver. lto3 media
isn't
recognized at a lower version.
My lto2 device classes had
Hi Adrian,
you could create a master script which runs daily at 00:00 and
checks for holidays, and then inactivates schedules. A sql
statement like this can find any particular date -
select 'HOLIDAY' from status where year(current_date) = 2006 and
month(current_date) = 9 and day(current_date)
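The same calendar check is easy to sketch outside SQL; the holiday dates below are hypothetical examples:

```python
from datetime import date

# Hypothetical holiday table the daily 00:00 master script would consult.
HOLIDAYS = {date(2006, 9, 4), date(2006, 11, 23), date(2006, 12, 25)}

def is_holiday(today: date) -> bool:
    # True when the wrapper should inactivate that night's schedules.
    return today in HOLIDAYS

run_schedules = not is_holiday(date(2006, 9, 4))  # Labor Day 2006
```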
Hi,
I was looking over the options ('q opt') and saw the diskmap option. I
looked it up in the
admin ref and it sounds interesting. Is anyone using it? Does it do
any good? Here is
the text from the online manual -
DISKMAP
Specifies one of two ways the server performs I/O to a disk