2 instances of TSM on 1 Windows box
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Whitlock, Brett
Sent: Monday, April 28, 2008 4:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L]
2 separate TSM servers? Or 2 instances of TSM on 1 Windows box?
AIX 5.3
DS4700 Disks
TSM 5.3.4.2
TSM Databases are not mirrored
TSM DB disks are equally configured, LUNs are 36 GB in size
LOGs are mirrored, LUNs are 14 GB in size
Bufferpool is set up at about 25% of memory, SELFTUNEBUFPOOLSIZE YES
DAILY
DB
If you only have to keep what is there for 10 years and can allow normal
processing of new data going forward, set up a new domain with grace
periods to fit your needs. For 10 years, 3,666 days will work; my example
here is one where I had to keep everything for an unlimited period of time.
(put in
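A sketch of what that domain setup might look like from the TSM administrative command line; the domain, pool, and class names here are hypothetical, and 3,666 days covers 10 years with a little slack:

```
* Hypothetical 10-year retention domain (names are illustrative)
define domain TENYEAR description="10-year retention" backretention=3666 archretention=3666
define policyset TENYEAR STANDARD
define mgmtclass TENYEAR STANDARD STANDARD
define copygroup TENYEAR STANDARD STANDARD type=backup destination=BACKUPPOOL -
  verexists=nolimit verdeleted=nolimit retextra=3666 retonly=3666
assign defmgmtclass TENYEAR STANDARD STANDARD
activate policyset TENYEAR STANDARD
```

The grace periods (BACKRETENTION/ARCHRETENTION on the domain) catch anything bound to a management class that later disappears.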
All disks are internal. We needed quantity vs. speed (more for the LZ than
the DB), so we didn't have a choice (Dell) other than the big SATA drives
(biggest SAS was around 300 GB).
Unfortunately (please correct me if I am wrong) there doesn't seem to be
any really good, all-inclusive system monitoring
I thought it was 800 bytes for each entry?
We will continue to expand our 3494 library. We can easily add a rack of
12-3592 drives. All it takes is $$.
Remco Post [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
04/28/2008 05:51 PM
Please respond to
I agree.
My slowest/problem DB is 130GB but has 287M objects.
My other server with primarily Notes backups (same hardware, same
physical configuration) runs expiration in 1 hour.
CAYE PIERRE [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
04/29/2008 06:09 AM
Please
On Fri, 25 Apr 2008 14:20:48 +0200, Remco Post [EMAIL PROTECTED] said:
Been there done that, you are right. Now, explain, why do we need a
backup volhist command then?
Like the appendix on our large intestine. Vestigial?
Someone suggested "Well, now we write those every time something
Hi,
did you look at the XE Toolkit:
http://www.captivemetrics.com/Captive_Metrics/XE_Toolkit.html
?
Rainer
--
On Tue, 29 Apr 2008, Zoltan Forray/AC/VCU wrote:
All disks are internal. We needed quantity vs. speed (more
On Tue, 29 Apr 2008 09:12:57 -0400, Zoltan Forray/AC/VCU [EMAIL PROTECTED]
said:
All disks are internal. We needed quantity vs. speed (more for the LZ than
the DB), so we didn't have a choice (Dell) other than the big SATA drives
(biggest SAS was around 300 GB).
Unfortunately (please correct me
Hi Richard!
How does your shop deal with departmental server admins-- do they have any
access to the TSM server? If they do, do you allow them to use their own ids
for backup and restore, ISC/Admin Center, TSM Operational Reports, SQL
queries, etc.?
I used to be on the other side of the TSM
Have you looked at dstat from Dag Wieers -
http://dag.wieers.com/home-made/dstat/
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Tuesday, April 29, 2008 9:13 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: DB Bufferpool sizing -
I agree on your assessment.
I would like to see things like: who is hitting the disk (I/O
mapping), CPU utilization trend analysis (not just who is hitting it
now!), communications bottlenecks (is the NIC saturated?).
Yes, I realize there is a hodge-podge of various tools from various
A client is receiving this error while trying a DB restore... It's a
LAN-free client. I'm guessing I have to increase the COMMTIMEOUT
parameter on the TSM server? Or the storage agent? Not sure which file
to modify on the storage agent?
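If the timeout is on the server side, it can be raised on the fly; the storage agent reads its own options from dsmsta.opt. The value below is illustrative, not a recommendation, so check your level's options reference:

```
* On the TSM server (from dsmadmc), raise the communication timeout temporarily:
setopt commtimeout 3600

* In the storage agent's dsmsta.opt, the equivalent option line would be:
COMMTIMEOUT 3600
```

The storage agent needs a restart to pick up changes to dsmsta.opt, whereas SETOPT on the server takes effect immediately.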
-Original Message-
From: ADSM: Dist Stor Manager
On Apr 29, 2008, at 9:43 AM, Laughlin, Lisa wrote:
Hi Richard!
How does your shop deal with departmental server admins-- do they
have any access to the TSM server? If they do, do you allow them
to use their own ids for backup and restore, ISC/Admin Center, TSM
Operational Reports, SQL
Hi, I know I'm probably beating a dead horse here, but
I don't know how to delete obsolete backup pieces from TSM
which were backed up with RMAN/TDPO, of course...
The thing is that of 77 nodes/databases, 3 of them didn't have BACKDEL set
to YES, so the node couldn't delete backups after each
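For those three nodes, the setting can be corrected from the admin command line (node name below is hypothetical), after which RMAN/TDPO deletions should be honored going forward:

```
update node ORA_NODE3 backdelete=yes
```

This only affects future delete requests; the already-orphaned backup pieces still have to be cleaned up separately.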
Thanks for the input, we've done the same thing for the Exchange
environment, but I was under the impression that once data is archived
using the TSM BA client archive function you could not extend the
retention on existing archives... Example: Client A archives data to a
2yr Mgmtclass (Archive
See http://www.mail-archive.com/adsm-l@vm.marist.edu/msg76436.html
for recent similar question.
Zoltan Forray/AC/VCU wrote:
I thought it was 800-bytes for each entry?
I once calculated it to be about 350 for a mainly unix environment...
We will continue to expand our 3494 library. We can easily add a rack of
12-3592 drives. All it takes is $$.
One other thing to
Allen S. Rout wrote:
On Fri, 25 Apr 2008 14:20:48 +0200, Remco Post [EMAIL PROTECTED] said:
Been there done that, you are right. Now, explain, why do we need a
backup volhist command then?
Like the appendix on our large intestine. Vestigial?
Someone suggested Well, now we write those
On Apr 29, 2008, at 10:43 AM, Hart, Charles A wrote:
Thanks for the input, we've done the same thing for the Exchange
environment, but I was under the impression that once data is
archived
using the TSM BA client archive function you could not extend the
retention on existing archives...
One can also specify the location(s) of the volhist files in dsmserv.opt,
for example, I have it write to files in two different filesystems:
VOLUMEHISTORY ...
VOLUMEHISTORY ...
Lisa,
I'm in a situation similar to yours - I was a TSM Server admin in a
previous life, currently admin over all Windows related B/R. The admins
here graciously granted my id unrestricted policy priv, which gives me
95% access, based on my prior knowledge. We (Windoze folks) maintain
all our
On Tue, Apr 29, 2008 at 5:51 PM, Remco Post [EMAIL PROTECTED] wrote:
Zoltan Forray/AC/VCU wrote:
I thought it was 800-bytes for each entry?
I once calculated it to be about 350 for a mainly unix environment...
How does one calculate such a thing?
--
Warm regards,
Michael Green
One other thing to consider is using CDP for files. That eases off a lot
of your TSM server database load, and by default it probably only backs up
the files you want to keep.
CDP ?
I would be reasonably happy with that arrangement-- I have asked for it, and
offered to help set it up, sign off on whatever waivers if the restrictions
don't work right and I stumble upon information that is not Revenue's.
There are forces at work that, being new to this organization, I
The TSM admins register the nodes. We request from the TSM admins to add,
delete, and associate client nodes. Anything that requires admin access is
performed by admins.
Unfortunately, I have come to learn that my happiness is not one of my daily
admin goals ;- )
thanks!
lisa
Oh, and Richard-- thanks for the hint. I'll see what I can do with it!
thanks!
lisa
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard
Sims
Sent: Tuesday, April 29, 2008 9:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM server
We have a couple of client nodes whose scheduled backups are oddly
running twice per day. We have a normal backup window scheduled from
5PM to Midnight. However, these clients are also running an extra
scheduler-triggered backup in the middle of the afternoon.
Both client and server are v5.5
Roger,
Could you have two clients with identical node names? Look for
ANR1639I messages showing attribute changes for the suspect
nodes.
Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has
You will get that message if you start a client session
using the -nodename=node switch from a machine
other than the client machine,
and then again when the client next contacts the server.
The ANR1639I "Attributes changed" message should give you the
machine name where the client session was run.
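One way to hunt for those messages from the admin command line (the date range is illustrative):

```
query actlog begindate=today-7 search=ANR1639I
```

The activity-log entries should show which client IP/hostname triggered each attribute change.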
Of course, if you just want a point-in-time snapshot of the current data while
continuing to use existing management classes for all data going forward, why
not kick off a backupset, eject the tape when complete, and send it off to the
vault with a copy of the BA client and the OS you're using.
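A sketch of that backupset approach; the node name, backup-set prefix, device class, and filespec here are all hypothetical:

```
* Point-in-time snapshot of a node's active files to its own tape
generate backupset PAYROLL payroll_pit * devclass=3592CLASS retention=3666 -
  description="PIT snapshot for offsite vault"

* Confirm it completed and see which volumes to eject:
query backupset PAYROLL *
```

Because a backupset is self-describing, it can later be restored directly by the BA client without the TSM server, which is why sending the client and OS media along makes sense.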
Michael Green wrote:
On Tue, Apr 29, 2008 at 5:51 PM, Remco Post [EMAIL PROTECTED] wrote:
Zoltan Forray/AC/VCU wrote:
I thought it was 800-bytes for each entry?
I once calculated it to be about 350 for a mainly unix environment...
How does one calculate such a thing?
select used_pages
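A back-of-the-envelope way to get the per-object figure, once USED_PAGES (from `select used_pages from db`) and the total object count are known. This is a sketch in Python, using the 130 GB / 287-million-object server mentioned earlier in the thread as the worked example; TSM 5.x database pages are 4 KB:

```python
PAGE_SIZE = 4096  # TSM 5.x database page size in bytes


def bytes_per_object(used_pages: int, num_objects: int) -> float:
    """Average DB footprint per stored object."""
    return used_pages * PAGE_SIZE / num_objects


# Roughly the 130 GB / 287M-object server mentioned above:
used = 130 * 1024**3 // PAGE_SIZE  # ~130 GB expressed as 4 KB pages
print(round(bytes_per_object(used, 287_000_000)))  # → 486
```

So that server lands between the 350-byte and 800-byte estimates quoted in the thread; the average depends heavily on filename lengths and how many versions each object carries.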
Zoltan Forray/AC/VCU wrote:
One other thing to consider is using CDP for files. That eases off a lot
of your TSM server database load, and by default it probably only backs up
the files you want to keep.
CDP ?
A Windows-only IBM product that can act as a TSM client; actually very
nice, I guess.
Mark Wheeler wrote:
One can also specify the location(s) of the volhist files in dsmserv.opt,
for example, I have it write to files in two different filesystems:
How nice that people actually read the whole discussion-thread before
contributing ;-) Yes you are right.
Thank you for your input
On Apr 29, 2008, at 4:41 PM, Mahesh Tailor wrote:
Of course, if you just want a point-in-time snapshot of the current
data while continuing to use existing management classes for all
data going forward, why not kick off a backupset ...
Where the requirement is to freeze all current Backup and
I can't help but wonder if a company that wants to
keep all its data for 10 years has a functioning
legal department. Are they aware that all that data
is subject to e-discovery?
FWIW:
CDP is continuous data protection
The point of CDP is to capture ALL changes to the users' working files. For
instance, by default it would monitor for changes to .doc and .xls and .txt
files, but not by default the .exe files. Versions are captured into a
local cache directory when