Re: TDP for SQL: does the id absolutely require SA priv?

2005-08-29 Thread Steve Schaub
Thanks for the definitive answer, Del.
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del
Hoobler
Sent: Monday, August 29, 2005 9:45 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for SQL: does the id absolutely require SA priv?

Steve,

Data Protection for SQL requires the SYSADMIN role for the ID that runs the
backups and restores. This is because Data Protection for SQL uses the
Microsoft-recommended SQL Server Virtual Device Interface (VDI) API for
performing backup and restore of the SQL Server databases.

In order to utilize the SQL Server "VDI" API, Microsoft SQL Server requires
the SYSADMIN role because the VDI API actually shares storage with the SQL
Server to increase performance. It also requires enough system permissions
to read and write to the local registry.
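If it helps, granting the role to the dedicated AD id is a one-liner. A
sketch, assuming a hypothetical account named BCBST\tsmsql (osql ships with
SQL Server 2000):

osql -E -Q "EXEC master..sp_addsrvrolemember 'BCBST\tsmsql', 'sysadmin'"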

The following is directly from the Microsoft VDI SDK documentation:

"Security
 The system objects used to implement the virtual device set are  secured
with an access control list. This list permits access to  all processes
running under the account used by the primary client.
 Access is also permitted to processes running under the account used  by
Microsoft(r) SQL Server?, as recorded in the system services configuration.

 The server connection for SQL Server that is used to issue the  BACKUP or
RESTORE commands must be logged in with the sysadmin fixed  server role. For
more information, see Microsoft SQL Server Books Online."

Thanks,

Del

"ADSM: Dist Stor Manager"  wrote on 08/25/2005
08:26:02 AM:

> TSM serv = 5.2.2.0
> TSM TDP = 5.2.1.0
>
> I'll spare you the political details, but our SQL Server admin is
claiming
> that NIST standard required him to remove SQL access from the SYSTEM
> account.  We created a specific AD id and have been testing, but he
wants to
> not grant this id SA priv, for the same reason.
>
> What is the minimum amount of priv an id needs to run TDP backups?
> The
TDP
> doc "seems" to assume SA priv, but is it absolutely required?  The
> admin would be running any restores from the gui under his own id.


TDP for SQL: does the id absolutely require SA priv?

2005-08-25 Thread Steve Schaub
TSM serv = 5.2.2.0
TSM TDP = 5.2.1.0

I'll spare you the political details, but our SQL Server admin is claiming
that NIST standards required him to remove SQL access from the SYSTEM
account.  We created a specific AD id and have been testing, but he does not
want to grant this id SA priv, for the same reason.

What is the minimum amount of priv an id needs to run TDP backups?  The TDP
doc "seems" to assume SA priv, but is it absolutely required?  The admin
would be running any restores from the gui under his own id.

thanks.

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)



Re: Problems with BMR on W2K (missing DLL's after restore)

2005-08-11 Thread Steve Schaub
Mike,
For what it's worth, we had better success with BMR using the 5.3.0.5 client
than the 5.2.2.0 when we did our hotsite test recently.
We also ran pre- and post-restore scripts that copied/restored some of the
hardware-specific files such as hal.dll, but that was because we needed to
restore to dissimilar hardware.
Does this machine have drives other than c:?  If so, they should be restored
first, in case any software references them.
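Our pre-restore step was essentially a stash-and-copy - a rough sketch (the
stash path and file list here are illustrative, not our exact script):

@echo off
rem stash the hardware-specific boot files before the TSM restore
rem overwrites them; the post-restore script copies them back
set STASH=c:\bmr-stash
if not exist %STASH% mkdir %STASH%
copy /y %SystemRoot%\system32\hal.dll %STASH%
copy /y %SystemRoot%\system32\ntoskrnl.exe %STASH%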
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Mike Hagery
Sent: Thursday, August 11, 2005 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Problems with BMR on W2K (missing DLL's after restore)

Hello,

TSM server Windows2000, TSM 5.2.2.0.
TSM client Windows2000 Advanced, TSM 5.2.2.0, Exchange-mail server

After a successful backup of all local drives (C, D, E, F + system objects)
and a full backup with ITSM_for_Mail (TDP for Exchange), we tried to do a
full restore of the system.

We ran into some problems after restoring the operating system, and I hope
someone can point me in the right direction.

These are the steps we followed:
- Basic installation W2K SP3 (same SP as installed during the backup)
- Server renamed to original name, not joined in domain
- IP config modified for connection with TSM-server
- Installation TSM client
- Modified dsm.opt
- Restore C-partition
NO REBOOT
- Restore System Objects
- Reboot

The system reboots fine, but a few seconds after the logon menu appears, two
popup error messages appear concerning 'Unable to locate DLL'.  One is for
inetinfo.exe:
IisRTL.dll could not be located in 

and the other is for AntigenIMC.exe (virusscanner):
Exstrace.dll could not be located in 

If we compare the DLLs with another Exchange server, the following are
missing:
iisext.dll
iismap.dll
iisrstap.dll
iisreset.exe

It is not possible to start the IIS-service manually.

I found this document:
DCF Document ID: 1164812 - IBM Tivoli Storage Manager: Modified Instructions
for Complete Restores of Windows Systems: Bare Metal Restore (BMR), System
State Restore, Windows System Object Restore

But after doing the restore again with the commands from the document:

- dsmc restore "{SYSTEM OBJECT}\winnt\system32\catroot\*"
  %systemroot%\system32\ -sub=yes -rep=all
- dsmc restore %systemdrive%\* -sub=yes -rep=all
- dsmc restore systemobject

I still get the same popup messages about the missing DLLs.

If I do a select on the server, the mentioned files are all in the backups
table.  If I do some queries on the client (q sysfiles, q systemobject,
show systemobject), everything is there.

Every tip is welcome.

Thanks in advance.

Mike



Re: 1 of 3 TOR reports refuses to generate?

2005-08-11 Thread Steve Schaub
Actually, we are putting TRT in sometime soon (by our AIX guys).
I use TOR because it was relatively quick & easy to set up, and I didn't
have time to go through the purchasing process.

One thing I do miss from my previous company is a custom report we generated
daily on all session activity, which allowed me to both filter and sort the
data.  It was really useful for spotting nodes with low network transfer
rates, for example, or easily listing the ten longest-running backups, or
seeing only restore sessions from TDP nodes - basically a limited but still
very useful ad hoc query tool.  There's nothing equivalent in either TOR or
TRT.  I can send you an example if you would be interested in seeing it.
Someday I'll have time to convert the rexx program from mainframe to
windows, but not today.
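The closest I've come on the server side is a select against the summary
table - a rough sketch (columns from the 5.2 summary table; adjust the
filter to taste):

select entity, activity, start_time, end_time, bytes, affected -
  from summary where activity='BACKUP' and -
  start_time>current_timestamp - 24 hours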

-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Cowen
Sent: Thursday, August 11, 2005 8:36 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] 1 of 3 TOR reports refuses to generate?

Hi Steve,

Just out of curiosity, could you tell me what TOR is giving you that TRT
doesn't?  I am always interested in improving our product...

Thanks.


1 of 3 TOR reports refuses to generate?

2005-08-11 Thread Steve Schaub
endation"
record TSM_Report_Warning version "1.0" timestamp "2005-08-11 06:52:31" file
"TSMPROD01TSMPROD01ReptDailyReport-Windows20050811065231.htm" type "0"
typename "Report" computer "TSMPROD01" instance "TSMPROD01" serverurl
"http://tsmprod01.bcbst.com:1584 <http://tsmprod01.bcbst.com:1584> " report
"Daily Report - Windows" begin "2005-08-10 06:30:31" end "2005-08-11
06:30:30" status "1" statusname "Needs attention" Message "The process
cannot access the file because it is being used by another process."
Condition "Return Code: 1" Recommendation "No recommendation"
record TSM_Report_Warning version "1.0" timestamp "2005-08-11 06:52:58" file
"TSMPROD01TSMPROD01ReptDailyReport-Windows20050811065258.htm" type "0"
typename "Report" computer "TSMPROD01" instance "TSMPROD01" serverurl
"http://tsmprod01.bcbst.com:1584 <http://tsmprod01.bcbst.com:1584> " report
"Daily Report - Windows" begin "2005-08-10 06:30:58" end "2005-08-11
06:30:57" status "1" statusname "Needs attention" Message "The process
cannot access the file because it is being used by another process."
Condition "Return Code: 1" Recommendation "No recommendation"
record TSM_Report_Warning version "1.0" timestamp "2005-08-11 06:53:56" file
"TSMPROD01TSMPROD01ReptDailyReport-Windows20050811065356.htm" type "0"
typename "Report" computer "TSMPROD01" instance "TSMPROD01" serverurl
"http://tsmprod01.bcbst.com:1584 <http://tsmprod01.bcbst.com:1584> " report
"Daily Report - Windows" begin "2005-08-10 06:30:56" end "2005-08-11
06:30:55" status "1" statusname "Needs attention" Message "The process
cannot access the file because it is being used by another process."
Condition "Return Code: 1" Recommendation "No recommendation"

 
Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)
 


Re: strange incremental behavior on Windows 5.2.2.0

2005-08-10 Thread Steve Schaub
This did fix the problem.
I guess I better get all of our 2003 servers upgraded soon.
Thanks for the assist, Richard & Andy!
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Andrew Raibeck
Sent: Tuesday, August 09, 2005 4:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] strange incremental behavior on Windows 5.2.2.0

I concur. The

08/07/2005 18:10:46 gtUpdateGroupAttr() server error 4 on update
SYSTEMSTATE\SYSFILES

message in the error log is a symptom of that APAR, too.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2005-08-09
12:11:54:

> Maybe this one?
>
>
>
>
>
> IC38377: Backup of Windows2003 System Object fails - ANS1950E - caused
> after a Microsoft VSS_E_PROVIDER_VETO error
>
>
>   A fix is available
> IBM Tivoli Storage Manager V5.2 Fix Pack 3 Clients and READMEs <
> http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg24007407>
>
>
> APAR status
> Closed as program error.
>
>
> Error description
> The backup of the Windows2003 System Object with:
>
> ANS1950E Backup via Microsoft Volume Shadow Copy failed.  See error
> log for more detail.
>
> The error reported in the dsmerror.log is:
>
> CreateSnapshotSet():  pAsync->QueryStatus() returns
> hr=VSS_E_PROVIDER_VETO
>
> It has also been reported that after this failure, subsequent
> incremental backups of the Windows 2003 system object may fail
> with:
>
> ANS1304W Active object not found
> Local fix
> Problem summary
> 
> * USERS AFFECTED: All TSM B/A client v5.2.0 v5.2.2 on  *
> * Windows 2003.*
> 
> * PROBLEM DESCRIPTION: See ERROR DESCRIPTION.  *
> 
> * RECOMMENDATION: Apply fixing PTF when available. *
> * This problem is currently projected  *
> * to be fixed in PTF level 5.2.3.  *
> * Note that this is subject to change at   *
> * the discretion of IBM.   *
> 
> If BACKUP SYSTEMSTATE or BACKUP SYSTEMSERVICES on Windows 2003 fails,
> subsequent incremental backups of the Windows 2003 system state or
> system service may fail with the message "ANS1304W Active object not
> found".
> Backup processing halts after the ANS1304W message is issued, even if
> there are more files to process.
> Problem conclusion
> The client code has been fixed so that after a system state or system
> services backup failure, subsequent attempts will not fail (barring
> any other conditions that might cause a failure).
> Temporary fix
> Windows interim fix 5.2.2.5

Re: strange incremental behavior on Windows 5.2.2.0

2005-08-09 Thread Steve Schaub
This seems to fit the situation, all right.
I'm going to try disabling the system state/services in the domain statement
and see if that helps tonight.
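For reference, the domain statement I plan to test looks like this (a
sketch; exclusion syntax per the Win2003 5.2 client doc, so double-check it
against your client level):

domain all-local -systemstate -systemservices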
I'll post results tomorrow.
Thanks.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Cowen
Sent: Tuesday, August 09, 2005 3:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] strange incremental behavior on Windows 5.2.2.0

Maybe this one?





IC38377: Backup of Windows2003 System Object fails - ANS1950E - caused after
a Microsoft VSS_E_PROVIDER_VETO error


  A fix is available
IBM Tivoli Storage Manager V5.2 Fix Pack 3 Clients and READMEs



APAR status 
Closed as program error.


Error description
The backup of the Windows2003 System Object with:

ANS1950E Backup via Microsoft Volume Shadow Copy failed.  See error log for
more detail.

The error reported in the dsmerror.log is:

CreateSnapshotSet():  pAsync->QueryStatus() returns hr=VSS_E_PROVIDER_VETO

It has also been reported that after this failure, subsequent incremental
backups of the Windows 2003 system object may fail
with:

ANS1304W Active object not found
Local fix
Problem summary

* USERS AFFECTED: All TSM B/A client v5.2.0 v5.2.2 on  *
* Windows 2003.*

* PROBLEM DESCRIPTION: See ERROR DESCRIPTION.  *

* RECOMMENDATION: Apply fixing PTF when available. *
* This problem is currently projected  *
* to be fixed in PTF level 5.2.3.  *
* Note that this is subject to change at   *
* the discretion of IBM.   *

If BACKUP SYSTEMSTATE or BACKUP SYSTEMSERVICES on Windows 2003 fails,
subsequent incremental backups of the Windows 2003 system state or system
service may fail with the message "ANS1304W Active object not found".
Backup processing halts after the ANS1304W message is issued, even if there
are more files to process.
Problem conclusion
The client code has been fixed so that after a system state or system
services backup failure, subsequent attempts will not fail (barring any
other conditions that might cause a failure).
Temporary fix
Windows interim fix 5.2.2.5



Re: strange incremental behavior on Windows 5.2.2.0

2005-08-09 Thread Steve Schaub
> \\friendship\c$\Program 
> Files\Tivoli\TSM\baclient\dsmerror.log  Changed
> 08/07/2005 18:11:43 Retry # 2  Normal File--> 2,477,990
> \\friendship\c$\Program 
> Files\Tivoli\TSM\baclient\dsmerror.log  Changed
> 08/07/2005 18:11:43 ANS1802E Incremental backup of '\\friendship\c$'
> finished with 29 failure
>
> 08/07/2005 18:11:44 Successful incremental backup of '\\friendship\f$'
>
> 08/07/2005 18:11:44 Successful incremental backup of '\\friendship\g$'
>
> 08/07/2005 18:11:45 ANS1898I * Processed 9,500 files *
> 08/07/2005 18:11:45 ANS1304W Active object not found
>
> 08/07/2005 18:11:45 --- SCHEDULEREC STATUS BEGIN
> 08/07/2005 18:11:46 Total number of objects inspected:9,829
> 08/07/2005 18:11:46 Total number of objects backed up:  262
> 08/07/2005 18:11:46 Total number of objects updated:  0
> 08/07/2005 18:11:46 Total number of objects rebound:  0
> 08/07/2005 18:11:46 Total number of objects deleted:  0
> 08/07/2005 18:11:46 Total number of objects expired:  5
> 08/07/2005 18:11:46 Total number of objects failed:  29
> 08/07/2005 18:11:46 Total number of bytes transferred:83.70 MB
> 08/07/2005 18:11:46 Data transfer time:   33.81 sec
> 08/07/2005 18:11:46 Network data transfer rate:2,535.23 KB/sec
> 08/07/2005 18:11:46 Aggregate data transfer rate:307.19 KB/sec
> 08/07/2005 18:11:46 Objects compressed by:0%
> 08/07/2005 18:11:46 Elapsed processing time:   00:04:39
> 08/07/2005 18:11:46 --- SCHEDULEREC STATUS END
> 08/07/2005 18:11:46 --- SCHEDULEREC OBJECT END INCR_1800 08/07/2005
18:00:00
> 08/07/2005 18:11:47 Scheduled event 'INCR_1800' completed successfully.
> 08/07/2005 18:11:47 Sending results for scheduled event 'INCR_1800'.
> 08/07/2005 18:11:47 Results sent to server for scheduled event
'INCR_1800'.
>
> 08/07/2005 18:11:47 ANS1483I Schedule log pruning started.
> 08/07/2005 18:11:56 ANS1484I Schedule log pruning finished successfully.
> 08/07/2005 18:11:56 Querying server for next scheduled event.
> 08/07/2005 18:11:56 Node Name: FRIENDSHIP
> 08/07/2005 18:11:56 Session established with server TSMPROD02:
AIX-RS/6000
> 08/07/2005 18:11:56   Server Version 5, Release 2, Level 2.0
> 08/07/2005 18:11:56   Server date/time: 08/07/2005 18:05:24  Last
access:
> 08/07/2005 18:04:14
>
> 08/07/2005 18:11:56 --- SCHEDULEREC QUERY BEGIN
> 08/07/2005 18:11:57 --- SCHEDULEREC QUERY END
> 08/07/2005 18:11:57 Next operation scheduled:
> 08/07/2005 18:11:57
> 
> 08/07/2005 18:11:57 Schedule Name: INCR_1800
> 08/07/2005 18:11:57 Action:Incremental
> 08/07/2005 18:11:57 Objects:
> 08/07/2005 18:11:57 Options:
> 08/07/2005 18:11:57 Server Window Start:   18:00:00 on 08/08/2005
> 08/07/2005 18:11:57
> 
> 08/07/2005 18:11:57 Scheduler has been stopped.
>
>
>   _
>
> From: Richard Cowen [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, August 09, 2005 12:14 PM
> To: Schaub, Steve
> Subject: RE: [ADSM-L] strange incremental behavior on Windows 5.2.2.0
>
>
> Steve-
>
> When you use the command line, do some files get backed up?
> Do filespaces seem to be "seen"?
> With a command line client, what does query inclexcl show?
>
>   _
>
> From: ADSM: Dist Stor Manager on behalf of Steve Schaub
> Sent: Tue 8/9/2005 10:09 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] strange incremental behavior on Windows 5.2.2.0
>
>
>
> TSM server 5.2.2.0 on AIX, client 5.2.2.0 on Win 2003 Std
>
> About 2 weeks ago we noticed that the daily scheduled incremental
> backups on our 2 big fileservers were behaving oddly.
>
> When the scheduled incrementals run, or if I use an immediate action,
> or command line from the machines themselves, the backups look like
> they complete normally, but they only show that they have examined
> < 10k files.  There is no error message in the sched log or the
> activity log, nothing in dsmerror or dsierror, no dump, nothing.  The
> schedule reports completed successfully.
>
> If I termserv to the machines and back up through the client gui, it
> behaves normally and backs up tons of files.
>
> These file servers each have 2 volumes of 1.7TB and 1.3TB with over
> 5 million files on each.  These volumes have been increased every few
> weeks due to running out of space.
>
> I haven't found any apar that seems to fit this odd behavior, and we
> haven't touched the tsm client on these machines for at least 6
> months.  We are not using journaling on them.
>
> Has anyone seen anything that would shed light?
>
> Steve Schaub
> Systems Engineer, WNI
> BlueCross BlueShield of Tennessee
> 423-752-6574 (desk)
> 423-785-7347 (cell)


Re: strange incremental behavior on Windows 5.2.2.0

2005-08-09 Thread Steve Schaub
08/07/2005 18:11:46 Objects compressed by:0%
08/07/2005 18:11:46 Elapsed processing time:   00:04:39
08/07/2005 18:11:46 --- SCHEDULEREC STATUS END
08/07/2005 18:11:46 --- SCHEDULEREC OBJECT END INCR_1800 08/07/2005 18:00:00
08/07/2005 18:11:47 Scheduled event 'INCR_1800' completed successfully.
08/07/2005 18:11:47 Sending results for scheduled event 'INCR_1800'.
08/07/2005 18:11:47 Results sent to server for scheduled event 'INCR_1800'.

08/07/2005 18:11:47 ANS1483I Schedule log pruning started.
08/07/2005 18:11:56 ANS1484I Schedule log pruning finished successfully.
08/07/2005 18:11:56 Querying server for next scheduled event.
08/07/2005 18:11:56 Node Name: FRIENDSHIP
08/07/2005 18:11:56 Session established with server TSMPROD02: AIX-RS/6000
08/07/2005 18:11:56   Server Version 5, Release 2, Level 2.0
08/07/2005 18:11:56   Server date/time: 08/07/2005 18:05:24  Last access:
08/07/2005 18:04:14

08/07/2005 18:11:56 --- SCHEDULEREC QUERY BEGIN
08/07/2005 18:11:57 --- SCHEDULEREC QUERY END
08/07/2005 18:11:57 Next operation scheduled:
08/07/2005 18:11:57

08/07/2005 18:11:57 Schedule Name: INCR_1800
08/07/2005 18:11:57 Action:Incremental
08/07/2005 18:11:57 Objects:
08/07/2005 18:11:57 Options:
08/07/2005 18:11:57 Server Window Start:   18:00:00 on 08/08/2005
08/07/2005 18:11:57

08/07/2005 18:11:57 Scheduler has been stopped.


  _

From: Richard Cowen [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 09, 2005 12:14 PM
To: Schaub, Steve
Subject: RE: [ADSM-L] strange incremental behavior on Windows 5.2.2.0


Steve-

When you use the command line, do some files get backed up?
Do filespaces seem to be "seen"?
With a command line client, what does query inclexcl show?

  _

From: ADSM: Dist Stor Manager on behalf of Steve Schaub
Sent: Tue 8/9/2005 10:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] strange incremental behavior on Windows 5.2.2.0



TSM server 5.2.2.0 on AIX, client 5.2.2.0 on Win 2003 Std

About 2 weeks ago we noticed that the daily scheduled incremental backups on
our 2 big fileservers were behaving oddly.

When the scheduled incrementals run, or if I use an immediate action, or
command line from the machines themselves, the backups look like they
complete normally, but they only show that they have examined < 10k files.
There is no error message in the sched log or the activity log, nothing in
dsmerror or dsierror, no dump, nothing.  The schedule reports completed
successfully.

If I termserv to the machines and back up through the client gui, it behaves
normally and backs up tons of files.

These file servers each have 2 volumes of 1.7TB and 1.3TB with over 5
million files on each.  These volumes have been increased every few weeks
due to running out of space.

I haven't found any apar that seems to fit this odd behavior, and we haven't
touched the tsm client on these machines for at least 6 months.  We are not
using journaling on them.

Has anyone seen anything that would shed light?

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)



strange incremental behavior on Windows 5.2.2.0

2005-08-09 Thread Steve Schaub
TSM server 5.2.2.0 on AIX, client 5.2.2.0 on Win 2003 Std

About 2 weeks ago we noticed that the daily scheduled incremental backups on
our 2 big fileservers were behaving oddly.

When the scheduled incrementals run, or if I use an immediate action, or
command line from the machines themselves, the backups look like they
complete normally, but they only show that they have examined < 10k files.
There is no error message in the sched log or the activity log, nothing in
dsmerror or dsierror, no dump, nothing.  The schedule reports completed
successfully.

If I termserv to the machines and back up through the client gui, it behaves
normally and backs up tons of files.

These file servers each have 2 volumes of 1.7TB and 1.3TB with over 5
million files on each.  These volumes have been increased every few weeks
due to running out of space.

I haven't found any apar that seems to fit this odd behavior, and we haven't
touched the tsm client on these machines for at least 6 months.  We are not
using journaling on them.

Has anyone seen anything that would shed light?

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)



Re: How can I change the management class of a node?

2005-07-20 Thread Steve Schaub
Although there have been several times that I wished there were such a
command.  Like right now, where I have walked into a system that is in
desperate need of simplification and housekeeping, but not being able to
manually rebind legacy data to new mgmtclasses is really tying my hands.  Of
course it was easy to implement the client option sets to rebind existing
files where I want them - the problem is the tons of data no longer on the
client nodes that I can't root out of the rat's nest it is currently hiding
in.
After all, "tinkering" is part of our TSM Admin job description ;-}
(sigh) maybe in 5.4

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Wednesday, July 20, 2005 11:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How can I change the management class of a node?

In TSM, files (not nodes) are associated with management classes.
When files are sent from the client to the TSM server, they are bound to a
management class, as can be controlled from the client.  Once in TSM server
storage, file management classes cannot be changed by any command.  The only
way to change the management class of a file is to send the file to the TSM
server again, with a reformulated Include statement in effect for Backup, or
an Include,Archive or ARCHMc spec for Archive.  This discipline is in effect
because enterprise level data retention calls for retention values which are
set and known at file commit time, and which are not subsequently tinkered
with by someone beyond the client realm.  Refer to "Binding and rebinding
management classes to files" in the client manual.

Richard Sims


Re: TDP for Exchange 5.2.1.0 Policy clarification

2005-07-15 Thread Steve Schaub
Thanks, Del.
Always appreciate someone grabbing my hand when I'm about to shoot myself in
the foot!
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del
Hoobler
Sent: Thursday, July 14, 2005 5:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for Exchange 5.2.1.0 Policy clarification

Steve,

All backups of the same name (type) will have the same management class.
You cannot back up "certain" full backups and direct them to a different
management class.  All previous full backups will get rebound to the
management class used for that invocation.  It works the same way as the
base file-level client.

COPY backups are named differently, and thus can be bound by name to a
different management class.  You need to use COPY backups to bind them to a
different management class... otherwise the previous full backups which you
think are being kept forever... won't be.
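In other words, the monthly run would be invoked as a COPY backup, roughly
like this (5.2 TDP for Exchange CLI; the logfile name is arbitrary):

tdpexcc backup * copy /logfile=excfullcopy.log

The COPY objects can then be bound by name to the never-expire management
class with an include statement in the TDP options file.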

Thanks,

Del




"ADSM: Dist Stor Manager"  wrote on 07/14/2005
04:10:15 PM:

> I have a separate command script for the weekly/monthly fulls that
> directs the data to a special mgmtclass.  I really want to avoid
> defining a 3rd nodename for these machines.
>
> What is your reason for using "COPY" backups instead of fulls?  Since
> both the expiring and non-expiring fulls are available to use for
> restore, why not commit the logs?
>
> Thanks,
> -steve
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Del Hoobler
> Sent: Thursday, July 14, 2005 3:12 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TDP for Exchange 5.2.1.0 Policy clarification
>
> Steve,
>
> The Daily and Hourly settings look fine to me.
>
> How are you performing your monthly/weekly "fulls"?
> You should either make them "COPY" type backups and bind the "COPY"
> backups to a different management class with the policy settings
> NOLimit/NOLimit/NOLimit/NOLimit, or use a different NODENAME for those
> backups and have the full backups use the policy settings
> NOLimit/NOLimit/NOLimit/NOLimit.
>
> Thanks,
>
> Del
>
> 
>
> "ADSM: Dist Stor Manager"  wrote on 07/14/2005
> 12:01:07 PM:
>
> > TSM server 5.2.4.3
> >
> > We are changing some policies for our Exchange environment and after
> > reading the TDP user guide I am questioning if my previous assumptions
> > are correct.
> >
> > We were planning on using the following policy:
> >
> > Daily fulls expiring after 5 days
> > Hourly incrementals expiring after 3 days
> > Weekly or Monthly (dependent on each server's deleted item retention)
> > fulls that never expire
> >
> > Is this a valid policy, assuming that any full backup older than 3 days
> > is basically a "snapshot", and not usable for bringing data back to the
> > minute?
> >
> > Is this a valid backup copy group setting
> > (VerExist/VerDel/RetXtra/RetOnly):
> >
> >    Fulls - nl/nl/5/5
> >    Incr - nl/nl/3/3
> >
> > thanks.
> >
> > Steve Schaub


Re: TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Steve Schaub
I have a separate command script for the weekly/monthly fulls that directs
the data to a special mgmtclass.  I really want to avoid defining a 3rd
nodename for these machines.

What is your reason for using "COPY" backups instead of fulls?  Since both
the expiring and non-expiring fulls are available to use for restore, why
not commit the logs?

Thanks,
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del
Hoobler
Sent: Thursday, July 14, 2005 3:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for Exchange 5.2.1.0 Policy clarification

Steve,

The Daily and Hourly settings look fine to me.

How are you performing your monthly/weekly "fulls"?
You should either make them "COPY" type backups and bind the "COPY" backups
to a different management class with the policy settings
NOLimit/NOLimit/NOLimit/NOLimit, or use a different NODENAME for those
backups and have the full backups use the policy settings
NOLimit/NOLimit/NOLimit/NOLimit.

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 07/14/2005
12:01:07 PM:

> TSM server 5.2.4.3
>
> We are changing some policies for our Exchange environment and after
> reading the TDP user guide I am questioning if my previous assumptions
> are correct.
>
> We were planning on using the following policy:
>
> Daily fulls expiring after 5 days
> Hourly incrementals expiring after 3 days
> Weekly or Monthly (dependent on each server's deleted item retention)
> fulls that never expire
>
> Is this a valid policy, assuming that any full backup older than 3 days
> is basically a "snapshot", and not usable for bringing data back to the
> minute?
>
> Is this a valid backup copy group setting
> (VerExist/VerDel/RetXtra/RetOnly):
>
>    Fulls - nl/nl/5/5
>    Incr - nl/nl/3/3
>
> thanks.
>
> Steve Schaub


TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Steve Schaub
TSM server 5.2.4.3

We are changing some policies for our Exchange environment and after reading
the TDP user guide I am questioning if my previous assumptions are correct.

We were planning on using the following policy:

Daily fulls expiring after 5 days
Hourly incrementals expiring after 3 days
Weekly or Monthly (dependent on each server's deleted item retention) fulls
that never expire

Is this a valid policy, assuming that any full backup older than 3 days is
basically a "snapshot", and not usable for bringing data back to the minute?

Is this a valid backup copy group setting (VerExist/VerDel/RetXtra/RetOnly):

   Fulls - nl/nl/5/5
   Incr - nl/nl/3/3
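Expressed as server commands, those settings would look roughly like this
(domain, policy set, class, and pool names are placeholders):

define copygroup exch_dom standard full_mc standard type=backup -
  dest=excpool verexists=nolimit verdeleted=nolimit retextra=5 retonly=5
define copygroup exch_dom standard incr_mc standard type=backup -
  dest=excpool verexists=nolimit verdeleted=nolimit retextra=3 retonly=3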

thanks.

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)



Re: Storage Pool Backup Issue

2005-06-29 Thread Steve Schaub
True - so what Joni needs is a script that uses 6 "streams" to backup the
diskpools, then throttles back to 3 "streams" to do the tape to tape backup.
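Each script would run its share of the pools serially, something like this
(pool names invented; wait=yes is what serializes the steps):

/* script OFFSITE1 - disk pools first, then the tape pool */
backup stgpool diskpool1 copypool maxprocess=1 wait=yes
backup stgpool diskpool2 copypool maxprocess=1 wait=yes
backup stgpool tapepool1 copypool maxprocess=1 wait=yes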
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Bill Dourado
Sent: Wednesday, June 29, 2005 11:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Storage Pool Backup Issue

With 6 scripts, you could end up with up to 6 processes each backing up
"onsite tape to offsite tape", requiring 12 drives!  You would also end up
with several offsite tapes.

Bill








Steve Schaub <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" 
29/06/2005 16:12
Please respond to "ADSM: Dist Stor Manager"

To: ADSM-L@VM.MARIST.EDU
cc:
Subject: Re: [ADSM-L] Storage Pool Backup Issue


Joni,

Sounds to me like you need 6 scripts that all use wait=yes, and you will
have to balance which pools get backed up in each script.

-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Joni Moyer
Sent: Wednesday, June 29, 2005 10:59 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Storage Pool Backup Issue

Hello Everyone!

I have an issue of drive shortage and I would like to cascade my backup jobs
of my storage pools.  I know that you first want to make the disk to offsite
tape copy and then the onsite tape to offsite tape copy for each storage
pool.  My question is, how do I combine all 14 jobs in a cascading order so
that they are only using a maximum of 6 drives at a time?  I thought about
putting them all within a script and running them with wait=yes but I don't
think that will give me exactly what I want to do.  If anyone has any
suggestions I would appreciate the input!  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: Storage Pool Backup Issue

2005-06-29 Thread Steve Schaub
Joni,

Sounds to me like you need 6 scripts that all use wait=yes, and you will
have to balance which pools get backed up in each script.

-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Joni Moyer
Sent: Wednesday, June 29, 2005 10:59 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Storage Pool Backup Issue

Hello Everyone!

I have an issue of drive shortage and I would like to cascade my backup jobs
of my storage pools.  I know that you first want to make the disk to offsite
tape copy and then the onsite tape to offsite tape copy for each storage
pool.  My question is, how do I combine all 14 jobs in a cascading order so
that they are only using a maximum of 6 drives at a time?  I thought about
putting them all within a script and running them with wait=yes but I don't
think that will give me exactly what I want to do.  If anyone has any
suggestions I would appreciate the input!  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: firewall backup oddity

2005-06-23 Thread Steve Schaub
Actually, it is tcpclientport in the file - I inadvertently messed it up as
I was removing the ip addresses from public view.  Good catch, though.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kathleen M Hallahan
Sent: Thursday, June 23, 2005 9:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] firewall backup oddity

Should one of those tcpport settings actually be a tcpclientport?

We've been through this with firewalls and now just use tcpclientport in the
default configuration, using the same port number as tcpport.
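In other words, the identification section ends up looking like this
(addresses masked as in Steve's post):

tcpserveraddress   x.x.x.x
tcpport            1500
tcpclientaddress   x.x.x.x
tcpclientport      1500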

Kathleen




 "Steve Schaub"
 <[EMAIL PROTECTED]
 ST.COM>To
   ADSM-L@VM.MARIST.EDU
  Sent by : "ADSM: Dist Storcc
  Manager"
 Subject
   firewall backup oddity

 06/23/2005 07:42
 AM


 Please respond to
  "ADSM: Dist Stor
  Manager"
  <[EMAIL PROTECTED]
   T.EDU>






All,

I have a windows 5.3.0.5 client behind a firewall that won't back up via its
tsm scheduled backup to my aix 5.2.2.0 server, yet works ok when I initiate
a manual backup via the native gui or command line on the client itself.  It
sometimes even works when I kick off an immediate action schedule.  I
thought I had gone through the correct steps to make this work, so I'm
stumped.  The network guys assure me that port 1500 is open in both
directions for the client & server ip addresses.  Any pointers in the right
direction would be appreciated.
-steve schaub


Date/Time          Message
--------------------------------------------------------------------------
06/23/05 00:04:05  ANR2716E Schedule prompter was not able to contact
                   client SHADRACH using type 1 (10.151.4.66 1068).
                   (SESSION: 496)
06/23/05 01:00:01  ANR2578W Schedule SHADRACH in domain NT_WEBSERVERS for
                   node SHADRACH has missed its scheduled start up window.
06/23/05 06:25:03  ANR2017I Administrator S42919S issued command: QUERY
                   ACTLOG begind=-1 begint=17:00 s=shadrach
                   (SESSION: 6099)



q node shadrach f=d

 Node Name: SHADRACH
  Platform: WinNT
   Client OS Level: 5.00
Client Version: Version 5, Release 3, Level 0.5
Policy Domain Name: NT_WEBSERVERS
 Last Access Date/Time: 06/22/05 16:43:53
Days Since Last Access: 1
Password Set Date/Time: 12/17/02 16:08:24
   Days Since Password Set: 919
 Invalid Sign-on Count: 0
   Locked?: No
   Contact: Len Starnes
   Compression: Client
   Archive Delete Allowed?: Yes
Backup Delete Allowed?: No
Registration Date/Time: 09/04/01 14:54:55
 Registering Administrator: MELINDA
Last Communication Method Used: Tcp/Ip
   Bytes Received Last Session: 3,788.75 M
   Bytes Sent Last Session: 12,063
  Duration of Last Session: 10,601.94
   Pct. Idle Wait Last Session: 53.52
  Pct. Comm. Wait Last Session: 36.96
  Pct. Media Wait Last Session: 0.00
 Optionset: WINDOWS
   URL: http://shadrach:1581
 Node Type: Client
Password Expiration Period: 9,999 Day(s)
 Keep Mount Point?: No
  Maximum Mount Points Allowed: 2
Auto Filespace Rename : No
 Validate Protocol: No
   TCP/IP Name: SHADRACH
TCP/IP Address: x.x.x.x
Globally Unique ID:
30.c8.92.41.11.db.11.d7.a5.e4.00.03.47.4d.37.7a
 Transaction Group Max: 0
   Data Write Path: ANY
Data Read Path: ANY
Session Initiation: ClientOrServer
High-level Address: x.x.x.x
 Low-level Address: 1500



dsm.opt on shadrach

*==*
* Tivoli Storage Manager - BCBST Win Servers Backup-Archive Clients*
*==*

*==*
* Identification Section   *
*==*
nodename   SHADRACH
tcpserveraddress   x.x.x.x
tcpport1500
tcpclientaddress   x.x.x.x
tcpport1500
*==*
* Communication Section*
*==*
commmethod TCPIP
tc

firewall backup oddity

2005-06-23 Thread Steve Schaub
All,

I have a windows 5.3.0.5 client behind a firewall that won't back up via its
tsm scheduled backup to my aix 5.2.2.0 server, yet works ok when I initiate
a manual backup via the native gui or command line on the client itself.  It
sometimes even works when I kick off an immediate action schedule.  I
thought I had gone through the correct steps to make this work, so I'm
stumped.  The network guys assure me that port 1500 is open in both
directions for the client & server ip addresses.  Any pointers in the right
direction would be appreciated.
-steve schaub


Date/Time          Message
--------------------------------------------------------------------------
06/23/05 00:04:05  ANR2716E Schedule prompter was not able to contact
                   client SHADRACH using type 1 (10.151.4.66 1068).
                   (SESSION: 496)
06/23/05 01:00:01  ANR2578W Schedule SHADRACH in domain NT_WEBSERVERS for
                   node SHADRACH has missed its scheduled start up window.
06/23/05 06:25:03  ANR2017I Administrator S42919S issued command: QUERY
                   ACTLOG begind=-1 begint=17:00 s=shadrach (SESSION: 6099)



q node shadrach f=d

 Node Name: SHADRACH
  Platform: WinNT
   Client OS Level: 5.00
Client Version: Version 5, Release 3, Level 0.5
Policy Domain Name: NT_WEBSERVERS
 Last Access Date/Time: 06/22/05 16:43:53
Days Since Last Access: 1
Password Set Date/Time: 12/17/02 16:08:24
   Days Since Password Set: 919
 Invalid Sign-on Count: 0
   Locked?: No
   Contact: Len Starnes
   Compression: Client
   Archive Delete Allowed?: Yes
Backup Delete Allowed?: No
Registration Date/Time: 09/04/01 14:54:55
 Registering Administrator: MELINDA
Last Communication Method Used: Tcp/Ip
   Bytes Received Last Session: 3,788.75 M
   Bytes Sent Last Session: 12,063
  Duration of Last Session: 10,601.94
   Pct. Idle Wait Last Session: 53.52
  Pct. Comm. Wait Last Session: 36.96
  Pct. Media Wait Last Session: 0.00
 Optionset: WINDOWS
   URL: http://shadrach:1581
 Node Type: Client
Password Expiration Period: 9,999 Day(s)
 Keep Mount Point?: No
  Maximum Mount Points Allowed: 2
Auto Filespace Rename : No
 Validate Protocol: No
   TCP/IP Name: SHADRACH
TCP/IP Address: x.x.x.x
Globally Unique ID:
30.c8.92.41.11.db.11.d7.a5.e4.00.03.47.4d.37.7a
 Transaction Group Max: 0
   Data Write Path: ANY
Data Read Path: ANY
Session Initiation: ClientOrServer
High-level Address: x.x.x.x
 Low-level Address: 1500



dsm.opt on shadrach

*==*
* Tivoli Storage Manager - BCBST Win Servers Backup-Archive Clients*
*==*

*==*
* Identification Section   *
*==*
nodename   SHADRACH
tcpserveraddress   x.x.x.x
tcpport1500
tcpclientaddress   x.x.x.x
tcpport1500
*==*
* Communication Section*
*==*
commmethod TCPIP
tcpbuffsize512
tcpwindowsize  1024
httpport   1581

*==*
* CAD/Schedule Settings Section*
*==*
passwordaccess generate
managedservicesschedule webclient
errorlogretention  90
schedlogretention  14

*==*
* Note: Include-Exclude processing is controlled via Client Option Sets*
*   associated to each node on the TSM Server  *
*==*



client option set has the following settings besides include/excludes:

OptSet  Option Seq#  Value

--  --  ---
---
WINDOWS DOMAIN   50  all-local

WINDOWS SCHEDMODE60  prompted

WINDOWS TXNBYTELIMIT 70  262144

WINDOWS CHANGINGRETRIES  80  2

WINDOWS RESOURCEUTILIZATION  9

Re: Backing up more than one instance on the same SQL 2000 server

2005-06-13 Thread Steve Schaub
Luc,

Modify your backup script to use a named instance.
Here is what our sqlfull.cmd looks like (first backup catches default
instance, 2nd one is specific):

@ECHO OFF
rem  *
rem  * Command file containing commands to do a scheduled*
rem  * full backup of Microsoft SQL Server databases to  *
rem  * TSM storage.  *
rem  *

set exc_dir="C:\Program Files\Tivoli\TSM\TDPSQL"

cd /d %exc_dir%

echo Current date is %date% >> sqlfull.log
echo Current time is %time% >> sqlfull.log

start /B tdpsqlc backup * full /logfile=sqlsched.log >> sqlfull.log
start /B tdpsqlc backup * full /SQLSERVer=croaker\broker
/logfile=sqlsched2.log >> sqlfull2.log

rem  *
rem  * if multiple SQL Server instances are installed, use the following *
rem  * example to perform specific backups on each one   *
rem  *
rem start /B tdpsqlc backup * full /SQLSERVer=rubicon\crestone
/logfile=sqlsched.log >> sqlfull.log 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Luc
Beaudoin
Sent: Monday, June 13, 2005 2:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backing up more than one instance on the same SQL 2000
server

Hi all
I asked that question last week, but I looked in the documentation and it's
not that clear...

Is anyone out there doing this?  Can someone tell me what to do?


Thanks

Luc Beaudoin
Administrateur Réseau / Network Administrator Hopital General Juif S.M.B.D.
Tel: (514) 340-8222 ext:8254 


Re: Backing up VMWare from Guest -vs- backing up .dsk fi les

2005-05-26 Thread Steve Schaub
Our biggest driver was DR.
Getting some of our app and AD servers restored & running on dissimilar
hardware was getting too complicated & unreliable.
We weren't keeping many versions of files on these servers anyway, so
keeping versions of the .vmdk files is not that big of a deal to us.
Loading ESX & restoring vm's is easier for us to manage at the hotsite than
multiple physicals.
Now if I could only get it to work consistently.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
TSM_User
Sent: Thursday, May 26, 2005 10:05 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backing up VMWare from Guest -vs- backing up .dsk files

I have quite a few customers that are running VMWare and so far all of them
have decided to just backup from within the virtual machine for the
following reasons:

1)  Being that TSM is licensed per physical CPU there is no cost savings in
using a single client license running on the ESX server to backup the .dsk
files.

2) When you backup inside the VM you can either restore the data back onto a
new VM or onto a real server if you like. Some of my customers run the VM at
their site but have separate old servers at the hot site for recovery.

3) Most of the time VM's end up being smaller applications.  As a result
there really isn't a huge need to improve the backup or restore time. I've
been told that they can still run an image based backup from within the VM
as well.  I haven't done this myself but if this is the case then you could
still get a fast image backup and restore.

4) Who will officially support you when you run into problems with backing
up the VMWare files and restores don't work?


With those reasons above, and the recent posts about issues with backup and
recovery of the VM files, I'm wondering what reasons there are for backing
up the .dsk files.  Still, I'm sure there are reasons out there.  I'd be
interested in hearing from the people who are choosing to back up the .dsk
files why they decided to go that route.








Re: Vmware guest "hot" backups

2005-05-25 Thread Steve Schaub
1. server=5.2.4.3 client=5.2.0.0 (trying to get 5.3, but we are missing some
prereq rpm's)

2. 2.5

3. we have only tried 2 guests so far, an app server and a domain
controller.  Most of our testing has been on the dc.  All the backups worked
until we turned "Validate Protocol" (crc) on, per Tivoli support - now every
backup fails.  On the restore side, it's more like 50/50 - there are 2
drives on this guest, and we have had to restore one drive from one day, and
the other from a different day, in order to make things work.

4. yes, they are also sending us an updated version of their script, and
helping us get the prereq rpm's.

5. we have tried using cp, but we have not tried restoring from it.  We have
also started experimenting with vmware's export, in case we are running into
an issue with backing up a 15gb file.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Stapleton, Mark
Sent: Wednesday, May 25, 2005 3:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Vmware guest "hot" backups

Questions:

1. The version of the TSM server and client you're working with.
2. Are you at version 2.5 of ESX, or 2.5.1?
3. When you say "several backups", can you give us an idea of how many
backups you run, and how many restore properly and improperly?
4. Have you talked to VMWare support about this issue?
5. Have you attempted backups and restores using cp or dd, instead of TSM?

--
Mark Stapleton ([EMAIL PROTECTED])
IBM Certified Advanced Deployment Professional
  Tivoli Storage Management Solutions
2005 IBM Certified Advanced Technical Expert (CATE) AIX
Office 262.521.5627



>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
>Of Steve Schaub
>Sent: Wednesday, May 25, 2005 1:55 PM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: Vmware guest "hot" backups
>
>We have had several backups fail, but worse, we have had some restores
>bomb out (like 300mb away from the end of a 15gb restore), and even
>worse yet, we have had some restores work, but the .vmdk came back
>corrupted and vmware couldn't even mount it.  We are planning on using
>this method to do DR recovery of our AD environment, so a reliable
>backup/restore is *fairly* important.
>
>We have seen this in the dsmerror.log on the vmhost after a failed
>restore:
>06/24/04 14:21:44 The 299579424th code was found to be out of sequence.
>The code (424) was greater than (262), the next available slot in the
>string table.
>06/24/04 14:21:44 The 299579425th code was found to be out of sequence.
>The code (288) was greater than (263), the next available slot in the
>string table.
>06/24/04 14:21:44 The 299579446th code was found to be out of sequence.
>The code (384) was greater than (267), the next available slot in the
>string table.
>
>We have a call in to Tivoli, but nothing in the way of a concrete fix
>yet...
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
>Of Stapleton, Mark
>Sent: Wednesday, May 25, 2005 11:32 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: [ADSM-L] Vmware guest "hot" backups
>
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
>Of Steve Schaub
>>My team & I would be interested in speaking with anyone who has been
>>successful in using TSM to backup/restore what the database
>folks would
>>call a "hot" backup of VMWare guest machines (i.e. backing up
>the .vmdk
>>files while the guest is in redo mode).
>>
>>We have run several test backup/restores, some less successful than
>>others.
>>We are using ESX 2.5 currently.
>>
>>If you have gotten this to work consistantly and would be willing to
>>share your experience, please email me directly:
>[EMAIL PROTECTED]
>><mailto:[EMAIL PROTECTED]>
>
>Yep, as long as we're talking VMWare ESX. The GSX documentation
>explicitly says that the only supported backups of VM occur only when
>the VM is shut down prior to backup.
>
>The ESX software allows for a "freeze" of a VM (with the use of a REDO
>log) as you describe; while the VM is frozen, the .vmdk file can be
>backed up reliably. Once the backup is complete, the log is committed,
>and you're done.
>
>What problems are you having?
>
>--
>Mark Stapleton ([EMAIL PROTECTED])
>IBM Certified Advanced Deployment Professional
>  Tivoli Storage Management Solutions 2005 IBM Certified Advanced
>Technical Expert (CATE) AIX Office 262.521.5627
>
>
>Please see the following link for the BlueCross BlueShield of Tennessee
>E-mail disclaimer:
>http://www.bcbst.com/email_disclaimer.shtm
>




Re: Vmware guest "hot" backups

2005-05-25 Thread Steve Schaub
We have had several backups fail, but worse, we have had some restores bomb
out (like 300mb away from the end of a 15gb restore), and even worse yet, we
have had some restores work, but the .vmdk came back corrupted and vmware
couldn't even mount it.  We are planning on using this method to do DR
recovery of our AD environment, so a reliable backup/restore is *fairly*
important.

We have seen this in the dsmerror.log on the vmhost after a failed restore:
06/24/04 14:21:44 The 299579424th code was found to be out of sequence.
The code (424) was greater than (262), the next available slot in the string
table.
06/24/04 14:21:44 The 299579425th code was found to be out of sequence.
The code (288) was greater than (263), the next available slot in the string
table.
06/24/04 14:21:44 The 299579446th code was found to be out of sequence.
The code (384) was greater than (267), the next available slot in the string
table.

We have a call in to Tivoli, but nothing in the way of a concrete fix yet...


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Stapleton, Mark
Sent: Wednesday, May 25, 2005 11:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Vmware guest "hot" backups

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steve Schaub
>My team & I would be interested in speaking with anyone who has been
>successful in using TSM to backup/restore what the database folks would
>call a "hot" backup of VMWare guest machines (i.e. backing up the .vmdk
>files while the guest is in redo mode).
>
>We have run several test backup/restores, some less successful than
>others.
>We are using ESX 2.5 currently.
>
>If you have gotten this to work consistantly and would be willing to
>share your experience, please email me directly: [EMAIL PROTECTED]
><mailto:[EMAIL PROTECTED]>

Yep, as long as we're talking VMWare ESX. The GSX documentation explicitly
says that the only supported backups of a VM are those taken when the VM is
shut down prior to backup.

The ESX software allows for a "freeze" of a VM (with the use of a REDO
log) as you describe; while the VM is frozen, the .vmdk file can be backed
up reliably. Once the backup is complete, the log is committed, and you're
done.

What problems are you having?

--
Mark Stapleton ([EMAIL PROTECTED])
IBM Certified Advanced Deployment Professional
  Tivoli Storage Management Solutions 2005
IBM Certified Advanced Technical Expert (CATE) AIX
Office 262.521.5627
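
For reference, the freeze/commit cycle described above can be hung off the
scheduled backup with the client's preschedulecmd/postschedulecmd options.
A minimal sketch, assuming the ESX 2.x vmware-cmd addredo/commit operations
and a made-up guest path (check vmware-cmd's usage text for the exact
commit arguments on your level):

* dsm.opt on the ESX console OS (sketch)
preschedulecmd  "vmware-cmd /vmfs/vms/guest1/guest1.vmx addredo scsi0:0"
postschedulecmd "vmware-cmd /vmfs/vms/guest1/guest1.vmx commit scsi0:0 0 0 1"
domain /vmfs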




Vmware guest "hot" backups

2005-05-25 Thread Steve Schaub
All,

My team & I would be interested in speaking with anyone who has been
successful in using TSM to backup/restore what the database folks would call
a "hot" backup of VMWare guest machines (i.e. backing up the .vmdk files
while the guest is in redo mode).

We have run several test backup/restores, some less successful than others.
We are using ESX 2.5 currently.

If you have gotten this to work consistently and would be willing to share
your experience, please email me directly: [EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>

thanks.

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)





journaling - any way to avoid hardcoding drives in the .ini?

2005-05-05 Thread Steve Schaub
tsm windows 5.3.0.5 client

related to journaling - is it possible to code the tsmjbbd.ini in such a way
that all local drives are journaled, without having to hardcode each one?
I'm thinking if a drive gets added we might forget to go back and add it to
the journal config.
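
For reference, the stanza in question looks like the following (a minimal
tsmjbbd.ini sketch; the drive list is an example). Nothing in the 5.3.0
client documentation suggests a wildcard is accepted here, so the list may
simply have to be kept in sync by hand whenever a drive is added:

[JournalSettings]
Errorlog=jbberror.log

[JournaledFileSystemSettings]
JournaledFileSystems=C: D: E: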





Re: CX700 ATA failure - Audit Storage Pool Volumes

2005-05-04 Thread Steve Schaub
First, make sure the drives were really bad - we had this happen to us on a
CX500 and it turned out that the microcode was actually bad, and the second
RAID-5 disk failure was phony.  Have your CE double-check this - we lost tons
of data from this little glitch biting us several days in a row before they
figured it out (when it did come back up, the volumes were toast).

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Nancy L Backhaus
Sent: Wednesday, May 04, 2005 1:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] CX700 ATA failure - Audit Storage Pool Volumes

TSM Server 5.2.3.5
AIX Operating System 5.2

We have lost two drives on a single ATA RAID array.  EMC CE is on site and
working with the EMC SAC to resolve the problem.  We brought TSM Server
down while EMC is working on the issue.  I started receiving the
following errors before we stopped all processes, backups, and migrations
and marked the affected diskpool volumes read-only.

05/04/05 09:06:36 ANR1411W Access mode for volume /tsmpool39/diskpool22a
now set to "read-only" due to write error. (SESSION: 125991)

My question is:

When we bring the server back up, I want to ensure that I am doing the
right thing and am not missing anything.

1. I will need to make sure the affected volumes are online and in
read/write status.
2. Audit the diskpool volumes, inspect only, first.
3. If there are damaged volumes, rerun audit volume with Fix=Yes.
4. Suggestions?
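
For reference, steps 1-3 translate into admin commands roughly like this
(a sketch; the pool name is hypothetical, the volume name is taken from
the error above):

/* bring the affected volumes back to read/write */
update volume * access=readwrite wherestgpool=diskpool22 whereaccess=readonly
/* inspect first, without changing anything */
audit volume /tsmpool39/diskpool22a fix=no
/* only if damage is reported */
audit volume /tsmpool39/diskpool22a fix=yes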


Nancy Backhaus
Enterprise Systems
[EMAIL PROTECTED]
Office: (716) 887-7979
Cell: (716)  609-2138





Re: Select statement output

2005-04-20 Thread Steve Schaub
Joni,

Try this:
select count(*)  "Full/Filling Volumes" from volumes where status in
('FULL','FILLING')
Etc.
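
Spelled out for all three queries, the script lines could read:

/* Volumes that are full & filling */
select count(*) "Full/Filling Volumes" from volumes where status in ('FULL','FILLING')
/* Volumes that are empty */
select count(*) "Empty Volumes" from volumes where status='EMPTY'
/* Volumes that are pending */
select count(*) "Pending Volumes" from volumes where status='PENDING'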

Steve Schaub, Network Engineer
BlueCross BlueShield of Tennessee
[EMAIL PROTECTED]
423-752-6574 (office)
423-785-7347 (cell)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Joni Moyer
Sent: Wednesday, April 20, 2005 2:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Select statement output

I have defined the following script to find the status of tapes.

 Name                            VOLUME_USAGE
 Description                     -
 Last Update Date/Time           2005-04-20 14:40:07.00
 Last Update by (administrator)  LIDZR8V
 Managing profile                -





Lines:



/* Volumes that are full & filling */
select count(*) volume_name from volumes where status in ('FULL','FILLING')
/* Volumes that are empty */
select count(*) volume_name from volumes where status='EMPTY'
/* Volumes that are pending */
select count(*) volume_name from volumes where status='PENDING'


I get the following results:


VOLUME_NAME
---
529


VOLUME_NAME
---
  0


VOLUME_NAME
---
160



How do I get a heading of Full/Filling Volumes to appear above the first
column?  Empty to appear above the second and Pending above the third?
Thank you in advance!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]





Re: Operational Reporting - how to change default path where web pages are stored

2005-04-04 Thread Steve Schaub
On a similar note, is there a process to move OR from my test server over to
production without having to re-customize everything?


Steve Schaub, Network Engineer
BlueCross BlueShield of Tennessee
[EMAIL PROTECTED]
423-752-6574 (office)
423-785-7347 (cell)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Killam, Perpetua
Sent: Monday, April 04, 2005 3:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Operational Reporting - how to change default path
where web pages are stored

Speaking of which, does anyone know how to change the default path where
web pages are stored? Where do I change it?



Perpetua




From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Iain Barnetson
Sent: Friday, March 04, 2005 7:36 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Operational Reporting



In the TSM management console, reports, operational reporting; I have a
daily report set up with some custom queries.
Is there a way I can export all the queries so that I can then import
exactly the same queries into another management console without having to
laboriously type them in each time?

Regards,

Iain Barnetson
IT Systems Administrator
UKN Infrastructure Operations









Re: "Freezing" a node's data - revisiting 'Need to save permanent cop y of all files currently being stored'

2005-03-16 Thread Steve Schaub
Charles,
Were you able to confirm that all of the inactive versions, including ones
of deleted files, rebound correctly, so that nothing expired from that
point?
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Wednesday, March 16, 2005 9:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] "Freezing" a node's data - revisiting 'Need to save
permanent cop y of all files currently being stored'

We had something similar.
1) Created a new domain with all retentions set to NOLIMIT,
2) then updated the node to be in that domain,
3) renamed the original node name to something like xxx.old,
4) registered a new node using the original node name in its original domain.

Only downside is that for restores of data from before the domain/node name
change you have to use virtualnodename, etc.

Hope this helps

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Steve Schaub
Sent: Wednesday, March 16, 2005 7:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: "Freezing" a node's data - revisiting 'Need to save
permanent cop y of all files currently being stored'


Because the underlying need is to preserve all the backup versions as they
are as of today, not just to take a snapshot of the current data.

Richard also responded to my question, and his point is that my step 3 would
not rebind the inactive versions to the new domain, only the active ones.

So, if I read this correctly, there is no way to stop backup versions from
rolling off?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Lee, Gary D.
Sent: Wednesday, March 16, 2005 7:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] "Freezing" a node's data - revisiting 'Need to save
permanent cop y of all files currently being stored'

Why not just archive the data to management class with retver set to
nolimit?
Seems a whole lot easier.



Gary Lee
Senior System Programmer
Ball State University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steve Schaub
Sent: Tuesday, March 15, 2005 10:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: "Freezing" a node's data - revisiting 'Need to save permanent cop y
of all files currently being stored'

All,

I found this thread and it fits a situation I have, where I need to "freeze"
the data that has already been backed up on certain nodes, but new backup
data can be allowed to expire normally.  The following post from Robin Sharp
is exactly what I was considering attempting, except that I want to put the
node back into normal backup after loading it in the "freezer".

Can anyone comment on modifying this procedure by following these steps:
1. Create a domain called "Freezer" with only one mgmtclass - bu/ar
copygroup settings all at nolimit
2. upd node water domain=freezer
3. run an incremental on water to rebind all data to freezer's mgmtclass
4. rename node water ice
5. register water, using original settings
6. run an incremental backup on water, basically a full since it is
considered a "new" node

If I understand TSM's mechanisms, I would then have a node named "ice" that
contains all of "water's" backup data as of a specific point in time, which
will never expire.  I also have "water" with a fresh start.  One question I
have is that with only one mgmtclass in the freezer domain, how much will
TSM complain if I don't go in and change all of the client option sets
pointing to specific mgmtclasses?  Another question - how does this process
affect water's data in the DR copypools?



Original response by Robin Sharp -

Need to save permanent copy of all files currently being stored

Is all that really necessary?

How about creating a new "permanent retention" domain, copy all relevant
policy sets, management classes, copygroups, etc. to the new domain, but
change all retentions to NOLIMIT.  Then move the affected client to the new
domain.  Next incremental should rebind all existing data to the new
"NOLIMIT" management classes.





Steve Schaub, Network Engineer

BlueCross BlueShield of Tennessee

[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>

423-752-6574













Re: "Freezing" a node's data - revisiting 'Need to save permanent cop y of all files currently being stored'

2005-03-16 Thread Steve Schaub
Because the underlying need is to preserve all the backup versions as they
are as of today, not just to take a snapshot of the current data.

Richard also responded to my question, and his point is that my step 3 would
not rebind the inactive versions to the new domain, only the active ones.

So, if I read this correctly, there is no way to stop backup versions from
rolling off?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Lee, Gary D.
Sent: Wednesday, March 16, 2005 7:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] "Freezing" a node's data - revisiting 'Need to save
permanent cop y of all files currently being stored'

Why not just archive the data to management class with retver set to
nolimit?
Seems a whole lot easier.



Gary Lee
Senior System Programmer
Ball State University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steve Schaub
Sent: Tuesday, March 15, 2005 10:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: "Freezing" a node's data - revisiting 'Need to save permanent cop y
of all files currently being stored'

All,

I found this thread and it fits a situation I have, where I need to "freeze"
the data that has already been backed up on certain nodes, but new backup
data can be allowed to expire normally.  The following post from Robin Sharp
is exactly what I was considering attempting, except that I want to put the
node back into normal backup after loading it in the "freezer".

Can anyone comment on modifying this procedure by following these steps:
1. Create a domain called "Freezer" with only one mgmtclass - bu/ar
copygroup settings all at nolimit
2. upd node water domain=freezer
3. run an incremental on water to rebind all data to freezer's mgmtclass
4. rename node water ice
5. register water, using original settings
6. run an incremental backup on water, basically a full since it is
considered a "new" node

If I understand TSM's mechanisms, I would then have a node named "ice" that
contains all of "water's" backup data as of a specific point in time, which
will never expire.  I also have "water" with a fresh start.  One question I
have is that with only one mgmtclass in the freezer domain, how much will
TSM complain if I don't go in and change all of the client option sets
pointing to specific mgmtclasses?  Another question - how does this process
affect water's data in the DR copypools?



Original response by Robin Sharp -

Need to save permanent copy of all files currently being stored

Is all that really necessary?

How about creating a new "permanent retention" domain, copy all relevant
policy sets, management classes, copygroups, etc. to the new domain, but
change all retentions to NOLIMIT.  Then move the affected client to the new
domain.  Next incremental should rebind all existing data to the new
"NOLIMIT" management classes.





Steve Schaub, Network Engineer

BlueCross BlueShield of Tennessee

[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>

423-752-6574











"Freezing" a node's data - revisiting 'Need to save permanent cop y of all files currently being stored'

2005-03-15 Thread Steve Schaub
All,

I found this thread and it fits a situation I have, where I need to "freeze"
the data that has already been backed up on certain nodes, but new backup
data can be allowed to expire normally.  The following post from Robin Sharp
is exactly what I was considering attempting, except that I want to put the
node back into normal backup after loading it in the "freezer".

Can anyone comment on modifying this procedure by following these steps:
1. Create a domain called "Freezer" with only one mgmtclass - bu/ar
copygroup settings all at nolimit
2. upd node water domain=freezer
3. run an incremental on water to rebind all data to freezer's mgmtclass
4. rename node water ice
5. register water, using original settings
6. run an incremental backup on water, basically a full since it is
considered a "new" node
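
As a rough sketch in admin commands (policy set and class names are made
up, and per the replies elsewhere in this thread only active versions
rebind):

/* step 1 - a domain with everything at nolimit */
define domain freezer
define policyset freezer frozen
define mgmtclass freezer frozen permclass
define copygroup freezer frozen permclass type=backup verexists=nolimit verdeleted=nolimit retextra=nolimit retonly=nolimit
assign defmgmtclass freezer frozen permclass
activate policyset freezer frozen
/* steps 2-5 */
update node water domain=freezer
/* ...incremental backup of WATER runs here... */
rename node water ice
register node water <password> domain=<original_domain>
/* step 6 - the next incremental on WATER is effectively a full */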

If I understand TSM's mechanisms, I would then have a node named "ice" that
contains all of "water's" backup data as of a specific point in time, which
will never expire.  I also have "water" with a fresh start.  One question I
have is that with only one mgmtclass in the freezer domain, how much will
TSM complain if I don't go in and change all of the client option sets
pointing to specific mgmtclasses?  Another question - how does this process
affect water's data in the DR copypools?



Original response by Robin Sharp -

Need to save permanent copy of all files currently being stored

Is all that really necessary?

How about creating a new "permanent retention" domain, copy all relevant
policy sets, management classes, copygroups, etc. to the new domain, but
change all retentions to NOLIMIT.  Then move the affected client to the new
domain.  Next incremental should rebind all existing data to the new
"NOLIMIT" management classes.





Steve Schaub, Network Engineer

BlueCross BlueShield of Tennessee

[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>

423-752-6574







Re: Question on the preschedulecmd

2005-02-17 Thread Steve Schaub
All,
Maybe I missed the beginning of this thread, but I'm curious as to the
advantage of scripting the ntbackup over using the built-in tsm client
backup of systemobject/systemstate/systemservice?  Does this help DR in
some way?
-steve

-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 16, 2005 4:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Question on the preschedulecmd


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Jones, Eric J
>I'm at the point where I've tested backing up the SYSTEM STATE
>with NTBACKUP, having TSM backup the drives and exclude the 
>SYSTEM OBJECT, then rebuild the system, restore with TSM and 
>use NTBACKUP to restore the SYSTEM STATE.   I have a batch 
>file to kick off NTBACKUP for the SYSTEM STATE backup and want 
>to use the "preschedulecmd" to run this before TSM scans for 
>changed files and does the scheduled backup.  I need to make 
>sure the batch file completes before TSM does the backup.  
>Would there be any situation that TSM might start backing up 
>before the prescheduledcmd completes?

A PRESCHEDCMD batch file must complete successfully (with RC=0) before
the backup will happen; if a non-zero return code comes up, or the batch
file hangs for any reason, the backup will not happen.

What some have had better luck with is running the NTBACKUP batch file
as a POSTSCHEDCMD. If the NTBACKUP hangs or fails (which happens once in
a while), using POSTSCHEDCMD will not prevent the backup from
completing. Yeah, the NTBACKUP results are 24 hours old when they get
backed up, but does your system state change that frequently?

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627
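
For reference, the POSTSCHEDCMD approach might look like this (a sketch;
the script path, job name, and output file are all made up):

* dsm.opt
postschedulecmd "c:\scripts\ssbackup.cmd"

rem c:\scripts\ssbackup.cmd
ntbackup backup systemstate /j "SystemState" /f "c:\ntbackup\systemstate.bkf"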


Re: I have a db and disk pool volume question.

2005-02-15 Thread Steve Schaub
Just as an FYI, I would check with your EMC CE to make sure your
clariion has the latest firmware upgrade if you are using ATA disks with
raid5 - without going into details, does the term "data loss" sound like
a bad idea?.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 12, 2005 11:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: I have a db and disk pool volume question.


==> On Fri, 11 Feb 2005 15:38:58 -0500, "Martinez, Matt"
<[EMAIL PROTECTED]> said:

> I have an opportunity to redesign the disk pool and db layout of our 
> TSM Server , and I was wondering if my fellow TSMers can help me with 
> it. First things first our environment is as follows, TSM Version: 
> 5.1.7 soon to be 5.2.? on Win2k, the Disk the DB will be stored on 
> will be a RAID 5 Disk on a EMC CX700 array.

> My main question is can TSM read/write multiple streams to a single 
> Disk or DB vol?

Multiple backup streams to a DISK stgpool volume: Yes.

Multiple streams to a DB vol: not a well-formed question; there aren't
streams of data hitting the DB vols.

I suggest you hit the archives for some of the older discussions about
optimization points for DB volumes.  My own solution to these has been
to deploy a large number of relatively slow spindles for DB, and some
very fast disk for recovery log, but that's a completely different
hardware regime.


> The reason I would do this is to optimize the I/O across all the 
> spindles.


If you want to have large RAIDs (to minimize loss of space to parity
drives) then you might as well have large volumes as small ones.  While there
are multiple streams permitted to hit a DISK volume at a time, there's
only one thread of execution talking to a given volume at a time.  This
means that if you have two volumes in the same stgpool on the same RAID,
that you're going to engender some contention.

If you've only got one volume in the stgpool, then you've serialized on
the stgpool, which isn't so bad because you need to serialize on the
underlying RAID anyway.

To solve this on my own system, I've got my SSA chopped up into 5-disk
RAIDs, which I understand to be a performance sweet-spot for 36-G SSA
disks.  Then every DISK pool I've got has a volume on each RAID.  It's
really kind of cool to watch a high-throughput session run, because you
can see a throughput spike drift from RAID to RAID as the incoming
session round-robins through the available disk volumes.


- Allen S. Rout
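
The one-volume-per-RAID layout described above would be laid down
something like this (a sketch; paths and sizes are made up):

/* one volume per RAID, all in the same DISK pool */
define volume diskpool /ssa/raid1/dp_vol1.dsm formatsize=8192
define volume diskpool /ssa/raid2/dp_vol2.dsm formatsize=8192
define volume diskpool /ssa/raid3/dp_vol3.dsm formatsize=8192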


Re: Parameters for exclude.systemobject

2005-02-10 Thread Steve Schaub
Use the domain statement with the negative flag:
Domain -systemobject
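
In a dsm.opt (or pushed out via a client option set) that looks like the
following; ALL-LOCAL is simply the default domain spelled out:

domain ALL-LOCAL -systemobject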

-Original Message-
From: Chernyaev Sergey [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 10, 2005 7:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Parameters for exclude.systemobject


Hello
I created a client option set on the server and want to exclude system
objects from backup. If I write "exclude.systemobject all", the client
still sends system objects to the server. Which parameters exist for
exclude.systemobject?

Thanks for help 


Re: Open File (using lvsa) for windows client 5.3.0.0

2005-02-04 Thread Steve Schaub
I posted the error messages with my original note, but here they are
again:
 02/04/2005 09:28:27 ANS1327W The snapshot operation for 'D:\*' failed. Error code: 673.
 02/04/2005 09:28:28 ANS1401W The snapshot virtual volume is not accessible.
 02/04/2005 09:28:28 ANS1376W Unable to perform operation using a point-in-time copy of the filesystem. The backup/archive operation will continue without snapshot support.
 02/04/2005 09:28:38 ANS1327W The snapshot operation for 'C:\*' failed. Error code: 673.
 02/04/2005 09:28:39 ANS1401W The snapshot virtual volume is not accessible.
 02/04/2005 09:28:39 ANS1376W Unable to perform operation using a point-in-time copy of the filesystem. The backup/archive operation will continue without snapshot support.

Also, I have done several more tests, and using an immediate action
command or using the web client does not generate the error, only using
the native GUI.  The client guide, APARs, etc. note that OFS can't be used
via terminal services, but that it will work when using remote desktop
via XP (which is what I use).  I guess I'll have to truck down to the
data center and try the native GUI directly from the server.

Of course, I can only assume the backups without error messages were
using OFS - I can't find any indication that they did or did not.  Is there
some flag somewhere (similar to the little note at the top of the log
when it is using journals) that indicates when a backup is coming from a
snapshot?

-Original Message-
From: TSM_User [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 03, 2005 6:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Open File (using lvsa) for windows client 5.3.0.0


With the TSM V5.3 client you *DO NOT* have to use the dynamic option.
You can now backup Open files on a system with a single drive.  Maybe
you want to try posting the exact error messages you are getting.

Steve Schaub <[EMAIL PROTECTED]> wrote:I tried adding the
fileleveltype=snapshot to my dsm.opt, but I'm getting the same errors.
Actually, I'm not sure this even made sense, since the enhancement to
5.3 supposedly allowed ofs backups on single disk systems. There has to
be some other way of finding out more of what it is choking on?

-Original Message-
From: Jozef Zatko [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 03, 2005 3:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Open File (using lvsa) for windows client 5.3.0.0


Hi Steve,
here is response to your problem I have found on Tivoli support site:


Problem is due to parameters. The disk on which the OFS cache is stored
MUST be saved with option "Dynamic" (see option include.fs and/or
fileleveltype). It's mandatory to specify mode "SNAPSHOT" for all other
disks (even if this is the default) when we want to use the OFS option.
Hereunder is an example of a valid option file:

PASSWORDACCESS GENERATE
TCPSERVERADDRESS xx
SNAPSHOTCACHELOCATION c:\tsmcache\
SNAPSHOTFSIDLEWAIT 0ms
TCPNODELAY YES
DOMAIN C: D:
Include.Fs "C:" FILELEVELTYPE=DYNAMIC
Include.Fs "D:" FILELEVELTYPE=SNAPSHOT SNAPSHOTFSIDLEWAIT=0S

Hope this helps

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618



Steve Schaub
Sent by: "ADSM: Dist Stor Manager", 02/02/2005 09:41 PM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L@VM.MARIST.EDU
Subject: Open File (using lvsa) for windows client 5.3.0.0






Anyone out there been successful with this?
I have one Win2k client that needs to backup locked files and so far
have been unsuccessful getting it to work. I did a clean install of tsm
client 5.3.0.0, configured the lvsa, did the required reboot, but the
backup still fails over to non-lvsa incremental. I have it running
journaled, which works fine, and I can see that tsm has created
/tsmlvsacache directories on both the c: and d: drives. I am using
remote desktop from xp to access the server, although I have tried using
the web gui as well. I even tried pointing the lvsacache dirs to
opposite vols in the dsm.opt, but I still get the following errors. Any
pointers on how to begin to troubleshoot this would be appreciated. The
only reference a search on IBM turned up is referencing 5.2 problems,
which is supposedly one of the main fixes included in 5.3.

Thanks in advance.

dsmerror.log
02/02/2005 12:07:34 ANS1327W The snapshot operation for 'D:\*' failed. Error code: 673.
02/02/2005 12:07:34 ANS1401W The snapshot virtual volume is not accessible.
02/02/2005 12:07:34 ANS1376W Unable to perform operation using a point-in-time copy of the filesystem. The backup/archive operation will continue without snapshot support.
02/02/2005 12:07:38 ANS1327W The snapshot operation for 'C:\*' failed. Error code: 673.
02/02/2005 12:07:38 ANS1401W The snapshot virtual volume is not accessible.
02/02/2005 12:07:39 ANS1376W Unable to perform operation using a point-in-time copy of the filesystem. The backup/archive operation will continue without snapshot support.

Re: Open File (using lvsa) for windows client 5.3.0.0

2005-02-03 Thread Steve Schaub
I tried adding the fileleveltype=snapshot to my dsm.opt, but I'm getting
the same errors.
Actually, I'm not sure this even made sense, since the enhancement to
5.3 supposedly allowed OFS backups on single-disk systems.
There has to be some other way of finding out more of what it is choking
on?

-Original Message-
From: Jozef Zatko [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 03, 2005 3:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Open File (using lvsa) for windows client 5.3.0.0


Hi Steve,
here is response to your problem I have found on Tivoli support site:


Problem is due to parameters. The disk on which the OFS cache is stored
MUST be saved with option "Dynamic" (see option include.fs and/or
fileleveltype). It's mandatory to specify mode "SNAPSHOT" for all other
disks (even if this is the default) when we want to use the OFS option.
Hereunder is an example of a valid option file:

PASSWORDACCESS GENERATE
TCPSERVERADDRESS xx
SNAPSHOTCACHELOCATION c:\tsmcache\
SNAPSHOTFSIDLEWAIT 0ms
TCPNODELAY YES
DOMAIN C: D:
Include.Fs "C:" FILELEVELTYPE=DYNAMIC
Include.Fs "D:" FILELEVELTYPE=SNAPSHOT SNAPSHOTFSIDLEWAIT=0S

Hope this helps

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618



Steve Schaub <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager", 02/02/2005 09:41 PM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L@VM.MARIST.EDU
Subject: Open File (using lvsa) for windows client 5.3.0.0






Anyone out there been successful with this?
I have one Win2k client that needs to backup locked files and so far
have been unsuccessful getting it to work. I did a clean install of tsm
client 5.3.0.0, configured the lvsa, did the required reboot, but the
backup still fails over to non-lvsa incremental.  I have it running
journaled, which works fine, and I can see that tsm has created
/tsmlvsacache directories on both the c: and d: drives.  I am using
remote desktop from xp to access the server, although I have tried using
the web gui as well.  I even tried pointing the lvsacache dirs to
opposite vols in the dsm.opt, but I still get the following errors.  Any
pointers on how to begin to troubleshoot this would be appreciated.  The
only reference a search on IBM turned up is referencing 5.2 problems,
which is supposedly one of the main fixes included in 5.3.

Thanks in advance.

dsmerror.log
02/02/2005 12:07:34 ANS1327W The snapshot operation for 'D:\*' failed.
Error code: 673. 02/02/2005 12:07:34 ANS1401W The snapshot virtual
volume is not accessible. 02/02/2005 12:07:34 ANS1376W Unable to perform
operation using a point-in-time copy of the filesystem. The
backup/archive operation will continue without snapshot support.
02/02/2005 12:07:38 ANS1327W The snapshot operation for 'C:\*' failed.
Error code: 673. 02/02/2005 12:07:38 ANS1401W The snapshot virtual
volume is not accessible. 02/02/2005 12:07:39 ANS1376W Unable to perform
operation using a point-in-time copy of the filesystem. The
backup/archive operation will continue without snapshot support.

dsm.opt
*
* Communication Section
*
commmethod TCPIP
tcpport1500
tcpserveraddress   10.64.1.43
tcpbuffsize31
tcpwindowsize  63
largecommbuffers   Yes
httpport   1581
*
* Workstation Settings Section
*
passwordaccess generate
managedserviceswebclient schedule
errorlogretention  90
schedlogretention  14
passwordaccess generate
snapshotfsidlewait 1s 100ms
snapshotcachelocation  d:\
include.fs d: snapshotcachelocation=c:\


Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED]
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger" ForwardSourceID:NT00056AB2


Re: Migrating Windows File Data Question

2005-02-03 Thread Steve Schaub
Charles,

I have run into it - be prepared for a huge expiration run in the near
future.
Also, if you use client journaling, watch out - robocopy will blow it
out of the water (and fill up the event log to boot!)

-steve

-Original Message-
From: Hart, Charles [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 5:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Migrating Windows File Data Question


Thanks for the confirmation... That's what I thought. Arggg (I
understand why), but it sure makes data moves painful.

Do other people run into this and what do you do?

Thanks again!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Thorneycroft, Doug
Sent: Wednesday, February 02, 2005 3:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Migrating Windows File Data Question


Just think of the drive/path as part of the file name.
new drive or change in path and tivoli sees it as a new
file. If the file no longer exist in the old path, then
Tivoli treats it as deleted.


Doug Thorneycroft
Systems Analyst
Computer Technology Section
County Sanitation Districts of Los Angeles County
(562) 699-7411 Ext. 1058
FAX (562) 699-6756
[EMAIL PROTECTED]



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Hart, Charles
Sent: Wednesday, February 02, 2005 1:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Migrating Windows File Data Question


Our Intel team will be using robocopy to move data from one Windows disk
vol to a new one.  In the past we have seen TSM back up the data that was
moved, even though the data has already been backed up before.  An
example would be...

file1.txt has been on the D: Drive and has 10 Active Backup Copies on
TSM

We move file1.txt to a J: Drive.

1) Does the file1.txt start expiring off because its in a new location
and its considered a "new File" to TSM, causing the d:\file1.txt backup
history to drop off?

2) Does file1.txt now get associated with its new location on the J: drive
and get backed up even though the file has actually not changed?

In the past when our Intel group has moved data around, it appears it all
gets backed up again due to the new location instead of the files actually
being modified.  If anyone could add some insight that would be great.
We don't want to lose our backup history or back up a ton of data that
we already have active/inactive versions of.

Appreciate the help

Charles 


Re: Clients backing up directly to devtype FILE

2005-02-03 Thread Steve Schaub
Tim,

We archive Oracle databases and redo logs directly to file class.  We
have not had this issue to this point - you had better not have jinxed
us!  All of our fileclass volumes are predefined, the database uses 30g
vols, the logs 3g.

I looked at the APAR you referenced, and my take is that as long as you
have a reasonable hi/low migration threshold, and the server can get a
tape drive when it needs to offload, I don't see where you would reach
the full condition during a backup.  Especially in your case where you
are preempting large files direct to tape anyway (using the 2g limit).

We do our migration immediately after the copypool backup.  Restores
from fileclass are as fast as disk, and sometimes better if they
multi-thread.

-steve

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 4:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Clients backing up directly to devtype FILE


We have a Storage Pool hierarchy like:

DISK Stgpool (nolimit) --> FILE Stgpool (2GB limit) --> TAPE Stgpool (nolimit)

Clients backup directly to DISK, files smaller than 2GB migrate to FILE
stgpool, everything else migrates to TAPE.
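
For reference, one way such a hierarchy is commonly wired up (a sketch;
device class, directory, and pool names are examples, and the tape pool is
assumed to exist already):

define devclass bigfile devtype=file maxcapacity=2g mountlimit=20 directory=/tsmfile
define stgpool filepool bigfile maxscratch=500 maxsize=2g nextstgpool=tapepool
define stgpool diskpool disk nextstgpool=filepool highmig=90 lowmig=0
define volume diskpool /tsmdisk/dp_vol1.dsm formatsize=10240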

 

Clients backup overnight and we start migration from DISK at the end of
the workday so that recent backups are all on disk.

One problem with this is that client restores from DISK of the previous
night's backup are limited to one session.  We could migrate earlier in
the day, but then the larger of last night's backups would be on TAPE,
which we don't want.

One solution to this is to change the initial DISK Stgpool to type FILE.

APAR IC36524
(http://www-1.ibm.com/support/docview.wss?rs=663&context=SSGSG7&dc=DB550&q1=maxsize&uid=swg1IC36524&loc=en_US&cs=utf-8&lang=en)
indicates that device type DISK should be used as the initial stgpool.

Just curious if anyone is using DEVTYPE File as the initial stgpool and
what experiences they have with it.

Any other suggestions to improve this setup?

Thanks,

Tim Rushforth
City of Winnipeg

Open File (using lvsa) for windows client 5.3.0.0

2005-02-02 Thread Steve Schaub
Anyone out there been successful with this?
I have one Win2k client that needs to backup locked files and so far
have been unsuccessful getting it to work.
I did a clean install of tsm client 5.3.0.0, configured the lvsa, did
the required reboot, but the backup still fails over to non-lvsa
incremental.  I have it running journaled, which works fine, and I can
see that tsm has created /tsmlvsacache directories on both the c: and d:
drives.  I am using remote desktop from xp to access the server,
although I have tried using the web gui as well.  I even tried pointing
the lvsacache dirs to opposite vols in the dsm.opt, but I still get the
following errors.  Any pointers on how to begin to troubleshoot this
would be appreciated.  The only reference a search on IBM turned up is
referencing 5.2 problems, which is supposedly one of the main fixes
included in 5.3.

Thanks in advance.

dsmerror.log
02/02/2005 12:07:34 ANS1327W The snapshot operation for 'D:\*' failed.
Error code: 673.
02/02/2005 12:07:34 ANS1401W The snapshot virtual volume is not
accessible.
02/02/2005 12:07:34 ANS1376W Unable to perform operation using a
point-in-time copy of the filesystem. The backup/archive operation will
continue without snapshot support.
02/02/2005 12:07:38 ANS1327W The snapshot operation for 'C:\*' failed.
Error code: 673.
02/02/2005 12:07:38 ANS1401W The snapshot virtual volume is not
accessible.
02/02/2005 12:07:39 ANS1376W Unable to perform operation using a
point-in-time copy of the filesystem. The backup/archive operation will
continue without snapshot support.

dsm.opt
*
* Communication Section
*
commmethod TCPIP
tcpport1500
tcpserveraddress   10.64.1.43
tcpbuffsize31
tcpwindowsize  63
largecommbuffers   Yes
httpport   1581
*
* Workstation Settings Section
*
passwordaccess generate
managedserviceswebclient schedule
errorlogretention  90
schedlogretention  14
passwordaccess generate
snapshotfsidlewait 1s 100ms
snapshotcachelocation  d:\
include.fs d: snapshotcachelocation=c:\


Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


Re: TDPSQL restore - MediaW

2005-02-02 Thread Steve Schaub
Not sure if this applies to you or not, but we ran into the same type of
problem trying to do a multi-session restore using tdp-sql.  The details
are a bit fuzzy since it has been a while, but the problem was caused by
multiple restore streams looking for the same media.  When we ran a
single restore stream we did not have the problem.  Others may have more
detailed insight on how to configure tdp to allow problem-free
multi-session restores, but we simply decided to opt for single streams.
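
For reference, forcing a single stream on the restore side comes down to
the /stripes parameter (a sketch; the database name is made up):

tdpsqlc restore mydb full /stripes=1 /tsmoptfile=dsm.opt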

-Original Message-
From: Yiannakis Vakis [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 02, 2005 5:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TDPSQL restore - MediaW


Hi,
I'm on TSM server v.5.2 Win2000, TSM TDP for SQL client v.5.2 Win2000
I've done TDPSQL restores in the past successfully, but now I've got a
strange problem. There are two sessions between server and client. One
is on SendW, the other on MediaW. There is only one mount and I've got
three empty drives. One of the tape paths has a problem and is taken
offline. I wonder why the mount is not done on the other empty drives.
Any suggestions? Thanks, Yiannakis

Yiannakis Vakis
Systems Support Group, I.T.Division
Tel. 22-848523, 99-414788, Fax. 22-337770


Re: Emergency! Server will not start

2005-01-19 Thread Steve Schaub
Joni,

In case of emergency:

1. Step back
2. Take a deep breath
3. Reach for the chocolate-covered strawberries!

-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 18, 2005 3:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Emergency! Server will not start


I thank you all for your quick response to my own stupid mistake.  I am
just setting up this new TSM server and I did not extend the recovery
log past the initial use of 4MB.  Thank you again!  Your help is always
appreciated

: )


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]




 "Curtis Stewart"
 <[EMAIL PROTECTED]
 AWSON.COM>
To
 Sent by: "ADSM:   ADSM-L@VM.MARIST.EDU
 Dist Stor
cc
 Manager"
 <[EMAIL PROTECTED]
Subject
 .EDU> Re: Emergency!  Server will not
   start

 01/18/2005 03:46
 PM


 Please respond to
 "ADSM: Dist Stor
 Manager"
 <[EMAIL PROTECTED]
   .EDU>






Can you see the file system and directory where you put the logs from
the OS? I think the 4MB is the default amount of log the TSM install
creates in the server/bin directory.

[EMAIL PROTECTED]
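
For anyone hitting the same wall: the offline recovery-log extension is a
two-step affair, roughly like this (path and size are made up):

dsmfmt -m -log /tsmlog/log02.dsm 512
dsmserv extend log /tsmlog/log02.dsm 512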


Re: Enabling caching to shorten housekeeping

2005-01-14 Thread Steve Schaub
Eric,

Oh, THAT bug.  Yes, it bit us once as well when our HP-UX system lost
its SAN connection to the file class LUNs and expiration went on its
merry way, deleting scores of volumes from the TSM DB, even though it
could not actually touch the underlying filesystem.  That was a fun
weekend of scripting up some very challenging 'sync' routines to expose
and delete the OS 'ghost' files.  Definitely deserves an APAR...

I am moving us in the direction of pre-defined (zero scratch) rather
than dynamic volumes for these pools for that very reason.

Thanks,
-steve

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 14, 2005 5:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Enabling caching to shorten housekeeping


Hi Jurjen!
No, that's not what I'm seeing. I have seen that TSM does not handle
allocation failures properly. I removed file volumes and directories
after creation and started a new backup. This backup runs into a rc 2
(File or directory not found) which is correct, but the volumes are
never updated to read-only or, better yet, unavailable. Also, TSM
forever keeps on trying to create new files in the non existing path. I
have not tried pre-defined file volumes yet, hopefully TSM handles them
better, otherwise we maybe have to reconsider our design. That's
definitely not what I want :-(( Kindest regards, Eric van Loon KLM Royal
Dutch Airlines


-Original Message-
From: Jurjen Oskam [mailto:[EMAIL PROTECTED]
Sent: Friday, January 14, 2005 11:21
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Enabling caching to shorten housekeeping


On Fri, Jan 14, 2005 at 10:06:49AM +0100, Loon, E.J. van - SPLXM wrote:

> We only ran into a nasty bug when using the file device class, so I hope
> IBM will turn my PMR into an APAR and fix it as soon as possible...

Let me guess: FILE volumes only store a tiny bit of data before
(incorrectly)
encountering end-of-volume? At least, that is the problem I'm
encountering.

--
Jurjen Oskam
  "E-mail has just erupted like a weed, and instead of considering
  what to say when they write, people now just let thoughts drool
  out onto the screen." - R. Craig Hogan




Re: Enabling caching to shorten housekeeping

2005-01-14 Thread Steve Schaub
Eric,

Ok, you got my attention now!
What "bug" should I be losing sleep over?

-steve

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 14, 2005 4:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Enabling caching to shorten housekeeping


Hi Steve (and other who replied)!
The risk of fragmentation sounds reasonable. I will keep that in mind,
thanks! Yes, I'm running two storage pool backups at the same time (my
copypool library only has two drives) and buying more drives is not an
option, since it's a 3494 frame. Expanding requires buying a drive frame
and 3590 drives, which are very expensive - and this while my current TSM
environment will have to hold on for about half a year. We are currently
working on a drastic TSM redesign. We are planning for a TSM 5.3 server
on AIX 5.2 on a brand new P-Series machine. We will attach more than one
(probably 3) DS-series (formerly known as FastT) SATA subsystems with a
total of 200 TB of online storage to accommodate our primary
(file device class) storage pool. We only ran into a nasty bug when
using the file device class, so I hope IBM will turn my PMR into an APAR
and fix it as soon as possible...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines



-Original Message-
From: Steve Schaub [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 13, 2005 21:32
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Enabling caching to shorten housekeeping


Eric,

The issues we had when we had caching turned on were:
1. Fragmentation of the diskpool, which led to a general slowdown in
processes.  This can be minimized if you have a regular schedule of
flushing the cache (turn it off and migrate down to zero).
2. TDP agents don't always play nice with a diskpool using caching - some
algorithm issue that doesn't free up a big enough block and then kills the
backup when it runs out of room in the pool.

I'm assuming you are already multi-threading your stgpool backup and
migration, so the only suggestions I can offer are:
1. buy more tape drives and increase the multi-threads
2. buy tons of cheap disk and implement file-based stgpools.  This is the
approach we took and it cut our total housekeeping elapsed time by 75%.

Good luck!

Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED]
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 13, 2005 10:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Enabling caching to shorten housekeeping


Hi *SM-ers!
I'm currently struggling with the fact that I cannot run a backup
stgpool diskpool and a migrate diskpool (to empty it out for the next
client backup
cycle) sequentially any more. Migration would run well into the evening
and I would like it to be ready at 18:00 hours. I'm thinking about
turning on caching for my diskpool. If it works like I hope, TSM
migration empties out the diskpool, but leaving the actual data behind,
so a backup stgpool diskpool uses these cached copies, instead of
mounting all the tapes during a subsequent backup stgpool tapepool. In
that case, I can run migration and backup stgpool at the same time. Is
TSM working like this or will a cached object, once (logically)
migrated, be backed up from tape? Thank you very much for your reply in
advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: Enabling caching to shorten housekeeping

2005-01-13 Thread Steve Schaub
Eric,

The issues we had when we had caching turned on were:
1. Fragmentation of the diskpool, which led to a general slowdown in
processes.  This can be minimized if you have a regular schedule of
flushing the cache (turn it off and migrate down to zero; see the sketch
after this list).
2. TDP agents don't always play nice with a diskpool using caching - some
algorithm issue that doesn't free up a big enough block and then kills
the backup when it runs out of room in the pool.
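
The cache flush in point 1 amounts to something like this on the server
(a sketch; the pool name and thresholds are examples):

/* turn caching off and force the pool empty */
update stgpool diskpool cache=no
update stgpool diskpool highmig=0 lowmig=0
/* once migration drains the pool, put things back */
update stgpool diskpool highmig=90 lowmig=70 cache=yes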

I'm assuming you are already multi-threading your stgpool backup and
migration, so the only suggestions I can offer are:
1. buy more tape drives and increase the multi-threads
2. buy tons of cheap disk and implement file-based stgpools.  This is
the approach we took and it cut our total housekeeping elapsed time by
75%.

Good luck!

Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 13, 2005 10:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Enabling caching to shorten housekeeping


Hi *SM-ers!
I'm currently struggling with the fact that I cannot run a backup
stgpool diskpool and a migrate diskpool (to empty it out for the next
client backup
cycle) sequentially any more. Migration would run well into the evening
and I would like it to be ready at 18:00 hours. I'm thinking about
turning on caching for my diskpool. If it works like I hope, TSM
migration empties out the diskpool, but leaving the actual data behind,
so a backup stgpool diskpool uses these cached copies, instead of
mounting all the tapes during a subsequent backup stgpool tapepool. In
that case, I can run migration and backup stgpool at the same time. Is
TSM working like this or will a cached object, once (logically)
migrated, be backed up from tape? Thank you very much for your reply in
advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: Admin command line problem

2004-12-01 Thread Steve Schaub
Sounds like your baclient is up to date on AIX, but not the admin client
(they are different).

-Original Message-
From: Jozef Zatko [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 01, 2004 4:30 AM
To: [EMAIL PROTECTED]
Subject: Admin command line problem


Hi fellows,
I have a strange problem with the administrative command line - dsmadmc.

According to manual, from version 5.2 there is a new option
-dataonly=yes. This option is working on my Windows client (version
5.2.0.6), but not on my AIX client. On my AIX 5.1 system, I have TSM
server 5.2.2.5 and TSM client 5.2.3.0 (64bit). Following is output I get
when I try to use dataonly option:

[EMAIL PROTECTED]:/root/# dsmadmc -id=admin -pa=xxx -dataonly=yes q vol
ANS8017E Command line parameter 3: 'dataonly=yes' is not valid.

ANS8002I Highest return code was 9.

According to manual, this option is supported on AIX. Any thoughts?

Best regards

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618


Re: scheduled SQL log backup question

2004-11-30 Thread Steve Schaub
Del/Luc,

I just finished setting up our TDP environment and hit this as well -
the easiest way out is to code a client option set for your TDP-SQL
nodes that excludes the log backups from master.  Mine looks like this:

compression   yes
inclexcl  "include '\...\*' STD_SQLDB"
inclexcl  "exclude '\...\master\...\log*'"

Your activity log results will then show excluded backups.
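
Defined on the server, that option set looks roughly like this (a sketch;
the set and node names are made up):

define cloptset tdpsql_opts
define clientopt tdpsql_opts compression yes
define clientopt tdpsql_opts inclexcl "include '\...\*' STD_SQLDB"
define clientopt tdpsql_opts inclexcl "exclude '\...\master\...\log*'"
update node mysqlnode cloptset=tdpsql_opts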

Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"
   

-Original Message-
From: Del Hoobler [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 30, 2004 1:55 PM
To: [EMAIL PROTECTED]
Subject: Re: scheduled SQL log backup question


Luc,

This is a known requirement that has been submitted and being
prioritized for a future release, i.e. allowing you to "exclude" certain
databases from a command.

Thanks,

Del


"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 11/30/2004
01:47:30 PM:

> Thanks Joe .
>
> I thought of that one but I was hopping for a nice exetp option ... 
> cool .. I will do that way...
>
> thanks again
>
> Luc
>
>
>
>
>
> Joe Crnjanski <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 2004-11-30 
> 01:35 PM Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: scheduled SQL log backup question
>
>
> I think you cannot take a log backup of the master db, only a full; that
> doesn't matter, it is a small database (but very important).
>
> In your script put "tdpsqlc backup db1,db2,db3,db4 log ."
>
> Db1,db2... are database names (don't include master)
>
> Joe Crnjanski - Partner
> Infinity Network Solutions Inc.
> Phone: 416-235-0931 x26
> Fax: 416-235-0265
> Web: www.infinitynetwork.com
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf 
> Of Luc Beaudoin
> Sent: Tuesday, November 30, 2004 11:56 AM
> To: [EMAIL PROTECTED]
> Subject: scheduled SQL log backup question
>
> Hi all
>
> I'm doing SQL LOG backup every 6 hours and a full backup every day ...
>
> I got a failed status everytime cause I can not take theLOG backup of 
> the master DB ...
>
> This is the command that I launch ...
>
> %sql_dir%\tdpsqlc backup * log /tsmoptfile=%sql_dir%\dsm.opt 
> /logfile=%sql_dir%\sqllog.log >> %sql_dir%\sqlschedlog.log
>
> Is there an option that I can put, like "backup ALL LOG except MASTER"?
>
> thanks
>
> Luc Beaudoin
> Network Administrator/ SAN/TSM
> Hopital General Juif S.M.B.D.
> Tel: (514) 340-8222 ext:8254


Why is %Migr higher than %Util?

2004-11-24 Thread Steve Schaub
Just to satisfy my curiosity, can someone explain why my file-class disk
pools show a higher percent migratable than utilized?  How would TSM
migrate more data than exists in a given pool?  And which one is used to
trigger migration?  Inquiring minds want to know...


Storage Pool      Device      Estimated  Pct   Pct   High  Low   Next Storage
Name              Class Name  Capacity   Util  Migr  Mig   Mig   Pool
                              (MB)                   Pct   Pct
----------------  ----------  ---------  ----  ----  ----  ----  ----------------
DR_FILE1          TSMFILE03   891 G      88.6  90.1  95    70    DR_TAPE
ORALOG_FILE1_DR   TSMFILE01   452 G       1.8   4.1  90    70    ORALOG_TAPE_DR
ORALOG_FILE1_STD  TSMFILE01   452 G       8.0  10.2  90    70    ORALOG_TAPE_STD
STD_FILE1         TSMFILE02   891 G      81.7  84.9  95    70    STD_FILE2
STD_FILE2         TSMFILE04   891 G      42.7  46.0  95    70    STD_TAPE
STD_TAPE   

Steve Schaub
Computer Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


How to remove filespaces from DR?

2004-11-02 Thread Steve Schaub
Environment:
  TSM server 5.2.2.5 on HP-UX 11.11

I have some nodes that were going to a primary pool that was being
backed up to DR, but data owners have now decided this data does not
need DR.  I moved the primary pool data to a non-DR pool, but how do I
go about removing the data in the DR pool?
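
One hedged sketch of an approach, assuming the DR copy pool volumes hold
nothing else you need (pool and node names hypothetical):

  select volume_name from volumeusage where node_name='NODEA' and stgpool_name='DR_COPYPOOL'
  delete volume <volume_name> discarddata=yes

delete volume ... discarddata=yes destroys everything on the volume, so
check the volumeusage output for other nodes' data first.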

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


Re: TSM.PWD issue

2004-11-02 Thread Steve Schaub
Hi Joni,

TSM clients need to store pwds for each stanza defined - on each client,
for each stanza, start a command line sessions for that servername "dsmc
-servername=x" and then do a "q se" to get prompted for the id/pwd,
which then gets stored locally.
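
In one-shot form, assuming passwordaccess generate is set in dsm.sys and
run as root so TSM.PWD is writable:

  dsmc query session -servername=ADSM
  dsmc query session -servername=TSMDEV

Each invocation prompts once for the node password and stores it locally.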

-Original Message-
From: Joni Moyer [mailto:] 
Sent: Tuesday, November 02, 2004 8:53 AM
To: [EMAIL PROTECTED]
Subject: TSM.PWD issue


Hello All!

I am receiving the error:

  ANS1503E Valid password not available for server 'TSMDEV'.
  The administrator for your system must run TSM and enter the
password
  to store it locally.

  I have multiple TSM servers defined within the dsm.sys and dsm.opt
  files for a Solaris Client at the 5.2.2.5 level.  I have looked
  through previous issues and the recommendation is to run a dsmc
  command from the client server when logged in as root.  Will this
  clear up the password issue even though I want it directed to a
  different TSM server?  It is defined on all of the TSM servers
with
  the same password, so I'm a little confused as to why it is still
  communicating with the other servers and not this new TSM server
that
  was just defined?

  Thanks in advance for your help!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: Help on using multiple options in a client schedule

2004-10-22 Thread Steve Schaub
Thanks - that saved a trip to the rest home!

-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 21, 2004 5:13 PM
To: [EMAIL PROTECTED]
Subject: Re: Help on using multiple options in a client schedule


If you are trying to define a client schedule where the OPTIONS includes
the client -OPTFILE option, then that is not valid. -OPTFILE is valid
only when launching the client, i.e. "dsmc -optfile=dsm.opt.alt"
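
One workaround is to run the client via a command-action schedule, where
-optfile is legal; a rough sketch (domain and schedule names illustrative,
and it relies on dsmc being resolvable in the scheduler's path):

  define schedule standard weekly_alt action=command objects='dsmc incremental -nojournal -optfile="c:\program files\tivoli\tsm\baclient\dsmweekly.opt"' starttime=20:00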

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 10/21/2004
10:26:12:

> TSM server 5.2.2.5
> I'm trying to test out a client schedule for a set of windows clients 
> that uses an alternate dsm.opt that I have created, and also bypasses 
> the normal journal type of incremental that runs daily.
>
> I just cant seem to get the syntax correct when I try them both 
> (either one works by itself).  I'm sure it's just a matter of putting 
> the quotes in the right place, but I need some help here.
>
> Basically I want to use these 2 options:
>-nojournal
>-optfile="c:/program files/tivoli/tsm/baclient/dsmweekly.opt"
>
> Any advice on exactly how to put this into the web gui so it works 
> would be much appreciated!
>
> Steve Schaub
> Storage Systems Engineer II
> Haworth, Inc
> 616-393-1457 (desk)
> 616-886-8821 (cell)
> [EMAIL PROTECTED]
> "Trials are inevitable.  We can curse them and grow bitter, or harvest

> them and grow stronger"


Help on using multiple options in a client schedule

2004-10-21 Thread Steve Schaub
TSM server 5.2.2.5
I'm trying to test out a client schedule for a set of windows clients
that uses an alternate dsm.opt that I have created, and also bypasses
the normal journal type of incremental that runs daily.

I just cant seem to get the syntax correct when I try them both (either
one works by itself).  I'm sure it's just a matter of putting the quotes
in the right place, but I need some help here.

Basically I want to use these 2 options:
   -nojournal
   -optfile="c:/program files/tivoli/tsm/baclient/dsmweekly.opt"

Any advice on exactly how to put this into the web gui so it works would
be much appreciated!

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 
"Trials are inevitable.  We can curse them and grow bitter, or harvest
them and grow stronger"


WAN optimizers?

2004-10-07 Thread Steve Schaub
Wondering if anyone has first hand experience using some of the wan
optimiziation products out there (Tacit, Riverbed, Expand, Peribit,
etc)?  We are considering using something to either pull remote site
data back to our central data center (eliminating remote servers &
storage), or to help speed up backup and/or DR from the remote to here.
We will be using mainly point-to-point T1 links.

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


Re: Exchange Incremental Backup

2004-10-06 Thread Steve Schaub
No downside if you can afford the media for 35 days of fulls, and the
backups are getting done on time.
We moved to a weekly full and a twice daily log offload(incremental) on
our 5 Exchange servers and it works well for us.
One gotcha - if you want to do incrementals, the exchange servers have
to have circular logging disabled.
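
Roughly, the two scheduled commands look like this (option-file and log
paths illustrative):

  tdpexcc backup * full /tsmoptfile=dsm.opt /logfile=excfull.log
  tdpexcc backup * incremental /tsmoptfile=dsm.opt /logfile=excincr.log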

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 06, 2004 11:36 AM
To: [EMAIL PROTECTED]
Subject: Exchange Incremental Backup


We are using Data Protection for Exchange 5.2.1.0 to backup Exchange
2000.



We currently do nightly Full backups and have the TSM policy with Retain
Extra Versions = Retain Only Versions = 35.



We are looking into doing incremental backups throughout the day to
provide for better protection.  We currently have transaction logs and
the Exchange DB on separate physical disks both RAID10.  We would need a
lot of drives to fail to lose any data here - but we are also looking at
recovering from some type of logical corruption where both the logs and
the DB are corrupt so we would have to resort to the last backup.



We really don't want or need to keep the incrementals that are done
throughout the day for more than a couple of days.



It seems the way to do this would be to change Retain Only to say 2 days
but leave Retain Extra at 35.  This way full backups would be kept for
35 days but incrementals would only be kept for 2 days.
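
In command terms that would be something like the following on the server,
with hypothetical domain/policyset/class names:

  update copygroup exchdom standard exchmc standard type=backup retextra=35 retonly=2
  validate policyset exchdom standard
  activate policyset exchdom standard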



Does this make any sense?



Is there any other way to do what we are trying to do?



Are there downsides to this besides that any incremental that we do is
no good after 2 days?



Thanks for any input.



Tim Rushforth

City of Winnipeg


Re: TDP for Exchange file expiration

2004-10-06 Thread Steve Schaub
If you are talking about old Exchange "Storage Groups", then these map
to individual filespaces in TSM and you should be able to use the
"delete filespace" command to get rid of them.

-Original Message-
From: Gee, Norman [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 06, 2004 2:48 PM
To: [EMAIL PROTECTED]
Subject: TDP for Exchange file expiration


I have been running backups for Exchange for a while, but I notice that
I had a few tape that did not get recycle after expiration of the data.

I found that the Exchange admin changed the name of couple of the data
objects, therefore the system is keeping the last version of the
previous name of each object and not expiring them.  These are still
active. How can I get rid of this old object?  Add an exclude statement
for each old object?


Re: Calling all windows scripters - help!

2004-10-06 Thread Steve Schaub
Neil,

I figured out the problem - I had to change the %a parms to %%a
Thanks - this is just what I needed!
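
For the archive, the working batch-file form (percent signs doubled,
since a bare %a only works when typed interactively):

  FOR /L %%a IN (1,1,100) DO DATE /T >> DUMMY%%a.TXT & TIME /T >> DUMMY%%a.TXT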

-Original Message-
From: Neil Schofield [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 05, 2004 8:31 PM
To: [EMAIL PROTECTED]
Subject: Re: Calling all windows scripters - help!


Steve

I don't know if you can build on this, but the following command will
create the files DUMMY1.TXT to DUMMY100.TXT in the current directory (or
append to them if they already exist) and add the date and time as two
lines to the end.

FOR /L %a IN (1,1,100) DO DATE /T >> DUMMY%a.TXT & TIME /T >>
DUMMY%a.TXT

Regards
Neil




Take a tour of a virtual waste water treatment works at
http://www.wastewatertour.com The information in this e-mail is
confidential and may also be legally privileged. The contents are
intended for recipient only and are subject to the legal notice
available at http://www.keldagroup.com/email.htm
Yorkshire Water Services Limited
Registered Office Western House Halifax Road Bradford BD6 2SZ Registered
in England and Wales No 2366682


Re: Calling all windows scripters - help!

2004-10-06 Thread Steve Schaub
Neil,

I get the following error when I try running the command:
 
D:\test>poke2.cmd
a.TXT was unexpected at this time.

D:\test>FOR /L a.TXT & TIME /T >> DUMMYa.TXT
D:\test>


D:\test>more poke2.cmd
FOR /L %a IN (1,1,100) DO DATE /T >> DUMMY%a.TXT & TIME /T >>
DUMMY%a.TXT

-Original Message-
From: Neil Schofield [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 05, 2004 8:31 PM
To: [EMAIL PROTECTED]
Subject: Re: Calling all windows scripters - help!


Steve

I don't know if you can build on this, but the following command will
create the files DUMMY1.TXT to DUMMY100.TXT in the current directory (or
append to them if they already exist) and add the date and time as two
lines to the end.

FOR /L %a IN (1,1,100) DO DATE /T >> DUMMY%a.TXT & TIME /T >>
DUMMY%a.TXT

Regards
Neil




Take a tour of a virtual waste water treatment works at
http://www.wastewatertour.com The information in this e-mail is
confidential and may also be legally privileged. The contents are
intended for recipient only and are subject to the legal notice
available at http://www.keldagroup.com/email.htm
Yorkshire Water Services Limited
Registered Office Western House Halifax Road Bradford BD6 2SZ Registered
in England and Wales No 2366682


Calling all windows scripters - help!

2004-10-05 Thread Steve Schaub
Rather than going out and reinventing the wheel, I just know someone out
there has already written this script.

I need a batch script that will run on a w2k/w2k3 box (without
installing any interprator or anything) that will go into a given
directory and update every file, preferably by adding a date/time stamp
line at the end of every file.  If it could prepopulate a directory with
a given number of dummy files, that would be fantastic.

I need to use this for validation testing.

Thanks in advance.

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


Re: Default Server option

2004-09-23 Thread Steve Schaub
Also, did you bounce the client after making the dsm.sys change?

-Original Message-
From: Joni Moyer [mailto:] 
Sent: Thursday, September 23, 2004 4:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Default Server option


I guess to clarify, we put the defaultserver parameter into the dsm.sys
file at the top and want all backups/archives to still go to the ADSM
server, not the TSMDEV server.  Currently when we try any backups/archives
with the setup provided it STILL tries to go to TSMDEV.  I'm confused as
to why this is occurring... I guess from your question you are asking if
we issued the command via dsmadmc or if it was a scheduled backup?  It
was done via dsmadmc.  Even though we stated that the default server
should be ADSM, from your statement, I am assuming that we still have to
use -SErvername=ADSM in order to get it to go to the default server?  I
had thought that the defaultserver parameter would automatically take
care of this.  We don't want it going to TSMDEV yet since I still have
some things to set up before testing begins and that is exactly where it
IS going.  I hope my babbling made some sense.  If not, let me know.
Thanks again!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]




 "Richard Sims"
 <[EMAIL PROTECTED]>
 Sent by: "ADSM:
To
 Dist Stor [EMAIL PROTECTED]
 Manager"
cc
 <[EMAIL PROTECTED]
 .EDU>
Subject
   Re: Default Server option

 09/23/2004 04:06
 PM


 Please respond to
 "ADSM: Dist Stor
 Manager"
 <[EMAIL PROTECTED]
   .EDU>






>We are trying to use the default server option within the dsm.sys file
>due to the fact that we will soon be migrating to a new TSM server instance
>and it does not seem to work... Our current/default server we use is ADSM
>and the new instance is TSMDEV as shown below.  Does anyone have any idea
>as to why this isn't working? ...

Joni - You need to tell us how the client attempted to use the new
stanza.
   Has to be invoked with  -SErvername=TSMDEV
since the  DEFAULTSERVER   ADSM
spec is still in effect in your client options file.
Beyond that, you have to tell us what evidence indicates "...isn't
working".

   Richard Sims
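
For reference, the shape of the dsm.sys under discussion, roughly
(addresses hypothetical):

  DEFAULTServer ADSM

  SErvername ADSM
     COMMmethod         TCPip
     TCPServeraddress   adsm.example.com
     PASSWORDACCESS     generate

  SErvername TSMDEV
     COMMmethod         TCPip
     TCPServeraddress   tsmdev.example.com
     PASSWORDACCESS     generate

With this in place the client talks to ADSM unless invoked with
-servername=TSMDEV.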


HP-UX clients at 5.1.5.0 not using client option set correctly

2004-08-13 Thread Steve Schaub
TSM Server 5.2.2.5 on HP-UX 11.11
TSM Client 5.1.5.0 on HP-UX 11.11, 11.0

I have noticed that my hp-ux clients at 5.1.5.0 are not binding includes
from their client option set to the mgmtclass specified - they are all
going to DEFAULT.  I don't see this problem on the one hp-ux at 5.2.2.0
or on my windows or novell clients (mix of 5.1.x & 5.2.x).  The excludes
seem to be working ok, with some odd exception in the behavior of their
web gui.  We process all of our include/exclude statements through the
cloptsets, nothing local to client nodes.

Before I ask our Unix admins to update the client on all their servers,
has anyone else seen this problem before?  I couldn't find a good hit on
the listserv or ibm's apar list.

Cloptset:
domain  "all-local"
schedmode   prompted
changingretries 3
dirmc   DR_SHRSTATIC
compression yes
compressalways  yes
inclexcl"include '/.../* DR_SHRDYNAMIC'"
inclexcl"include '/home/.../* DR_SYSFILES'"
inclexcl"include '/opt/.../* DR_SYSFILES'"
inclexcl"include '/usr/.../* DR_SYSFILES'"
inclexcl"include '/var/.../* DR_SYSFILES'"
inclexcl"include '/stand/.../* DR_SYSFILES'"
inclexcl"include '/* DR_SYSFILES'"
inclexcl"exclude '/apps*/.../*'"
inclexcl"exclude '/ora*/.../*'"
inclexcl"include '/ora02/.../* DR_SHRSTATIC'"
inclexcl"include '/apps_logs/.../* DR_SHRSTATIC'"
inclexcl"include '/apps_patches/.../* DR_SHRSTATIC'"
inclexcl"exclude.dir '/apps'"
inclexcl"exclude.dir '/adminspace'"
inclexcl"exclude.dir '/var/adm/crash'"
inclexcl"exclude.dir '/var/tmp'"
inclexcl"exclude.dir '/cdrom'"
inclexcl"exclude.dir '/crash'"
inclexcl"exclude.dir '/sysadm_crash'"
inclexcl"exclude.dir '/sysadm_reserve'"
inclexcl"exclude.dir '/Ignite_Net_Backups'"
inclexcl"exclude.dir '/SD_CDROM'"
inclexcl"exclude.dir '/test*'"
inclexcl"exclude.dir '/db*'"
inclexcl"exclude.dir '/dw*'"
inclexcl"exclude.dir '/dpa*'"
inclexcl"exclude.dir '/hawfs*'"
inclexcl"exclude.dir '/busrep*'"
inclexcl"exclude.dir '/mesrep*'"
inclexcl"exclude.compression '/.../*.tar'"
inclexcl"exclude.compression '/.../*.gz'"


DR_SHRSTATIC mgmt class:
 Policy Domain Name: STANDARD
Policy Set Name: ACTIVE
Mgmt Class Name: DR_SHRSTATIC
Copy Group Name: STANDARD
Copy Group Type: Backup
   Versions Data Exists: No Limit
  Versions Data Deleted: No Limit
  Retain Extra Versions: 30
Retain Only Version: 30
  Copy Mode: Modified
 Copy Serialization: Shared Static
 Copy Frequency: 0
   Copy Destination: DR_DISK
Table of Contents (TOC) Destination: 
 Last Update by (administrator): TSS1
  Last Update Date/Time: Fri, Jun 25, 2004 06:17:53 AM
   Managing profile: 

DR_SYSFILES mgmt class:
 Policy Domain Name: STANDARD
Policy Set Name: ACTIVE
    Mgmt Class Name: DR_SYSFILES
Copy Group Name: STANDARD
Copy Group Type: Backup
   Versions Data Exists: 7
  Versions Data Deleted: 7
  Retain Extra Versions: 60
Retain Only Version: 60
  Copy Mode: Modified
 Copy Serialization: Dynamic
 Copy Frequency: 0
   Copy Destination: DR_DISK
Table of Contents (TOC) Destination: 
 Last Update by (administrator): TSS1
  Last Update Date/Time: Fri, Jun 25, 2004 06:07:04 AM
   Managing profile: 
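
One quick check on an affected node, after bouncing the scheduler, is to
compare what the client actually computes (assuming the client level
supports query inclexcl):

  dsmc query inclexcl
  dsmc query mgmtclass -detail

The inclexcl output lists the effective include/exclude statements in
processing order, cloptset entries included.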


Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


Reclamation process for file class volumes not starting immediately

2004-08-13 Thread Steve Schaub
Tsm server 5.2.2.5 on HP-UX 11.11
I have 3 large (912gb) file class pools and I have noticed that lowering
the recl setting does not immediately trigger the reclamation process to
start like it does on my tape pools.  I want to be able to control when
recl runs to avoid contention, so I have a script that
lowers/monitors/raises the recl threshold.  Problem is, sometimes it
takes half an hour for the reclamation process to even start, which
makes it tricky to script.

Has anyone else noticed this behavior?
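
The pattern, roughly (pool name from the environment above; credentials
hypothetical):

  dsmadmc -id=admin -password=xxx "update stgpool DR_FILE1 reclaim=40"
  # ... poll 'query process' until reclamation appears and finishes ...
  dsmadmc -id=admin -password=xxx "update stgpool DR_FILE1 reclaim=100"

The delay in question shows up between the first update and the process
ever appearing in query process.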

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


HP-UX Web client not using client option sets?

2004-07-29 Thread Steve Schaub
TSM server 5.2.2.5 on HP-UX 11.11
TSM client 5.1.5.0 & 5.2.2.0 HP-UX 11.11

I use client option sets to centralize & control our include/exclude
list on different nodes, and although the scheduled incremental backups
seem to work correctly, I have noticed that when I bring up the web
client for my HP-UX nodes, it appears that none of my statements in the
assigned client option set are being used.  For example, I have an
"exclude.fs '/crash'", but the web gui will allow me to back it up.  It
doesn't look like my windows clients have this issue.

I haven't been able to find any reason for this in the user docs, this
listserv, or IBM's web site.  Can anyone out there confirm or deny that
this is true for them?

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


Add to wish list for future TSm version

2004-06-11 Thread Steve Schaub
Along with online db defrags and diskpool defrag/reclamation, I would
really like to see a decent Client Option Set editor in the web admin
gui.  I had to resort to writing my own script to avoid the current one
(feels like I'm back in the old line-mode TSO days on the mainframe!)

Steve Schaub
Systems Engineer II, Storage Systems
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 


Re: D2D backup with TSM

2004-06-03 Thread Steve Schaub
Thanks for the clarification, Tim.

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 02, 2004 12:24 PM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


That can only be done on a sequential access pool.  If you try on a
storagepool of device type disk you will get:

ANR1718E MOVE DATA: Reconstruction of file aggregates is supported only
for movement in which the source and target storage pools are
sequential-access. ANS8001I Return code 3.

-Original Message-
From: Steve Schaub [mailto:[EMAIL PROTECTED]
Sent: June 2, 2004 11:03 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM

I was under the impression that performing a "move data xx
reconstruct=y" would reconstruct the aggregate, thus effectively
performing reclamation on a disk type volume.  Probably not what the TSM
developers intended it for, but it might be just the ticket for those
moving to disk-only infrastructures (and not wanting to use file based
volumes).

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED]

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 02, 2004 10:39 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


> When would an aggregate be reclaimed when using a DISK device pool? 
> Rick


Once again check out
http://ew.share.org/callpapers/attach/Long_Beach_Conference/S5725a.pdf

There is a table that compares Disk Pools (Random Access) to Seq File
Disk pools.  The answer is that aggregate reconstruction is not supported
on disk pools.
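
Concretely, the difference (volume names illustrative):

  move data /tsmfile/volumes/1fcb.bfs reconstruct=yes   <- FILE volume: aggregates rebuilt
  move data /dev/rdiskvol1 reconstruct=yes              <- DISK volume: fails with ANR1718E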


Re: D2D backup with TSM

2004-06-03 Thread Steve Schaub
Very interesting.

It sounds to me like an opportunity for Tivoli to create a white paper
for us TSM admins on "Best Practices in a Disk-Only TSM Environment".
Although I would really prefer that they come up with a solution for the
DISK fragmentation problem so  that I don't have to cludge around with
FILE types.  While they are at it, maybe an online database defrag
utility would come in useful as well...

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Richard Rhodes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 02, 2004 12:54 PM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


Thanks for all the replies . . . . an interesting discussion.

That is an interesting presentation Tim pointed to.  It answeres lots of
questions.

It's sounding like using only a DISK device pool for d2d backups isn't a
very good idea.  But FILE device pools seem to be limited.

It sounds to me like you want to keep DISK device pool for staging are
just like we do today, providing multiple concurrent access for backups.
Then, migrate the files to  FILE device pool for long term storage.
And, make sure you turn on client compression.  Also, realize that all
the files for the FILE device pool will have to be in one filesystem.
The copy storage pool could be a FILE device pool on another disk
system, or, a tape system if you need to ship them offsite.

Since I have experienced disk subsystem crashes with loss data, I
couldn't ever use just one disk system.  I think I'd have to have
separate disk systems for the primary pools and copy pools (or tape),
and maybe even a third subsystem for the db and staging pools.  Yes,
I've been called paranoid . . . but it comes from experience.


Thanks!

Rick







  "Rushforth, Tim"
  <[EMAIL PROTECTED]To:
[EMAIL PROTECTED]
  PEG.CA>  cc:
  Sent by: "ADSM:  Subject:  Re: D2D backup
with TSM
  Dist Stor
  Manager"
  <[EMAIL PROTECTED]
  .EDU>


  06/02/2004 12:23
  PM
  Please respond to
  "ADSM: Dist Stor
  Manager"






That can only be done on a sequential access pool.  If you try on a
storagepool of device type disk you will get:

ANR1718E MOVE DATA: Reconstruction of file aggregates is supported only
for movement in which the source and target storage pools are
sequential-access. ANS8001I Return code 3.

-Original Message-
From: Steve Schaub [mailto:[EMAIL PROTECTED]
Sent: June 2, 2004 11:03 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM

I was under the impression that performing a "move data xx
reconstruct=y" would reconstruct the aggregate, thus effectively
performing reclamation on a disk type volume.  Probably not what the TSM
developers intended it for, but it might be just the ticket for those
moving to disk-only infrastructures (and not wanting to use file based
volumes).

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED]

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 02, 2004 10:39 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


> When would an aggregate be reclaimed when using a DISK device pool? 
> Rick


Once again check out
http://ew.share.org/callpapers/attach/Long_Beach_Conference/S5725a.pdf

There is a table that compares Disk Pools (Random Access) to Seq File
Disk pools.  The answer is that aggregate reconstruction is not supported
on disk pools.


Re: D2D backup with TSM

2004-06-02 Thread Steve Schaub
I was under the impression that performing a "move data xx
reconstruct=y" would reconstruct the aggregate, thus effectively
performing reclamation on a disk type volume.  Probably not what the TSM
developers intended it for, but it might be just the ticket for those
moving to disk-only infrastructures (and not wanting to use file based
volumes).

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 02, 2004 10:39 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


> When would an aggregate be reclaimed when using a DISK device pool? 
> Rick


Once again check out
http://ew.share.org/callpapers/attach/Long_Beach_Conference/S5725a.pdf

There is a table that compares Disk Pools (Random Access) to Seq File
Disk pools.  The answer is that aggregate reconstruction is not supported
on disk pools.


Re: TSM extended addition question.

2004-06-01 Thread Steve Schaub
Wanda is correct, I just threw the "rules" that come with my latest cd,
in the trash.

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 01, 2004 1:53 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM extended addition question.


I believe its 3 drives and 40 slots.
But I can't find the doc at the moment.



-Original Message-
From: Ben Bullock [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 01, 2004 1:46 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM extended addition question.


That's it? Just 20 slots? Wow, I thought the number was higher
than that.

Thanks,
Ben

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ray Louvier
Sent: Tuesday, June 01, 2004 11:44 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM extended addition question.


20 slots

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ben Bullock
Sent: Tuesday, June 01, 2004 12:42 PM
To: [EMAIL PROTECTED]
Subject: TSM extended addition question.

Quick question:

There is TSM and then the "TSM extended edition". We have a
remote site with an existing library. I can't seem to find the document
that says which version of TSM I have to buy.

I know that the extended version gives NDMP and DRM support,
etc., but I think there was also a cutoff on the size of your library.

Anybody have the link to that document bookmarked?

Thanks,
Ben


Re: D2D backup with TSM

2004-06-01 Thread Steve Schaub
Matt,

Check out a company called Bus-Tech (www.bustech.com), which makes a
product that allows open-system disk to be connected to a mainframe via
ESCON or FICON.  They also have another product that allows you to use
cheap disk as virtual tape.

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: MC Matt Cooper (2838) [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 01, 2004 10:39 AM
To: [EMAIL PROTECTED]
Subject: Re: D2D backup with TSM


I am seriously looking at changing to a D2D system.  I know of other
people doing it with different backup software.  I know of no reason not
to do it with TSM.  I am not sure if I am going to use the ATA disk
array as the primary disk pool or a secondary disk pool behind SHARK
disk.  I would rather use it as the primary.  I watched a presentation
of a company that is using NEXSAN's ATA-BEAST as primary disk pool. They
said they are getting better than 100GB an hour and the disk is not
the bottleneck.  The company was not using TSM though.  The NEXSAN
disk seems to be much cheaper than everyone else's ATA arrays.  I
believe it is faster too (70MB/sec).  My issue is our TSM server is
running on the mainframe and I must find some sort of adapter to connect
ESCON to Fibre channel.
Matt

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Friday, May 28, 2004 10:05 PM
To: [EMAIL PROTECTED]
Subject: D2D backup with TSM

We recently had a presentation from EMC.  During this presentation they
discussed their new DL700 virtual tape library.  It got people talking
about a long term strategy of moving to a all disk based backup system.
So we started thinking of the pros and cons of how to go about setting
up TSM for a disk based backup system.

Here are some initial thought on ways to configure TSM for D2D backups.
We would be very interested in your thoughts/comments.  I doubt we are
the only people thinking about this topic.

1)Purchase a very large disk system (ata drives?) and put storage pools
on them.
- use a standard DISK device pool for backups
- how to reclaim space?  do you even need to?
- fragmentation problems?
- multiple node access concurrently
- use a FILE device based pool
- single node access per disk file volume
- need to run reclamation
- still stage to disk and migrate to FILE device based pool?
- use a tape copy pool for offsite and backup
- use a offsite disk pool somehow
- iscsi over lan or fc if close enough (enough throughput?)
- other?
- must use client compression to get data compressed
- we mostly don't do this today

2)  Purchase a virtual tape system like the DL700
(bundle of a EMC Clariion and a FalconStor vts appliance)
- provides compression for data
- appliance is responsible for layout and use of disk space
- can copy a virtual tape to a real physical tape for offsite
storage


Thanks

Rick


Re: exchange backup

2004-06-01 Thread Steve Schaub
Isn't it supposed to be "excfull.cmd"?

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 01, 2004 7:29 AM
To: [EMAIL PROTECTED]
Subject: Re: exchange backup


>  I am getting the error as stated below while taking the exchange 
>backup through TSM scheduler.
...
>Executing Operating System command or script:
>c:\program files\tivoli\tsm\TDPexchange\exefull.cmd
>06/01/2004 10:56:25 Finished command.  Return code is: 1
...

>From my notes in ADSM QuickFacts:

"Executing Operating System command or script:"
    Message in client schedule log, referring to a command being run per
    either the PRESchedulecmd, PRENschedulecmd, POSTSchedulecmd, or
    POSTNschedulecmd option; or by the DEFine SCHedule ACTion=Command
    spec where OBJects="___" specifies the command name.

I don't recognize "exefull.cmd", which may be some local command. Check
that it exists: if it does, check its internals for conditions which may
cause it to exit with a bad value which fouls the schedule.

   Richard Sims   http://people.bu.edu/rbs


Re: AW: AW: AW: Simple Backup Copy Group Question

2004-03-18 Thread Steve Schaub
I totally agree with Salak.  I keep hoping new server releases will incorporate a
usable editor for client option sets.  It is so bad that I have cobbled together a
mainframe script that allows me to keep all of my cloptset statements in files that I
can edit there, then run a batch job that goes through about 8 steps to put them in
place on my TSM server (AIX - and don't even bother talking to me about vi!).  I don't
get what is so hard about writing a Java editor for these things.

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 17, 2004 7:40 AM
To: [EMAIL PROTECTED]
Subject: AW: AW: AW: AW: Simple Backup Copy Group Question


well,
I am not satisfied with the cloptsets editor at all.
Try using "" (having blanks in a path forces you to do so),
try to edit a path in an existing inclexcl statement,
try to gain an overview of particularly complicated cloptsets and/or
inclexcl statements in combination with local opt files and YES/NO
parameters.

It works flawlessly but it is back-breaking work,
like editing text files with EDLIN.

Juraj


-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 17, 2004 11:06 AM
To: [EMAIL PROTECTED]
Subject: Re: AW: AW: AW: Simple Backup Copy Group Question


I think it's quite simple. I only ever really do exclude statements, and I never have 
to use " " for those, so I assume it's the same for include statements.

Just don't want to find out in two years time that we have files missing.

Farren


 From: Salak Juraj <[EMAIL PROTECTED]>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
 To: [EMAIL PROTECTED]
 Subject: AW: AW: AW: Simple Backup Copy Group Question
 Date: 03/17/2004 10:04 AM






>> I actually set up all include / exclude statements within the TSM 
>> server itself

I see, the cloptset editing interface is rather horrific;
it is quite tricky to use "" in cloptsets, isn't it?

Juraj


Re: J vs K 3590 Tapes

2004-02-24 Thread Steve Schaub
David,

Our policy has been to use J tapes only for collocated pools, K for
non-collocated.  This maximizes the larger tapes and reduces the number
of offsite tapes we have to manage.  The downside to the bigger tapes,
especially with copy pools, is that reclaiming them becomes very
difficult as the number of nodes increases (reclaiming 1 copy pool tape
could mean having to mount as many primary tapes as you have nodes).

Steve Schaub
Systems Engineer, Operations & Storage
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED] 

-Original Message-
From: David E Ehresman [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 23, 2004 10:30 AM
To: [EMAIL PROTECTED]
Subject: J vs K 3590 Tapes


We are backing up to 3590 tapes.  We currently use K (extended length)
tapes for onsite tape storage pool and J (standard length) tapes for
offsite copy storage pool.

We will soon start replacing some of our older J tapes and are trying to
decide whether to replace them with Js or Ks.  Has anyone weighed the
pros and cons of using Js vs Ks for offsite tapes?  Which did you decide
on and why?

David Ehresman
University of Louisville


Re: Primary storage pool on disk

2004-01-30 Thread Steve Schaub
But of course we would all still love for Tivoli to publish a white
paper on "tape-less" TSM with recommended best practices.  Especially
the disk vs. file question, which has never been definitively answered
by Tivoli and has about equal numbers of fully convinced people in each
camp.

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell phone)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 30, 2004 2:06 PM
To: [EMAIL PROTECTED]
Subject: Re: Primary storage pool on disk


This topic has been done to death in the last six months. Please search
the archives at search.adsm.org.
 
--
Mark Stapleton ([EMAIL PROTECTED])
 

-Original Message- 
From: Stef Coene [mailto:[EMAIL PROTECTED] 
Sent: Fri 1/30/2004 12:45 
To: [EMAIL PROTECTED] 
Cc: 
Subject: Primary storage pool on disk



Hi,

I have a customer who wants to put his primary storage pool on
disk.  This is
now 3TB and can grown up to 7TB.  Incremental backup is appr
300GB/night.

But I have some questions?
- What kind of disks should I use?  (the customer has a FastT
from IBM with
fiber disks, but he wants a cheaper solution)
- What kind of raid level as protection?  Raid5, JBOD, Raild
0+1, ... ??
- Should I use a DISK type or FILE type storage pool?

Thx.

Stef



Re: TDP for MS SQL and cached storage pools

2004-01-30 Thread Steve Schaub
Actually, based on my experience with TDP for Mail, I would guess that
Tim is right on.  We were forced to turn caching off on the diskpool
that we used for mail backups due to this "feature" in TSM-TDP (support
claims that this is not a bug and is working exactly as designed, they
just don't bother to let anyone know that caching is not an option with
TDP).

Apparently TDP can't trust the calculation it does before sending;
it goes by the raw %util of the diskpool, rather than factoring in the
%reclaimable.
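
The change itself is one command (pool name hypothetical):

  update stgpool MAILDISK cache=no

after which TDP's pre-send space check stopped failing for us.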

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell phone)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 30, 2004 1:57 PM
To: [EMAIL PROTECTED]
Subject: Re: TDP for MS SQL and cached storage pools


You've jumped to *way* too large a conclusion. This has nothing to do
with the disk storage pool, cached or otherwise.
 
You need to check the dsierror.log for API-related errors, and you also
need to check your error message against the listing for it in
http://people.bu.edu/rbs/ADSM.QuickFacts.
 
--
Mark Stapleton ([EMAIL PROTECTED])

-Original Message- 
From: Tim Melly [mailto:[EMAIL PROTECTED] 
Sent: Fri 1/30/2004 12:42 
To: [EMAIL PROTECTED] 
Cc: 
Subject: TDP for MS SQL and cached storage pools



To *,

I'm getting the following error from an NT TSM client (v 5.1.6)
using TDP for MS
SQL:

01/23/2004 10:08:08 ACO5436E A failure occurred on stripe number
(0), rc = 418
01/23/2004 10:08:08 ANS1311E (RC11)   Server out of data storage
space

The TSM server (AIX 4.3.3 ML10, TSM 5.1.6) has sufficient
primary storage pool
space (350 GB) to handle this backup request (70 GBs) but I'm
still getting the
error. I'm using "disk caching" on my primary storage pool. Has
anyone
encountered a situation where disk caching would cause an
incorrect calculation
of available storage pool space???


Regards, Tim
NAFTA IS Technical Operations
(203) 812-3469
[EMAIL PROTECTED]



Re: TSM best practices manual?

2004-01-29 Thread Steve Schaub
Tab,

Just to throw out a thought - our company is currently looking at a
vendor called Data Domain (I ran into their product during the Storage
Decisions 2003 conference last Oct).  They make an ATA based appliance
that is supposed to be able to store in the range of 23TB using only 16
raid10 disks.  Their claim to fame is a front-end that does what they
call "Global Compression", meaning that they eliminate redundant chunks
of data - which, in the backup world, represents a lot of our data
stream.

Like I said, we are only starting to look at them (as a potential
disk-based replacement of our primary tape pools), so I can't say if they
can really do what they say, but it might be worth checking out their
website.

If anyone else out there has evaluated the product good or bad, please
share the results.

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell phone)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Tab Trepagnier [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 28, 2004 2:42 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM best practices manual?


Wanda,

Thanks very much for the suggestions.  In answer to your questions:

1.  I know I have compression operating on the drives because our
average tape capacity is 1.6 X uncompressed capacity on all three media.
2.  Most of what we back up is server data; a little bit is OS, etc, but
not much.  We're a Notes and Oracle shop and we have a LOT of data from
both systems.  We also design and manufacture our own products, and our
engineers routinely generate 100MB+ CAD files.
3.  We keep five copies of user-created data, and two copies of
everything else.  Design data is also archived but that isn't relevant
to this discussion.
4.  True, but I already have FIVE libraries; I am trying to avoid buying
a sixth.

This is what I think I'm going to do.  At present we keep everything
except permanent archives online fulltime since we don't really have an
"operator".  We have two parallel data paths: "small clients" going to a
3575, and "large clients" going to the 3583.  I'm going to recombine
those paths into a single path and make liberal use of the Migration
Delay feature.  The idea is for the incoming data to travel:  Disk -->
3575 --> 3583 --> MSL DLT --> shelf.  The goal is to have data 1-2 days
old on disk (radical!), data 2-10 days old on fast-access 3570, data
10-180 days old on LTO, and data older than 6-12 months on the shelf.
The little MSL retains nothing but is instead just a portal.

As for a "best practices" guide, I've begun browsing the TSM 5.2
Implementation Guide to see if that provides the info I'm looking for.
I'm also browsing my training handout from the ADSM 3.1 Advanced
Implementation course.

Thanks again for the suggestions.

Tab







"Prather, Wanda" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 01/28/2004
12:44 PM Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: TSM best practices manual?


Tab,

I'm not sure this is an issue of TSM design -  if your libraries are out
of capacity in terms of SLOTS, rather than throughput, you just have
"too much" data.

That either means you are

1) not compressing the data as much as you can, or
2) backing up things you don't need to
3) keeping data longer/more copies  than you need to
4) really in need of additional library space

For 1), it's a matter of checking to make sure that your drives do have
compression turned on.  If you can't compress at the drive level, turn
it on at the client level.

For 2-4, I don't know any magic/automatic way of figuring it out.

Here's what I do:

dsmadmc -id=x -password=yyy -commadelimited "select
CURRENT_DATE as DATE,'SPACEAUDIT',node_name as node, backup_mb,
backup_copy_mb, archive_mb, archive_copy_mb from auditocc"

Suck that into a spreadsheet and look to see which clients are taking up
the most space on the server side.

Then go look in detail at the management classes and exclude lists
associated with the "hoggish" clients, and see what you can find out
about the copies they are keeping.

- Are you keeping copies of EVERYTHING on the client for a zillion
versions, rather than just the important data files?
- for Windows 2000, are you keeping more copies of the SYSTEM OBJECT
than would likely be used?
- Look at their dsmsched.log files and see what is actually being backed
up.

- Be suspicious of TDP clients not deleteing copies they are supposed
to. (For example, if they are supposedly keeping 10 versions of a 10 GB
data base, but the SELECT shows 500 GB on the server, there's something
wrong.)
- If it's user/group space, are there lots of .mp3 files?  (exclude 'em
with an exclude statement.)

Re: tsm win2k silent install

2003-09-23 Thread Steve Schaub
Working on this myself - the 5.2 has "issues" with the msi used for the
silent install.  You have to get the 5.2.0.2 release, and note that
their documentation is wrong - use "Readmes" instead of
"OnlineClientReadmes".

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-412-0544 (numeric page)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Tim Brown [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 19, 2003 5:08 PM
To: [EMAIL PROTECTED]
Subject: tsm win2k silent install


I am trying a silent install with TSM 5.2 on Win2k; the 5.1 code readme
file had this command:

dsmcutil  install /name:"TSM Central Scheduler Service" /node:NODE
/password:PASS /autostart:yes /MACHine:nODE /CLIENTDIR:"c:\program
files\tivoli\tsm\baclient"

The 5.2 readme does not.

Has anybody been able to install 5.2 on Win2k silently?

Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Phone: 845-486-5643
Fax: 845-486-5921


Re: TSM and DR

2003-09-19 Thread Steve Schaub
If someone has not taken a TSM DB backup for 3 months, they weren't
really serious about DR in the first place.

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-412-0544 (numeric page)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 19, 2003 1:19 PM
To: [EMAIL PROTECTED]
Subject: TSM and DR


Say for a moment you're faced with recovering a TSM server in a DR
situation. You have your DB backup and copypool tapes and perform a
database recovery. If that DB was created back in January and it's now
March, isn't there a potential for objects getting expired the first
time you start the TSM server? E.g. when the TSM server is started it
typically performs an expire inventory as part of that sequence. I would
imagine that now that it's 2 months later, would it therefore start
expiring objects that you probably don't want to have expired?

If not, why not?
If so, whats the appropriate step to take before starting the TSM server
(or perhaps even before recovering the DB) to ensure expire inventory
doesn't ruin your recovery? I recall there being an option in
dsmserv.opt that allows you to turn off automatic expire inventory. That
seems like a good idea.. but what if there was an admin schedule that
runs expire inventory back then and you happen to start the recovery
while in the schedule's window?
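
For the record, the server options I had in mind look like this in
dsmserv.opt (5.x names, hedged; set them before bringing the recovered
server up):

  * run expiration only when commanded
  EXPINTERVAL 0
  * keep admin and client schedules from firing during recovery
  DISABLESCHEDS YES

NOMIGRRECL is sometimes added as well, to hold off migration and
reclamation until the recovery is verified.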

I think you can see what I'm getting at with all this. I want to make
sure all my bases are covered..


This e-mail has been captured and archived by the ZANTAZ Digital
Safe(tm) service.  For more information, visit us at www.zantaz.com.
IMPORTANT: This electronic mail message is intended only for the use of
the individual or entity to which it is addressed and may contain
information that is privileged, confidential or exempt from disclosure
under applicable law.  If the reader of this message is not the intended
recipient, or the employee or agent responsible for delivering this
message to the intended recipient, you are hereby notified that any
dissemination, distribution or copying of this communication is strictly
prohibited.  If you have received this communication in error, please
notify the sender immediately by telephone or directly reply to the
original message(s) sent.  Thank you.


Re: SUBFILEBACKUP problems and issues

2003-08-14 Thread Steve Schaub
I have been considering using this option on all of our server backups once we move to 
a 100% disk environment for our primary pools.  This would potentially save us a lot 
of disk space, while still allowing for fairly rapid restores.  But if there are 
limitations that would prevent us from restoring, obviously that will not work.  I 
agree that the developers should start looking at how to enable this option being used 
at the enterprise level, not just laptop.
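
For reference, the client-side knobs involved, as a dsm.opt sketch (cache
path illustrative):

  subfilebackup    yes
  subfilecachepath c:\tsmsubfilecache
  subfilecachesize 1024

subfilecachesize is capped at 1024 MB, which is the 1-gigabyte
checksum-store limit discussed below.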

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-412-0544 (numeric page)
[EMAIL PROTECTED] (text page)
WWJWMTD


-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2003 3:26 AM
To: [EMAIL PROTECTED]
Subject: AW: SUBFILEBACKUP problems and issues


Hi,

I have a couple of small file servers being backed up over VPNs of 128-256k.

All share the same business requirement: to be able to restore single files rapidly,
while restoration of a whole server - considered to be rare - may take a week.

So raising these limits would be of help for me and my network utilisation.

Maybe it could help if some of us (== more than one) would open
an official enhancement request with IBM?

There was a small chat with one developer this year on this forum; he said there
were no huge technical problems with it.

regards
Juraj Salak



-Original Message-
From: Len Boyle [mailto:[EMAIL PROTECTED]
Sent: Monday, August 11, 2003 11:22 PM
To: [EMAIL PROTECTED]
Subject: SUBFILEBACKUP problems and issues


Hello

We have been using the subfilebackup feature for Windows 2k clients located out on the
WAN. This feature has helped quite a bit as the Windows client disk usage has grown and
the WAN line speeds have grown, but not at the same rate.

Side note: this is where the TSM client, on an incremental backup, only backs up changed
blocks and not the whole file. This is a huge savings in network and TSM server
media. To do this the TSM client creates a checksum for each file and stores them in a
directory on the client.

We have run up against several design limits, which were discussed on this listserv a 
few months back. Now we have run into a problem that appears to be the same as listed 
in apar IC36552.

Both the limits and the problem are related to that magic 1 and 2 gig numbers.

The problem is when the base file and delta files, each under the 2 gig support limit,
in total exceed 2 gig. Then the client cannot put the data together to restore the
file. It appears that the client uses a 2 gig memory map of the data.

The first limit is that the files used to store the checksums have a limit of 1
gigabyte.

The second limit is that the files covered under subfile are limited to 2 gig.

The original design thoughts for this feature seemed to be for laptops with small disk
drives. Of course we are using this for Windows file servers with users who store
files greater than 2 gig.

So we have run up against both of the above limits. And now we are finding that the
file limit is really going to have to be something much smaller than 2 gig, because of
APAR IC36552.

Can I ask, if there is a test fix for APAR IC36552, that I be put on the list of
testers.

Also let me put in a vote to raising the limit on the file size and the checksum size.

How many folks out there are using this feature?
For servers?
On the lan and not the wan?


-
Leonard Boyle   [EMAIL PROTECTED]
SAS Institute Inc.  [EMAIL PROTECTED]
Room RB448  [EMAIL PROTECTED]
1 SAS Campus Drive  (919) 531-6241
Cary NC 27513


Need help setting up FILE class pool volumes

2003-08-14 Thread Steve Schaub
         Mount Retention (min): 
  Label Prefix: 
   Library: 
 Directory: /bfaa2dk10/filevolumes
   Server Name: 
  Retry Period: 
Retry Interval: 
Shared: 
Last Update by (administrator): TSS1
 Last Update Date/Time: 08/06/03 13:08:11


   Volume Name: /bfaa2dk10/filevolumes/1FCB.BFS
 Storage Pool Name: DIRFILE
 Device Class Name: FILE2
   Estimated Capacity (MB): 50.0
  Pct Util: 97.7
 Volume Status: Filling
Access: Read/Write
Pct. Reclaimable Space: 0.4
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 6
 Write Pass Number: 1
 Approx. Date Last Written: 08/06/03 13:40:04
Approx. Date Last Read: 08/07/03 16:34:20
   Date Became Pending: 
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location: 
Last Update by (administrator): 
 Last Update Date/Time: 08/06/03 13:37:55


Filesystem      1024-blocks      Free  %Used  Iused  %Iused  Mounted on
/dev/bfaa3dk10       524288    523688     1%     17      1%  /bfaa3dk10
/dev/bfaa4dk10       524288    523688     1%     17      1%  /bfaa4dk10
/dev/bfaa5dk10       524288    523688     1%     17      1%  /bfaa5dk10
/dev/bfaa6dk10       524288    523688     1%     17      1%  /bfaa6dk10
/dev/bfaa7dk10       524288    523688     1%     17      1%  /bfaa7dk10
/dev/bfaa2dk10       524288    472472    10%     19      1%  /bfaa2dk10

 

 
[/bfaa2dk10]  
%ls -l

total 16

drwxr-sr-x   2 root sys 512 Aug 06 13:37 filevolumes

drwxrwx---   2 root system  512 Aug 01 16:10 lost+found

 

[/bfaa2dk10]  
%cd filevolumes

 

[/bfaa2dk10/filevolumes]  
%ls -l

total 102400

-rw---   1 root sys52428800 Aug 06 13:44 1fcb.bfs

 


Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-412-0544 (numeric page)
[EMAIL PROTECTED] (text page)
WWJWMTD


TDP for Exchange - 5mb/s throughput?

2003-03-20 Thread Steve Schaub
We are running TDP-E on a Compaq 4-way Xeon w/4GB mem.  We back up 40GB in
around 2.25 hrs, which comes out in the range of 5MB/s.  We have run the
backup across the fast ethernet and also the gig-e connection, with no
apparent difference.  The box does not show any cpu or memory hit during
the backup period.
Does anyone have experience with TDP MSExchgV2 (shows client version
4.2.2.0) that would suggest where the bottleneck might be?  We are
coming into an IBM H50 4-way w/2GB memory, TSM 4.2.1.9, going to an SSA
diskpool.

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-836-6962 (cell)
WWJWMTD


Re: HELP: MVS TSM Server Hanging on Boot

2003-01-07 Thread Steve Schaub
Mark,
We had this issue a few weeks ago.  We ended up having to get Tivoli
level 2 to walk us through the fix.  It basically involved finding a
filesystem (we use AIX for the server) that had spare room, defining a
temporary log volume using the dsmserv command, and letting it use this
"extra" space to perform the redo action.  Sorry this is short on
detail; maybe one of the TSM "gurus" can fill in the exact commands?
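
On AIX the emergency sequence is roughly (paths and sizes illustrative;
the volume is formatted 1 MB larger than the amount added, and MVS syntax
differs):

  dsmfmt -m -log /sparefs/templog.dsm 513
  dsmserv extend log /sparefs/templog.dsm 512

i.e. format a spare volume, then extend the log with the server down so
the undo/redo passes can complete.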

-Original Message-
From: Hokanson, Mark [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 07, 2003 3:19 PM
To: [EMAIL PROTECTED]
Subject: HELP: MVS TSM Server Hanging on Boot


This morning we noticed our TSM recovery log was ~97-100% full. The
server was still up but we couldn't log in. We halted the server,
increased the recovery log, and are trying to bring the server back up.

The problem is the server is stuck in Recovery log undo pass in
progress.

10.36.06 STC24351  ANR0900I Processing options file dsmserv.opt.

10.36.07 STC24351  ANR0990I Server restart-recovery in progress.

10.36.30 STC24351  ANR0200I Recovery log assigned capacity is 11780
megabytes. 10.36.30 STC24351  ANR0201I Database assigned capacity is
53728 megabytes.

10.36.30 STC24351  ANR0306I Recovery log volume mount in progress.

10.44.07 STC24351  ANR0353I Recovery log analysis pass in progress.

11.57.38 STC24351  ANR0354I Recovery log redo pass in progress.

12.52.38 STC24351  ANR0355I Recovery log undo pass in progress.

12.52.40 STC24351  ANR0362W Database usage exceeds 88 %% of its assigned
capacity

Can anyone please advise?

Mark Hokanson
Thomson Legal and Regulatory



NexSan's ATABoy/ATABeast as a primary diskpool

2003-01-03 Thread Steve Schaub
Is anyone out there using one of these, or something similar, as TSM's
primary diskpool?  If so, what is the good/bad/ugly involved?

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-836-6962 (cell)
WWJWMTD



Re: "size estimate exceeded" when "cache migrated files" is enabled

2003-01-02 Thread Steve Schaub
Not sure about the API, but we had this same issue trying to use the TDP
for Exchange.  Turns out you can't use a diskpool with caching turned
on, because the calculation prior to send is not accurate.  There is
nothing in the documentation that I could find that pointed this out,
but Tivoli assured me that this was an "intentional feature", not a bug!
Grrr.

-Original Message-
From: Christian Sandfeld [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, January 02, 2003 7:30 AM
To: [EMAIL PROTECTED]
Subject: "size estimate exceeded" when "cache migrated files" is enabled


Hi list,

First, a happy new year to you all!

Just before Christmas we changed our setup a bit to facilitate backup of
a Domino server running on AS/400, utilizing BRMS and the TSM API as the
backup client. We have verified that BRMS can contact the TSM server,
and also verified that backup runs as scheduled.

On the TSM server we introduced a seperate policy domain and a seperate
disk pool for the AS/400 data. The size of the diskpool is 270GB, and
the data backed up from the AS/400 server each day is approximately
80GB.

All ran smoothly for the first couple of days, but since enabling
caching of migrated files, we started to get loads of these errors in
our activity
log:

  ANR0534W Transaction failed for session 12451 for node
  AS4DOM01 (OS400) - size estimate exceeded and server is
  unable to obtain additional space in storage pool AS400POOL.


It is my understanding that caching will keep data in the diskpool even
after files are migrated to tape, but that when space in the diskpool is
needed for new data, some of the old data is removed.

The server is ver. 4.2.2.12
I believe the TSM API installed on the AS/400 is the latest version
currently available, but I can't remember for sure (and I'm not sure how
to check).

Can anyone help shed some light over this matter?

Kind regards,

Christian



Re: Webrowser sessions that drive up cpu usage on AIX

2002-12-11 Thread Steve Schaub
We have been plagued by this problem for several years.  We have only
seen problems when a client machine has IE at level 5.0.  We ended up
including a "phantom killer" to our healthcheck that cancels any "?"
sessions it finds when it runs.

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, December 10, 2002 7:25 AM
To: [EMAIL PROTECTED]
Subject: Re: Webrowser sessions that drive up cpu usage on AIX


I have seen them left behind when a user doesn't log out and just closes
the window also.

Thanks,
Robert Rippy



From: Peter Hadikin <[EMAIL PROTECTED]> on 12/09/2002 04:08 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Webrowser sessions that drive up cpu usage on AIX

Running TSM server on AIX ... AIX is at 4.3.3 maintenance level 10, TSM
server is at 5.1.5.2.  Anybody seen 'phantom' sessions left behind by
using a web browser that seem to use plenty of cycles on the server?  Is
it just me or are others seeing this?

  1,928  HTTP   RecvW  0 S  0  0  Admin  WebBrowser  ?
  1,949  HTTP   Run    0 S  0  0  Admin  WebBrowser  ?

Thanks in advance, Peter



Re: Tape drive recommendations

2002-10-31 Thread Steve Schaub
If I'm reading this correctly, only migrating my diskpool down to 10% is
hurting my DRM reclamation?
Are you saying that putting in a 10TB diskpool that never goes to
primary tape (only DR copies) would give me monstrously bad
reclamation?  Our current main diskpool is 100 GB on SSA and has caching
on.  I haven't had a terrible problem with DR reclamation, though the
storage pool backup to DRM only has a throughput of about 5 MB/sec.

-Original Message-
From: [EMAIL PROTECTED] 
Sent: Thursday, October 31, 2002 11:03 AM
To: CITY.WINNIPEG.MB.CA.TRushfor; VM.MARIST.EDU;.ADSM-L
Subject: Re: Tape drive recommendations


And the issue is still there if you don't use CACHE=YES but don't
completely clear your backup pool.

We have more disk in our storagepool than is required for one night's
incremental - so we thought keeping some backups on disk was a good
thing (why migrate to tape if you don't need the space).


Tim Rushforth
City of Winnipeg

-Original Message-
From: Bill Boyer [mailto:bill.boyer@;VERIZON.NET]
Sent: October 31, 2002 9:01 AM
To: [EMAIL PROTECTED]
Subject: Re: Tape drive recommendations

Be careful of your copypool reclamations with the disk cache turned on!!
There is a BIG performance hit on reclamation when the primary copy of
the file is on a DISK direct access storage pool. Then the
MOVESIZETHRESH and MOVEBATCHSIZE values are thrown out the window and
the files are processed one at a time.

What I've done to relieve the restore times is to not MIGRATE the disk
pools until the end of the day. That way restoring from last night is
quick. I had a client where they wanted CACHE=YES on a 60GB disk pool.
The offsite copypool reclamation ran for 2 days!  Changed it so that
migration started at 5:00pm and nobody complained about restore times.
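
One way to implement that kind of delayed migration is a pair of
administrative schedules that drop and restore the thresholds (the pool
name, times, and threshold values here are illustrative):

   def sched migstart type=administrative active=yes starttime=17:00 period=1 perunits=days cmd="upd stg diskpool hi=0 lo=0"
   def sched migstop type=administrative active=yes starttime=23:00 period=1 perunits=days cmd="upd stg diskpool hi=90 lo=70"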

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@;VM.MARIST.EDU]On Behalf Of
Steve Schaub
Sent: Thursday, October 31, 2002 5:59 AM
To: [EMAIL PROTECTED]
Subject: Re: Tape drive recommendations


This is one reason I am looking into some of the new, cheaper ATA-based
disk arrays.  98% of restores come from the last few versions, so if you
can size the diskpool large enough (and turn caching on) that you rarely
need to go to tape, restores scream.  Some of the initial prices I am
seeing are < $10k per TB.  It's not SSA, but for a diskpool it might be
fast enough.

-Original Message-
From: [EMAIL PROTECTED]
Sent: Wednesday, October 30, 2002 10:15 PM
To: UFL.EDU.asr; VM.MARIST.EDU;.ADSM-L
Subject: Re: Tape drive recommendations

=> On Wed, 30 Oct 2002 16:42:14 -0600, "Coats, Jack"
<[EMAIL PROTECTED]> said:

> From my foxhole, LTO works great, but in some ways it is 'too big'. 
> The spin time on the tapes is measured as about 3 minutes to rewind 
> and unmount a tape.  Meaning if you have to scan down a tape to 
> restore a file it can be a while.  Very fast tapes tend to be small, 
> so it is a real tradeoff.

> Speed of restore is starting to be a factor here and I have seen 
> several posts where that is becoming more of an issue at many sites. 
> But the architecture of TSM that makes it great, also gets in the way 
> of high speed restores, unless you have lots of slots in a large 
> library for a relatively small number of clients (co-location and/or 
> backup sets - for these, many smaller tapes might be better, but I 
> digress).


Our call on this is congealing: Use the LTO for less-often-read storage.
i.e.: copy pools.  If we can have primary pools on 3590s, we can get up
to 60G raw on the -K volumes.  That seems plenty at the moment.

We can use the 200G-raw (coming soon!) LTO volumes for copies, and read
from them correspondingly less often.

LTO drives are, at the very least, a cheap way to increase your drive
count.

- Allen S. Rout



Re: resource utilization

2002-10-31 Thread Steve Schaub
We set it to 5 for servers whose CPU, memory, and OS can handle
multiple concurrent backup streams, and leave it at 2 for those
that are borderline.  For a beefy machine with multiple filespaces,
setting it higher will definitely improve backup elapsed times.  But it
can put a big strain on smaller machines.
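
The setting itself is one line in the client's dsm.opt (the value 5 is our
choice for the big boxes, not a Tivoli recommendation):

   RESOURCEUTILIZATION 5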

-Original Message-
From: [EMAIL PROTECTED] 
Sent: Thursday, October 31, 2002 8:45 AM
To: EXCHANGE.ML.COM.JWholey; VM.MARIST.EDU;.ADSM-L
Subject: resource utilization


Is it recommended to add this value to the dsm.opt file?  Initially,
IBM's recommendation was to use the default value of 2 unless you have
special circumstances.  What is Tivoli's position at this time?  What are
most people doing?  What are the drawbacks, if any?



Re: HELP! Faster Restore than Backups over Gigabit?

2002-10-31 Thread Steve Schaub
Paul (or anyone who knows),
If Vin has the logging mode set to rollforward, does that impact
performance?  I only suggest this because I have had a nagging slowdown
in some operations and the only change I can really identify is that we
switched from normal to rollforward logging.  Just a wild guess.

-Original Message-
From: [EMAIL PROTECTED] 
Sent: Wednesday, October 30, 2002 10:36 PM
To: NAPTHEON.COM.seay_pd; VM.MARIST.EDU;.ADSM-L
Subject: Re: HELP! Faster Restore than Backups over Gigabit?


You have lots of RAM, but what is your bufferpool size?  For this size
of machine memory, I would use at least 256MB for the bufferpool size.
Also beef up the logpool size as well.  Your TSM DB and log are very
small.  Do a Q DB F=D command and see what your database hit percentage
is.  If it is less than 98 percent, you need to increase the bufferpool.
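
From an administrative client that is roughly (262144 KB is 256MB; on some
server levels you must set BUFPOOLSIZE in dsmserv.opt and restart instead,
so treat the SETOPT route as an assumption for your level):

   q db f=d
   setopt bufpoolsize 262144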

Did you TSM mirror the Database and Log?  If so, it is probably useless
because you are already running protected disk.  Remember, on RAID 1 you
still have to write all data twice and on read back both drives can be
used as a source if the RAID card is smart enough to do that.

I am betting that you are overrunning the RAID card during the backup.
Are the RAID 1 mirror volumes on the same physical SCSI bus as the
primaries? That could be the issue as well.

Hope this helps.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Vin Yam [mailto:vyam@;QBCT.COM]
Sent: Wednesday, October 30, 2002 11:14 AM
To: [EMAIL PROTECTED]
Subject: Re: HELP! Faster Restore than Backups over Gigabit?


Hi,

Our backups are set for an absolute serialization.  We backup and
restore the same number of files and the same amount of data 42 GB.  The
TSM server is configured in a RAID 1E0 with a ServeRAID 4Mx adapter.
The client is using a RAID 1.  We are going straight to disk since our
diskpool is 150 GB. We've tried formatting the partition that the
diskpool is on in 64k blocks, but it hasn't proved much help.

Is there a case where the TSM DB can be TOO large?  Our TSM DB volume is
1 GB in size and our recovery log is 250 MB in size.  We have 4 GB of
RAM on the TSM server, so we didn't consider this to be a problem.
Thanks for any help.  Please email me if you need more information.

-Vin
[EMAIL PROTECTED]


-Original Message-
Forum:   ADSM.ORG - ADSM / TSM Mailing List Archive
Date:  Oct 30, 00:08
From:  Seay, Paul <[EMAIL PROTECTED]>

Actually, the 2048 TCPWINDOWSIZE is not supported in NT to my knowledge.
It is supported in W2K at SP1 or 2, cannot remember, with a registry
hack. Someone else will have to give the particulars on that.

Be careful comparing restores to backups.  Depending on what numbers you
are using, you may get the wrong conclusions.  Make sure it is the same
files backed up that were restored.  Also, look at where your backup is
going to on your server.  If it is going to RAID-5 storage pools, that
is it.  The write penalty on the RAID-5 Array is the cause of the backup
delay.  If you are going directly to tape, I do not know what the issue
is without a lot more information.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Vin Yam [mailto:vyam@;QBCT.COM]
Sent: Tuesday, October 29, 2002 7:16 PM
To: [EMAIL PROTECTED]
Subject: HELP! Faster Restore than Backups over Gigabit?


Hi,

We just installed Gigabit fiber NICs and an isolated gigabit fiber
switch. Our restores have increased dramatically from 31.81 GB/hr (8.84
MB/s) to 56 GB/hr (15.6 MB/s).  The backups are still around 35.5 GB/hr
(9.8 MB/s).  The TSM server is very powerful (NT 4.0 SP6a Dual 1.8 GHz
P4 w/ 4GB RAM, 150 GB RAID array) and the clients are (Netware 4.2 Quad
XEON P3 w/ 4 GB RAM). (We've tried changing the TCPWindow Size in our
dsmserv.opt from 63 to 2048 with no effect) We're running TSM 5.1.1.4
server and TSM 4.2.3 netware
client.   Any
ideas?  Please email me direct if you have any suggestions or need more
information.  Thanks.

DSM.OPT settings

** TSM TWEAKS **
COMPRESSION NO
LARGECOMMBUFFERS YES
MEMORYEFFICIENTBACKUP NO
PROCESSOR 20
RESOURCEUTILIZATION 10

TXNB 2097152

TCPB 32
TCPNodelay YES
TCPWindowsize 64
** TSM TWEAKS **

managedservices schedule webclient
schedmode prompted

-Vin
[EMAIL PROTECTED]



Linux as TSM Server; ATA DiskArrays; SSA-diskpool copypool speeds (3 totally unrelated questions for the price of one!)

2002-10-24 Thread Steve Schaub
I have 3 questions I have been pondering for a few weeks and would like
input on:

1. Running TSM Server under Linux on Intel
Has anyone started running TSM Server under Linux yet, and if so, would
you be willing to provide feedback?
Our current environment is TSM 4.2 running on a NetStore (3466-C00) with
4 CPUs, 2 GB RAM, and SSA disk for db/logs and all diskpools.  NSM is being
discontinued and we have no AIX admin (HP-UX shop starting to add
Linux).  Support $ for the rs/6000 is also increasing dramatically.  I
would like to consider using a beefy Intel box (Compaq/Dell/IBM) or even
a cluster of less beefy machines if the bottom line looks good and it
works.  We could use W2K but I'm not quite ready to take the plunge into
the "dark side".

2. Using ATA Disk Array for nearline storage
With STK and others coming out with lower cost disk arrays based on ata
disk, has anyone considered using one as nearline?  We have about 20TB
on tape, and if I could keep all our backup on a disk array and only use
tape for DRM I could cut down on a lot of tape processing/cost and avoid
buying more drives.  Thoughts?

3. We currently use SSA disk for our db/logs as well as all our
diskpools.  Our stgpool backup from diskpools to DRM tape only achieves
about 5 MB/sec, which I don't consider very good.  Is this really
reasonable, and if not, where would be a good place for me to start
troubleshooting (remember with NSM I am very limited in what I can do on
the box).

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-836-6962 (cell)
WWJWMTD



Re: how many active session?

2002-10-17 Thread Steve Schaub
Another approach is to turn accounting on, and use those records to
create the view.  We had the same need, and wrote a Rexx program that
daily reads those records and generates a timeline using a very basic
text format of a bar graph showing exactly when each session was
running.  We reference this whenever we want to add new nodes, or
occasionally revamp the entire schedule, since it shows where our
"spikes" and "holes" are.

-Original Message-
From: "Mr. Lindsay Morris" <[EMAIL PROTECTED]> 
Sent: Thursday, October 17, 2002 9:17 AM
To: VM.MARIST.EDU;.ADSM-L
Subject: Re: how many active session?


One way to do this is to take periodic "query session" readings and
count them. A better way (ours) is to analyze the activity log for
session-start and session-end messages (not as easy as it sounds, but
do-able). This is better because:
1. It doesn't hammer your TSM server with queries
2. It sees all the fine detail of session activity.  When a TSM
schedule fires, you may have 50 nodes in session - but only for a
minute.  60 seconds later 8 of them are done; two minutes later 25 of
them are done.  So there's a spike of activity that you'd miss, if you
only took a reading once every five minutes.

What good is this detail?  Well, when you look at a night's worth, you
might see that there's a spike at 8PM, which is mostly done by 8:30.
Then there's another spike at 10PM.  But from 8:30 to 10:00 PM, nothing
much is happening.  So you can see that it might be smart to move your
10 PM job up to 8:30, and squeeze your backups into a smaller window.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@;VM.MARIST.EDU]On Behalf 
> Of Niklas Lundstrom
> Sent: Thursday, October 17, 2002 2:42 AM
> To: [EMAIL PROTECTED]
> Subject: Re: how many active session?
>
>
> Hello
>
> The TSM Manager can provide you with this and much more. There is a
> free 30-day trial version to download at www.tsmmanager.com. I think
> the product is very good.
>
> Regards
> Niklas
>
> -Original Message-
> From: MC Matt Cooper (2838) [mailto:Matt.Cooper@;AMGREETINGS.COM]
> Sent: den 16 oktober 2002 13:31
> To: [EMAIL PROTECTED]
> Subject: how many active session?
>
>
> Does anyone know of a way to track the number of active sessions
> overnight?  I know you can schedule recurring QUERY SESSIONs and then
> count them.  I want to just get the number of active sessions every
> 15, 30 or 60 minutes.  I have to find the best place to move some
> backups but I don't have a way of tracking this.
>
> Would that TSM MANAGER product provide this?
> Thanks
> Matt
>



Re: Archive or backup question

2002-09-25 Thread Steve Schaub

Antony,
Not sure why anyone would want to back up a cache directory (assuming you can even get 
it into a quiesced state so you aren't sitting with a "fuzzy" backup), but...

I believe the archive command has a -quiet option that turns off detailed logging to 
the dsmsched.log file.  The downside is that you won't have a log of everything that 
was done (but you probably don't want to read 8 million lines of "x backed up" 
either).
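
The invocation would be something like this (the path and management class
name are illustrative; -archmc binds the archive to a class carrying
whatever short retention you want for the single DR copy):

   dsmc archive "/cache/*" -subdir=yes -quiet -archmc=dr_cache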

As far as the TSM db is concerned, 8m objects is going to be 8m objects, regardless.  
Throw more disk at it.


Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/24 9:57 PM >>>
We have a cache directory on an AIX server that has about 8 million
objects. We want to archive this maybe twice a week for DR purposes and
only keep one copy. The trouble is that it blows out our database with all
the objects, and the schedule log blows out the filespace where it is
saved. Does anyone know of any way to backup/archive this amount of
objects and not blow out both the DB and the filespace?

Any help or comments would be appreciated.

Antony Ryan
Support Specialist
KAZ Computer Services
118 Bennett Street
East Perth   WA  6004

Phone:  1300 657 627
   +61 8 6212 0100
Mobile:   0438 074 895
Fax:   +61 8 6212 0101

A division of KAZ Group Limited - visit our web site at
http://www.kaz.com.au



Re: Migration question

2002-09-13 Thread Steve Schaub

No, each storage pool has a setting that controls the number of migration processes 
used during migration.  Do a Q STG <poolname> F=D to show the setting, and UPD STG 
<poolname> MIGPR=<#> to change it.  For example, to use 3 drives during the migration 
of a stgpool named DISKPOOL, enter UPD STG DISKPOOL MIGPR=3.

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/13 3:43 AM >>>
Good morning to all of you.

I have a quick question. Does migration automatically use all available
tape drives?

Thanks in advance

Farren Minns - TSM and Solaris System Admin - John Wiley and Sons Ltd

Our Chichester based offices have amalgamated and relocated to a new
address

John Wiley & Sons Ltd
The Atrium
Southern Gate
Chichester
West Sussex
PO19 8SQ

Main phone and fax numbers remain the same:
Phone +44 (0)1243 779777
Fax   +44 (0)1243 775878
Direct dial numbers are unchanged

Address, phone and fax nos. for all other Wiley UK locations are unchanged
**



Re: Engineering change NSM

2002-09-13 Thread Steve Schaub

Brian,

Our company has a C00 NSM and just went through this EC three weeks ago.  We had 
issues with the lmcp0 daemon not starting correctly, so we had no communication to our 
3494 (even though the 3590 drives were online).  Also, our fast ethernet adapter 
somehow defaulted back to half duplex, which we didn't catch until the next day.  Our 
biggest issue, though, was that we weren't told up front that it was a 
destructive OS upgrade - they boot from their own mksysb tape and lay down a totally 
new system.  We had a lot of scripts and perf data that went up in smoke.  Fortunately 
we were able to get them back from our own mksysb that we ran prior to the 
upgrade, but it was not much fun.  Other than that, it went ok.


Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/12 5:35 PM >>>
Hello,

Next week we will have the first upgrade of our NSM. It's a model C01, with
AIX 4.3.3 and TSM 4.1, and it will be upgraded to AIX 5.1.0 and TSM 4.2.1.9.

Because this is the first time, I was wondering if other NSM sites would
share their experiences with me, and give me some tips and tricks, or maybe a
kind of checklist/procedure.

Thanks in advance,

Brian.






Re: mass Windows client dsm.opt file edit

2002-09-12 Thread Steve Schaub

We only store settings in the local dsm.opt that can't be set using client option sets 
on the TSM server.  All of our include/exclude statements are stored in client option 
sets based on client OS.  That makes changes a lot easier, since we make the change in one 
place, and we don't have to refresh the scheduler on the client for the change to take 
effect.
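
The server-side setup is only a few commands - for example (the option set
and node names are illustrative):

   def cloptset winnt desc="Standard Windows options"
   def clientopt winnt inclexcl "exclude c:\temp\...\*"
   upd node somenode cloptset=winnt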

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/12 12:48 PM >>>
Does anyone have a clever technique / method to perform a dsm.opt file edit
(ie. a new exclude statement) across 100's of Windows clients at one time?




Automated install of 4.2 on WinNT workstations

2002-09-12 Thread Steve Schaub

I have 100 engineering workstations that I need to convert from ARCserve to TSM.  What 
is the best method for getting the 4.2 client running on these machines?  I have seen 
several posts about using .bat files with dsmcutil and others about msiexec, 
but I am confused about the difference between these methods.  I would like to have a 
"one button push" for these engineers to use.  One option I have here is using Netware 
Application Launcher, but the paperwork involved in getting the software into it is 
somewhat daunting.  Regardless of the method, how do I dynamically get the node/pwd 
into each setup (we will use the machine name for both)?  Thanks.
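
From the posts, the .bat/dsmcutil approach seems to boil down to something
like this (the MSI name and silent-install switches are guesses on my part,
which is partly why I'm asking):

   msiexec /i "IBM Tivoli Storage Manager Client.msi" /qn
   dsmcutil install /name:"TSM Scheduler" /node:%COMPUTERNAME% /password:%COMPUTERNAME% /autostart:yes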

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza



How do I grant cross-node restore access (easily)

2002-07-12 Thread Steve Schaub

How can I make sure that all of my client nodes have rights to restore data from any 
other node without having to go into each individual client GUI and using the 
"utility/user access list" option?  Is this permission stored locally on each client 
node, or is it in the TSM server database?
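
The only command-line equivalent I've found is SET ACCESS, run from each
granting node, e.g.:

   dsmc set access backup "*" othernode

but doing that node-by-node is exactly what I'm hoping to avoid.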

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza



3590 pricing (used)

2002-06-12 Thread Steve Schaub

Has anyone purchased used 3590 equipment recently and would be willing to share a 
reasonable ballpark dollar amount?  We are running out of room in our 3494 and would 
like to start converting our S/390 over from 3490 to 3590.  What would be a good price 
for an A60 controller and four 3590E1A or B1A (ESCON) drives?

Alternately, if I sacrificed my four 3590E1A drives from TSM to the mainframe and bought 
a separate library for TSM, what would it take to replace what I have (277-J, 218-K of 
which 119-K are offsite)?

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.



Re: data transfer time ???

2002-06-06 Thread Steve Schaub

I have recently been trying to troubleshoot some network transfer rate problems with a 
few clients, and noticed that there seems to be a wide difference between what the 
client reports back (picked up in the activity log) vs. the accounting file records.  
Can anyone tell me why this is, and which numbers are the most reliable?

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.


>>> [EMAIL PROTECTED] 06/05 10:13 PM >>>
Not long ago, I had the same question about the definition of this term,
and here's the answer:

Data Transfer Time - the total time it takes to send the total backup data
from the TSM client to the TSM server over the network.
  *** My understanding of this definition is: the client process reads
data from disk into an I/O buffer, then sends it from the I/O buffer to the
TSM server through the network.  When the TSM server receives the data, it
sends an ACK packet back to the client.  When the client receives the ACK
packet from the TSM server, it reads the next segment of data from disk
into the I/O buffer.

===> Data Transfer Time is the total sum of time that the client spends
sending data to the server through the network, including the time to
receive the ACK packet, but excluding the time to fetch the next batch of
data into the I/O buffer.

Network data transfer rate - the rate at which the TSM client sends data
over the network to the TSM server.

  *** My understanding of this definition is: the total amount of data
sent to the TSM server divided by the data transfer time



Thanks & Regards
William



Calculating network transfer rates using accounting log data

2002-05-23 Thread Steve Schaub

I have a script that reports on backup sessions using the accounting log, and I 
thought I was calculating the network transfer rate correctly, but I see that the 
results returned from the client are reporting very different numbers.  The formula I 
am using to figure out the speed at which data is actually being transferred when 
transmission is happening is:

xmit_kbytes / (session_seconds - mediaw_seconds - commw_seconds - idlew_seconds)

xmit_kbytes(field20) = 44016
session_seconds(field21) = 125
idlew_seconds(field22) = 0
commw_seconds(field23) = 112
mediaw_seconds(field24) = 0

my report thus shows a network transfer rate of 3,386 KB/s,
but the client results say 393 KB/s.
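
Plugging the field values in (both results in KB/s):

   44016 / (125 - 0 - 112 - 0) = 44016 / 13 = ~3,386 KB/s
   44016 / 112 = ~393 KB/s

The second line matches the client's number exactly if the client's
"network data transfer rate" divides by the 112-second commw figure alone -
that's my inference from the arithmetic, though, not anything I've found
documented.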

I guess even after reading the manual and various postings, I still don't understand 
what commw means, exactly.  Can someone clear this up for me?

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.



Best value for disk pool

2002-05-16 Thread Steve Schaub

We have a 3466 C00 (AIX 4.3.3  TSM 4.1.0) using approx. 144gb of SSA disk for our db, 
logs, and several disk pools.  We have reached a point of needing to add more disk for 
the db and pools, but because of budget cuts, the $$ for additional SSA disks is not 
looking good.  It is time for us to consider using other types of direct attached disk 
for this system.  Currently, we are not raid-ing the disk pool at all, and using tsm 
mirroring for the db.  Can I get some feedback from those with experience using other 
vendor's disk systems on an AIX box.  Cost, reliability, speed are of course the 
important metrics.  I do understand that hooking up non-SSA disk probably means 
decoupling the 3466 and turning it into a standalone rs6k box.  Anyone with tips on 
that path would also be appreciated.

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.



Backup DB using specific tape range

2002-04-13 Thread Steve Schaub

When my daily database backup runs, it sometimes calls for a tape that has not come 
back from offsite yet.  This is the command I am using (we want to use the smaller J 
tapes instead of K):

BACKUP DB Devclass=3590 Type=full 
Vol=300260,300261,300262,300263,300264,300265,300266,300267,300268,300269 Wait=yes 
 

This shows up in the log:

04/13/02 10:04:06 ANR1409W Volume 300260 already in use - skipped.
04/13/02 10:04:47 ANR8308I 008: 3590 volume 300261 is required for use in
library 3494A; CHECKIN LIBVOLUME required within 60 minutes.

Volumes 300262 through 300269 were in the library, but it didn't call for one of them.
Is there a way to ensure that it only tries to use a tape that is in the library?
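
One fallback I can think of is to check what's in the library first, or
just let the server take scratch (though scratch wouldn't let me force the
smaller J cartridges):

   q libvolume 3494a
   backup db devclass=3590 type=full scratch=yes wait=yes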

Steve Schaub
Haworth, Inc
email: [EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.



Most stable client level for Netware

2002-02-08 Thread Steve Schaub

We just upgraded our AIX 4.3.3 server (3466-C00) to TSM 4.1.1.0 and are getting ready 
to roll out new client code for our WinNT, Netware & HP-UX boxes.  Our biggest problem 
is all the Netware abends (4.11 & 5.1) we get every night, which hang the TSM session 
and usually require a reboot.  Yes, we have kept TSA up to date.

Can anyone point to the Netware TSM client release that is most stable?

Steve Schaub
Haworth, Inc
email: [EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.



3466 Support

2001-10-31 Thread Steve Schaub

We purchased a 3466 solution less than 2 years ago when we brought TSM in as our 
Enterprise Backup Solution.  One of the biggest selling points was that IBM handles 
all the maint/support issues with hardware/AIX/TSM.  Our understanding was that our 
maintenance cost covered AIX as well as TSM upgrades as they became available.  Now we 
are told that the move from 3.7.3.8 to 4.2.? is not covered under that agreement, so we 
can expect to shell out some significant bucks.  Was anyone else out there surprised 
by this, or are we the only ones who didn't read the fine print?

Steve Schaub
Haworth, Inc
email: [EMAIL PROTECTED]

No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.


