Re: Upgrade TSM 4.1 to TSM 5.2 on AIX 4.3.3 ??

2004-07-19 Thread Gordon Woodward
Jesse, I don't use AIX, sorry, so I can't help you there, but Paul Zarnowski talked about
his upgrade back in February, which sounds similar to yours.

Check out the following link for more info:

http://msgs.adsm.org/cgi-bin/get/adsm0402/236.html

Hope it helps.

Gordon Woodward
Wintel Server Support




From: [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: 16/07/2004 10:19 AM
Subject: Upgrade TSM 4.1 to TSM 5.2 on AIX 4.3.3 ??
Please respond to: ADSM-L


I am running TSM 4.1 on AIX 4.3.3.  I want to upgrade TSM to 5.2 and AIX to 5.2.  What
steps should I take?
Please help

Jesse Sanaseros
Dallas Metrocare Services
Network Administrator
[EMAIL PROTECTED]







Using multiple client interfaces

2004-07-19 Thread Cheese Machine
Hi everyone,

I wonder if anyone can offer any advice on the following:

We have a single TSM server with a gigabit interface. (x.x.1.102)

The major backup client is a Solaris machine running four Lotus Domino server
instances, each instance allocated its own port on a quad Ethernet card.

We have four TSM nodes registered for this client and run four scheduler services, one
for each Domino instance. We would like to see each client backed up via its own port
on the quad Ethernet. However, as there is only one TSM server address to connect to,
Solaris routes all the traffic via the same default interface.

Whichever backup is running, Solaris always returns the same default route to the TSM
server, and all four nodes back up via the same port.

This has an effect on performance, as all the clients need to back up at the same time
due to operational requirements.

I know that the client option TCPCLIENTADDRESS will tell the TSM server to contact the
client on a specific IP address, but this seems to apply only to server-to-client traffic.

Is there also a way of directing the backup traffic via a specific client interface
instead of using the default route as specified by the Solaris O/S?

Thanks for your help

Jon





Operational reporting

2004-07-19 Thread Henrik Wahlstedt
Hi,

TSM server 5.2.2.5 on Win2k.

I repeatedly get this error from Operational Reporting at the same time
every day (well, almost: 3 out of 5 days).
Application popup: Microsoft Visual C++ Runtime Library : Runtime Error!
Program: C:\xyz\UTILS\tsm\console\tsmreptsvc.exe
This application has requested the Runtime to terminate it in an unusual
way.
Please contact the application's support team for more information

LogReportSvc.txt
07/19/2004 05:30:20: Worker Thread 0 waiting for work.
07/19/2004 05:30:20: Worker Thread 1 is waiting for report completion.
07/19/2004 05:30:20:   Adding the Monitor for STO-W03,Server1,Hourly
Monitor,1 to the ready queue.
07/19/2004 05:30:20: Worker Thread 0 waiting for work.
07/19/2004 05:34:03: Email was sent to [EMAIL PROTECTED] from STO-W03
(STO-W03).
07/19/2004 05:34:03: Worker Thread 1 completed report for
STO-W03,Server1,Daily Report,0.

Only select statements from Operational reporting in TSM actlog, this is
the last one.
07/19/2004 05:33:46  ANR2017I Administrator xyz issued command: select
    msgno,nodename,sessid,message from actlog where (msgno=4952 or
    msgno=4953 or msgno=4954 or msgno=4955 or msgno=4956 or msgno=4957 or
    msgno=4958 or msgno=4959 or msgno=4960 or msgno=4961 or msgno=4964 or
    msgno=4967 or msgno=4968 or msgno=4970) and (date_time between
    '2004-07-18 05:30:20' and '2004-07-19 05:30:19') order by sessid
    (SESSION: 25363)

I get my email and TSM works fine, but Operational Reporting hangs.
Has anyone seen this before, or does anyone have suggestions on how to solve it?

//Henrik




Re: Operational reporting

2004-07-19 Thread Muhammad Sadat
Hi Henrik,
Even though I have never encountered such a problem, it sounds like something
to do with the APIs. Try reinstalling the BA client, whose APIs are also used
when talking to TSM.

Operational Reporting is normally installed on a machine other than the TSM
server; check that that other machine is running properly, and re-apply
service packs if need be!

Others on the list might have better options!

Kind Regards,
Muhammad SaDaT Anwar
Product Specialist
Systems Management & Data Management Products

Info Tech (Pvt) Limited
108, Business Avenue,
Main Shahrah-e-Faisal,
Karachi, Pakistan
Ph: +92-21-111-427-427 Fax: +92-21-4310569
Cell: +92-21-300-8211943






Re: Using multiple client interfaces

2004-07-19 Thread Richard Sims

In the usual multi-homed situation, you would have each port of the quad
Ethernet card on a separate subnet to distribute traffic, and in that case you
would use the TCPServeraddress client option to direct traffic through a
specific subnet, via IP address rather than network hostname.  But since you
indicate that the TSM server has only a single network address, fanning out the
client traffic seems moot.
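
For what it's worth, a minimal sketch of how the usual multi-homed case is
wired up on the client side, assuming (purely hypothetically) the server had
one address per subnet; the stanza names, node names, and addresses here are
made up:

   * dsm.sys on the Solaris client: one stanza per Domino instance,
   * each pointing at a different (hypothetical) server address
   SErvername          domino1
      COMMMethod       TCPip
      TCPServeraddress 10.1.1.102
      NODename         DOMINO1

   SErvername          domino2
      COMMMethod       TCPip
      TCPServeraddress 10.1.2.102
      NODename         DOMINO2

Each scheduler would then select its own stanza, e.g. dsmc sched
-servername=domino1, and so on.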

As you'd expect, this topic has been explored before, and you can see past
discussions at one of the List archive sites
(www.mail-archive.com/[EMAIL PROTECTED]/ or www.adsm.org)
by searching on default route or similar keywords.

Your best course is to talk to your network people about optimal network
configuration and path utilization to achieve what you intend.  This will assure
that you end up with the best arrangement - without causing unexpected loads to
appear on subnets which may be intended for other purposes.

Richard Sims


txnbytelimit lto2

2004-07-19 Thread Joni Moyer
Hey Everyone!

I have been looking into tuning client and server parameters for when I
move to an AIX 5.2 TSM server at 5.2.2.5.  I noticed that in an environment
with LTO2, the client's txnbytelimit should be 2097152.  Is the TSM
Performance and Tuning Guide correct?  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



RES: txnbytelimit lto2

2004-07-19 Thread Paul van Dongen
Hello all, 

   I usually don't use this number, since I have a lot of retries due to
changing files and so on, and I would have a huge amount of re-transmitted
files. However, I've been testing (with LTO1, but I will do it again with
LTO2) and found out that you should not set this to a number less than the
buffer size of the LTO drive. Doing so will produce a buffer flush on the
drive each time you commit the transaction in TSM's database.
   Since I was working with IBM LTO drives (64MB buffer), I determined
that 65536 was the smallest number to use. I got my best results using
TXNBYTELIMIT 131072 (128MB). (TXNBYTELIMIT is specified in KB, so 65536 =
64 MB and 131072 = 128 MB.)
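
For reference, a minimal sketch of where these knobs live; the numbers below
are just the ones discussed in this thread, not a general recommendation:

   * client option file (dsm.opt / dsm.sys); TXNBYTELIMIT is in KB
   TXNBYTELIMIT   131072

   * dsmserv.opt on the server; TXNGROUPMAX caps files per transaction,
   * so it bounds transaction size together with TXNBYTELIMIT
   TXNGROUPMAX    256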



Regards, 

Paul Gondim van Dongen
MCSE
IBM Certified Deployment Professional -- Tivoli Storage Manager V5.2
VANguard - Value Added Network guardians
http://www.vanguard-it.com.br
+55 81 3225-0353





Re: Upgrade TSM 4.1 to TSM 5.2 on AIX 4.3.3 ??

2004-07-19 Thread Lawrence Clark
TSM 5.1 will run on AIX 4.3.3, but I don't believe 5.2 will...



Re: RES: txnbytelimit lto2

2004-07-19 Thread TSM_User
We have found success using:
txnbytelimit  2097152   * Use for LTO, DLT and 9940 drives.

We use:
txnbytelimit  25600     * Use for 9840 and 3590



Re: RES: txnbytelimit lto2

2004-07-19 Thread Joni Moyer
Ok.  Thanks!  It's nice to hear that people are successfully using these
parameters!



Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]






Win2k, tsm client 5.2.2.0, ANR0444W all over the place

2004-07-19 Thread MC Matt Cooper (2838)
Hello,
I am runing TSM 5.1.8.1 server on z/OS.  We have many WIN2K servers
that just had their TSM client code updated to 5.2.2.0. from 5.1.6.x.
Many of them are getting the following messages.  Allong with a lot of
the server being backed up correctly.  I thought I had seen a discussion
on this before but can't seem to find it.  Is this a known problem?  A
one time event?  
 
ANRD SMNODE(19149): ThreadId311 Session 67 for node USCLES101 : Invalid
ANRD filespace for backup group member: 4.
ANR0444W Protocol error on session 67 for node USCLES101 (WinNT) -
ANR0444W out-of-sequence verb (type Data) received.
 
Thanks in Advance  


Re: Thoughts on Monthly Archives

2004-07-19 Thread asr
== In article [EMAIL PROTECTED], Steve Harris [EMAIL PROTECTED] writes:

 There was another management requirement that all production servers be
 backed up in full once per year and that snapshot be kept forever - there
 is no reasoning with this, its one of those stupid mandates that applies to
 the whole of the state government, and if data is not able to be
 categorized, then it must be kept.

Mmm, Arbitrary.  Tastes like BUDGET!

 I think you'll recognise that having a third TSM Server for the yearly
 backup isn't really an option, so an archive mechanism is the only one that
 will work for that.


Actually, I'd disagree that an extra server is a barrier. The reason is that I
am finding it to be very simple to maintain several servers on the same
hardware.  Right now I've got ~10 on one box, and it works quite well.
Though, really, you can use a separate policy domain instead of a separate
server perfectly easily.  But I like the separate server scheme...


So, permit me to spin a yarn for you, which I will assert will solve your
monthly and your yearly-forever problems all in one swell foop, and at a lower
cost in management and consumables than your archive scheme.

Erect on your TSM server a second TSM instance.  I'll call it the ARCHIVE
server.

On the ARCHIVE server, define nodes for those machines getting the
cast-in-stone treatment;  On their management classes, define:

verexists  1200
verdeleted 1200
retextra   nolimit
retonly    nolimit

On each of the client machines, add an ARCHIVE stanza to your dsm.sys.  Then,
run incrementals for each box once a month.  In unix land, I'd say

1 0 1 * * dsmc incr -se=ARCHIVE

in your crontab.
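
For concreteness, a minimal sketch of what that extra stanza might look like,
assuming the ARCHIVE instance listens on its own port on the same host (the
names and port here are hypothetical):

   * extra stanza in dsm.sys for the second server instance
   SErvername          ARCHIVE
      COMMMethod       TCPip
      TCPServeraddress tsmhost.example.edu
      TCPPort          1502
      NODename         MYNODE

The -se=ARCHIVE in the crontab entry above selects this stanza.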


Now, I recognize that this is only retaining data for 100 years, but what the
heck, the pig may learn to sing. ;)

You get all the benefits of the incremental scheme, you separate the
archive/retention database from the recovery database, and you don't waste any
hardware.


There, I assert.  Please pick holes.



- Allen S. Rout


Database tapes

2004-07-19 Thread Mark Heynes
 
Guy's
Can someone tell me the easiest way to copy a database tape and
then delete the record of it from TSM?
 
What I'm after is creating a copy of the database so that we can test
some recovery procedures off site
 but want to be able to just wipe the tape at the end rather than return
it to the TSM server
 
Thanks in advance
 
Mark




Multiplexing to a single tape

2004-07-19 Thread Gill, Geoffrey L.
I was wondering if anyone is multiplexing backups to a single tape.
Recently I attended a Veritas class for our other, new backup system (yes, I
did ask to stay completely with TSM), and they really touted this as what
seemed like a very good means of backup. Unless you have the newer, faster
tape technology I'm not sure I agree with it, but I don't have experience,
since disk pools have always been plentiful. I'm not sure I agree with it
anyway, because I believe restores would be slow, since data would be
scattered throughout the tape. What if multiple restores need the same tape?
Can they run at the same time, and if so, wouldn't that be even worse?



So the question I have is: has anyone tried this functionality with TSM, or
with Veritas for that matter, since I believe there are others on the list
that use it? I am interested in any feedback, so feel free to praise or rail
on it.



Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 854-0975


Re: Thoughts on Monthly Archives

2004-07-19 Thread Andrew Raibeck
Some considerations for long-term archive:

- Much of today's data, as it is used from day to day, exists in some
product-specific format. If you were to retrieve that data, say, 10 years
from now, would you have software capable of reading that data?

- Even if you archive the software, will operating systems 10 years from
now be able to run that software?

- Even if you archive the operating system installation files, will the
hardware 10 years from now be able to install and run that operating
system?

- There is a good case to consider carefully what gets archived and how
you archive it.For instance, maybe for database data, it would make sense
to export that data to some common format, such as tab- or comma-delimited
records, which is very likely to be importable by most software. Likewise,
for image data, consider a format that is common today and likely to be
common tomorrow.

- 10 years from now, the people that need to retrieve the archived data
will probably not be the same people who originally archived the data.
Will your successors know what that data is? Will they know how to get to
it? (Gee, we need to get at the accounts payable database from 10 years
ago... under which node is it archived?) Will they know how to
reconstruct it, and how to use it?

I am by no means an expert in this area, but these are some things to
consider carefully for long-term archives. Note that most of these issues
are not directly related to TSM, but apply regardless of which data
storage tool you use.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


Re: Database tapes

2004-07-19 Thread Coats, Jack
delete vol VOLUMEID discarddata=yes

should handle it. ...
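
(For cutting the copy in the first place, a minimal sketch, assuming a device
class named LTOCLASS, which is hypothetical:

   backup db devclass=LTOCLASS type=full scratch=yes

The resulting volume can then be taken off site for the test.)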



Re: Thoughts on Monthly Archives

2004-07-19 Thread Shannon Bach

For us, it is the beginning of the Sarbanes-Oxley overhaul. I ask those same
questions of people all over my company, and their response?

Well, you (me) had better make sure that the data moves with whatever new
technology comes in!

They don't care if we have software capable of reading this data again. They
just want to be in compliance with Sarbanes-Oxley. And it is starting to look
to me like Sarbanes-Oxley believes in keeping everything, forever.











Re: Multiplexing to a single tape

2004-07-19 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Gill, Geoffrey L.

[sound of hollow laughter]

Veritas doesn't tell you about the sting in the tail of their tape
multiplexing. While it can make for faster backups, particularly with
libraries with a small number of drives, they neglect to tell you that
the de-multiplexing necessary to perform restores makes such restores
considerably slower--as in (sometimes) an order of magnitude slower.

...and after all, I personally don't give a hoot about faster backups;
it is faster *restores* I'm after.

--
Mark Stapleton


Re: Database tapes

2004-07-19 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Mark Heynes
Guy's

Careful. There are some highly capable *women* on this mailing list as
well.

Can someone tell me the easiest way to copy a database tape and
then delete the record of it from TSM?

Just let your scheduled DELETE VOLH command expire the tape, along with
your other database backups.

--
Mark Stapleton


Re: Multiplexing to a single tape

2004-07-19 Thread Coats, Jack
My info is old, but my guess it hasn't changed much since I used to work for
a VAR and we sold both TSM and Veritas NetBackup:

Veritas multiplexes blocks from various backup clients on the tapes when
this is being done.  It works and is very effective.  They basically used a
hacked version of gzip to prepend the node name/sequence number
(effectively) on the data block before it is written to tape.

This is best used when writing directly to tape from multiple clients
backing up at the same time.  If you keep the tape fed and you have enough
bandwidth, you can keep the tape streaming.  For performance you MUST keep
the tapes streaming. (I think I heard that you may be able to backup to disk
now, and then backup to tape, but I never used it, and I may be
mis-remembering too.)

Scaling backups with Veritas has to do with bandwidth, tape streaming speed,
number of concurrent clients you are backing up, and number of tape drives
available to backup to.

TSM does this as a two-step process: back up to disk local to the TSM
machine, then move the files from disk to tape.  This does not 'multiplex'
like Veritas does, but it intermingles files from various clients on the
same tapes.  Since the disk drives are local, and if your TSM server is
configured correctly (I have one with a problem, so ...), you can keep the
tapes streaming by sucking the data from disk.  (Before someone cries foul:
yes, you can use TSM to back up directly to tape, especially for large files,
but I haven't heard of people using this much, and it is an issue if you need
to keep the tapes streaming and there isn't enough bandwidth.)

This means that Veritas requires less disk and better bandwidth from clients
when backups are happening, and TSM can handle small clients, random backup
times, and low-bandwidth situations much better than Veritas.  But TSM does
require the disk space, and a good database.

Veritas has the benefit that if the database 'catalog' is destroyed and you
cannot get the catalog back from a backup, you can rebuild the catalog, but
you have to spin all the tapes, end to end.  (Veritas seems to be an easier
'sell' because it is a full+diff/incremental type of backup rather than TSM's
'incremental forever'.  After 3 years, management here still doesn't have that
down!)

TSM and Veritas have the SAME problem when it comes to restores.  They both
MAY have to spin a number of tapes to perform one (non-trivial) restore.

In general, TSM uses more disk, Veritas uses more tape drives, and TSM must
have some 'downtime' for reclamation.  Database maintenance on both systems
is needed to keep things optimal.

I hope this helps ... Jack



Re: Thoughts on Monthly Archives

2004-07-19 Thread Coats, Jack
I haven't read the SBO (Sarbanes-Oxley) requirements, but from our internal
auditors, it looks like we need to make a good-faith business best effort to
keep data readable for whatever retention period we publicize.

In working with older tape technology in storing archive tapes, we found
that 20% of the tapes were not readable within 5 years. It seems that the
best idea for long-term archives is that they need to be re-read on a regular
basis (annually?) just to make sure they can still be read.

The only time I have heard of a real application that had to have 'everything
forever' was a TSM application where they were storing a local and a remote
copy of documentation (and each version of it) for a nuclear power plant.  In
that case, I think they used a WORM library both locally and remotely.  I
didn't design it and don't know the details, just that it 'had to work'.  It
got expensive, but that is what the federal regulators required.




Re: Thoughts on Monthly Archives

2004-07-19 Thread Andrew Raibeck
 For us, it is the beginning of the Sarbanes-Oxley overhaul.  I ask those
same
 questions to people all over my company and their response?

 Well you (me) had better make sure that the data moves with whatever new

 Technology comes in!

My personal opinion (not necessarily IBM's): this is a flippant
response to a valid concern, unless your responsibilities cover this area
as well.

From a TSM administrative perspective, it is the TSM administrator's
responsibility to ensure that data backed up by TSM can be restored to the
same state it was in at the time it was backed up, plus other duties
related to the backup and management of the data, as assigned.

Being able to convert from one external data format to another is not a
function of TSM, and thus is not naturally a part of administering TSM. In
general I would say that resolving the issues related to long-term archive
of data belong to the owners of the data and the people who administer
that data. After all, they are the experts on that data and are therefore
the best resources for addressing these issues. Of course, in the process
of planning for the archives, the TSM administrator can raise these issues
(as you have apparently done) and contribute to the solution; but I
wouldn't put the sole responsibility on the TSM administrator. Nor was it
my intent to suggest that these were TSM issues per se when I raised them.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.



Re: Thoughts on Monthly Archives

2004-07-19 Thread Andrew Raibeck
 I haven't read the SBO requirements, but from our internal auditors, it
looks
 like we need to have a good business
 best efforts to keep readable for whatever retention period we
publicize.

 In working with older tape technology in storing archive tapes, we found
 that 20% of the tapes were not readable within 5 years. It seems that
the
 best idea for long term archives is they need to be re-read on a regular
 basis (annually?) just to make sure they can be read.

Or refreshed.

Actually when I raised the issue of the data being readable, I wasn't
referring to hardware or media problems (though these are also good
points), but the availability of software that can understand the data.
For example, if you open a .zip file with a plain text editor, the content
will appear as so much garbage. Without a zip program that can make sense
of the data, being able to restore a .zip file doesn't give you anything
useful (at least not without an awful lot of hacking). .zip is a common
format and likely to be available in the future, but what about more
obscure formats, where maybe you archive the files while you use the
product? Suppose you switch to a new product that uses a different format,
then decide 10 years from now you need to retrieve the data that was used
by the old product. Will you still have a copy of that old product around
that will run on current operating systems and hardware? Even if you
archived the software and could in theory restore and run it, do you know
where, amongst all that archive data, the software is? Will your
successors be able to find the data and the software? Will they even know
which software they need to run?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


Re: Multiplexing to a single tape

2004-07-19 Thread Bill Boyer
The new release of NetBackup 5.0 has disk backup capabilities. They call it
a DSSU (Disk Staging Storage Unit). Backups can be directed to the DSSU
and are then de-staged to tape by the media server as time permits. Data is
also cached on the DSSU, just like CACHE=YES on a storage pool. I don't
know what kind of performance you get from this, but you can have more
simultaneous backups running than you have tape drives.

Veritas now also has what they call Synthetic Backups. This is a media
server based process that combines the latest FULL backup with all the
incremental backups (applying deleted and moved files) to create a new FULL
backup without having to actually do a FULL backup. New to 5.0, too.

I'm not endorsing Veritas (I make my living with TSM!)...just adding to the
thread.

Bill Boyer
DSS, Inc.



Re: Thoughts on Monthly Archives

2004-07-19 Thread Mike Bantz
In our case, this is going to be used to back up and archive file servers -
NOT operating systems.

As for how to read/use the data, here's the directory structure:

\\toaster\is\documentation - contains docs, for instance
\\toaster\installs - contains setup files for all apps used in our
environment

I don't know about you, but we're still swimming in 95 and NT CD's, and some
of our hardware would have no problems stepping down to run those operating
systems.



Re: Multiplexing to a single tape

2004-07-19 Thread Dale Jolliff
Speaking as someone who has just moved on from a client that, for some bizarre
reason, thought that moving from TSM to Veritas would be a good move, I can
assure you that you don't want to be part of that train wreck.  Just the
crippling administrivia alone will give you permanent migraines.  God help
you if you need security beyond what DNS can give you. And the multiplexed
backups are only the tip of the iceberg for pain if you intend to properly
duplicate your data for offsite storage.

From what I gathered from Veritas, they have collected a lot of people
who were laid off from TSM/Tivoli/IBM, and they are on the road to making
Veritas look very much like TSM, just naming things differently (DSSU =
diskpool, that sort of thing...)  Well, until you get to the underlying
functionality, anyway. I sat in on a presentation at one point where they
uttered sentences containing the phrases "... just like TSM ...",
"... and just like Tivoli ...", and so forth, a number of times.

I can't for the life of me see where moving backwards in
time/features/capabilities is a good thing, but then again, I'm not a
manager.  Thank goodness.

There is a LOT of pain with moving to Veritas from TSM.  You lose a great
deal of flexibility and reliability.  Going the other way is considerably
easier, in my simple-minded opinion.

All of that said, if you are running a relatively small environment and have
time to do a complete shutdown for every backup cycle, maybe it is the
right tool.
I just wouldn't put it in a sizeable production environment without a
readily updated resume.

They do have some good ideas, don't get me wrong -- they are fighting a
morass of legacy code and that may be what's hampering forward progress.
I just don't see it as a technically-ready-for-prime-time product yet.

OK, off my soap box.
All of the above is just my personal opinion, and not supported by any
factual or hearsay evidence,
yourmileagemayvary,Iamnotalawyer,andIneversaidit.
So there.







Re: Thoughts on Monthly Archives

2004-07-19 Thread Kauffman, Tom
As someone who has migrated from Burroughs Medium Systems (B3900) to
Honeywell (DPS-8) to IBM (4381, 3090, 9672 - MVS, MVS-XA, MVS-ESA) to
IBM RS/6000, I have to agree with Andrew. Any long-term archiving
mandated by the business MUST include archiving of the data in an
UNLOADED format - either CSV or fixed-field ASCII (no binary or
packed fields). In addition, the human-readable database schema and
subschemas and the source code to the unload and reload programs need to
be archived.

The cost for this is not trivial. The unload/reload programs should be
tested out. Just about any non-trivial database will take long enough to
flat-file that you'll need to run the process from a copy of the
original. And once it's been unloaded, you need to manage the results. 

I would be inclined to agree with Dwight -- this isn't really a TSM
process. I'd think about using tar or pax (primarily because the source
to both is available) to write the archive media (tape, dvd, whatever)
and figure on copying the result about every six months to a year.
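
As a minimal sketch of that approach (the device and path names here are
hypothetical), on AIX it might look like:

   # write the unloaded flat files to tape, then read the archive back
   # end to end to verify it is still readable
   tar -cvf /dev/rmt0 /archive/unload/2004-07
   tar -tvf /dev/rmt0 > /dev/null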

Short of that -- and I have yet to get MY corporation to buy into this
-- I have seven archive management classes: one_year, two_year, and on
to seven_year, with retentions amounting to one month longer than the
class name suggests. And an internal memo I dust off from time to time
about why this is a dumb idea. Current archiving activities haven't
really caused database issues in TSM - yet. But the requirement for more
disk space is one of the 'dumb idea' entries.

Good Luck -

Tom Kauffman
NIBCO, Inc



LANFree backups in MSCS environment

2004-07-19 Thread Bill Boyer
Anyone out there using LANFree backups with clusters?

The environment is SQL Server in an active-active configuration. It's already
configured with the B/A client on each node and the TDP for SQL scheduler
service as a cluster resource. If I add the storage agent to each node, up
and running but not as a cluster resource, will the TDP agent run LANFree
using the storage agent on the node that it is currently active on?

Bill Boyer
An Optimist is just a pessimist with no job experience.  - Scott Adams


Re: LANFree backups in MSCS environment

2004-07-19 Thread Paul van Dongen
That is the way I usually configure it. Works fine.
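
The moving parts, in sketch form (server names, addresses and passwords
here are illustrative, not a definitive procedure):

On the TSM server, define the storage agent for each node:

   define server STA_NODE1 serverpassword=secret hladdress=node1.corp.com lladdress=1500

On each cluster node, point the local storage agent at the TSM server:

   dsmsta setstorageserver myname=STA_NODE1 mypassword=secret myhladdress=node1.corp.com servername=TSMSRV serverpassword=secret hladdress=tsmsrv.corp.com lladdress=1500

And in the dsm.opt used by the TDP scheduler service:

   ENABLELANFREE     YES
   LANFREECOMMMETHOD TCPIP
   LANFREETCPPORT    1500

Because each node runs its own (non-clustered) storage agent, whichever
node the TDP resource is active on finds a local agent to go LANfree
through.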


Paul Gondim van Dongen
MCSE
IBM Certified Deployment Professional -- Tivoli Storage Manager V5.2
VANguard - Value Added Network guardians
http://www.vanguard-it.com.br
+55 81 3225-0353


-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]
Sent: Monday, July 19, 2004 4:11 PM
To: [EMAIL PROTECTED]
Subject: LANFree backups in MSCS environment


Anyone out there using LANFree backups with clusters?

The environment is SQLServer in an ACTIVE-ACTIVE configuration. It's
already configured with the B/A client on each node and the TDP SQL
agent scheduler service as a cluster resource. If I add the storage
agent to each node, up and running not as a cluster resource... will
the TDP agent run LANfree using the storage agent on the node that it is
currently ACTIVE on?

Bill Boyer
An Optimist is just a pessimist with no job experience.  - Scott Adams


Re: Multiplexing to a single tape

2004-07-19 Thread Coats, Jack
IMHO, if you must move from TSM to Veritas NetBackup (typically for a
bureaucratic or economic reason, not a technical one), just start using
NetBackup and stop using TSM.  Then keep TSM running on a 'warm' server,
and test restores and migrate data from tape to tape occasionally. You can
pretty much get rid of the disk pools, but you will need to keep TSM
'alive' and ready to restore until you are ready to call it 'dead' for
your use.
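
If you do keep TSM 'warm' like this, the periodic test can be as simple as
the following sketch (the paths are illustrative):

   dsmc query backup "/home/*" -subdir=yes
   dsmc restore "/home/somefile" /tmp/verify/

That is, confirm the data is still indexed, then spot-check a real restore
to an alternate location.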

I hope you never have to do this.  But this is how we migrated from Veritas
BackupEXEC to TSM, and given our 60 day retention window, it was
acceptable.  This is also how an IBM VAR is suggesting we migrate to a new
TSM server (from Windows to AIX), but that seems odd to me.

... Jack


Re: Database tapes

2004-07-19 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Coats, Jack
delete vol VOLUMEID discarddata=yes

Er...no. A DELETE VOLUME command will not get rid of database backups.
Only a DELETE VOLHISTORY will.
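
For example, to drop database backup entries older than a week (and with
them TSM's hold on those tapes; the date here is illustrative):

   delete volhistory type=dbbackup todate=today-7

If you use DRM, database backup volumes are normally expired via SET
DRMDBBACKUPEXPIREDAYS instead.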

--
Mark Stapleton


Re: Thoughts on Monthly Archives

2004-07-19 Thread Steve Harris
Thanks for your input Andy,  yes these are all valid points.

The requirement that we have is to be able to go back to the time of a decision and 
have available all pertinent information so that the same decision could be made 
again.  I don't know who the optimist was who wrote the requirement, but somehow he 
got it approved by government, and I daresay it will take 10 years before the practical
difficulties cause the requirement to change.

My approach would be to specify archiving functionality in EVERY new application that
is installed, with a plain-text unload method such as XML. Then, there needs to be an
archive process attached to every old application as it is retired.

The issue here is not so much to hold every record, but to hold every record *once*
rather than multiple times.  I understand that this function is mandated to be in all
new systems by 2006.  I'm having trouble finding new employment in my current
specialities (AIX and TSM - it's a very small pond where I am) so I'm thinking of
getting ahead of the game and becoming an archiving consultant.  It will certainly be
a world wide growth sector for the next few years. Every Queensland state government
department will need to be doing a project in this area in the next year or two.

Regards

Steve.


 [EMAIL PROTECTED] 20/07/2004 2:25:00 
Some considerations for long-term archive:

- Much of today's data, as it is used from day to day, exists in some
product-specific format. If you were to retrieve that data, say, 10 years
from now, would you have software capable of reading that data?

- Even if you archive the software, will operating systems 10 years from
now be able to run that software?

- Even if you archive the operating system installation files, will the
hardware 10 years from now be able to install and run that operating
system?

- There is a good case to consider carefully what gets archived and how
you archive it. For instance, maybe for database data, it would make sense
to export that data to some common format, such as tab- or comma-delimited
records, which is very likely to be importable by most software. Likewise,
for image data, consider a format that is common today and likely to be
common tomorrow.

- 10 years from now, the people that need to retrieve the archived data
will probably not be the same people who originally archived the data.
Will your successors know what that data is? Will they know how to get to
it? (Gee, we need to get at the accounts payable database from 10 years
ago... under which node is it archived?) Will they know how to
reconstruct it, and how to use it?

I am by no means an expert in this area, but these are some things to
consider carefully for long-term archives. Note that most of these issues
are not directly related to TSM, but apply regardless of which data
storage tool you use.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED] 

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.





TSM DR vs Veritas IDR

2004-07-19 Thread CORP Rick Willmore
Hello All,
I wanted some input concerning TSM's DR module and its capabilities.  I used BMR many 
years ago as a DR solution and was planning on using it with TSM.  I am currently 
working with a Veritas Backup Exec guy who is touting the IDR agent that Veritas has 
and was wondering what my options are for a DR scenario.  I would like to be able to 
throw a floppy/CD in and restore a server to some specific point in a DR scenario.  I 
am currently using the DR module as supplied by TSM but feel that this isn't quite the 
solution I am looking for when restoring a windows box.  Any input would be 
appreciated.

R.


Re: TSM DR vs Veritas IDR

2004-07-19 Thread Muhammad Sadat
Hi J,
TSM's DR module is primarily concerned with restoring the TSM server itself
in case it crashes.
For client bare-metal restore, TSM is certified with the Cristie BMR
solution (www.cristie.co.uk), which provides the same features as Veritas
IDR.
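
To illustrate the difference: what the DR module produces is a recovery
plan for the TSM server, along the lines of this sketch (the plan path is
illustrative):

   set drmplanprefix /drm/plans/
   prepare
   query drmedia * wherestate=mountable

Nothing in that plan rebuilds a Windows client from bare metal, which is
where a BMR product comes in.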

Kind Regards,
Muhammad SaDaT Anwar
Product Specialist
Systems Management & Data Management Products

Info Tech (Pvt) Limited
108, Business Avenue,
Main Shahrah-e-Faisal,
Karachi, Pakistan
Ph: +92-21-111-427-427 Fax: +92-21-4310569
Cell: +92-21-300-8211943




 From: CORP Rick Willmore [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Date: 07/20/2004 05:02 AM
 To: [EMAIL PROTECTED]
 Subject: TSM DR vs Veritas IDR
 Please respond to: ADSM: Dist Stor Manager [EMAIL PROTECTED]

Hello All,
I wanted some input concerning TSM's DR module and its capabilities.  I
used BMR many years ago as a DR solution and was planning on using it with
TSM.  I am currently working with a Veritas Backup Exec guy who is touting
the IDR agent that Veritas has and was wondering what my options are for a
DR scenario.  I would like to be able to throw a floppy/CD in and restore a
server to some specific point in a DR scenario.  I am currently using the
DR module as supplied by TSM but feel that this isn't quite the solution I
am looking for when restoring a windows box.  Any input would be
appreciated.

R.