Re: opinion on AIT vs LTO and 3570 tape technology?

2002-05-26 Thread Don France

Nicely put, Gianluca!  I agree that LTO is (nearly) a no-brainer here.

The part about using HSM or not -- that depends on your customer's perspective
(and wallet).  I've been working with a couple of clients who have similar (or
worse!) retention needs for some of their data; we've about resolved to use
multiple TSM servers once a TSM database gets so large that it starts to
hinder server recovery or expiration/migration/reclamation processing -- so,
after one year's worth of data has accumulated, export/import the node (and
its data) to a restore/retrieve-only TSM server, maybe even on the same box.
The argument for HSM depends on whether they really want to pay for two
years' worth of online disk; if they really want the data back as fast as
always-spinning disk can provide it, then all the points about how fast data
can be gotten back from LTO are moot!  Notwithstanding this roundabout
argument for HSM, LTO is *the* emerging, cost-effective way to store large
volumes of data: its performance is between DLT and 3590, its capacity is
much greater than both, and it is available from HP, Dell, etc. (though I
like IBM's the best, at least until the technology is more mature).
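
For reference, that export/import hand-off might look roughly like this from
the admin command line (a sketch only; the node, device class, and volume
names below are made up):

  /* on the production server: export the node and all its data to tape */
  export node BIGNODE filedata=all devclass=LTOCLASS scratch=yes

  /* on the restore/retrieve-only server: import from the export volumes */
  import node BIGNODE filedata=all devclass=LTOCLASS volumenames=VOL001,VOL002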

Good luck,

Don France
Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E. --
www.pacepros.com)
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gianluca Mariani1
Sent: Friday, May 24, 2002 11:15 AM
To: [EMAIL PROTECTED]
Subject: Re: opinion on AIT vs LTO and 3570 tape technology?


Hi Lisa,
The main points in the AIT vs LTO contest, to me, are:

1. AIT is a proprietary format developed for what was then the niche market
of digital media. It is true that it has faster access times than LTO; that
is, though, just about the only advantage it holds over LTO. AIT-2
cartridges have 50GB native capacity compared to 100GB native for LTO;
AIT-2 can go up to 130GB compressed while LTO can reach 200GB.
AIT-2 has faster access times because the cartridge is smaller, so, on
average, the head has to traverse a shorter tape length than LTO to get
to the first byte of data; but from then on the contest is over, as LTO can
sustain transfer rates of 15MB/s in native mode and 30MB/s for compressed
data while AIT-2 runs, respectively, at 6 and 15.6MB/s.
What this means is that when you are transferring big sequential files, as
seems to be your case, LTO will beat the pants off AIT for overall
throughput; an analogy could be 3570 vs 3590: 3570 will get to the data
before 3590, on average, and then lose out on transfer speed. If you're
talking about start/stop work and small file transfers then access times are
important; otherwise access time is much less of an issue, and even in that
situation LTO has a performance advantage that is quite impressive. Anyhow,
no one beats 3570's capabilities for start/stop access situations.
Generation 2 LTO is under test at the moment and will be out in a few
months with 200GB native media and a native transfer rate of 30MB/s, or
around that mark.

2. I don't know of any AIT automated library that compares to the 3584 LTO
library in capacity and footprint; you get up to 248TB of native capacity in
the 3584, and you can start out with a base frame with up to 12 drives and up
to 28TB of native capacity. AIT libraries, if I remember correctly, cannot go
beyond a few TB (4, I think) and a few drives. If money is a major
consideration and you have a homogeneous environment, the 3583 would still
outpace AIT and cost a lot less than the 3584.

3. LTO is an open standard; AIT is proprietary. What this means is that no
one company can control LTO's roadmap and force customers' choices. LTO has
a set roadmap for the next 4-5 years, and if you don't like IBM tape you
just go out and buy HP or STK or whatever and keep using your media. With
AIT you do what Sony tells you to.

4. LTO is SAN-ready. LTO drives and libraries have Fibre Channel attachment
and can be put straight into a Storage Area Network. Maybe with Gigabit
Ethernet, as seems to be your case, this is not an issue, but in general it's
an important point. TSM can drive these libraries and move data over the SAN,
with a benefit for LAN traffic (OK, not always, as we all know... :-)). AIT
and 3570 are out of the picture here.


hope this helps.

Cordiali saluti
Gianluca Mariani
Tivoli TSM GRT EMEA
Via Sciangai 53, Roma
 phones: +39(0)659664598
         +393351270554 (mobile)
[EMAIL PROTECTED]



From: Lisa Cabanas [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: 05/24/2002 03:15 PM
Subject: opinion on AIT vs LTO and 3570 tape technology?

Re: changeing bytes to gigabytes

2002-05-26 Thread Don France

BUT WAIT... have you not seen 4.2.2 and 5.1.x? Both have broken summary
table info; specifically, the BYTES column is (mostly, but not always) ZERO! I
am still researching the other columns, they may be FUBAR'ed also. I am
told there is an APAR open for this -- IC33455 -- anyone know when it will
get fixed?!? (For capacity planning & workload monitoring, this is the
single BEST resource we've used in a long time, since the old SMF days!)
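
For reference, a quick check like the following should show whether you are
hitting it (a sketch only; add a date predicate if your summary table is
large):

  select start_time, entity, bytes from summary where activity='BACKUP' and bytes=0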

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, May 22, 2002 7:12 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


Select entity, cast(bytes/1024/1024/1024 as decimal(8,2)) as Gigabytes  
from summary
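
Since the original question was about daily migration from disk, a variant
along these lines may be closer to the mark (a sketch only -- I believe the
server's SQL accepts DATE(), SUM() and GROUP BY here, but verify at your
level):

  select date(start_time) as day, entity, cast(sum(bytes)/1024/1024/1024 as decimal(8,2)) as gigabytes from summary where activity='MIGRATION' group by date(start_time), entity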

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Blair, Georgia [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 11:52 AM
To: [EMAIL PROTECTED]
Subject: changeing bytes to gigabytes


I am using the summary table to monitor how much data migrates from disk on
a daily basis. What is the easiest way to convert the amount to gigabytes
instead of bytes? Or does someone have a particular select statement for
this?

Thanks in advance
[EMAIL PROTECTED]



Re: Use of TSM disk storage pools volumes by clients

2002-05-26 Thread Zlatko Krastev

Hello,

I see no answer to this on the list, so here is one.
TSM uses transactions; in other words, it groups files until they reach
some limit and processes the whole bunch at once. The limits are the size of
a transaction in KB (controlled by the TXNBytelimit option in the UNIX
dsm.sys or Windows dsm.opt) and the number of files (controlled by the
TXNGroupmax option in dsmserv.opt).
Every transaction goes to one volume and the next transaction goes to the
next volume in round-robin fashion. If you have more client sessions (be it
many nodes or a few nodes with many threads) than volumes, at least two will
go to the same volume. For volumes of type DISK this is not a problem; that
is why they are random-access volumes. Requests are performed immediately by
TSM and are queued by the OS device driver (and may even be reordered by SCSI
disks or RAID controllers).
For sequential volumes (FILE or TAPE) things are as you fear - if ten nodes
try to back up to tape but you have only five drives, five sessions will wait
for resources until one of the other five sessions completes.
If you have a large disk you can define a single volume of type DISK. It will
handle multiple simultaneous sessions even when it is the only volume in a
storage pool. Creating several volumes on the same spindle would not help and
may even decrease performance, because TSM would try to write to them in
parallel, forcing the disk head to go back and forth. You would gain a tiny
improvement in transfer time (microseconds) at the price of greater seek
time (milliseconds)!!
If you instead create a filesystem and define a storage pool of type FILE
over this large disk, then set maxcapacity smaller, maxscratch higher and
maxnummp higher to avoid congestion for mount points (each file serves as a
virtual tape). So for a FILE device class you should ensure the load is
spread more widely.
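
As a rough illustration of the FILE route (names, sizes and counts below are
made up -- treat it as a sketch, not a recipe):

  /* device class of type FILE over the big filesystem */
  define devclass bigfile devtype=file directory=/tsmpool maxcapacity=2048m mountlimit=8
  /* storage pool with plenty of scratch volumes so sessions are spread */
  define stgpool filepool bigfile maxscratch=40
  /* let a multi-threaded node use more than one mount point */
  update node mynode maxnummp=2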
Hope this helps.

Zlatko Krastev
IT Consultant




Please respond to [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Use of TSM disk storage pools volumes by clients

Greetings,
Sorry if this is a newbie question, but I have not been able to find
the answer in the manuals.
Say a TSM server has a disk storage pool with 5 volumes.  When a
given client is backing up to a disk storage pool, will all his backup
data go to the same disk storage pool volume?  Or will it go to one
volume only until that volume fills up, then go on to another volume?
And what if there are 10 clients backing up to 5 volumes?  Do 5
clients get exclusive access to the 5 available volumes (like they would
to tape) while the other 5 wait, or do they all get to write to the
volumes?
Another question that depends on the answers to that one: if
you have a big disk, say a 73GB disk you intend to use for a disk storage
pool, is it better to make it one big storage pool volume, or several
smaller storage pool volumes, like 7 volumes of 10GB each?  If
each client got a different disk storage pool volume, then you would
think this would perform very slowly, since you would have to seek all
over the disk as those 7 clients each wrote to separate volumes on the
same physical disk.  On the other hand, if you made it one 73GB volume,
it might perform very slowly if only one client could write to it, and
the others had to wait.

Please reply via email if you have any insight into these questions, and
I will post a summary to the list.

Thanks in advance,

John Schneider
Lowery Systems, Inc.



Concept question

2002-05-26 Thread Jean-Baptiste Nothomb

Hi,

If I take a full backup on Friday, will ADSM on the following days back up
only the files that were modified since then (if copy mode is set to
modified)? Now, assuming that this data expires after 7 days, will ADSM take
a backup of the whole data set again afterwards?

Regards,
--Jean-Baptiste




_
Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp.



Re: changeing bytes to gigabytes

2002-05-26 Thread Bill Boyer

We just went up to 4.2.2.2 and it's still broken. Actually, that BROKE it for
us. We were at 4.1.2.5 before that. I have a PMR open and the tech said: "Per
my research, this problem matches APAR IC31132 for AIX, and IC31296 for
Windows. Per the README file for TSM 4.2.2, these APARs were supposed to be
fixed at this level. As such, I will need to escalate this for further
determination by our L2 group."

It's better than SMF...I don't need SAS and it's not RECFM=VBS! Just a nice
SQL query from Excel does the trick.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Don France
Sent: Friday, May 24, 2002 9:39 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


BUT WAIT... have you not seen 4.2.2 and 5.1.x? Both have broken summary
table info; specifically, the BYTES column is (mostly, but not always) ZERO! I
am still researching the other columns, they may be FUBAR'ed also. I am
told there is an APAR open for this -- IC33455 -- anyone know when it will
get fixed?!? (For capacity planning & workload monitoring, this is the
single BEST resource we've used in a long time, since the old SMF days!)

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, May 22, 2002 7:12 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


Select entity, cast(bytes/1024/1024/1024 as decimal(8,2)) as Gigabytes  
from summary

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Blair, Georgia [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 11:52 AM
To: [EMAIL PROTECTED]
Subject: changeing bytes to gigabytes


I am using the summary table to monitor how much data migrates from disk on
a daily basis. What is the easiest way to convert the amount to gigabytes
instead of bytes? Or does someone have a particular select statement for
this?

Thanks in advance
[EMAIL PROTECTED]



IBM Ultrium, W2k Plug-n-Pray, TSM SAN

2002-05-26 Thread Zlatko Krastev

Hello all,

I think I should share my experience and ask for experience and opinions.
The environment (we are installing for a customer):
IBM 3583 with 3 drives + SAN Data Gateway Module connected to IBM 2109
(i.e. Brocade) switch
TSM server v4.2.1.15 (or may become 4.2.2) on Windows 2000 (it's a small
shop)
3x MgSysSAN on AIX 4.3.3 (rest is LAN)

According to Tivoli flash 2, we could have problems only if there are at
least two Windows-based TSM servers or Storage Agents. We have only one, so
there ought to be no problem.
One of the drives will be used by an IBM iSeries server, so the library was
zoned through the SDG module to allow TSM SAN access only to drives 2 & 3
from one FC port. Drive 1 was left to the iSeries through the other FC port
of the SDG module. Library sharing with the iSeries is not a concern.
Here comes the problem. After reconfiguration of the SDG, the Windows box
lost the first drive (the iSeries one), drive 2 (formerly \\.\TAPE02) became
\\.\TAPE01, and drive 3 (formerly \\.\TAPE03) became \\.\TAPE02. All this was
due to Windoze wizdom and happened on the TSM server. The drives were empty
at that moment, so the server ended up with a non-working library (all later
mount attempts failed due to element mismatch and the drives went offline).

Now the questions I am asking myself and hoping for your help with.
Mount retention on the drives is not zero. If the drive renumbering happens
during a write operation on drive 2, with drive 3 holding a tape because of
mount retention, the next block written will go to \\.\TAPE02 -- but that
might now be drive 3. Tivoli's temporary resolution in flash 2 is to use the
SCSI Reserve command.
1. Will drive 3 be RESERVED when it is idle with a cartridge loaded, or not?
2. Which piece of software will issue the Reserve and Release SCSI commands -
TSM or the Ultrium driver?
3. Wouldn't Windows plug-n-pray reset the SCSI bus, thus unlocking the
device?
I have already scheduled some additional tests to perform myself in the
customer's TSM SAN zone. For most of you this might be very deep technically,
but I am afraid of the potential data corruption/loss this might lead to. Any
opinions would be highly appreciated.

Zlatko Krastev
IT Consultant



Re: Parm field length for format command????

2002-05-26 Thread Bill Boyer

There is a limit on the number of bytes you can put for a PARM= in the JCL.
Try putting the filenames in a sequential dataset in the order you want. One
DSN per line. Then in the PARM field use the DD: specification instead of
listing all the dsnames. I don't have access to the mainframe right now, but
this is how I have it set up for our disaster recovery job streams.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Bo Nielsen
Sent: Saturday, May 25, 2002 5:04 PM
To: [EMAIL PROTECTED]
Subject: Parm field length for format command


Hi,

I hope someone can help me.

I have unloaded my DB on an OS/390 server, version 4.1.5, and now I am
trying to initialize the volumes,
but I get the message:

2 IEF642I EXCESSIVE PARAMETER LENGTH IN THE PARM FIELD

My PARM field look like:
PARM=('/LOADFORMAT 2 XSYS.ADSMVSAM.RLOG1 XSYS.ADSMVSAM.RLOG 3',
' XSYS.ADSMVSAM.DB5 XSYS.ADSMVSAM.DB1 XSYS.ADSMVSAM.DB')

And now I can't restore the DB, because I have formatted one log and one DB volume.


 Regards
 Bo Nielsen
 Phone: 43 86 46 71
 COOP data (internal post center): 6230
 IT-Driftscenter: [EMAIL PROTECTED]

If you don't have the time to do it right the first time, where will you
find the time to do it again?



Re: Concept question

2002-05-26 Thread Bill Boyer

Only the EXTRA versions will be expired after 7 days, if that's your
management class retention. TSM will never-ever-EVER expire the ACTIVE
version of a file. The RETEXTRA parameter of your management class tells
TSM how long to keep the INACTIVE (extra) copies of a file... not ALL copies
of a file. The ACTIVE copy will always be there.
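
For illustration, the knobs in question live in the backup copy group;
something like the following (domain, policy set and class names are just
placeholders) keeps inactive versions for 7 days while the active version is
kept regardless:

  update copygroup standard standard standard type=backup retextra=7 retonly=30
  validate policyset standard standard
  activate policyset standard standard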

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jean-Baptiste Nothomb
Sent: Sunday, May 26, 2002 6:24 AM
To: [EMAIL PROTECTED]
Subject: Concept question


Hi,

If I take a full backup on Friday, will ADSM on the following days back up
only the files that were modified since then (if copy mode is set to
modified)? Now, assuming that this data expires after 7 days, will ADSM take
a backup of the whole data set again afterwards?

Regards,
--Jean-Baptiste




_
Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp.



Re: q access question

2002-05-26 Thread Robert Ouzen

  Petur

Not quite. I need to know, for all my nodenames, the access permissions I
gave for another nodename to restore from.
The command is: set access backup * nodename

To check the permissions I run the command q access on each nodename.

I am wondering if there is a global command to do this for all my nodenames!!

Regards Robert


-Original Message-
From: Pétur Eyþórsson
To: [EMAIL PROTECTED]
Sent: 23/05/2002 16:19
Subject: Re: q access question

Is this what you are looking for?

select NODE_NAME, LASTACC_TIME from NODES

hope this helps. :)

Kvedja/Regards
Petur Eythorsson
Taeknimadur/Technician
IBM Certified Specialist - AIX
Tivoli Storage Manager Certified Professional
Microsoft Certified System Engineer

[EMAIL PROTECTED]

 Nyherji Hf   Tel: +354-569-7700
 Borgartun 37, 105 Iceland
 URL: http://www.nyherji.is


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Robert Ouzen
Sent: 23. maí 2002 06:05
To: [EMAIL PROTECTED]
Subject: q access question


Hi

Did anyone create a script to get, for all nodenames, a list of the access
granted to their backups? For now I do it manually for each node:  q access

Thanks Regards

Robert Ouzen



Compare Networker to TSM

2002-05-26 Thread Rupp Thomas (Illwerke)

Hi TSM-ers,

in the near future I'll have to compare Legato's NetWorker with TSM.
*   Has anyone written a comparison of the two current products?
*   If not, is anyone interested in such a comparison?
*   If yes, I would write it in English, otherwise in German.
Do you think http://www.autovault.org/discus/index.html is a good
place to save such documents - so they can be updated by everyone?

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251




--
This e-mail has been checked for viruses.

Vorarlberger Illwerke AG
--



Re: changeing bytes to gigabytes

2002-05-26 Thread Dave Canan

Please call your Tivoli support rep back and ask about the status of APAR
IC33455. This is the correct APAR for this problem. I have included the
text below.

ERROR DESCRIPTION:
  Found on TSM Version 4.2.2.0
  Other Versions Affected: TSM Server V5.1
  OSs Affected: All (recreated on WIN, AIX, Solaris)
  Happens with all TSM client levels (tested 4.x thru 5.1 clients)
  Impact: medium
  When backing up to a V4.2.2.0 or V5.1.0 TSM server the
  summary table is not being updated to show the correct amount
  of bytes received.  In older versions of the server the
  BYTES column would display the amount of data received; however, with
  versions 5.1.0 and 4.2.2.0 the bytes received is
  reflected as 0 bytes.  This is a problem for customers that
  verify backups with scripts querying the SUMMARY table:
  ..
  select * from SUMMARY where ACTIVITY = 'BACKUP' 
  ..
  Examples:
Server Version 4, Release 2, Level 1.11
  Total number of bytes transferred: 8.55 MB

  tsm: SOCRATES> select * from SUMMARY where ACTIVITY = 'BACKUP'
 START_TIME: 2002-04-29 14:42:49.00
   END_TIME: 2002-04-29 14:44:06.00
   ACTIVITY: BACKUP
 NUMBER: 3
 ENTITY: CHIPSHOT
   COMMMETH: Tcp/Ip
  SCHEDULE_NAME:
   EXAMINED: 54
   AFFECTED: 54
 FAILED: 0
  BYTES: 0
   IDLE: 76
 MEDIAW: 0
  PROCESSES: 1
 SUCCESSFUL: YES
VOLUME_NAME:
 DRIVE_NAME:
   LIBRARY_NAME:
   LAST_USE:

Server Version 4, Release 2, Level 2.0
  Total number of bytes transferred: 6.73 MB
  tsm: DUMPTRUCK> select * from SUMMARY where ACTIVITY='BACKUP'
  *** Older Backup when TSM server was at V4.2.1.0 **

 START_TIME: 2002-04-15 15:18:57.00
   END_TIME: 2002-04-15 15:20:55.00
   ACTIVITY: BACKUP
 NUMBER: 5
 ENTITY: COSMO
   COMMMETH: Tcp/Ip
  SCHEDULE_NAME:
   EXAMINED: 459
   AFFECTED: 459
 FAILED: 0
  BYTES: 10195527
   IDLE: 0
 MEDIAW: 0
  PROCESSES: 1
 SUCCESSFUL: YES
VOLUME_NAME:
 DRIVE_NAME:
   LIBRARY_NAME:
   LAST_USE:
   *After Upgrade to V4.2.2***

 START_TIME: 2002-04-29 14:59:42.00
   END_TIME: 2002-04-29 15:03:25.00
   ACTIVITY: BACKUP
 NUMBER: 2
 ENTITY: CHIPSHOT
   COMMMETH: Tcp/Ip
  SCHEDULE_NAME:
   EXAMINED: 74
   AFFECTED: 74
 FAILED: 0
  BYTES: 0
   IDLE: 222
 MEDIAW: 0
  PROCESSES: 1
 SUCCESSFUL: YES
VOLUME_NAME:
 DRIVE_NAME:
   LIBRARY_NAME:
   LAST_USE:
  LOCAL FIX:
  Please apply appropriate PTF when available.



At 09:38 AM 5/26/2002 -0400, you wrote:
We just went up to 4.2.2.2 and it's still broken. Actually, that BROKE it for
us. We were at 4.1.2.5 before that. I have a PMR open and the tech said: "Per
my research, this problem matches APAR IC31132 for AIX, and IC31296 for
Windows. Per the README file for TSM 4.2.2, these APARs were supposed to be
fixed at this level. As such, I will need to escalate this for further
determination by our L2 group."

It's better than SMF...I don't need SAS and it's not RECFM=VBS! Just a nice
SQL query from Excel does the trick.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Don France
Sent: Friday, May 24, 2002 9:39 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


BUT WAIT... have you not seen 4.2.2 and 5.1.x? Both have broken summary
table info; specifically, the BYTES column is (mostly, but not always) ZERO! I
am still researching the other columns, they may be FUBAR'ed also. I am
told there is an APAR open for this -- IC33455 -- anyone know when it will
get fixed?!? (For capacity planning & workload monitoring, this is the
single BEST resource we've used in a long time, since the old SMF days!)

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, May 22, 2002 7:12 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


Select entity, cast(bytes/1024/1024/1024 as decimal(8,2)) as Gigabytes  
from summary

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Blair, Georgia [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 11:52 AM
To: [EMAIL PROTECTED]
Subject: changeing bytes to gigabytes


I am using the summary table to monitor how much data migrates from disk on
a daily basis. What is the easiest way to convert the amount to gigabytes
instead of bytes? Or does someone have a particular select statement for
this?

Thanks in advance
[EMAIL PROTECTED]

Money is not the root of all evil - full backups are.



Re: NT Cluster & TSM

2002-05-26 Thread Zlatko Krastev

Can you be sure that nothing has changed since you set it up? It would
be interesting to see the current dsm.opt of both the local & cluster nodes.
The way the schedules are started may also shed some light. Also check the
file backup dates on those unnecessary filespaces - maybe someone manually
invoked dsmc with the local option file and this is not done by the regular
schedules.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Re: NT Cluster & TSM

The way I have set this up is to use the CLUSTER name in TSM for the
shared storage, and the two local computer names for the local drives.

nodename
Cluster  - controls the F,G,H,I,J drives (CLUSTERNODE YES option)
Server1  - controls the C and D drives (DOMAIN C:, DOMAIN D:, ...)
Server2  - controls the C and D drives (DOMAIN C:, DOMAIN D:, ...)

I have two schedules on each node - one for local backups and one for cluster
backups. I created a group in Cluster Administrator to control the backup
service for the shared drives, and keep the local node backup services set to
automatic.
Joe
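
As a rough sketch of what the option files for a setup like this could
contain (node names and drive letters are illustrative only; check the
Windows client manual for the CLUSTERNODE specifics at your level):

  * dsm.opt for a local node (one per physical server)
  NODENAME        SERVER1
  DOMAIN          C: D:
  PASSWORDACCESS  GENERATE

  * dsm.opt for the cluster node (kept on a shared drive so it follows the group)
  NODENAME        CLUSTER
  CLUSTERNODE     YES
  DOMAIN          F: G: H: I: J:
  PASSWORDACCESS  GENERATE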

 -Original Message-
 From: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
 Sent: Fri 24/05/2002 4:55 PM
 To: [EMAIL PROTECTED]
 Cc:
 Subject: Re: NT Cluster  TSM



 The only explanation is that server1 somehow is getting DOMAIN ALL-LOCAL.
 Check carefully which dsm.opt is used when the cluster instance goes to
 server1, and whether there are any DOMAIN options in an option set
 associated with that node.
 If DOMAIN ALL-LOCAL somehow gets into the local node, then it has two
 current copies of the shared files. But if cluster resources are backed up
 as server1 instead of cluster, the only way to distinguish which version is
 the current one is checking the date or very-very carefully following
 failovers.

 Zlatko Krastev
 IT Consultant




 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 cc:

 Subject: NT Cluster & TSM

 Hello,
 I'd like to ask a question about NT clusters with TSM.
 I have got an NT cluster and the backup is made with TSM. On the TSM
 server side I have defined two nodes, one for each real server: server1
 and server2. There are another two nodes defined for the shared disks:
 disk1 (disk f:) and disk2 (disk g:).
 There are four file spaces:
 FileSpace Name  Node Name
 \\server1\c$    server1
 \\server2\c$    server2
 \\cluster\f$    disk1
 \\cluster\g$    disk2
 The DOMAIN options in the opt files are:
 Server1  DOMAIN C:
 Server2  DOMAIN C:
 disk1    DOMAIN F:
 disk2    DOMAIN G:
 From the backup point of view it works fine; the problem arises when the
 shared disks are moved from one real server to the other.
 In this situation the shared disks are considered local disks of the
 real servers and a full backup of those disks is made, and the file
 spaces are then as follows:
 FileSpace Name  Node Name
 \\server1\c$    server1
 \\server1\f$    server1
 \\server1\g$    server1
 \\server2\c$    server2
 \\cluster\f$    disk1
 \\cluster\g$    disk2

 Is the creation of the two new file spaces normal? Is this situation OK?

 Another problem arises when there is a need to restore a file: where is
 the backup of a file residing on disk f:? In \\server1\f$ or in
 \\cluster\f$, or in both? Which one is the most recent? And what about
 versions other than the last one?

 Can anyone help me in clarifying those questions?

 Angel Antsn
 E-Mail : [EMAIL PROTECTED]



big data pool volumes

2002-05-26 Thread Burak Demircan

Hi,
Last Friday I created a large-file-enabled journaled file system on AIX 4.3.3
and put a single 10 GB file on it. It was my backup disk pool volume. But I
received the following messages from TSM and AIX. Any idea?


Actlog from TSM 4.2.2.0

24-05-2002 21:22:38  ANRD dsrtrv.c(538): ThreadId62 Error on volume 
                      /tsmpoolfs/nomirrordata1.dsm: execRc=-1, summaryRc=-1. 
24-05-2002 22:05:28  ANRD blkdisk.c(1496): ThreadId41 Error -1 reading 
                      from disk /tsmpoolfs/nomirrordata1.dsm, errno=5 (There is 
                      an input or output error.). 



errpt from AIX 4.3.3.0_09ML 

21F54B38   0525000902 P H hdisk2         DISK OPERATION ERROR 
21F54B38   0525000902 P H hdisk2         DISK OPERATION ERROR 
21F54B38   0525000902 P H hdisk2         DISK OPERATION ERROR