aix/3590/san - High Availability Failover + load balancing

2003-03-26 Thread Richard L. Rhodes
From IBM's 3590 web site:

The new AIX(R) high-availability failover mechanism in the AIX tape
device driver enables multiple redundant paths in a Storage Area
Network (SAN) to 3590 Model E and Model H Tape Drives having Fibre
Channel attachment.

and

The AIX tape device driver also offers dynamic load balancing for
3590 Fibre Channel drives used in an AIX SAN environment.

We will soon be upgrading our current TSM server with scsi attached
3590E drives to FC attached 3590H drives.  I've been trying to find
out more information on just what this path failover and load
balancing is and how it works - but I'm not finding much.

Setup:  We will have 14 drives and 2 FC switches - each drive
attached to both switches.  From the aix system (aix5.1) we will have
3 fc connections to each switch (6 total).

q1)  What will the rmt devices look like?  How many will there be?

Will I have one rmt device for each HBA/drive?  If the tape drives
were disk luns and there was no zoning, I would expect 84 aix devices
(6 hba's times 14 drives - each tape drive showing up 6 times).

q2)  In this environment, what is the primary path and what is the
failover path?  How is the primary picked?

q3)  Does load balancing use all paths?

In other words, will only 3 of the HBA's be used while the other 3 sit
idle unless a switch failure occurs?  Are all 6 paths to any
particular tape drive usable if the system decides to use them?

q4)  Is all this a function of AIX, the atape driver, or both?

q5)  Are there any documents that describe this more, like a redbook
or tsm manual . . . . I didn't find much.

Thanks

Richard Rhodes
[EMAIL PROTECTED]


Re: Backup of 300 GB Data within 12 Hours

2002-08-26 Thread Richard L. Rhodes

On 26 Aug 2002 at 9:00, Kauffman, Tom wrote:
 I do a 650+ GB SAP/R3 database in 2 hours 40 minutes. TSM and SAP on
 RS/6000 systems (660-6M1 now, was S7A two weeks ago). All disk is IBM
 ESS and the backups run over two dedicated Gigabit ethernet networks
 to four IBM LTO tape drives. No compression on the client. The restore
 runs in about 5 hours. The files average about 10 MB each, as TSM sees
 them (1.6 GB objects, six-way multiplexing).

I'm a little confused by your configuration. . . .

650gb database in 10mb files?  That would be 65000 files.  Did you
mean 10gb/file?

1.6gb objects?  Doesn't tsm see the individual files?  Are you doing
raw backups of the filesystems?

six-way multiplexing?  I didn't think tsm supported multiplexing
(multiple datastreams concurrently onto one tape), or is this
something else? Is this something brbackup does?


Sorry for the questions, but I'm interested in your specifics, since
we will be backing up some very large SAP databases to LTO in the near
future.

Thanks

Rick





Re: Compare TSM/Legato and Veritas

2002-08-02 Thread Richard L. Rhodes

On 2 Aug 2002 at 4:16, Rafael Mendez wrote:

 Hi,
 In my opinion (I have used Legato and TSM not Veritas) each storage
 product has its advantages and disadvantages.

I have also used both . . .

Legato is easy to
 install and configure

Very easy!  The first time I downloaded the demo to try it out, I had
it running, doing client backups, with full jukebox support within 1
hour.  I was very impressed.

IBM could sure take a few lessons from Legato on how to
install/configure jukeboxes.

 but when you have very new devices, Legato is
 not the best. Also, i do not trust in Legato support.

My experience is that their support was the very worst I've ever
experienced.  This also includes extremely buggy/untested sftw
releases.

Now to be fair, this was several years ago.  My understanding is that
things are somewhat better today.  There is a Legato mailing list -
check there.


 TSM is in my opinion the most powerful storage software but is hard to
 understand how it works.  You need to study very much and sometimes
 TSM require too much effort.

TSM is also very resource intensive, requiring lots of processor
power, fast disk access for the db, a very large database, and big
disk staging pools.


 Legato could be installed over a small network(for that reason you
 have three different versions of Legato Server) but I do not think is
 a good idea to install TSM if you have small network.

I found Legato to be good up to around 50-100 clients.  Around 100
clients, manually balancing the client schedules became a mess.

TSM is deep and complex, but for this you get a lot.  It can do just
about anything you want and do it well - you just have to give it the
resources it wants.  The worst mistake to make with TSM is to not
give it the resources it needs to run well.

 So, this is just my point of view and I am concious that many people
 could disagree with me.


My opinion . . . .

We ran Legato for 4-5 years.  It served us well during that time,
even with all the problems we had with buggy sftw and bad support.
Our decision to switch to TSM was NOT because of these issues.  Even
with all the problems it wouldn't have been possible to justify the
dollars to switch.  Ultimately, the switch was justified because we
outgrew Legato.  It simply couldn't do some of the things we needed.

We are VERY happy with TSM.  It's anything but perfect, but it does
what we need better than anything else.

Rick



Re: AIX Oracle Snapshots

2002-07-16 Thread Richard L. Rhodes

On 15 Jul 2002 at 21:50, Seay, Paul wrote:
 From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
 Sent: Monday, July 15, 2002 11:51 AM
 To: [EMAIL PROTECTED]
 Subject: Re: AIX Oracle Snapshots


 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf
 Of Markus Veit  has anyone tried to backup an AIX oracle server using
 image and  snapshot backup? We want to backup an Oracle server
 without using RMAN. Can anyone think of a way to backup Oracle
 without setting up RMAN?

Without RMAN, Oracle has 2 basic methods of doing backups:

1)  Logical backups:   export cmd

This consists of a logical dump of the data.  You end up
with a file that contains sql commands to recreate
your tablespaces, tables, indexes, procedures, etc.  It
is transportable between platforms - an export file from
an aix system can be used to create that db on an hp or
nt box.

2)  Hot backups

Your db must be in archive log mode, where oracle creates
a copy of the redo logs when a log switch occurs.  You
walk through your tablespaces, putting them into backup
mode, use an os cmd to backup the files for that tablespace,
take the tablespace out of backup mode, and do the same with
the other tablespaces.  This gives an image backup of the db
that is taken live.  The key is the archive logs - you must
have the archive logs created during the hot backup.
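
A bare-bones sketch of that loop for a single tablespace (the
tablespace name, datafile paths, and target directory are made up;
sqlplus is shown, though older releases would do this through svrmgrl):

  #!/bin/ksh
  # put the tablespace into backup mode
  echo "alter tablespace users begin backup;" | sqlplus -s "/ as sysdba"

  # copy/compress its datafiles while it is in backup mode
  for f in /u02/oradata/PROD/users01.dbf /u02/oradata/PROD/users02.dbf
  do
    compress -c $f > /backup/PROD/$(basename $f).Z
  done

  # take it back out of backup mode and force a log switch so the
  # archive logs that cover the backup get cut
  echo "alter tablespace users end backup;"  | sqlplus -s "/ as sysdba"
  echo "alter system switch logfile;"        | sqlplus -s "/ as sysdba"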

Our site uses both exports and hot backups.  We seldom, if ever,
shutdown a db to do a cold backup.  We've created our own scripts
that automatically perform these tasks, and are scheduled via
cron.  Our hot backup script copies/compresses the db files
to another disk area.  TSM then just runs a normal backup of
that disk area - to tsm, our oracle backup is just a bunch of
files on disk.  We have other scripts that make sure all
archive logs get backed up.

Backup sftw vendors (ibm, legato, veritas, etc) all make interface
agents for Oracle, but all the ones I'm aware of are interfaces
to rman, not the old-style hot backup.

Note: rman can perform hot backups - but they work differently than
the hot backup method I've described above.

At the least, you need to hit the Oracle manuals to learn about
Oracle backup/recovery, and especially hot backups and exports - you
need both.  I strongly suggest the Oracle backup+recovery class after
the basic dba class.  Also, a backup is only good if it works on a
restore - backups must be tested!!



Re: Veritas/Legato/ArcServ

2002-04-18 Thread Richard L. Rhodes

Legato will pack fulls and incrementals into the same tapes, or,
separate tapes, depending upon how you set up storage pools.  I'd say
the waste of tape comes from having to perform periodic
full/differential backups, depending upon your schedule.  If you mix
multiple retention periods on the same tape, or similar retention
periods that are spread across a long time (several months), you can
get one little old file keeping a tape from being recycled.  This
used to really bug me - one little old file holding up a 25gb tape
from being reusable.  There is no way to move a backup from one tape
to another to free up the tape.

On the other hand, you don't need a huge disk pool, a batch window to
run migration/reclamation/expiration, or enough library capacity to
hold all your local tapes online.  You can split storage pools across
multiple libraries, the catalog is much lighter weight than tsm's, and
the automated/automatic catalog backup system requires nothing more
than the normal client level backup of the Legato server.

I can't believe I just did that - I actually said something kind of
good about Legato.  I was sooo happy to get rid of Legato.  It was
buggy sftw and had the absolute worst tech support I've ever dealt
with.

rick

On 17 Apr 2002 at 10:19, Gerald Wichmann wrote:
 Not specifically TSM question but more of a question to better
 understand how to discuss pro's/cons to other competing products.

 Since my background is all TSM, I'm curious on how the other
 competitors handle media. Is my assumption correct that they waste a
 lot of tape space? As far as I understand it, all these products do
 traditional full/incremental type backups where each full and
 incremental uses a tape. Thus Server 1 would suck up 7 tapes in a
 week (1 full, 6 incrementals).

 Is this true? Or can these products actually put 2 full's on a single
 tape? Or multiple incrementals on a single tape?




TSM 5.1

2002-03-12 Thread Richard L. Rhodes

We had a meeting with IBM last week where they described some of the
new features of 5.1 - coming within a few weeks.

1)  multi session restore
2)  simultaneous writes to copy pools (more than one)
3)  a move nodedata command
4)  lan-free backup/restore (I thought it already had this)
5)  hpux lan-free



Veritas and IBM

2002-02-12 Thread Richard L. Rhodes

IBM and Veritas are getting quite close lately.  They are jointly
working on a project to port Veritas sftw to AIX.  This includes
Veritas's filesystem, volume manager, special DB filesystem,
clustering sftw, etc.  My understanding is that IBM has been pushing
this, and that a joint team of some of the best from both companies
are working on the port.  It's supposed to be available in the not
too distant future.  Last week I attended a Veritas DR seminar.  The
speakers always included AIX when they talked about supported
platforms - coming very soon.

The way I heard it, IBM wants AIX to be much more of a mainstream
Unix system.  They see Veritas support as a big step in this
direction, even if it competes with some of their own products.  For
companies that have standardized on Veritas sftw, AIX simply isn't an
option today.

I wouldn't worry about Veritas and support for BMR on AIX, but
rather about a resource issue as Veritas expands BMR - being spread too
thin to provide good support for existing platforms.

Rick

On 12 Feb 2002 at 13:41, Bill Boyer wrote:
 I have a client that is interested in BMR, but since Veritas bought
 them I don't feel comfortable pitching it. Don't know where it'll go.

 Bill Boyer
 DSS, Inc.





Re: low bandwitdth and big files

2002-01-31 Thread Richard L. Rhodes

On 30 Jan 2002 at 16:07, Wholey, Joseph (TGA\MLOL) wrote:
 With regard to compressing data twice, I disagree.  There's
something very wrong with it.  That's why it is strongly
recommended not to do it. (not just with TSM, but with all
data)  Some data that  goes thru multiple compression routines
can blow up to 2x the size the file started out as.

This is absolutely true, but it's not the whole story.  Modern compression
systems (programs and chips) can detect the condition of compressing
uncompressable data.

While it's been years, I remember pkzip detecting this condition
and just storing the file - giving up any compression on
the file.  The unix compress utility is NOT smart.  It will happily
compress uncompressable data, making the file bigger.
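
You can see the difference for yourself (this assumes a /dev/urandom
device, or use any already-compressed file as the incompressible input):

  # build ~10MB of incompressible data, then compress it two ways
  dd if=/dev/urandom of=/tmp/rand.dat bs=65536 count=160 2>/dev/null
  gzip -c /tmp/rand.dat > /tmp/rand.gz       # smart: stores, grows only slightly
  compress -cf /tmp/rand.dat > /tmp/rand.Z   # not smart: output larger than input
  ls -l /tmp/rand.dat /tmp/rand.gz /tmp/rand.Z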

Sony uses an IBM compression chip in AIT tape drives.  I once found an
IBM web site that went into much detail on various IBM compression
chips - this chip was included.  It stated that the chip could
detect when it was attempting to compress uncompressable data, and
would stop compression.  What is hard to find is a statement from
tape drive vendors that their hdwr compression is this smart.

So, at least for AIT drives, set them to hdwr compression and forget
about what you send them.

Rick



Re: How reliable are your 3590 K media

2002-01-10 Thread Richard L. Rhodes

We have several thousand 3590K tapes.  In general we are very happy
with them.  Like you, we have experienced some problems with a few new
tapes, but nowhere near your 10% rate - more like less than 1%.
Once past the initial use, they are rock solid.

We just received another shipment of tapes.  These tapes were
purchased straight from IBM.  They were willing to beat (by a hair)
the best price from any other vendor, and deliver them several weeks
faster.  It will be interesting to see if these tapes have the same
problems.

One suggestion - I believe there is a diagnostic utility built into
the 3590 drives.  You can put a tape in and have the drive check the
entire tape.  You could check your tapes this way, although it would
be a big job.  Also, with new tapes you might want to increase the
cleaning frequency on the drives until all the tapes have been used.

Rick

On 10 Jan 2002 at 11:20, PETER GRIFFIN wrote:

 We have recently installed two 3494 libraries utilising 3590 K media.

 I am experiencing media failures (I/O errors) for about 1 in 10 media
 when initializing with label libv. Some of the media does successfully
 initialize on the second or third attempt but this does not fill me with
 confidence.



Re: FW: Urgent: Cost of TDP for SAP R/3

2002-01-09 Thread Richard L. Rhodes

Thanks for all the HELP!

Y'all are wonderful . . . . .

Rick



Urgent: Cost of TDP for SAP R/3

2002-01-08 Thread Richard L. Rhodes

I need a quick/rough idea of the cost of
TDP for SAP R/3.  This is a rush request from my
boss for a possible SAP test system.

I don't know how it's priced, but the Oracle db
being backed up will be around 500gb-1tb, running
on an S7A.

We've got a call in to our local Tivoli rep, but as usual, he's
never there and doesn't return calls/emails for days!


Thanks

Richard Rhodes
p: 330-384-4904
e: [EMAIL PROTECTED]



Re: Memory Tuning For AIX

2002-01-02 Thread Richard L. Rhodes

I'd trust more experienced help, but here
are my thoughts . . .

pi + po indicate you are constantly paging.  This probably
means you need more memory, but could be caused by aggressive
aix filesystem caching.  If so, vmtune can help limit
filesystem caching.
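
On 4.3 that tuning looks roughly like this (needs the bos.adt.samples
fileset installed; the percentages are just a starting point, not a
recommendation):

  # cap JFS file caching at 10-40% of real memory so it can't squeeze
  # out computational pages (the defaults are roughly 20-80%)
  /usr/samples/kernel/vmtune -p 10 -P 40

  # then watch whether pi/po settle down
  vmstat 5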

Your cpu is hardly being hit.  us + sy is only
around 25%. The majority of your cpu time
is wa - waiting for I/O - processes blocked on
disk I/O.

My suggestion is more disks for more disk i/o bandwidth and
more memory.  An iostat will tell you which disks/filesystems
need to be spread across more disk drives.

Rick

On 2 Jan 2002 at 13:33, Denis L'Huillier wrote:
 Hello-
 I am having a memory problem.
 I am runnning AIX 4.3.2 on a H50.  TSM is the only application
 running on the server.  When I run vmstat I get the following...

 kthr     memory             page              faults        cpu
 ----- ----------- ------------------------ ------------ -----------
  r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
  0  0 136233   202   0   3   2  99  210   0  52  240  66 17 30 16 37
  1  3 136234   342   0  38  11 3727 4204   0 1322 6136 4665 12 11  3 74
  2  3 136236   160   0  44  12 3728 5326   0 1324 5844 4644 10 11  2 76
  1  3 136237   295   0  17  19 4113 5725   0 1305 7174 5091 13 10  3 74
  0  3 136237   629   0  19   9 4240 7442   0 1316 7200 5082 14 13  5 68

 Does this look normal to anyone?  From my limited understanding of vmstat
 it looks like:
 CPU utilization is maxed out
 I have blocked processes (the b column)
 The scan rate (sr) column is very high.
 The server has 1Gig of memory.  I'm really not sure how to tell how much is
 allocated to
 the OS and how much is allocated to TSM.  Does anyone have any ideas or
 pointers is
 tuning memory for AIX?  Do I need a memory upgrade?
 I started checking this because my server response is horrible.
 Thanks.
 Denis




Re: Point system has me very confused

2001-12-11 Thread Richard L. Rhodes

I agree.

The last time we needed to order more client licenses it was
impossible to get someone from Tivoli to even talk to us.  We wanted
to spend money, but they wouldn't respond to emails or voice mails.
I found this amazing!  I had to get our local IBM storage rep to
contact Tivoli before they would call us back.

Just amazing . . . . . .

Rick

On 11 Dec 2001 at 12:54, Prather, Wanda wrote:
 And customers are STILL
 getting different answers, depending on which reseller or Tivoli office they
 talk to (assuming, of course, a Tivoli office will talk to them at all.)
 And resellers get different answers, depending on which Tivoli office they
 talk to (assuming, of course, that a Tivoli office will talk to them at
 all).



Re: running multiple servers on one AIX machine

2001-12-10 Thread Richard L. Rhodes

Just what all do you copy in order to set up an independent tsm
instance?

Thanks

Rick

On 10 Dec 2001 at 9:13, Davidson, Becky wrote:
 I keep a copy of everything regarding each machine in it's own directory
 tree including the code.  Then I have a rc.adsmserv2 and rc.adsmserv4 to
 start each.  This helps me to upgrade one server and not the other.
 Hope this helps
 Becky
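
In other words, a second instance boils down to something like this
(directory, port, and volume sizes below are made up):

  # each instance gets its own directory tree, options file and TCP port
  export DSMSERV_DIR=/usr/tivoli/tsm/server2/bin
  export DSMSERV_CONFIG=$DSMSERV_DIR/dsmserv.opt   # unique TCPPORT in here

  # each instance also needs its own db, log and storage pool volumes
  dsmfmt -m -db  /tsm2/db/vol1  512
  dsmfmt -m -log /tsm2/log/vol1 128
  dsmserv format 1 /tsm2/log/vol1 1 /tsm2/db/vol1

  # then start it from its own rc script, a la rc.adsmserv2
  cd $DSMSERV_DIR && nohup ./dsmserv quiet &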



TSM Platform Change

2001-11-27 Thread Richard L. Rhodes

What is involved in changing a TSM server from one platform (HPUX) to
another (AIX)?

I'm involved in a project that needs to move a TSM server from hp to
ibm.  I know the db needs to be exported/imported, unloaded/loaded, or
backed up/restored (I'm not sure how, but I've heard it can be done).
The bigger question is the tapes.  After a tsm db is brought to a
new platform, can that tsm instance read the tapes that were created
on the other platform?

Thanks

Rick



Library spanning

2001-11-26 Thread Richard L. Rhodes

On 25 Nov 2001 at 19:12, Seay, Paul wrote:
 The issue that I see is that a storage pool in the future will need to span
 more than one library simply because of the number of needed devices and
 exploiting the flexibility of SANs.  I hope TSM Development is working on
 this.

I just had this conversation with our local IBM storage folks.  I can
see us running out of space in our library sometime in the not too
distant future.  This would be a major mess since only full storage
pools can be moved to a second library.

I ran Legato with 2 libraries for many years - it worked very well,
although you have the expected problem of too much work occurring in
one library while drives might be sitting idle in the other library.
This gets into the administrative mess of trying to balance what
tapes are in what library so that all tape drives are utilized.

While the local guys didn't have any answers, they did indicate that
Storage Tank would/could address this problem.  I don't know what
Storage Tank is going to be, but it sounds like having to spend more
money to get around a TSM limitation.



Re: 3590 Fiber drives

2001-11-26 Thread Richard L. Rhodes

On 26 Nov 2001 at 12:12, Gill, Geoffrey L. wrote:
 Can anyone tell me if there are any issues with the 50 micron cables as
 opposed to the 62.5 micron cables and 3590 drives. Taking into consideration
 these will be on a SAN soon but for now direct connect. We only need to run
 about 50 ft between computer and drives.

I can't speak directly to 3590 drives, but FC supports both 50/125
and 62.5/125 cables.  The difference is in the distance the cables
can be run.  I believe the distances supported are 500m for 50/125
and 300m for 62.5/125.  I have fiber channel connections running on
both, although only to disk subsystems (emc and ibm).  I think both
also work with gigabit ethernet.



STK Powerhorn Silos

2001-11-14 Thread Richard L. Rhodes

How well does TSM work with STK silos?

- issues
- problems

Thanks

Rick



Re: Server Database Performance

2001-11-08 Thread Richard L. Rhodes

On 8 Nov 2001, at 10:29, Seay, Paul wrote:
 Based on numbers that I have heard, you are looking at as much as a day.  It
 depends on your disk subsystem speed.  Just be cause it is EMC does not mean
 it is fast.  It must be laid out properly and not all EMC disk is created
 equal.

Absolutely!

You must trace your disks as you see them at the OS back to the Symm
and onto hypervolumes.  It's possible that what the OS sees as
separate disks could reside on just a few physical drives.  Even if
your db is spread across emc physical drives, are other
applications/systems using hypervolumes on the same physical
drives?  In other words, you could be in contention with other systems.
I assume your db grew to its current size.  It may need to be
spread across more physical drives.

EMC sales people sell their systems as if you never have to be
concerned about how the backend is laid out - they are wrong.

Rick



High %sys time

2001-11-08 Thread Richard L. Rhodes

When our TSM server gets very high cpu utilization, what is high
is the %sys.

           %usr  %sys  %wio  %idle

00:43:28  22  56  11  12
00:44:28  22  57  10  11
00:45:28  22  55  10  12

03:37:30  12  50  19  19
03:38:30  17  50  15  18
03:39:30  14  50  15  21

07:26:48  16  58  18   7
07:27:48  18  58  17   7
07:28:48  18  63  14   5

I don't ever see the %usr above the mid-twenty percent range.

q) What is TSM doing that causes so much system time?

At first I thought it happened while moving large amounts of data
from the disk pool to the tape pool, but on closer look this is true
some of the time and not at other times, even at the same disk read
rate.

Thanks

Rick



Re: 3494+VTS+TSM

2001-10-31 Thread Richard L. Rhodes

We share a 3494 with a mainframe vts and an aix/rs6k based tsm server.
The vts originally had 6 frame units with 6 drives (base end unit,
high availability frame at the other end, and 4 storage frames with
drives).  To support TSM we've now added 8 more frames and 12 drives.

It took the IBM folks a little work to get the library manager set up
to allow dual connections, but once done it's worked very well.
Since each system has its own dedicated tape drives, performance isn't
a problem.  Our vts doesn't access the robot for mounts very often,
so robot access contention isn't a problem.

The only problem we have had is coordinating outages with the
mainframe folks when we needed the new frames added.  Adding frames
to a 3494 is NOT a fast process - the last set of 4 frames we installed
for TSM required an 8 hour outage.

From our perspective - it's been a big win with few problems.

Rick



On 30 Oct 2001, at 16:59, Mahesh Tailor wrote:
 Hello, all!

 I am not a 3494 expert, so please forgive my ignorance in the way I
 refer to this platform.

 We have a 3494 VTS system/library installed that is currently being
 used by our OS/390 mainframe for its backups.  I would like to know if
 it is possible for me to have TSM utilize this system for its storage
 also?  Is there any good documentation about how to do this?

 Also, I spoke with our Tivoli TSM marketing specialist about this and
 was told, in fairly certain terms, that this was a **BAD** idea because
 of two reasons:

 1) BAD performance
 2) the same thing that happened to the 3466 (i.e. Tivoli does not
 recognize it or support it as a true TSM platform) IS GOING to happen
 to the 3494 system, since it is also somewhat based on the same
 concept, and that if we implement this, we were almost sure to lose
 support from Tivoli in the near future.

 BTW, one reason I ask is that I have a 3466 and am having a hell of a
 time getting support for this platform ever since Tivoli inherited
 ADSM/TSM, and given the strong word of caution from a TSM person at
 Tivoli, I am reluctant to proceed with the 3494 integration and
 upgrade.

 Can anyone tell me whether any of this is true and lay my fears to
 rest?

 In case you don't feel comfortable emailing the list, please send me a
 message directly and I will ensure that your email stays confidential.

 TIA

 Mahesh Tailor
 WAN/NetView/TSM Administrator
 Carilion Health System
 Voice: 540-224-3929
 Fax: 540-224-3954




no-query restore

2001-10-24 Thread Richard L. Rhodes

The TSM manuals talk about a no-query restore where the
TSM server does the work of figuring out what files to
restore and the best order to use.  The documentation for
no-query is only under the restore cmd - nothing is
said about the gui.

Will the Web gui perform a no-query?

Thanks

Rick



Re: Netware restores and backup sets

2001-10-23 Thread Richard L. Rhodes

On 22 Oct 2001, at 23:07, Mark Stapleton wrote:
 Is this sound reasoning?

 No.

 Thoughts?

 Use some logic. A restore reads a file off of a tape and sends it
 along the SCSI/fiber connection to the TSM server, which in turn pipes
 it through the I/O bus (getting it from SCSI to ethernet/token ring),
 sends it out a network connection, through an indeterminate number of
 pieces of network hardware, gets it to its intended target. The target
 box then sends the file through *its* I/O bus to IDE/SCSI, and finally
 onto disk.


That's the exact point I'm interested in - just how fast can the TSM
server access all the active files for a client off of tape.  This
takes out all the stuff you mentioned.  In this case, the TSM server
can read all the files for this client in 3 hours, but an actual
restore takes 8 hours.  This means the bottleneck is not the TSM
server - it's the network and/or the client system.  In fact, the TSM
server may be able to access all the files even faster - the backup
set writes may have slowed it down.  If our Netware admins are
unhappy with the restore performance, the problem is on their side,
not the TSM server side.

Rick



Netware restores and backup sets

2001-10-22 Thread Richard L. Rhodes

We did a test full restore of a netware server.  The restore was
around 40gb and took 8 hours.  The Netware admins were disappointed
with this time.

To try and compare this time with something else, we  created a
backup set for the same server - it took 3 hours.

My take is that a backup set creation is the equivalent of a full
restore.  If the backup set can be created in 3 hours, then a full
server restore is possible in 3 hours - if you can get the data to
the server (network throughput) and the netware server can accept the
data (netware server write throughput).

Is this sound reasoning?  Thoughts?

Thanks

Rick



Re: Netware restores and backup sets

2001-10-22 Thread Richard L. Rhodes

On 22 Oct 2001, at 15:48, John Naylor wrote:
 Because you produced your backupset in 3 hours does not mean you will
 get it back in 3 hours. The backupset speed is determined by how fast your host
 server
 can pull the entries from the database and write to its tapes  These speeds are
 likely to be different from how fast your netware server can pull the data in
 and write to its drives.

Exactly what I was thinking when we created the backup set.  I was
looking for a way to take the netware server out of the picture and
get some idea of how fast the TSM server can serve up the files.  The
backup set writes (restores) to a very fast tape (3590E in my case).
In doing so it copies all current versions of files for the client,
like a full restore to the client system itself (our netware server).

If the TSM server can create a backup set in 3 hours, then that's how
fast the tsm server can serve up the files.  It in no way implies how
fast a client can get the file or write the files to disk.  So, if an
actual restore takes 8 hours and a backup set creation takes only 3
hours, then the difference is outside the TSM server - the network or
the client netware server.

In other words, I want to tell our netware admins that the bottleneck
is on their side or in the network.

What would be neat would be the ability to create a backup set to
/dev/null, and truly see how fast TSM can serve up the files.
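
The timing test itself is a single admin command - something like the
following (the node and device class names are placeholders):

  # creation time comes straight out of the activity log afterwards
  dsmadmc -id=admin -password=xxxxx \
    "generate backupset NETWARE1 speedtest devclass=3590class scratch=yes"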

Thanks

Rick



Re: Monitor for ADSM - wish list

2001-09-25 Thread Richard L. Rhodes

Since converting from Legato to TSM, I find the ONLY thing I miss
is the really slick legato monitoring interface.  Oh how I
wish tsm had one, kind of like topas or monitor do for AIX.  I find
it tedious at best to try and piece together a picture of what
the server is doing from several tsm commands and os commands.

Wish List:

each tape drive listed with:
  - status: online, offline
  - drive name, current throughput (read/write) or activity
    (mounting, unmounting, etc)
  - process or session using the drive
each disk pool with:
  - processes or sessions currently accessing
  - % utilized
  - current throughput (read/write)
Log:
  - % utilized
  - sessions or processes currently using
cpu utilization of the computer (user, sys, wio, id)
cpu utilization of the tsm instance
network interface:
  - packets/s, throughput
outstanding messages

In other words, a simple glance would tell you the complete status
of the server, drives, pools, performance.

I can only wish . . . . .
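
The closest I can get today is to string the usual commands together -
a rough sketch, assuming an admin id set aside for scripted queries:

  #!/bin/ksh
  # poor man's TSM status screen - refresh every 30 seconds
  ID="-id=monitor -password=xxxxx"
  while true
  do
    clear; date
    dsmadmc $ID "query mount"     # what each drive is doing
    dsmadmc $ID "query session"   # client sessions and bytes moved so far
    dsmadmc $ID "query process"   # migration/reclamation/expiration/etc.
    dsmadmc $ID "query log"       # recovery log utilization
    vmstat 1 2 | tail -1          # cpu/io picture of the box itself
    sleep 30
  done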

Rick



On 25 Sep 2001, at 14:33, Joshua S. Bassi wrote:

 Query process

 Query session

 Query actlog

 dsmadmc -consolemode

 Plus there used to be some other monitoring tools out there, but I don't
 think they are being developed on any longer.



Re: Etherchannel and EBU backups.

2001-09-24 Thread Richard L. Rhodes

On 24 Sep 2001, at 15:46, Zlatko Krastev/ACIT wrote:
 Eric,

 if you set-up an EtherChannel on both sides (!) it is like connecting
 through 400 Mb/s interface (virtual one which distributes load on real ones
 - ent0, ent1, ...). This is done at interface (OSI Level 2) and has to be
 transparent to your sessions.

Doesn't an etherchannel present one IP number and mac address to the
network?  If so, then I don't see how this could work.  What leg a
packet takes on an etherchannel is determined by an XOR on the low
order bits of the source and destination mac addresses.  Between 2
computers on the same switch these addresses will always be the same,
and therefore the traffic will always use the same etherchannel leg.
It will not round-robin your packets between the legs.
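
A two-line illustration, using made-up low-order MAC bytes (a 4-link
channel typically hashes on the low 2 bits):

  # the leg is picked by XORing low-order bits of the two MAC addresses
  src=58; dst=124                      # 0x3a and 0x7c
  echo "leg: $(( (src ^ dst) & 3 ))"   # same pair -> same leg, every packet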



Etherchannel

2001-09-24 Thread Richard L. Rhodes

There's lots of misunderstanding about Etherchannel.  Check out this
Cisco web page for a high-level overview.

http://www.cisco.com/warp/public/cc/techno/media/lan/ether/channel/prodlit/faste_an.htm



Re: Is there a way to delete all files that were backed up lastnight for certain clients

2001-09-19 Thread Richard L. Rhodes

On 19 Sep 2001, at 10:35, David Longo wrote:
 That question has come up here about viruses.  I don't think there is an
 easy way.  Maybe someone has some experience/ideas.

Let's suppose that instead of just a few clients, you wanted to do
this for all clients - or at least were willing to accept losing all
backups from the previous night . . . . .

Could you restore the db to the previous backup, and if in
roll-forward mode, roll forward to before the backups started?

I've often wondered what would happen in this situation because
what's on tape wouldn't match what is in the db.  This would be
especially true if migration, expiration, and reclamation have run.

This situation is always true if your db backups are done in normal
mode (NOT using roll-forward).  In this case, any db restore will
always leave the db out of sync with what's on tape.

I've never had a good explanation of this situation, and the
documentation is completely silent.

Rick



Re: Defrag Database

2001-09-17 Thread Richard L. Rhodes

The question is: what is a reorg?

I would think moving the db to a new volume would just move the db
blocks - not changing their contents.  It would collapse the unused
db blocks, but I think this would be a different result than an
unload/load, which would recreate all db objects.

Kind of like the difference between moving a logical volume to a
different disk and defragging a filesystem.  On a logical volume move
the physical partitions are moved, but their contents are left alone.
To defrag the filesystem requires tinkering with the contents of the
physical partitions.
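
If anyone wants to try it, the volume shuffle itself is just a handful
of commands (sizes and paths are made up):

  # format and add a new db volume, then drop the old one -
  # TSM moves the used pages off the volume being deleted
  dsmfmt -m -db /tsm/db/dbvol_new 2048
  dsmadmc -id=admin -password=xxxxx "define dbvolume /tsm/db/dbvol_new"
  dsmadmc -id=admin -password=xxxxx "extend db 2048"
  dsmadmc -id=admin -password=xxxxx "delete dbvolume /tsm/db/dbvol_old"
  dsmadmc -id=admin -password=xxxxx "query dbvolume"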

Just a guess  . . . .

Rick

On 17 Sep 2001, at 9:42, Alex Paschal wrote:

 I just had a thought.  If you defined new dbvols and deleted the previous
 dbvols, that would move the db data over to the new volume.  Could that
 possibly defrag the db at the same time?  I would expect so because the
 manual says you can cancel the delete dbvol process, but some of the data
 may have moved.  It makes sense to me that this implies it moves db objects,
 not just vast tracts of raw dbspace.  Any thoughts?

 (Andy? Are you out there? Any chance on developer feedback on this one?)

 Alex

 -Original Message-
 From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 17, 2001 2:06 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Defrag Database


 Hi Maurice!
 An online defrag utility is on all our wish lists for some time!
 The only way to defrag your database is a offline dump and reload... Note
 that this is quite a lengthy process!
 Kindest regards,
 Eric van Loon
 KLM Royal Dutch Airlines


 -Original Message-
 From: Maurice van 't Loo [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 17, 2001 09:22
 To: [EMAIL PROTECTED]
 Subject: Defrag Database


 Hi,

 *** --- Q DB F=D

 Available Space (MB): 3.320
 Assigned Capacity (MB): 3.320
 Maximum Extension (MB): 0
 Maximum Reduction (MB): 20
 Pct Util: 43,2
 Max. Pct Util: 52,0
 Physical Volumes: 13

 Because the database is around 50% utililized, i want to reduce the
 database, the by the spacetrigger made volumes can be deleted.

 Is there a way to defrag the database?

 Tia,
Maurice






DNS Domain change effect on TSM

2001-09-17 Thread Richard L. Rhodes

Our current internal network has a bunch of dns domains resulting
from poor planning, mergers, etc.  A decision has been made to
re-domain(?) our network.  We are going to move all our network under
one new domain.  IP numbers are NOT changing, just the domain.

The plan is to put all hosts in the new domain, and keep the old
domain around for a while.  This should allow for a quick conversion
while avoiding a big-bang cutover.

I'm interested in issues/thoughts/comments on problems with TSM that
might come up in relation with this domain change.

Thanks

Rick



Re: Tivoli vs Arcserve 2000

2001-09-06 Thread Richard L. Rhodes

On 6 Sep 2001, at 9:08, Cory Heikel wrote:
 First and foremost, the support for arcserv is lousy at best -

While I didn't work with ARCserve, I've watched our ARCserve admins
and had many conversations with them.

 - Their support truly is very, very bad.
 - The product doesn't scale.  It seems to work well up to some small
 number of clients, then falls apart.
 - Its cataloging/indexing system (db?) was forever getting
 corrupted.  Much time is spent doing index restores to recover
 from corrupt indexes.

Rick



AIT1 drive problems

2001-09-05 Thread Richard L. Rhodes

We pulled the plug on our old Legato backup server a short
while ago, which had 2 qualstar 46120 libraries with AIT1
tape drives.

We decided to try and hook one of these qualstars
up to our TSM server (3494 with Magstars).  We got it up
and configured in tsm, but when we go and try to
label tapes we get errors - I've listed the errors
at the end of this email.

Now, the strange thing.  This library has 3 AIT1 drives.
Two of them get these errors, but the 3rd works!!  So we've
been able to label tapes and use them, but only with
one drive.  Any attempt to label tapes, or read used
tapes with the two drives fails.  If I try and use the
drives directly from aix (ie: tar to them), they work fine.
They also worked under Legato.

We tried our other Qualstar library also.  It also has 3
drives - all 3 fail in exactly the same way.

I made sure that the dip switches (not scsi addr jumpers)
on the drives are all set to the same positions as the
one drive that works.

I updated the drives with the latest microcode, but
this didn't help.

I'm stumped!
- all drives work in aix
- only one drives works with tsm
- latest microcode
- dip switches set like good drive
- all drives did work in Legato

Any thoughts?

Thanks

Rick


Error messages from log:

09/05/01 11:14:28 ANR0609I LABEL LIBVOLUME started as process 2.
09/05/01 11:15:03 ANR8302E I/O error on drive DRVMT5 (/dev/mt5)
  (OP=SETMODE,CC=207, KEY=05, ASC=26, ASCQ=00,
  SENSE=70.00.05.00.00.00.00.14.00.00.00.00.26
  .00.00.8F.00.0E.00.00.00.00.01.83.AD.50.,
  Description=Device is not in a state capable
  of performing request).
  Refer to Appendix D in the 'Messages' manual
  for recommended action.
09/05/01 11:15:03 ANR8302E I/O error on drive DRVMT5 (/dev/mt5)
  (OP=SETMODE,CC=207, KEY=05, ASC=26, ASCQ=00,
  SENSE=70.00.05.00.00.00.00.14.00.00.00.00.26.
  00.00.8E.00.0E.00.00.00.00.01.83.AD.50.,
  Description=Device is not in a state capable
  of performing request).
  Refer to Appendix D in the 'Messages' manual
  for recommended action.
09/05/01 11:15:03 ANR8806E Could not write volume label 002019
  on the tape in library SILO2.



Expiration Processing

2001-08-31 Thread Richard L. Rhodes

While expiration is running you can see on a q pro cmd how many
objects are inspected and expired.  Is there any way to tell the
number of objects expiration still has to inspect? In other words,
I'm trying to find some way to know where exp is and how much longer
it might run.

Thanks

Rick



Re: TSM DB/Disk Pool - Disk tuning question

2001-08-30 Thread Richard L. Rhodes

On 30 Aug 2001, at 14:48, Jeff Bach wrote:
 1.  A single backups runs to a single volume as perceived by ADSM.  The
 first session runs to the first volume allocated to the storage pool.  The
 second session runs to the second logical volume allocated to the storage
 pool.  The third sessions to the third volume.

Boy is this confusing, and not well documented.  I thought
(apparently wrongly) that the unit of allocation was a transaction.
The first transaction for a session would go on one volume, the
second tran would go on another, the third on still another.  Thus,
one session could spread its backup data across all volumes.

I'll have to try this on our test system - when we get it working
someday!

Rick



Re: TSM Bare Restore AIX Different Arch

2001-08-23 Thread Richard L. Rhodes

How do you restore the mksysb that was done to disk and was backed up
to TSM?  The only way I know of to restore one is via NIM.

Thanks

Rick

On 22 Aug 2001, at 14:22, Prather, Wanda wrote:
 We do the mksysb's to disk.  Then TSM backs them up.
 Then we only need the tape version of mksysb for the TSM server itself.



Re: EMC, Timefinder, Oracle, and TSM

2001-08-14 Thread Richard L. Rhodes

In general, yes you can.

How were you planning to perform the bcv split?  The
normal ways are to either shutdown Oracle (cold backup)
or put Oracle into backup mode (hot backup).  Either way,
once restored you can force oracle into recovery mode and
roll forward through any logs.  Yes, you will have to run
in archive-log mode.
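
The roll-forward on the restored copy looks something like this - a
sketch only, assuming the archived logs have been staged back and
ORACLE_SID/ORACLE_HOME point at the restored instance (svrmgrl would be
used instead of sqlplus on older releases; an incomplete recovery would
use "recover database until cancel" and an open resetlogs instead):

  echo "connect / as sysdba
  startup mount
  set autorecovery on
  recover database;
  alter database open;" | sqlplus /nolog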

Rick

On 13 Aug 2001, at 20:52, E. J. wrote:

 I have an Oracle DB on Sun in an EMC with Timefinder.
 The plan is to use Timefinder to create BCV's of the
 DB and then back up the BCV's to TSM.  Question:  How
 can I use the BCV's and still provide the ability to
 restore to the most recent point in time?  If the
 entire database is backed-up by attaching the BCV's to
 a Backup Server then out to TSM, can the redo logs
 created on the DB Server be applied to the full DB
 backup taken via the BCVs and Backup Server?

 EJ






Re: SQL-BACKTRACK ORACLE

2001-08-14 Thread Richard L. Rhodes

We use sql-backtrack for Sybase, but not Oracle - never wanted to
spend the money.  We've also looked at RMAN, but rejected it for
various reasons.  The incremental hot seems good, but you're still
just pulling out changed blocks, which is basically what's in the logs.

The vast majority of our oracle databases run a hot once a week and
full export on the other 6 nights as an integrity check.  Logs are
backed up nightly to tsm.

On certain large and active databases, we run hots 3 times a week
with 4 full exports.  There is a lot of log activity on these
systems, so we backup the logs to tsm each hour.

All of our Hot backups are performed by a multi-threaded ksh script
that creates a compressed backup on disk.  The current project is to
modify this script to handle raw volumes.

Rick



On 13 Aug 2001, at 15:49, Thiha Than wrote:
 hi,

 Our oracle DBA claims that 'hot' incremental backups of Oracle takes as =
 long as 'hot' full backups using SQL-BACKTRACK. Instead he does weekly =
 full 'hot' backups of the database and backups of the redo logs thru the
 =
 week.

 Is that how others using SQL-BACKTRACK and ORACLE do thier backups?


 It's the same for backing up oracle databases using RMAN, incremental
 takes almost as long as a full backup.  I have never used sql-backtrack
 before so I cannot comment on it.  With RMAN, it has to scan through every
 single Oracle data block to find the changed block.  So the incremental
 backup will take a long time, but the amount of data sent through can be
 significantly less.

 regards,
 Thiha




Testing new releases?

2001-08-08 Thread Richard L. Rhodes

Is it possible, on one AIX server, to run two TSM servers where each
server is a different TSM version?  For example, one TSM server
running at TSM4.1 and another server running at version 4.2?

We want to do some testing with 4.2 (we're on 4.1) with a spare scsi
jukebox.  We could think of no way to do this on our production
server, even though that server has plenty of resources.  So we've
started putting together a small test server.

Q)  Do you maintain a test server for testing, or do you just dump
the new release onto production and have a good backout procedure?

Rick



Re: 3494 mystery

2001-08-07 Thread Richard L. Rhodes

On 7 Aug 2001, at 14:22, Loon, E.J. van - SPLXM wrote:
 Hi Rick!
 The 3494 uses the library manager (the PC with an customized OS/2, which is
 located inside your library control unit frame) to control this. Your AIX
 host uses the ATLDD driver to address the library. The library manager
 assigns a drive to your mount request and reports this back to your host.
 I hope this explains.

Well, not exactly.  AIX knows a drive by its scsi/fc address, for
which aix creates a device (/dev/rmtx).  When you define the drive to
tsm, you specify the device (/dev/rmtx).  The aix device (say,
/dev/rmt1) has no knowledge that this drive is in the 4th frame and
is the bottom left drive (which is what the robot needs to know).
Yes, the os/2 pc knows all about the tape drives, what bay they're in,
their location in the bay, but has no knowledge of the AIX device.
So somehow the 3494, aix, and tsm have to coordinate which tape drive
is used.

The only thing I can find that both the 3494 and aix/tsm know about
is the drive serial number.  Here is one of my drives from a
mtlib -l /dev/lmcp0 -D command (Thanks Mr Sims for your wonderful
documentation!).

  70, 00F35130 003590E1A70

Here is a lscfg -v for the same drive:

  rmt5  10-58-00-0,0  IBM 3590 Tape Drive and Medium
  Changer
ManufacturerIBM
Machine Type and Model..03590E1A
Serial Number...000F3513
Device Specific.(FW)E32E
Loadable Microcode LevelA0B00E26

So one method might be that when the 3494 mounts a volume in a free
drive, it informs tsm of which drive by specifying the serial
number.  TSM would then match up the serial number to the AIX device
name, and read/write to that device.
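
Matching them up from the shell is easy enough (the lmcp device name
is ours):

  # serial numbers as the library manager reports them
  mtlib -l /dev/lmcp0 -D

  # serial numbers as AIX knows them, one line per tape device
  for d in $(lsdev -Cc tape -F name | grep rmt)
  do
    echo "$d: $(lscfg -vl $d | grep 'Serial Number')"
  done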

Yes, yes, I know, none of this is important . . . . I'm just curious.

Rick



bufpoolsize and AIX

2001-08-03 Thread Richard L. Rhodes

Our bufpool hit ratio has been running in the mid-90% range and I would
like to increase its size.  Currently, it's set to:

   dsmserv.opt: BUFPoolsize 1048576

Our tsm server is a hand-me-down with lots of memory - 12GB.  I
decided to double the bufpoolsize parm to 2048576.  When I tried to
start tsm it complained that it couldn't acquire this memory and
failed to start.  I tried to set it to just a few hundred mb more,
but had the same result.  I finally just put it back to the original
value and got our server up and running.

The doc states that the limit on the bufpoolsize parm is the size of
your virtual memory.  I'm well under that!

Any idea what limit I might be hitting?  I think I must be hitting an
AIX (4.3.3) limit of some kind.


Thanks

Rick



scratch volumes

2001-07-25 Thread Richard L. Rhodes

We have TSM setup with several tape pools in our 3494.  The tape
pools are collocated.  When we add scratch volumes to the library,
TSM grabs the scratch tapes for the tape pools even though there are
lots of tapes only partially full.  I understand this - it's the
effect of collocation.  The problem is how to keep scratch tapes
available for db backups.

q) How do I make sure some scratch volumes are always available for
db backups, or, how do I keep tsm from adding scratch tapes to a tape
pool?

Thanks

Rick



Re: LARGE FILE BACKUPS THROUGH A FIREWALL.

2001-07-16 Thread Richard L. Rhodes

Another idea . . . .

Sounds like the backup is working correctly, just taking a long time.
What is the load on the firewall system during the backup?  You might
be hitting a max throughput limit on the firewall.  Try turning on
client compression to ease the firewall's load.
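
On the client side that's just two options in the server stanza of
dsm.sys (dsm.opt on Windows): COMPRESSION YES to compress at the
client, and COMPRESSALWAYS YES so a file that happens to grow when
compressed isn't resent uncompressed:

  COMPRESSION      YES
  COMPRESSALWAYS   YES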

Rick

On 16 Jul 2001, at 11:14, Steve Martin wrote:
 Firewalls running Checkpoint.  I got a 25 GB SQL DB that takes over 40-50
 hours to backup.  I've tested the same backup but bypassing the FW's and the
 backup took only about 1 hour!  It is not only the SQL DB but any large file
 takes a tremendous amount of time to backup through the FW.



Re: Database backup performance

2001-06-28 Thread Richard L. Rhodes

Our db backup stats are below.  We do 2 db backups
per day,  one to disk and one to tape.

all pages:  11,266,048 (44g)
used pages: 8,387,060 (34g)
disk bkup:  60min - local ESS disk
tape bkup:  60min - 3590 drives
processor:  6k-s7a



Restore to different system

2001-06-14 Thread Richard L. Rhodes

I'm sure TSM can do this, but initial hunting didn't turn up
anything.  It's probably right before our eyes . . .

We have systems A and B that we backup to TSM.  They are the same
architecture (AIX).  We want to do a TSM restore from system A's
backups to system B.

How do you setup TSM so system B can access system A's backups?

Thanks

Rick



Re: IBM ESS Model 2105-F20

2001-06-12 Thread Richard L. Rhodes

Yes, this would be a consequence of moving access from one ess port
to another.

Each ess port has a separate wwpn.  During the cfgmgr run the wwpn of
the ess port is stored in the ODM under the hdisk definition.  So,
when you change the ess port used by a lun, AIX still tries to access
the lun via the old ess port.  To change this requires the hdisk to
be deleted and recreated.

Here's an example of one of my systems that has a ess hdisk via fc:

  istbd1u:root:/==lsattr -E -l hdisk9
  scsi_id   0x31e00N/ATrue
  lun_id0x560f N/ATrue
  location N/ATrue
  ww_name   0x1000c9233e36 N/AFalse
  pvid  none   Physical volume identifier False
  q_typesimple Queuing TYPE   True
  queue_depth   64 Queue DEPTHTrue
  start_timeout 180START unit time out value  True
  rw_timeout60 READ/WRITE time out value  True

  NOTE: the ww_name field is the wwpn of the ess port that this hdisk
  uses.
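
The delete/recreate itself is quick (hdisk9 is just the example above;
make sure nothing is using the disk first, and note it may come back
under a different hdisk number):

  rmdev -dl hdisk9                 # -d deletes the ODM entry, -l names the device
  cfgmgr                           # rediscovers the lun via the new ESS port/wwpn
  lsattr -El hdisk9 -a ww_name     # confirm the new port wwpn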

Interestingly, there is a similar problem with failed ess fc cards.
When an ESS FC card fails and is replaced, the new card put in has a
new wwpn!  Any access to a lun via that ess port won't work until the
AIX hdisk is deleted/recreated.  When this happened to us we were
just amazed!!!   Every other raid vendor I know of who supports
FC uses persistent wwpn names on the raid system fc adapters.  For
example, EMC uses a wwpn name that includes the Symm serial number
and card slot number.  If you swap out the Symm fc card, the new card
put in will have the same wwpn as the previous card.  We found out
that in testing the ESS, IBM tested by pulling out the fc card and
then putting the same card back in the same bay/slot!  There is a
microcode fix which provides persistent ess wwpn names, but it
requires you to delete/recreate your AIX hdisks.  Depending on how
long you've had your ESS, you may already be using this microcode.

I strongly suggest that all ESS fiber channel customers talk to IBM
about this microcode upgrade.

On 11 Jun 2001, at 17:18, Van Ruler, Ruud R SSI-ISES-31 wrote:

 Guys

  This is not really a TSM matter but perhaps someone could help me out here:

 At the moment we have a concerning situation on a ESS. When we change
 ESS-ports for an AIX-server without touching the assigned LUN's of the
 server, only a Hba-swap on the ESS, our customers complain about the
 availability of the disks from the servers point of view. They say that
 after we performed this action and they have deleted the disks, rebooted the
 server and ran config-manager, they still have missing disks (some disks are
 visible, but not all of them), note that before the Hba-swap, the server
 could see all its disks.
 When they boot the server again a couple of days later, all the disks are
 visible again without us performing any actions on the ESS or the switches.

 any ideas ???

 thanks


 Ruud van Ruler,  Shell Services International BV - ISES/31
 Our Central Data Storage Management home page:
 http://sww2.shell.com/cdsm/
  Room 1B/G01
  Dokter van Zeelandstraat 1, 2285 BD Leidschendam NL
 Tel : +31 (0)70 - 3034644, Fax 4011, Mobile +31 (0)6-55127646
 Email Internet: [EMAIL PROTECTED]
   [EMAIL PROTECTED]

 R.vanRuler




TSM server hanging

2001-05-31 Thread Richard L. Rhodes

We're having problems with our TSM server hanging during nightly
backups.

server:  RS/6K-S7A
tsm:TSM 4.1.1
Drives: 3494 with 8 3590E drives

Within the last several weeks our server has hung  during nightly
backups about a half dozen times.   Actually, we don't think it's
hung, just running very slow.  In the morning there are 50 sessions
(max sessions is set to 50) just sitting there doing nothing, or just
running very slow.  You see just a little trickle of data coming in
on the ethernet adapters - a few k/s.  The aix error log and TSM log
show no errors that would relate to this problem (ie: the aix error
log has drive cleaning messages).  The TSM log does show lots of max-
session messages, but that's because the very first backups that get
started in the evening never finish, or finish so slow that we run
out of sessions.  IBM support has suggested that we put the latest
fix on (4.1.3), but other than that they have no idea.

Any help is more than Welcome!

Rick



Re: Multiple TSM* Servers On Same Machine

2001-05-11 Thread Richard L. Rhodes

We will be looking at running multiple TSM instances, probably on one
box, later this year.  Our server has plenty of horsepower and
bandwidth . . . the only reason we may need to do this is database
size.

The more I ponder this the madder I'm getting!  Why should I have to
purchase another instance of TSM just to keep the db size down, so
that backups and recoveries can run faster!  We've got TB Oracle DBs
that don't have this problem!  If they just implemented an Oracle-style
hot backup so that I could copy out the actual db files live,
then this problem would be solved.

I'm about ready to jump on bandwagon asking for the ability to use
Oracle or DB2 for the TSM db engine!

Rick



On 10 May 2001, at 11:08, Paul Zarnowski wrote:

 The reason I would do it would be to keep the database size down.

 At 11:41 AM 5/9/2001 -0400,
 [EMAIL PROTECTED] wrote:
 Why would one put multiple TSM servers on a single machine?




Re: anyone else heard this?

2001-05-01 Thread Richard L. Rhodes

My understanding is that there is only one vendor for
the tapes, Imation, even though you might get them
from someone else (even IBM).

Rick

On 1 May 2001, at 13:51, Talafous, John G. wrote:

 Yes, I was told two weeks ago that there was a 9-12 week wait. If it is now
 8-10 weeks, there must be some improvement. I am also told that the reason
 is the popularity of the product. A simple supply and demand issue. Hope the
 manufacturer(s) can ramp up some additional capacity quickly.

 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http://www.timken.com




Re: 500GB Backup

2001-04-30 Thread Richard L. Rhodes

You're getting your terms mixed up.

9000 kB/s is 9MB/s, as you state.

A 100 Mbps ethernet (fast ethernet) link is rated in bits per second,
not bytes per second.  100 Mbps is 12.5 MB/s raw, or roughly 10 MB/s
after protocol overhead, so if you're getting 9 MB/s you're doing quite
well.  This is your bottleneck.  To go faster you're going to need
gigabit ethernet, which is 1000 Mbps, or roughly 100 MB/s.

I run dual gigabit connections into our tsm server, a
RS/6k-S7A.  IBM's adapters seem to max out at around
30-35mB/s, or 20,000-25,000 packets-per-second.  As
soon as we get a new switch that supports jumbo
packets (9k ethernet packets) I expect to see the
packets-per-second drop and the throughput increase
for clients that also have gigabit connections.

Rick



On 27 Apr 2001, at 14:03, Dearman, Richard wrote:

 9000 kB/s is only 9mb per second.  Over a 100meg ethernet LAN.  That doesn't
 sound to god to me.

 -Original Message-
 From: Adolph Kahan [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 27, 2001 1:02 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 500GB Backup


 If you're getting 9000 kBytes/s over a 100mb LAN, then that is very good
 and you
 are not going to move data faster over the LAN.


 - Original Message -
 From: Dearman, Richard [EMAIL PROTECTED]
 Sent: Friday, April 27, 2001 1:32:51 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 500GB Backup

  I am not using any TDP product.  I'm picking up oracle db dump files with
  the regular tsm client.  Going over a 100mb ethernet segment.  Then
  migrating the files to a 3494 library.  All the data goes to disk first.
 
 
 
  -Original Message-
  From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
  Sent: Friday, April 27, 2001 12:25 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 500GB Backup
 
 
  And it depends on the type of DB.  TSM only supports a couple in LAN-free
  mode.
 
  -Original Message-
  From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
  Sent: Friday, April 27, 2001 1:18 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 500GB Backup
 
 
  You need a SAN in place to use the LAN free backup.
 
  -Original Message-
  From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
  Sent: Friday, April 27, 2001 12:01 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 500GB Backup
  Importance: High
 
 
  Hi
  If I am correct IBM offers LAN free backup which is faster .But need to
  purchase additional s/w.
  pinni
 
 
 
  -Original Message-
  From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
  Sent: Friday, April 27, 2001 11:48 AM
  To: [EMAIL PROTECTED]
  Subject: 500GB Backup
 
 
  I need to backup a 500GB database and my current setup is my tsm server is
  on the same ip subnet has the database server.  Tsm is connected to 500GB
 of
  SSA storage with 5 RAID5 sets with two 50GB tsm volumes on each raid set.
  My though put to the disk only seems to be 9000 kB/s.  Which doesn't seem
  very high to me. My backup time is about 6 hours.  Does anyone have any
  better senarios for backing up this amount of data in the smallest amount
 of
  time soon.  I am going to be backing up a 1Tb per day and I need to get me
  backup times to the lowest.  Does anyone have any suggestions on how to do
  that.
 
  Thanks

Re: Server pricing information

2001-04-30 Thread Richard L. Rhodes

You won't find it.  Tivoli no longer
publishes list prices for TSM, at least
that's what they told me when I complained about
not finding price info.  The only way to get
pricing is to request a quote.

Oh, BTW, the fine print
on the quote doesn't allow you to tell anyone else
the price they quoted you.


On 27 Apr 2001, at 15:20, Thomas Denier wrote:

 My site has a TSM 3.7 server running under OS/390. We pay for this on
 a monthly charge basis. With the end of support for the server code
 coming up in another five months or so, I am trying to find out what
 it is going to cost to upgrade to TSM 4.1. I have also been asked to
 look into the economics of migrating to an AIX server, mostly because
 there is no apparent prospect of having SAN support integrated into
 the OS/390 server. I have not been able to find pricing information
 for any type of 4.1 server on Tivoli's Web site. I am not eager to
 attempt contact with Tivoli marketing; I still have vivid memories of
 getting the run-around when I tried that in preparation for upgrading
 from ADSM 3.1 to our current server. Is TSM server pricing information
 available on line somewhere?




Re: 500GB Backup

2001-04-30 Thread Richard L. Rhodes

On 30 Apr 2001, at 11:24, Burton, Robert wrote:
 We have benchmarked this to deathEMC/IBM and Hitachi disk we can get 8
 to 10 MB/s, on both IBM 3590e's and STK 9840 we are getting
 35 to 40 MB/sec

We use IBM Shark disk for our TSM staging pools.  One raid set in the
Shark (8 drives, raid5) can accept sequential data at 30MB/s
continuously.  We send 2 gigabit data streams to disk overnight at
60-70MB/s.  Most of our data is already-compressed Oracle backups, so
when we destage the disk pool we drive 8 3590E drives at 80-90MB/s.

We have lots of EMC storage.  When using meta-volumes we can drive
them at about the same rate for sequential writes - about 30-35MB/s.

Rick



Re: 500GB Backup

2001-04-27 Thread Richard L. Rhodes

On 27 Apr 2001, at 12:32, Dearman, Richard wrote:

 I am not using any TDP product.  I'm picking up oracle db dump files with
 the regular tsm client.  Going over a 100mb ethernet segment.  Then
 migrating the files to a 3494 library.  All the data goes to disk first.



 -Original Message-
 From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 27, 2001 12:25 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 500GB Backup


 And it depends on the type of DB.  TSM only supports a couple in LAN-free
 mode.

 -Original Message-
 From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 27, 2001 1:18 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 500GB Backup


 You need a SAN in place to use the LAN free backup.

 -Original Message-
 From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 27, 2001 12:01 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 500GB Backup
 Importance: High


 Hi
 If I am correct IBM offers LAN free backup which is faster .But need to
 purchase additional s/w.
 pinni



 -Original Message-
 From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 27, 2001 11:48 AM
 To: [EMAIL PROTECTED]
 Subject: 500GB Backup


 I need to backup a 500GB database and my current setup is my tsm server is
 on the same ip subnet has the database server.  Tsm is connected to 500GB of
 SSA storage with 5 RAID5 sets with two 50GB tsm volumes on each raid set.
 My though put to the disk only seems to be 9000 kB/s.  Which doesn't seem
 very high to me. My backup time is about 6 hours.  Does anyone have any
 better senarios for backing up this amount of data in the smallest amount of
 time soon.  I am going to be backing up a 1Tb per day and I need to get me
 backup times to the lowest.  Does anyone have any suggestions on how to do
 that.

 Thanks




Re: AIX Question

2001-04-26 Thread Richard L. Rhodes

According to the release material I've read, AIX 5L will allow
multiple default gateways.  You might check whether this has been
back-ported to some patch level of AIX 4.3.3.
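
If the real goal is just to reach the networks behind the second card
on 4.3.3, one default route plus static routes usually does the job.
A rough sketch (the addresses are made up - substitute your own):

   # default route out the first card
   route add default 10.1.1.254
   # static route for the subnets reached through the second card
   route add -net 10.2.0.0 -netmask 255.255.0.0 10.2.1.254

These go away at reboot unless you also make them permanent through
smit (or chdev on inet0), so treat the above as a test, not the final
setup.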

Rick

On 26 Apr 2001, at 10:42, Taylor, David wrote:

 By definition, you can only have one default gateway.

 David Taylor
 Senior Software Systems Engineer
 West Bend Mutual Insurance
 (262) 335-7077
 [EMAIL PROTECTED]


 -Original Message-
 From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, April 26, 2001 9:53 AM
 To: [EMAIL PROTECTED]
 Subject: AIX Question


 I'm new to the AIX world so forgive me if the question seems stupid.  I
 installed two nic cards in my H70 running AIX 4.3.3.  Both are on different
 ip subnets.  Everytime I set the default gateway for the second nic card it
 changes the default gateway on the first card to be the same as the second.
 Am I doing something wrong.  It won't let me set two different default
 gateways for each card.

 Thanks




TSM cold backups

2001-04-19 Thread Richard L. Rhodes

As our TSM database grows, we're hitting the inevitable question
of how big to let the db grow vs. purchasing and bringing up another
TSM instance.  We're not there yet, but we can see this on the
horizon.
The issue isn't TSM performance, but db backup/recovery.  This got
me thinking (always a dangerous exercise): does anyone use cold
backups for their TSM db?

In other words, does anyone shut down their TSM instance, make some
kind of backup of the TSM db and related files, and then bring
the db back up?  Another method would be to use EMC BCVs or IBM Shark
FlashCopy to enable a very short shutdown-split-restart-backup
system, like we use for critical databases.

My thought is that it would be much faster to backup and restore the
actual DB files than use TSM's normal backup.
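
Something like this is what I have in mind - strictly a sketch, with
made-up paths, the server binary location depending on your level, and
the file list being whatever your dsmserv.dsk, volhist, and devconfig
options actually point at:

   # halt the instance from an admin session
   dsmadmc -id=admin -password=xxxxx halt
   # copy the db/log volumes plus the config files while it's down
   cd /tsm/server1
   cp -p dsmserv.dsk dsmserv.opt volhist.out devconfig.out /backup/tsm/
   cp -p /tsm/db/*.dsm /tsm/log/*.dsm /backup/tsm/
   # bring it back up
   nohup /usr/tivoli/tsm/server/bin/dsmserv quiet >/dev/console 2>&1 &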

Better yet, is there any way to just copy the TSM db while it is
live?  Something like Oracle's hot backup?

Just thinking out loud . . . .

Rick



Tivoli reselling BMR

2001-04-17 Thread Richard L. Rhodes

Looks like IBM (well, Tivoli really) is reselling BMR.

http://www.ibmlink.ibm.com/usaletsparms=H_201-101



Re: Amount of Data Backed Up Each Night

2001-04-09 Thread Richard L. Rhodes

We back up about 1.2TB per night: 850GB of Lotus Notes and 400+ GB of
normal stuff.

Rick

On 5 Apr 2001, at 13:39, Diana J.Cline wrote:

 Is there anyone else out there who is backing up 300gb per night or more?
 If so, i'd love to converse with you.




Re: Comparison of Backup Products

2001-03-29 Thread Richard L. Rhodes

Well, we are in the process of converting to TSM from Legato (Unix)
and Archserv (NT, Netware).

Legato:  I've dealt with lots of companies' support organizations, but
Legato's is absolutely the worst.   Code quality?  Check the Legato
mailing list - the complaints about the code quality are loud and clear.
We got so mad that we looked to convert to something else.

Archserv:  I've watched our NT/Netware admins fight Archserv for years.
Great product for a small number of systems, but it falls apart as the
number of clients grows.

As far as Netware backups go, every backup product I know of has
problems backing up Netware.  The last time I tried to get a Legato
Netware client running on a Netware server it took a full week.  I
had to get from Legato an unpublished list of nlm's needed to make it
work.  Our conversion to TSM is almost complete for NT and Unix, but
Netware is being a problem.  There are Netware patches we need
installed to support TSM, but we can't install them due to other Netware
issues (out of my area, not sure of details).  Also, for whatever the
new Netware filesystem is, there is no open file agent for it, and
won't be for some time due to Novell's constant changes (or so I'm
told).

TSM is anything but perfect, but so far for us it's been just about
what we expected.  My take is that TSM is normal IBM software . . .
very good core, more complex than it needs to be, and it remains
forever unfinished and probably always will.


On 28 Mar 2001, at 14:15, Gill, Geoffrey L. wrote:

 To All,

 Due to the poor quality *SM code Tivoli has been releasing, and the
 resultant
 fallout at my site, I've been asked to investigate other backups solutions.
 I'm
 familiar with Veritas NetBackup but have not worked with any other backup
 products. Does anyone have any experiences / information on non-*SM
 products
 that they can share.




TSM Client on AS/400?

2001-03-29 Thread Richard L. Rhodes

As I stated in a previous post, we are merging with a company that
uses the AS/400 platform for their Notes server.  Ours is currently
on NT, but the merger teams sound like they are leaning toward AS/400
for the merged company.

From a previous post I learned that there is no TDP for Notes on the
AS/400.

Now, from my browsing, I learn that there is a TSM server for the
AS/400 platform, BUT NO CLIENT!  According to the AS/400 TSM Server
manual, there is no backup/archive client for the AS/400!

Now I'm really confused!  The company we're merging with runs a
TSM server on HP, but is somehow backing up their Notes db's on the
AS/400 to the TSM server.  HOW?  Our combined Notes systems would be
over 1.5TB, and I'm getting very concerned about being able to back up
this amount of data on an AS/400.



Thanks

Rick



TSM, Domino, Transaction Logging, AS400

2001-03-23 Thread Richard L. Rhodes

We are merging with a company that uses IBM AS400 for their Notes
server.  It's looking like the combined company may standardize on
the AS400 for this function.  Neither company uses Domino transaction
logging, which means we're both doing full backups every night.  The
Notes merger team is looking at transaction logging as a possible
option.

I went looking for TDP for Domino for AS400, and . . . .
found that there is no product like this!!!  This is very strange
given that IBM is pushing the AS400 as a Domino server platform.

Q)  Can you use Domino transaction logging, and still backup without
a TDP Domino agent?

Thanks

Rick



Re: SAP and 3494 performance.

2001-03-20 Thread Richard L. Rhodes

Thanks for great info!

q)  What kind of a client system are you using?
- machine type
- # of processors

q)  When you say " don't multiplex with tdp" I'm not sure what you
mean.
- Are you using TDP for SAP?
- Why Not?

Thanks!

Rick


On 20 Mar 2001, at 10:59, Cook, Dwight E wrote:

 alter to go to diskpool first
 use tsm client compression
 don't multiplex with tdp
 run about 10-20 concurrent client sessions (based on processor ability)
 you should see about 4/1 compression
 you should see about 11.6 MB/sec data transfer (if things are really OK 
 processor can keep up)
 you should see your db backup in just over 2 hours...

 we have a 2.4 TB data base, use the above methods and push 600 GB (the
 compressed size) in 16 hours...
 we also have smaller ones we see the same rates with... such as a 200 GB
 that compresses down to about 60 GB and runs in  under 2 hours

 hope this helps...
 Dwight


 -Original Message-
 From: Francisco Molero [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, March 20, 2001 9:35 AM
 To: [EMAIL PROTECTED]
 Subject: SAP and 3494 performance.


 Hello,
 I have a query about performance between a TDP for SAP
 R/3 under HP-UX and TSM Server with a 3494 library.
 The size of database is 350 GB and the connection is
 via tcpip (ethernet 100). I used the brbackup -c -t
 offline command and the bacukp is sent to tape pool in
 the library 3494. I have a private network and also I
 am using the compression parameter to yes in the
 client in order to improve the performance in the
 network. The backup takes 6 hours in backing up the db
 and this time is very high. I need to reduce it. the
 drives are 3590, and the format device class is DRIVE.
 Can someone help me about the parameters which can
 improve this performance ???

 Regards,
 Fran

 __
 Do You Yahoo!?
 Get email at your own domain with Yahoo! Mail.
 http://personal.mail.yahoo.com/




Re: Multiple ADSM/TSM Instances Sharing a Library Via ACSLS

2001-03-16 Thread Richard L. Rhodes

Have you had any support problems between IBM and STK  over the IBM
drives in the STK library?   It's possible we may run this config in
the future and I'm interested in how the 2 companies cooperate.

Rick

 One library is home of 8 9840 drives attached to RS/6000 S7A via SCSI, the
 other is home of 8 3590E drives attached to a different S7A via fibre channel
 (switched). All drives are used exclusively by 2 x 2 TSM servers, i.e. on each



Re: Point-In-Time Restore

2001-03-08 Thread Richard L. Rhodes

Thanks Andy,

We've tried to get an answer to this question since we first put in
TSM (3rd qtr last year), but couldn't find an answer, including from
support.

Your example exactly describes our situation, but our systems don't
appear to act like you're describing.

We have a domain called "AIX" with the following mgt classes:

class      vde  rev  vdd  rto
aix         14   14   14   14   (dflt)
aix1m       32   32   32   32
aix2m       62   62   62   62
aixorac      2    1    1   90   (oracle backups)

All our directories go into the last class, aixorac, with the largest
RetainOnlyVersion, just as you describe.  When I fire up the GUI,
change the date to point back a month, point to a filespace that
is bound to aix2m (2 month PIT), and go to a directory that changes
every day - everything looks OK!  The directory shows up in the
GUI with all its files.  I don't seem to be able to duplicate the
situation you describe!

We've wondered about this directory issue, but didn't think there
was a problem given that it appeared to be working.  I figured it
created the directory from the files that exist in it.
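
For what it's worth, my understanding from the client manual is that
directories can also be bound explicitly, instead of defaulting to the
class with the longest retain-only period, via the DIRMC option in the
client option file (dsm.sys on our AIX clients).  Something like this,
where the class name is just ours:

   DIRMC aixorac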

Any thoughts?

Rick




On 7 Mar 2001, at 7:58, Andy Raibeck wrote:
 Day 1: File C:\MyDir\MyFile.txt is backed up. MyDir is bound to CLASS1 and
 MyFile.txt is bound to CLASS2.

 Day2: File C:\MyDir\MyFile.txt is changed. The MyDir directory is also
 changed. When the backup runs, MyDir will be backed up. Because only 1
 version is kept, the version that was created on Day 1 is deleted.
 MyFile.txt is also backed up and bound to CLASS2. There are now 2 versions
 of MyFile.txt.

 Now you need to do a PIT restore back to Day 1. However, since there is
 only one backup version of MyDir, created on Day 2, it will not be
 displayed in the GUI when you PIT criteria specifies Day 1.



Re: Compression / No Compression ???

2001-03-08 Thread Richard L. Rhodes

Oracle db's are highly compressible.  We run our Oracle backups
through the unix compress utility.  I've seen tablespace files on a
newly created instance (no data loaded yet) compress from 1gb down
to 10mb.  A normal tablespace file full of data will typically
compress about 3-to-1.

In general, data can only be compressed once.  If you compress via
software, like the unix compress utility or TSM's client, then the
drive's hardware compression won't add anything.  In this case you
would basically get the native capacity of the tape drive onto a tape.
We use 3590E drives with tapes that have 40gb native capacity.  Our
tapes that hold oracle backups generally end up with right around
40gb.  Client side compression accomplishes the same thing.

When hardware compression is turned on, the tape drive tries to
compress the datastream it receives from the tsm server.  When you are
not using client compression and not backing up already compressed
files, that compression does real work.  On the tapes with this kind
of backups we get anywhere from 50gb up to 120gb.  120gb on a 40gb
tape is a 3:1 compression ratio.  An Oracle db will compress around
3 to 1.

Client side compression takes cpu cycles and in general will result
in a much slower backup, but it uses much less network bandwidth.
Hardware compression in the tape drive is very fast, much faster than
client side compression (usually).

The big argument is usually whether you should run your tape drive in
compressed mode even if you send already compressed data to it
(client side compression or just backing up .Z or .zip files).  If
you compress a datastream that is already compressed, the datastream
will actually get bigger.  Go ahead, run a unix compress on an
existing .Z file.  My answer is to always leave it ON.  Modern
compression chips used in tape drives can detect when data received
by the drive is incompressible, and will stop compressing the data.
AIT drives are like this.  I've got to believe that IBM 3590 drives
are at least that smart!!!  For that matter, the TSM client can also
do this!!!  That's the purpose of the "compressalways" option in the
client option file.  When running "compression yes" and
"compressalways no", the client will attempt to compress files.  If
the client sees that a file grows when compressed, it stops
compressing and just sends the file as-is.
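
For reference, the two options I'm talking about look like this in the
client option file (these values are simply what we happen to run, not
a recommendation):

   COMPRESSION     YES
   COMPRESSALWAYS  NO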

The one place I've found client side compression very useful is when
backing up remote systems on a WAN.  If I run the backup without
client compression I destroy response time for all WAN users.  By
using client side compression I throttle the backup: the client
systems can't compress/send the data fast enough to dominate the WAN
link.  Oh, for a client side bandwidth parm like Veritas has . . . . .

Rick



On 8 Mar 2001, at 11:29, Roy Lake wrote:
 Hi Chaps,

 Just wanted to share my findings with you with regards to TSM compression/no 
compression.

 We have 3575-L18 tape library. We use 3570-C Format tapes. We used to have CLIENT 
compression set to YES when doing backups, with DRIVE compression OFF. Most of the 
data on our systems is Oracle.  When we had client compression set to YES, each 
cartridge would take about 5GB.

 I have done some testing and found that when I switched compression OFF, we managed 
to get around 21GB on each cart, and also the backups were a LOT quicker.

 IBM recommend (and I quote:) "Oracle databases are normally full of white space, so 
compression is required. Either h/w or client compression."

 Could someone please explain WHY compression is required if we get more on tape with 
it switched OFF, and the backups are quicker?.

 In our environment, TSM has its own 10Meg a sec network, and 99.9% of the backups 
are done overnight, so there is no problem with performance issues.

 Am I missing something here, or is it REALLY a better idea to forget about 
compression totally?.





Re: Backupsets and cd burners

2001-03-06 Thread Richard L. Rhodes

On 6 Mar 2001, at 7:55, Gill, Geoffrey L. wrote:

 3. I noticed the post recently that talks about putting backupsets on CD. I
 have never heard of a cd-writer/rewriter on an AIX box, which doesn't mean
 it's not possible. I'm wondering if anyone has a solution for creating
 backupsets on an environment similar to mine, or any other for that matter,
 that can create backupsets on cd's.

I've been looking into creating mksysb's onto cdrom.  IBM added the
mkcd cmd to AIX.  This command fronts the cd creation process, but
doesn't actually format the image or burn the cd; you need other
software to do that.  Here are some comments from the
/usr/lpp/bos.sysmgt/mkcd.README.txt file . . . .

You'll need a recordable CD drive (CD-R or CD-RW) and the software 
that will create a CDROM file system and "burn" the CD.

There are many CD-R (CD Recordable) and CD-RW (CD ReWritable) drives
available.  We tested with four of the  external drives.  We figured
most customers would use external drives because they already have
an internal CD drive in their box.
   *  Yamaha CRW4416SX - CD-RW (rewritable)
   *  RICOH MP6201S - CD-R (recordable)
   *  Panasonic 7502-B - CD-R
   *  Young Minds CD Studio - CD-R

We tested 3 different types of software.
   * Jodian Systems and Software CDWRITE[tm] CD-ROM Premastering
   Package
   * GNU mkisofs and cdrecord - available from  The CD building
   project for
 UNIX or The BULL website: click on "download", then click on
   "AIX 4.3.2 and later", and get cdrecord-1.80.23.exe
   * Young Minds makedisc premastering software
Jodian and Young Minds are purchasable and supported products.
Please see their websites for details.  The GNU commands are
available for free.  Both Jodian and Youngminds use their own device
drivers for the CD-R.  The GNU cdrecord command uses the system CD
device driver.

The Young Minds offering is a package deal of cd burner and software.
Costs $3500.  Their cd burner emulates an Exabyte 8200 tape drive.

The Jodian Systems package included a Yamaha 8824 cd burner with
software for $1300.

I've heard that there is one other company that makes this kind of
software for AIX.  Not sure who.

Not wanting to spend any money, I've been asking around work here for
a cd burner I could borrow and try with the GNU software.  Unfortunately,
no one around here has one.  I was going to purchase one of the models
IBM tested with, but they seem to be older models that aren't for
sale any more.

If anyone is using a cd burner that works with the GNU software, I'd
love to hear what model it is.

btw:  the mkcd cmd will allow you to burn an existing mksysb disk
file to cd.  The cd can be bootable, or you can boot with the AIX
release cd's.  I'm trying to find out if I can have systems do a
mksysb to disk, send them to a common system for storage, and burn a
cd if I need to use one.  ie: trying to use mksysb's on disk without
nim, or spending the money for BMR.
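
If I do get my hands on a burner, the GNU route ought to look roughly
like this.  The device address and paths below are made up, and a CD
made this way is just a data CD - you'd boot from the AIX install
media and point the restore at it:

   # mksysb to a disk file instead of tape (-i builds a fresh image.data)
   mkdir -p /backups/cdtree
   mksysb -i /backups/cdtree/`hostname`.mksysb
   # wrap that directory in an ISO9660 image with Rock Ridge extensions
   mkisofs -R -o /backups/mksysb.iso /backups/cdtree
   # find the burner address, then burn it
   cdrecord -scanbus
   cdrecord -v dev=0,3,0 speed=4 /backups/mksysb.iso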


Rick



Re: 3590E

2001-02-15 Thread Richard L. Rhodes

The tape with the splice was an Imation tape purchased
through IBM.

Rick

On 15 Feb 2001, at 10:53, [EMAIL PROTECTED] wrote:

 ...there was a splice...

 Rick - Yikes!  Most bad.
Could you post to ADSM-L what brand of tape that was?

  thanks, Richard




Re: 3590E K tape

2001-02-15 Thread Richard L. Rhodes

While we've had some problems with K tapes, overall we're very happy
with them.  Out of about 1600 tapes in use, we've had problems
with about 6.

Rick

On 15 Feb 2001, at 12:50, Debbie Lane wrote:

 Thanks for all the input regarding this tape.  Imation is the only
 manufacture.  I have decided not to use the K tape because, they seem too
 fragile and the quality is not consistent.  I need the extended capacity,
 but I also need to know that I have a high probability of a successful
 restore.




Re: BMR Software

2001-02-14 Thread Richard L. Rhodes

 Go download the DR redbook

Could you please give the full name of the Redbook you are referring to?
Initial searches on the redbook web site didn't find anything.

Thanks

Rick



tsm upgrades and testing

2001-02-09 Thread Richard L. Rhodes

Here is a real general question . . . . .

How do you handle tsm upgrade testing?

When a new release comes out do you:

a)  dump it on your server, upgrade, and see if it works?
b)  create a test instance and try it out first?

Related questions:

1)  I know you can have multiple tsm instances running concurrently
on a server.  Can you have multiple tsm instances that are running
different tsm versions on the same server?

2)  Do you keep a test instance around so you can play with new
features without affecting your production system?


Thanks

Rick



Re: Achievable throughput using Gigabit connections?

2001-01-26 Thread Richard L. Rhodes

On 26 Jan 2001, at 9:10, Steve Schaub wrote:
 I am looking to find out what kind of mb/sec other shops are getting.  Our 
environment is:

Hi Steve,

Our TSM server is an RS/6000 S7A with IBM ESS disk for the
staging area.  The system has
2 gigabit ethernet connections.  These connections
seem to hit around 30-35MB/s.  Combined they give 60-70MB/s.
The client systems are all 100BaseT.  I think I'm maxing them out in
packets-per-second, but could get more throughput if we get some
clients onto gigabit with fat packets.  I'm only getting
around 5-6MB/s from many systems that are EMC based, but then
I'm pulling data streams from lots of boxes concurrently.

Rick



Notes: conversion to logging and TDP

2001-01-05 Thread Richard L. Rhodes

Hi Everyone,

As with many tsm sites, we are struggling with Notes backup.

Our environment is that we back up Notes from several large
centralized replication servers, not the actual servers users
interact with.  Our Notes servers do NOT use the v5 transaction log
feature, so we basically do a full backup every night.  Yes, 800GB
worth every night!!!

The Notes team has agreed to look at the change to logging and
backing up with TDP for Notes, but this isn't going to happen any
time soon since it's a HUGE change - according to them.

I know next to nothing about Notes, so I'd like to ask some of the
experts out there some questions:

1)  How big of an effort is it to convert to logging?

2)  Is it possible to only use logging on our centralized rep servers
and not on the actual production servers?  In other words, convert
only the rep servers to logging and TDP, and leave the production
servers alone.

Thanks

Rick



Re: what is dsmserv.dsk

2000-10-02 Thread Richard L. Rhodes

The first time I ran into the dsmserv.dsk file I was completely
confused.  The QuickStart guide has 2 references to it: pages 2 and 7.
Neither gives any idea of what it is or why you need it.  The
comment on page 7 about this file being read from the directory where
the server is started is under a section labeled "Defining
Environment Variables" - kind of hidden.  On pages 7 and 8, under
"Running Multiple Servers on a Single Machine", there is no mention of
needing to create a dsmserv.dsk in the directory where you want the
db files created.  They talk about copying the dsmserv.opt, but not
the dsmserv.dsk.  This omission is a major oversight that caused me
several hours of confusion.
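
For the archives, the recipe for a second server directory, as I
understand it, goes roughly like this.  This is from memory - check the
command syntax against your QuickStart, and the paths, sizes, and
install location below are only examples:

   mkdir /tsm/server2
   cd /tsm/server2
   cp /usr/tivoli/tsm/server/bin/dsmserv.opt .    # then edit ports, etc.
   # pre-format a log and a db volume for the new instance
   dsmfmt -m -log /tsm/server2/log1.dsm 16
   dsmfmt -m -db  /tsm/server2/db1.dsm  500
   # initializing them is what creates the dsmserv.dsk in this directory
   dsmserv format 1 /tsm/server2/log1.dsm 1 /tsm/server2/db1.dsm
   # always start this instance from this directory
   cd /tsm/server2 && /usr/tivoli/tsm/server/bin/dsmserv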

So, I agree with the original poster that dsmserv.dsk is poorly
documented.

Rick

On 30 Sep 2000, at 7:19, Richard Sims wrote:
WHAT IS THIS "disk definition file dsmserv.dsk"?  

 Don - You are apparently not referencing the Quick Start manual, needed for
   properly installing and initializing your new server - dsmserv.dsk
 is described in there, and would be created automatically in the install.
 Don't try to take shortcuts: follow the detailed instructions.
 Richard Sims, BU




Re: Cleaning a Qualstar 46120 library

2000-09-29 Thread Richard L. Rhodes

I can't speak for AIT-2, but I have 2 46120's with AIT-1 drives.

Now, my libraries are 4+ years old (upgraded once from Exabyte to
AIT-1), but they do not have any ability to auto-clean drives.  You
need your backup program to control this.

As far as self cleaning goes, I found that if I didn't clean the
drives I would eventually get hardware drive errors in the AIX error
log.  The drive did NOT indicate it needed cleaning (some flashing
light pattern on the drive).  The drive thought everything was ok, but
AIX was getting hardware errors from it.  Running a cleaning tape in
the drive always fixed this problem.  I now clean the drives about
once a month.

You can really tell how dirty the drives are by the length of time
the cleaning cartridge stays in the drive.  I just cleaned my drives
today - it probably didn't take 30 seconds to clean a drive.  When
the hardware errors occur (described above), it could take a couple of
minutes to clean a drive.  If it does take this long to clean a
drive, I run the drive through another cleaning (or more) until the
drive completes a cleaning cycle in a short period.
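
On the TSM side of the original question: the cleaning frequency is an
attribute of the drive definition.  If my memory of the syntax is right
(check the Admin Reference - the library and drive names here are just
placeholders), it would be something like:

   update drive qstarlib drive1 cleanfrequency=asneeded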

NOTE:  ait-2 might be different!

Rick


On 29 Sep 2000, at 9:01, Dean Winger wrote:

 Can anyone help with this?
 I have a Qualstar 46120 tape library outfitted with four AIT-2 drives.  I would like 
to set up a schedule for cleaning the drives.  Any recommendations on how I should do 
this?  I am using TSM 4.1.1 for Windows NT.

 I do not think that this library has the built in capability of automating this 
process via a hardware solution.  I see that there is a cleaning frequency parameter 
with options for Gigabytes/ASNEEDED/None.  Is this what I should be using?  If so how 
should I set it for this library with AIT-2
drives?

 Just a side note:  I thought the press releases for AIT-2 technology stated that 
these drives are self-cleaning.  If this is true, why does Sony make cleaning tapes?

 Thanks for your help



Re: Server won't start from the INITTAB

2000-09-12 Thread Richard L. Rhodes

On 12 Sep 2000, at 8:04, Richard Sims wrote:
 We are having problems starting our server automatically via the INITTAB file.
 ...
 Explanation (fstat error): A file or directory in the path name does not exist.

 The server is testing for its database and recovery log volumes, per the
 dsmserv.dsk file, and cannot find them.

Let me show my ignorance . . . . .

When I set up our little test *SM instance I installed it in a
different location than the default.  I was pretending I was setting
up a second instance.  To start the instance I had to change the
current directory to the location of the dsmserv.dsk file.  To get it
to start from the inittab file I had to add a "cd" command to this
directory.  If I didn't, I started up the default *SM instance that
gets installed with the product.

So, if your instance is installed outside of the default location, try
adding a "cd" to the directory where your dsmserv.dsk file is located.

Rick



Shark setup for TSM

2000-09-07 Thread Richard L. Rhodes

We will be using an IBM Shark disk subsystem for our TSM db and
staging pools.  I'm interested in advice from others who use a Shark
for these purposes on how you have it set up.

1) My first thought for the tsm db, is to allocate small luns on each
raid set, combine them into one AIX volume group.  This would allow
me to get all disks in the Shark involved in db accesses.  But, it
would put the db on the same raidsets as the staging pools which I
think would be bad.

2)  My second thought, is to use a couple raidsets for the db with no
staging pools on them.  This basically dedicates them to the db.
This sounds like a better idea.

3)  For staging pools, I'm planning to spread the pool files across
different raidsets/luns.  I know that TSM allocates a thread per pool
file, so I was going to keep the individual files down to around 4gb.
I'll have several hundred gigs of staging pool.  (See the rough sketch
after this list.)

4)  The Shark manual indicates that you can create an AIX striped
filesystem across Shark luns, and that this is good.  Is anyone doing
this?  Logically this creates a raid 0 (striped) filesystem across
raid 5 luns.  It seems that Shark read-ahead logic would get messed
up.

5)  Even though the Shark is a raid system, are you still using TSM
db and/or log mirroring?




Thanks

Rick



Re: Mksysb

2000-07-28 Thread Richard L. Rhodes

On 28 Jul 2000, at 11:01, Shekhar Dhotre wrote:
 Storing a mksysb on Solaris is ok, but what's the use?
 I mean, how can we use it to reinstall the crashed AIX server?  It should be on
 tape.  I don't know if I am wrong or right.

My understanding is that you can only network-install a mksysb via
NIM.  I went around on this with our local rep, wanting to know why
IBM forces the use of NIM for network installs.  My only exposure to
it was an AIX admin class a long time ago, and that was enough to make
me decide never to use it.  I asked him why they
don't support network installs (both mksysb and initial loads) via
ftp or nfs.  Life would be so much simpler . . . . .

Rick