> Anyway, there are some rather large data requirements.  But one of them
> is 128GB.  I don't know what 128 GB means in this case.  Raw data.
> The dasd used by the current database platform.  The amount of dasd on
> that server...just what, I don't know.
> In either case, I expect at least 100 GB of mainframe dasd to be used
> for this.

Think seriously about FCP SCSI disk for this sucker. You can glue 3390s
together to get this much space, but it's a lot of hassle, and you may
be able to recycle existing SAN disk, which will save you a bunch of
money.

> On the little info I have, I'm expecting a total of 400 GB to be
> moved to us.  400 GB ends up being a lot more, when we factor in test
> systems, number of Linux images necessary (8X5 servers,
> weekend servers,
> 24X?? servers) as well as the dasd requirements for flashcopy.

Another good reason to think FCP. Gives you much better granularity on
disk allocation.

> I've only had to deal with a few GB databases (DB2, Oracle, IMS,
> etc).  Where I figure I will have between 7 and 12 databases, the big
> one, 128 GB has me concerned.  Just how do you back that type of thing
> up?

One popular way is to allocate a bunch of disk to the guest, use the
online backup utility included with the DBMS to dump the database to
disk, and then back up those dump files to tape using your fave
backup tool. Cheap, and easy to implement. Also, works with pretty much
any database -- disk is cheap enough these days that it's often less
expensive to do this than buy smarter backup tools, and if you
coordinate the dumps, you can use the same holding area for more than
one DB machine.
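
A rough sketch of that first option, as a nightly script -- the paths, the
placeholder dump step, and the Oracle export line are my assumptions for
illustration, not anything from the original post:

```shell
# Option 1 sketch: online dump to a disk holding area, then sweep the
# holding area to your regular backup tool. Paths are hypothetical.
DUMPDIR=/tmp/db-holding          # holding area carved out of guest disk
mkdir -p "$DUMPDIR"
# Real step would be the DBMS's own online backup utility, e.g. for
# Oracle something along the lines of:
#   su - oracle -c "exp system/pw full=y file=$DUMPDIR/db1.dmp"
echo "placeholder dump" > "$DUMPDIR/db1.dmp"   # stand-in for the dump
# Then back up the holding area with whatever tool you already run;
# plain tar shown here as the simplest case.
tar -czf /tmp/db1-backup.tar.gz -C "$DUMPDIR" .
```

Since several guests can share one holding area, staggering the dump
times lets the same disk serve more than one DB machine, as noted above.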

A second option is to use a commercial backup tool with an agent for your
specific database. Pricey, often very pricey, but it works well if your
backup tool has a lot of database agents.

> So what kind of hardware do you backup large databases to?
> How often?

How often is the usual "it depends" -- your requirements will dictate.
Most sites I'm aware of use LTO drives attached to an outboard
workstation; they've gotten cheap enough to put in small libraries like
the HP Surestores, and a two drive (200G/tape), 30 slot LTO library is
on the order of $20K. Use option 1 shown above and Amanda or Bacula over
a private network connection between the PC and the mainframe, and you
have a nice, cost-effective backup solution. Host-attached 3590s aren't
really very cost effective compared to that.
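
The sweep over that private connection can be as simple as tar piped
through ssh from the backup workstation -- hostnames and paths below are
hypothetical, and the local copy at the end is just a runnable stand-in
for the ssh transfer:

```shell
# Hypothetical nightly sweep over the private link (names assumed):
#   ssh backup@linuxvm1-priv 'tar -cf - -C /backup/holding .' \
#     | tar -xf - -C /var/backup/staging
# Local stand-in so the tar pipeline itself can be exercised without ssh:
mkdir -p /tmp/sweep-holding /tmp/sweep-staging
echo "db dump" > /tmp/sweep-holding/db1.dmp
tar -cf - -C /tmp/sweep-holding . | tar -xf - -C /tmp/sweep-staging
```

Once the dumps land in the staging area, Amanda or Bacula on the
workstation writes them to the LTO library on its normal schedule.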

> How frequently do you need to shutdown z/Linux or Oracle and for what
> reasons?

Depends on how often you take physical DR dumps of the DASD. Linux needs
to be down for a good consistent dump. The approach described above lets
you do backups with Linux and Oracle running, and still have a clean
backup.

-- db

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390