Michael Coffin wrote:
(Cross-posted on VMESA-L and LINUX-390)
Hi Folks,
I want to eliminate use of tapes in my weekly DR process. Currently we
DDR numerous 3390 spindles to 3590 tape cartridges.
I have set up a Linux server at our DR site with a ton of free disk
space; the question is what the best method is for getting images of
our DASD stored on it.
I've modified our procedures to use DDR2CMS to create CMS files
representing the 3390 DASD images, which are then FTP'd to the Linux
server - but the process is VERY inefficient:
1. DDR2CMS produces RECFM=V files, which are unsuitable for FTP
(I've NEVER had any luck FTP'ing RECFM=V files to a non-CMS
environment and getting them back in the correct format later), so I
have to COPYFILE (PACK the output from DDR2CMS. DDR2CMS takes around
47 minutes/spindle and the COPYFILE takes around 38 minutes, while the
FTP itself only takes around 17 minutes - so we waste nearly 90
minutes/spindle just prepping the data to be transmitted.
2. The output from DDR2CMS for a 3390-3 spindle may actually be
LARGER than a 3390-3 spindle (even using COMPACT), so we need to use
3390-9 spindles as "work space", something I'm not fond of doing (as a
general rule we don't use 3390-9's at this site, but I configured a
string of them just for this purpose).
There is a great tool on the VM download page called PIPEDDR which
basically does what DDR2CMS does using PIPE TRACKREAD - and it can write
the output to a TCPIP stage. This is exactly what I'm looking for, with
ONE important difference - PIPEDDR only talks to a remote VM/CMS system
running PIPEDDR to receive the output, whereas I need to be able to
PIPE the output to a remote Linux storage server.
Can anyone recommend a nice program that can run on Linux and listen on
a TCPIP port, accept some authorization credentials and host commands
(e.g. MKDIR, CD to a directory, etc.), and receive and write to disk a
stream of data similar to what PIPEDDR might write to its TCPIP stage?
I could then skip creating the DDR2CMS file and COPYFILE (PACKing it,
writing "indirectly" to the Linux server. I'd rather not reinvent the
wheel if there's already something out there. :)
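Even something bare-bones on the Linux side would illustrate the idea.
A minimal sketch (the port, the path, and the assumption that PIPEDDR
can be pointed at an arbitrary host/port and will simply stream the
data are all mine - no authentication or host commands here):

  # One-shot listener: accept a single TCP connection on port 4711 and
  # write whatever arrives to an image file.
  socat -u TCP-LISTEN:4711,reuseaddr OPEN:/srv/dr/vol0001.img,creat

  # Roughly the same thing with traditional netcat (option spelling
  # varies between netcat flavours):
  # nc -l -p 4711 > /srv/dr/vol0001.img

Something real would of course need the credential/command handling and
one listener (or target file) per volume.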
If the backup is a file on Linux, then
rsync to update the file might work.
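For instance (hostname and paths invented here, and assuming the image
already exists as an ordinary file at the primary site), something like
this updates the DR copy while only shipping the changed parts:

  # Update the remote copy in place; --inplace avoids rewriting the
  # whole target file, --partial keeps what was transferred if the
  # link drops.
  rsync -v --partial --inplace /backups/vol0001.img \
      backup@drlinux:/srv/dr/vol0001.img

rsync has to be installed on both ends; over ssh that is usually all
the setup there is.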
Notes
1. I have tried this on Intel-ish hardware over ADSL. We ran into
trouble at file sizes between 1 and 2 Gbytes, but I suspect the
problems were with consumer-grade network technology.
2. rsync can use an enormous amount of virtual memory. I don't know
why, but it did not induce thrashing on our systems. Several variables
come into play in determining whether a system will thrash, so YMMV -
keep an eye on it.
3. Updating a filesystem, as opposed to updating a file containing a
filesystem, proved much worse.
4. It's been some time since I played with this; with newer versions
of rsync things may have improved a lot.
5. I went to some effort to ensure the files(1) within the filesystem
were compressed before transmission. This step might cost too much CPU
on a zED box.
(1) It makes no sense to use rsync on a compressed tarball, but a
tarball of individually compressed files is a different matter.
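To illustrate the distinction (paths invented, just a sketch):

  # rsync-unfriendly: compressing the whole tarball means one changed
  # file reshuffles the entire compressed stream, so little of it
  # matches the previous copy on the receiving end.
  tar -czf /backups/etc.tar.gz /etc

  # rsync-friendlier: compress each file first, then tar the results;
  # members that didn't change stay byte-identical between runs.
  gzip -r /staging/etc-copy
  tar -cf /backups/etc.tar /staging/etc-copy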
--
Cheers
John