Hi Guys,

Thanks for all of the responses; let me reply to Rob's latest as a
response to all.  :)

First, let me be clear - ALL I want to do is have a service of some sort
running on an Intel-based Linux server that lets me stream data to it
and have it create a file on that server's disk.  For you VM'ers out
there, this would be the output of the CMS PIPE TRACKREAD stage (and
would be nearly identical to the PIPEDDR package on
www.vm.ibm.com/download - with the exception that we're PIPing the
output to a Linux server, not to another VM system running the
receiving end of PIPEDDR).

A colleague suggested running netcat on the Linux server and PIPing the
output from PIPE TRACKREAD on CMS through a TCPIP stage pointing at
netcat on the Intel Linux server.  That might work, but I'm looking for
alternatives to netcat.
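
Just to make the shape of that concrete, if we rolled our own instead of
netcat, something like the following is about all the receiving side
would need.  This is only a rough sketch - the port number and output
path are placeholders I made up, not anything we actually run:

#!/usr/bin/env python
# Rough sketch of a netcat replacement: accept ONE connection and
# write whatever bytes arrive to a file, then exit.
import socket

PORT = 9000                      # placeholder port
OUTFILE = '/backups/volume.img'  # placeholder output file

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('', PORT))
listener.listen(1)

conn, peer = listener.accept()
out = open(OUTFILE, 'wb')
while True:
    chunk = conn.recv(65536)
    if not chunk:                # sender closed the connection - done
        break
    out.write(chunk)
out.close()
conn.close()
listener.close()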

Also, the Intel Linux server is ONLY used as a place to store the DASD
image files - it's not involved in the recovery process other than as
the place we collect the images from.  The backup/dump processes and
the restore processes are all VM/CMS based and must run under VM/CMS.
If I were to rewrite the entire system from scratch, it would be a
simple matter to have Linux on zSeries mount a remote Samba share and
simply 'dd' the DASD - but it's a LOT more complex than that; there are
literally thousands of lines of code involved that I don't want to
rewrite simply to have Linux do the work using dd/Samba.

I know that we can create CMS file images of the DASD using PIPEDDR or
DDR2CMS and then FTP those CMS files; in fact, that is what we are doing
now.  The problem is that this is VERY inefficient - we spend more time
creating the CMS files in preparation for transmitting the data to the
Intel Linux server via FTP than we do actually transmitting the data.
I'm looking for a way to eliminate those intermediate files and simply
read tracks of raw data (probably using PIPE TRACKREAD with
TRACKSQUISH) and write them (somehow) to a file on the Intel Linux
server over TCPIP.
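
If something along the lines of the listener sketched above pans out,
it could just as easily be left running and write each incoming
connection to its own file, so every volume PIPE'd over from CMS lands
in a separate image with no intermediate CMS files at all.  Again, only
a sketch - the port, directory, and naming scheme are made up:

#!/usr/bin/env python
# Long-running variant of the listener above: each connection from the
# CMS side becomes its own image file.
import socket
import time

PORT = 9000            # placeholder port
DESTDIR = '/backups'   # placeholder destination directory

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('', PORT))
listener.listen(5)

while True:
    conn, peer = listener.accept()
    # one output file per connection, stamped with the arrival time
    name = '%s/dasd-%s.img' % (DESTDIR, time.strftime('%Y%m%d-%H%M%S'))
    out = open(name, 'wb')
    while True:
        chunk = conn.recv(65536)
        if not chunk:            # CMS side closed the socket - done
            break
        out.write(chunk)
    out.close()
    conn.close()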

TSM is not a possibility; it's not free, and frankly Tivoli on VM/CMS
hasn't been maintained/supported well (I've heard complaints about it
for MANY years).

PPRC is indeed a possibility (in fact I have a request in for funding
for new DASD, including a second array located at the DR site - the
production DASD would replicate via PPRC to the array at the DR site),
but it's a VERY expensive proposal.  IF the funding comes through for
this, I no longer have to worry about host-based processes like the one
we're discussing here; it'll all be handled by the PPRC code on the
DASD array(s).

If an FTP stage existed for the CMS PIPE command, THAT would be perfect
(I've never come across one; maybe I'll hunt around today and see if I
can find one).  That would allow the VM/CMS host to do:

PIPE TRACKREAD
| TRACKSQUISH
| FTP linuxserver 'output_fileid'

Thanks for all the responses, but as U2 once said "I still haven't found
what I'm looking for".  :)

If you have any other thoughts, please keep them coming.  Again, what
I'm looking for is an existing process that could run on the
Intel-based Linux server, accept a raw stream of data originating under
VM/CMS, and write it to a file.  :)

-Mike


-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Rob van der Heij
Sent: Wednesday, June 18, 2008 3:49 AM
To: [email protected]
Subject: Re: DDR'ing 3390 DASD To Remote Location

On Wed, Jun 18, 2008 at 2:51 AM, Scott Rohling <[EMAIL PROTECTED]>
wrote:

> -  Use PIPEDDR to write to a file and FTP this file to your Linux 
> server

I'm tempted to think that using temporary files to hold a copy of the
raw tracks is a royal PITA (though if you squish or even compress the
tracks, it probably will fit on a big CMS minidisk). I do think there
used to be an FTP pipeline stage floating around, but I'm not sure it
did both PUT and GET. Writing your own FTP client stage might be a nice
learning experience (I've asked Endicott about a "raw" mode for the FTP
client, but no luck).

If you read from disk during the transfer, you certainly will need to
have the system shut down to get a consistent copy on the other side
(because much more time passes between the first and last track of the
volume). Even if you could flashcopy the volume to make it consistent
during the transfer, you would still have a lot of time between volumes
and would need to know your applications very well to trust this scheme.

Too bad you don't have a VM system running permanently on the D/R side.
I once wrote a client / server that used the trackread and trackwrite
stage to do incremental track-by-track mirror of a disk (to keep a copy
of our RACF database and avoid copying the full thing all the time).

When we saw the first new z/VM installations with Linux show up, I
proposed a new feature for the Linux disk driver that would allow
arbitrary tracks to be read and written (like the pipeline stages).
That way a Linux guest could be used to back up the VM packs along with
the Linux data. And for D/R restore you could first IPL one Linux guest
native, restore the VM packs (from TSM), and then IPL VM again.
Something like that would fit your needs.
The design of the driver appeared to be very simple after a few beers,
but next morning it turned out to be harder.

Rob

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions, send
email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit http://www.marist.edu/htbin/wlvindex?LINUX-390
