On Thu, Oct 12, 2017 at 1:08 AM, [email protected] <[email protected]>
wrote:

> Hi
>
> > Are the files Classic data sets or z/OS UNIX files?
>
> The files are Classic data sets from unloading a z/OS Db2 table.
>
> > Are the files binary or text?
>
> The files are text.
>
> > Which compression technique(s)  are  you considering?
>
> We are considering any compression technique(s).
>
> Thanks a lot!
>
> Jason Cai
>
>
My first thought is to "tune" your network. I'm assuming you are talking
from z/OS to Linux via TCP/IP over Ethernet. From what little I know, most
seem to use an MTU of 1500. You might get better throughput if you could
configure the MTU to be larger. This is often called "jumbo frames" (
https://en.wikipedia.org/wiki/Jumbo_frame ).

Whether to compress or not is basically a trade-off between how long it
takes to compress, transfer, and uncompress vs. how long it takes to just
transfer. This will depend on the power of the boxes on each end and the
"size of the pipe" between them.
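
As a very rough back-of-the-envelope illustration (every number below is a
made-up assumption, not a measurement), a few lines of REXX show how the
serial compress / transfer / uncompress path compares with just shipping
the raw bytes over a fast link:

  /* REXX - rough break-even check; every number is an assumption */
  size_gb     = 100      /* gigabytes of unloaded data to move     */
  link_gbps   = 10       /* effective network rate, gigabits/sec   */
  comp_ratio  = 4        /* assumed compression ratio              */
  comp_mbps   = 200      /* MB/sec the z/OS side can compress      */
  uncomp_mbps = 400      /* MB/sec the Linux side can uncompress   */

  plain_secs = size_gb * 8 / link_gbps
  comp_secs  = size_gb * 1000 / comp_mbps ,
             + (size_gb / comp_ratio) * 8 / link_gbps ,
             + size_gb * 1000 / uncomp_mbps
  say 'raw transfer             :' format(plain_secs,,1) 'seconds'
  say 'compress+transfer+expand :' format(comp_secs,,1) 'seconds'

On those made-up numbers the raw transfer wins easily; the picture changes
on a slower link, or if compression can be overlapped with the transfer.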

Assuming a "large pipe" (that is, 10 Gb/s or larger):
I am not certain how you generate the list of files to be transferred, but
what I would possibly do is run multiple FTP jobs concurrently. For
example: job 1 in the job stream finds all the DSNs to be
transferred. For each DSN, it creates an FTP "put" control card. Assuming
REXX as the language of choice, each control card is recorded in a stem
variable. Something like: ftp_control.0=<number of ftp control cards>;
ftp_control.1="put ...."; and so on. Now determine the number of concurrent
FTPs you want to do. Divide the number of ftp control cards by this number.
Create a set of normal batch jobs, each running a single FTP step that has
"n" ftp_control cards in it. Submit each job to z/OS using the internal
reader. Have enough initiators running to run those jobs. Let them all (or
a subset) run at one time. The extreme of this is to have each job do a
single FTP. And have "n" initiators running those jobs. You could generate,
say, 20 FTP jobs, and run 5 at a time by having 5 initiators set up &
dedicated to running just those jobs (by dedicating a specific JOBCLASS to
this purpose).
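
Purely as a sketch of that idea (the DD names, host name, user ID,
hard-coded password, job card, and target directory below are all invented
placeholders; in real life you would use NETRC or some other credential
mechanism), the driver could look roughly like this in REXX:

  /* REXX - sketch: one "put" card per DSN, split across a fixed      */
  /* number of FTP batch jobs, each written to the internal reader.   */
  /* Assumed DDs in the driver job's JCL:                             */
  /*   //INDSN  DD ...                one data set name per record    */
  /*   //INTRDR DD SYSOUT=(A,INTRDR)  internal reader                 */

  "EXECIO * DISKR INDSN (STEM dsn. FINIS"       /* read the DSN list  */

  ftp_control.0 = dsn.0                    /* one control card per DSN */
  do i = 1 to dsn.0
    ftp_control.i = "put '"strip(dsn.i)"'" /* let FTP pick remote name */
  end

  nJobs  = 5                                    /* concurrent FTP jobs */
  perJob = (ftp_control.0 + nJobs - 1) % nJobs  /* cards per job       */

  card = 0
  do j = 1 to nJobs
    out. = ''
    n = 0
    n = n + 1; out.n = "//FTPJOB"j" JOB (ACCT),'FTP PART "j"',CLASS=F,MSGCLASS=X"
    n = n + 1; out.n = "//STEP1    EXEC PGM=FTP,PARM='linuxhost (EXIT'"
    n = n + 1; out.n = "//SYSPRINT DD SYSOUT=*"
    n = n + 1; out.n = "//OUTPUT   DD SYSOUT=*"
    n = n + 1; out.n = "//INPUT    DD *"
    n = n + 1; out.n = "ftpuser"
    n = n + 1; out.n = "password"
    n = n + 1; out.n = "cd /target/dir"
    do k = 1 to perJob                     /* this job's share of cards */
      card = card + 1
      if card > ftp_control.0 then leave
      n = n + 1; out.n = ftp_control.card
    end
    n = n + 1; out.n = "quit"
    n = n + 1; out.n = "/*"
    out.0 = n
    "EXECIO" out.0 "DISKW INTRDR (STEM out."    /* submit this job      */
  end
  "EXECIO 0 DISKW INTRDR (FINIS"

With five initiators (or a dedicated JOBCLASS) available, the five
generated jobs then move their slices of the DSN list in parallel.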

Anyway, I was just thinking that instead of serially compressing,
transferring, and uncompressing the data, it might be faster with a "large
pipe" to do multiple transfers concurrently.


-- 
I just child proofed my house.
But the kids still manage to get in.


Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
