What about NFS?
More general: what is the workflow?
Usually the transfer of many big files is not the goal in itself. Maybe the
files should be opened and read by some application. NFS (or DFS/SMB)
would save time by simply avoiding the transfer before the
applications start reading the data.
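As a sketch of that workflow (hostname, export path, and mount point are placeholders; it assumes the z/OS NFS server is configured to export the unload datasets, and the exact path syntax depends on that server's configuration):

```shell
# Mount datasets exported by the z/OS NFS server on the Linux side,
# so applications read them in place instead of transferring first.
# "zoshost" and "HLQ.UNLOAD" are placeholders for your site's values.
sudo mkdir -p /mnt/zos
sudo mount -t nfs -o vers=3,proto=tcp zoshost:/HLQ.UNLOAD /mnt/zos

# Applications can then open the data directly under /mnt/zos.
```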
On Wed, Oct 18, 2017 at 9:45 PM, David Mingee wrote:
> Hello, another option would be to add the line MODE C before the put or
> mput line in your FTPs. This does compression only during the FTP. This
> could speed up the FTPs.
>
This would require that the FTP
Hello, another option would be to add the line MODE C before the put or mput
line in your FTPs. This does compression only during the FTP. This could
speed up the FTPs.
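A minimal sketch of a batch FTP job along those lines (job name, host, credentials, and dataset names are all placeholders; note that MODE C is the RFC 959 compressed transfer mode, and the receiving FTP server must also support it or the subcommand will be rejected):

```
//FTPJOB   JOB (ACCT),'FTP UNLOAD',CLASS=A,MSGCLASS=X
//FTPSTEP  EXEC PGM=FTP,PARM='(EXIT'
//SYSPRINT DD SYSOUT=*
//INPUT    DD *
linuxhost
userid
password
mode c
put 'HLQ.DB2.UNLOAD' /data/unload.txt
quit
/*
```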
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of
[Default] On 12 Oct 2017 09:03:14 -0700, in bit.listserv.ibm-main
ibmm...@foxmail.com (ibmm...@foxmail.com) wrote:
>Hi John
>
> Could you give us a sample of the ftp job or the rexx?
>
>Thanks a lot !
A major consideration, at least on the z Series side, is the CPU and
MSU costs of doing the
Hi John
Could you give us a sample of the ftp job or the rexx?
Thanks a lot !
Jason Cai
>>My first thought is to "tune" your network. I'm assuming you are talking
>>from z/OS to Linux via TCP/IP over Ethernet. From what little I know, most
>>seem to use an MTU of 1500. You might get better
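A sketch of how the MTU could be checked and tuned on the Linux side (interface name and host are placeholders; jumbo frames only help if every hop on the path between the hosts supports them):

```shell
# Show the current MTU of the receiving interface ("eth0" is a placeholder).
ip link show eth0

# Probe whether a jumbo-sized packet passes without fragmentation:
# 8972 = 9000-byte MTU minus 28 bytes of IP + ICMP headers.
ping -c 3 -M do -s 8972 zoshost

# If the whole path supports jumbo frames, raise the interface MTU.
sudo ip link set eth0 mtu 9000
```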
On Thu, Oct 12, 2017 at 1:08 AM, ibmm...@foxmail.com
wrote:
> Hi
>
> > Are the files Classic data sets or z/OS UNIX files?
>
> The files are Classic data sets from unloading zos db2 table.
>
> > Are the files binary or text?
>
> The files are text
>
> > Which compression
Hi
We could move our datasets to the same volume and convert the volume to a CCKD
image using the Hercules utilities.
After we transfer the CCKD image to Linux, how will the CCKD image be used by
Linux?
Thanks a lot!
Best Regards
Jason Cai
>A suggestion...Could be better or not... You could
Hi
> Are the files Classic data sets or z/OS UNIX files?
The files are Classic data sets from unloading zos db2 table.
> Are the files binary or text?
The files are text
> Which compression technique(s) are you considering?
We are open to any compression technique(s).
Thanks a
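One way to choose among techniques is to compare the standard compressors on a sample of the unload data first; a minimal Python sketch (the record layout below is a made-up stand-in for the real DB2 unload text):

```python
import bz2
import gzip
import lzma

# Stand-in for a fixed-format DB2 unload record; real unload text is
# similarly repetitive, so it tends to compress very well.
record = "000123,SMITH     ,NEW YORK  ,2017-10-12\n"
data = (record * 5000).encode("ascii")

for name, compress in (("gzip", gzip.compress),
                       ("bzip2", bz2.compress),
                       ("xz", lzma.compress)):
    out = compress(data)
    print(f"{name:5s}: {len(data)} -> {len(out)} bytes "
          f"({100 * len(out) / len(data):.1f}% of original)")
```

gzip is usually the cheapest in CPU, which matters given the MSU cost concern raised elsewhere in this thread; xz compresses hardest but burns the most CPU.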
On Thu, 12 Oct 2017 03:54:44 +, W Mainframe wrote:
>A suggestion... Could be better or not... You could try... Convert your volume
>to a CCKD image using the Hercules utilities. Once they are converted, you will
>take advantage of compression and save time on this transfer. I did the same thing some
A suggestion... Could be better or not... You could try... Convert your volume
to a CCKD image using the Hercules utilities. Once they are converted, you will
take advantage of compression and save time on this transfer. I did the same
thing some time ago, with success. BTW, there is one problem: you need to
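For the "how will Linux use the image" question above, a sketch assuming the Hercules utilities are installed on the Linux box (image and volume names are placeholders; check your Hercules version's documentation for the exact options):

```shell
# dasdcopy from the Hercules tools converts a CKD volume image to
# compressed CCKD (-z selects zlib compression), shrinking the transfer.
dasdcopy -z vol001.ckd vol001.cckd

# On Linux, dasdls lists the datasets contained in the image.
dasdls vol001.cckd

# Alternatively, attach the image to a Hercules instance and let a
# guest system read it, just as it would real DASD.
```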
Rob
Could you share some nice presentations on FTP performance ?
Thanks a lot!
Jason Cai
From: Rob Schramm
Date: 2017-10-12 10:50
To: IBM-MAIN
Subject: Re: Transfer a large number of sequential file from mainframe to
redhat linux V6.5
I know FTP has had a lot of performance work done. I
I know FTP has had a lot of performance work done. I don't know about
XCOM. There were some nice presentations on FTP performance.
Rob Schramm
On Wed, Oct 11, 2017, 10:43 PM ibmm...@foxmail.com
wrote:
> Hi all
>
> We will transfer a large number of sequential files from
Hi all
We will transfer a large number of sequential files from mainframe to Red Hat
Linux V6.5.
Normally we use FTP or XCOM to transfer files.
Could you tell us whether there is a best way to transfer files from
mainframe to Red Hat Linux V6.5 that saves transfer time?