Ron,
dir / will ALWAYS show the system root filesystem; that's a UNIX standard. There
are several FTP clients - some of them free - that allow you to set up local
and remote home directories and run commands after startup. Another way is to
use the -s:filename flag with the Windows command-line ftp. The downside of this
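As a sketch of the -s: approach (host, userid and file names here are all invented): put the FTP subcommands, one per line, in a plain text file, e.g. script.txt:

```text
open ftp.example.com
myuserid
mypassword
lcd c:\download
cd /remote/dir
binary
get bigfile.bin
quit
```

and invoke it as ftp -s:script.txt. The two lines after open answer the userid and password prompts.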
Is there a way I can code a parm (* * * * * * * * * * * *) for the volsers
of an output PS dsn (as is available when defining a VSAM cluster)? I have a
problem with space abends - IEC030I B37-04. The dsn is SMS-managed. I am
using a primary of 2000,500 since it is a huge file. The job is
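There is no DD-statement equivalent of the VSAM VOLUMES(* * * ...) list, but you can request a multi-volume allocation with the volume-count subparameter of VOL. A sketch (dataset name invented, space units assumed to be cylinders; for an SMS-managed dataset the data class must allow multi-volume):

```jcl
//OUT     DD DSN=MY.BIG.PSFILE,DISP=(NEW,CATLG,DELETE),
//           SPACE=(CYL,(2000,500),RLSE),
//           VOL=(,,,5)          allow up to 5 volumes
```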
You can use this JCL:
//SEARCH EXEC PGM=ISRSUPC,PARM=(SRCHCMP,'ANYC')
//NEWDD DD DISP=SHR,DSN=YOUR-LIBRARY
//OUTDD DD SYSOUT=*
//SYSIN DD *
SRCHFOR 'DISP=SHR'
/*
I have a 500,000 record text file with a record length of 150 bytes. I'm
trying to find some way of splitting it in two. Because of the record length,
I'm not able to use the TSO edit function. I'm sure I have the solution
somewhere.
Thanks,
Dave
If you have DFSORT try this:
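The DFSORT example appears to have been cut off here; a typical approach (a sketch - dataset names invented) is a COPY with two OUTFIL groups, splitting at record 250,000:

```jcl
//SPLIT   EXEC PGM=ICEMAN
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=MY.INPUT.FILE
//OUT1    DD DSN=MY.OUTPUT.FILE1,DISP=(NEW,CATLG,DELETE),
//           SPACE=(CYL,(300,50),RLSE)
//OUT2    DD DSN=MY.OUTPUT.FILE2,DISP=(NEW,CATLG,DELETE),
//           SPACE=(CYL,(300,50),RLSE)
//SYSIN   DD *
  OPTION COPY
  OUTFIL FNAMES=OUT1,ENDREC=250000
  OUTFIL FNAMES=OUT2,STARTREC=250001
/*
```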
RMF Monitor III for online display (the RMF data gatherer address space must be
active), or RMF Monitor I reports for long-term analysis.
HTH,
Walter Trovijo Jr
Hi.
Question:
Would anyone know how I can display input/output activity
for all devices (DASD, tape, pending mount requests)?
We just migrated from z/OS 1.7 to z/OS 1.8, and some DB2 batch jobs are failing
with ABEND0C4 in module IGZCEV5. Has anyone encountered this? Thank you.
No, but have you checked whether there are old libraries mixed in with the new
ones, or whether something that needs to be APF authorized lost its APF
authorization?
We are expanding our storage by 1.5TB... we have an option to make some of
this into 3390-27 volumes...
I've been using model 27 for DB2 databases for a long time and it works fine
even with manual PAV. It helped us implement DB2 online reorgs - which require
large amounts of storage to
How many Administrators are they hiring to
replace you?
I'm still here, just doing different stuff (SAN management and
high-performance computing at the moment). The new administrative
system is such a huge boondoggle that it's hard to tell how many people
are involved in providing
Hi,
As the subject says: can I specify the OSA interface or source IP address when
FTPing from z/OS to another FTP server? The FTP client is z/OS.
Thanks !
Laurence
The simple answer is that FTP already sends its IP address to the server; this
is how the connection is made - IP+port pairs make
I usually do that with icetool
Walter Trovijo Jr.
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at
I have a question about how to determine if an ALIAS is being used to access
a dataset
Just delete it, and clean up what fails. As I said recently I just
cleaned out *all* our old alias entries.
I looked at a user-friendly way of doing this a while back, and in my case
that
I didn't read the whole thread, so maybe my suggestion is duplicated.
I had to do something similar and I found some very nice stuff on the DFSORT
website for working with DCOLLECT output:
http://www-304.ibm.com/jct01004c/systems/support/storage/software/sort/mvs/srtmdwn.html
Look for DCOLLECT symbols and
Just make an ICETOOL or ICEMAN job to copy 1 record from the input to a dummy
output, and make the input dataset DD point to the GDG base name.
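A minimal sketch of that job (GDG base name invented) - allocating the GDG base causes all generations to be allocated, which drives the recall, and STOPAFT=1 keeps the copy itself trivial:

```jcl
//RECALL  EXEC PGM=ICEMAN
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=MY.GDG.BASE
//SORTOUT DD DUMMY
//SYSIN   DD *
  OPTION COPY,STOPAFT=1
/*
```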
HTH,
Walter.
Is there a way to recall the generations of a GDG which have been
archived to tape? I am talking about a batch method here. I need to recall
Have you tried ADRDSSU DUMP/RESTORE? You didn't say how far one system is from
the other, but depending on the distance you can DUMP to a sequential dataset,
ftp it to remote system (binary) and then RESTORE. If both systems are in the
same datacenter I'd write it to tape or virtual tape.
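A sketch of the DUMP side (all names invented; the RESTORE on the target mirrors it with INDDNAME):

```jcl
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=MY.DUMP.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(MY.SOURCE.**)) -
       OUTDDNAME(OUTDD) TOL(ENQF)
/*
```

As the post says, remember to transfer the dump file in binary mode.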
Somewhere along the line I had heard that ADRDSSU uses other IBM
utilities to access various datasets - like IDCAMS for VSAM datasets
and IEBCOPY for PDS(E)s. *IF* that is the case, then why not use
IEBCOPY directly?
Ed
The idea of using DUMP/RESTORE was just because it's easier to dump
buhetom wrote:
We're planning to run some batch jobs with heavy I/O on a new System z.
There would be about 50 concurrent jobs at the same time. We have two
configurations IBM offered: more CPUs with lower processing capacity
each, or fewer CPUs with higher processing capacity. Which one would
Jacky,
You didn't mention which database it is, so I'm assuming it's DB2.
Just use TEMPLATE to dynamically allocate the required files, and DB2 will do
the calculations for you based on DB2 catalog information. Sort work files are
also controlled by the REORG SORTDEVT and SORTNUM keywords, which will
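A sketch of what that looks like (database, tablespace and HLQ are invented; check the DB2 Utility Guide for the exact TEMPLATE syntax at your level):

```jcl
//SYSIN    DD *
  TEMPLATE UNLDTMP
           DSN 'MYHLQ.&DB..&TS..UNLOAD'
           UNIT SYSDA DISP (NEW,CATLG,DELETE)
  REORG TABLESPACE MYDB.MYTS
        UNLDDN UNLDTMP
        SORTDEVT SYSDA
        SORTNUM 4
/*
```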
I just tested on my 1.2 system and it does not work the way your tester said.
I was able to restart by specifying only STEPNAME.PROCSTEPNAME, even if the
proc in question has only one step. The closest thing I remember about proc
stepnames is that if you want to pass OVERRIDE DD statements to the
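For reference, the restart form being described goes on the JOB statement (job and step names invented):

```jcl
//MYJOB   JOB (ACCT),'RESTART TEST',CLASS=A,MSGCLASS=X,
//            RESTART=(STEP010.PROCSTP1)
```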
My guess is that systems are pointing to different sets of couple datasets
I'm trying to add my 2nd LPAR to a sysplex, but I'm getting message
IXC414I CANNOT JOIN SYSPLEX PRODPLEX WHICH IS RUNNING IN MONOPLEX MODE:
EXTERNAL TIME REFERENCE IS IN LOCAL MODE
The system that is active has
By the way, it would be nice to have some kind of modified REXX environment to
allow REXX programs to share stems. It would avoid passing data through the
stack, even between different REXX programs.
Walter.
OPTFILE does it. From the C/C++ User's Guide:
...
//DOCLG EXEC CBCCBG,
// INFILE='PETE.TEST.C(CBC3UBRC)',
// CPARM='OPTFILE(DD:CCOPT)'
//COMPILE.CCOPT DD *
LSEARCH('PETE.TESTHDR.H')
SEARCH('CEE.SCEEH.+','CBC.SCLBH.+')
/*
...
rtFm
Walter Trovijo Jr.