Re: GETMAIN/FREEMAIN and virtual storage backing up

2007-10-06 Thread Hunkeler Peter (KIUK 3)
 OS/360 was a real storage only operating system. DAT was introduced
 with S/370. OS/360 could run on that hardware but not use DAT (and
 other new hardware facilities).

DAT was introduced on the 360/67 ... basically a 360/65 with dynamic
address translation ... at least in its single processor version
(although the 360/67 offered both 24-bit and 32-bit virtual addressing
modes). The 360/67 multiprocessor did offer some additional features
vis-a-vis the 360/65 multiprocessor ... like all 360/67 processors could
directly address all physical channels (while the 360/65 multiprocessor
was limited to sharing common real storage and didn't provide
multiprocessor channel connectivity).

I based my statement on the IBM brochure "MVS ... a long and rich
heritage" (GC28-1594); I wasn't in the business back then, so I admit
I do not know from my own experience.

The details in your statement make it look very trustworthy to me.
Thanks.

-- 
Peter Hunkeler
Credit Suisse

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


About CA-Allocate and High Water Mark postings

2007-10-06 Thread Tony
Hi all,

I have performed several conversions to CA-Allocate; these include STOP X37
and DIF (from DTS). It is not always an easy conversion. The first thing you
must know about CA-Allocate is that it uses program code like ACS routines;
it is not just table driven. On the other hand, the language is easy to
learn: it is just a subset of ACS routines (no ACS SELECT, just lots of
IF/ENDIFs). It even uses FILTLISTs exactly the same way as ACS, and for
gurus you can call assembler modules.

Often the biggest problem with a conversion is understanding how the
existing product works. In a conversion, storage managers will just say
"make it work like product X", but even they don't know exactly how X
works, so you're shooting in the dark from the start. You can do the basic
X37 support very easily, but making it work exactly the same way as another
product is almost impossible. I would normally allow 2 days to do the
initial conversion, as the code will be the same everywhere. Extra days
would be needed depending on how close a match you need. The old saying
applies: 90% can be done with minimal effort; the remaining 10%, well, you
decide...

Example: in STOP X37 you would say "if the dataset is on volume ABC123,
then perform an X37 function" (reduce/increase space, extend). But in
CA-Allocate you would normally do this with a FILTLIST on the dataset name.
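
To make that concrete, here is a toy model in Python (not CA-Allocate
syntax; the masks, volsers and function names are made up) of the two ways
of keying the rule:

from fnmatch import fnmatch

# The STOP X37 style: the rule is keyed to specific volumes.
X37_VOLSERS = {"ABC123"}

# The CA-Allocate style: the rule is keyed to a FILTLIST of dataset-name
# masks ('*' here is fnmatch's wildcard, not ACS mask syntax).
X37_DSN_FILTLIST = ["PROD.DB2.*", "PROD.BATCH.*"]

def eligible_by_volume(volser):
    return volser in X37_VOLSERS

def eligible_by_dsn(dsn):
    return any(fnmatch(dsn, mask) for mask in X37_DSN_FILTLIST)

print(eligible_by_volume("ABC123"))        # True
print(eligible_by_dsn("PROD.DB2.TABLES"))  # True
print(eligible_by_dsn("TEST.WORK.TEMP"))   # False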

Another key difference between CA-Allocate and SMS is that you have full
control over all attributes of a dataset; you can change just about
anything before the dataset is allocated. You can even deny an allocation
if you don't like the data set name or the time of day.

CA-Allocate almost replaces SMS. Some MVS functions still require SMS to be
active (e.g. LOGR), which requires allocation very early in an IPL, way
before the CA-Allocate STC is active. By the way, the STC does nothing
apart from being a manager; CA-Allocate just hooks into the system. Some of
the allocation overhead for X37 support can get attributed to the initiator
NOT the job.

Until recently the code to perform X37 processing in CA-Allocate was quite
horrendous, but a couple of years ago CA added a "Does It Fit" type
function that reduces primary and secondary space until the allocation
fits. That saved loads of coding.
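
The idea is roughly this (a minimal Python sketch of the retry loop; the
10% step and all the names are my assumptions, not CA's actual algorithm):

def fit_allocation(primary_trks, secondary_trks, largest_free_trks,
                   step=0.10, floor=1):
    """Shrink the request until the primary fits the largest free extent."""
    while primary_trks >= floor:
        if primary_trks <= largest_free_trks:
            return primary_trks, secondary_trks
        # reduce both quantities by the step and try again
        primary_trks = int(primary_trks * (1 - step))
        secondary_trks = int(secondary_trks * (1 - step))
    return None  # it will never fit; let the allocation fail

print(fit_allocation(1000, 100, 700))  # (656, 64) after four reductions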

CA-Allocate can also do real-time high-water-mark and low-water-mark
monitoring at HLQ or other user-defined granularity. This is implemented in
the QUOTA option. You can set a max HWM and stop allocations, issue
messages, or do nothing. It is all controlled by the programming language;
you can cut/paste the code straight out of the manual to do this. You can
query HLQ usage from a CLIST.
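
Conceptually the QUOTA decision is just this (a toy Python model; the
numbers and action names are made up, it is not CA-Allocate code):

quota_mb = {"PROD": 50_000, "TEST": 10_000}   # max HWM per HLQ
used_mb  = {"PROD": 48_500, "TEST":  9_990}   # current usage per HLQ

def quota_check(hlq, request_mb, action="fail"):
    """Decide what to do when an allocation would cross the HWM."""
    if used_mb[hlq] + request_mb <= quota_mb[hlq]:
        return "allow"
    return {"fail": "deny the allocation",
            "warn": "allow it, but issue a message",
            "none": "allow it"}[action]

print(quota_check("TEST", 50))           # deny the allocation
print(quota_check("PROD", 50, "warn"))   # allow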

You can easily trace the code :-) but you need to be familiar with
CA-Allocate to read it and fix it :-(.

DASD pooling is easy peasy. Volumes can be in multiple pools, and they
don't have to be CONVERTed as in SMS.

NOT CAT 2 is supported; it's really a legacy thing. As is "wrong unit",
where the JCL says UNIT=3380 and the data has moved to a 3390. If and how
you deal with these is your decision.

Tony. (no longer at CA).




Re: Duplicate Dataset Entries with wildcards

2007-10-06 Thread Varun Manocha
Hi Bruce,

Thanks for that. I tried the procedure described in the post. It worked!
It was easier to find the name of the catalog using the call to SVC 26
than having to do a LISTCAT on all the catalogs. Though I must admit I
didn't understand much of the remaining output that was produced :-)

Thanks,
Varun







Re: Time Zones (Was: IBMLink is UP - just kidding)

2007-10-06 Thread J R

OTOH, there's considerable utility in knowing when it's lunchtime,
etc., in other localities because that's a bad time to attempt
a phone call.  One of our offices which does considerable
global conferencing has a half dozen clocks on the wall, displaying
the time at our other major offices (you've seen similar in TV
and press newsrooms).  It would miss the mark to have them all
show UTC.


If they are analog clocks, you could orient each one with its
local noon at the top.  ;-)

Of course, once you get into UTC, analog clocks don't work so
well because time is only part of it, date being the other.



From: Paul Gilmartin [EMAIL PROTECTED]
Reply-To: IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Time Zones (Was: IBMLink is UP - just kidding)
Date: Fri, 5 Oct 2007 16:11:15 -0500

On Fri, 5 Oct 2007 11:36:32 -0600, Howard Brazee wrote:

On 5 Oct 2007 09:27:19 -0700, in bit.listserv.ibm-main you wrote:

Or c) set all clocks worldwide to GMT (UTC) and just learn what time of
day things (like sunrise, sunset, lunchtime, bedtime, happy hour,
etc.) happen in your locality.

World sports (ESPN time) may lead towards some public acceptance of
this idea.

OTOH, there's considerable utility in knowing when it's lunchtime,
etc., in other localities because that's a bad time to attempt
a phone call.  One of our offices which does considerable
global conferencing has a half dozen clocks on the wall, displaying
the time at our other major offices (you've seen similar in TV
and press newsrooms).  It would miss the mark to have them all
show UTC.

I suppose that if our lunchtime is 1800 UTC, the clock for Canberra
could be adjusted to show 1800 whenever it's lunchtime in Canberra.
This would work middling well except for an employee based in
Canberra who happens to visit the home office.

And I recall the time I tried to telephone someone a couple zones
east of me:

(Admin. Asst's voice): He's out to lunch.

(Early, I thought.  But when he returns, I'll be at lunch.): Can
he call me back two hours from now?

(Long pause): I don't know; what time zone are you in?

I had intended "two hours from now" to make that question and the
convoluted computation unnecessary.

-- gil






Re: GETMAIN/FREEMAIN and virtual storage backing up

2007-10-06 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


Bill Ogden [EMAIL PROTECTED] writes:
 The statements about the 360/67 are correct.  It was a little ahead of
 its time in several ways. The 67's DAT design was a bit different than
 the later S/370 DAT that was used by MVS, and is typically not
 considered in the history lines for MVS.

re:
2007p.html:Subject: Re: GETMAIN/FREEMAIN and virtual storage backing up

other than that the original os/vs2 prototype implementation was done with
an mvt kernel modified with a lot of code borrowed from cp67 running on a
360/67.

i had done a lot of work with virtual memory as an undergraduate
http://www.garlic.com/~lynn/subtopic.html#wsclock
and then later after joining the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and in the early 70s several of us would make frequent sojourns to pok
(out the mass pike and down the taconic) for architecture meetings
(virtual memory, multiprocessing, etc) ... including architecture
meetings where several features were pulled from the 370 virtual memory
architecture in order to buy the 370/165 engineers six months of schedule
in their hardware implementation.

there were other issues in the os/vs2 virtual memory implementation
(spanning both svs and mvs) ... one had to do with the page replacement
algorithm implementation ... the standard is LRU (least recently used)
or various approximations of LRU. the pok performance modeling group had
discovered (at a micro-level) that if a non-changed page was selected for
replacement, the latency to service a page fault was much less than if a
changed page was selected (non-changed pages could be immediately
discarded, without needing to be written, relying on the copy already out
on disk). however, i repeatedly pointed out to them that weighting the
replacement algorithm by the changed bit, as opposed to the reference bit,
severely negated any recently-used strategy. they went ahead with it
anyway (possibly they didn't have very good macro-level simulation
capability; stuck with just the micro-level simulation, they couldn't make
an informed judgement). in any case, it was well into a number of MVS
releases before somebody got an award for improving MVS performance by
changing the algorithm to give more weight to reference use in replacement
decisions (an example was that under the earlier strategy, the replacement
algorithm was selecting high-use, shared, executable linklib virtual pages
for replacement before private, lower-use application data virtual pages).
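
a toy illustration of the effect (python ... my own sketch of the two
policies under assumed semantics, not the actual os/vs2 code):

from dataclasses import dataclass

@dataclass
class Frame:
    page: str
    referenced: bool   # hardware reference bit
    changed: bool      # hardware change (dirty) bit

frames = [
    Frame("shared linklib code", referenced=True,  changed=False),  # hot, clean
    Frame("private app data",    referenced=False, changed=True),   # cold, dirty
]

def pick_prefer_unchanged(frames):
    # the early os/vs2 weighting: a clean page is cheaper to replace
    # (no page-out i/o needed), so prefer it regardless of recent use
    clean = [f for f in frames if not f.changed]
    return clean[0] if clean else frames[0]

def pick_reference_bit(frames):
    # clock-style LRU approximation: steal a page whose reference bit
    # is off, clearing reference bits as you scan
    for f in frames:
        if not f.referenced:
            return f
        f.referenced = False
    return frames[0]

print(pick_prefer_unchanged(frames).page)  # shared linklib code (the bad pick)
print(pick_reference_bit(frames).page)     # private app data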

another influence of cp67 and the science center was a joint project
between endicott and the science center to do custom modifications to
cp67 to provide 370 (virtual memory architecture) virtual machines. for
instance, this required cp67 simulating 370 architecture hardware format
virtual memory tables ... rather than 360/67 architecture hardware format
virtual memory tables ... internally, this was commonly referred to as the
cp67h system. after that was done, there were modifications to cp67 to
make it run on 370 hardware ... building 370 format tables ... rather than
360/67 format tables. internally, this was commonly referred to as cp67i.

the first operational 370 hardware supporting virtual memory was a
370/145 engineering processor. however, the cp67h system, with cp67i
running in a 370 virtual machine, was in regular operation a year before
the 370/145 engineering box was operational. in fact, the cp67i system was
the initial software brought up on the 370/145 engineering box.

one of the complexities in the cp67h and cp67i development was that it was
all done on the science center cp67 timesharing service. information about
virtual memory for 370 was an extremely tightly held corporate secret
... and there were a variety of non-employees (from numerous educational
institutions in the cambridge area) with regular access to the science
center timesharing service. as a result, nearly all of the cp67h work
went on in a 360/67 virtual machine (not on the bare hardware) to
isolate it from any non-employee prying eyes.

lots of past posts about use of cp67 for timesharing service ... both
internally and externally (including mentioning it being used to
address various security issues)
http://www.garlic.com/~lynn/subtopic.html#timeshare

misc past posts mentioning cp67h and/or cp67i systems:
http://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2004b.html#31 determining memory size
http://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general

Re: Hipersockets performance

2007-10-06 Thread Mark L. Wheeler
John,

What sort of data rate do you see just copying this file to another one
using TSO COPY or IEBGENER, for example? That would give you an idea of
what your disk subsystem is capable of. Considering the TCP/IP and FTP
protocol overhead, 40 MB/sec doesn't sound too bad.

I have also had customers come to me with issues about slow FTP
performance which, on inspection, were caused by the use of inappropriate
blocksizes on the z/OS output file. That's something they wouldn't
normally get wrong when specifying DCB parms in a JCL deck, but don't
think about when they are FTPing.

Also note that hipersocket performance is affected by CPU utilization. I
don't know what the exact relationship is, but I have measured that it
does occur.

Finally, I should explain the 200 MB/sec results I mentioned in my
previous post. They were produced during testing to see what sort of
throughput hipersockets could actually deliver using FTP (in order to
correct misinformation floating around the shop that hipersockets is
slower than the regular network). I was FTPing between two virtual Linux
servers running on the same single-IFL z9-109. CPU utilization was 10% at
the time. In order to remove DASD performance as a variable, LINUXA had a
256MB file in cache. LINUXB did an FTP get from LINUXA, directing the
output into /dev/null. Elapsed time was on the order of 1.2 seconds. SCP
of the same file, on the other hand, only ran at about 8 MB/sec.
Investigation showed that each Linux virtual machine was running at 45%
CPU, encrypting the file on one end and decrypting it on the other (in
software, since we don't have a hardware crypto facility). Dunno why one
would need to encrypt data over a hipersocket link, but someone is bound
to do so. So heads up, y'all, if they come to complain...
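
For the record, the arithmetic behind those numbers (in Python, taking the
256MB and 1.2 seconds as exact, which they were only approximately):

file_mb = 256
ftp_elapsed_s = 1.2
scp_rate_mb_s = 8

ftp_rate = file_mb / ftp_elapsed_s
print(ftp_rate)                  # ~213 MB/sec over hipersockets
print(ftp_rate / scp_rate_mb_s)  # SCP is ~27x slower, all of it crypto cost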

Mark Wheeler, 3M Company




   
From: John S. Giltner, Jr. [EMAIL PROTECTED]
Sent by: IBM Mainframe Discussion List [EMAIL PROTECTED]
Reply-To: IBM Mainframe Discussion List [EMAIL PROTECTED]
To: IBM-MAIN@BAMA.UA.EDU
Date: 10/05/2007 06:56 PM
Subject: Re: Hipersockets performance

On a z990-303 with a single IFL and 3 CPs, using 56K MTUs: no matter
what I did I could only get 40 MB/sec using FTP. However, I could get
somewhere between 3 and 5 FTP streams running concurrently at 40 MB/sec
each, for a total of 120-200 MB/sec.



Dumping SMF directly to TAPE

2007-10-06 Thread George Dranes
I have our SMF dump jobs dumping directly to a tape dataset with
LRECL=32760 and BLKSIZE=32760,RECFM=VBS.  Fortunately we are small enough
to dump SMF once a day, so there are no speed issues in not dumping to
DASD first (and it saves a lot of DASD).  I was just curious if this is a
sound way of handling SMF?  I've even considered making the output tape
blksize something larger, such as 256K, but have always just stayed with
the safe 32760.  Thanks for any help!



Re: Hipersockets performance

2007-10-06 Thread John S. Giltner, Jr.

Mark,

Never thought about just doing a copy to see what we would get.

When doing the FTPs involving z/OS, the CPU utilization was under 40%
busy, using half-track blocking and binary transfers (no EBCDIC-to-ASCII
overhead).


Doing the transfers between Linux images, the only thing running was the
FTP transfers.  I never tried to FTP to /dev/null.


I was using a 700MB file.

-- John Giltner


[EMAIL PROTECTED] wrote:

John,

What sort of data rate do you see just copying this file to another one
using TSO COPY or IEBGENER, for example? [snip]

Mark Wheeler, 3M Company






Re: Dynamic load module name extraction?

2007-10-06 Thread Shmuel Metz (Seymour J.)
In [EMAIL PROTECTED], on 10/01/2007 at 02:41 PM,
Craddock, Chris [EMAIL PROTECTED] said:

Forget CDEs and forget JSCBs,

Maybe, and maybe not; the Devil is in the details.

CSVQUERY allows you to specify (any) address as the search criteria. If
the address lies within a loaded module, CSVQUERY will tell you which
module that is.

It will not, however, tell you which entry point was used. Depending on
what the OP needs, extracting a name from a CDE or JSCB might be a better
solution.
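
For illustration, the address-to-module part of that lookup amounts to
this (a conceptual Python model with made-up module extents; it is not
the CSVQUERY interface):

modules = {
    # module name: (load address, length) -- made-up values
    "IEFBR14": (0x0001A000, 0x0100),
    "MYPROG":  (0x00020000, 0x4000),
}

def module_containing(addr):
    """Which loaded module's extent contains this address?"""
    for name, (base, length) in modules.items():
        if base <= addr < base + length:
            return name
    return None

print(module_containing(0x00021234))  # MYPROG
print(module_containing(0x00030000))  # None

A module can have several entry points at different addresses within the
same extent, which is why the extent lookup alone cannot tell you which
entry point was used.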
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see http://patriot.net/~shmuel/resume/brief.html 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: Dumping SMF directly to TAPE

2007-10-06 Thread Tom Schmidt
On Sat, 6 Oct 2007 16:30:52 -0500, George Dranes wrote:
 
I have our SMF dump jobs dumping directly to a tape dataset with
LRECL=32760 and BLKSIZE=32760,RECFM=VBS.  Fortunately we are small
enough to dump SMF once a day so there are no speed issues not dumping to
DASD (saves a lot of dasd).  I was just curious if this is a sound way of
handling SMF?  I've even considered making the output tape blksize something
larger such as 256K but have always just stayed with the safe 32760.
 
 
You mentioned that you dump directly to a tape dataset... isn't that putting
an awful lot of faith and hope into one measly tape?  If I were doing that,
I would use 2 tapes written concurrently (just in case one of my tapes
broke or otherwise failed).  Call it RAIT 1 (a la RAID 1 mirroring) if you
like.
 
I would also consider making the output tape blksize closer to 256K, as
you mentioned.
 
-- 
Tom Schmidt 
Madison, WI



Re: Dumping SMF directly to TAPE

2007-10-06 Thread George Dranes
Actually our SMF dump job dumps to a daily tape, which in the next step is
MOD'ed onto a weekly tape using IEBGENER (actually ICEGENER in our case).
This weekly tape is then concatenated onto a monthly tape once a week.  I
keep multiple generations of the daily and weekly files so I can easily
rebuild if needed.  Does this sound OK?
