Re: Storage paradigm [was: RE: Data volumes]

2013-06-15 Thread Anne Lynn Wheeler
l...@garlic.com (Anne  Lynn Wheeler) writes:
 In the transition from MVT to OS/VS2 (aka virtual memory), the same
 problem showed up. The original implementation involved putting a little
 bit of code to create 16mbyte virtual address space for MVT, but the
 major effort was hacking CCWTRANS (from CP67) into the side of EXCP
 processing (EXCP had the same problem with access methods creating
 channel programs in the application virtual address space ... as CP67
 did with virtual machine channel programs). Old reference by somebody in
 POK that was in the middle of the transition to OS/VS2 and virtual
 memory ... includes reference that OS/VS2 release 2 (MVS) was on glide
 path to OS/VS2 release 3 (FS)

re:
http://www.garlic.com/~lynn/2013h.html#44 Why does IBM keep saying things like this
http://www.garlic.com/~lynn/2013h.html#45 Storage paradigm [was: RE: Data volumes]
http://www.garlic.com/~lynn/2013h.html#47 Storage paradigm [was: RE: Data volumes]

just finished scanning my copy of the cp67 PLM (although it was missing a
couple of pages) ... and uploading it to bitsavers ... it should be showing
up shortly in
http://bitsavers.org/pdf/ibm/360/cp67

CP67 CCWTRANS (which was hacked into EXCP to perform the channel program
virtual-real translation for OS/VS2) is described starting on page 35
(physical page 45 in the PDF file).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Ted MacNEIL
> Perhaps you are keeping bad company. While humans are not perfect, there
> are methods to improve code reliability.

In 30+ years, I've worked for 6 companies.
Gee, they all must be bad company!

-
Ted MacNEIL
eamacn...@yahoo.ca
Twitter: @TedMacNEIL



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread R.S.

On 2013-06-11 08:58, Ted MacNEIL wrote:

>> Perhaps you are keeping bad company. While humans are not perfect, there
>> are methods to improve code reliability.
>
> In 30+ years, I've worked for 6 companies.
> Gee, they all must be bad company!



Are we still talking about a possible bug in the application which can
cause enormous disk space consumption?

If so, I can say I don't know how LUW systems manage it without
SPACE=(), but they definitely do.


--
Radoslaw Skorupka
Lodz, Poland








Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Vernooij, CP - SPLXM
There is no need to be so afraid of runaway disk space consumption, when
we don't control similar problems that precisely in other areas:
spool space utilization, job execution time (/*JOBPARM T=), GBs written
to tape, etc. In those areas, the control is to let people do their
thing without requiring a prior prediction, and only to take action if
certain limits are exceeded.

Why not treat DASD the same way: let people specify a Dataclass and skip
SPACE (we can already do that now) and set limits on the size of the
dataset per Dataclass. Like JES: warn or abend the job if the size is
exceeded, control who can use different Dataclasses, etc.

Kees.
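A minimal sketch of the warn/abend policy described above (all class names and thresholds here are hypothetical; real enforcement would live in SMS/DFSMS policy, not in application code):

```python
# Hypothetical per-Dataclass size limits, in tracks (invented numbers).
LIMITS = {
    "DCSMALL": {"warn": 1000, "abend": 5000},
    "DCBIG": {"warn": 100000, "abend": 500000},
}

def check_allocation(dataclass, tracks_used):
    """JES-style control: warn past a soft limit, abend past a hard one."""
    limit = LIMITS.get(dataclass)
    if limit is None:
        return "OK"  # no limit configured for this class
    if tracks_used > limit["abend"]:
        return "ABEND"  # hard stop, like exceeding /*JOBPARM T=
    if tracks_used > limit["warn"]:
        return "WARN"  # operator warning only, job keeps running
    return "OK"
```

Who may use which Dataclass would then be an ordinary access-control question, separate from the size policy itself.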


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Gerhard Postpischil
Sent: Monday, June 10, 2013 19:32
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Storage paradigm [was: RE: Data volumes]

On 6/10/2013 11:38 AM, Farley, Peter x23353 wrote:
> <Rant>
> Like a few others on this list, I have often gritted my teeth at the
> necessity to estimate disk storage quantities that vary widely over
> time in a fixed manner (i.e., SPACE in JCL) when the true need is just
> to match output volume to input volume each day.

If it's that predictable, it's trivial to write code to produce an
estimated output volume from input, and tailor and submit the
appropriate JCL. So that's a non-issue.
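That kind of estimator can be sketched in a few lines of Python (a hypothetical sketch: the 3390 half-track-blocking figures, the 20% growth margin, and all the names are invented for illustration, not anyone's production tooling):

```python
import math

BLOCK_SIZE = 27998       # half-track blocking on a 3390
BLOCKS_PER_TRACK = 2     # two 27,998-byte blocks fit per 56,664-byte track

def estimate_space(input_bytes, growth=1.2):
    """Derive primary/secondary track allocations from input volume,
    padding by a growth factor so day-to-day variation still fits."""
    blocks = math.ceil(input_bytes * growth / BLOCK_SIZE)
    primary = math.ceil(blocks / BLOCKS_PER_TRACK)
    secondary = max(1, primary // 10)  # 10% overflow extents
    return primary, secondary

def space_parameter(input_bytes):
    """Render the estimate as a JCL SPACE parameter for a tailored job."""
    primary, secondary = estimate_space(input_bytes)
    return f"SPACE=(TRK,({primary},{secondary}),RLSE)"
```

A scheduler could stat the input dataset, call space_parameter, substitute the result into a JCL skeleton, and submit the job.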

> EAV or not EAV, guaranteed space or not, candidate volumes, striped or
> not striped, compressed or not compressed - all of that baggage is
> clearly non-optimal for getting the job done in a timely manner.  Why
> should allocating a simple sequential file require a team of Storage
> Administration experts to accomplish effectively?
> </Rant>

There is no theoretical solution. On any system running jobs, it is
possible for one job to monopolize available space, requiring other jobs
to wait forever or be terminated. Even on a single job system that job
may exhaust space. Requiring a space specification may be a PITA, but it
guarantees that a started job will finish (subject to other
constraints). And the SA experts, especially for sequential files, can
be avoided with simple estimator programs.

This seems to be more of a religious war than a practical discussion.

Gerhard Postpischil
Bradford, Vermont



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Thomas Berg
Hear, hear!



Regards
Thomas Berg

Thomas Berg   Specialist   z/OS\RQM\IT Delivery   SWEDBANK AB (Publ)

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Farley, Peter x23353
 Sent: Monday, June 10, 2013 5:38 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Storage paradigm [was: RE: Data volumes]
 
 <Rant>
 Like a few others on this list, I have often gritted my teeth at the
 necessity to estimate disk storage quantities that vary widely over
 time in a fixed manner (i.e., SPACE in JCL) when the true need is just
 to match output volume to input volume each day.
 
 Why is it that IBM (and organizations that use their mainframe systems)
 so vigorously resist a conversion off of the ECKD standard?  (Yes, I
 know it's all about conversion cost, but in the larger picture that
 is a red herring.)  Not that I'm likely to see such a transition in my
 lifetime, but in this dawning time of soi-disant big data, perhaps it
 is past time to change the storage paradigm entirely, not from ECKD to
 FBA but to transition instead to something like the Multics model where
 every object in the system (whether in memory or on external storage,
 whether data or program) has an address, and all addresses are unique.
 Let the storage subsystem decide how to optimally position and
 aggregate the various parts of objects, and how to organize them for
 best performance.  Such decisions should not require human guesstimate
 input to be optimal, or nearly so.  Characteristics of application
 access are far more critical specifications than mere size.  The
 ability to specify just the desired application access characteristics
 (random, sequential, growing, shrinking, response-time-critical, etc.)
 should be necessary and sufficient.
 
 EAV or not EAV, guaranteed space or not, candidate volumes, striped or
 not striped, compressed or not compressed - all of that baggage is
 clearly non-optimal for getting the job done in a timely manner.  Why
 should allocating a simple sequential file require a team of Storage
 Administration experts to accomplish effectively?
 </Rant>
 
 Peter
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Ed Jaffe
 Sent: Sunday, June 09, 2013 10:47 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Data volumes
 
 On 6/9/2013 7:12 AM, Scott Ford wrote:
  We need bigger dasd ...ouch
 
 The largest 3390 volumes in our tiny shop hold 3,940,020 tracks or
 262,668 cylinders. That is the maximum size supported by the back-level
 DASD we are running. Newer DASD hardware can support volumes up to 1TB
 in size. I assume nearly all zEC12 and z196 customers are capable of
 exploiting these large sizes. But, do they?
 
 I spent three years dealing with, and eventually helping IBM to solve
 (via OA40210 - HIPER, DATALOSS), a serious EAV bug that should have
 been seen in most every shop in the world that uses the DFSMSdss
 CONSOLIDATE function (with or without DEFRAG). The experience was a
 real eye-opener for me and I concluded that almost nobody is using EAV!
 
 Why not? Personally, I would find it embarrassing if the Corsair thumb
 drive in my pocket held more data than our largest mainframe volumes.
 But, that's just me...


Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Elardus Engelbrecht
Gerhard Postpischil wrote:

>> What planet are you from?
> Sol 3

Interesting that you refer to that filthy big blue polluted ironball with 
warring citizens. The last time I checked, we're on Tatooine in a galaxy far 
far away or so I think... ;-)


Ted MacNEIL wrote: 

> Programmers seem able to test everything except that one condition that
> will break in Production

Murphy's law. You design/build a job, program, etc., and when it goes live
in production, something crashes. Fix it and live with it. Sometimes going
into production is actually the LAST major test, to iron out any last
wrinkles.

Groete / Greetings
Elardus Engelbrecht

Joke of the day (Sol 3 of course): One day a teacher was talking about marriage 
in class.

Teacher : What kind of wife would you like Johnny?
Johnny : I would want a wife like the moon.
Teacher : Wow! What a choice... Do you want her to be beautiful and calm like 
the moon?
Johnny : No, I want her to arrive at night and disappear in the morning...

Give that boy a Bells!!!



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Joel C. Ewing

On 06/10/2013 01:57 PM, Ted MacNEIL wrote:

>> As to run-away programs, they should be thoroughly checked on a test
>> system before going into production; a run-away in production should be
>> so rare as to be immaterial.
>
> What planet are you from?
> Programmers seem able to test everything except that one condition that
> will break in Production
-
Ted MacNEIL
eamacn...@yahoo.ca
Twitter: @TedMacNEIL


Another way to look at it is that there are usually many, many more 
users than application testers.  A clever, curious, or just plain inept 
user can always find many more weird combinations of input to an 
application than can be tested, sometimes nonsensical ones that would 
just never occur to a tester to try; and at least a few of these may 
give highly undesirable results.   Consider z/OS itself as a case in 
point.  One would hope this is an example of extensively tested 
software, yet every release always has a number of fix PTFs, some of 
which address problems that were highly fatal to one or more installations.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Gerhard Postpischil

On 6/11/2013 5:41 AM, Elardus Engelbrecht wrote:

Interesting that you refer to that filthy big blue polluted ironball
with warring citizens. The last time I checked, we're on Tatooine in
a galaxy far far away or so I think... ;-)


Nice display of dry wit <G>

Gerhard Postpischil
Bradford, Vermont



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Shmuel Metz (Seymour J.)
In
CAAJSdjhdFqcpkSUKJ9Uu+DSJh1VwwJdFT2VSHwC3P9=79nc...@mail.gmail.com,
on 06/10/2013
   at 11:45 AM, John McKown john.archie.mck...@gmail.com said:

> LUW works similar to z/OS UNIX file systems. I.e. there is a file
> system which is formatted using some utility (mkfs in the Linux/UNIX
> world, format in Windows). This sets up all the internals. In today's
> LUW, it is usually possible for a single file to be as big as the
> file system upon which it resides. But no bigger. There is nothing
> like a multi file system file (which would be vaguely like a
> multivolume data set).

I don't know about windoze, but don't LVM and EVMS allow file systems
to span devices?

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Shmuel Metz (Seymour J.)
In 51b60a0b.7030...@valley.net, on 06/10/2013
   at 01:16 PM, Gerhard Postpischil gerh...@valley.net said:

> Technically the easiest to implement would be adding a new device
> type, thus keeping (E)CKD completely distinct from FBA. The new type
> could be supported by VSAM/AMS only (and JCL, SVC 99, etc.) without
> impacting other programs. (I would hope there are no programs out
> there using TM UCBTBYT3 rather than CLI?)

If IBM could implement a VSAM ESDS reverse compatibility interface
(RCI) for other systems, why not for MVS?

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Shmuel Metz (Seymour J.)
In
985915eee6984740ae93f8495c624c6c23194bd...@jscpcwexmaa1.bsg.ad.adp.com,
on 06/10/2013
   at 02:46 PM, Farley, Peter x23353 peter.far...@broadridge.com
said:

> There *are* non-theoretical solutions to runaway file output.  The
> *ix system model of using disk quotas per user makes it entirely
> possible to imagine z/OS application users with reasonable disk
> quotas specific to the application (i.e., not by job but by suite of
> jobs).

Mommy, make it go away. An out of control job would kill unrelated
work. Per-file quotas might be workable, but would lead to the same
types of outcry as space specification: "I don't know how much output
the job will create."

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Shmuel Metz (Seymour J.)
In
985915eee6984740ae93f8495c624c6c23194bd...@jscpcwexmaa1.bsg.ad.adp.com,
on 06/10/2013
   at 11:38 AM, Farley, Peter x23353 peter.far...@broadridge.com
said:

> Why is it that IBM (and organizations that use their mainframe
> systems) so vigorously resist a conversion off of the ECKD
> standard?

Before asking "why", ask "whether". I have seen no evidence that
anybody but IBM resists such a change.

> perhaps it is past time to change the storage paradigm entirely, not
> from ECKD to FBA but to transition instead to something like the
> Multics model where every object in the system (whether in memory or
> on external storage, whether data or program) has an address, and
> all addresses are unique.

That's not the Multics model. The Multics model is that segment numbers
are dynamically assigned as needed, and that in general two processes
will use different numbers for the same segment. IBM had something
similar in TSS, but abandoned it.

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Gerhard Postpischil

On 6/11/2013 2:58 AM, Ted MacNEIL wrote:

> In 30+ years, I've worked for 6 companies.
> Gee, they all must be bad company!

I'll take your word for it.

In 40+ years I worked for 8 companies, two of which were ISVs, two 
service bureaus, and the rest were contract software providers (mostly, 
but not all for government agencies). Many had strict procedures for how 
code was inspected and tested; one went so far as to reduce any abending 
production program back to test status, even when the abend was due to a 
hardware failure.


In all that time, I saw programs that evinced run-away space 
consumption, but they were all caught during testing.


Gerhard Postpischil
Bradford, Vermont



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread John McKown
LVM requires an LV (Logical Volume) to be in a single VG (Volume Group). A
VG is composed of one or more PVs (Physical Volumes). A PV is basically a
specially formatted disk partition (which can be the entire disk or
subdivision). A single file system must reside on a single LV. Which is in
a VG. Which can span multiple PVs. And so, in a somewhat indirect manner, a
file system can span multiple physical volumes.

http://www.web-manual.net/linux-3/logical-volume-manager-in-linux/


On Tue, Jun 11, 2013 at 8:28 AM, Shmuel Metz (Seymour J.) 
shmuel+...@patriot.net wrote:

 In
 CAAJSdjhdFqcpkSUKJ9Uu+DSJh1VwwJdFT2VSHwC3P9=79nc...@mail.gmail.com,
 on 06/10/2013
at 11:45 AM, John McKown john.archie.mck...@gmail.com said:

 LUW works similar to z/OS UNIX file systems. I.e. there is a file
 system which is formatted using some utility (mkfs in the Linux/UNIX
 world, format in Windows). This sets up all the internals. In today's
 LUW, it is usually possible for a single file to be as big as the
 file system upon which it resides. But no bigger. There is nothing
 like a multi file system file (which would vaguely like a
 multivolume data set).

 I don't know about windoze, but don't LVM and EVMS allow file systems
 to span devices?

 --
  Shmuel (Seymour J.) Metz, SysProg and JOAT
  Atid/2http://patriot.net/~shmuel
 We don't care. We don't have to care, we're Congress.
 (S877: The Shut up and Eat Your spam act of 2003)





-- 
This is a test of the Emergency Broadcast System. If this had been an
actual emergency, do you really think we'd stick around to tell you?

Maranatha! 
John McKown



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread John McKown
Yes, most people basically want to say: "I don't want to have to concern
myself with how much space and time it takes to do my work. I just want it
to work. Oh, and I need it ASAP. And it must not require too much thought.
That is, I need to be able to use it on a Monday morning, before I have had
my first cup of coffee. Oh, and it needs to be free so that it doesn't
impact my budget. That's so I can be under budget and get a raise or bonus."
<cynic type="old"/>

On Tue, Jun 11, 2013 at 8:37 AM, Shmuel Metz (Seymour J.) 
shmuel+...@patriot.net wrote:

 In
 985915eee6984740ae93f8495c624c6c23194bd...@jscpcwexmaa1.bsg.ad.adp.com,
 on 06/10/2013
at 02:46 PM, Farley, Peter x23353 peter.far...@broadridge.com
 said:

 There *are* non-theoretical solutions to runaway file output.  The
 *ix system model of using disk quotas per user makes it entirely
 possible to imagine z/OS application users with reasonable disk
 quotas specific to the application (i.e., not by job but by suite of
 jobs).

 Mommy, make it go away. An out of control job would kill unrelated
 work. Per file quotas might be workable, but would lead to the same
 types of outcry as space specification: I don't know how much output
 the job will create.

 --
  Shmuel (Seymour J.) Metz, SysProg and JOAT
  Atid/2http://patriot.net/~shmuel
 We don't care. We don't have to care, we're Congress.
 (S877: The Shut up and Eat Your spam act of 2003)





-- 
This is a test of the Emergency Broadcast System. If this had been an
actual emergency, do you really think we'd stick around to tell you?

Maranatha! 
John McKown



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Thomas Berg
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Shmuel Metz (Seymour J.)
 Sent: Tuesday, June 11, 2013 3:37 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Storage paradigm [was: RE: Data volumes]
 
 In
 985915eee6984740ae93f8495c624c6c23194bd...@jscpcwexmaa1.bsg.ad.adp.com
 ,
 on 06/10/2013
at 02:46 PM, Farley, Peter x23353 peter.far...@broadridge.com
 said:
 
 There *are* non-theoretical solutions to runaway file output.  The
 *ix system model of using disk quotas per user makes it entirely
 possible to imagine z/OS application users with reasonable disk
 quotas specific to the application (i.e., not by job but by suite of
 jobs).
 
 Mommy, make it go away. An out of control job would kill unrelated
 work. Per file quotas might be workable, but would lead to the same
 types of outcry as space specification: I don't know how much output
 the job will create.

No.



Regards
Thomas Berg

Thomas Berg   Specialist   z/OS\RQM\IT Delivery   SWEDBANK AB (Publ)



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Anne Lynn Wheeler
shmuel+...@patriot.net (Shmuel Metz, Seymour J.) writes:
 That's not the Multics model. The Multics model is that segment numbers
 are dynamically assigned as needed, and that in general two processes
 will use different numbers for the same segment. IBM had something
 similar in TSS, but abandoned it.

some of the people from CTSS went to the 5th flr and did Project Mac
multics. others went to the science center on the 4th flr and did
virtual machines, online computing, the internal network ...
GML was invented at the science center in 1969, as well as lots of
performance monitoring and modeling stuff ... some of which evolves into
capacity planning ... misc. past posts mentioning 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

supposedly 360/67 (360/65 with virtual memory) was to be ibm's candidate
for project mac ... but ge won the bid instead ... and multics was done
on ge645. melinda's history has lots of details ... can be found here:
http://www.leeandmelindavarian.com/Melinda/

tss/360 was then going to be the official operating system ... including
virtual memory and single-level-store (storage mapped as virtual
memory ... 360/67 had both 24bit and 32bit virtual addressing modes). at
one point tss/360 is claimed to have had something like 1200 people at the
time the science center people had 12 people on cp40/cms (before being
able to get a 360/67, they modified a 360/40 with virtual memory hardware;
later, when they were able to get a 360/67, cp40/cms morphs into cp67/cms
... and later into vm370/cms). old user group presentation on cp40
http://www.garlic.com/~lynn/cp40seas1982.txt

tss/360 & 360/67 were sold to lots of univ. ... but TSS/360 never quite
became a product ... and so many systems ran as 360/65 with os/360 most
of the time. as an undergraduate in the 60s, there was work on both
tss/360 and cp67 on the weekends. we did one fortran edit, compile, and
execute simulated-user benchmark. tss/360 running the script with four
simulated users got worse throughput and interactive response than
cp67/cms with 35 simulated users (on the same hardware).

the Future System project in the early 70s was going to also be
single-level-store based ... from tss/360, multics, etc. recent
reference to future system:
http://www.garlic.com/~lynn/2013h.html#44 Why does IBM keep saying things like this:

at the same time I was at the science center and did a paged-mapped
filesystem for cms (somewhat in competition with multics on the floor
above). Having observed a lot of the tss/360 problems ... I did an
implementation that avoided many of them (and would periodically
ridicule the FS effort ... claiming what I already had running was
better). it never made it as part of a released product ... in part
because of the bad rep that single-level-store got from the FS effort
... even though I could show 3 times the throughput/efficiency compared
to the standard CMS filesystem (both CDF & EDF) for moderate filesystem
workload. some past posts
http://www.garlic.com/~lynn/submain.html#mmap

part of it was to have the same exact shared segments at different
virtual addresses in different virtual address spaces ... which also
wasn't released. A small piece of the shared segment support (w/o
filesystem stuff and support for concurrent shared images at different
virtual addresses) ... was released as DCSS in vm370 release 3. some old
email from the period
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

one of the hardest problems was that CMS borrowed a lot of stuff from
os/360 and there was enormous problem with os/360 relocatable adcons
(which are swizzled to absolute address after being loaded for
execution). I needed *real* relocatable adcons ... both at
load time as well as at execution time. This was one thing supported
by tss/360. some past posts discussing constant battle that
i had with os/360 relocatable adcons
http://www.garlic.com/~lynn/submain.html#adcon

note that folklore is that after FS failure, some of the people
retreated to Rochester and did single-level-store for S/38 ...  note
however, the S/38 wasn't into performance throughput ...  so the
single-level-store performance issues weren't an issue.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Storage paradigm [was: RE: Data volumes]

2013-06-11 Thread Anne Lynn Wheeler
l...@garlic.com (Anne  Lynn Wheeler) writes:
 it never made it as part of release product ... in part because of the
 bad rep that single-level-store got from the FS effort ... even though
 I could show 3times the throughput/efficiency compared to standard CMS
 filesystem (both CDF & EDF) for moderate filesystem workload.  some
 past posts
 http://www.garlic.com/~lynn/submain.html#mmap

re:
http://www.garlic.com/~lynn/2013h.html#44 Why does IBM keep saying things like this
http://www.garlic.com/~lynn/2013h.html#45 Storage paradigm [was: RE: Data volumes]

note that there were two parts to the significant throughput increase
going to the paged-mapped filesystem ... one is that it is a higher-level
abstraction that allows a significant amount of optimization in servicing
the request to be done under the covers. the other is that there is a real
paradigm mismatch between the channel program paradigm and virtual memory
... requiring significant pathlengths to stitch together the mismatch

paradigm match/mismatch continues on down ... there is an almost direct
paradigm match with a page-mapped filesystem at the top, down through
virtual memory operation to fixed-block architecture disk structure; and
where there is a mismatch it requires a lot of extra resources and
overhead ... like the effort to map CKD to FBA (aka no real CKD disks
have been manufactured for decades).

Virtual machines have to scan each channel program, making a copy and
replacing virtual addresses with real ones. The CMS standard filesystem used
CCWs for i/o ... it was originally developed on the same 360/40 (running
stand-alone) as was being used to do CP40 (with added virtual memory
hardware). CMS continued to be able to run on the bare machine all during
cp67 ... but an artificial cripple was put in in the morph to vm370/cms.
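A toy sketch of the scan-copy-translate step described above (hypothetical names; Python purely as illustration). The real CCWTRANS also handled TIC chains, data chaining, and buffers crossing page boundaries, none of which is shown here:

```python
# Toy model of channel-program translation: scan the guest's CCW list,
# build a shadow copy, and replace each virtual data address with a real
# address looked up through a page table.

PAGE = 4096  # illustrative 4K page size

def translate_channel_program(ccws, page_table):
    """ccws: list of (opcode, virt_addr, count) tuples.
    page_table: virtual page number -> real page frame number.
    Returns a shadow copy of the channel program with real addresses."""
    shadow = []
    for op, vaddr, count in ccws:
        vpn, offset = divmod(vaddr, PAGE)
        frame = page_table[vpn]  # a real CP would page-fault and fix the page here
        shadow.append((op, frame * PAGE + offset, count))
    return shadow
```

The point of the pathlength argument above is that every I/O through this path pays for the scan and copy, whereas a page-mapped filesystem never builds the virtual channel program in the first place.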

In the transition from MVT to OS/VS2 (aka virtual memory), the same
problem showed up. The original implementation involved putting a little
bit of code to create 16mbyte virtual address space for MVT, but the
major effort was hacking CCWTRANS (from CP67) into the side of EXCP
processing (EXCP had the same problem with access methods creating
channel programs in the application virtual address space ... as CP67
did with virtual machine channel programs). Old reference by somebody in
POK that was in the middle of the transition to OS/VS2 and virtual
memory ... includes reference that OS/VS2 release 2 (MVS) was on glide
path to OS/VS2 release 3 (FS)
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

the above also mentions that one of the people responsible for HASP had a
group doing a TSS-style implementation of OS/360 (that had something
like a page-mapped filesystem)

One of the big differences (cp67/cms and os/vs2) was that the rest of
CP67 I/O processing was as little as 5% of the corresponding OS/VS2
pathlength ... so the additional pathlength for doing channel program
duplication with real addresses was less noticeable in OS/VS2 (both SVS
and MVS). In fact, one of the big motivations for the SSCH and other
changes to I/O for 370-xa was to get some portion of the enormous I/O
pathlength moved out of MVS so it could be rewritten (as well as being
moved to separate dedicated processors).

Starting in the late 70s, I got to play disk engineer in the disk
engineering labs and rewrite the I/O supervisor to be bulletproof and
never fail ... so they could do concurrent on-demand development testing
in an operating system environment (they had tried MVS, but found MVS had
a 15min MTBF in that environment with just a single testcell ... requiring
manual re-ipl). However, I also attempted further pathlength
reduction (while still supporting never-fail) to demonstrate 370 I/O
coming as close as possible to 370-xa with a separate dedicated
processor. past posts mentioning getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

for other topic drift ... FE had 3380 i/o error-injection regression
tests and even after the 3380 was introduced, MVS was failing (requiring
re-ipl) on all of the tests ... and in 2/3rds of the tests, there was no
indication of what had precipitated the failure. old email:
http://www.garlic.com/~lynn/2007.html#email801015

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Farley, Peter x23353
Rant
Like a few others on this list, I have often gritted my teeth at the necessity 
to estimate disk storage quantities that vary widely over time in a fixed 
manner (i.e., SPACE in JCL) when the true need is just to match output volume 
to input volume each day.

Why is it that IBM (and organizations that use their mainframe systems) so 
vigorously resist a conversion off of the ECKD standard?  (Yes, I know it's 
all about conversion cost, but in the larger picture that is a red herring.)  
Not that I'm likely to see such a transition in my lifetime, but in this 
dawning time of soi-disant big data, perhaps it is past time to change the 
storage paradigm entirely, not from ECKD to FBA but to transition instead to 
something like the Multics model where every object in the system (whether in 
memory or on external storage, whether data or program) has an address, and all 
addresses are unique.  Let the storage subsystem decide how to optimally 
position and aggregate the various parts of objects, and how to organize them 
for best performance.  Such decisions should not require human guesstimate 
input to be optimal, or nearly so.  Characteristics of application access are 
far more critical specifications than mere size.  The ability to specify just 
the desired application access characteristics (random, sequential, growing, 
shrinking, response-time-critical, etc.) should be necessary and sufficient.

EAV or not EAV, guaranteed space or not, candidate volumes, striped or not 
striped, compressed or not compressed - all of that baggage is clearly 
non-optimal for getting the job done in a timely manner.  Why should allocating 
a simple sequential file require a team of Storage Administration experts to 
accomplish effectively?
/Rant

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ed Jaffe
Sent: Sunday, June 09, 2013 10:47 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Data volumes

On 6/9/2013 7:12 AM, Scott Ford wrote:
 We need bigger dasd ...ouch

The largest 3390 volumes in our tiny shop hold 3,940,020 tracks or 262,668 
cylinders. That is the maximum size supported by the back-level DASD we are 
running. Newer DASD hardware can support volumes up to 1TB in size. I assume 
nearly all zEC12 and z196 customers are capable of exploiting these large 
sizes. But, do they?
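A quick back-of-envelope check of the figures quoted here, using the commonly cited 3390 geometry (15 tracks per cylinder, roughly 56,664 bytes of maximum track capacity — assumptions of this sketch, not stated in the post):

```python
# Sanity-check the 3390 volume sizes: cylinders -> tracks -> bytes.

TRACKS_PER_CYL = 15       # 3390 geometry
BYTES_PER_TRACK = 56_664  # commonly cited maximum 3390 track capacity

cylinders = 262_668
tracks = cylinders * TRACKS_PER_CYL            # 3,940,020 -- matches the post
gigabytes = tracks * BYTES_PER_TRACK / 10**9   # roughly 223 GB
```

So the largest volumes described here hold on the order of 223 GB — well under the 1TB maximum mentioned, which is the point of the comparison with a thumb drive.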

I spent three years dealing with, and eventually helping IBM to solve (via 
OA40210 - HIPER, DATALOSS), a serious EAV bug that should have been seen in 
most every shop in the world that uses the DFSMSdss CONSOLIDATE function (with 
or without DEFRAG). The experience was a real eye-opener for me and I concluded 
that almost nobody is using EAV!

Why not? Personally, I would find it embarrassing if the Corsair thumb drive in 
my pocket held more data than our largest mainframe volumes. 
But, that's just me...
--

This message and any attachments are intended only for the use of the addressee 
and may contain information that is privileged and confidential. If the reader 
of the message is not the intended recipient or an authorized representative of 
the intended recipient, you are hereby notified that any dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please notify us immediately by e-mail and delete the message and any 
attachments from your system.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread John McKown
In general, I agree. But I will say that I need something to limit run-away
usage of disk space. Why? Because we have had programmers who didn't want
to be bothered either. So they put out a report to SPOOL. And then their
program went into a loop, writing the same message over and over. This
exhausted the SPOOL space. Which caused a production outage on a weekend
(we run dark on the weekends). I can easily envision the same thing
happening if DASD ever went to an "all you can eat" model. Of course, with SMS
control, it is easier to segregate the data into pools. We currently try to
do a type of "all you can reasonably eat" by having our data classes have a
dynamic volume count of 59. And we have a semi-standard (read:
recommendation) that files of an unknown size be allocated CYL,(500,100).
Oh, and we use space release in the data class to get rid of excess
allocation. If the data set cannot abide space release for some reason,
there is an exempt data class for the programmers to use.

On Mon, Jun 10, 2013 at 10:38 AM, Farley, Peter x23353 
peter.far...@broadridge.com wrote:

 Rant
 Like a few others on this list, I have often gritted my teeth at the
 necessity to estimate disk storage quantities that vary widely over time in
 a fixed manner (i.e., SPACE in JCL) when the true need is just to match
 output volume to input volume each day.

 Why is it that IBM (and organizations that use their mainframe systems) so
 vigorously resist a conversion off of the ECKD standard?  (Yes, I know
 it's all about conversion cost, but in the larger picture that is a red
 herring.)  Not that I'm likely to see such a transition in my lifetime, but
 in this dawning time of soi-disant big data, perhaps it is past time to
 change the storage paradigm entirely, not from ECKD to FBA but to
 transition instead to something like the Multics model where every object
 in the system (whether in memory or on external storage, whether data or
 program) has an address, and all addresses are unique.  Let the storage
 subsystem decide how to optimally position and aggregate the various parts
 of objects, and how to organize them for best performance.  Such decisions
 should not require human guesstimate input to be optimal, or nearly so.
  Characteristics of application access are far more critical specifications
 than mere size.  The ability to specify just the desired application access
 characteristics (random, sequential, growing, shrinking,
 response-time-critical, etc.) should be necessary and sufficient.

 EAV or not EAV, guaranteed space or not, candidate volumes, striped or not
 striped, compressed or not compressed - all of that baggage is clearly
 non-optimal for getting the job done in a timely manner.  Why should
 allocating a simple sequential file require a team of Storage
 Administration experts to accomplish effectively?
 /Rant

 Peter

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Ed Jaffe
 Sent: Sunday, June 09, 2013 10:47 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Data volumes

 On 6/9/2013 7:12 AM, Scott Ford wrote:
  We need bigger dasd ...ouch

 The largest 3390 volumes in our tiny shop hold 3,940,020 tracks or 262,668
 cylinders. That is the maximum size supported by the back-level DASD we are
 running. Newer DASD hardware can support volumes up to 1TB in size. I
 assume nearly all zEC12 and z196 customers are capable of exploiting these
 large sizes. But, do they?

 I spent three years dealing with, and eventually helping IBM to solve (via
 OA40210 - HIPER, DATALOSS), a serious EAV bug that should have been seen in
 most every shop in the world that uses the DFSMSdss CONSOLIDATE function
 (with or without DEFRAG). The experience was a real eye-opener for me and I
 concluded that almost nobody is using EAV!

 Why not? Personally, I would find it embarrassing if the Corsair thumb
 drive in my pocket held more data than our largest mainframe volumes.
 But, that's just me...
 --


 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN




-- 
This is a test of the Emergency Broadcast System. If this had been an
actual emergency, do you really think we'd stick around to tell you?

Maranatha! 
John McKown

--
For IBM-MAIN subscribe / 

Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Dale R. Smith
On Mon, 10 Jun 2013 11:38:08 -0400, Farley, Peter x23353 
peter.far...@broadridge.com wrote:

Rant
Like a few others on this list, I have often gritted my teeth at the necessity 
to estimate disk storage quantities that vary widely over time in a fixed 
manner (i.e., SPACE in JCL) when the true need is just to match output volume 
to input volume each day.

Why is it that IBM (and organizations that use their mainframe systems) so 
vigorously resist a conversion off of the ECKD standard?  (Yes, I know it's 
all about conversion cost, but in the larger picture that is a red herring.) 
 Not that I'm likely to see such a transition in my lifetime, but in this 
dawning time of soi-disant big data, perhaps it is past time to change the 
storage paradigm entirely, not from ECKD to FBA but to transition instead to 
something like the Multics model where every object in the system (whether in 
memory or on external storage, whether data or program) has an address, and 
all addresses are unique.  Let the storage subsystem decide how to optimally 
position and aggregate the various parts of objects, and how to organize them 
for best performance.  Such decisions should not require human guesstimate 
input to be optimal, or nearly so.  Characteristics of application access are 
far more critical specifications than mere size.  The ability to specify just 
the desired application access characteristics (random, sequential, growing, 
shrinking, response-time-critical, etc.) should be necessary and sufficient.

EAV or not EAV, guaranteed space or not, candidate volumes, striped or not 
striped, compressed or not compressed - all of that baggage is clearly 
non-optimal for getting the job done in a timely manner.  Why should 
allocating a simple sequential file require a team of Storage Administration 
experts to accomplish effectively?
/Rant

Peter

Oh, then you want to move to IBM System i...  :-)

Seriously, System i (formerly known by many different names) addresses 
everything in storage/memory and on disk and has been 64-bit since the 
mid-1990s (and it was 48-bit before that).  There is, however, no need for 
system programmers on the i. (Really, IBM, you can't come up with better names 
for hardware/software than one character? And i, is this an Apple box?)

-- 
Dale R. Smith

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Blaicher, Christopher Y.
I am not a LUW person, other than using a Windows machine for simple things, 
so I am curious how external storage is allocated and controlled in that 
environment.  I think we have all heard the complaints about the shortcomings 
of MVS in this area, but what would be a realistic solution?

I would imagine the people at IBM have spent a little time on this, and if it 
were easy they would have started transitioning us from ECKD to 'the new way' 
a long time ago.  The idea of a 'run-away' program is the hang-up.

Chris Blaicher
Principal Software Engineer, Software Development
Syncsort Incorporated
50 Tice Boulevard, Woodcliff Lake, NJ 07677
P: 201-930-8260  |  M: 512-627-3803
E: cblaic...@syncsort.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Monday, June 10, 2013 10:38 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Storage paradigm [was: RE: Data volumes]

Rant
Like a few others on this list, I have often gritted my teeth at the necessity 
to estimate disk storage quantities that vary widely over time in a fixed 
manner (i.e., SPACE in JCL) when the true need is just to match output volume 
to input volume each day.

Why is it that IBM (and organizations that use their mainframe systems) so 
vigorously resist a conversion off of the ECKD standard?  (Yes, I know it's 
all about conversion cost, but in the larger picture that is a red herring.)  
Not that I'm likely to see such a transition in my lifetime, but in this 
dawning time of soi-disant big data, perhaps it is past time to change the 
storage paradigm entirely, not from ECKD to FBA but to transition instead to 
something like the Multics model where every object in the system (whether in 
memory or on external storage, whether data or program) has an address, and all 
addresses are unique.  Let the storage subsystem decide how to optimally 
position and aggregate the various parts of objects, and how to organize them 
for best performance.  Such decisions should not require human guesstimate 
input to be optimal, or nearly so.  Characteristics of application access are 
far more critical specifications than mere size.  The ability to specify just 
the desired application access characteristics (random, sequential, growing, 
shrinking, response-time-critical, etc.) should be necessary and sufficient.

EAV or not EAV, guaranteed space or not, candidate volumes, striped or not 
striped, compressed or not compressed - all of that baggage is clearly 
non-optimal for getting the job done in a timely manner.  Why should allocating 
a simple sequential file require a team of Storage Administration experts to 
accomplish effectively?
/Rant

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ed Jaffe
Sent: Sunday, June 09, 2013 10:47 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Data volumes

On 6/9/2013 7:12 AM, Scott Ford wrote:
 We need bigger dasd ...ouch

The largest 3390 volumes in our tiny shop hold 3,940,020 tracks or 262,668 
cylinders. That is the maximum size supported by the back-level DASD we are 
running. Newer DASD hardware can support volumes up to 1TB in size. I assume 
nearly all zEC12 and z196 customers are capable of exploiting these large 
sizes. But, do they?

I spent three years dealing with, and eventually helping IBM to solve (via 
OA40210 - HIPER, DATALOSS), a serious EAV bug that should have been seen in 
most every shop in the world that uses the DFSMSdss CONSOLIDATE function (with 
or without DEFRAG). The experience was a real eye-opener for me and I concluded 
that almost nobody is using EAV!

Why not? Personally, I would find it embarrassing if the Corsair thumb drive in 
my pocket held more data than our largest mainframe volumes.
But, that's just me...
--


--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



ATTENTION: -

The information contained in this message (including any files transmitted with 
this message) may contain proprietary, trade secret or other  confidential 
and/or legally privileged information. Any pricing information contained in 
this message or in any files transmitted with this message is always 
confidential and cannot be shared with any third parties without prior written

Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread John McKown
LUW works similarly to z/OS UNIX file systems. That is, there is a file system
which is formatted using some utility (mkfs in the Linux/UNIX world, format
in Windows). This sets up all the internals. In today's LUW, it is usually
possible for a single file to be as big as the file system upon which it
resides. But no bigger. There is nothing like a multi-file-system file
(which would be vaguely like a multivolume data set). I'm not too Windows
literate any more, but it used to be that a file system had to reside on a
single disk (or in a partition of that disk). Linux/UNIX used to be that
way too. But Linux/UNIX now implements something called LVM (Logical Volume
Manager). In short, LVM can stitch together a number of physical disk
volumes (each called a PV, for Physical Volume), or partitions, into a volume
group (VG), and then subdivide that aggregate space into one or more Logical
Volumes (LV). A Logical Volume can be created in many ways, such as using
software RAID, or striping. The admin then formats a file system on the
Logical Volume. Even after creating the volume group, the storage admin can
add another disk into it as a new PV (like adding a volume to a storage group
in SMS). The storage admin can then use the new space for another LV or to
extend an existing LV. Depending on the file system formatted on the LV, it
might even be possible to tell the file system to start using the newly added
space. Most of the current Linux file systems can at least be extended when
they are unmounted (not actively used). So, like a zFS file system on z/OS,
using something like ext4 (or Btrfs) and LVM, it is possible to dynamically
expand the size of a file system. I don't know for certain, but I doubt that
Windows can do this kind of dynamic expansion at all. To control allocation,
Linux and UNIX can use disk quotas. I don't know much about that since I
don't use it on my personal systems.

On Mon, Jun 10, 2013 at 11:15 AM, Blaicher, Christopher Y. 
cblaic...@syncsort.com wrote:

 I am not a LUW person, other than I use a windows machine for simple
 things, so I am curious how external storage is allocated and controlled in
 that environment.  I think we have all heard the complaints about the
 short-comings of MVS in this area, but what would be a realistic solution?

 I would imagine the people at IBM have spent a little time on this, and if
 it was easy would have started transitioning us from ECKD to 'the new way'
 a long time ago.  The idea of a 'run-away' program is what is the hang-up.

 Chris Blaicher
 Principal Software Engineer, Software Development
 Syncsort Incorporated
 50 Tice Boulevard, Woodcliff Lake, NJ 07677
 P: 201-930-8260  |  M: 512-627-3803
 E: cblaic...@syncsort.com

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Farley, Peter x23353
 Sent: Monday, June 10, 2013 10:38 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Storage paradigm [was: RE: Data volumes]

 Rant
 Like a few others on this list, I have often gritted my teeth at the
 necessity to estimate disk storage quantities that vary widely over time in
 a fixed manner (i.e., SPACE in JCL) when the true need is just to match
 output volume to input volume each day.

 Why is it that IBM (and organizations that use their mainframe systems) so
 vigorously resist a conversion off of the ECKD standard?  (Yes, I know
 it's all about conversion cost, but in the larger picture that is a red
 herring.)  Not that I'm likely to see such a transition in my lifetime, but
 in this dawning time of soi-disant big data, perhaps it is past time to
 change the storage paradigm entirely, not from ECKD to FBA but to
 transition instead to something like the Multics model where every object
 in the system (whether in memory or on external storage, whether data or
 program) has an address, and all addresses are unique.  Let the storage
 subsystem decide how to optimally position and aggregate the various parts
 of objects, and how to organize them for best performance.  Such decisions
 should not require human guesstimate input to be optimal, or nearly so.
  Characteristics of application access are far more critical specifications
 than mere size.  The ability to specify just the desired application access
 characteristics (random, sequential, growing, shrinking,
 response-time-critical, etc.) should be necessary and sufficient.

 EAV or not EAV, guaranteed space or not, candidate volumes, striped or not
 striped, compressed or not compressed - all of that baggage is clearly
 non-optimal for getting the job done in a timely manner.  Why should
 allocating a simple sequential file require a team of Storage
 Administration experts to accomplish effectively?
 /Rant

 Peter

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Ed Jaffe
 Sent: Sunday, June 09, 2013 10:47 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Data volumes

 On 6/9/2013 7:12 AM, Scott Ford wrote:
  We need bigger dasd ...ouch

Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Roberts, John J
For Windows Capabilities, I suggest reading about Dynamic Disks and Dynamic 
Volumes on MSDN:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa363785(v=vs.85).aspx

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Gerhard Postpischil

On 6/10/2013 12:15 PM, Blaicher, Christopher Y. wrote:

I am not a LUW person, other than I use a windows machine for simple
things, so I am curious how external storage is allocated and
controlled in that environment.  I think we have all heard the
complaints about the short-comings of MVS in this area, but what
would be a realistic solution?


Technically the easiest to implement would be adding a new device type, 
thus keeping (E)CKD completely distinct from FBA. The new type could be 
supported by VSAM/AMS only (and JCL, SVC 99, etc.) without impacting 
other programs. (I would hope there are no programs out there using TM 
UCBTBYT3 rather than CLI?)



I would imagine the people at IBM have spent a little time on this,
and if it was easy would have started transitioning us from ECKD to
'the new way' a long time ago.  The idea of a 'run-away' program is
what is the hang-up.


I cannot see IBM spending any effort on this unless one of the Fortune 10 
companies requires it. Even then I would expect them to stick with the 
pre-allocated space paradigm rather than transitioning. As to run-away 
programs, they should be thoroughly checked on a test system before 
going into production; a run-away in production should be so rare as to 
be immaterial.


Gerhard Postpischil
Bradford, Vermont

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Gerhard Postpischil

On 6/10/2013 11:38 AM, Farley, Peter x23353 wrote:

Rant
Like a few others on this list, I have often gritted my teeth at the
necessity to estimate disk storage quantities that vary widely over
time in a fixed manner (i.e., SPACE in JCL) when the true need is
just to match output volume to input volume each day.


If it's that predictable, it's trivial to write code to produce an 
estimated output volume from input, and tailor and submit the 
appropriate JCL. So that's a non-issue.
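A minimal sketch of the estimator described here (the 3390 constants and the 20% growth factor are illustrative assumptions, not anyone's standard):

```python
# Size an output data set from the input byte count and emit a SPACE
# operand with some headroom, ready to be tailored into the JCL.

import math

BYTES_PER_CYL = 15 * 56_664  # 3390: 15 tracks/cyl, ~56,664 bytes/track

def space_parm(input_bytes, growth=1.2):
    """Return a JCL SPACE operand sized from the input volume."""
    primary = max(1, math.ceil(input_bytes * growth / BYTES_PER_CYL))
    secondary = max(1, primary // 5)  # 20% secondary as a cushion
    return f"SPACE=(CYL,({primary},{secondary}),RLSE)"
```

For example, an 850 MB input would come out as SPACE=(CYL,(1201,240),RLSE); the tailoring job substitutes that string into the production JCL before submitting it.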



EAV or not EAV, guaranteed space or not, candidate volumes, striped
or not striped, compressed or not compressed - all of that baggage is
clearly non-optimal for getting the job done in a timely manner.  Why
should allocating a simple sequential file require a team of Storage
Administration experts to accomplish effectively?
/Rant


There is no theoretical solution. On any system running jobs, it is 
possible for one job to monopolize available space, requiring other jobs 
to wait forever or be terminated. Even on a single job system that job 
may exhaust space. Requiring a space specification may be a PITA, but it 
guarantees that a started job will finish (subject to other 
constraints). And the SA experts, especially for sequential files, can 
be avoided with simple estimator programs.


This seems to be more of a religious war than a practical discussion.

Gerhard Postpischil
Bradford, Vermont

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Mike Schwab
Download GParted, burn it to a CD-ROM, boot from it, and resize as needed.

On Mon, Jun 10, 2013 at 11:56 AM, Roberts, John J
jrobe...@dhs.state.ia.us wrote:
 For Windows Capabilities, I suggest reading about Dynamic Disks and Dynamic 
 Volumes on MSDN:

 http://msdn.microsoft.com/en-us/library/windows/desktop/aa363785(v=vs.85).aspx

 John

-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Farley, Peter x23353
As to the religious aspect, I did try to signal the less-than-practical 
nature of my note with the Rant and /Rant tags.

To your point about tailoring and dynamically submitting JCL, it really is an 
issue.  In a typical large z/OS shop today, dynamically tailoring and 
submitting JCL is only permitted for test environments and users.  Production 
JCL is frozen and controlled and submitted only by the scheduler software, and 
there is no political possibility to dynamically adjust the parameters even if 
it is technically feasible.

There *are* non-theoretical solutions to runaway file output.  The *ix system 
model of using disk quotas per user makes it entirely possible to imagine 
z/OS application users with reasonable disk quotas specific to the 
application (i.e., not by job but by suite of jobs).  Not the best solution?  
Maybe not, but ISTM to be better than having to predict what each and every 
process (i.e., job and file) output volume will be.

And there may well be other process models out there different from anything I 
know or imagine.  I don't claim to have an exclusive lock on ideas to replace 
what we have to deal with.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Gerhard Postpischil
Sent: Monday, June 10, 2013 1:32 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Storage paradigm [was: RE: Data volumes]

On 6/10/2013 11:38 AM, Farley, Peter x23353 wrote:
 Rant
 Like a few others on this list, I have often gritted my teeth at the
 necessity to estimate disk storage quantities that vary widely over
 time in a fixed manner (i.e., SPACE in JCL) when the true need is
 just to match output volume to input volume each day.

If it's that predictable, it's trivial to write code to produce an 
estimated output volume from input, and tailor and submit the 
appropriate JCL. So that's a non-issue.

 EAV or not EAV, guaranteed space or not, candidate volumes, striped
 or not striped, compressed or not compressed - all of that baggage is
 clearly non-optimal for getting the job done in a timely manner.  Why
 should allocating a simple sequential file require a team of Storage
 Administration experts to accomplish effectively?
 /Rant

There is no theoretical solution. On any system running jobs, it is 
possible for one job to monopolize available space, requiring other jobs 
to wait forever or be terminated. Even on a single job system that job 
may exhaust space. Requiring a space specification may be a PITA, but it 
guarantees that a started job will finish (subject to other 
constraints). And the SA experts, especially for sequential files, can 
be avoided with simple estimator programs.

This seems to be more of a religious war than a practical discussion.

Gerhard Postpischil
Bradford, Vermont
--


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread John McKown
Too true. And, around here, our QA people appear to be glitz checkers
instead of function and reliability checkers. They have more people than
any other group and do less testing on the mainframe. They seem to check
mainly for ease of use. That is, can a total numbskull still use
this?

On Mon, Jun 10, 2013 at 1:57 PM, Ted MacNEIL eamacn...@yahoo.ca wrote:

 As to run-away programs, they should be thoroughly checked on a test
 system before
 going into production; a run-away in production should be so rare as to be
 immaterial.

 What planet are you from?
 Programmers seem able to test everything except that one condition that
 will break in Production
 -
 Ted MacNEIL
 eamacn...@yahoo.ca
 Twitter: @TedMacNEIL





-- 
This is a test of the Emergency Broadcast System. If this had been an
actual emergency, do you really think we'd stick around to tell you?

Maranatha! 
John McKown



Re: [SPAM] Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Gerhard Postpischil

On 6/10/2013 2:46 PM, Farley, Peter x23353 wrote:

To your point about tailoring and dynamically submitting JCL, it
really is an issue.  In a typical large z/OS shop today, dynamically
tailoring and submitting JCL is only permitted for test environments
and users.  Production JCL is frozen and controlled and submitted
only by the scheduler software, and there is no political possibility
to dynamically adjust the parameters even if it is technically
feasible.


The scheduler can be set up to submit the tailoring job just as easily 
as the job to be tailored. And a few critical production abends should 
take care of the political aspect.



There *are* non-theoretical solutions to runaway file output.  The
*ix system model of using disk quotas per user makes it entirely
possible to imagine z/OS application users with reasonable disk
quotas specific to the application (i.e., not by job but by suite of
jobs).  Not the best solution?  Maybe not, but ISTM to be better than
having to predict what each and every process (i.e., job and file)
output volume will be.


I've worked for service bureaus that established just such quotas. My 
objections stand, as both the single job and a suite of jobs can still 
fail; the difference is the number of jobs/users impacted, not the 
principle.
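
The per-suite quota idea above can be sketched as a pre-allocation check (the quota table, suite names, and request sizes are illustrative assumptions; as Gerhard notes, a suite can still collectively exhaust its quota):

```python
# Sketch: enforce a per-application-suite disk quota before granting an
# allocation request. The suite names and byte limits are hypothetical.

GIB = 2 ** 30

# Quota per suite of jobs (not per individual job).
quota_bytes = {"PAYROLL": 50 * GIB, "BILLING": 20 * GIB}

# Space already consumed by each suite's datasets.
usage_bytes = {"PAYROLL": 48 * GIB, "BILLING": 5 * GIB}

def may_allocate(suite: str, request: int) -> bool:
    """Grant the request only if it keeps the suite within its quota.
    An unknown suite has no quota and is refused."""
    return usage_bytes.get(suite, 0) + request <= quota_bytes.get(suite, 0)

print(may_allocate("PAYROLL", 3 * GIB))  # 48+3 > 50 GiB: refused
print(may_allocate("BILLING", 3 * GIB))  # 5+3 <= 20 GiB: granted
```

The failure mode just moves: instead of one job abending on a SPACE estimate, the last jobs in an over-budget suite are the ones refused, which is the "number of jobs/users impacted, not the principle" point.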



And there may well be other process models out there different from
anything I know or imagine.  I don't claim to have an exclusive lock
on ideas to replace what we have to deal with.


Ditto.

Gerhard Postpischil
Bradford, Vermont



Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Gerhard Postpischil

On 6/10/2013 2:57 PM, Ted MacNEIL wrote:

What planet are you from?


Sol 3


Programmers seem able to test everything except that one condition that will 
break in Production


Perhaps you are keeping bad company. While humans are not perfect, there 
are methods to improve code reliability.


Gerhard Postpischil
Bradford, Vermont
