Tom,
I don't think you answered my question. I remember, a year or two before we
built our datacenter that opened in 1996, we looked at getting the STC box
(now STK). I read a lot about it at the time, but in the end we didn't get
one. What you wrote below I remember, especially the compression and the
writing of all new and updated data to a new location. BUT, you define so many
volumes. Once you have them defined and all of the space is allocated, you
can't add volumes just because the data is blocked more efficiently, or delete
volumes just because you wrote a couple of huge files blocked at 150 bytes per
block. That just doesn't make sense. (I hope this makes sense!)
When we built the P&H datacenter, we added a bunch of 3380 and 3390 strings.
I never quite understood why we didn't go with the new technology, but they
were cheap - at least the purchase price. I don't know if they saved any
money after maintenance, though. We totally filled up the datacenter with
all the DASD. Later, when we got a Hitachi box and replaced the 3090S with
an MP3000, we had a good-sized ballroom available.
Eric
Eric Bielefeld
Sr. Systems Programmer
Milwaukee, Wisconsin
414-475-7434
----- Original Message -----
From: "Tom Marchant" <[email protected]>
Newsgroups: bit.listserv.ibm-main
To: <[email protected]>
Sent: Tuesday, March 31, 2009 5:23 PM
Subject: Re: "A foolish consistancy" or "3390 cyl/track architecture"
On Tue, 31 Mar 2009 15:54:09 -0500, Eric Bielefeld wrote:
> You may be right, but from your reply you apparently don't know for sure
> whether bad blocksizes actually take up more dasd or not. Does anyone know
> whether this affects the total amount of dasd that can be used?
When I worked at Wayne State University in Detroit, we bought an RVA. That
was IBM's re-branded Iceberg. AFAIK, Sun also sells it as the SVA. On that
box, all data stored on disk was compressed. Because any new data written to
a track may not fit in the same location, every time data on a track was
written, the track was written to a new location, and only the disk space
required for the compressed data was used.
There was a special utility used to report on how much of the back-end disk
storage was used. IIRC, it was called Net Capacity Load. Allocating another
volume or creating a snapshot did not increase the NCL.
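The compress-and-redirect-on-write scheme described above can be sketched in miniature. This is a hypothetical toy model (the real RVA/Iceberg microcode is proprietary and far more elaborate): every logical-track write compresses the data, appends it at the current frontier of back-end space, and repoints a track map, so an NCL-style figure is simply live compressed bytes over back-end capacity.

```python
import zlib


class LogStructuredStore:
    """Toy model of a compress-and-redirect-on-write DASD back end.

    Hypothetical sketch only; not the actual RVA/Iceberg design.
    """

    def __init__(self, backend_capacity):
        self.capacity = backend_capacity
        self.frontier = 0                  # next free back-end byte
        self.track_map = {}                # logical track -> (offset, length)
        self.backend = bytearray(backend_capacity)

    def write_track(self, track_id, data):
        compressed = zlib.compress(data)
        if self.frontier + len(compressed) > self.capacity:
            raise IOError("back end full; garbage collection needed")
        # Always write to a fresh location; the old track image becomes garbage.
        self.backend[self.frontier:self.frontier + len(compressed)] = compressed
        self.track_map[track_id] = (self.frontier, len(compressed))
        self.frontier += len(compressed)

    def read_track(self, track_id):
        offset, length = self.track_map[track_id]
        return zlib.decompress(bytes(self.backend[offset:offset + length]))

    def net_capacity_load(self):
        # Fraction of back-end space holding *live* compressed tracks.
        live = sum(length for _, length in self.track_map.values())
        return live / self.capacity
```

Note that merely defining another logical volume adds nothing to the track map, which is consistent with the observation that allocating a volume or taking a snapshot did not raise the NCL.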
The microcode has garbage collection routines that accumulate track areas
that are no longer used, and background tasks that move data around in order
to maintain a contiguous area where new tracks can be written. It is a
marvelous feat of engineering, and it is no wonder that the Iceberg was so
much later getting to market than originally planned.
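The garbage-collection side can be sketched too (again a hypothetical illustration, not the actual microcode): given a map from logical track to (offset, length) extents in a back-end byte array, a compaction pass copies the live extents down to the front, leaving one contiguous free region beyond the new frontier where fresh tracks can be written.

```python
def compact(track_map, backend):
    """Sketch of log-structured garbage collection (hypothetical).

    Copies every live extent to the front of `backend`, in ascending
    offset order so moves never overwrite unread live data, and returns
    the updated map plus the new frontier of the contiguous free area.
    """
    new_map = {}
    frontier = 0
    # Ascending offset order guarantees frontier <= source offset.
    for track_id, (offset, length) in sorted(track_map.items(),
                                             key=lambda kv: kv[1][0]):
        backend[frontier:frontier + length] = backend[offset:offset + length]
        new_map[track_id] = (frontier, length)
        frontier += length
    return new_map, frontier
```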
In order for any DASD subsystem to be insensitive to blocksize, it would
have to do something similar, compressing out the gaps and storing the track
in discontiguous locations.
AFAIK, the rest of modern DASD subsystems allocate specific locations for
each logical volume, and therefore for each logical track. There has to be
sufficient disk space to store the maximum amount of data in each track
location. If short blocks are written, less data will fit in that logical
track.
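The blocksize sensitivity is easy to quantify. IBM publishes a track-capacity formula for the 3390 (in the DASD capacity appendix of the DFSMS manuals); for unkeyed records it works out to roughly the sketch below, where a track holds 1,729 allocation units and each block costs a fixed 19 units of overhead plus its data rounded up to 34-byte cells. The constants here are quoted from memory of the published formula, so treat them as approximate, though they reproduce the well-known figures (78 blocks of 80 bytes, 12 blocks of 4,096, 2 half-track blocks of 27,998 per track).

```python
import math


def blocks_per_3390_track(blocksize):
    """Blocks of `blocksize` bytes (no keys) that fit on one 3390 track.

    Approximation of the published 3390 capacity formula: 1,729
    allocation units per track; each block costs 19 units of overhead
    plus ceil((blocksize + 6) / 34) units for the data itself.
    (Constants recalled from the IBM formula -- treat as approximate.)
    """
    if blocksize < 1:
        return 0
    units = 19 + math.ceil((blocksize + 6) / 34)
    return 1729 // units


def data_bytes_per_track(blocksize):
    """User data actually stored on one track at this blocksize."""
    return blocks_per_3390_track(blocksize) * blocksize
```

So a data set blocked at 80 bytes yields 6,240 data bytes from a track that holds 55,996 bytes when blocked at the half-track size of 27,998 - roughly a nine-fold difference on conventional DASD, which is exactly the gap-driven waste the RVA's back end compressed away.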
I suppose you might ask why the disk can't store more short blocks on the
track, reducing (or eliminating) the inter-record gap. But then, it
wouldn't behave like a 3390, would it? What might that break?
--
Tom Marchant
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html