I don't see that new microcode is required.

The microcode already clearly supports 255 records per track and key lengths
of up to 255 bytes.  This has been true for many years.

If we assume that the existing microcode fully implements ECKD, which an
examination of the available documentation tends to support, then no
microcode changes are required.

Operating system support is another matter.

IBM had to make significant changes to its mainframe operating systems to
support an unsigned 16-bit cylinder number.  They would have to do the same
thing for an unsigned 16-bit head number.

OEM vendors may likewise be impacted if they have not coded to support
unsigned 16-bit head numbers.
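As a rough illustration (not actual operating system code), the kind of breakage that forces those changes is easy to sketch: any code path that loads a 16-bit cylinder or head field as a signed halfword goes negative once the value passes 32767. The field names below are hypothetical.

```python
import struct

def cchh_unsigned(raw: bytes) -> tuple[int, int]:
    # Interpret a 4-byte CCHH field as two unsigned 16-bit big-endian values.
    return struct.unpack(">HH", raw)

def cchh_signed(raw: bytes) -> tuple[int, int]:
    # Legacy-style code that treats the same halfwords as signed.
    return struct.unpack(">hh", raw)

raw = struct.pack(">HH", 40000, 40000)  # cylinder 40000, head 40000
print(cchh_unsigned(raw))  # (40000, 40000)
print(cchh_signed(raw))    # (-25536, -25536) -- the sign bit bites
```

The same value round-trips correctly only when every consumer agrees the halfword is unsigned; that agreement is exactly what IBM and the OEM vendors have to audit for.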

In any case, the original idea raised in this thread was that the current
architecture imposes significant capacity limitations.

Let's see: 2**16 devices times 2**56 bytes per device = 2**72 bytes.
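The arithmetic above can be checked in a couple of lines (the per-device figure is the one quoted in this thread, not something I am asserting independently):

```python
DEVICES = 2**16           # 16-bit device number
BYTES_PER_DEVICE = 2**56  # per-device capacity figure from the thread

total = DEVICES * BYTES_PER_DEVICE
assert total == 2**72
# 2**72 bytes is 4 * 2**70, i.e. 4 ZiB of addressable capacity.
print(f"2**72 bytes = {total} bytes = {total // 2**70} ZiB")
```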

I don't think that we have a problem.

John P Baker
Software Engineer

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED]] On Behalf Of Bill Fairchild
Sent: Thursday, July 21, 2005 08:23
To: [email protected]
Subject: Re: capacity of largest drive
 
New controller microcode would be needed to allow 255 records, each of which
was 65535 bytes long with a possibly 255-byte-long key, to occupy the same
one virtual track.  Given the need for new microcode, why must we continue
the legacy of overhead for R0, count areas, and gaps?  Let the new virtual
track size be as large as it needs to be.  So maybe it exceeds 16MiB by a
few thousand bytes and thus needs more than 24 bits.  Who cares?  It's all
virtual and being mapped onto real disks through complex mapping algorithms.
 
Bill Fairchild

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html