Nothing to defend.  Economics drives development in hardware and
software.  The cost of people power has increased.  It just makes $$
sense to throw hardware at the problem.

Yea, it took more planning, sometimes right down to bit- and byte-level
packing, to get the most machine-efficient layout for data structures.
Using stacks and heaps for memory allocation got a lot easier as the
cost of disk, and then memory, dropped.  (A bit of core storage cost
$0.10, and you HAD to have parity.  My first mainframe had 512K of
'fast' core at 100ns, and a huge 2M of 'slow' core at 250ns.  And with
core, when you did a read, the hardware would re-write the data, so a
read took 2 cycles where a write to core only took one.  Core had a
'destructive read': reading wiped the data, so it had to be written
back every time.)
Even a 'big' 12" platter-type disk drive with 12 platters per spindle
could only hold 20M, and mag tape was either 7-track at 256BPI or
512BPI, or 9-track at 1200 or 2400BPI.  Eventually 9-track came to
6250BPI with only a half-inch inter-block gap; prior versions used a
3/4" inter-block gap. ... With those kinds of densities it took several
2400' reels (the large, 12" ones) to back up a 20M disk drive!
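For anyone who wants to see why, here is a quick back-of-envelope
sketch in C.  The 800BPI density, 3/4" gap, and 800-byte block size are
round numbers I am assuming purely for illustration, not figures from
any particular drive:

/* Rough usable capacity of one 2400-foot reel: every block pays an
   inter-block gap, so small blocks waste most of the tape. */
#include <stdio.h>

int main(void) {
    double tape_in = 2400.0 * 12.0;  /* 2400 feet of tape, in inches  */
    double bpi     = 800.0;          /* assumed 9-track density       */
    double gap_in  = 0.75;           /* older 3/4" inter-block gap    */
    double block   = 800.0;          /* assumed block size, in bytes  */

    double per_block = block / bpi + gap_in;   /* inches used per block */
    double blocks    = tape_in / per_block;
    printf("~%.1f MB per reel\n", blocks * block / 1e6);
    return 0;
}

With those assumptions it works out to roughly 13MB per reel, so a 20M
drive already needs more than one; with tiny card-image blocks the gaps
dominate and it really does take several reels.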

Ugh, I have been trying to forget that for years.  I guess I never
will now. ... Fast drives had vacuum columns to keep the tape properly
tensioned over the read/write heads and to wind it back onto the reels
with the correct tension.  Cheaper/slower drives used mechanical
tension arms to do the same job.

Virtual memory and faster processors/memory changed the face of
computing.  Reduction in size has done it again.  Every time the
pundits come up with a new 'wall' for Moore's Law, something comes
along to 'extend its life for a little while longer'.  Silicon won over
germanium; something will eventually win over silicon, and direct write
will change things again.

It has been wonderful to see all the changes that occurred in my working 
lifetime, some expected,
many not.  I hope to be around to see the next 35+ years of change!

Jason Orendorff wrote:
> On Wed, Nov 19, 2008 at 9:41 PM, Jack Coats <[EMAIL PROTECTED]> wrote:
>   
>> Tapes for working data etc were the technology of the day.  Archival,
>> working and temp storage, even bringing in bits of programs (overlays)
>> from tape was not that unusual.  As hardware has gotten cheaper, we
>> throw that at a problem rather than brain cells, [...]
>>     
>
> In our defense, brain cells are precious.
>
> So is time. Apparently different eras have very different views on
> trading space for speed.  For example, Knuth's discussion of dynamic
> memory allocation talks about best-fit and first-fit algorithms.  This
> is good stuff, still useful--but you really don't want to implement
> malloc this way, not today.  For that you want size-classes and
> freelists, because they avoid fragmentation and they are much, much
> faster.
>
> In other words, if you needed a dynamic memory allocator, and you went
> straight to Knuth, you would be led horribly astray.  You would be
> scraping for bytes, when bytes are in plentiful supply and milliseconds
> are not.
>
> I need to keep plugging at it, but it's a lot of extra thinking to figure
> out what's still relevant, and in what situations.
>
> -j
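For anyone who hasn't run into the size-class / freelist approach Jason
mentions, here is a minimal sketch in C.  The names (sc_alloc,
class_index, and so on) are mine, and real allocators such as dlmalloc,
jemalloc, or tcmalloc do a great deal more; this only shows why the
fast path is a constant-time pop off a per-size list instead of a
search through free blocks:

/* Minimal sketch of a size-class / freelist allocator.  Hypothetical
   names; not how any particular production allocator is written. */
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>

#define NUM_CLASSES 8                       /* 16, 32, 64, ... 2048 bytes */

typedef struct free_node { struct free_node *next; } free_node;

static free_node *freelist[NUM_CLASSES];    /* one freelist per class */

/* Map a request to the smallest size class that fits it. */
static int class_index(size_t size) {
    size_t cls = 16;
    for (int i = 0; i < NUM_CLASSES; i++, cls <<= 1)
        if (size <= cls) return i;
    return -1;                              /* too big for any class */
}

static size_t class_size(int idx) { return (size_t)16 << idx; }

void *sc_alloc(size_t size) {
    int idx = class_index(size);
    if (idx < 0) return malloc(size);       /* oversize: punt to malloc */
    if (freelist[idx]) {                    /* fast path: pop a free block */
        free_node *n = freelist[idx];
        freelist[idx] = n->next;
        return n;
    }
    /* Slow path: carve a fresh block of the rounded-up class size.
       A real allocator would grab a whole page and split it up. */
    return malloc(class_size(idx));
}

void sc_free(void *p, size_t size) {
    int idx = class_index(size);
    if (idx < 0) { free(p); return; }
    free_node *n = p;                       /* push back onto the class list */
    n->next = freelist[idx];
    freelist[idx] = n;
}

int main(void) {
    char *a = sc_alloc(100);    /* served from the 128-byte class */
    sc_free(a, 100);
    char *b = sc_alloc(120);    /* same class: reuses the freed block, O(1) */
    printf("%s\n", a == b ? "block reused from freelist" : "fresh block");
    sc_free(b, 120);
    return 0;
}

The trade-off is internal fragmentation: a 100-byte request burns a
128-byte block.  That is exactly the bytes-for-milliseconds trade Jason
is describing.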
