At 15:58 29.10.98 +0100, Bojan Antonovic wrote:
>> So far, the drive towards more bits in the instruction code is driven by
>> the need to address more data rather than the need to compute longer
>> integers or higher numeric precision. And this is usually driven by two
>> factors: the computer's RAM size and the biggest database size required
>> commercially.
>
>Really? But with a 64-bit or 128-bit address space you can address every
>atom on the earth ...
Some of the IBM mainframes had > 32-bit addressing (I believe 48) several
years ago. (MVS/XA? MVS/EXA? Some acronym like that...)
These are devices that run terabyte disk farms, and MEMORY-MAP the disks;
each physical device gets its own address range in "physical memory".
I remember in particular a friend who told me about his need to copy a
disk pack from one disk to another, a process which for some reason needed
a third disk; he simply told the OS to regard a particular chunk of (virtual)
memory as a disk drive.
Since the OS didn't care whether a particular "treated as a disk" part of
its address range was mapped to RAM, disk, or virtual memory, it all worked.
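A toy sketch of that anecdote, using Python's mmap: treat two "disk pack"
image files as plain byte ranges in the address space, so a pack-to-pack copy
becomes an ordinary memory move. The file names, the 4 KiB pack size, and the
fill bytes are all made up for illustration; a real mainframe disk farm
obviously works at a very different scale.

```python
import mmap
import os
import tempfile

PACK_SIZE = 4096  # pretend each "disk pack" is one 4 KiB extent (invented size)

def make_pack(path, fill):
    """Create a fixed-size 'disk pack' image filled with one byte value."""
    with open(path, "wb") as f:
        f.write(bytes([fill]) * PACK_SIZE)

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "pack_a.img")
dst_path = os.path.join(tmp, "pack_b.img")
make_pack(src_path, 0xAA)   # source pack, full of 0xAA
make_pack(dst_path, 0x00)   # destination pack, zeroed

with open(src_path, "r+b") as src, open(dst_path, "r+b") as dst:
    src_map = mmap.mmap(src.fileno(), PACK_SIZE)
    dst_map = mmap.mmap(dst.fileno(), PACK_SIZE)
    # Once both packs live in the address space, "copy disk to disk"
    # is indistinguishable from a memory-to-memory move.
    dst_map[:] = src_map[:]
    dst_map.flush()
    src_map.close()
    dst_map.close()

assert open(dst_path, "rb").read() == bytes([0xAA]) * PACK_SIZE
print("pack copied via the address space")
```

The point is the same one the OS made: once everything is mapped into one
address range, the code doing the copy neither knows nor cares what backs it.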
There have been proposed architectures for multiprocessor machines in
which effectively every machine on the Internet could possibly be mapped
into every other machine's address space.
With architectures like that, you can burn through an awful lot of
addressing bits in a hurry.
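Back-of-the-envelope arithmetic makes the bit budget concrete. The field
widths below (48 bits of per-machine space, 32 bits of host id) are invented
for illustration, not taken from any actual proposal:

```python
# How fast addressing bits disappear once every machine on a network
# is mapped into every other machine's address space.
local_bits = 48          # per-machine address space: 2**48 bytes = 256 TiB
host_bits = 32           # host-id field, early-Internet scale
total_bits = local_bits + host_bits

print(f"{host_bits}-bit host id + {local_bits}-bit local address "
      f"= {total_bits} bits")  # 80 bits: already past a flat 64-bit space

# Going the other way: a flat 64-bit space leaves only 16 bits of
# host id if each machine keeps its full 48-bit local space.
leftover_bits = 64 - local_bits
print(f"64-bit space leaves room for 2**{leftover_bits} "
      f"= {2 ** leftover_bits} hosts")  # 65536 hosts
```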
(One of my personal peeves with the Intel architecture: the CPU-level
separation into "I/O" and "memory" address spaces doesn't make sense -
especially when you have buffer memory, DMA memory and so on anyway. But
IMNSHO, Intel knows (knew?) as much about elegant processor design as
Microsoft knows about elegant OS design....)
Harald A
--
Harald Tveit Alvestrand, Maxware, Norway
[EMAIL PROTECTED]