Christopher Smith wrote:
David Brown wrote:
With a 64-bit architecture, just memory mapping the hard drive can be an
excellent way to implement a filesystem.
That seems true, but desktop processors tend not to implement the full 2^64 virtual address space (current x86-64 parts expose only 48 bits of it), and at the same time they tend to have large drives attached. While all is well right now, I could see us hitting problems at some point similar to the ones we hit at the end of the 2^32 era. Another problem, of course, is programs that need the virtual address space for other things, although if I remember the Mach driver model correctly, the driver gets its own address space, so that may be a non-issue. Finally, one should not ignore the overhead of managing all those page tables...

I dunno. I don't think it's as clear-cut a bad idea as others have implied, but it's not without its shortcomings.

Well, you're not likely to run out of space, if that's what you're implying. 2^64 is large even compared with Moore's-law growth from where we are now. 10^12 is approx 2^40. You need 24 more doublings, so we would need 24 more years (at least) to hit that kind of limit, and I believe there are some energy considerations before we get there.

The bigger issue is simply that disk doesn't act like memory and attempting to make it do so breaks the abstraction. The memory subsystem is tuned for accessing memory, and you have to put in nasty hacks to start working around those tunings to make going to disk via memory map work.

Want proof? Go take a look at the XFS on ARM fiasco. They're *still* trying to work out all of the kinks in mmap() to make it work. Linux has a lot of x86 assumptions baked into both memory and disk subsystems. Stripping those out slows x86 performance, so people have had a very hard time convincing Linus to take the kernel changes required (a couple new invalidation options need to be added to mmap(), but supporting them on x86 slows things down).

Deciding when an abstraction is useful is a tough problem. There was a time when the idea that disk and memory are "random access" was good enough and you could treat disk like memory. I'm pretty sure that doesn't work anymore. If anything, we need to start treating memory more like a disk (random access with slow, error-prone burstable response).

-a

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
