Andrew Lentvorski wrote:
Christopher Smith wrote:
David Brown wrote:
With a 64-bit architecture, just memory mapping the hard drive can be an
excellent way to implement a filesystem.
That seems true, but desktop processors tend not to be able to manage the full 2^64 virtual address space, and at the same time they tend to have large drives attached to them. While all is well right now, I could see us encountering problems at some point similar to the ones we hit at the end of the 2^32 era. Another problem, of course, is programs that need the virtual address space for other things, although if I remember the Mach driver model correctly, the driver gets its own address space, so that may be a non-issue. Finally, one should not ignore the overhead of managing all those page tables....
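
(For concreteness, here is roughly what "memory mapping the hard drive" looks like at the system-call level: one big mmap() over the block device, after which filesystem code treats the disk as a byte array. This is my own minimal sketch, not code from anyone in the thread; the device path and the Linux-specific BLKGETSIZE64 ioctl are assumptions.)

    /* Sketch: map an entire (hypothetical) block device and treat it
     * as one big byte array.  Needs root, and a real device path. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fs.h>      /* BLKGETSIZE64 (Linux-specific) */
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/sdb", O_RDWR);          /* hypothetical device */
        if (fd < 0) { perror("open"); return 1; }

        unsigned long long size = 0;
        if (ioctl(fd, BLKGETSIZE64, &size) < 0) {   /* device size in bytes */
            perror("ioctl"); return 1;
        }

        /* One mapping for the whole drive: this is exactly where the
         * virtual address space objection bites -- a multi-terabyte
         * device eats a multi-terabyte slice of the usable VA range. */
        unsigned char *disk = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        if (disk == MAP_FAILED) { perror("mmap"); return 1; }

        /* Filesystem code would now poke at 'disk' directly,
         * e.g. read a superblock at a fixed offset. */
        printf("first byte of the device: 0x%02x\n", (unsigned)disk[0]);

        munmap(disk, size);
        close(fd);
        return 0;
    }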

I dunno. I don't think it is as clear-cut a bad idea as others have implied, but it is not without its shortcomings.

Well, you're not likely to run out of space, if that's what you're implying. 2^64 is large even compared with Moore's-law growth from where we are now. 10^12 is approximately 2^40, so you need about 24 more doublings; at Moore's-law rates that is at least 24 more years before we hit that kind of limit, and I believe there are some energy considerations before we get there.
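
(A quick back-of-the-envelope check of that figure, assuming today's drives are on the order of 1 TB, i.e. 10^12 bytes; my numbers, compile with -lm:)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double today = 1e12;            /* ~1 TB drive, in bytes */
        double limit = ldexp(1.0, 64);  /* 2^64 bytes of address space */
        printf("doublings needed: %.1f\n", log2(limit / today)); /* ~24.1 */
        return 0;
    }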
You missed my primary point: few if any 64-bit processors can manage the full 2^64 virtual address space. Heck, the first Alphas couldn't handle more than 2^36 or 2^38 (I can't remember which). Sure, they were using 64-bit pointers, but you had to zero out the top bits. This remains the reality to this day (although address spaces are obviously larger): it is entirely possible to hit virtual address space limits *well before* 2^64, even on 64-bit architectures.
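
(You can watch this happen. The probe below is my own sketch for a 64-bit Linux box: it asks for ever-smaller PROT_NONE address-space reservations until one succeeds, and on stock x86-64 hardware with 48-bit virtual addresses it gives out far below 2^64.)

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Try ever-smaller reservations until one succeeds (64-bit host assumed). */
        for (int bits = 63; bits >= 30; bits--) {
            size_t len = (size_t)1 << bits;
            void *p = mmap(NULL, len, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (p != MAP_FAILED) {
                printf("largest single reservation that worked: 2^%d bytes\n", bits);
                munmap(p, len);
                return 0;
            }
        }
        printf("no reservation succeeded at all\n");
        return 0;
    }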
The bigger issue is simply that disk doesn't act like memory, and attempting to make it do so breaks the abstraction. The memory subsystem is tuned for accessing memory, and you have to put in nasty hacks to work around those tunings to make going to disk through a memory map work.
Well, if you have virtual memory that is backed by disk, then you have to address this issue sooner or later, so the argument is that you get it right once, ugly performance hacks and all, and then just let the paging machinery do its thing.
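
(For a flavour of what those hacks look like from user space, this is the kind of madvise()/msync() sprinkling you end up doing once file data is accessed through a mapping. My own illustration; the file name is made up.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bigfile.dat", O_RDWR);       /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Tell the VM we'll scan this linearly, so read-ahead kicks in... */
        madvise(p, st.st_size, MADV_SEQUENTIAL);

        long page = sysconf(_SC_PAGESIZE);
        for (off_t i = 0; i < st.st_size; i += page)
            p[i] ^= 0xff;                   /* dirty one byte per page */

        /* ...force dirty pages out instead of waiting for the flusher... */
        msync(p, st.st_size, MS_SYNC);

        /* ...and tell the VM we won't be needing those pages again soon. */
        madvise(p, st.st_size, MADV_DONTNEED);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }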
Want proof?  Go take a look at the XFS on ARM fiasco.
Yeah, so you've completely conflated some very distinct arguments here. Yes, Linux has some x86 assumptions (and to a large degree, a fairly simplified x86 model at that) baked into its memory and disk subsystems. For a lot of cases, those assumptions have actually worked out very well even for non-x86 platforms. The XFS codebase was built from its own set of extensive assumptions that originally applied to the IRIX kernel, and the code has since been ported around between 64-bit MIPS, IA64, and x86-64. Shocking, then, that with all this patching and repurposing of code, all while maintaining 100% backward compatibility, you'd finally hit upon a platform where the code was difficult to debug and clean up. That's got very little to do with the issue of memory mapping. Heck, *Linux* doesn't require that block device drivers memory map their devices.
Deciding when an abstraction is useful is a tough problem. There was a time when the idea that disk and memory are "random access" was good enough and you could treat disk like memory. I'm pretty sure that doesn't work anymore. If anything, we need to start treating memory more like a disk (random access with slow, error-prone burstable response).
Actually, there has been a pretty compelling argument raised that what you really need to do these days is treat disk as tape. Treating memory as disk is a novel idea; you'd need to do all kinds of hackery under the covers to make it work anywhere near efficiently, but once you'd done that, you'd have a great codebase for building a Mach-based system. ;-)
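
(Loosely, "treat disk as tape" means append-only writes and sequential replay, never in-place updates. A toy sketch of that access pattern, file name mine, not anything from the codebases discussed above:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Writes only ever go to the tail of the log... */
        int log = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (log < 0) { perror("open"); return 1; }

        const char rec[] = "put key=42 value=hello\n";
        if (write(log, rec, sizeof rec - 1) < 0) { perror("write"); return 1; }
        fsync(log);                 /* tape-style: commit in order, in bulk */
        close(log);

        /* ...and reads replay it front to back, never seeking randomly. */
        FILE *in = fopen("journal.log", "r");
        if (!in) { perror("fopen"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, in))
            fputs(line, stdout);
        fclose(in);
        return 0;
    }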

--Chris
