But without virtual memory, many combinations of applications that work acceptably now would simply fail to run at all. Virtual memory itself isn't the issue. Also, an OS could fairly easily be set up so that an application that starts to thrash its virtual memory is dropped in priority for getting memory, and even for getting pages swapped in, so that other applications are only minimally impacted.
One of the issues is due to the Linux fork/exec model. If a process wants to start another process that runs in parallel with it, Linux (to my understanding) doesn't have an easy call that simply starts a brand new process with parameters you supply. Instead, a process forks itself, creating two identical copies; one copy continues on, and the other execs the new program, replacing itself with the desired process.

The act of forking SHOULD allocate all the virtual memory for the copy of the process, but that takes a bit of time. Because most of the time all that memory is just going to be released a couple of instructions later, it made sense to postpone the actual allocation until the memory was actually used (which it likely never was). This 'optimization' was so 'complete' that the system didn't really keep track of how much memory had been promised to the various processes, so the system allowed itself to overcommit memory. If it actually did run out, it had no good way to determine who was at fault, and no way to tell a process that memory promised to it earlier isn't really available.

Fixing the issue is more of a political problem than a technical one. With the current system, when a problem arises you can normally find a user program, or something the user did, that was 'bad' and can be blamed for the problem. If the system were changed to not allow overcommitting, then forking would be slower, which hits all of the standard system routines.
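For anyone who hasn't seen the pattern spelled out, here is a minimal sketch of the fork/exec sequence described above; the program being launched (/bin/ls -l) and its arguments are just placeholders for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();        /* duplicate the calling process */

    if (pid < 0) {
        perror("fork");        /* fork itself can fail, e.g. EAGAIN or ENOMEM */
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: immediately replace the duplicated image with a new
         * program.  Everything fork just "copied" is thrown away here,
         * which is why copy-on-write / lazy allocation looked so
         * attractive in the first place. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");       /* only reached if the exec failed */
        _exit(EXIT_FAILURE);
    }

    /* Parent: keep running in parallel, then reap the child. */
    int status;
    waitpid(pid, &status, 0);
    return EXIT_SUCCESS;
}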
> On Dec 9, 2019, at 8:39 AM, Digital Dog <digitald...@gmail.com> wrote:
>
> For reasons which you've described I'm a big fan of removing virtual memory
> from CPUs altogether. That would speed up things considerably.
>
>> On Sun, Dec 8, 2019 at 6:43 PM James K. Lowden <jklow...@schemamania.org>
>> wrote:
>>
>> On Sat, 7 Dec 2019 05:23:15 +0000
>> Simon Slavin <slav...@bigfraud.org> wrote:
>>
>>> (Your operating system is allowed to do this. Checking how much
>>> memory is available for every malloc takes too much time.)
>>
>> Not really. Consider that many (all?) operating systems before Linux
>> that supported dynamic memory returned an error if the requested amount
>> couldn't be supplied. Some of those machines had 0.00001% of the
>> processing capacity, and yet managed to answer the question reasonably
>> quickly.
>>
>> The origin of oversubscribed memory rather has its origins in the
>> changed ratio of the speed of RAM to the speed of I/O, and the price of
>> RAM.
>>
>> As RAM prices dropped, our machines got more RAM and the bigger
>> applications that RAM supported. As memory got faster, relatively, the
>> disk (ipso facto) has gotten slower. Virtual memory -- the hallmark of
>> the VAX, 4 decades ago -- has become infeasibly slow both because
>> the disk is relatively slower than it was, and because more is being
>> demanded of it to support today's big-memory applications. Swapping in
>> Firefox, at 1 GB of memory, who knows why, is a much bigger deal than
>> Eight Megabytes and Constantly Swapping.
>>
>> If too much paging makes the machine too slow (however measured) one
>> solution is less paging. One administrative lever is to constrain how
>> much paging is possible by limiting the paging resource: swap space.
>> However, limiting swap space may leave the machine underutilized,
>> because many applications allocate memory they never use.
>>
>> Rather than prefer applications that use resources rationally or
>> administer machines to prevent thrashing, the best-effort, least-effort
>> answer was lazy allocation, and its infamous gap-toothed cousin, the
>> OOM.
>>
>> Nothing technical mandates oversubscribed memory. The problem, as
>> ever, is not with the stars, but with ourselves.
>>
>> --jkl
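To make the lazy-allocation point above concrete, here is a rough sketch you can try, assuming a 64-bit Linux box with the default vm.overcommit_memory = 0 setting; the 1 GiB chunk size and the 256-iteration cap are arbitrary numbers chosen only for the demonstration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 1UL << 30;      /* ask for 1 GiB per request */
    size_t granted = 0;

    for (int i = 0; i < 256; i++) {      /* up to 256 GiB in total */
        if (malloc(chunk) == NULL)       /* never written to, so almost no
                                            physical pages are consumed */
            break;
        granted += chunk;
    }

    /* On an overcommitting kernel this total will usually dwarf RAM + swap,
     * because nothing has actually been reserved.  Had we memset() each
     * chunk, the kernel would have to find real pages, and once it ran out
     * the OOM killer would pick a victim, not necessarily this process.
     * The leaked chunks are reclaimed when the process exits. */
    printf("kernel promised %zu GiB without reserving any of it\n",
           granted >> 30);
    return EXIT_SUCCESS;
}

Setting vm.overcommit_memory to 2 makes the kernel account for what it has promised and fail the malloc() calls up front near the commit limit, which is roughly the older behaviour described above, at the price of making fork()-heavy workloads reserve far more commit charge than they will ever touch.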