On 7/8/20 12:15 PM, John M. Harris Jr wrote:
>> I'd rather crash and restart where I left off than have
>> the computer drag me along trying to save my application.
> Sorry, what? Why would your data not be on your system? What about "the
> modern way of computing" would move your data from your system to
> something else? I'd rather not see software crash, and risk data loss,
> or have my system "drag me along".

I am talking about every process that has enough safeguards to be effectively idempotent, either because it doesn't use local data or because it saves its state often enough to have reliable, resumable checkpoints. Here are a couple of examples:

 * browsing, because the browser both displays remote data and saves
   its state (tabs and whatnot)
 * make -j 30
 * even emacs editing, because emacs saves the buffers when it's killed

If the computer gets in trouble doing those things, I don't want it to do heroics trying to recover---it's OK to abort and retry. I think 'modern cloud computing' has, for many reasons, to be like that: resilient to failures and idempotent.
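The checkpoint-and-resume pattern behind those examples can be sketched in a few lines of Python. This is a minimal sketch: the state file name, its layout, and the `run`/`work` split are made up for illustration, not taken from any of the programs above.

```python
import json
import os
import tempfile

STATE_FILE = "checkpoint.json"   # hypothetical name, just for this sketch

def save_checkpoint(state):
    """Write the state atomically so a kill mid-write never leaves a torn file."""
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)  # atomic rename on POSIX

def load_checkpoint():
    """Resume from the last saved state, or start from scratch."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"done": 0}

def run(items, work):
    """Process items in order; killing this at any point loses at most one item."""
    state = load_checkpoint()
    for i in range(state["done"], len(items)):
        work(items[i])           # the per-item work must itself be idempotent
        state["done"] = i + 1
        save_checkpoint(state)
    return state["done"]
```

Kill it at any point and rerun: it picks up at the first unfinished item, which is exactly why aborting such a process is cheap.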

> Really, this is starting to sound like it's more of an issue with web
> browsers, and less of a problem with our current configurations, without
> EarlyOOM needlessly killing things.
> [...]
> Currently, pages that haven't been used in a while are the ones that
> would get swapped out first, which I'm sure we can all agree is the most
> sane option. Your GIMP example is accurate, but that'll take a fraction
> of a second.
Argumentative, Your Honor! It's not just an issue with web browsers---you say so yourself a few lines further down. It happens with every program that uses big data: GIMP with lots of images, FreeCAD with a complex geometry, rmaxima with a combinatorially exploding symbolic expression, even your editor when you read in the entire /var/log/httpd/access_log against your better judgement. Literally all of those examples happened to me fairly recently: the system went unresponsive, essentially requiring a hard reset, whereas the preferred outcome would have been to abort the runaway task.
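For reference, the policy a tool like earlyoom applies can be sketched roughly as follows. The threshold values and the simple "both RAM and swap low" rule are simplifying assumptions for illustration; the actual daemon is more careful about victim selection (it consults oom_score) and signalling (SIGTERM before SIGKILL).

```python
def meminfo():
    """Parse /proc/meminfo into a dict of kB values."""
    out = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            out[key] = int(rest.split()[0])  # value in kB
    return out

def should_kill(info, mem_pct=10, swap_pct=10):
    """Kill early: both available RAM *and* free swap are below thresholds.

    Percentages are illustrative defaults, not earlyoom's shipped values.
    """
    mem_low = info["MemAvailable"] * 100 < info["MemTotal"] * mem_pct
    swap_total = info.get("SwapTotal", 0)
    swap_low = swap_total == 0 or info["SwapFree"] * 100 < swap_total * swap_pct
    return mem_low and swap_low
```

The point of acting on MemAvailable rather than waiting for the kernel OOM killer is to abort the runaway task *before* the system thrashes itself unresponsive.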
>> One way to think about it is that disk is tens of thousands of times
>> slower than RAM. If you need to use it, your system is commensurately
>> slower. That's why zram is such a good idea. Swap was always a
>> tradeoff: you saved $'s not spent on RAM, and paid with your time
>> sitting idle waiting for the computer.
> Well, no. It's not "tens of thousands times slower than RAM". If you
> need to use it, you're swapping in a few pages at a time, not the entire
> contents of swap. Swap isn't a replacement for RAM. It's an optimization
> that doesn't waste RAM needlessly.

I think we both understand what the other person is trying to say, to the point where no further explanations are needed. Having said that, I'd prefer it if you would qualify and augment my statements instead of denying them. I stand by both of them:

 * disk access is literally O(10000) times slower than RAM access
 * swap is a cheap substitute for RAM, with the right swap/RAM mix
   determined by cost-benefit considerations
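As a back-of-the-envelope check on that first point (the latency figures below are typical published order-of-magnitude numbers, assumed here for illustration, not measurements of any particular system):

```python
# Rough random-access latencies, order of magnitude only (assumed figures).
DRAM_ACCESS = 100e-9  # ~100 ns for a DRAM access
NVME_READ   = 100e-6  # ~100 us for a random 4K NVMe read
HDD_SEEK    = 10e-3   # ~10 ms for a spinning-disk seek

print(f"NVMe vs RAM: ~{NVME_READ / DRAM_ACCESS:,.0f}x slower")
print(f"HDD  vs RAM: ~{HDD_SEEK / DRAM_ACCESS:,.0f}x slower")
```

So O(10000) sits right between a fast SSD (~10^3) and a spinning disk (~10^5), which is why a system paging heavily feels orders of magnitude slower even though each individual page-in is small.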

You're right that there's a sweet spot where swap just provides a buffer for occasional peak demand---but this entire discussion results from complaints about system behavior under heavy swap use, when swap is being an inadequate replacement for the needed RAM.

_______________________________________________
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
