On Thursday, September 01, 2016 09:35:15 AM Rich Freeman wrote:
> On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol <mike...@gmail.com> wrote:
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
> 
> It depends on whether you tend to yank out drives without unmounting
> them,

The sad truth is that many (most?) users don't understand the idea of 
unmounting. Even Microsoft largely gave up, defaulting removable drives to 
"quick removal" (write caching off) rather than "better performance". While 
it'd be nice if the average John Doe would follow instructions, anyone who's 
worked in IT understands that the average John Doe...doesn't. And above-average 
ones assume they know better and don't have to.

As such, queuing up that much data while reporting to the user that the copy 
is already complete violates the principle of least surprise.
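For what it's worth, a copy tool can avoid surprising the user by not reporting success until the data is actually on disk. A minimal sketch in Python (the function name and the helper are mine, not any real tool's API):

```python
import os
import shutil

def durable_copy(src: str, dst: str) -> None:
    """Copy src to dst, returning only once the bytes have hit the disk."""
    shutil.copyfile(src, dst)
    # Flush the file's data out of the page cache...
    fd = os.open(dst, os.O_RDONLY)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)
    # ...and the directory entry, so the new name survives a power cut too.
    dfd = os.open(os.path.dirname(os.path.abspath(dst)), os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

With that, "copy finished" and "safe to yank the drive" mean the same thing, at the cost of waiting for the device.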

> or if you have a poorly-implemented database that doesn't know
> about fsync and tries to implement transactions across multiple hosts.

I don't know off the top of my head which database implementations do that, 
though I can think of a dozen that would be vulnerable if they didn't sync 
properly.

The real culprits that come to mind, for me, are copy tools, whether it's dd, 
mv, cp, or a copy dialog in GNOME or KDE. I would love to see CoDel-style, 
time-based buffer sizes applied throughout the stack. The user may not care, 
on the face of it, how many milliseconds it takes for a read to turn into a 
completed write, but they do like accurate time estimates and a low-latency 
UI.
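One way a copy tool could apply the time-based idea itself: fsync whenever the unflushed backlog exceeds what the device can retire in some small time budget, so the progress bar never gets far ahead of the disk. A sketch (the 50 MB/s throughput figure is an assumption for illustration, not a measurement):

```python
import os

def copy_bounded(src, dst, throughput_bps=50_000_000, target_s=0.1):
    """Copy src to dst, fsync()ing whenever the unflushed backlog
    exceeds roughly target_s worth of writes at the assumed speed."""
    budget = int(throughput_bps * target_s)  # ~5 MB at 50 MB/s and 100 ms
    dirty = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(1 << 20)  # 1 MiB reads
            if not chunk:
                break
            fout.write(chunk)
            dirty += len(chunk)
            if dirty >= budget:
                fout.flush()
                os.fsync(fout.fileno())  # progress now reflects reality
                dirty = 0
        fout.flush()
        os.fsync(fout.fileno())
```

A real tool would measure throughput as it goes rather than hard-code it, but even this crude version keeps the "done" report honest to within the time budget.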

> 
> The flip side of all of this is that you can save-save-save in your
> applications and not sit there and watch your application wait for the
> USB drive to catch up.  It also allows writes to be combined more
> efficiently (less of an issue for flash, but you probably can still
> avoid multiple rounds of overwriting data in place if multiple
> revisions come in succession, and metadata updating can be
> consolidated).

I recently got bit by vim's easytags causing saves to take a couple dozen 
seconds, leading me not to save as often as I used to. And then a bunch of 
code I wrote Monday...wasn't there any more. I was sad.

> 
> For a desktop-oriented workflow I'd think that having nice big write
> buffers would greatly improve the user experience, as long as you hit
> that unmount button or pay attention to that flashing green light
> every time you yank a drive.

Realistically, users aren't going to pay attention. You and I do, but that's 
because we understand the *why* behind the importance.

I love me some fat write buffers for write combining, page caching, etc. But, 
IMO, it shouldn't take longer than 1-2s (barring spinning-rust disk wake-up) 
for full buffers to flush to disk; at modern write speeds, even a slow 
spinning disk can retire tens of megabytes in that time, which is plenty big 
for write-combining purposes.
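The tuning I'm describing is easy to express: pick a flush-time budget, multiply by the slowest device's write speed, and feed the result to vm.dirty_bytes. A sketch (the 50 MB/s figure and the halving for the background threshold are illustrative assumptions; measure your own disks):

```python
def dirty_limits(write_bps=50_000_000, flush_budget_s=1.5):
    """Suggest vm.dirty_bytes / vm.dirty_background_bytes so that
    flushing a full buffer takes roughly flush_budget_s."""
    dirty = int(write_bps * flush_budget_s)
    background = dirty // 2  # start writeback well before the hard limit
    return {
        "vm.dirty_bytes": dirty,
        "vm.dirty_background_bytes": background,
    }

for key, val in dirty_limits().items():
    print(f"sysctl -w {key}={val}")
```

Note that setting vm.dirty_bytes zeroes vm.dirty_ratio and vice versa; the kernel honors whichever pair was set last.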

-- 
:wq
