On Thursday, September 01, 2016 04:21:18 PM J. Roeleveld wrote:
> On Thursday, September 01, 2016 08:41:39 AM Michael Mol wrote:
> > On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> > > On 31/08/2016 17:25, Grant wrote:
> > > >> Which NTFS system are you using?
> > > >> 
> > > >> ntfs kernel module? It's quite dodgy and unsafe with writes
> > > >> ntfs-3g on FUSE? I find that one quite solid
> > > > 
> > > > I'm using ntfs-3g as opposed to the kernel option(s).
> > > 
> > > I'm offering 10 to 1 odds that your problems came from ... one that you
> > > yanked too soon
> > 
> > (pardon the in-line snip, while I get on my soap box)
> > 
> > The likelihood of this happening can be greatly reduced by setting
> > vm.dirty_bytes to something like 2097152 (2MiB) and
> > vm.dirty_background_bytes to something like 1048576 (1MiB). This
> > prevents the kernel from queuing up as much data on its way to disk.
> > The application doing the copy or write will normally report
> > "complete" long before writes to slow media are actually...complete.
> > Setting vm.dirty_bytes to something low keeps the kernel's backlog of
> > unwritten data short.
> > 
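
To make that concrete, something like this (the exact values are
illustrative, not gospel) applies it at runtime:

    # throttle writers at ~2MiB of dirty data, start background
    # writeback at ~1MiB
    sysctl -w vm.dirty_bytes=2097152
    sysctl -w vm.dirty_background_bytes=1048576

and the same pair in /etc/sysctl.conf (or a snippet under /etc/sysctl.d/)
makes it stick across reboots:

    vm.dirty_bytes = 2097152
    vm.dirty_background_bytes = 1048576
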
> > vm.dirty_bytes has a closely-related setting, vm.dirty_ratio, which is
> > a percentage of total RAM that may be used for dirty pages instead of
> > an absolute byte count. The two are mutually exclusive: if
> > vm.dirty_ratio is set, vm.dirty_bytes will read 0, and if
> > vm.dirty_bytes is set, vm.dirty_ratio will read 0.
> > 
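
You can check which of the two is in effect; on a box with stock settings
it'll typically look something like:

    $ sysctl vm.dirty_bytes vm.dirty_ratio
    vm.dirty_bytes = 0
    vm.dirty_ratio = 20
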
> > The default is for vm.dirty_ratio to be 20, which means up to 20% of
> > your memory can find itself used as a write buffer for data on its way
> > to a filesystem. On a system with only 2GiB of RAM, that's 409MiB of
> > data that the kernel may still be waiting to push through the
> > filesystem layer! If you're writing to, say, a Class 10 SDHC card, the
> > data may not be at rest for another 40s after the application reports
> > the copy operation is complete!
> > 
> > If you've got a system with 8GiB of memory, multiply all that by four.
> > 
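
(Spelling out the arithmetic: 0.20 x 2048MiB is about 410MiB, and Class 10
only guarantees 10MB/s of sequential write, so roughly 410MiB / 10MB/s ~= 40
seconds of data can still be in flight after the progress bar hits 100%. At
8GiB of RAM that becomes ~1.6GiB and closer to three minutes.)
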
> > Those defaults (vm.dirty_ratio=20, vm.dirty_background_ratio=10, with
> > the _bytes variants unset) are, IMO, badly broken and an insidious
> > source of problems for both regular Linux users and system
> > administrators.
> 
> I would prefer to be able to have different settings per disk.
> For swappable drives like USB, I would use small numbers.
> But for built-in drives, I'd prefer to keep the default values, or tune
> them to the actual drive.

The problem is that that's not really possible. vm.dirty_bytes and
vm.dirty_background_bytes control the page cache, which sits at the VFS
layer, not at the block device layer. It could certainly make sense to
apply them on a per-mount basis, though.

-- 
:wq
