On 2018-03-13 15:36, Goffredo Baroncelli wrote:
On 03/12/2018 10:48 PM, Christoph Anton Mitterer wrote:
On Mon, 2018-03-12 at 22:22 +0100, Goffredo Baroncelli wrote:
Unfortunately no, the likelihood might be 100%: there are some
patterns which trigger this problem quite easily. See The link which
I posted in my previous email. There was a program which creates a
bad checksum (in COW+DATASUM mode), and the file became unreadable.
But that rather seems like a plain bug?!
You are right; unfortunately it seems that it is catalogued as WONT-FIX :(
No reason that would conceptually make checksumming+notdatacow
impossible.
AFAIU, the conceptual thing would be:
- data is written in nodatacow
=> thus a checksum must be written as well, so write it
- what can then of course happen is
- both csum and data are written => fine
- csum is written but data not and then some crash => csum will show
that => fine
- data is written but csum not and then some crash => csum will give
false positive
Still better a few false positives than many unnoticed data corruptions
and no true RAID repair.
A checksum mismatch is returned as -EIO by the read() syscall. This is an event
that most programs handle badly.
E.g. suppose that a page of a VM RAM image file has a wrong checksum. When the
VM starts, it tries to read the page, gets -EIO and aborts. It may not even
print which page is corrupted. In this case, how can the user understand the
problem, and what could he do?
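To make the failure mode concrete, here is a minimal sketch (plain C, with a
hypothetical image path) of what userspace actually sees: read() simply fails
with errno == EIO at the offending offset, and unless the program reports that
offset itself, the user has only the kernel log to go on.

/* Minimal sketch: how a btrfs checksum mismatch surfaces to userspace.
 * The path is hypothetical; error handling is reduced to the essentials. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    off_t off = 0;
    int fd = open("/var/lib/libvirt/images/vm.img", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (;;) {
        ssize_t n = pread(fd, buf, sizeof(buf), off);
        if (n == 0)
            break;                              /* EOF */
        if (n < 0) {
            /* A csum failure shows up as EIO; report *where* it happened,
             * which many programs (and the VM in the example above) do not. */
            fprintf(stderr, "read error at offset %lld: %s\n",
                    (long long)off, strerror(errno));
            close(fd);
            return 1;
        }
        off += n;
    }
    close(fd);
    return 0;
}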
Check the kernel log on the host system, which should have an error
message saying which block failed. If the VM itself actually gets to
the point of booting into an OS (and properly propagates things like
-EIO to the guest environment like it should), that OS should also log
where the error was.
Most of the reason user applications don't tell you where the error was
is because the kernel already does it on any sensible system, and the
kernel tells you _exactly_ where the error was (exact block and device
that threw the error), which user applications can't really do (they
generally can't get sufficiently low-level information to give you all
the info the kernel does).
Again, you are assuming that the likelihood of having a bad checksum
is low. Unfortunately this is not true. There are patterns which
trigger this bug with a likelihood of 100%.
Okay I don't understand why this would be so and wouldn't assume that
the IO pattern can affect it heavily... but I'm not really btrfs
expert.
My blind assumption would have been that writing an extent of data
takes much longer to complete than writing the corresponding checksum.
The problem is the following: there is a time window between the checksum
computation and the writing of the data to the disk (which is done at the lower
level via a DMA channel); if the data is updated in that window, the checksum
will mismatch. This happens if we have two threads, where the first commits the
data to disk and the second one updates it (I think that both VMs and databases
could behave this way).
Though it only matters if you use O_DIRECT or the files in question are
NOCOW.
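Roughly, the pattern looks like the sketch below. This is only an illustration,
assuming an O_DIRECT write to a checksummed (datasum) file; the file name and
the 4 KiB block size are made up. One thread hands the buffer to the kernel
while another keeps modifying it, so the block that lands on disk may no longer
match the checksum that was computed for it.

/* Sketch of the racy pattern: an O_DIRECT write from a buffer that another
 * thread keeps modifying while the write is in flight. On btrfs with datasum,
 * the block that reaches the disk may not match the checksum computed for it.
 * File name and sizes are illustrative; "volatile" is enough for a sketch. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

static char *buf;                 /* 4 KiB-aligned, shared by both threads */
static volatile int stop;

static void *scribbler(void *arg)
{
    (void)arg;
    while (!stop)
        memset(buf, rand() & 0xff, BLK);   /* keeps changing the payload */
    return NULL;
}

int main(void)
{
    pthread_t t;
    int fd = open("testfile", O_CREAT | O_WRONLY | O_DIRECT, 0644);
    if (fd < 0 || posix_memalign((void **)&buf, BLK, BLK))
        return 1;

    pthread_create(&t, NULL, scribbler, NULL);
    for (int i = 0; i < 10000; i++)
        pwrite(fd, buf, BLK, 0);           /* data is checksummed, then DMAed */
    stop = 1;
    pthread_join(t, NULL);
    close(fd);
    return 0;
}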
In btrfs, a checksum mismatch causes an -EIO error on read. In a
conventional filesystem (or a btrfs filesystem w/o datasum) there is no
checksum, so this problem doesn't exist.
I am curious how ZFS solves this problem.
It doesn't support disabling COW or the O_DIRECT flag, so it just never
has the problem in the first place.
However I have to point out that this problem is not solved by COW. COW only
solves the problem of an interrupted filesystem commit, where the data is
updated in place (so it is visible to the user) but the metadata is not.
COW is irrelevant if you're bypassing it. It's only enforced for
metadata so that you don't have to check the FS every time you mount it
(because the way BTRFS uses it guarantees consistency of the metadata).
Even if not... it should only be a problem in case of a crash during
that... and then I'd still prefer to get the false positive over bad
data.
How can you know whether it is "bad data" or a "bad checksum"?
You can't directly. Just like you can't know which copy in a two-device
MD RAID1 array is bad when they mismatch.
That's part of why I'm not all that fond of the idea of having checksums
without COW: you need to verify the data using secondary means anyway,
so why exactly should you waste time verifying it twice?
Anyway... it's not going to happen so the discussion is pointless.
I think people can probably use dm-integrity (which btw: does no CoW
either (IIRC) and still can provide integrity... ;-) ) to see whether
their data is valid.
Not nice, but since it won't change in btrfs, a possible alternative.
Even in this case I am curious how dm-integrity would solve this issue.
dm-integrity uses journaling, and actually based on the testing I've
done, will typically have much worse performance than the overhead of
just enabling COW on files on BTRFS and manually defragmenting them on a
regular basis.
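For what it's worth, the "keep COW and defragment periodically" route boils
down to something like the sketch below. Normally you would just run
btrfs filesystem defragment from a maintenance job; this only shows the
underlying ioctls, with a hypothetical path and minimal error handling.

/* Rough illustration of the "keep COW, defragment periodically" approach.
 * Checks whether the file is NOCOW (and therefore not checksummed), then
 * requests a whole-file defragment via the BTRFS_IOC_DEFRAG ioctl. */
#include <fcntl.h>
#include <linux/btrfs.h>   /* BTRFS_IOC_DEFRAG */
#include <linux/fs.h>      /* FS_IOC_GETFLAGS, FS_NOCOW_FL */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/srv/vm/disk.img", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int flags = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0 && (flags & FS_NOCOW_FL))
        fprintf(stderr, "file is NOCOW, so it is not checksummed\n");

    /* Defragment the whole file (what a periodic maintenance job would do). */
    if (ioctl(fd, BTRFS_IOC_DEFRAG, NULL) != 0)
        perror("BTRFS_IOC_DEFRAG");

    close(fd);
    return 0;
}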