On 03/12/2018 10:48 PM, Christoph Anton Mitterer wrote:
> On Mon, 2018-03-12 at 22:22 +0100, Goffredo Baroncelli wrote:
>> Unfortunately no, the likelihood might be 100%: there are some
>> patterns which trigger this problem quite easily. See the link which
>> I posted in my previous email. There was a program which creates a
>> bad checksum (in COW+DATASUM mode), and the file became unreadable.
> But that rather seems like a plain bug?!
You are right; unfortunately it seems that it was classified as WONTFIX :(

> No reason that would conceptually make checksumming+notdatacow
> impossible.
>
> AFAIU, the conceptual thing would be about:
> - data is written in nodatacow
>   => thus a checksum must be written as well, so write it
> - what can then of course happen is
>   - both csum and data are written => fine
>   - csum is written but data not and then some crash => csum will show
>     that => fine
>   - data is written but csum not and then some crash => csum will give
>     false positive
>
> Still better few false positives, than many unnoticed data corruptions
> and no true raid repair.

A checksum mismatch is returned as -EIO by the read() syscall, an event
that most programs handle badly. For example, suppose that a page of a
VM RAM image file has a wrong checksum. When the VM starts, it tries to
read the page, gets -EIO, and aborts. It may not even be able to print
which page is corrupted. In this case, how does the user understand the
problem, and what can they do?

[....]
>
>> Again, you are assuming that the likelihood of having a bad checksum
>> is low. Unfortunately this is not true. There are patterns which
>> exploit this bug with a likelihood of 100%.
>
> Okay I don't understand why this would be so and wouldn't assume that
> the IO pattern can affect it heavily... but I'm not really a btrfs
> expert.
>
> My blind assumption would have been that writing an extent of data
> takes much longer to complete than writing the corresponding checksum.

The problem is the following: there is a time window between computing
the checksum and writing the data to disk (which is done at a lower
level via a DMA channel). If the data is updated inside this window,
the checksum will not match. This happens when there are two threads:
the first commits the data to disk while the second updates it in place
(I think that both VMs and databases can behave this way).
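The race described above can be sketched in a few lines of Python. This is a hypothetical illustration, not btrfs code: zlib.crc32 stands in for the filesystem's checksum, and the in-place buffer update stands in for a second thread modifying the page after the csum was computed but before the DMA transfer completes.

```python
import zlib

# In-memory page shared between the "filesystem" and a second writer
# (think of a VM or database updating its file in place).
page = bytearray(b"A" * 4096)

# Step 1: the filesystem computes the checksum of the page as it is now.
stored_csum = zlib.crc32(bytes(page))

# Step 2: before the DMA transfer completes, a second thread updates
# the same buffer in place (nodata-COW means no private copy is made).
page[0:4] = b"XXXX"

# Step 3: the modified contents are what actually reach the disk.
on_disk = bytes(page)

# On the next read, the recomputed checksum no longer matches the one
# stored on disk, so read() fails with -EIO even though no hardware
# corruption ever happened.
assert zlib.crc32(on_disk) != stored_csum
```

With COW this window does not exist for the committed data, because the writer modifies a private copy while the checksummed extent goes to disk unchanged.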
In btrfs, a checksum mismatch causes an -EIO error during reading. In a
conventional filesystem (or a btrfs filesystem w/o datasum) there is no
checksum, so this problem doesn't exist. I am curious how ZFS solves
this problem.

However, I have to point out that this problem is not solved by COW.
COW solves only the problem of an interrupted commit of the filesystem,
where the data is updated in place (so it is visible to the user) but
the metadata is not.

> Even if not... it should be only a problem in case of a crash during
> that,.. and then I'd still prefer to get the false positive than bad
> data.

How can you know whether it is "bad data" or a "bad checksum"?

> Anyway... it's not going to happen so the discussion is pointless.
> I think people can probably use dm-integrity (which btw: does no CoW
> either (IIRC) and still can provide integrity... ;-) ) to see whether
> their data is valid.
> Not nice but since it won't change on btrfs, a possible alternative.

Even in this case I am curious whether dm-integrity would solve this
issue.

> Cheers,
> Chris.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5