On 2015-07-21 22:01, Qu Wenruo wrote:
Steve Dainard wrote on 2015/07/21 14:07 -0700:
I don't know if this has any bearing on the failure case, but the
filesystem that I sent an image of was only ever created, had a subvol
created on it, and was mounted/unmounted several times. There was never
any data written to that mount point.

Subvol creation and an rw mount are enough to trigger 2~3 transactions
with data written into btrfs, because the first rw mount creates the free
space cache, which is counted as data.

But without multiple mount instances, I really can't think of another way
to damage btrfs so badly while still leaving all csums OK...
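
For reference, a quick way to see the data writes Qu is describing: on a
scratch device (/dev/sdX and /mnt below are just placeholders, not paths
from this thread), something like

    # mkfs.btrfs /dev/sdX
    # mount /dev/sdX /mnt
    # btrfs subvolume create /mnt/testvol
    # umount /mnt
    # mount /dev/sdX /mnt
    # btrfs filesystem df /mnt

should, as I understand it, already show a small non-zero "used" figure on
the Data line even though nothing was ever written under the mount point,
mostly from the free space cache being committed as ordinary data extents.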

I know that a while back RBD had some intermittent data-corruption issues in the default configuration when the network wasn't absolutely 100% reliable between all nodes (which for Ceph means not only no packet loss, but also tight time synchronization between nodes and very low network latency).

I've also heard somewhere (I can't remember exactly where, though) of people having issues with ZFS on top of RBD.

The other thing to keep in mind is that Ceph does automatic background data scrubbing (including rewriting stuff it thinks is corrupted), so there is no guarantee that the data on the block device won't change suddenly without the FS on it doing anything.
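
One way to rule that out while testing (assuming you have admin access to
the Ceph cluster; these flags are cluster-wide, so don't leave them set
longer than needed) is to temporarily disable scrubbing, reproduce the
problem, and then re-enable it:

    # ceph osd set noscrub
    # ceph osd set nodeep-scrub
    ... run the btrfs reproduction on the RBD volume ...
    # ceph osd unset noscrub
    # ceph osd unset nodeep-scrub

'ceph -s' shows the noscrub/nodeep-scrub flags while they are set, so you
can confirm the cluster really isn't scrubbing during the test. If the
corruption still shows up with scrubbing off, that points back at the
RBD/btrfs path rather than scrub rewrites.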

