A colleague and I have been discussing the possibility of using Sheepdog as the storage backing physical hosts as well as QEMU virtual machines. It feels like it wouldn't be particularly hard to take the relatively simple QEMU <-> sheep protocol defined in qemu/block/sheepdog.c and write a kernel block device driver, perhaps based on the existing Linux nbd driver.
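For anyone who hasn't looked at it, the wire format really is small. The sketch below is from memory, so the field names, exact layout and byte order should be checked against qemu/block/sheepdog.c, but it gives an idea of how little a kernel client would have to speak:

#include <stdint.h>

/* Rough sketch of the request header exchanged on the QEMU <-> sheep
 * socket (from memory; verify against qemu/block/sheepdog.c before
 * relying on it).  Every request and response starts with a header
 * like this, followed by data_length bytes of payload. */
struct sheep_req {
    uint8_t  proto_ver;          /* protocol version */
    uint8_t  opcode;             /* create/read/write object, etc. */
    uint16_t flags;              /* request flags, e.g. "this is a write" */
    uint32_t epoch;              /* cluster epoch the client believes in */
    uint32_t id;                 /* request id, echoed back in the reply */
    uint32_t data_length;        /* length of the payload that follows */
    uint32_t opcode_specific[8]; /* object id, offset, copies, ... */
};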
Whilst there aren't any obvious problems with mounting a block device backed by sheepdog outside the cluster, I'm worried about mounting sheepdog block devices on hosts within the cluster, or even on a machine that's just acting as a gateway. Am I right that this is unlikely to work?

I remember that loopback iSCSI and nbd are very prone to deadlock under memory pressure, because making progress on writing out the existing dirty pages requires creating yet more dirty pages. Presumably a kernel sheepdog driver would suffer from the same problem, and it would be very hard to let sheepdog hosts mount filesystems stored on a cluster they are themselves part of?

(But somehow, cluster filesystems like Gluster and Ceph have user-mode storage servers and are still able to mount the filesystem on the same nodes as the storage. I'm puzzled that the same problem doesn't afflict them. Is there some technique they use to avoid deadlock that would be applicable to a Sheepdog kernel client?)

Cheers,
Chris.
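P.S. To make that last question concrete: the kind of precaution I imagine a user-mode storage daemon taking is sketched below: preallocate and pin its own memory up front, and keep its writes out of the page cache, so that flushing a client's dirty pages never forces it to allocate or dirty more memory of its own. This is an assumption about how Ceph/Gluster-style daemons cope, not something I've verified in their source.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define IO_BUF_SIZE (4 * 1024 * 1024)   /* one preallocated I/O buffer */

static void *io_buf;

static int daemon_memory_setup(void)
{
    /* Preallocate (and fault in) the working buffers up front... */
    if (posix_memalign(&io_buf, 4096, IO_BUF_SIZE))
        return -1;
    memset(io_buf, 0, IO_BUF_SIZE);

    /* ...then pin everything, so the daemon itself can't be paged out
     * or forced to allocate while the kernel is trying to clean dirty
     * pages through us. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}

static int open_backing_store(const char *path)
{
    /* O_DIRECT keeps our own writes out of the page cache, so flushing
     * a client's dirty pages doesn't create new dirty pages of ours. */
    return open(path, O_RDWR | O_DIRECT);
}

int main(int argc, char **argv)
{
    if (argc < 2 || daemon_memory_setup() < 0)
        return 1;
    int fd = open_backing_store(argv[1]);   /* path to the object store */
    if (fd < 0)
        return 1;
    /* ... accept connections from peers and service I/O using io_buf ... */
    close(fd);
    return 0;
}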