On Mon, Aug 11, 2014 at 11:34:56AM +0800, Liu Yuan wrote:
> On Mon, Aug 11, 2014 at 11:17:33AM +0900, Hitoshi Mitake wrote:
> > At Fri, 8 Aug 2014 15:49:37 +0800,
> > Liu Yuan wrote:
> > >
> > > On Fri, Aug 08, 2014 at 03:12:17PM +0900, Hitoshi Mitake wrote:
> > > > At Fri, 8 Aug 2014 13:20:39 +0800,
> > > > Liu Yuan wrote:
> > > > >
> > > > > On Thu, Aug 07, 2014 at 04:28:39PM +0900, Hitoshi Mitake wrote:
> > > > > > The update is required for supporting iSCSI multipath. It doesn't
> > > > > > affect behavior of the QEMU driver, but adding a new field to the
> > > > > > vdi request struct is required.
> > > > > >
> > > > > > Cc: Kevin Wolf <kw...@redhat.com>
> > > > > > Cc: Stefan Hajnoczi <stefa...@redhat.com>
> > > > > > Cc: Liu Yuan <namei.u...@gmail.com>
> > > > > > Cc: MORITA Kazutaka <morita.kazut...@lab.ntt.co.jp>
> > > > > > Signed-off-by: Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>
> > > > > > ---
> > > > > >  block/sheepdog.c | 8 +++++++-
> > > > > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/block/sheepdog.c b/block/sheepdog.c
> > > > > > index 8d9350c..36f76f0 100644
> > > > > > --- a/block/sheepdog.c
> > > > > > +++ b/block/sheepdog.c
> > > > > > @@ -103,6 +103,9 @@
> > > > > >  #define SD_INODE_SIZE (sizeof(SheepdogInode))
> > > > > >  #define CURRENT_VDI_ID 0
> > > > > >
> > > > > > +#define LOCK_TYPE_NORMAL 1
> > > > > > +#define LOCK_TYPE_SHARED 2 /* for iSCSI multipath */
> > > > >
> > > > > How about
> > > > >
> > > > > #define LOCK_TYPE_NORMAL 0
> > > > > #define LOCK_TYPE_SHARED 1
> > > > >
> > > > > Then we don't need this patch. Since QEMU won't make use of multipath
> > > > > in the near future, we should avoid adding stuff related to multipath
> > > > > to the QEMU driver.
> > > >
> > > > (Cc-ing Kazutaka-san's current address)
> > > >
> > > > I think this isn't a good idea, because it means that sheep has an
> > > > assumption about the padding field of the request data struct.
> > > > This sort of workaround can cause hard-to-find problems in the
> > > > future.
> > >
> > > The point is, how do we keep backward compatibility? E.g., old QEMU with
> > > a present sheep daemon that has the lock features. Then QEMU will send 0
> > > instead of 1 to the sheep daemon, and depending on how you handle the
> > > invalid value, this might cause problems.
> > >
> > > Suppose we have 1 old QEMU and 1 present QEMU that try to start the same
> > > image A. The old QEMU will send an invalid 0 to the sheep daemon. We
> > > shouldn't deny it, because it can run with old sheep and should run on
> > > new sheep too. Then this image A isn't locked, due to the invalid 0
> > > value. Then the present QEMU sends a correct LOCK signal and will
> > > wrongly set up the image.
> > >
> > > Starting with 0 instead of 1 is, in my opinion, reasonable to keep
> > > backward compatibility.
> >
> > I don't think so, because the backward compatibility of the locking
> > functionality was already broken in the far past.
> >
> > As Meng reported on the sheepdog mailing list, old QEMU can issue a
> > locking request to sheep with vid == 0:
> > http://lists.wpkg.org/pipermail/sheepdog/2014-August/015438.html
>
> I don't really understand why we can't start with 0 and keep backward
> compatibility. By the way, I think the link has nothing to do with QEMU;
> it is a bug in sheep.
>
> Locking has two states: one is lock and the other is unlock.
>
> We choose 0 to mean 'lock' the vdi and 1 to 'unlock' the vdi.
>
> So both old and new QEMU issue 0 to lock the image and a 'release' request
> to unlock the image. What is in the way?
>
> > Even if we set the default lock type as 0, the old QEMU cannot issue a
> > correct locking request.
>
> Why?
>
> > I'll post a patch for incrementing the protocol version number later.
> > But before doing that, I also want to clean up the DISCARD request,
> > because this request cannot work with snapshots (not only with the new
> > GC algorithm.
> > The old naive algorithm cannot work with the DISCARD request either).
>
> If you remove discard, what about users who run new QEMU with old sheep,
> which makes use of the old algorithm, and expect discard to work?
Okay, you mean discard can't work with snapshots, but it should work with
non-snapshots, so users of non-snapshots can expect it to work. As stated in
my last email, you can handle this problem transparently without modifying
the protocol:

QEMU --> discard[offset, length] --> sheep

sheep:
    if req on snapshot
        return success;
    else
        return sheep_handle_discard();

Thanks
Yuan