On Fri, Dec 20, 2013 at 10:48:41AM +0100, Peter Lieven wrote:
> On 17.12.2013 17:47, Stefan Hajnoczi wrote:
> >On Tue, Dec 17, 2013 at 10:15:25AM +0100, Peter Lieven wrote:
> >>This patch adds native support for accessing images on NFS shares without
> >>the requirement to actually mount the entire NFS share on the host.
> >>
> >>NFS images can simply be specified by a URL of the form:
> >>nfs://<host>/<export>/<filename>
> >>
> >>For example:
> >>qemu-img create -f qcow2 nfs://10.0.0.1/qemu-images/test.qcow2
> >>
> >>You need libnfs from Ronnie Sahlberg available at:
> >>   git://github.com/sahlberg/libnfs.git
> >>for this to work.
> >>
> >>During configure, libnfs is automatically probed for and support
> >>is enabled on the fly. You can forbid or enforce libnfs support
> >>with --disable-libnfs or --enable-libnfs respectively.
> >>
> >>Due to NFS restrictions you might need to execute your binaries
> >>as root, allow them to open privileged ports (<1024), or specify
> >>the insecure option on the NFS server.
> >>
> >>Signed-off-by: Peter Lieven <p...@kamp.de>
> >>---
> >>v1->v2:
> >> - fixed block/Makefile.objs [Ronnie]
> >> - do not always register a read handler [Ronnie]
> >> - add support for reading beyond EOF [Fam]
> >> - fixed struct and parameter naming [Fam]
> >> - fixed overlong lines and whitespace errors [Fam]
> >> - return the return status from libnfs wherever possible [Fam]
> >> - added comment why we set allocated_file_size to -ENOTSUP after write
> >>   [Fam]
> >> - avoid segfault when parsing filename [Fam]
> >> - remove unused close_bh from NFSClient [Fam]
> >> - avoid dividing and multiplying total_size by BDRV_SECTOR_SIZE in
> >>   nfs_file_create [Fam]
> >>
> >>  MAINTAINERS         |    5 +
> >>  block/Makefile.objs |    1 +
> >>  block/nfs.c         |  419 +++++++++++++++++++++++++++++++++++++++++++++++++++
> >>  configure           |   38 +++++
> >>  4 files changed, 463 insertions(+)
> >>  create mode 100644 block/nfs.c
> >
> >Which NFS protocol versions are supported by current libnfs?
> >
> >>+#include <poll.h>
> >
> >Why is this header included?
> >
> >>+typedef struct nfsclient {
> >
> >Please either drop the struct tag or use "NFSClient".
> >
> >>+static void
> >>+nfs_co_generic_cb(int status, struct nfs_context *nfs, void *data,
> >>+                  void *private_data)
> >>+{
> >>+    NFSTask *Task = private_data;
> >
> >lowercase "task" local variable name please.
> >
> >>+static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
> >>+                                      int64_t sector_num, int nb_sectors,
> >>+                                      QEMUIOVector *iov)
> >>+{
> >>+    NFSClient *client = bs->opaque;
> >>+    NFSTask task;
> >>+    char *buf = NULL;
> >>+
> >>+    nfs_co_init_task(client, &task);
> >>+
> >>+    buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
> >>+    qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
> >>+
> >>+    if (nfs_pwrite_async(client->context, client->fh,
> >>+                         sector_num * BDRV_SECTOR_SIZE,
> >>+                         nb_sectors * BDRV_SECTOR_SIZE,
> >>+                         buf, nfs_co_generic_cb, &task) != 0) {
> >>+        g_free(buf);
> >>+        return -EIO;
> >
> >Can we get a more detailed errno here? (e.g. ENOSPC)
> >
> >>+    }
> >>+
> >>+    while (!task.complete) {
> >>+        nfs_set_events(client);
> >>+        qemu_coroutine_yield();
> >>+    }
> >>+
> >>+    g_free(buf);
> >>+
> >>+    if (task.status != nb_sectors * BDRV_SECTOR_SIZE) {
> >>+        return task.status < 0 ? task.status : -EIO;
> >>+    }
> >>+
> >>+    bs->total_sectors = MAX(bs->total_sectors, sector_num + nb_sectors);
> >
> >Why is this necessary? block.c will update bs->total_sectors if the
> >file is growable.
> >
> >>+    /* set to -ENOTSUP since bdrv_allocated_file_size is only used
> >>+     * in qemu-img open. So we can use the cached value for allocate
> >>+     * filesize obtained from fstat at open time */
> >>+    client->allocated_file_size = -ENOTSUP;
> >
> >Can you implement this fully? By stubbing it out like this we won't be
> >able to call get_allocated_file_size() at runtime in the future without
> >updating the nfs block driver code. It's just an fstat call, shouldn't
> >be too hard to implement properly :).
>
> It seems I have to leave it as is currently. bdrv_get_allocated_file_size
> is not in a coroutine context. I get "Co-routine is yielding to no one".
Create a coroutine and pump the event loop until it has reached completion:

  co = qemu_coroutine_create(my_coroutine_fn, ...);
  qemu_coroutine_enter(co, foo);

  while (!complete) {
      qemu_aio_wait();
  }

See block.c for similar examples.

Stefan