Re: [Qemu-devel] [PATCH 1/3] block/gluster: add support for multiple gluster servers

2015-10-20 Thread Prasanna Kumar Kalever
On Tuesday, October 20, 2015 1:38:37 AM, Eric Blake wrote: 
> On 10/19/2015 06:13 AM, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> 
> When sending a multi-patch series, it is best to also include a 0/3
> cover letter.  Git can be configured to do this automatically with:
> git config format.coverLetter auto
> 

Thanks for the tip Eric :)

[...]
> > +#define GLUSTER_DEFAULT_PORT   24007
> > +
> >  typedef struct GlusterAIOCB {
> >  int64_t size;
> >  int ret;
> > @@ -24,22 +34,72 @@ typedef struct BDRVGlusterState {
> >  struct glfs_fd *fd;
> >  } BDRVGlusterState;
> >  
> > -typedef struct GlusterConf {
> > +typedef struct GlusterServerConf {
> >  char *server;
> >  int port;
> > +char *transport;
> 
> How do you know how many transport tuples are present? I'd expect a size
> argument somewhere.
> 
It's based on the user's choice; I don't want to make it static.

> > +} GlusterServerConf;

[...]

> > @@ -117,16 +178,19 @@ static int qemu_gluster_parseuri(GlusterConf *gconf,
> > const char *filename)
> > return -EINVAL;
> > }
> > 
> > + gconf = g_new0(GlusterConf, 1);
> > + gconf->gsconf = g_new0(GlusterServerConf, 1);
> 
> Wow - you are hard-coding things to exactly one server.  The subject
> line of the patch claims multiple gluster servers, but I don't see
> anything that supports more than one.  So something needs to be fixed up
> (if this patch is just renaming things, and a later patch adds support
> for more than one, that's okay - but it needs to be described that way).
> 

[1] I think you need to check the 'qemu_gluster_parsejson' function for the
multiple gluster servers usage; it parses the JSON syntax with multiple
tuples. The 'qemu_gluster_parseuri' function parses the URI syntax only,
and that supports single server usage only (kept for backward compatibility).

> > +static int qemu_gluster_parsejson(GlusterConf **pgconf, QDict *options)
> > +{

[...]

> > +#
> > +# Since: 2.5
> > +##
> > +{ 'struct': 'GlusterTuple',
> > +  'data': { 'host': 'str',
> > +'*port': 'int',
> > +'*transport': 'GlusterTransport' } }
> > +
> > +##
> > +# @BlockdevOptionsGluster
> > +#
> > +# Driver specific block device options for Gluster
> > +#
> > +# @volume:   name of gluster volume where our VM image resides
> > +#
> > +# @path: absolute path to image file in gluster volume
> > +#
> > +# @servers:  holds multiple tuples of {host, transport, port}
> 
> For this patch, it looks like it holds exactly one tuple.  But it looks
> like you plan to support multiple tuples later on; maybe a better
> wording is:
> 
> @servers: one or more gluster host descriptions (host, port, and transport)
> 
... [1] should clarify your understanding, but yes, I will still adopt your
suggested wording.

[...]
> --
> Eric Blake   eblake redhat com   +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
> 
> 

-Prasanna



[Qemu-devel] [PATCH 1/3] block/gluster: add support for multiple gluster servers

2015-10-19 Thread Prasanna Kumar Kalever
This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|rdma transport types and their port numbers.

Problem:

Currently a VM image on a gluster volume is specified like this:

file=gluster[+tcp]://host[:port]/testvol/a.img

Assume we have three hosts in a trusted pool with a replica 3 volume
in action, and the host mentioned in the command above goes down for
some reason. Since the volume is replica 3, we still have the other two
hosts active, from which we can boot the VM.

But currently there is no mechanism to pass the other two gluster host
addresses to qemu.

Solution:

New way of specifying VM Image on gluster volume with volfile servers:
(We still support old syntax to maintain backward compatibility)

Basic command line syntax looks like:

Pattern I:
 -drive driver=gluster,
volume=testvol,path=/path/a.raw,
servers.0.host=1.2.3.4,
   [servers.0.port=24007,]
   [servers.0.transport=tcp,]
servers.1.host=5.6.7.8,
   [servers.1.port=24008,]
   [servers.1.transport=rdma,] ...

Pattern II:
 'json:{"driver":"qcow2","file":{"driver":"gluster",
   "volume":"testvol","path":"/path/a.qcow2",
   "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'

   driver      => 'gluster' (protocol name)
   volume      => name of gluster volume where our VM image resides
   path        => absolute path of image in gluster volume

  {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}

   host        => host address (hostname/ipv4/ipv6 addresses)
   port        => port number on which glusterd is listening (default 24007)
   transport   => transport type used to connect to gluster management daemon;
                  it can be tcp|rdma (default 'tcp')

Examples:
1.
 -drive driver=qcow2,file.driver=gluster,
file.volume=testvol,file.path=/path/a.qcow2,
file.servers.0.host=1.2.3.4,
file.servers.0.port=24007,
file.servers.0.transport=tcp,
file.servers.1.host=5.6.7.8,
file.servers.1.port=24008,
file.servers.1.transport=rdma
2.
 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
 "path":"/path/a.qcow2","servers":
 [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
  {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'

This patch provides a mechanism to supply all the server addresses in the
replica set, so that if host1 is down the VM can still boot from any of the
active hosts.

This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (the FUSE way of mounting a gluster volume).

This patch depends on a recent fix in libgfapi raised as part of this work:
http://review.gluster.org/#/c/12114/

Credits: Sincere thanks to Kevin Wolf  and
"Deepak C Shetty"  for inputs and all their support

Signed-off-by: Prasanna Kumar Kalever 
---
This series of patches is based on "Peter Krempa" 
review comments on the v8 sent earlier.
---
 block/gluster.c  | 414 +--
 qapi/block-core.json |  60 +++-
 2 files changed, 426 insertions(+), 48 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 1eb3a8c..dd076fe 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -11,6 +11,16 @@
 #include "block/block_int.h"
 #include "qemu/uri.h"
 
+#define GLUSTER_OPT_FILENAME   "filename"
+#define GLUSTER_OPT_VOLUME "volume"
+#define GLUSTER_OPT_PATH   "path"
+#define GLUSTER_OPT_HOST   "host"
+#define GLUSTER_OPT_PORT   "port"
+#define GLUSTER_OPT_TRANSPORT  "transport"
+#define GLUSTER_OPT_SERVER_PATTERN "servers."
+
+#define GLUSTER_DEFAULT_PORT   24007
+
 typedef struct GlusterAIOCB {
 int64_t size;
 int ret;
@@ -24,22 +34,72 @@ typedef struct BDRVGlusterState {
 struct glfs_fd *fd;
 } BDRVGlusterState;
 
-typedef struct GlusterConf {
+typedef struct GlusterServerConf {
 char *server;
 int port;
+char *transport;
+} GlusterServerConf;
+
+typedef struct GlusterConf {
 char *volname;
 char *image;
-char *transport;
+GlusterServerConf *gsconf;
 } GlusterConf;
 
+static QemuOptsList runtime_json_opts = {
+.name = "gluster_json",
+.head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+.desc = {
+{
+.name = GLUSTER_OPT_VOLUME,
+.type = QEMU_OPT_STRING,
+.help = "name of gluster volume where our VM image resides",
+},
+{
+.name = GLUSTER_OPT_PATH,
+.type = QEMU_OPT_STRING,
+.help = "absolute path to image file in gluster volume",
+},
+{ /* end of list */ }
+},
+};
+
+static QemuOptsList runtime_tuple_opts = {
+.name = "gluster_tuple",
+.head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
+.desc = {
+{
+.name = GLUSTER_OPT_HOST,
+

Re: [Qemu-devel] [PATCH 1/3] block/gluster: add support for multiple gluster servers

2015-10-19 Thread Eric Blake
On 10/19/2015 06:13 AM, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.

When sending a multi-patch series, it is best to also include a 0/3
cover letter.  Git can be configured to do this automatically with:
git config format.coverLetter auto

> 
> Problem:
> 
> Currenly VM Image on gluster volume is specified like this:
> 
> file=gluster[+tcp]://host[:port]/testvol/a.img
> 
> Assuming we have three hosts in trustred pool with replica 3 volume

s/trustred/trusted/ ?

> in action and unfortunately host (mentioned in the command above) went down
> for some reason, since the volume is replica 3 we now have other 2 hosts
> active from which we can boot the VM.
> 
> ---
> This series of patches are sent based on "Peter Krempa" 
> review comments on v8 sent earlier.

Even though that was a single patch and you now have a series, it is
still probably worth including v9 in the subject line (git send-email
-v9) to make it obvious how to find the earlier review.


> +++ b/block/gluster.c
> @@ -11,6 +11,16 @@
>  #include "block/block_int.h"
>  #include "qemu/uri.h"
>  
> +#define GLUSTER_OPT_FILENAME   "filename"
> +#define GLUSTER_OPT_VOLUME "volume"
> +#define GLUSTER_OPT_PATH   "path"
> +#define GLUSTER_OPT_HOST   "host"
> +#define GLUSTER_OPT_PORT   "port"
> +#define GLUSTER_OPT_TRANSPORT  "transport"
> +#define GLUSTER_OPT_SERVER_PATTERN "servers."

Should the macro name be GLUSTER_OPT_SERVERS_PATTERN?

> +
> +#define GLUSTER_DEFAULT_PORT   24007
> +
>  typedef struct GlusterAIOCB {
>  int64_t size;
>  int ret;
> @@ -24,22 +34,72 @@ typedef struct BDRVGlusterState {
>  struct glfs_fd *fd;
>  } BDRVGlusterState;
>  
> -typedef struct GlusterConf {
> +typedef struct GlusterServerConf {
>  char *server;
>  int port;
> +char *transport;

How do you know how many transport tuples are present? I'd expect a size
argument somewhere.

> +} GlusterServerConf;

It looks like 2/3 fixes confusing naming in preparation for this patch,
which means your series should probably be reordered to do the trivial
code cleanups and renames first, and then the new feature last.

> +
> +typedef struct GlusterConf {
>  char *volname;
>  char *image;
> -char *transport;
> +GlusterServerConf *gsconf;
>  } GlusterConf;
>  
> +static QemuOptsList runtime_json_opts = {
> +.name = "gluster_json",
> +.head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> +.desc = {
> +{
> +.name = GLUSTER_OPT_VOLUME,
> +.type = QEMU_OPT_STRING,
> +.help = "name of gluster volume where our VM image resides",

s/our //

> +},
> +{
> +.name = GLUSTER_OPT_PATH,
> +.type = QEMU_OPT_STRING,
> +.help = "absolute path to image file in gluster volume",
> +},
> +{ /* end of list */ }
> +},
> +};
> +
> +static QemuOptsList runtime_tuple_opts = {
> +.name = "gluster_tuple",
> +.head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
> +.desc = {
> +{
> +.name = GLUSTER_OPT_HOST,
> +.type = QEMU_OPT_STRING,
> +.help = "host address (hostname/ipv4/ipv6 addresses)",
> +},
> +{
> +.name = GLUSTER_OPT_PORT,
> +.type = QEMU_OPT_NUMBER,
> +.help = "port number on which glusterd is listening(default 
> 24007)",

Space before ( in English.

> +},
> +{
> +.name = GLUSTER_OPT_TRANSPORT,
> +.type = QEMU_OPT_STRING,
> +.help = "transport type used to connect to glusterd(default 
> tcp)",

And again.  This should not be an open-coded string, but a finite set of
enum values, so you may want to list all the valid possibilities rather
than just the default of tcp.

/me reads ahead

Aha, the only other valid value is rdma.

> +},
> +{ /* end of list */ }
> +},
> +};
> +
>  static void qemu_gluster_gconf_free(GlusterConf *gconf)
>  {
>  if (gconf) {
> -g_free(gconf->server);
>  g_free(gconf->volname);
>  g_free(gconf->image);
> -g_free(gconf->transport);
> +if (gconf->gsconf) {
> +g_free(gconf->gsconf[0].server);
> +g_free(gconf->gsconf[0].transport);
> +g_free(gconf->gsconf);
> +gconf->gsconf = NULL;

Dead assignment, given that...

> +}
>  g_free(gconf);

...you are just about to free gconf itself.

> +gconf = NULL;

Dead assignment, given that...

>  }
>  }

...gconf is about to go out of scope.


> @@ -117,16 +178,19 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, 
> const char *filename)
> return -EINVAL;
> }
> 
> + gconf = g_new0(GlusterConf, 1);
> + gconf->gsconf = g_new0(GlusterServerConf, 1);

Wow - you are