On Sat, Jul 13, 2013 at 6:48 PM, <[email protected]> wrote:

> From: Weiwei Jia <[email protected]>
>
> After discuss with my mentor Lance, Michele and other developers
> like Guido and such, GlusterFS Ganeti Support doc is updated as
> follows.
>

The patch description should be more meaningful. It is ok to say that a
change happened after discussing with us, but you should also summarize
what the actual change is, instead of just writing "is updated as follows".


>
> Signed-off-by: Weiwei Jia <[email protected]>
> ---
>  doc/design-glusterfs-ganeti-support.rst |   28 ++++++++++++++++++++++------
>  1 file changed, 22 insertions(+), 6 deletions(-)
>
> diff --git a/doc/design-glusterfs-ganeti-support.rst b/doc/design-glusterfs-ganeti-support.rst
> index 53bfb7a..c5b1bbc 100644
> --- a/doc/design-glusterfs-ganeti-support.rst
> +++ b/doc/design-glusterfs-ganeti-support.rst
> @@ -73,13 +73,29 @@ Now, there are two specific enhancements:
>    uses libgfapi and hence there is no FUSE overhead any longer when QEMU/KVM
>    works with VM images on Gluster volumes.
>
> -There are two possible ways to implement "GlusterFS Ganeti Support" inside
> -Ganeti. One is based on libgfapi, which call APIs by libgfapi to realize
> -GlusterFS interfaces in bdev.py. The other way is based on QEMU/KVM. Since
> +Proposed implementation
> +-----------------------
> +
>  QEMU/KVM has supported for GlusterFS and Ganeti could support for GlusterFS
> -by QEMU/KVM. However, the latter way can just let VMs of QEMU/KVM use GlusterFS
> -backend storage but other VMs like XEN and such. So the first way is more
> -suitable for us.
> +by QEMU/KVM. However, this way could just let VMs of QEMU/KVM use GlusterFS
> +backend storage but other VMs like XEN and such. Currently, there are two
>

s/but/but not/.


> +possible parts to implement "GlusterFS Ganeti Support" inside Ganeti, which
> +could not only support for QEMU/KVM VMs but also for XEN VMs and such. One
> +part is GlusterFS for XEN VM, which is the same as sharedfile disk template.
>

Is it really the same? That is, can you just reuse the sharedfile disk
template with GlusterFS successfully, or is it just similar?


> +The other part is GlusterFS for QEMU/KVM VM, which is by GlusterFS driver for
> +QEMU/KVM way (QEMU/KVM + GlusterFS way).
> +
> +       ``gnt-instance add -t gluster xxx`` -> GlusterFS sharedfile way for XEN VMs
> +                                           -> QEMU/KVM + GlusterFS way for QEMU/KVM VMs
>

Instead of "xxx" why not writing "instance.example.com"? It's more in line
with what is used in the rest of the docs.

Also, I think I understand what you mean: by executing "gnt-instance add -t
gluster" you'd either invoke the sharedfile backend, or the KVM specific
one. But why use the obscure notation with arrows instead of writing it
explicitly?
(Which, by the way, you already did in the next paragraph, so I'd just
remove the sample command from the previous line.)
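
For example, the sentence in the following paragraph already describes the
behaviour; stating it plainly, e.g. that

    gnt-instance add -t gluster instance.example.com

creates the instance through the GlusterFS sharedfile path for XEN VMs and
through the QEMU/KVM + GlusterFS path for KVM VMs, reads much better than
the arrow notation.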


> +
> +After ``gnt-instance add -t gluster xxx`` command is executed, the added instance
> +should be checked. If the instance is a XEN VM, it would run the GlusterFS
> +sharedfile way. However, if the instance is a QEMU/KVM VM, it would run the
> +QEMU/KVM + GlsuterFS way.


This is OK only if the user does not specify what kind of disk template
(-t) to use.
If one is specified, there should be a check that lets the creation of the
instance go on if the template is the right one (i.e., sharedfile for a XEN
VM, or the directly accessed one for a KVM VM), and halts the creation if
it's not the right template.
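
Something along these lines would do (just a sketch to show what I mean;
the function name and the hypervisor/template constants are made up here,
not the actual Ganeti code):

    # Illustrative sketch only: the real check would live in the instance
    # creation path and use the proper Ganeti constants.
    XEN_HYPERVISORS = frozenset(["xen-pvm", "xen-hvm"])
    KVM_HYPERVISORS = frozenset(["kvm"])

    def CheckGlusterDiskTemplate(hypervisor, disk_template):
      """Refuse instance creation when template and hypervisor don't match."""
      if disk_template == "sharedfile" and hypervisor in XEN_HYPERVISORS:
        return  # GlusterFS through the sharedfile path works for XEN VMs
      if disk_template == "gluster" and hypervisor in KVM_HYPERVISORS:
        return  # direct QEMU/KVM + GlusterFS access works for KVM VMs
      raise ValueError("disk template %r cannot be used with hypervisor %r"
                       % (disk_template, hypervisor))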


> For the first part (GlusterFS for XEN VMs), sharedfile
> +disk template would be a good reference. For the second part (GlusterFS for QEMU/KVM
> +VMs), RBD disk template would be a good reference. The first part would be finished
> +at first and then the second part would be completed, which is based on the first
> +part.
>
>  .. vim: set textwidth=72 :
>  .. Local Variables:
> --
> 1.7.10.4
>
>
Thanks,
Michele
