I've been working on refactoring the storage domain/image system in VDSM.
Apart from facilitating various features, I've also been trying to make adding
new SD types easier and to make the image manipulation bits consistent across
domain implementations.

Currently, in order to create a new domain type, you have to create new
StorageDomain, Image and Volume objects and implement all the logic to
manipulate them. Apart from being cumbersome and redundant, this also makes
mixed clusters very hard to do.

One of the big changes I put in is separating the image manipulation from the
actual storage work.

Instead of each domain type implementing createImage and co., you have one
class responsible for all the image manipulation in the cluster.

All you have to do to facilitate a new storage type is create a domain engine.

A domain engine is a Python class that implements a minimal interface:
1. It has to be able to create, resize and delete a slab (a slab being a block
of writable storage like a LUN/LV/file).
2. It has to be able to create and delete tags (tags are pointers to slabs).
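
To make this concrete, here is a rough sketch of what such an engine interface
could look like. Everything in it (class name, method names, signatures) is
illustrative only, not the actual VDSM code:

    # Illustrative sketch of the minimal engine interface; names and
    # signatures are made up for the example.
    class DomainEngine(object):

        # Slabs: blocks of writable storage (a LUN, an LV, a file...).
        def create_slab(self, slab_id, size):
            raise NotImplementedError

        def resize_slab(self, slab_id, new_size):
            raise NotImplementedError

        def delete_slab(self, slab_id):
            raise NotImplementedError

        # Tags: named pointers to slabs.
        def create_tag(self, tag, slab_id):
            raise NotImplementedError

        def delete_tag(self, tag):
            raise NotImplementedError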

These functions are very easy to implement and require very little complexity.
All the heavy lifting (image manipulation, cleanup, transactions, atomic
operations, etc.) is managed by the Image Manager, which just uses this
unified interface to interact with the different storage types.
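
As an illustration of that division of labor (again with invented names), a
generic image-manager operation would be composed only out of those verbs:

    # Illustration only: how the Image Manager *might* compose the generic
    # verbs above. Not the actual VDSM implementation.
    class ImageManager(object):
        def __init__(self, engine):
            self.engine = engine  # any object implementing the verbs above

        def create_image(self, image_id, size):
            slab_id = "%s_base" % image_id
            self.engine.create_slab(slab_id, size)     # allocate the storage
            self.engine.create_tag(image_id, slab_id)  # point the image at it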

In cases where a domain has special, non-standard features, I introduce the
concept of capabilities. A domain engine can declare support for certain
capabilities (e.g. native snapshotting) and implement additional interfaces.
If the image manager sees that the domain implements a capability it will use
it; if not, it will use a default implementation built only from the mandatory
verbs. This is similar to emulating drawRect on top of drawLine when a native
drawRect isn't available. This is done automatically and at runtime.
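
In code, the dispatch could look something like the following. It is only a
sketch: the capability name, the capabilities() call and the fallback body are
invented to illustrate the idea, not real VDSM interfaces:

    # Sketch of capability-based dispatch with a generic fallback.
    CAP_SNAPSHOT = "snapshot"
    DEFAULT_SLAB_SIZE = 10 * 1024 ** 3  # arbitrary size for the example

    def snapshot_image(engine, image_id, snap_id):
        if CAP_SNAPSHOT in engine.capabilities():
            # Native path: the storage can snapshot by itself.
            engine.create_snapshot(image_id, snap_id)
            return
        # Generic path: built only from the mandatory verbs -- allocate a
        # fresh slab for the new writable layer and point a tag at it
        # (the layering/copy step is omitted for brevity).
        slab_id = "%s.%s" % (image_id, snap_id)
        engine.create_slab(slab_id, DEFAULT_SLAB_SIZE)
        engine.create_tag(snap_id, slab_id)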

I like to compare this to how OpenGL falls back to software rendering if a
certain standard feature is not implemented by the card: you might get a
slower, but still correct, result.

Now, libstorage is another way to abstract interactions and capabilities for
different storage types and to provide a unified API for accessing them.

Building a repo engine on top of libstorage is completely possible, but as you
can see this creates a redundant layer of abstraction on the libstorage side.

As I see it, if you just want to have your storage supported by oVirt,
creating a repo engine is simpler, as you can work with high-level concepts. I
also plan to have engines run as their own processes, so you could use
whatever licence, language and storage server API you choose.

Also, libstorage will have to keep its abstraction at a much lower level. This
means exposing target-specific flags and abilities. While this is good in
concept, it means that the repo engine wrapping libstorage will have to juggle
all those flags and calls, instead of having a distinct class for each storage
type with its own specific hacks in place.

As a current example, we use the same "engine" for NFSv3 and NFSv4. This means
that when we are running on NFSv4 we are still doing all the hacks meant to
circumvent issues with v3 being stateless, which is no longer relevant as v4
is stateful.
And what about Samba? Or Gluster? You have to have special hacks for both.

What I'm saying is this: if, in the relatively simple world of NAS, where we
have a proven abstraction (file access commands, POSIX), we can't find a way
to create one class to rule them all, how can we expect to have a sane
solution for the crazy world of SAN?

I'm not saying we shouldn't create an engine for libstorage, just that we
should treat it like we treat sharefs: as a simple, generic, non-bulletproof,
non-optimized implementation.

Let the flaming commence!