Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-10-14 Thread Olga Krishtal

On 14/10/16 16:15, John Ferlan wrote:

[...]


I have written these lines as part of the GPLv2+ boilerplate:
https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html,
which I took from other libvirt parts. And I guess it was natural to
change the name and company, wasn't it?
Again, if you insist I can leave out the author/copyright, as it
wasn't the aim of this series.

Indeed, a storage pool is very similar to an FS pool, but their items
are not: volumes (block devices) versus filesystems (directory trees).
The intention here was to introduce a *new API*, which is also very
different from the storage pool one, effectively introducing a new
driver. As crossing driver boundaries isn't favored, the code was
simply borrowed, following the earlier practice used by libvirt to get
new drivers implemented.
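
To make that concrete, a rough sketch of the kind of entry points such a
new API adds (names and signatures here are hypothetical, modelled on the
storage API; only virFSItemType is actually cited from patch 1):

    /* Sketch only - the series itself defines the real signatures. */
    virFSPoolPtr virFSPoolDefineXML(virConnectPtr conn,
                                    const char *xml,
                                    unsigned int flags);
    int virFSPoolBuild(virFSPoolPtr fspool, unsigned int flags);
    virFSItemPtr virFSItemCreateXML(virFSPoolPtr fspool,
                                    const char *xml,
                                    unsigned int flags);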

John, keeping all of the above in mind, do you think it's worth trying
to reuse common code while introducing a new API? It won't allow us to
leave the existing code untouched, and it will increase the series even
more.
  

Sorry - just haven't been able to keep up with my work and all the
activity on this list lately.

Here's my point - if you find yourself copying a function from one
driver to another for the express reason that you cannot cross driver
boundaries, then perhaps your first thought should be: can the copied
code go in an existing or a new src/util/vir*.c file and be used in
both places? I think there are synergies in the existing code...


Then, I guess, the first thing I should do is a src/util/vir*.c shared
by both drivers. I will try to do it in the next version.
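
As a sketch of what such sharing could look like (virFSDirBuild and the
file name are hypothetical; virDirCreate really does live in
src/util/virfile.c and can create a directory as a specific uid/gid,
which root-squashed NFS needs):

    /* src/util/virfspool.c (hypothetical): a helper both the storage
     * 'dir' backend and the new FS driver could call, instead of each
     * keeping a private copy of the same logic. */
    int
    virFSDirBuild(const char *path, mode_t mode, uid_t uid, gid_t gid)
    {
        /* 0 = no special virDirCreateFlags */
        return virDirCreate(path, mode, uid, gid, 0);
    }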


Another thought that occurs to me - I cannot recall if it's been
mentioned in this context or not (short-term memory isn't what it used
to be!). IIUC the model is to use these trees mostly for containers; in
that case I would hope the security is "built in" very tightly rather
than being an added-on afterthought. One particularly thorny area is
NFS storage pools (and, well, directory trees), especially w/r/t
root_squash or not. In any case, I can only imagine that for container
consumers, being "held" to certain security rules will be very
important.
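
A minimal sketch of the kind of check this implies, assuming the driver
knows which uid/gid the container will run as (itempath, uid and gid are
placeholders; virFileAccessibleAs() in src/util/virfile.c forks and tests
access(2) under the given identity, which is what root-squashed NFS
requires):

    /* Refuse to expose a tree the consumer can't actually reach. */
    if (virFileAccessibleAs(itempath, R_OK | W_OK, uid, gid) < 0) {
        virReportSystemError(errno,
                             _("cannot access '%s' as %u:%u"),
                             itempath, (unsigned) uid, (unsigned) gid);
        return -1;
    }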


Good point, thanks! I need some time to think it over.



One final thought: if the existing 'storage driver' is for BLOCK storage
and this new driver is a 'File System Storage Driver', then rather than
using 'fspool' (which is really confusing IMO) maybe the base driver is
"fs_storage". So that means we create a "src/fs_storage" and start from
there.  The 'fs' alone was just not descriptive enough.


I agree.


Since it's a File System Storage Driver that's essentially exposing
entire directories, is there really a need for a "pool" concept? Isn't
the driver providing that alone? IOW: the implementation is a pool.
What else other than a directory would something like this expose? Is
there something else that could be envisioned that would add to
virFSItemType (from patch 1)?
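
A sketch of how that enum could grow - only the directory type is in the
posted series; the other values are purely speculative:

    typedef enum {
        VIR_FS_ITEM_DIR = 0,     /* plain directory tree - the only
                                    type in the v2 series */
        /* speculative future values:
         * VIR_FS_ITEM_ZFS      - a zfs filesystem with its own quota
         * VIR_FS_ITEM_VSTORAGE - a Virtuozzo storage cluster subtree */
        VIR_FS_ITEM_LAST
    } virFSItemType;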


For a directory fspool the item object may seem meaningless; however,
for zfs and Virtuozzo storage, I think, it doesn't: the pool can be
mounted, can have some restrictions, and items are the objects exposed
to the customer. I mean, for zfs we create a pool, then create
filesystems with different quotas, etc. For Virtuozzo storage the
exposed entity is a cluster. The administrator's job is to mount the
cluster, and users are able to create items for their needs.
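
For example, with a zfs backend the pool/item split maps onto existing
tooling (device, names and sizes are illustrative only):

    # administrator: create the pool once
    zpool create tank /dev/sdb
    # users: carve out per-consumer filesystems, each with its own quota
    zfs create -o quota=1G tank/item1
    zfs create -o quota=10G tank/item2

Here 'tank' plays the fspool role and each zfs filesystem is an item.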





John

Oh and please let's not drop 12K lines of new code into one series -
please!  Rome wasn't built in a day and certainly it's very hard to
commit the time to review 12K lines of code in one series. A logical
progression to the finish line is fine.


I will split this series.




[...]



--
Best regards,
Olga

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-10-14 Thread John Ferlan
[...]

>>
>> I have written these lines as part of the GPLv2+ boilerplate:
>> https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html,
>> which I took from other libvirt parts. And I guess it was natural to
>> change the name and company, wasn't it?
>> Again, if you insist I can leave out the author/copyright, as it
>> wasn't the aim of this series.
> 
> Indeed, a storage pool is very similar to an FS pool, but their items
> are not: volumes (block devices) versus filesystems (directory trees).
> The intention here was to introduce a *new API*, which is also very
> different from the storage pool one, effectively introducing a new
> driver. As crossing driver boundaries isn't favored, the code was
> simply borrowed, following the earlier practice used by libvirt to get
> new drivers implemented.
> 
> John, keeping all of the above in mind, do you think it's worth trying
> to reuse common code while introducing a new API? It won't allow us to
> leave the existing code untouched, and it will increase the series even
> more.
>  

Sorry - just haven't been able to keep up with my work and all the
activity on this list lately.

Here's my point - if you find yourself copying a function from one
driver to another for the express reason that you cannot cross driver
boundaries, then perhaps your first thought should be: can the copied
code go in an existing or a new src/util/vir*.c file and be used in
both places? I think there are synergies in the existing code...

Another thought that occurs to me - I cannot recall if it's been
mentioned in this context or not (short-term memory isn't what it used
to be!). IIUC the model is to use these trees mostly for containers; in
that case I would hope the security is "built in" very tightly rather
than being an added-on afterthought. One particularly thorny area is
NFS storage pools (and, well, directory trees), especially w/r/t
root_squash or not. In any case, I can only imagine that for container
consumers, being "held" to certain security rules will be very
important.

One final thought: if the existing 'storage driver' is for BLOCK storage
and this new driver is a 'File System Storage Driver', then rather than
using 'fspool' (which is really confusing IMO) maybe the base driver is
"fs_storage". So that means we create a "src/fs_storage" and start from
there.  The 'fs' alone was just not descriptive enough.

Since it's a File System Storage Driver that's essentially exposing
entire directories, is there really a need for a "pool" concept? Isn't
the driver providing that alone? IOW: the implementation is a pool.
What else other than a directory would something like this expose? Is
there something else that could be envisioned that would add to
virFSItemType (from patch 1)?

John

Oh and please let's not drop 12K lines of new code into one series -
please!  Rome wasn't built in a day and certainly it's very hard to
commit the time to review 12K lines of code in one series. A logical
progression to the finish line is fine.


[...]



Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-10-12 Thread Maxim Nestratov

07-Oct-16 20:09, Olga Krishtal wrote:


On 24/09/16 00:12, John Ferlan wrote:

On 09/23/2016 11:56 AM, Olga Krishtal wrote:

On 21/09/16 19:17, Maxim Nestratov wrote:

On 20 Sep 2016, at 23:52, John Ferlan wrote:




On 09/15/2016 03:32 AM, Olga Krishtal wrote:
Hi everyone, we would like to propose the first implementation of fspool
with directory backend.

Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. Furthermore, the new API
follows the storage API closely where it makes sense. Uploading/downloading
operations are not defined yet, as it is not obvious how to do them
properly. I guess we can use some kind of tar to make a stream from a
filesystem. Please share your thoughts on this particular issue.
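
(One way to picture the tar idea - purely illustrative, paths invented:

    # "download": serialize an item's tree into a stream
    tar -C /fs_driver/item1 -cf - . > item1.tar
    # "upload": unpack a stream back into an item's tree
    tar -C /fs_driver/item1 -xf - < item1.tar

The open question is wiring such a stream through the virStream API.)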

So how do you differentiate this from the existing

Pool type=fs still provides volumes, i.e. block devices rather than
filesystems, though this storage pool can mount file systems residing
on a source block device.


http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?


Same here: storage "dir" provides files, which are in fact block devices
for guests, while filesystem pool "dir" provides guests with file systems.



I think it'll be confusing to have and differentiate fspool and pool
commands.

Some time ago, we wrote the proposal description and asked for
everyone's advice and opinion. The aim of fspool is to provide
filesystems, not volumes. The simplest type of fspool is the directory
pool, and it does have a lot in common with storage_backend_fs.
However, in the proposal description we said that the plan is to
support other backends: e.g. storage volumes from a storage pool as the
source of the fs, zfs, etc. The final API for fspool will be
significantly different because of the other backends' needs.

Can you please add an empty line between the paragraph you're responding
to and the start of your paragraph, and then one more after yours.


Thanks for noticing. It looks better.


Anyway, as I pointed out - that description wasn't in my (short term)
memory. Keeping a trail of pointers to previous stuff helps those that
want to refresh their memory on the history.


I will keep these links through the next versions.
https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html



If you're going to "reuse" things, then using the 'src/util/*' is the
way to go rather than trying to drag in storage_{driver|backend*} APIs.
Crossing driver boundaries is something IIRC we try to avoid.


As I have written before, at the moment we have only one backend for
fspool - directory. It is the simplest backend and only the starting
point. I think it is too early to decide which parts should be moved to
src/util/*. Moreover, as fspool items and storage pool volumes are
pretty different, it may well be that they have very little in common.
That said, I would leave things as they are, but if you insist I can
try.



If you meant the resulting code will have very little in common, then
I would agree. Implementing more backends will show us where the common
parts are, and we will have a better basis for splitting them out.


I didn't dig through all the patches, but from the few I did look at it
seems as though all that's done is to rip out the guts of stuff not
desired from the storage pool driver and replace it with this new code
attributing all the work to the new author/copyright. IOW: Lots of
places where StoragePool appears to be exactly the same as the FSPool.


I have written these lines as part of the GPLv2+ boilerplate:
https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html,
which I took from other libvirt parts. And I guess it was natural to
change the name and company, wasn't it?
Again, if you insist I can leave out the author/copyright, as it
wasn't the aim of this series.


Indeed, a storage pool is very similar to an FS pool, but their items
are not: volumes (block devices) versus filesystems (directory trees).
The intention here was to introduce a *new API*, which is also very
different from the storage pool one, effectively introducing a new
driver. As crossing driver boundaries isn't favored, the code was
simply borrowed, following the earlier practice used by libvirt to get
new drivers implemented.

John, keeping all of the above in mind, do you think it's worth trying
to reuse common code while introducing a new API? It won't allow us to
leave the existing code untouched, and it will increase the series even
more.





I think you need to find a different means to do what you want. It's
not 100% clear what the end goal is. I did download/git am the

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-10-07 Thread Olga Krishtal

On 24/09/16 00:12, John Ferlan wrote:


On 09/23/2016 11:56 AM, Olga Krishtal wrote:

On 21/09/16 19:17, Maxim Nestratov wrote:

On 20 Sep 2016, at 23:52, John Ferlan wrote:




On 09/15/2016 03:32 AM, Olga Krishtal wrote:
Hi everyone, we would like to propose the first implementation of fspool
with directory backend.

Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. Furthermore, the new API
follows the storage API closely where it makes sense. Uploading/downloading
operations are not defined yet, as it is not obvious how to do them
properly. I guess we can use some kind of tar to make a stream from a
filesystem. Please share your thoughts on this particular issue.

So how do you differentiate this from the existing

Pool type=fs still provides volumes, i.e. block devices rather than
filesystems, though this storage pool can mount file systems residing
on a source block device.


http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?


Same here: storage "dir" provides files, which are in fact block devices
for guests, while filesystem pool "dir" provides guests with file systems.



I think it'll be confusing to have and differentiate fspool and pool
commands.

Some time ago, we wrote the proposal description and asked for
everyone's advice and opinion. The aim of fspool is to provide
filesystems, not volumes. The simplest type of fspool is the directory
pool, and it does have a lot in common with storage_backend_fs.
However, in the proposal description we said that the plan is to
support other backends: e.g. storage volumes from a storage pool as the
source of the fs, zfs, etc. The final API for fspool will be
significantly different because of the other backends' needs.

Can you please add an empty line between the paragraph you're responding
to and the start of your paragraph, and then one more after yours.


Thanks for noticing. It looks better.


Anyway, as I pointed out - that description wasn't in my (short term)
memory. Keeping a trail of pointers to previous stuff helps those that
want to refresh their memory on the history.


I will keep these links through the next versions.
https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html



If you're going to "reuse" things, then using the 'src/util/*' is the
way to go rather than trying to drag in storage_{driver|backend*} APIs.
Crossing driver boundaries is something IIRC we try to avoid.


As I have written before, at the moment we have only one backend for
fspool - directory. It is the simplest backend and only the starting
point. I think it is too early to decide which parts should be moved to
src/util/*. Moreover, as fspool items and storage pool volumes are
pretty different, it may well be that they have very little in common.
That said, I would leave things as they are, but if you insist I can
try.


I didn't dig through all the patches, but from the few I did look at it
seems as though all that's done is to rip out the guts of stuff not
desired from the storage pool driver and replace it with this new code
attributing all the work to the new author/copyright. IOW: Lots of
places where StoragePool appears to be exactly the same as the FSPool.


I have written these lines as part of the GPLv2+ boilerplate:
https://www.redhat.com/archives/libvir-list/2016-August/msg01160.html,
which I took from other libvirt parts. And I guess it was natural to
change the name and company, wasn't it?
Again, if you insist I can leave out the author/copyright, as it
wasn't the aim of this series.




I think you need to find a different means to do what you want. It's
not 100% clear what the end goal is. I did download/git am the patches
and scanned a few patches...
  * In patch 2 you've totally missed how to modify libvirt_public.syms
  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
}" aren't done properly in virFSPoolDefFormatBuf.
  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
stopped my build-each-patch test there.

According to the guide I have to run 'make check' and 'make
syntax-check' for every patch.
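
For reference, one way to run that across a whole series is git rebase's
exec hook (the base branch name here is illustrative):

    git rebase -i master -x 'make && make check && make syntax-check'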

Always a good plan!


And it was done.

And yet as we find out *all the time* some compilers complain more than
others. Watch the list - we have a CI environment in which we find all
sorts of oddities.  In any case, the code in question is:

+    if (def->target.perms.mode != (mode_t) -1 ||
+        def->target.perms.uid != (uid_t) -1 ||
+        def->target.perms.gid != (gid_t) -1 ||
+        def->target.perms.label) {
+        virBufferAddLit(buf, "<permissions>\n");
+

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-10-03 Thread Maxim Nestratov

23-Sep-16 23:51, John Ferlan wrote:

[snip]

Rather than just copying what the storage pool does, I would think the
new driver could "build up" what it needs based on some consensus about
what makes sense for the usage model.



Having a guest mount a host file system would seem to be possible
through other means. I also start wondering about security implications
for either side (haven't put too much thought into it). What can the
guest put "on" the host file system and vice versa where different
security policies may exist for allowing such placement.

Perhaps rather than a large dump of code the RFC should state the goal,
purpose, usage, etc. and see if that's what the community wants or is
willing to provide feedback on.

This was previously done in the mailing list many months ago now.


Well a pointer would have been nice... Obviously I didn't remember it!
There was an fspools v1 posted 8/19. I think there was an assumption
that list readers/reviewers would remember some original RFC. I didn't.
I've just been going through older patches that haven't had review and
this just came up as "next" (actually I had started thinking about the
v1 when v2 showed up).

John



Just a pointer to the previous discussion:

https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html

Maxim


Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-23 Thread John Ferlan


On 09/23/2016 11:56 AM, Olga Krishtal wrote:
> On 21/09/16 19:17, Maxim Nestratov wrote:
>>> On 20 Sep 2016, at 23:52, John Ferlan wrote:
>>>
>>>
>>>
 On 09/15/2016 03:32 AM, Olga Krishtal wrote:
Hi everyone, we would like to propose the first implementation of fspool
with directory backend.

Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. Furthermore, the new API
follows the storage API closely where it makes sense. Uploading/downloading
operations are not defined yet, as it is not obvious how to do them
properly. I guess we can use some kind of tar to make a stream from a
filesystem. Please share your thoughts on this particular issue.
>>>
>>> So how do you differentiate this from the existing
>> Pool type=fs still provides volumes, i.e. block devices rather than
>> filesystems, though this storage pool can mount file systems residing
>> on a source block device.
>>
>>> http://libvirt.org/storage.html#StorageBackendFS
>>>
>>> Sure, the existing fs pool requires/uses a source block device as the
>>> source path, and this new variant doesn't require that source but seems
>>> to use some item in order to dictate how to "define" the source on the
>>> fly. Currently only a "DIR" is created - so how does that differ from a
>>> "dir" pool?
>>>
>> Same here: storage "dir" provides files, which are in fact block devices
>> for guests, while filesystem pool "dir" provides guests with file systems.
>>
>>
>>> I think it'll be confusing to have and differentiate fspool and pool
>>> commands.
> Some time ago, we wrote the proposal description and asked for
> everyone's advice and opinion. The aim of fspool is to provide
> filesystems, not volumes. The simplest type of fspool is the directory
> pool, and it does have a lot in common with storage_backend_fs.
> However, in the proposal description we said that the plan is to
> support other backends: e.g. storage volumes from a storage pool as the
> source of the fs, zfs, etc. The final API for fspool will be
> significantly different because of the other backends' needs.

Can you please add an empty line between the paragraph you're responding
to and the start of your paragraph, and then one more after yours.

Anyway, as I pointed out - that description wasn't in my (short term)
memory. Keeping a trail of pointers to previous stuff helps those that
want to refresh their memory on the history.

If you're going to "reuse" things, then using the 'src/util/*' is the
way to go rather than trying to drag in storage_{driver|backend*} APIs.
Crossing driver boundaries is something IIRC we try to avoid.

>>> I didn't dig through all the patches, but from the few I did look at it
>>> seems as though all that's done is to rip out the guts of stuff not
>>> desired from the storage pool driver and replace it with this new code
>>> attributing all the work to the new author/copyright. IOW: Lots of
>>> places where StoragePool appears to be exactly the same as the FSPool.
>>>
>>> I think you need to find a different means to do what you want. It's not
>>> 100% clear what the end goal is. I did download/git am the patches and
>>> scanned a few patches...
>>>  * In patch 2 you've totally missed how to modify libvirt_public.syms
>>>  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
>>> }" aren't done properly in virFSPoolDefFormatBuf.
>>>  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
>>> stopped there in my build each patch test.
> According to the guide I have to run 'make check' and 'make
> syntax-check' for every patch.

Always a good plan!

> And it was done. 

And yet as we find out *all the time* some compilers complain more than
others. Watch the list - we have a CI environment in which we find all
sorts of oddities.  In any case, the code in question is:

+    if (def->target.perms.mode != (mode_t) -1 ||
+        def->target.perms.uid != (uid_t) -1 ||
+        def->target.perms.gid != (gid_t) -1 ||
+        def->target.perms.label) {
+        virBufferAddLit(buf, "<permissions>\n");
+        virBufferAdjustIndent(buf, 2);
+        if (def->target.perms.mode != (mode_t) -1)
+            virBufferAsprintf(buf, "<mode>0%o</mode>\n",
+                              def->target.perms.mode);
+        if (def->target.perms.uid != (uid_t) -1)
+            virBufferAsprintf(buf, "<owner>%d</owner>\n",
+                              (int) def->target.perms.uid);
+        if (def->target.perms.gid != (gid_t) -1)
+            virBufferAsprintf(buf, "<group>%d</group>\n",
+                              (int) def->target.perms.gid);
+        virBufferEscapeString(buf, "<label>%s</label>\n",
+                              def->target.perms.label);
+
+        virBufferAdjustIndent(buf, -2);
+        virBufferAddLit(buf, "</permissions>\n");
+    }
+
+    virBufferAdjustIndent(buf, -2);
+    virBufferAddLit(buf, "</target>\n");

So do you "see" the problem?  The first if has an open {, but the second

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-23 Thread John Ferlan


On 09/23/2016 11:45 AM, Daniel P. Berrange wrote:
> On Fri, Sep 23, 2016 at 11:38:10AM -0400, John Ferlan wrote:
>>
>>
>> On 09/21/2016 12:17 PM, Maxim Nestratov wrote:
>>>
On 20 Sep 2016, at 23:52, John Ferlan wrote:



> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
> Hi everyone, we would like to propose the first implementation of fspool
> with directory backend.
>
> Filesystem pools are a facility to manage filesystem resources, similar
> to how storage pools manage volume resources. Furthermore, the new API
> follows the storage API closely where it makes sense. Uploading/downloading
> operations are not defined yet, as it is not obvious how to do them
> properly. I guess we can use some kind of tar to make a stream from a
> filesystem. Please share your thoughts on this particular issue.


So how do you differentiate this from the existing
>>>
>>> Pool type=fs still provides volumes, i.e. block devices rather than
>>> filesystems, though this storage pool can mount file systems residing
>>> on a source block device.
>>>

 http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?

>>>
>>> Same here: storage "dir" provides files, which are in fact block devices
>>> for guests, while filesystem pool "dir" provides guests with file systems.
>>>
>>>
>>
>> So then what is the purpose of providing a whole new storage driver
>> subsystem?  If you consider that the existing storage driver is meant to
>> handle storage pools of various types and the "pool" commands/APIs are
>> the "means" to manage those storage backend types, I'm still failing to
>> see the advantage of (essentially) copying the storage driver when it
>> seems you're really trying to write new backends that provide some
>> specific functionality.
> 
> This is a completely different beast to the existing storage pools
> driver code.
>
> Ultimately the difference here is about what's being managed. The
> existing storage pools code is managing storage volumes, which are
> essentially blobs which are used to provide virtual disks.
>
> This FS pools code is about managing filesystem trees, which are
> essentially directories used to provide the filesystem passthrough
> feature for guests, most compellingly for containers.
>
> Trying to shoe-horn both concepts into the same API is really
> not very attractive, as you have fundamentally different logic
> requirements for managing the entries in the respective pools.

OK, fair enough - might be something worth adding to the description
(cover letter). Whether it's documented further in the patches I didn't
check. The whole series is 8 rather large patches.

I think the other point/concern is that it appears as if the fspool
driver was started by copying storage_driver and then hacking out what
wasn't needed, rather than deciding what needs to be there to build up
such an fs passthrough driver. At least that's my impression from
scanning the first couple of patches.

For example, patch 1's first file, libvirt-fs.h, is essentially
libvirt-storage.h copied with some things removed. The first enum has
all the build and build-overwrite flags. On a quick scan it doesn't
seem any of those are used in the driver code.

The create-with-build flag was added to allow someone to combine the
pool-create and pool-build type functions, because for some backends it
doesn't make sense to "build" what's already there. That to me seems
like it could be a "design decision" for this new driver. Does an FS
pool need to handle build with [no] overwrite? What would overwrite be?
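
For comparison, these are the storage build flags that got copied in
(from include/libvirt/libvirt-storage.h); whether REPAIR, RESIZE or the
two OVERWRITE variants mean anything for a directory tree is exactly
that design question:

    typedef enum {
        VIR_STORAGE_POOL_BUILD_NEW          = 0,       /* regular build */
        VIR_STORAGE_POOL_BUILD_REPAIR       = 1 << 0,  /* repair/reinitialize */
        VIR_STORAGE_POOL_BUILD_RESIZE       = 1 << 1,  /* extend existing pool */
        VIR_STORAGE_POOL_BUILD_NO_OVERWRITE = 1 << 2,  /* error if data exists */
        VIR_STORAGE_POOL_BUILD_OVERWRITE    = 1 << 3,  /* overwrite data */
    } virStoragePoolBuildFlags;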

Rather than just copying what the storage pool does, I would think the
new driver could "build up" what it needs based on some consensus about
what makes sense for the usage model.


>> Having a guest mount a host file system would seem to be possible
>> through other means. I also start wondering about security implications
>> for either side (haven't put too much thought into it). What can the
>> guest put "on" the host file system and vice versa where different
>> security policies may exist for allowing such placement.
>>
>> Perhaps rather than a large dump of code the RFC should state the goal,
>> purpose, usage, etc. and see if that's what the community wants or is
>> willing to provide feedback on.
> 
> This was previously done in the mailing list many months ago now.
> 

Well a pointer would have been nice... Obviously I didn't remember it!
There was an fspools v1 posted 8/19. I think there was an assumption
that list readers/reviewers would remember some original RFC. I didn't.
I've just been going through 

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-23 Thread Olga Krishtal

On 21/09/16 19:17, Maxim Nestratov wrote:

On 20 Sep 2016, at 23:52, John Ferlan wrote:




On 09/15/2016 03:32 AM, Olga Krishtal wrote:
Hi everyone, we would like to propose the first implementation of fspool
with directory backend.

Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. Furthermore, the new API
follows the storage API closely where it makes sense. Uploading/downloading
operations are not defined yet, as it is not obvious how to do them
properly. I guess we can use some kind of tar to make a stream from a
filesystem. Please share your thoughts on this particular issue.


So how do you differentiate this from the existing

Pool type=fs still provides volumes, i.e. block devices rather than
filesystems, though this storage pool can mount file systems residing
on a source block device.


http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?


Same here: storage "dir" provides files, which are in fact block devices
for guests, while filesystem pool "dir" provides guests with file systems.



I think it'll be confusing to have and differentiate fspool and pool
commands.
Some time ago, we wrote the proposal description and asked for
everyone's advice and opinion. The aim of fspool is to provide
filesystems, not volumes. The simplest type of fspool is the directory
pool, and it does have a lot in common with storage_backend_fs.
However, in the proposal description we said that the plan is to
support other backends: e.g. storage volumes from a storage pool as the
source of the fs, zfs, etc. The final API for fspool will be
significantly different because of the other backends' needs.

I didn't dig through all the patches, but from the few I did look at it
seems as though all that's done is to rip out the guts of stuff not
desired from the storage pool driver and replace it with this new code
attributing all the work to the new author/copyright. IOW: Lots of
places where StoragePool appears to be exactly the same as the FSPool.

I think you need to find a different means to do what you want. It's
not 100% clear what the end goal is. I did download/git am the patches
and scanned a few patches...
  * In patch 2 you've totally missed how to modify libvirt_public.syms
  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
}" aren't done properly in virFSPoolDefFormatBuf.
  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
stopped my build-each-patch test there.

According to the guide I have to run 'make check' and 'make
syntax-check' for every patch. And it was done.
I will take one more look at libvirt_public.syms.
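
The file is a linker version script; new public API symbols go into a new
version stanza appended at the end. A sketch - the version number and
symbol names below are placeholders only:

    LIBVIRT_2.3.0 {
        global:
            virFSPoolDefineXML;
            virFSPoolBuild;
            virFSItemCreateXML;
    } LIBVIRT_2.2.0;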

John


The patchset provides a 'dir' backend which simply exposes directories
within some directory in the host filesystem. The virsh commands are
provided too, so it is ready to play with; just replace 'pool' in XML
descriptions and virsh commands with 'fspool', and 'volume' with 'item'.
Example and usage:
Define:
virsh -c qemu:///system fspool-define-as fs_pool_name dir --target /path/on/host
Build
virsh -c qemu:///system fspool-build fs_pool_name
Start
virsh -c qemu:///system fspool-start fs_pool_name
Look inside
virsh -c qemu:///system fspool-list (--all) fspool_name

The fspool, called POOL, uses /fs_driver on the host fs to hold items.
  virsh -c qemu:///system fspool-dumpxml POOL
  <fspool type='dir'>
    <name>POOL</name>
    <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
    <capacity>733722615808</capacity>
    <allocation>1331486720</allocation>
    <available>534810800128</available>
    <source>
    </source>
    <target>
      <path>/fs_driver</path>
      <permissions>
        <mode>0755</mode>
        <owner>0</owner>
        <group>0</group>
      </permissions>
    </target>
  </fspool>

virsh -c qemu:///system fspool-info POOL
Name:   POOL
UUID:   c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
State:  running
Persistent: yes
Autostart:  no autostart
Capacity:   683.33 GiB
Allocation: 1.24 GiB
Available:  498.08 GiB

virsh -c qemu+unix:///system item-list POOL
  Name Path
--
item1/fs_driver/item1
item10   /fs_driver/item10
item11   /fs_driver/item11
item12   /fs_driver/item12
item15   /fs_driver/item15

An fspool of directory type is some directory on the host fs that
holds items (subdirs).
Example of usage for items:
virsh -c vz+unix:///system item-create-as POOL item1 1g - create item
virsh -c qemu+unix:///system item-dumpxml item1 POOL
<fsitem type='dir'>
  <name>item1</name>
  <key>/fs_driver/item1</key>
  <source>
  </source>
  <capacity>0</capacity>
  <allocation>0</allocation>
  <target>
    <path>/fs_driver/item1</path>
  </target>
</fsitem>

virsh -c qemu+unix:///system item-info item1 POOL
Name:   item1
Type:   dir
Capacity:   683.33 GiB
Allocation: 634.87 MiB
Autostart:  no autostart
Capacity:   683.33 GiB
Allocation: 1.24 GiB
Available:  498.08 GiB

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-23 Thread Daniel P. Berrange
On Fri, Sep 23, 2016 at 11:38:10AM -0400, John Ferlan wrote:
> 
> 
> On 09/21/2016 12:17 PM, Maxim Nestratov wrote:
> > 
> >> On 20 Sep 2016, at 23:52, John Ferlan wrote:
> >>
> >>
> >>
> >>> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
> >>> Hi everyone, we would like to propose the first implementation of fspool
> >>> with directory backend.
> >>>
> >>> Filesystem pools are a facility to manage filesystem resources, similar
> >>> to how storage pools manage volume resources. Furthermore, the new API
> >>> follows the storage API closely where it makes sense. Uploading/downloading
> >>> operations are not defined yet, as it is not obvious how to do them
> >>> properly. I guess we can use some kind of tar to make a stream from a
> >>> filesystem. Please share your thoughts on this particular issue.
> >>
> >>
> >> So how do you differentiate this from the existing
> > 
> > Pool type=fs still provides volumes, i.e. block devices rather than
> > filesystems, though this storage pool can mount file systems residing
> > on a source block device.
> > 
> >>
> >> http://libvirt.org/storage.html#StorageBackendFS
> >>
> >> Sure, the existing fs pool requires/uses a source block device as the
> >> source path, and this new variant doesn't require that source but seems
> >> to use some item in order to dictate how to "define" the source on the
> >> fly. Currently only a "DIR" is created - so how does that differ from a
> >> "dir" pool?
> >>
> > 
> > Same here: storage "dir" provides files, which are in fact block devices
> > for guests, while filesystem pool "dir" provides guests with file systems.
> > 
> > 
> 
> So then what is the purpose of providing a whole new storage driver
> subsystem?  If you consider that the existing storage driver is meant to
> handle storage pools of various types and the "pool" commands/APIs are
> the "means" to manage those storage backend types, I'm still failing to
> see the advantage of (essentially) copying the storage driver when it
> seems you're really trying to write new backends that provide some
> specific functionality.

This is a completely different beast to the existing storage pools
driver code.

Ultimately the difference here is about what's being managed. The
existing storage pools code is managing storage volumes, which are
essentially blobs which are used to provide virtual disks.

This FS pools code is about managing filesystem trees, which are
essentially directories used to provide the filesystem passthrough
feature for guests, most compellingly for containers.

Trying to shoe-horn both concepts into the same API is really
not very attractive, as you have fundamentally different logic
requirements for managing the entries in the respective pools.
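
For context, such a tree would ultimately be consumed through the
existing <filesystem> passthrough element in the domain XML, e.g. (item
path taken from the examples in this thread):

    <filesystem type='mount'>
      <source dir='/fs_driver/item1'/>
      <target dir='/data'/>
    </filesystem>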

> Having a guest mount a host file system would seem to be possible
> through other means. I also start wondering about security implications
> for either side (haven't put too much thought into it). What can the
> guest put "on" the host file system and vice versa where different
> security policies may exist for allowing such placement.
> 
> Perhaps rather than a large dump of code the RFC should state the goal,
> purpose, usage, etc. and see if that's what the community wants or is
> willing to provide feedback on.

This was previously done in the mailing list many months ago now.


Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-23 Thread John Ferlan


On 09/21/2016 12:17 PM, Maxim Nestratov wrote:
> 
>> On 20 Sep 2016, at 23:52, John Ferlan wrote:
>>
>>
>>
>>> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
>>> Hi everyone, we would like to propose the first implementation of fspool
>>> with directory backend.
>>>
>>> Filesystem pools are a facility to manage filesystem resources, similar
>>> to how storage pools manage volume resources. Furthermore, the new API
>>> follows the storage API closely where it makes sense. Uploading/downloading
>>> operations are not defined yet, as it is not obvious how to do them
>>> properly. I guess we can use some kind of tar to make a stream from a
>>> filesystem. Please share your thoughts on this particular issue.
>>
>>
>> So how do you differentiate this from the existing
> 
> Pool type=fs still provides volumes, i.e. block devices rather than
> filesystems, though this storage pool can mount file systems residing
> on a source block device.
> 
>>
>> http://libvirt.org/storage.html#StorageBackendFS
>>
>> Sure, the existing fs pool requires/uses a source block device as the
>> source path, and this new variant doesn't require that source but seems
>> to use some item in order to dictate how to "define" the source on the
>> fly. Currently only a "DIR" is created - so how does that differ from a
>> "dir" pool?
>>
> 
> Same here: storage "dir" provides files, which are in fact block devices
> for guests, while filesystem pool "dir" provides guests with file systems.
> 
> 

So then what is the purpose of providing a whole new storage driver
subsystem?  If you consider that the existing storage driver is meant to
handle storage pools of various types and the "pool" commands/APIs are
the "means" to manage those storage backend types, I'm still failing to
see the advantage of (essentially) copying the storage driver when it
seems you're really trying to write new backends that provide some
specific functionality.

Having a guest mount a host file system would seem to be possible
through other means. I also start wondering about security implications
for either side (haven't put too much thought into it). What can the
guest put "on" the host file system and vice versa where different
security policies may exist for allowing such placement.

Perhaps rather than a large dump of code the RFC should state the goal,
purpose, usage, etc. and see if that's what the community wants or is
willing to provide feedback on.

John
>> I think it'll be confusing to have and differentiate fspool and pool
>> commands.
>>
>> I didn't dig through all the patches, but from the few I did look at it
>> seems as though all that's done is to rip out the guts of stuff not
>> desired from the storage pool driver and replace it with this new code
>> attributing all the work to the new author/copyright. IOW: Lots of
>> places where StoragePool appears to be exactly the same as the FSPool.
>>
>> I think you need to find a different means to do what you want. It's not
>> 100% clear what the end goal is.
>>
>> I did download/git am the patches and scanned a few patches...
>>  * In patch 2 you've totally missed how to modify libvirt_public.syms.
>>  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
>> }" aren't done properly in virFSPoolDefFormatBuf.
>>  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
>> stopped there in my build each patch test.
>>
>> John
>>
>>>
>>> The patchset provides a 'dir' backend which simply exposes directories
>>> within some directory in the host filesystem. The virsh commands are
>>> provided too, so it is ready to play with; just replace 'pool' in XML
>>> descriptions and virsh commands with 'fspool', and 'volume' with 'item'.
>>> Example and usage:
>>> Define:
>>> virsh -c qemu:///system fspool-define-as fs_pool_name dir --target 
>>> /path/on/host
>>> Build
>>> virsh -c qemu:///system fspool-build fs_pool_name
>>> Start
>>> virsh -c qemu:///system fspool-start fs_pool_name
>>> Look inside
>>> virsh -c qemu:///system fspool-list (--all) fspool_name
>>>
>>> The fspool, called POOL, uses /fs_driver on the host fs to hold items.
>>>  virsh -c qemu:///system fspool-dumpxml POOL
>>>  <fspool type='dir'>
>>>    <name>POOL</name>
>>>    <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
>>>    <capacity>733722615808</capacity>
>>>    <allocation>1331486720</allocation>
>>>    <available>534810800128</available>
>>>    <source>
>>>    </source>
>>>    <target>
>>>      <path>/fs_driver</path>
>>>      <permissions>
>>>        <mode>0755</mode>
>>>        <owner>0</owner>
>>>        <group>0</group>
>>>      </permissions>
>>>    </target>
>>>  </fspool>
>>>
>>> virsh -c qemu:///system fspool-info POOL
>>> Name:   POOL
>>> UUID:   c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
>>> State:  running
>>> Persistent: yes
>>> Autostart:  no autostart
>>> Capacity:   683.33 GiB
>>> Allocation: 1.24 GiB
>>> Available:  498.08 GiB
>>>
>>> virsh -c qemu+unix:///system item-list POOL
>>>  Name Path
>>> --
>>> item1/fs_driver/item1
>>> item10   /fs_driver/item10
>>> item11   

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-21 Thread Maxim Nestratov

> On 20 Sep 2016, at 23:52, John Ferlan wrote:
> 
> 
> 
>> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
>> Hi everyone, we would like to propose the first implementation of fspool
>> with directory backend.
>> 
>> Filesystem pools are a facility to manage filesystem resources, similar
>> to how storage pools manage volume resources. Furthermore, the new API
>> follows the storage API closely where it makes sense. Uploading/downloading
>> operations are not defined yet, as it is not obvious how to do them
>> properly. I guess we can use some kind of tar to make a stream from a
>> filesystem. Please share your thoughts on this particular issue.
> 
> 
> So how do you differentiate this from the existing

Pool type=fs still provides volumes, i.e. block devices rather than
filesystems, though this storage pool can mount file systems residing
on a source block device.

> 
> http://libvirt.org/storage.html#StorageBackendFS
> 
> Sure, the existing fs pool requires/uses a source block device as the
> source path, and this new variant doesn't require that source but seems
> to use some item in order to dictate how to "define" the source on the
> fly. Currently only a "DIR" is created - so how does that differ from a
> "dir" pool?
> 

Same here: storage "dir" provides files, which are in fact block devices
for guests, while filesystem pool "dir" provides guests with file systems.


> I think it'll be confusing to have and differentiate fspool and pool
> commands.
> 
> I didn't dig through all the patches, but from the few I did look at it
> seems as though all that's done is to rip out the guts of stuff not
> desired from the storage pool driver and replace it with this new code
> attributing all the work to the new author/copyright. IOW: Lots of
> places where StoragePool appears to be exactly the same as the FSPool.
> 
> I think you need to find a different means to do what you want. It's not
> 100% clear what the end goal is.
> 
> I did download/git am the patches and scanned a few patches...
>  * In patch 2 you've totally missed how to modify libvirt_public.syms.
>  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
> }" aren't done properly in virFSPoolDefFormatBuf.
>  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
> stopped there in my build each patch test.
> 
> John
> 
>> 
>> The patchset provides a 'dir' backend which simply exposes directories
>> within some directory in the host filesystem. The virsh commands are
>> provided too, so it is ready to play with; just replace 'pool' in XML
>> descriptions and virsh commands with 'fspool', and 'volume' with 'item'.
>> Example and usage:
>> Define:
>> virsh -c qemu:///system fspool-define-as fs_pool_name dir --target 
>> /path/on/host
>> Build
>> virsh -c qemu:///system fspool-build fs_pool_name
>> Start
>> virsh -c qemu:///system fspool-start fs_pool_name
>> Look inside
>> virsh -c qemu:///system fspool-list (--all) fspool_name
>> 
>> The fspool, called POOL, uses /fs_driver on the host fs to hold items.
>>  virsh -c qemu:///system fspool-dumpxml POOL
>>  <fspool type='dir'>
>>    <name>POOL</name>
>>    <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
>>    <capacity>733722615808</capacity>
>>    <allocation>1331486720</allocation>
>>    <available>534810800128</available>
>>    <source>
>>    </source>
>>    <target>
>>      <path>/fs_driver</path>
>>      <permissions>
>>        <mode>0755</mode>
>>        <owner>0</owner>
>>        <group>0</group>
>>      </permissions>
>>    </target>
>>  </fspool>
>> 
>> virsh -c qemu:///system fspool-info POOL
>> Name:   POOL
>> UUID:   c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
>> State:  running
>> Persistent: yes
>> Autostart:  no autostart
>> Capacity:   683.33 GiB
>> Allocation: 1.24 GiB
>> Available:  498.08 GiB
>> 
>> virsh -c qemu+unix:///system item-list POOL
>>  Name Path
>> --
>> item1/fs_driver/item1
>> item10   /fs_driver/item10
>> item11   /fs_driver/item11
>> item12   /fs_driver/item12
>> item15   /fs_driver/item15
>> 
>> An fspool of directory type is some directory on the host fs that holds
>> items (subdirs).
>> Example of usage for items:
>> virsh -c vz+unix:///system item-create-as POOL item1 1g - create item
>> virsh -c qemu+unix:///system item-dumpxml item1 POOL
>> <fsitem type='dir'>
>>   <name>item1</name>
>>   <key>/fs_driver/item1</key>
>>   <source>
>>   </source>
>>   <capacity>0</capacity>
>>   <allocation>0</allocation>
>>   <target>
>>     <path>/fs_driver/item1</path>
>>   </target>
>> </fsitem>
>> 
>> virsh -c qemu+unix:///system item-info item1 POOL
>> Name:   item1
>> Type:   dir
>> Capacity:   683.33 GiB
>> Allocation: 634.87 MiB
>> Autostart:  no autostart
>> Capacity:   683.33 GiB
>> Allocation: 1.24 GiB
>> Available:  498.08 GiB
>> virsh -c qemu+unix:///system item-list POOL
>>  Name Path
>> --
>>  item1/fs_driver/item1
>>  item10   /fs_driver/item10
>>  item11   /fs_driver/item11
>>  item12   /fs_driver/item12
>>  item15   /fs_driver/item15
>> 
>> 

Re: [libvirt] [PATCH rfc v2 0/8] fspool: backend directory

2016-09-20 Thread John Ferlan


On 09/15/2016 03:32 AM, Olga Krishtal wrote:
> Hi everyone, we would like to propose the first implementation of fspool
> with directory backend.
> 
> Filesystem pools are a facility to manage filesystem resources, similar
> to how storage pools manage volume resources. Furthermore, the new API
> follows the storage API closely where it makes sense. Uploading/downloading
> operations are not defined yet, as it is not obvious how to do them
> properly. I guess we can use some kind of tar to make a stream from a
> filesystem. Please share your thoughts on this particular issue.


So how do you differentiate this from the existing

http://libvirt.org/storage.html#StorageBackendFS

Sure, the existing fs pool requires/uses a source block device as the
source path, and this new variant doesn't require that source but seems
to use some item in order to dictate how to "define" the source on the
fly. Currently only a "DIR" is created - so how does that differ from a
"dir" pool?

I think it'll be confusing to have and differentiate fspool and pool
commands.

I didn't dig through all the patches, but from the few I did look at it
seems as though all that's done is to rip out the guts of stuff not
desired from the storage pool driver and replace it with this new code
attributing all the work to the new author/copyright. IOW: Lots of
places where StoragePool appears to be exactly the same as the FSPool.

I think you need to find a different means to do what you want. It's not
100% clear what the end goal is.

I did download/git am the patches and scanned a few patches...
  * In patch 2 you've totally missed how to modify libvirt_public.syms.
  * In patch 3, the build breaks in "conf/fs_conf" since the "if { if {}
}" aren't done properly in virFSPoolDefFormatBuf.
  * In patch 5 the remote_protocol_structs fails check/syntax-check... I
stopped there in my build each patch test.

John

> 
> The patchset provides a 'dir' backend which simply exposes directories
> within some directory in the host filesystem. The virsh commands are
> provided too, so it is ready to play with; just replace 'pool' in XML
> descriptions and virsh commands with 'fspool', and 'volume' with 'item'.
> Example and usage:
> Define:
> virsh -c qemu:///system fspool-define-as fs_pool_name dir --target 
> /path/on/host
> Build
> virsh -c qemu:///system fspool-build fs_pool_name
> Start
> virsh -c qemu:///system fspool-start fs_pool_name
> Look inside
> virsh -c qemu:///system fspool-list (--all) fspool_name
> 
> The fspool, called POOL, uses /fs_driver on the host fs to hold items.
>   virsh -c qemu:///system fspool-dumpxml POOL
>   <fspool type='dir'>
>     <name>POOL</name>
>     <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
>     <capacity>733722615808</capacity>
>     <allocation>1331486720</allocation>
>     <available>534810800128</available>
>     <source>
>     </source>
>     <target>
>       <path>/fs_driver</path>
>       <permissions>
>         <mode>0755</mode>
>         <owner>0</owner>
>         <group>0</group>
>       </permissions>
>     </target>
>   </fspool>
> 
> virsh -c qemu:///system fspool-info POOL
> Name:   POOL
> UUID:   c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
> State:  running
> Persistent: yes
> Autostart:  no autostart
> Capacity:   683.33 GiB
> Allocation: 1.24 GiB
> Available:  498.08 GiB
> 
> virsh -c qemu+unix:///system item-list POOL
>   Name Path
> --
> item1/fs_driver/item1
> item10   /fs_driver/item10
> item11   /fs_driver/item11
> item12   /fs_driver/item12
> item15   /fs_driver/item15
> 
> An fspool of directory type is some directory on the host fs that holds
> items (subdirs).
> Example of usage for items:
> virsh -c vz+unix:///system item-create-as POOL item1 1g - create item
> virsh -c qemu+unix:///system item-dumpxml item1 POOL
> <fsitem type='dir'>
>   <name>item1</name>
>   <key>/fs_driver/item1</key>
>   <source>
>   </source>
>   <capacity>0</capacity>
>   <allocation>0</allocation>
>   <target>
>     <path>/fs_driver/item1</path>
>   </target>
> </fsitem>
> 
> virsh -c qemu+unix:///system item-info item1 POOL
> Name:   item1
> Type:   dir
> Capacity:   683.33 GiB
> Allocation: 634.87 MiB
> Autostart:  no autostart
> Capacity:   683.33 GiB
> Allocation: 1.24 GiB
> Available:  498.08 GiB
> virsh -c qemu+unix:///system item-list POOL
>   Name Path
> --
>   item1/fs_driver/item1
>   item10   /fs_driver/item10
>   item11   /fs_driver/item11
>   item12   /fs_driver/item12
>   item15   /fs_driver/item15
> 
> v2:
> - renamed Fs to FS
> - the m4 macro is used in the configure.ac script
> - updated docs
> - created simple tests
> - updated virsh.pod
> - added information about fspool in formatfs.html
> 
> Olga Krishtal (8):
>   fspool: introduce filesystem pools API
>   fspool: usual driver based implementation of filesystem pools API
>   fspools: configuration and internal representation
>   fspools: acl support for filesystem pools
>   remote: filesystem pools driver implementation
>   fspool: default implementation of filesystem pools
>