I don't think it is VDO, but I could be wrong.
My ovirt setup is VDO + Gluster v7.7 + CentOS 7.8. I tested libgfapi a long
time ago and it worked.
If you wish you can ask in the ovirt users' mailing list how qemu is using
libgfapi.
Best Regards,
Strahil Nikolov
On 13 August 2020
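For context on the question above about how qemu uses libgfapi, a hedged illustration only: with libgfapi, qemu is pointed at a gluster:// URL instead of a path on a FUSE mount. The volume name "pool" and the mount point /root/pool are taken from the original report further down in the thread; the hostname and image name are made up.

  qemu-system-x86_64 -m 1024 -drive file=/root/pool/vm1.qcow2,format=qcow2,if=virtio            # disk reached through the FUSE mount
  qemu-system-x86_64 -m 1024 -drive file=gluster://node1/pool/vm1.qcow2,format=qcow2,if=virtio  # same disk opened directly via libgfapi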
C7 -> CentOS7
Just try with the virt group enabled on a test setup.
Best Regards,
Strahil Nikolov
On 13 August 2020 at 7:14:09 GMT+03:00, Dmitry Melekhov
wrote:
>13.08.2020 07:31, Strahil Nikolov wrote:
>> I meant: did you use C7 with Gluster 7 (or older), or C7 with the new
>> Gluster 8?
On 13 August 2020 at 5:23:12 GMT+03:00, Dmitry Melekhov
wrote:
>
>12.08.2020 23:25, Strahil Nikolov wrote:
>> I am not sure that it is ok to use any caching (at least ovirt
>> doesn't use).
>>
>> Have you set the 'virt' group of settings? They seem to be optimal,
>> but keep in mind that
13.08.2020 10:06, Strahil Nikolov wrote:
> I don't think it is VDO, but I could be wrong.
> My ovirt setup is VDO + Gluster v7.7 + CentOS 7.8. I tested libgfapi a long
> time ago and it worked.
> If you wish you can ask in the ovirt users' mailing list how qemu is using
> libgfapi.
As I wrote
btw, all I wrote before was about the raw file format;
if it is qcow2, then using gfapi:
virsh create /kvmconf/stewjon.xml
error: Failed to create domain from /kvmconf/stewjon.xml
error: internal error: process exited while connecting to monitor:
[2020-08-13 04:17:37.326933] E [MSGID: 108006]
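A hedged way to narrow this down, assuming the qemu-img build includes gluster support: if the same qcow2 image can be opened over libgfapi directly with qemu-img, the problem is more likely on the libvirt side. The hostname and image name below are made up; the volume name "pool" comes from the brick log further down.

  qemu-img info gluster://node1/pool/stewjon.qcow2     # read the qcow2 header via libgfapi
  qemu-img check gluster://node1/pool/stewjon.qcow2    # verify the qcow2 metadata the same way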
12.08.2020 23:25, Strahil Nikolov wrote:
I am not sure that it is ok to use any caching (at least ovirt doesn't use).
Have you set the 'virt' group of settings? They seem to be optimal, but keep
in mind that if you enable them -> you will enable sharding, which cannot be
'disabled' afterwards.
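As a minimal sketch of applying that option group on a test volume (the volume name "pool" is taken from the brick log further down, purely as an example):

  gluster volume set pool group virt        # apply the whole 'virt' option group in one step
  gluster volume get pool features.shard    # confirm that sharding got switched on

Since sharding cannot safely be switched off again, trying this on a throwaway volume first, as suggested above, seems prudent.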
The fact that it works on C7 is
Libgfapi brings far better performance, but qemu has some limitations.
If it works on FUSE, but not on libgfapi -> it seems obvious.
Have you tried to connect from C7 to the Gluster TSP via libgfapi?
Also, is SELinux in enforcing mode or not?
Best Regards,
Strahil Nikolov
On 12 August 2020
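Regarding the SELinux question above, a quick, hedged check on the hypervisor would be something like:

  getenforce                      # prints Enforcing, Permissive or Disabled
  ausearch -m avc -ts recent      # list recent SELinux denials, if any
  setenforce 0                    # temporarily switch to permissive for a test; setenforce 1 restores it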
12.08.2020 17:50, Strahil Nikolov wrote:
> Libgfapi brings far better performance,
Yes, and several vms do not rely on the same mount point...
> but qemu has some limitations.
> If it works on FUSE, but not on libgfapi -> it seems obvious.
Not obvious for me, we tested vdo locally, i.e.
On Wed, Aug 12, 2020 at 2:30 PM Dmitry Melekhov wrote:
> 12.08.2020 12:55, Amar Tumballi wrote:
> > Hi Dimitry,
> >
> > Was this working earlier and now failing on Version 8 or is this a new
> > setup which you did first time?
> >
> Hello!
>
>
> This is the first time we are testing gluster over
12.08.2020 12:55, Amar Tumballi wrote:
> Hi Dimitry,
> Was this working earlier and now failing on Version 8 or is this a new
> setup which you did first time?
Hello!
This is the first time we are testing gluster over vdo.
Thank you!
Hi Dimitry,
Was this working earlier and now failing on Version 8 or is this a new
setup which you did first time?
-Amar
On Wed, Aug 12, 2020 at 1:17 PM dm wrote:
> 12.08.2020 11:39, dm wrote:
> > Some more info, really we have lvm over lvm here:
> >
> > lvm-vdo-lvm...
> >
> > Thank you!
> >
Some more info, really we have lvm over lvm here:
lvm-vdo-lvm...
Thank you!
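Purely as a hypothetical sketch of what such an lvm-vdo-lvm stack usually looks like (device, VG and LV names are made up, sizes arbitrary):

  lvcreate -L 1T -n vdo_backing vg0                      # lower LVM layer
  vdo create --name=vdo0 --device=/dev/vg0/vdo_backing   # VDO sits on top of that LV
  pvcreate /dev/mapper/vdo0                              # upper LVM layer on the VDO device
  vgcreate vg_bricks /dev/mapper/vdo0
  lvcreate -L 900G -n brick1 vg_bricks
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1             # XFS filesystem for the Gluster brick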
12.08.2020 11:00, Dmitry Melekhov wrote:
> Hello!
> We are testing gluster 8 on centos 8.2 and we are trying to use a volume
> created over vdo.
> This is a 2-node setup.
> There is lvm created over vdo, and an xfs filesystem.
> Test
btw, part of brick log:
[2020-08-12 07:08:32.646082] I [MSGID: 115029]
[server-handshake.c:561:server_setvolume] 0-pool-server: accepted client
from CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:765
2-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0 (version: 8.0) with subvol
Hello!
We are testing gluster 8 on centos 8.2 and we are trying to use a volume created
over vdo.
This is a 2-node setup.
There is lvm created over vdo, and an xfs filesystem.
The test vm runs just fine if we run the vm over fuse:
/root/pool/ is the fuse mount.
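As a hypothetical sketch of such a FUSE mount (the hostname is made up; the volume name "pool" is taken from the brick log above):

  mount -t glusterfs node1:/pool /root/pool    # mount the Gluster volume over FUSE
  df -hT /root/pool                            # filesystem type should show fuse.glusterfs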
but if we try to run: