In my case network.remote-dio is off, but I can still do direct I/O:
[root@ovirt1 glusterfs]# cd /rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/
[root@ovirt1 gluster1:_data__fast]# gluster volume info data_fast | grep dio
network.remote-dio: off
[root@ovirt1 gluster1:_data__fast]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s
Most probably the two cases are different.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019 at 22:17:23 GMT+3, Nir Soffer
<[email protected]> wrote:
On Thu, May 16, 2019 at 10:12 PM Darrell Budic <[email protected]> wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer <[email protected]> wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <[email protected]> wrote:
I tried adding a new storage domain on my hyper converged test cluster running
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume
fine, but it’s not able to add the gluster storage domain (as either a managed
gluster volume or directly entering values). The created gluster volume mounts
and looks fine from the CLI. Errors in VDSM log:
...
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file
system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH
createStorageDomain error=Storage Domain target is unsupported: ()
from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed.
So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds but oVirt fails to connect to this domain, file a bug and we
will investigate.
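The same check can be reproduced programmatically. Below is a minimal Python sketch (not vdsm's actual implementation; the function name and test-file name are my own) that attempts a single page-aligned O_DIRECT write, which is roughly what the dd command above does:

```python
import os
import mmap
import tempfile

def supports_direct_io(path):
    """Try one 4096-byte O_DIRECT write in `path`.

    Returns True if the filesystem accepts direct I/O, False if it
    rejects it (typically with EINVAL, as seen in the dd test above).
    """
    testfile = os.path.join(path, ".directio_test")
    # O_DIRECT requires aligned buffers; an anonymous mmap region is
    # page-aligned by construction.
    buf = mmap.mmap(-1, 4096)
    try:
        fd = os.open(testfile, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        try:
            os.write(fd, buf)
        finally:
            os.close(fd)
        return True
    except OSError:
        return False
    finally:
        buf.close()
        try:
            os.unlink(testfile)
        except OSError:
            pass

print(supports_direct_io(tempfile.gettempdir()))
```

Running this against the gluster mountpoint should agree with the dd test: False on a volume that rejects direct I/O, True once it accepts it.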
Nir
Yep, it fails as expected. Just to check, it is working on pre-existing
volumes, so I poked around at gluster settings for the new volume. It has
network.remote-dio=off set on the new volume, but enabled on old volumes. After
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so
apparently it's not getting set by oVirt during the volume creation/optimize
for storage?
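For reference, the virt group settings can be applied as a whole rather than option by option; this is a sketch using the `data_fast` volume name from this thread (the group file ships with gluster under /var/lib/glusterd/groups/virt):

```shell
# Apply the full "virt" settings group, which includes
# network.remote-dio=enable, to the volume.
gluster volume set data_fast group virt

# Confirm the individual option took effect.
gluster volume get data_fast network.remote-dio
```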
I'm not sure who is responsible for changing these settings. oVirt has always
required direct I/O, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
Nir
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/[email protected]/message/IC4FIKTK5DSGMRCYXBTK7BLIDFSM76WN/