Ok,
so it seems that Darrell's case and mine are different, as I use VDO.
I have now destroyed the Storage Domains, gluster volumes and VDO and recreated
them (4 gluster volumes on a single VDO device). This time VDO was created with
'--emulate512=true' and no issues have been observed.
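For reference, a minimal sketch of that VDO recreation step; the device
(/dev/sdb) and name (vdo1) are hypothetical, and the exact value spelling
('true' vs 'enabled') depends on the vdo version:

# recreate the VDO volume with 512-byte sector emulation enabled
vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled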
Gluster volume options before 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
Gluster volume after 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable
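For reference, a hedged sketch of the CLI equivalent of 'Optimize for virt',
based on the RHHI defaults quoted further down in this thread (volume name as
above):

# apply the shipped 'virt' option group, then the RHHI-specific options
gluster volume set data_fast group virt
gluster volume set data_fast storage.owner-uid 36
gluster volume set data_fast storage.owner-gid 36
gluster volume set data_fast network.ping-timeout 30
gluster volume set data_fast performance.strict-o-direct on
gluster volume set data_fast network.remote-dio off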
After that, adding the volumes as storage domains (via the UI) worked without
any issues.
Can someone clarify why we now have 'cluster.choose-local: off', when in oVirt
4.2.7 (gluster v3.12.15) we didn't have that? I'm using storage that is faster
than the network, and reading from the local brick gives very high read speeds.
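If local reads are worth more than the default on your hardware, the option
can be flipped back per volume after optimizing; a one-line sketch using the
volume above:

gluster volume set data_fast cluster.choose-local on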
Best Regards,
Strahil Nikolov
On Sunday, 19 May 2019, 09:47:27 GMT+3, Strahil
<[email protected]> wrote:
On this one
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
We should have the following options:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
By the way, the 'virt' gluster group disables 'cluster.choose-local', and I
think it wasn't like that before.
Is there any reason behind that? I use it to speed up my reads, since local
storage is faster than the network.
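You can check what the virt group shipped with your gluster version actually
sets; a hedged check, assuming the default glusterd group-file location:

grep choose-local /var/lib/glusterd/groups/virt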
Best Regards,
Strahil Nikolov
On May 19, 2019 09:36, Strahil <[email protected]> wrote:
OK,
Can we summarize it:
1. VDO must have 'emulate512=true'
2. 'network.remote-dio' should be off?
As per this:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool
We should have these:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on
quorum-type=auto
server-quorum-type=server
I'm a little bit confused here.
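A quick way to compare that list against what a volume actually has (hedged;
'gluster volume get' needs a reasonably recent gluster):

gluster volume get data_fast network.remote-dio
# or dump every effective option at once:
gluster volume get data_fast all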
Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose <[email protected]> wrote:
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer <[email protected]> wrote:
On Fri, May 17, 2019 at 7:54 AM Gobinda Das <[email protected]> wrote:
From the RHHI side, we set the following volume options by default:
{ group: 'virt',
  storage.owner-uid: '36',
  storage.owner-gid: '36',
  network.ping-timeout: '30',
  performance.strict-o-direct: 'on',
  network.remote-dio: 'off' }

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configurations. We're checking on the
bug, but the error likely occurs when the underlying device does not support
512b writes. With network.remote-dio off, gluster will ensure O_DIRECT writes.
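A hedged way to check whether the underlying device is the culprit (the device
path is a placeholder):

blockdev --getss /dev/mapper/vdo1    # logical sector size seen by callers
blockdev --getpbsz /dev/mapper/vdo1  # physical block size of the device
# without 512-byte emulation a VDO device reports 4096, and O_DIRECT
# 512b writes will fail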
On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov <[email protected]> wrote:
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me
to create the storage domain without any issues. I set it on all 4 new gluster
volumes and the storage domains were successfully created.
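For all four volumes at once, a short sketch (only data_fast and data_fast4
are named in this thread; the other volume names are hypothetical):

for vol in data_fast data_fast2 data_fast3 data_fast4; do
    gluster volume set "$vol" network.remote-dio on
done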
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else has already opened one, please ping me to mark this one as a
duplicate.
Best Regards,
Strahil Nikolov
On Thursday, 16 May 2019, 22:27:01 GMT+3, Darrell Budic
<[email protected]> wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer <[email protected]> wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <[email protected]> wrote:
I tried adding a new storage domain on my hyperconverged test cluster running
oVirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume
fine, but it's not able to add the gluster storage domain (as either a managed
gluster volume or by directly entering values). The created gluster volume
mounts and looks fine from the CLI. Errors in the VDSM log:
...
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file
system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH
createStorageDomain error=Storage Domain target is unsupported: ()
from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed.
So something is wrong in the file system.
To confirm, you can try:

dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct

This will probably fail with:

dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.
Nir
Yep, it fails as expected. Just to check: it does work on pre-existing
volumes, so I poked around at the gluster settings for the new volume. It has
network.remote-dio=off set on the new volume, but enabled on the old volumes.
After enabling it, I'm able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so
apparently it's not getting set by oVirt during the volume creation/optimize
for storage?
--
Thanks,
Gobinda