[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-21 Thread Krutika Dhananjay
On Tue, May 21, 2019 at 8:13 PM Strahil  wrote:

> Dear Krutika,
>
> Yes I did but I use 6 ports (1 gbit/s each) and this is the reason that
> reads get slower.
> Do you know a way to force gluster to open more connections (client to
> server & server to server)?
>

The idea was explored sometime back here -
https://review.gluster.org/c/glusterfs/+/19133
But there were some issues that were identified with the approach, so it
had to be dropped.

-Krutika

> Thanks for the detailed explanation.
>
> Best Regards,
> Strahil Nikolov
> On May 21, 2019 08:36, Krutika Dhananjay  wrote:
>
> So in our internal tests (with nvme ssd drives, 10g n/w), we found read
> performance to be better with choose-local
> disabled in hyperconverged setup.  See
> https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.
>
> With choose-local off, the read replica is chosen randomly (based on hash
> value of the gfid of that shard).
> And when it is enabled, the reads always go to the local replica.
> We attributed better performance with the option disabled to bottlenecks
> in gluster's rpc/socket layer. Imagine all read
> requests lined up to be sent over the same mount-to-brick connection as
> opposed to (nearly) randomly getting distributed
> over three (because replica count = 3) such connections.
>
> Did you run any tests that indicate "choose-local=on" is giving better
> read perf as opposed to when it's disabled?
>
> -Krutika
>
> On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov 
> wrote:
>
> Ok,
>
> so it seems that Darell's case and mine are different as I use vdo.
>
> Now I have destroyed Storage Domains, gluster volumes and vdo and
> recreated again (4 gluster volumes on a single vdo).
> This time vdo has '--emulate512=true' and no issues have been observed.
>
> Gluster volume options before 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
>
> Gluster volume after 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.enable-shared-storage: enable
>
> After that adding the volumes as storage domains (via UI) worked without
> any issues.
>
> Can someone clarify why we have now 'cluster.choose-local: off' when in
> oVirt 4.2.7 (gluster v3.12.15) we didn't have that ?
> I'm using storage that is faster than network and reading from local brick
> gives very high read speed.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Sunday, 19 May 2019, 9:47:27 AM GMT+3, Strahil wrote:
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OHWZ7Y3T7QKP6CVCC34KDOFSXVILJ332/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-21 Thread Strahil
Dear Krutika,

Yes I did but I use 6 ports (1 gbit/s each) and this is the reason that reads 
get slower.
Do you know a way to force gluster to open more connections (client to server & 
server to server)?

Thanks for the detailed explanation.

Best Regards,
Strahil Nikolov
On May 21, 2019 08:36, Krutika Dhananjay wrote:
>
> So in our internal tests (with nvme ssd drives, 10g n/w), we found read 
> performance to be better with choose-local 
> disabled in hyperconverged setup.  See 
> https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.
>
> With choose-local off, the read replica is chosen randomly (based on hash 
> value of the gfid of that shard).
> And when it is enabled, the reads always go to the local replica.
> We attributed better performance with the option disabled to bottlenecks in 
> gluster's rpc/socket layer. Imagine all read
> requests lined up to be sent over the same mount-to-brick connection as 
> opposed to (nearly) randomly getting distributed
> over three (because replica count = 3) such connections. 
>
> Did you run any tests that indicate "choose-local=on" is giving better read 
> perf as opposed to when it's disabled?
>
> -Krutika
>
> On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov  wrote:
>>
>> Ok,
>>
>> so it seems that Darell's case and mine are different as I use vdo.
>>
>> Now I have destroyed Storage Domains, gluster volumes and vdo and recreated 
>> again (4 gluster volumes on a single vdo).
>> This time vdo has '--emulate512=true' and no issues have been observed.
>>
>> Gluster volume options before 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.enable-shared-storage: enable
>>
>> Gluster volume after 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Stopped
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 1
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: on
>> cluster.enable-shared-storage: enable
>>
>> After that adding the volumes as storage domains (via UI) worked without any 
>> issues.
>>
>> Can someone clarify why we have now 'cluster.choose-local: off' when in 
>> oVirt 4.2.7 (gluster v3.12.15) we didn't have that ?
>> I'm using storage that is faster than network and reading from local brick 
>> gives very high read speed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>> On Sunday, 19 May 2019, 9:47:27 AM GMT+3, Strahil wrote:


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Krutika Dhananjay
So in our internal tests (with nvme ssd drives, 10g n/w), we found read
performance to be better with choose-local
disabled in hyperconverged setup.  See
https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.

With choose-local off, the read replica is chosen randomly (based on hash
value of the gfid of that shard).
And when it is enabled, the reads always go to the local replica.
We attributed better performance with the option disabled to bottlenecks in
gluster's rpc/socket layer. Imagine all read
requests lined up to be sent over the same mount-to-brick connection as
opposed to (nearly) randomly getting distributed
over three (because replica count = 3) such connections.
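
For anyone who wants to compare the two behaviours on their own setup, the
option can be read and toggled per volume. A minimal sketch, assuming the
volume name data_fast from the output quoted further down in this thread:

# Show the current effective value of the option
gluster volume get data_fast cluster.choose-local

# Route reads to the local replica
gluster volume set data_fast cluster.choose-local on

# Distribute reads (hashed on the shard gfid) across the replicas
gluster volume set data_fast cluster.choose-local off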

Did you run any tests that indicate "choose-local=on" is giving better read
perf as opposed to when it's disabled?

-Krutika

On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov 
wrote:

> Ok,
>
> so it seems that Darell's case and mine are different as I use vdo.
>
> Now I have destroyed Storage Domains, gluster volumes and vdo and
> recreated again (4 gluster volumes on a single vdo).
> This time vdo has '--emulate512=true' and no issues have been observed.
>
> Gluster volume options before 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
>
> Gluster volume after 'Optimize for virt':
>
> Volume Name: data_fast
> Type: Replicate
> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.enable-shared-storage: enable
>
> After that adding the volumes as storage domains (via UI) worked without
> any issues.
>
> Can someone clarify why we have now 'cluster.choose-local: off' when in
> oVirt 4.2.7 (gluster v3.12.15) we didn't have that ?
> I'm using storage that is faster than network and reading from local brick
> gives very high read speed.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Sunday, 19 May 2019, 9:47:27 AM GMT+3, Strahil <
> hunter86...@yahoo.com> wrote:
>
>
> On this one
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
> We should have the following options:
>
> performance.quick-read=off performance.read-ahead=off performance.io-cache=off
> performance.stat-prefetch=off performance.low-prio-threads=32
> network.remote-dio=enable cluster.eager-lock=enable
> cluster.quorum-type=auto cluster.server-quorum-type=server
> cluster.data-self-heal-algorithm=full cluster.locking-scheme=granular
> cluster.shd-max-threads=8 cluster.shd-wait-qlength=1 features.shard=on
> user.cifs=off
>
> By the way the 'virt' gluster group disables 'cluster.choose-local' and I
> think it wasn't like that.
> Any reasons behind that , as I use it to speedup my reads, as local
> storage is faster than the network?
>
> Best Regards,
> Strahil Nikolov
> On May 19, 2019 09:36, Strahil  wrote:
>
> OK,
>
> Can we summarize it:
> 1. VDO must 'emulate512=true'
> 2. 'network.remote-dio' should be off ?
>
> As per this:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool
>
> We should have these:
>
> quick-read=off
> read-ahead=off
> io-cache=off
> stat-prefetch=off
> eager-lock=enable
> remote-dio=on
> quorum-type=auto
> server-quorum-type=server
>
> I'm a little bit confused here.
>
> Best Regards,
> Strahil Nikolov
> On May 19, 2019 07:44, Sahina Bose  wrote:
>
>
>
> On Su

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Strahil Nikolov
I got confused so far. What is best for oVirt: remote-dio off or on? My latest
gluster volumes were set to 'off' while the older ones are 'on'.
Best Regards,
Strahil Nikolov

On Monday, 20 May 2019, 11:42:09 PM GMT+3, Darrell Budic wrote:
 
Wow, I think Strahil and I both hit different edge cases on this one. I was
running that on my test cluster with a ZFS-backed brick, which does not support
O_DIRECT (in the current version; 0.8 will, when it's released). I tested on an
XFS-backed brick with the gluster virt group applied and network.remote-dio
disabled, and oVirt was able to create the storage volume correctly. So not a
huge problem for most people, I imagine.
Now I'm curious about the apparent disconnect between gluster and oVirt, though.
Since the gluster virt group sets network.remote-dio on, what's the reasoning
behind disabling it for these tests?


On May 18, 2019, at 11:44 PM, Sahina Bose  wrote:


On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

From RHHI side default we are setting below volume options:

{ group: 'virt',
 storage.owner-uid: '36',
 storage.owner-gid: '36',
 network.ping-timeout: '30',
 performance.strict-o-direct: 'on',
 network.remote-dio: 'off'

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configuration. We’re checking on the 
bug, but the error is likely when the underlying device does not support 512b 
writes. With network.remote-dio off gluster will ensure o-direct writes


   }

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  wrote:

Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me
to create the storage domain without any issues. I set it on all 4 new gluster
volumes and the storage domains were successfully created.
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov

On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing
volumes, so I poked around at gluster settings for the new volume. It has
network.remote-dio=off set on the new volume, but enabled on old volumes. After
enabling it, I'm able to run the dd test:

[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I'm also able to add the storage domain in oVirt now.
I see network.remote-dio=enable is part of the gluster virt group, so
apparently it's not getting set by oVirt during the volume creation/optimize for
storage?





-- 


Thanks,
Gobinda




[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Darrell Budic
Wow, I think Strahil and I both hit different edge cases on this one. I was
running that on my test cluster with a ZFS-backed brick, which does not support
O_DIRECT (in the current version; 0.8 will, when it's released). I tested on an
XFS-backed brick with the gluster virt group applied and network.remote-dio
disabled, and oVirt was able to create the storage volume correctly. So not a
huge problem for most people, I imagine.

Now I'm curious about the apparent disconnect between gluster and oVirt, though.
Since the gluster virt group sets network.remote-dio on, what's the reasoning
behind disabling it for these tests?
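
For anyone comparing volumes while following along, the effective values of
the two options under discussion can be read per volume. A small sketch,
assuming the volume name test from the dd example quoted below:

# What the volume actually has set right now
gluster volume get test network.remote-dio
gluster volume get test performance.strict-o-direct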

> On May 18, 2019, at 11:44 PM, Sahina Bose  wrote:
> 
> 
> 
> On Sun, 19 May 2019 at 12:21 AM, Nir Soffer wrote:
> On Fri, May 17, 2019 at 7:54 AM Gobinda Das wrote:
> From RHHI side default we are setting below volume options:
> 
> { group: 'virt',
>  storage.owner-uid: '36',
>  storage.owner-gid: '36',
>  network.ping-timeout: '30',
>  performance.strict-o-direct: 'on',
>  network.remote-dio: 'off'
> 
> According to the user reports, this configuration is not compatible with 
> oVirt.
> 
> Was this tested?
> 
> Yes, this is set by default in all test configuration. We’re checking on the 
> bug, but the error is likely when the underlying device does not support 512b 
> writes. 
> With network.remote-dio off gluster will ensure o-direct writes
> 
>}
> 
> 
> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov wrote:
> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me 
> to create the storage domain without any issues.
> I set it on all 4 new gluster volumes and the storage domains were 
> successfully created.
> 
> I have created bug for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1711060 
> 
> 
> If someone else already opened - please ping me to mark this one as duplicate.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic
> <bu...@onholyground.com> wrote:
> 
> 
> On May 16, 2019, at 1:41 PM, Nir Soffer wrote:
> 
>> 
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic wrote:
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>> 
>> ... 
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>> 
>> The direct I/O check has failed.
>> 
>> 
>> So something is wrong in the files system.
>> 
>> To confirm, you can try to do:
>> 
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>> 
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>> 
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and we 
>> will investigate.
>> 
>> Nir
> 
> Yep, it fails as expected. Just to check, it is working on pre-existing 
> volumes, so I poked around at gluster settings for the new volume. It has 
> network.remote-dio=off set on the new volume, but enabled on old volumes. 
> After enabling it, I’m able to run the dd test:
> 
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
> 
> I’m also able to add the storage domain in ovirt now.
> 
> I see network.remote-dio=enable is part of the gluster virt group, so 
> apparently it’s not getting set by ovirt duding the volume creation/optimze 
> for storage?
> 
> 
> 

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-19 Thread Strahil Nikolov
 Ok,
so it seems that Darell's case and mine are different as I use vdo.
Now I have destroyed the Storage Domains, gluster volumes and VDO and recreated
them again (4 gluster volumes on a single VDO). This time VDO has '--emulate512=true'
and no issues have been observed.
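
For reference, a minimal sketch of creating such a VDO device with 512-byte
sector emulation (the flag spelling is the one used above; the name and device
path below are placeholders):

# Create the VDO volume with 512-byte sector emulation enabled
vdo create --name=vdo_data --device=/dev/sdb --emulate512=true

# Verify the logical sector size the kernel now reports (should print 512)
blockdev --getss /dev/mapper/vdo_data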
Gluster volume options before 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Gluster volume after 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable
After that adding the volumes as storage domains (via UI) worked without any 
issues.
Can someone clarify why we now have 'cluster.choose-local: off' when in oVirt
4.2.7 (gluster v3.12.15) we didn't have that? I'm using storage that is faster
than the network, and reading from the local brick gives very high read speed.
Best Regards,
Strahil Nikolov


On Sunday, 19 May 2019, 9:47:27 AM GMT+3, Strahil wrote:
 
 
On this one 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
 
We should have the following options:

performance.quick-read=off performance.read-ahead=off performance.io-cache=off 
performance.stat-prefetch=off performance.low-prio-threads=32 
network.remote-dio=enable cluster.eager-lock=enable cluster.quorum-type=auto 
cluster.server-quorum-type=server cluster.data-self-heal-algorithm=full 
cluster.locking-scheme=granular cluster.shd-max-threads=8 
cluster.shd-wait-qlength=1 features.shard=on user.cifs=off

By the way, the 'virt' gluster group disables 'cluster.choose-local' and I think
it wasn't like that before.
Any reasons behind that, as I use it to speed up my reads, since local storage is
faster than the network?

Best Regards,
Strahil Nikolov
On May 19, 2019 09:36, Strahil  wrote:


OK,

Can we summarize it:
1. VDO must 'emulate512=true'
2. 'network.remote-dio' should be off ?

As per this: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool

We should have these:

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on 
quorum-type=auto
server-quorum-type=server

I'm a little bit confused here.

Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose  wrote:



On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

From RHHI side default we are setting below volume options:

{ group: 'virt',
 storage.owner-uid: '36',
 storage.owner-gid: '36',
 network.ping-timeout: '30',
 performance.strict-o-direct: 'on',
 network.remote-dio: 'off'

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configuration. We’re checking on the 
bug, but the error is likely when the underlying device does not support 512b 
writes. With network.remote-dio off gluster will ensure o-direct writes


   }

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  wrote:

Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me
to create the storage domain without any issues. I set it on all 4 new gluster
volumes and the storage domains were successfully created.
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Strahil
On this one 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
 
We should have the following options:

performance.quick-read=off performance.read-ahead=off performance.io-cache=off 
performance.stat-prefetch=off performance.low-prio-threads=32 
network.remote-dio=enable cluster.eager-lock=enable cluster.quorum-type=auto 
cluster.server-quorum-type=server cluster.data-self-heal-algorithm=full 
cluster.locking-scheme=granular cluster.shd-max-threads=8 
cluster.shd-wait-qlength=1 features.shard=on user.cifs=off

By the way, the 'virt' gluster group disables 'cluster.choose-local' and I think
it wasn't like that before.
Any reasons behind that, as I use it to speed up my reads, since local storage is
faster than the network?
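
If you want to see exactly which options your gluster build ships in that
group, and apply them in one shot, the group file can be inspected directly.
A sketch, assuming a stock glusterd install and the volume name data_fast:

# List the options the predefined 'virt' group would set
cat /var/lib/glusterd/groups/virt

# Apply the whole group to a volume (roughly what 'Optimize for virt' applies)
gluster volume set data_fast group virt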

Best Regards,
Strahil Nikolov
On May 19, 2019 09:36, Strahil wrote:
>
> OK,
>
> Can we summarize it:
> 1. VDO must 'emulate512=true'
> 2. 'network.remote-dio' should be off ?
>
> As per this: 
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool
>
> We should have these:
>
> quick-read=off
> read-ahead=off
> io-cache=off
> stat-prefetch=off
> eager-lock=enable
> remote-dio=on 
> quorum-type=auto
> server-quorum-type=server
>
> I'm a little bit confused here.
>
> Best Regards,
> Strahil Nikolov
>
> On May 19, 2019 07:44, Sahina Bose  wrote:
>>
>>
>>
>> On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:
>>>
>>> On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

 From RHHI side default we are setting below volume options:

 { group: 'virt',
      storage.owner-uid: '36',
      storage.owner-gid: '36',
      network.ping-timeout: '30',
      performance.strict-o-direct: 'on',
      network.remote-dio: 'off'
>>>
>>>
>>> According to the user reports, this configuration is not compatible with 
>>> oVirt.
>>>
>>> Was this tested?
>>
>>
>> Yes, this is set by default in all test configuration. We’re checking on the 
>> bug, but the error is likely when the underlying device does not support 
>> 512b writes. 
>> With network.remote-dio off gluster will ensure o-direct writes
>>>
>>>
    }


 On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  
 wrote:
>
> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed 
> me to create the storage domain without any issues.
> I set it on all 4 new gluster volumes and the storage domains were 
> successfully created.
>
> I have created bug for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>
> If someone else already opened - please ping me to mark this one as 
> duplicate.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic wrote:
>
>
> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>
>>
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  
>> wrote:
>>>
>>> I tried adding a new storage domain on my hyper converged test cluster 
>>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new 
>>> gluster volume fine, but it’s not able to add the gluster storage 
>>> domain (as either a managed gluster volume or directly entering 
>>> values). The created gluster volume mounts and looks fine from the CLI. 
>>> Errors in VDSM log:
>>>
>> ... 
>>>
>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] 
>>> Underlying file system doesn't supportdirect IO (fileSD:110)
>>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>>> createStorageDomain error=Storage Domain target is unsupported: () 
>>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>>
>> The direct I/O check has failed.
>>
>>
>> So something is wrong in the files system.
>>
>> To confirm, you can try to do:
>>
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>>
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>>
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and 
>> we will investigate.
>>
>> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing 
> volumes, so I poked around at gluster settings for the new volume. It has 
> network.remote-dio=off set on the new volume, but enabled on old volumes. 
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Strahil
OK,

Can we summarize it:
1. VDO must 'emulate512=true'
2. 'network.remote-dio' should be off ?

As per this: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool

We should have these:

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on 
quorum-type=auto
server-quorum-type=server

I'm a little bit confused here.

Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose wrote:
>
>
>
> On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:
>>
>> On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:
>>>
>>> From RHHI side default we are setting below volume options:
>>>
>>> { group: 'virt',
>>>      storage.owner-uid: '36',
>>>      storage.owner-gid: '36',
>>>      network.ping-timeout: '30',
>>>      performance.strict-o-direct: 'on',
>>>      network.remote-dio: 'off'
>>
>>
>> According to the user reports, this configuration is not compatible with 
>> oVirt.
>>
>> Was this tested?
>
>
> Yes, this is set by default in all test configuration. We’re checking on the 
> bug, but the error is likely when the underlying device does not support 512b 
> writes. 
> With network.remote-dio off gluster will ensure o-direct writes
>>
>>
>>>    }
>>>
>>>
>>> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  
>>> wrote:

 Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed 
 me to create the storage domain without any issues.
 I set it on all 4 new gluster volumes and the storage domains were 
 successfully created.

 I have created bug for that:
 https://bugzilla.redhat.com/show_bug.cgi?id=1711060

 If someone else already opened - please ping me to mark this one as 
 duplicate.

 Best Regards,
 Strahil Nikolov


On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic wrote:


 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:

>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  
> wrote:
>>
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new 
>> gluster volume fine, but it’s not able to add the gluster storage domain 
>> (as either a managed gluster volume or directly entering values). The 
>> created gluster volume mounts and looks fine from the CLI. Errors in 
>> VDSM log:
>>
> ... 
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] 
>> Underlying file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the files system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountoint/test': Invalid argument
>
> If it succeeds, but oVirt fail to connect to this domain, file a bug and 
> we will investigate.
>
> Nir


 Yep, it fails as expected. Just to check, it is working on pre-existing 
 volumes, so I poked around at gluster settings for the new volume. It has 
 network.remote-dio=off set on the new volume, but enabled on old volumes. 
 After enabling it, I’m able to run the dd test:

 [root@boneyard mnt]# gluster vol set test network.remote-dio enable
 volume set: success
 [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 
 oflag=direct
 1+0 records in
 1+0 records out
 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

 I’m also able to add the storage domain in ovirt now.

 I see network.remote-dio=enable is part of the gluster virt group, so 
 apparently it’s not getting set by ovirt duding the volume 
 creation/optimze for storage?




[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

> On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:
>
>> From RHHI side default we are setting below volume options:
>>
>> { group: 'virt',
>>  storage.owner-uid: '36',
>>  storage.owner-gid: '36',
>>  network.ping-timeout: '30',
>>  performance.strict-o-direct: 'on',
>>  network.remote-dio: 'off'
>>
>
> According to the user reports, this configuration is not compatible with
> oVirt.
>
> Was this tested?
>

Yes, this is set by default in all test configurations. We're checking on
the bug, but the error is likely when the underlying device does not
support 512b writes.
With network.remote-dio off, gluster will ensure O_DIRECT writes.

>
>}
>>
>>
>> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
>> wrote:
>>
>>> Ok, setting 'gluster volume set data_fast4 network.remote-dio on'
>>> allowed me to create the storage domain without any issues.
>>> I set it on all 4 new gluster volumes and the storage domains were
>>> successfully created.
>>>
>>> I have created bug for that:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>>>
>>> If someone else already opened - please ping me to mark this one as
>>> duplicate.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic <
>>> bu...@onholyground.com> wrote:
>>>
>>>
>>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>>
>>>
>>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>>> wrote:
>>>
>>> I tried adding a new storage domain on my hyper converged test cluster
>>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>>> volume fine, but it’s not able to add the gluster storage domain (as either
>>> a managed gluster volume or directly entering values). The created gluster
>>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>>
>>> ...
>>>
>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>>> file system doesn't supportdirect IO (fileSD:110)
>>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>>> createStorageDomain error=Storage Domain target is unsupported: ()
>>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>
>>>
>>> The direct I/O check has failed.
>>>
>>>
>>> So something is wrong in the files system.
>>>
>>> To confirm, you can try to do:
>>>
>>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>>>
>>> This will probably fail with:
>>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>>>
>>> If it succeeds, but oVirt fail to connect to this domain, file a bug and
>>> we will investigate.
>>>
>>> Nir
>>>
>>>
>>> Yep, it fails as expected. Just to check, it is working on pre-existing
>>> volumes, so I poked around at gluster settings for the new volume. It has
>>> network.remote-dio=off set on the new volume, but enabled on old volumes.
>>> After enabling it, I’m able to run the dd test:
>>>
>>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>>> volume set: success
>>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>>> oflag=direct
>>> 1+0 records in
>>> 1+0 records out
>>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>>
>>> I’m also able to add the storage domain in ovirt now.
>>>
>>> I see network.remote-dio=enable is part of the gluster virt group, so
>>> apparently it’s not getting set by ovirt duding the volume creation/optimze
>>> for storage?
>>>
>>>
>>>
>>
>>
>> --
>>
>>
>> Thanks,
>> Gobinda
>>
>


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Fri, 17 May 2019 at 2:13 AM, Strahil Nikolov 
wrote:

>
> >This may be another issue. This command works only for storage with 512
> bytes sector size.
>
> >Hyperconverged systems may use VDO, and it must be configured in
> compatibility mode to support
> 512 bytes sector size.
>
> >I'm not sure how this is configured but Sahina should know.
>
> >Nir
>
> I do use VDO.
>

There’s a 512b emulation property that needs to be set on for the vdo
volume.
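
A quick way to confirm whether an existing VDO device presents 512-byte
logical sectors (the assumption behind the direct I/O test discussed earlier
in the thread) is to check the sector sizes the kernel exposes. A sketch, with
the device name as a placeholder:

# LOG-SEC should be 512 when emulation is on
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/mapper/vdo_data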


>


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Nir Soffer
On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

> From RHHI side default we are setting below volume options:
>
> { group: 'virt',
>  storage.owner-uid: '36',
>  storage.owner-gid: '36',
>  network.ping-timeout: '30',
>  performance.strict-o-direct: 'on',
>  network.remote-dio: 'off'
>

According to the user reports, this configuration is not compatible with
oVirt.

Was this tested?

   }
>
>
> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
> wrote:
>
>> Ok, setting 'gluster volume set data_fast4 network.remote-dio on'
>> allowed me to create the storage domain without any issues.
>> I set it on all 4 new gluster volumes and the storage domains were
>> successfully created.
>>
>> I have created bug for that:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>>
>> If someone else already opened - please ping me to mark this one as
>> duplicate.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>> On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic <
>> bu...@onholyground.com> wrote:
>>
>>
>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>
>>
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>> wrote:
>>
>> I tried adding a new storage domain on my hyper converged test cluster
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>> volume fine, but it’s not able to add the gluster storage domain (as either
>> a managed gluster volume or directly entering values). The created gluster
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
>> ...
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>> createStorageDomain error=Storage Domain target is unsupported: ()
>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>>
>> The direct I/O check has failed.
>>
>>
>> So something is wrong in the files system.
>>
>> To confirm, you can try to do:
>>
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>>
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>>
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and
>> we will investigate.
>>
>> Nir
>>
>>
>> Yep, it fails as expected. Just to check, it is working on pre-existing
>> volumes, so I poked around at gluster settings for the new volume. It has
>> network.remote-dio=off set on the new volume, but enabled on old volumes.
>> After enabling it, I’m able to run the dd test:
>>
>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>> volume set: success
>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>> oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>
>> I’m also able to add the storage domain in ovirt now.
>>
>> I see network.remote-dio=enable is part of the gluster virt group, so
>> apparently it’s not getting set by ovirt duding the volume creation/optimze
>> for storage?
>>
>>
>>
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Gobinda Das
From RHHI side default we are setting below volume options:

{ group: 'virt',
 storage.owner-uid: '36',
 storage.owner-gid: '36',
 network.ping-timeout: '30',
 performance.strict-o-direct: 'on',
 network.remote-dio: 'off'
   }


On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
wrote:

> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed
> me to create the storage domain without any issues.
> I set it on all 4 new gluster volumes and the storage domains were
> successfully created.
>
> I have created bug for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>
> If someone else already opened - please ping me to mark this one as
> duplicate.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic <
> bu...@onholyground.com> wrote:
>
>
> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
> volume fine, but it’s not able to add the gluster storage domain (as either
> a managed gluster volume or directly entering values). The created gluster
> volume mounts and looks fine from the CLI. Errors in VDSM log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the files system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountoint/test': Invalid argument
>
> If it succeeds, but oVirt fail to connect to this domain, file a bug and
> we will investigate.
>
> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing
> volumes, so I poked around at gluster settings for the new volume. It has
> network.remote-dio=off set on the new volume, but enabled on old volumes.
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
> oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>
> I’m also able to add the storage domain in ovirt now.
>
> I see network.remote-dio=enable is part of the gluster virt group, so
> apparently it’s not getting set by ovirt duding the volume creation/optimze
> for storage?
>
>
>
>


-- 


Thanks,
Gobinda


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me
to create the storage domain without any issues. I set it on all 4 new gluster
volumes and the storage domains were successfully created.
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov

On Thursday, 16 May 2019, 10:27:01 PM GMT+3, Darrell Budic wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing
volumes, so I poked around at gluster settings for the new volume. It has
network.remote-dio=off set on the new volume, but enabled on old volumes. After
enabling it, I'm able to run the dd test:

[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I'm also able to add the storage domain in oVirt now.
I see network.remote-dio=enable is part of the gluster virt group, so
apparently it's not getting set by oVirt during the volume creation/optimize for
storage?




[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
In my case the dio is off, but I can still do direct io:

[root@ovirt1 glusterfs]# cd /rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/
[root@ovirt1 gluster1:_data__fast]# gluster volume info data_fast | grep dio
network.remote-dio: off
[root@ovirt1 gluster1:_data__fast]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s


Most probably the 2 cases are different.
Best Regards,
Strahil Nikolov


On Thursday, 16 May 2019, 10:17:23 PM GMT+3, Nir Soffer wrote:
 
 On Thu, May 16, 2019 at 10:12 PM Darrell Budic  wrote:

On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so 
apparently it’s not getting set by ovirt during the volume creation/optimize for 
storage?

I'm not sure who is responsible for changing these settings. oVirt always 
required directio, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
Nir


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov

>This may be another issue. This command works only for storage with a 512-byte 
>sector size.
>Hyperconverged systems may use VDO, and it must be configured in compatibility 
>mode to support a 512-byte sector size.
>I'm not sure how this is configured but Sahina should know.
>Nir
I do use VDO.
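
For reference, 512-byte emulation is normally chosen when the VDO volume is
created. A rough sketch of what that looks like; the device and volume names
are illustrative, and the exact flag spelling may differ between vdo versions
(check `vdo create --help`):

# Create a VDO volume with 512-byte sector emulation so that tools doing
# 512-byte-granularity direct I/O (like vdsm's dd probe) keep working.
# Device name is illustrative; the flag value syntax is an assumption.
import subprocess

subprocess.check_call([
    "vdo", "create",
    "--name=vdo_gluster",
    "--device=/dev/sdb",
    "--emulate512=enabled",
])

# The logical sector size of the resulting device should then report 512:
#   blockdev --getss /dev/mapper/vdo_gluster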


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
https://bugzilla.redhat.com/show_bug.cgi?id=1711054



> On May 16, 2019, at 2:17 PM, Nir Soffer  wrote:
> 
> On Thu, May 16, 2019 at 10:12 PM Darrell Budic  wrote:
> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>> 
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>> 
>> ... 
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>> 
>> The direct I/O check has failed.
>> 
>> 
>> So something is wrong in the file system.
>> 
>> To confirm, you can try to do:
>> 
>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>> 
>> This will probably fail with:
>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>> 
>> If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
>> will investigate.
>> 
>> Nir
> 
> Yep, it fails as expected. Just to check, it is working on pre-existing 
> volumes, so I poked around at gluster settings for the new volume. It has 
> network.remote-dio=off set on the new volume, but enabled on old volumes. 
> After enabling it, I’m able to run the dd test:
> 
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
> 
> I’m also able to add the storage domain in ovirt now.
> 
> I see network.remote-dio=enable is part of the gluster virt group, so 
> apparently it’s not getting set by ovirt during the volume creation/optimize 
> for storage?
> 
> I'm not sure who is responsible for changing these settings. oVirt always 
> required directio, and we
> never had to change anything in gluster.
> 
> Sahina, maybe gluster changed the defaults?
> 
> Darrell, please file a bug, probably for RHHI.
> 
> Nir



[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 10:12 PM Darrell Budic 
wrote:

> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
>> I tried adding a new storage domain on my hyper converged test cluster
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>> volume fine, but it’s not able to add the gluster storage domain (as either
>> a managed gluster volume or directly entering values). The created gluster
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
>> ...
>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>> createStorageDomain error=Storage Domain target is unsupported: ()
>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the file system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>
> If it succeeds, but oVirt fails to connect to this domain, file a bug and
> we will investigate.
>
> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing
> volumes, so I poked around at gluster settings for the new volume. It has
> network.remote-dio=off set on the new volume, but enabled on old volumes.
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
> oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>
> I’m also able to add the storage domain in ovirt now.
>
> I see network.remote-dio=enable is part of the gluster virt group, so
> apparently it’s not getting set by ovirt during the volume creation/optimize
> for storage?
>

I'm not sure who is responsible for changing these settings. oVirt always
required directio, and we
never had to change anything in gluster.

Sahina, maybe gluster changed the defaults?

Darrell, please file a bug, probably for RHHI.

Nir


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
> 
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
> I tried adding a new storage domain on my hyper converged test cluster 
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
> volume fine, but it’s not able to add the gluster storage domain (as either a 
> managed gluster volume or directly entering values). The created gluster 
> volume mounts and looks fine from the CLI. Errors in VDSM log:
> 
> ... 
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> createStorageDomain error=Storage Domain target is unsupported: () 
> from=:::10.100.90.5,44732, flow_id=31d993dd, 
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
> 
> The direct I/O check has failed.
> 
> 
> So something is wrong in the file system.
> 
> To confirm, you can try to do:
> 
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
> 
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
> 
> If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
> will investigate.
> 
> Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:

[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I’m also able to add the storage domain in ovirt now.

I see network.remote-dio=enable is part of the gluster virt group, so 
apparently it’s not getting set by ovirt during the volume creation/optimize for 
storage?





[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 10:02 PM Strahil  wrote:

> This is my previous e-mail:
>
> On May 16, 2019 15:23, Strahil Nikolov  wrote:
>
> It seems that the issue is within the 'dd' command as it stays waiting for
> input:
>
> [root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync  ^C0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
>
> Changing the dd command works and shows that the gluster is working:
>
> [root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync  0+1 records in
> 0+1 records out
> 131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
>
> Best Regards,
>
> Strahil Nikolov
>
> - Forwarded message -
>
> *From:* Strahil Nikolov 
>
> *To:* Users 
>
> *Sent:* Thursday, May 16, 2019, 05:56:44 GMT-4
>
> *Subject:* ovirt 4.3.3.7 cannot create a gluster storage domain
>
> Hey guys,
>
> I have recently updated (yesterday) my platform to the latest available (v
> 4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3-node
> cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX
> is for gluster communication) while ovirt3 is the arbiter.
>
> Today I have tried to add new storage domains, but they fail with the
> following:
>
> 2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH
> createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock',
> u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
> 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1',
> 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]'
> err="/usr/bin/dd: error writing
> '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
> Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a,
> task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
>
>
This may be another issue. This command works only for storage with a
512-byte sector size.

Hyperconverged systems may use VDO, and it must be configured in
compatibility mode to support a 512-byte sector size.

I'm not sure how this is configured but Sahina should know.

Nir
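
The arithmetic behind that: the xleases write above uses bs=256512, which is
501 x 512 bytes but not a whole number of 4096-byte blocks, so an O_DIRECT
write of that size can only succeed when the device's logical sector size is
512. A quick way to check what the storage reports (a minimal sketch; the
device name is illustrative):

# Report the logical sector size of a block device, to see whether
# 512-byte-granularity direct I/O (as used by the xleases dd) can work on it.
def logical_block_size(dev):
    # e.g. dev = "dm-3" for a device-mapper/VDO device, "sdb" for a raw disk
    with open("/sys/block/%s/queue/logical_block_size" % dev) as f:
        return int(f.read())

print(logical_block_size("sdb"))  # 512 -> OK; 4096 -> 4K-native, so O_DIRECT
                                  # writes must be 4096-aligned and sized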

> 2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in createStorageDomain
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614,
> in createStorageDomain
> storageType, domVersion, block_size, alignment)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py
> ", line 106, in create
> block_size)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py
> ", line 466, in _prepareMetadata
> cls.format_external_leases(sdUUID, xleases_path)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255,
> in format_external_leases
> xlease.format_index(lockspace, backend)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 681, in format_index
> index.dump(file)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 843, in dump
> file.pwrite(INDEX_BASE, self._buf)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 1076, in pwr
>
>
> It seems that the 'dd' is having trouble checking the new gluster volume.
> The output is from RC1, but as you can see, Darrell's situation may be
> the same.
> On May 16, 2019 21:41, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new
> gluster volume fine, but it’s not able to add the gluster storage domain
> (as either a managed gluster volume or directly entering values). The
> created gluster volume mounts and looks fine from the CLI. Errors in VDSM
> log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil
This is my previous e-mail:

On May 16, 2019 15:23, Strahil Nikolov  wrote:

It seems that the issue is within the 'dd' command as it stays waiting for 
input:


[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file oflag=direct,seek_bytes 
seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync  ^C
0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s



Changing the dd command works and shows that the gluster is working:


[root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file 
oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 
conv=notrunc,nocreat,fsync
0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s

Best Regards,

Strahil Nikolov



- Forwarded message -

From: Strahil Nikolov 

To: Users 

Sent: Thursday, May 16, 2019, 05:56:44 GMT-4

Subject: ovirt 4.3.3.7 cannot create a gluster storage domain


Hey guys,


I have recently updated (yesterday) my platform to the latest available (v4.3.3.7) 
and upgraded to gluster v6.1. The setup is a hyperconverged 3-node cluster with 
ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster 
communication) while ovirt3 is the arbiter.


Today I have tried to add new storage domains, but they fail with the following:


2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in 
createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in 
create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in 
_prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in 
format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in 
format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in 
dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in 
pwr

It seems that the 'dd' is having trouble checking the new gluster volume.
The output is from RC1, but as you can see, Darrell's situation may be the 
same.

On May 16, 2019 21:41, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
>>
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
> ... 
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  99     try:
> 100         # Don't unlink this file, we don't have the cluster lock yet as it
> 101         # requires direct IO which is what we are trying to test for. This
> 102         # means that unlinking the file might cause a race. Since we don't
> 103         # care what the content of the file is, just that we managed to
> 104         # open it O_DIRECT.
> 105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106         oop.getProcessPool(sdUUID).directT

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
wrote:

> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
> volume fine, but it’s not able to add the gluster storage domain (as either
> a managed gluster volume or directly entering values). The created gluster
> volume mounts and looks fine from the CLI. Errors in VDSM log:
>
> ...

> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>

The direct I/O check has failed.

This is the code doing the check:

 98 def validateFileSystemFeatures(sdUUID, mountDir):
 99     try:
100         # Don't unlink this file, we don't have the cluster lock yet as it
101         # requires direct IO which is what we are trying to test for. This
102         # means that unlinking the file might cause a race. Since we don't
103         # care what the content of the file is, just that we managed to
104         # open it O_DIRECT.
105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
106         oop.getProcessPool(sdUUID).directTouch(testFilePath)
107     except OSError as e:
108         if e.errno == errno.EINVAL:
109             log = logging.getLogger("storage.fileSD")
110             log.error("Underlying file system doesn't support"
111                       "direct IO")
112             raise se.StorageDomainTargetUnsupported()
113
114         raise

The actual check is done in ioprocess, using:

319     fd = open(path->str, allFlags, mode);
320     if (fd == -1) {
321         rv = fd;
322         goto clean;
323     }
324
325     rv = futimens(fd, NULL);
326     if (rv < 0) {
327         goto clean;
328     }
With:

allFlags = O_WRONLY | O_CREAT | O_DIRECT

See:
https://github.com/oVirt/ioprocess/blob/7508d23e19aeeb4dfc180b854a5a92690d2e2aaf/src/exported-functions.c#L291

According to the error message:
Underlying file system doesn't support direct IO

We got EINVAL, which is possible only from open(), and is likely an issue
opening the file with O_DIRECT.
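
For reference, the probe boils down to a few lines; a minimal sketch of what
directTouch does (the mount path below is taken from the log above and is only
an example):

# Open a test file with O_DIRECT and update its timestamps. On a file system
# (or gluster volume) that cannot honour O_DIRECT, open() fails with EINVAL.
# Like vdsm, the test file is intentionally left in place afterwards.
import errno
import os

mount_dir = "/rhev/data-center/mnt/glusterSD/10.50.3.12:_test"
test_path = os.path.join(mount_dir, "__DIRECT_IO_TEST__")

try:
    fd = os.open(test_path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT)
    try:
        os.utime(test_path, None)   # stand-in for futimens(fd, NULL)
    finally:
        os.close(fd)
    print("direct I/O looks supported here")
except OSError as e:
    if e.errno == errno.EINVAL:
        print("underlying file system doesn't support direct I/O")
    else:
        raise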

So something is wrong in the file system.

To confirm, you can try to do:

dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct

This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument

If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.

Nir


>
> On May 16, 2019, at 11:55 AM, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 7:42 PM Strahil  wrote:
>
>> Hi Sandro,
>>
>> Thanks for the update.
>>
>> I have just upgraded to RC1 (using gluster v6 here)  and the issue  I
>> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
>> still present.
>>
>
> What is this issue? Can you provide a link to the bug/mail about it?
>
>> Can you check if the 'dd' command executed during creation has been
>> recently modified?
>>
>> I've received update from Darrell  (also gluster v6) , but haven't
>> received an update from anyone who is using gluster v5 -> thus I haven't
>> opened a bug yet.
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.3.4 First Release Candidate, as of May 16th, 2019.
>>
>> This update is a release candidate of the fourth in a series of
>> stabilization updates to the 4.3 series.
>> This is pre-release software. This pre-release should not be used
>> in production.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> Experimental tech preview for x86_64 and s390x architectures for Fedora
>> 28 is also included.
>>
>> See the release notes [1] for installation / upgrade instructions and a
>> list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is already available[2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.4 release highlights:
>> http://www.ovirt.org/release/4.3.4/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.4/
>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>
>> --
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

2019-05-16 10:25:08,158-0500 INFO  (jsonrpc/1) [vdsm.api] START 
connectStorageServer(domType=7, spUUID=u'----', 
conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', 
u'id': u'----', u'connection': 
u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': 
u'false', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], 
options=None) from=:::10.100.90.5,44732, 
flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, 
task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:48)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) 
[storage.StorageServer.MountConnection] Creating directory 
u'/rhev/data-center/mnt/glusterSD/10.50.3.12:_test' (storageServer:168)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/glusterSD/10.50.3.12:_test mode: None 
(fileUtils:199)
2019-05-16 10:25:08,306-0500 WARN  (jsonrpc/1) 
[storage.StorageServer.MountConnection] Using user specified 
backup-volfile-servers option (storageServer:275)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) [storage.Mount] mounting 
10.50.3.12:/test at /rhev/data-center/mnt/glusterSD/10.50.3.12:_test (mount:204)
2019-05-16 10:25:08,453-0500 INFO  (jsonrpc/1) [IOProcessClient] (Global) 
Starting client (__init__:308)
2019-05-16 10:25:08,460-0500 INFO  (ioprocess/5389) [IOProcess] (Global) 
Starting ioprocess (__init__:434)
2019-05-16 10:25:08,473-0500 INFO  (itmap/0) [IOProcessClient] 
(/glusterSD/10.50.3.12:_test) Starting client (__init__:308)
2019-05-16 10:25:08,481-0500 INFO  (ioprocess/5401) [IOProcess] 
(/glusterSD/10.50.3.12:_test) Starting ioprocess (__init__:434)
2019-05-16 10:25:08,484-0500 INFO  (jsonrpc/1) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'status': 0, 'id': 
u'----'}]} from=:::10.100.90.5,44732, 
flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, 
task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:54)
2019-05-16 10:25:08,484-0500 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.connectStorageServer succeeded in 0.33 seconds (__init__:312)

2019-05-16 10:25:09,169-0500 INFO  (jsonrpc/7) [vdsm.api] START 
connectStorageServer(domType=7, spUUID=u'----', 
conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', 
u'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12', u'connection': 
u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': 
u'false', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], 
options=None) from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:48)
2019-05-16 10:25:09,180-0500 INFO  (jsonrpc/7) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'status': 0, 'id': 
u'd0ab6b05-2486-40f0-9b15-7f150017ec12'}]} from=:::10.100.90.5,44732, 
flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:54)
2019-05-16 10:25:09,180-0500 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:312)
2019-05-16 10:25:09,186-0500 INFO  (jsonrpc/5) [vdsm.api] START 
createStorageDomain(storageType=7, 
sdUUID=u'4037f461-2b6d-452f-8156-fcdca820a8a1', domainName=u'gTest', 
typeSpecificArg=u'10.50.3.12:/test', domClass=1, domVersion=u'4', 
block_size=512, max_hosts=250, options=None) from=:::10.100.90.5,44732, 
flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:48)
2019-05-16 10:25:09,492-0500 WARN  (jsonrpc/5) [storage.LVM] Reloading VGs 
failed (vgs=[u'4037f461-2b6d-452f-8156-fcdca820a8a1'] rc=5 out=[] err=['  
Volume group "4037f461-2b6d-452f-8156-fcdca820a8a1" not found', '  Cannot 
process volume group 4037f461-2b6d-452f-8156-fcdca820a8a1']) (lvm:442)
2019-05-16 10:25:09,507-0500 INFO  (jsonrpc/5) [storage.StorageDomain] 
sdUUID=4037f461-2b6d-452f-8156-fcdca820a8a1 domainName=gTest 
remotePath=10.50.3.12:/test domClass=1, block_size=512, alignment=1048576 
(nfsSD:86)
2019-05-16 10:25:09,521-0500 INFO  (jsonrpc/5) [IOProcessClient] 
(4037f461-2b6d-452f-8156-fcdca820a8a1) Starting client (__init__:308)
2019-05-16 10:25:09,528-0500 INFO  (ioprocess/5437) [IOProcess] 
(4037f461-2b6d-452f-8156-fcdca820a8a1) Starting ioprocess (__init__:434)
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.