[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

> On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:
>
>> From the RHHI side, we set the below volume options by default:
>>
>> { group: 'virt',
>>  storage.owner-uid: '36',
>>  storage.owner-gid: '36',
>>  network.ping-timeout: '30',
>>  performance.strict-o-direct: 'on',
>>  network.remote-dio: 'off'
>>
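A rough sketch of how those defaults map to the gluster CLI (the volume name
is only an example; the 'group virt' line applies the whole virt option group):

gluster volume set data_fast group virt
gluster volume set data_fast storage.owner-uid 36
gluster volume set data_fast storage.owner-gid 36
gluster volume set data_fast network.ping-timeout 30
gluster volume set data_fast performance.strict-o-direct on
gluster volume set data_fast network.remote-dio off
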
>
> According to the user reports, this configuration is not compatible with
> oVirt.
>
> Was this tested?
>

Yes, this is set by default in all test configurations. We're checking on
the bug, but the error likely occurs when the underlying device does not
support 512-byte writes.
With network.remote-dio off, gluster will ensure O_DIRECT writes.
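For example, one way to check whether the brick's backing device exposes
512-byte logical sectors (the device name below is only an illustration):

blockdev --getss /dev/sdX     # logical sector size (512 vs 4096)
blockdev --getpbsz /dev/sdX   # physical block size
cat /sys/block/sdX/queue/logical_block_size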

>
>}
>>
>>
>> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
>> wrote:
>>
>>> Ok, setting 'gluster volume set data_fast4 network.remote-dio on'
>>> allowed me to create the storage domain without any issues.
>>> I set it on all 4 new gluster volumes and the storage domains were
>>> successfully created.
>>>
>>> I have created a bug for that:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>>>
>>> If someone else has already opened one, please ping me so I can mark this
>>> one as a duplicate.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Thursday, 16 May 2019 at 22:27:01 GMT+3, Darrell Budic <
>>> bu...@onholyground.com> wrote:
>>>
>>>
>>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>>
>>>
>>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>>> wrote:
>>>
>>> I tried adding a new storage domain on my hyperconverged test cluster
>>> running oVirt 4.3.3.7 and Gluster 6.1. I was able to create the new gluster
>>> volume fine, but oVirt is not able to add the gluster storage domain (either
>>> as a managed gluster volume or by directly entering values). The created
>>> gluster volume mounts and looks fine from the CLI. Errors in the VDSM log:
>>>
>>> ...
>>>
>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>>> file system doesn't support direct IO (fileSD:110)
>>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>>> createStorageDomain error=Storage Domain target is unsupported: ()
>>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>
>>>
>>> The direct I/O check has failed.
>>>
>>>
>>> So something is wrong in the file system.
>>>
>>> To confirm, you can try to do:
>>>
>>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>>>
>>> This will probably fail with:
>>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>>>
>>> If it succeeds, but oVirt fails to connect to this domain, file a bug and
>>> we will investigate.
>>>
>>> Nir
>>>
>>>
>>> Yep, it fails as expected. Just to check: it does work on pre-existing
>>> volumes, so I poked around in the gluster settings for the new volume. It has
>>> network.remote-dio=off set on the new volume, but enabled on the old volumes.
>>> After enabling it, I'm able to run the dd test:
>>>
>>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>>> volume set: success
>>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>>> oflag=direct
>>> 1+0 records in
>>> 1+0 records out
>>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>>
>>> I’m also able to add the storage domain in ovirt now.
>>>
>>> I see network.remote-dio=enable is part of the gluster virt group, so
>>> apparently it's not getting set by oVirt during the volume creation /
>>> optimize-for-storage step?
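One rough way to compare what actually got applied on the new volume against
an older, working one (volume names follow the example above; adjust to yours):

gluster volume get test network.remote-dio
gluster volume get test performance.strict-o-direct
gluster volume info test            # check the "Options Reconfigured" section
gluster volume get OLD_VOLUME network.remote-dio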
>>>
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>>
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7K24XYG3M43CMMM7MMFARH52QEBXIU5/
>>>
>>
>>
>> --
>>
>>
>> Thanks,
>> Gobinda
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6XLDDMCQUQQ3AKN7RMTIPVE47DUVRR4O/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Sahina Bose
On Fri, 17 May 2019 at 2:13 AM, Strahil Nikolov 
wrote:

>
> >This may be another issue. This command works only for storage with a
> >512-byte sector size.
>
> >Hyperconverged systems may use VDO, and it must be configured in
> >compatibility mode to support a 512-byte sector size.
>
> >I'm not sure how this is configured but Sahina should know.
>
> >Nir
>
> I do use VDO.
>

There’s a 512-byte emulation property that needs to be set to 'on' for the
VDO volume.
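
A sketch of checking and enabling it (names are illustrative and the exact
flags may vary by vdo version, so treat them as assumptions and check vdo(8)):

vdo status --name=vdo_data | grep -i 512     # look for the 512 byte emulation field
# emulation mode can only be chosen when the VDO volume is created, e.g.:
vdo create --name=vdo_data --device=/dev/sdb --emulate512=enabled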


>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A77B5ME42N3JMLDLSVORL3REDXQXUJ4J/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-18 Thread Nir Soffer
On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

> From the RHHI side, we set the below volume options by default:
>
> { group: 'virt',
>  storage.owner-uid: '36',
>  storage.owner-gid: '36',
>  network.ping-timeout: '30',
>  performance.strict-o-direct: 'on',
>  network.remote-dio: 'off'
>

According to the user reports, this configuration is not compatible with
oVirt.

Was this tested?

   }
>
>
> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
> wrote:
>
>> Ok, setting 'gluster volume set data_fast4 network.remote-dio on'
>> allowed me to create the storage domain without any issues.
>> I set it on all 4 new gluster volumes and the storage domains were
>> successfully created.
>>
>> I have created a bug for that:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>>
>> If someone else has already opened one, please ping me so I can mark this
>> one as a duplicate.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>> On Thursday, 16 May 2019 at 22:27:01 GMT+3, Darrell Budic <
>> bu...@onholyground.com> wrote:
>>
>>
>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>
>>
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>> wrote:
>>
>> I tried adding a new storage domain on my hyperconverged test cluster
>> running oVirt 4.3.3.7 and Gluster 6.1. I was able to create the new gluster
>> volume fine, but oVirt is not able to add the gluster storage domain (either
>> as a managed gluster volume or by directly entering values). The created
>> gluster volume mounts and looks fine from the CLI. Errors in the VDSM log:
>>
>> ...
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>> file system doesn't support direct IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>> createStorageDomain error=Storage Domain target is unsupported: ()
>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>>
>> The direct I/O check has failed.
>>
>>
>> So something is wrong in the file system.
>>
>> To confirm, you can try to do:
>>
>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>>
>> This will probably fail with:
>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>>
>> If it succeeds, but oVirt fails to connect to this domain, file a bug and
>> we will investigate.
>>
>> Nir
>>
>>
>> Yep, it fails as expected. Just to check: it does work on pre-existing
>> volumes, so I poked around in the gluster settings for the new volume. It has
>> network.remote-dio=off set on the new volume, but enabled on the old volumes.
>> After enabling it, I’m able to run the dd test:
>>
>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>> volume set: success
>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>> oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>
>> I’m also able to add the storage domain in ovirt now.
>>
>> I see network.remote-dio=enable is part of the gluster virt group, so
>> apparently it's not getting set by oVirt during the volume creation /
>> optimize-for-storage step?
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>>
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7K24XYG3M43CMMM7MMFARH52QEBXIU5/
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/72IJEAJ7RN42H4GDG7DC4JGCRACIGOOV/


[ovirt-users] Re: Wrong disk size in UI after expanding iscsi direct LUN

2019-05-18 Thread Nir Soffer
On Thu, May 16, 2019 at 6:10 PM Bernhard Dick  wrote:

> Hi,
>
> I've extended the size of one of my direct iSCSI LUNs. The VM sees the new
> size, but the web interface still reports the old size. Is there a way to
> update this information? I already looked through the list, but found only
> reports about updating the size the VM sees.
>

Sounds like you hit this bug:
https://bugzilla.redhat.com/1651939


The description mentions a workaround using the REST API.

Nir
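
Roughly, the workaround is a refresh of the LUN through the engine API;
something along these lines, where the host id, disk id, engine URL, and the
exact action name are assumptions on my part, so please follow the steps
documented in the bug:

curl -k -u admin@internal:PASSWORD \
  -H 'Content-Type: application/xml' \
  -d '<action><host id="HOST_UUID"/></action>' \
  'https://engine.example.com/ovirt-engine/api/disks/DISK_UUID/refreshlun'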


>
>Best regards
>  Bernhard
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/54YHISUA66227IAMI2UVPZRIXV54BAKA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTXNENWK47HH3BQ4ZV4GZKOA7XYHMX6D/


[ovirt-users] Re: Gluster VM image Resync Time

2019-05-18 Thread Indivar Nair
Thanks, Strahil,

Sorry, got busy with some other work.
But, better late than never.

Regards,


Indivar Nair

On Wed, Mar 27, 2019 at 4:34 PM Strahil  wrote:

> By default oVirt uses 'sharding', which splits the files into logical
> chunks. This greatly reduces healing time, since a VM's disk is not always
> completely overwritten and only the shards that differ will be
> healed.
>
> Maybe you should change the default shard size (see the sketch below).
>
> Best Regards,
> Strahil Nikolov
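>
> The relevant options can be inspected and tuned roughly like this (volume
> name is illustrative; a changed shard size only affects newly created files):
>
> gluster volume get VOLNAME features.shard
> gluster volume get VOLNAME features.shard-block-size
> gluster volume set VOLNAME features.shard-block-size 128MB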
> On Mar 27, 2019 08:24, Indivar Nair  wrote:
>
> Hi All,
>
> We are planning a 2 + 1 arbitrated mirrored Gluster setup.
> We would have around 50 - 60 VMs, with an average 500GB disk size.
>
> Now, in case one of the Gluster nodes goes completely out of sync, roughly
> how long would it take to resync (in your experience)?
> Will it impact the working of the VMs in any way?
> Is there anything that should be taken care of in advance to prepare for
> such a situation?
>
> Regards,
>
>
> Indivar Nair
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E6DX33SHVMEBBRMUBI2YDHTIX4ANZ4WK/


[ovirt-users] Re: Gluster VM image Resync Time

2019-05-18 Thread Indivar Nair
Thanks, Krutika,

Sorry, got busy with some other work.
But, better late than never.

Regards,


Indivar Nair

On Thu, Mar 28, 2019 at 12:26 PM Krutika Dhananjay 
wrote:

> Right. So Gluster stores what are called "indices" for each modified file
> (or shard)
> under a special hidden directory of the "good" bricks at
> $BRICK_PATH/.glusterfs/indices/xattrop.
> When the offline brick comes back up, the file corresponding to each index
> is healed, and then the index is deleted
> to mark the fact that the file has been healed.
>
> You can try this and see it for yourself. Just create a 1x3 plain
> replicate volume, and enable shard on it.
> Create a big file (big enough to have multiple shards). Check that the
> shards are created under $BRICK_PATH/.shard.
> Now kill a brick. Modify a small portion of the file. Hit `ls` on
> $BRICK_PATH/.glusterfs/indices/xattrop of the online bricks.
> You'll notice there will be entries named after the gfid (unique
> identifier in gluster for each file) of the shards.
> And only for those shards that the write modified, and not ALL shards of
> this really big file.
> And then when you bring the brick back up using `gluster volume start $VOL
> force`, the
> shards get healed and the directory eventually becomes empty.
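>
> A rough transcript of that experiment, with illustrative host names, paths,
> and sizes:
>
> # 1x3 replica with sharding enabled
> gluster volume create testvol replica 3 host{1,2,3}:/bricks/testvol force
> gluster volume set testvol features.shard enable
> gluster volume start testvol
> mkdir -p /mnt/testvol && mount -t glusterfs host1:/testvol /mnt/testvol
>
> # create a file big enough to span several shards, then check a brick
> dd if=/dev/urandom of=/mnt/testvol/big.img bs=1M count=512
> ls /bricks/testvol/.shard                      # run on a brick host
>
> # kill one brick, modify a small part of the file, then look at the indices
> # (the brick PID comes from 'gluster volume status testvol')
> kill BRICK_PID
> dd if=/dev/zero of=/mnt/testvol/big.img bs=1M count=1 seek=10 conv=notrunc
> ls /bricks/testvol/.glusterfs/indices/xattrop  # on the online bricks
>
> # bring the brick back and watch the directory drain as the shards heal
> gluster volume start testvol force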
>
> -Krutika
>
>
> On Thu, Mar 28, 2019 at 12:14 PM Indivar Nair 
> wrote:
>
>> Hi Krutika,
>>
>> So how does the Gluster node know which shards were modified after it
>> went down?
>> Do the other Gluster nodes keep track of it?
>>
>> Regards,
>>
>>
>> Indivar Nair
>>
>>
>> On Thu, Mar 28, 2019 at 9:45 AM Krutika Dhananjay 
>> wrote:
>>
>>> Each shard is a separate file whose size equals the value of
>>> "features.shard-block-size".
>>> So when a brick/node is down, only those shards belonging to the VM
>>> that were modified will be synced later when the brick comes back up.
>>> Does that answer your question?
>>>
>>> -Krutika
>>>
>>> On Wed, Mar 27, 2019 at 7:48 PM Sahina Bose  wrote:
>>>
 On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair 
 wrote:
 >
 > Hi Strahil,
 >
 > Ok. Looks like sharding should make the resyncs faster.
 >
 > I searched for more info on it, but couldn't find much.
 > I believe it will still have to compare each shard to determine
 whether there are any changes that need to be replicated.
 > Am I right?

 +Krutika Dhananjay
 >
 > Regards,
 >
 > Indivar Nair
 >
 >
 >
 > On Wed, Mar 27, 2019 at 4:34 PM Strahil 
 wrote:
 >>
 >> By default oVirt uses 'sharding', which splits the files into logical
 chunks. This greatly reduces healing time, since a VM's disk is not always
 completely overwritten and only the shards that differ will be
 healed.
 >>
 >> Maybe you should change the default shard size.
 >>
 >> Best Regards,
 >> Strahil Nikolov
 >>
 >> On Mar 27, 2019 08:24, Indivar Nair 
 wrote:
 >>
 >> Hi All,
 >>
 >> We are planning a 2 + 1 arbitrated mirrored Gluster setup.
 >> We would have around 50 - 60 VMs, with an average 500GB disk size.
 >>
 >> Now, in case one of the Gluster nodes goes completely out of sync,
 roughly how long would it take to resync (in your experience)?
 >> Will it impact the working of the VMs in any way?
 >> Is there anything that should be taken care of in advance to prepare for
 such a situation?
 >>
 >> Regards,
 >>
 >>
 >> Indivar Nair
 >>
 > ___
 > Users mailing list -- users@ovirt.org
 > To unsubscribe send an email to users-le...@ovirt.org
 > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 > oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 > List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZW5RRVHFRMAIBUZDUSTXTIF4Z4WW5Y5/

>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HU6ACV2QJVNGMWJHAVFJ63Y67GB3NPQU/