[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 10:12 PM Darrell Budic 
wrote:

> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
>> I tried adding a new storage domain on my hyper converged test cluster
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>> volume fine, but it’s not able to add the gluster storage domain (as either
>> a managed gluster volume or directly entering values). The created gluster
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
>> ...
>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>> createStorageDomain error=Storage Domain target is unsupported: ()
>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the file system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>
> If it succeeds, but oVirt fails to connect to this domain, file a bug and
> we will investigate.
>
> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing
> volumes, so I poked around at gluster settings for the new volume. It has
> network.remote-dio=off set on the new volume, but enabled on old volumes.
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
> oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>
> I’m also able to add the storage domain in ovirt now.
>
> I see network.remote-dio=enable is part of the gluster virt group, so
> apparently it’s not getting set by ovirt during the volume creation/optimize
> for storage?
>

I'm not sure who is responsible for changing these settings. oVirt always
required directio, and we
never had to change anything in gluster.

Sahina, maybe gluster changed the defaults?

Darrell, please file a bug, probably for RHHI.

Nir
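
For reference, a minimal way to inspect the two gluster options involved here
(the volume name is illustrative, not taken from this thread):

# show the effective value of a single option on a volume
gluster volume get data_fast network.remote-dio
gluster volume get data_fast performance.strict-o-direct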


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me 
to create the storage domain without any issues. I set it on all 4 new gluster 
volumes and the storage domains were successfully created.
I have created a bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one, please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov
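
A minimal sketch of applying the same setting across several volumes in one go
(volume names are illustrative):

for vol in data_fast data_fast2 data_fast3 data_fast4; do
    # enable remote direct I/O on each new volume
    gluster volume set "$vol" network.remote-dio on
done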

On Thursday, May 16, 2019, 22:27:01 GMT+3, Darrell Budic wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so 
apparently it’s not getting set by ovirt during the volume creation/optimize for 
storage?




[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
https://bugzilla.redhat.com/show_bug.cgi?id=1711054



> On May 16, 2019, at 2:17 PM, Nir Soffer  wrote:
> 
> On Thu, May 16, 2019 at 10:12 PM Darrell Budic  wrote:
> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>> 
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>> 
>> ... 
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>> 
>> The direct I/O check has failed.
>> 
>> 
>> So something is wrong in the file system.
>> 
>> To confirm, you can try to do:
>> 
>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>> 
>> This will probably fail with:
>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>> 
>> If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
>> will investigate.
>> 
>> Nir
> 
> Yep, it fails as expected. Just to check, it is working on pre-existing 
> volumes, so I poked around at gluster settings for the new volume. It has 
> network.remote-dio=off set on the new volume, but enabled on old volumes. 
> After enabling it, I’m able to run the dd test:
> 
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
> 
> I’m also able to add the storage domain in ovirt now.
> 
> I see network.remote-dio=enable is part of the gluster virt group, so 
> apparently it’s not getting set by ovirt during the volume creation/optimize 
> for storage?
> 
> I'm not sure who is responsible for changing these settings. oVirt always 
> required directio, and we
> never had to change anything in gluster.
> 
> Sahina, maybe gluster changed the defaults?
> 
> Darrell, please file a bug, probably for RHHI.
> 
> Nir



[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov

>This may be another issue. This command works only for storage with 512 bytes 
>sector size.
>Hyperconverged systems may use VDO, and it must be configured in compatibility 
>mode to support 512 bytes sector size.
>I'm not sure how this is configured but Sahina should know.
>Nir
I do use VDO.


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 10:02 PM Strahil  wrote:

> This is my previous e-mail:
>
> On May 16, 2019 15:23, Strahil Nikolov  wrote:
>
> It seems that the issue is within the 'dd' command as it stays waiting for
> input:
>
> [root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync  ^C0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
>
> Changing the dd command works and shows that the gluster is working:
>
> [root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync  0+1 records in
> 0+1 records out
> 131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
>
> Best Regards,
>
> Strahil Nikolov
>
> - Forwarded message -
>
> *From:* Strahil Nikolov 
>
> *To:* Users 
>
> *Sent:* Thursday, May 16, 2019, 5:56:44 AM GMT-4
>
> *Subject:* ovirt 4.3.3.7 cannot create a gluster storage domain
>
> Hey guys,
>
> I have recently updated (yesterday) my platform to the latest available (v
> 4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3 node
> cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX
> is for gluster communication) while ovirt3 is the arbiter.
>
> Today I have tried to add new storage domains but they fail with the
> following:
>
> 2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH
> createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock',
> u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
> 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1',
> 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]'
> err="/usr/bin/dd: error writing
> '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
> Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a,
> task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
>
>
This may be another issue. This command works only for storage with 512
bytes sector size.

Hyperconverged systems may use VDO, and it must be configured in
compatibility mode to support 512 bytes sector size.

I'm not sure how this is configured but Sahina should know.

Nir
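
A hedged sketch of how the 512-byte emulation could be checked and enabled on a
VDO-backed brick (device and volume names are illustrative assumptions, not
taken from this setup):

# logical sector size the VDO volume exposes to the filesystem; the xleases
# write above assumes 512-byte granularity, so a value of 4096 here would
# explain the EINVAL
blockdev --getss /dev/mapper/vdo_data

# 512-byte emulation is selected when the VDO volume is created
vdo create --name=vdo_data --device=/dev/sdb --emulate512=enabled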

> 2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in createStorageDomain
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614,
> in createStorageDomain
> storageType, domVersion, block_size, alignment)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106,
> in create
> block_size)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466,
> in _prepareMetadata
> cls.format_external_leases(sdUUID, xleases_path)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255,
> in format_external_leases
> xlease.format_index(lockspace, backend)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 681, in format_index
> index.dump(file)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 843, in dump
> file.pwrite(INDEX_BASE, self._buf)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 1076, in pwr
>
>
> It seems that the 'dd' is having trouble checking the new gluster volume.
> The output is from the RC1, but as you see Darrell's situation is maybe
> the same.
> On May 16, 2019 21:41, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new
> gluster volume fine, but it’s not able to add the gluster storage domain
> (as either a managed gluster volume or directly entering values). The
> created gluster volume mounts and looks fine from the CLI. Errors in VDSM
> log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
> 
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
> I tried adding a new storage domain on my hyper converged test cluster 
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
> volume fine, but it’s not able to add the gluster storage domain (as either a 
> managed gluster volume or directly entering values). The created gluster 
> volume mounts and looks fine from the CLI. Errors in VDSM log:
> 
> ... 
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> createStorageDomain error=Storage Domain target is unsupported: () 
> from=:::10.100.90.5,44732, flow_id=31d993dd, 
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
> 
> The direct I/O check has failed.
> 
> 
> So something is wrong in the file system.
> 
> To confirm, you can try to do:
> 
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
> 
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
> 
> If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
> will investigate.
> 
> Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:

[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I’m also able to add the storage domain in ovirt now.

I see network.remote-dio=enable is part of the gluster virt group, so 
apparently it’s not getting set by ovirt during the volume creation/optimize for 
storage?





[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
 In my case the dio is off, but I can still do direct io:
[root@ovirt1 glusterfs]# cd 
/rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/
[root@ovirt1 gluster1:_data__fast]# gluster volume info data_fast | grep dio
network.remote-dio: off
[root@ovirt1 gluster1:_data__fast]# dd if=/dev/zero of=testfile bs=4096 count=1 
oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s


Most probably the 2 cases are different.
Best Regards,
Strahil Nikolov
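
One way to tell the two cases apart (a sketch; the device path under
/dev/mapper is an illustrative assumption) is to compare the sector sizes the
storage stack exposes on each host, since the xleases write uses O_DIRECT with
512-byte granularity:

# logical and physical sector size of the brick's backing device
blockdev --getss --getpbsz /dev/mapper/vdo_data_fast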


On Thursday, May 16, 2019, 22:17:23 GMT+3, Nir Soffer wrote:
 
 On Thu, May 16, 2019 at 10:12 PM Darrell Budic  wrote:

On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we 
will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so 
apparently it’s not getting set by ovirt during the volume creation/optimize for 
storage?

I'm not sure who is responsible for changing these settings. oVirt always 
required directio, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
Nir


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Gobinda Das
From the RHHI side, by default we are setting the volume options below:

{ group: 'virt',
 storage.owner-uid: '36',
 storage.owner-gid: '36',
 network.ping-timeout: '30',
 performance.strict-o-direct: 'on',
 network.remote-dio: 'off'
   }
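
For reference, a sketch of how those defaults translate into gluster CLI calls
when applied by hand (the volume name is illustrative):

gluster volume set data_fast group virt
gluster volume set data_fast storage.owner-uid 36
gluster volume set data_fast storage.owner-gid 36
gluster volume set data_fast network.ping-timeout 30
gluster volume set data_fast performance.strict-o-direct on
gluster volume set data_fast network.remote-dio off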


On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov 
wrote:

> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed
> me to create the storage domain without any issues.
> I set it on all 4 new gluster volumes and the storage domains were
> successfully created.
>
> I have created bug for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>
> If someone else already opened - please ping me to mark this one as
> duplicate.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thursday, May 16, 2019, 22:27:01 GMT+3, Darrell Budic <
> bu...@onholyground.com> wrote:
>
>
> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
> wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
> volume fine, but it’s not able to add the gluster storage domain (as either
> a managed gluster volume or directly entering values). The created gluster
> volume mounts and looks fine from the CLI. Errors in VDSM log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the file system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>
> If it succeeds, but oVirt fails to connect to this domain, file a bug and
> we will investigate.
>
> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing
> volumes, so I poked around at gluster settings for the new volume. It has
> network.remote-dio=off set on the new volume, but enabled on old volumes.
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
> oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>
> I’m also able to add the storage domain in ovirt now.
>
> I see network.remote-dio=enable is part of the gluster virt group, so
> apparently it’s not getting set by ovirt during the volume creation/optimize
> for storage?
>
>
>


-- 


Thanks,
Gobinda


[ovirt-users] RHEL 8 Template Seal failed

2019-05-16 Thread Vinícius Ferrão
Hello,

I’m trying to seal a RHEL8 template but the operation is failing.

Here’s the relevant information from engine.log:

2019-05-17 01:30:31,153-03 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-58) 
[91e1acd6-efc5-411b-8c76-970def4ebbbe] FINISH, GetHostJobsVDSCommand, return: 
{b80c0bbd-25b8-4007-9b91-376cb0a18e30=HostJobInfo:{id='b80c0bbd-25b8-4007-9b91-376cb0a18e30',
 type='virt', description='seal_vm', status='failed', progress='null', 
error='VDSError:{code='GeneralException', message='General Exception: ('Command 
[\'/usr/bin/virt-sysprep\', \'-a\', 
u\'/rhev/data-center/mnt/192.168.10.6:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/images/1ecdfbfc-1c22-452f-9a53-2159701549c8/f9de3eae-f475-451b-b587-f6a1405036e8\']
 failed with rc=1 out=\'[   0.0] Examining the guest ...\\nvirt-sysprep: 
warning: mount_options: mount exited with status 32: mount: \\nwrong fs type, 
bad option, bad superblock on /dev/mapper/rhel_rhel8-root,\\n   missing 
codepage or helper program, or other error\\n\\n   In some cases useful 
info is found in syslog - try\\n   dmesg | tail or so. 
(ignored)\\nvirt-sysprep: warning: mount_options: mount: /boot: mount point is 
not a \\ndirectory (ignored)\\nvirt-sysprep: warning: mount_options: mount: 
/boot/efi: mount point is not \\na directory (ignored)\\n[  17.9] Performing 
"abrt-data" ...\\n\' err="virt-sysprep: error: libguestfs error: glob_expand: 
glob_expand_stub: you \\nmust call \'mount\' first to mount the root 
filesystem\\n\\nIf reporting bugs, run virt-sysprep with debugging enabled and 
include the \\ncomplete output:\\n\\n  virt-sysprep -v -x [...]\\n"',)'}'}}, 
log id: 1bbb34bf

I’m not sure what’s wrong or missing. The VM image is using UEFI with Secure 
Boot, so the standard UEFI partition is in place.

I've found something on bugzilla but it does not seem to be related:
https://bugzilla.redhat.com/show_bug.cgi?id=1671895
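
A possible way to narrow this down (paths are illustrative; this is a sketch,
not a confirmed fix) is to run the same libguestfs tooling by hand against a
copy of the disk image:

# list the filesystems/mount points libguestfs detects in the image
virt-filesystems --long --all -a /path/to/template-disk.img

# rerun the seal step with verbose/trace output, as the error message suggests
virt-sysprep -v -x -a /path/to/template-disk.img 2>&1 | tee virt-sysprep-debug.log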

Thanks,


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
wrote:

> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
> volume fine, but it’s not able to add the gluster storage domain (as either
> a managed gluster volume or directly entering values). The created gluster
> volume mounts and looks fine from the CLI. Errors in VDSM log:
>
> ...

> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=:::10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>

The direct I/O check has failed.

This is the code doing the check:

 98 def validateFileSystemFeatures(sdUUID, mountDir):
 99     try:
100         # Don't unlink this file, we don't have the cluster lock yet as it
101         # requires direct IO which is what we are trying to test for. This
102         # means that unlinking the file might cause a race. Since we don't
103         # care what the content of the file is, just that we managed to
104         # open it O_DIRECT.
105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
106         oop.getProcessPool(sdUUID).directTouch(testFilePath)
107     except OSError as e:
108         if e.errno == errno.EINVAL:
109             log = logging.getLogger("storage.fileSD")
110             log.error("Underlying file system doesn't support"
111                       "direct IO")
112             raise se.StorageDomainTargetUnsupported()
113
114         raise

The actual check is done in ioprocess, using:

319     fd = open(path->str, allFlags, mode);
320     if (fd == -1) {
321         rv = fd;
322         goto clean;
323     }
324
325     rv = futimens(fd, NULL);
326     if (rv < 0) {
327         goto clean;
328     }
With:

allFlags = O_WRONLY | O_CREAT | O_DIRECT

See:
https://github.com/oVirt/ioprocess/blob/7508d23e19aeeb4dfc180b854a5a92690d2e2aaf/src/exported-functions.c#L291

According to the error message:
Underlying file system doesn't support direct IO

We got EINVAL, which is possible only from open(), and is likely an issue
opening the file with O_DIRECT.

So something is wrong in the file system.

To confirm, you can try to do:

dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct

This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument

If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.
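
To see where the EINVAL actually comes from, one option (a sketch; the mount
point path is illustrative) is to trace the open calls made by the same test:

# show open()/openat() results for the O_DIRECT test file
strace -f -e trace=open,openat \
    dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct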

Nir


>
> On May 16, 2019, at 11:55 AM, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 7:42 PM Strahil  wrote:
>
>> Hi Sandro,
>>
>> Thanks for the update.
>>
>> I have just upgraded to RC1 (using gluster v6 here)  and the issue  I
>> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
>> still present.
>>
>
> What is this issue? Can you provide a link to the bug/mail about it?
>
> Can you check if the 'dd' command executed during creation has been
>> recently modified ?
>>
>> I've received update from Darrell  (also gluster v6) , but haven't
>> received an update from anyone who is using gluster v5 -> thus I haven't
>> opened a bug yet.
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.3.4 First Release Candidate, as of May 16th, 2019.
>>
>> This update is a release candidate of the fourth in a series of
>> stabilization updates to the 4.3 series.
>> This is pre-release software. This pre-release should not be used
>> in production.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> Experimental tech preview for x86_64 and s390x architectures for Fedora
>> 28 is also included.
>>
>> See the release notes [1] for installation / upgrade instructions and a
>> list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is already available[2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.4 release highlights:
>> http://www.ovirt.org/release/4.3.4/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.4/
>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>
>> --
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil
This is my previous e-mail:

On May 16, 2019 15:23, Strahil Nikolov  wrote:

It seems that the issue is within the 'dd' command as it stays waiting for 
input:


[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file oflag=direct,seek_bytes 
seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync  ^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s



Changing the dd command works and shows that the gluster is working:


[root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file 
oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 
conv=notrunc,nocreat,fsync  0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s

Best Regards,

Strahil Nikolov



- Forwarded message -

From: Strahil Nikolov 

To: Users 

Sent: Thursday, May 16, 2019, 5:56:44 AM GMT-4

Subject: ovirt 4.3.3.7 cannot create a gluster storage domain


Hey guys,


I have recently updated (yesterday) my platform to the latest available (v4.3.3.7) 
and upgraded to gluster v6.1. The setup is a hyperconverged 3 node cluster with 
ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster 
communication) while ovirt3 is the arbiter.


Today I have tried to add new storage domains but they fail with the following:


2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in 
createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in 
create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in 
_prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in 
format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in 
format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in 
dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in 
pwr

It seems that the 'dd' is having trouble checking the new gluster volume.
The output is from the RC1, but as you see Darrell's situation is maybe the 
same.

On May 16, 2019 21:41, Nir Soffer  wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:
>>
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
> ... 
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=:::10.100.90.5,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  99     try:
> 100         # Don't unlink this file, we don't have the cluster lock yet as it
> 101         # requires direct IO which is what we are trying to test for. This
> 102         # means that unlinking the file might cause a race. Since we don't
> 103         # care what the content of the file is, just that we managed to
> 104         # open it O_DIRECT.
> 105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106         

[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread Luca 'remix_tj' Lorenzetto
Hello,

if you want more help you can refer to the downstream upgrade helper by
Red Hat: https://access.redhat.com/labs/rhvupgradehelper/

Luca

On Thu, May 16, 2019 at 2:50 PM MIMMIK _  wrote:
>
> > Hello,
> >
> > as far as I know, the procedure is the usual. You can use the upgrade
> > guide from 4.1 to 4.2.
> >
> > Luca
>
> Hello,
>
> Thanks Luca, but as we are talking about a production platform, I'd like to 
> have an official doc to assure this is the right upgrade procedure.



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


[ovirt-users] Re: Host needs to be reinstalled after configuring power management

2019-05-16 Thread Michael Watters
Had the same message on our cluster.  The solution was to click edit on
each host and refetch the ssh host key.  I'm not sure why this is
necessary in the first place however.


On 5/14/19 3:15 PM, Andrew DeMaria wrote:
> Hi,
>
> I am running ovirt 4.3 and have found the following action item
> immediately after configuring power management for a host:
>
> Host needs to be reinstalled as important configuration changes were
> applied on it.
>
>
> The thing is - I've just freshly installed this host and it seems
> strange that I need to reinstall it.
>  Is there a better way to install a host and configure power
> management without having to reinstall it after?
>
>
> Thanks,
>
> Andrew
>


[ovirt-users] Re: oVirt Open Source Backup solution?

2019-05-16 Thread Derek Atkins
Jorick Astrego  writes:

> Maybe split it in 2 disks? One OS and one APP/DATA? You can then backup 
> only one. 
>  
> I prefer to do this anyway as I then can just redeploy the OS and attach 
> the second disk to get things back up and running. 

Are you suggesting that /etc and /var should go onto their own disks?
There is a lot of configuration in /etc (which is usually on the root
disk) that needs to be backed up.

Also, different apps store configuration and data in different places,
so saying "just put it on a second disk" can be hard.

Sure, it works fine for /home -- but mysql?  imapd?  ...

-derek
-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant


[ovirt-users] oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread dmarini
I cannot find an official upgrade procedure from 4.2 to 4.3 oVirt version on 
this page: https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html

Can you help me?

Thanks


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread Luca 'remix_tj' Lorenzetto
Hello,

as far as I know, the procedure is the usual. You can use the upgrade
guide from 4.1 to 4.2.

Luca

On Thu, May 16, 2019 at 2:40 PM  wrote:
>
> I cannot find an official upgrade procedure from 4.2 to 4.3 oVirt version on 
> this page: 
> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html
>
> Can you help me?
>
> Thanks



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread MIMMIK _
> Hello,
> 
> as far as I know, the procedure is the usual. You can use the upgrade
> guide from 4.1 to 4.2.
> 
> Luca

Hello,

Thanks Luca, but as we are talking about a production platform, I'd like to 
have an official doc to assure this is the right upgrade procedure.


[ovirt-users] Re: Dropped RX Packets

2019-05-16 Thread Strahil Nikolov
Hi Magnus,

Do you notice any repetition there? Does it happen completely at random?
Usually, to debug network issues you will need tcpdump from the Guest, the Host and the 
other side if possible. Is that an option?
Do you see those RX errors in the host's tab?
What is the output of "ip -s link" on the Guest?

Best Regards,
Strahil Nikolov
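
A minimal sketch of the counters worth collecting on both sides (interface
names are illustrative):

# on the guest: per-interface statistics, including RX drops
ip -s link show eth0

# on the host: NIC/driver level counters for the physical interface
ethtool -S ens1f0 | grep -iE 'drop|miss|err'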

On Thursday, May 16, 2019, 9:19:57 AM GMT-4, Magnus Isaksson wrote:
 
 Hello all!

I'm having quite some trouble with VMs that have a large amount of dropped 
packets on RX.
This, plus customers complain about short dropped connections; for example, one 
customer has a SQL server and another server connecting to it, and it is 
randomly dropping connections. Before they moved their VMs to us they did not 
have any of these issues.

Does anyone have an idea of what this can be due to? And how can I fix it? It 
is starting to be a deal breaker for our customers on whether they will stay 
with us or not.

I was thinking of reinstalling the nodes with oVirt Node, instead of the full 
CentOS, would this perhaps fix the issue?

The environment is:
Huawei x6000 with 4 nodes
Each node having an Intel X722 network card and connecting with 10G (fiber) to a 
Juniper EX 4600. Storage via FC to an IBM FS900.
Each node is running a full CentOS 7.6 connecting to an Engine 4.2.8.2

Regards
 Magnus


[ovirt-users] Fw: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
 It seems that the issue is within the 'dd' command as it stays waiting for 
input:
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file oflag=direct,seek_bytes 
seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync  ^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s

 Changing the dd command works and shows that the gluster is working:
[root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file 
oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 
conv=notrunc,nocreat,fsync  0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s

Best Regards,
Strahil Nikolov


- Forwarded message -
From: Strahil Nikolov
To: Users
Sent: Thursday, May 16, 2019, 5:56:44 AM GMT-4
Subject: ovirt 4.3.3.7 cannot create a gluster storage domain
 Hey guys,
I have recently updated (yesterday) my platform to the latest available (v4.3.3.7) 
and upgraded to gluster v6.1. The setup is a hyperconverged 3 node cluster with 
ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster 
communication) while ovirt3 is the arbiter.
Today I have tried to add new storage domains but they fail with the following:
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in 
createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in 
create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in 
_prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in 
format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in 
format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in 
dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in 
pwrite
    self._run(args, data=buf[:])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in 
_run
    raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: 
u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', 
u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\',
 \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', 
\'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' 
err="/usr/bin/dd: error writing 
\'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\':
 Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 

[ovirt-users] Dropped RX Packets

2019-05-16 Thread Magnus Isaksson
Hello all!

I'm having quite some trouble with VMs that have a large amount of dropped 
packets on RX.
This, plus customers complain about short dropped connections; for example, one 
customer has a SQL server and another server connecting to it, and it is 
randomly dropping connections. Before they moved their VMs to us they did not 
have any of these issues.

Does anyone have an idea of what this can be due to? And how can I fix it? It 
is starting to be a deal breaker for our customers on whether they will stay 
with us or not.

I was thinking of reinstalling the nodes with oVirt Node, instead of the full 
CentOS, would this perhaps fix the issue?

The environment is:
Huawei x6000 with 4 nodes
Each node having an Intel X722 network card and connecting with 10G (fiber) to a 
Juniper EX 4600. Storage via FC to an IBM FS900.
Each node is running a full CentOS 7.6 connecting to an Engine 4.2.8.2

Regards
 Magnus


[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Andreas Elvers
Why did you move to gluster v6? For the kicks? :-) The devs are currently 
evaluating for themselves whether they can switch to v6 for the upcoming 
releases.


[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
Due to the issue with dom_md/ids not getting in sync and always pending heal 
on ovirt2/gluster2 & ovirt3.

Best Regards,
Strahil Nikolov

On Thursday, May 16, 2019, 6:08:44 AM GMT-4, Andreas Elvers wrote:
 
Why did you move to gluster v6? For the kicks? :-) The devs are currently 
evaluating for themselves whether they can switch to v6 for the upcoming 
releases.


[ovirt-users] ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
Hey guys,
I have recently updated (yesterday) my platform to the latest available (v4.3.3.7) 
and upgraded to gluster v6.1. The setup is a hyperconverged 3 node cluster with 
ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster 
communication) while ovirt3 is the arbiter.
Today I have tried to add new storage domains but they fail with the following:
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in 
createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in 
create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in 
_prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in 
format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in 
format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in 
dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in 
pwrite
    self._run(args, data=buf[:])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in 
_run
    raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: 
u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', 
u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\',
 \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', 
\'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' 
err="/usr/bin/dd: error writing 
\'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\':
 Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, 
in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in 
wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in 
prepare
    raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', 

[ovirt-users] Re: Cluster Un-stable since power outage

2019-05-16 Thread Alan G
So, after a week of bashing my head against a wall we finally tracked it down. 

One of the developers was using the hosts for extra processing power and in the 
process he periodically brought up an sshfs mount. When this appeared in 
/etc/mnttab it broke vdsm and caused the error below, so it was unrelated to the 
power outage.

This is in 4.2 - if I can re-create it in 4.3 I'll file a bug; I think the parser 
really should be a bit more robust than this.
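For anyone wanting to spot the same situation quickly, something like this should show any stray fuse/sshfs entries that the vdsm mount parser may trip over (purely illustrative):

grep -i 'fuse.sshfs' /proc/mounts /etc/mtab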


 On Tue, 07 May 2019 15:47:45 +0100 Darrell Budic 
 wrote 


Was your hyper converged and is this storage gluster based?

Your error is DNS related, if a bit odd. Have you checked the resolv.conf 
configs and confirmed the servers listed there are reachable and responsive? 
When your hosts are active, are they able to mount all the storage domains they 
need? You should also make sure each HA node can reliably ping your gateway IP; 
failures there will cause nodes to bounce.
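A rough sketch of those checks (the addresses and the name are placeholders, not taken from this setup):

cat /etc/resolv.conf                 # which resolvers are configured
ping -c 3 192.168.1.1                # gateway reachability from the HA node
dig +short engine.example.com        # name resolution through the configured resolvers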



A starting place rather than a solution, but the first places to look. Good luck!



  -Darrell





On May 7, 2019, at 5:14 AM, Alan G  wrote:


Hi,



We have a dev cluster running 4.2. It had to be powered down as the building 
was going to lose power. Since we've brought it back up it has been massively 
unstable (hosts constantly switching state, VMs migrating all the time).



I now have one host running (with HE) and all others in maintenance mode. When 
I try to activate another host I see storage errors in vdsm.log:



2019-05-07 09:41:00,114+ ERROR (monitor/a98c0b4) [storage.Monitor] Error 
checking domain a98c0b42-47b9-4632-8b54-0ff3bd80d4c2 (monitor:424)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 416, in 
_checkDomainStatus

    masterStats = self.domain.validateMaster()

  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 941, in 
validateMaster

    if not self.validateMasterMount():

  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1377, 
in validateMasterMount

    return mount.isMounted(self.getMasterDir())

  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 161, in 
isMounted

    getMountFromTarget(target)

  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 173, in 
getMountFromTarget

    for rec in _iterMountRecords():

  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 143, in 
_iterMountRecords

    for rec in _iterKnownMounts():

  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 139, in 
_iterKnownMounts

    yield _parseFstabLine(line)

  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 81, in 
_parseFstabLine

    fs_spec = fileUtils.normalize_path(_unescape_spaces(fs_spec))

  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileUtils.py", line 94, 
in normalize_path

    host, tail = address.hosttail_split(path)

  File "/usr/lib/python2.7/site-packages/vdsm/common/network/address.py", line 
43, in hosttail_split

    raise HosttailError('%s is not a valid hosttail address:' % hosttail)

HosttailError: :/ is not a valid hosttail address:



Not sure if it's related but since the restart the hosted_storage domain has 
been elected the master domain.



I'm a bit stuck at the moment. My only idea is to remove HE and switch to a 
standalone Engine VM running outside the cluster.



Thanks,



Alan




___
Users mailing list -- mailto:users@ovirt.org
To unsubscribe send an email to mailto:users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDINZK5BQQHXYENSVV3OYFMVLG2YXBNT/






___
Users mailing list -- mailto:users@ovirt.org
To unsubscribe send an email to mailto:users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I6YJQFP43R5NTQN3HG2VWBJW2WFFBGNB/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QQSEPVUUAQCY3X4X2B2T3AKDFAR7KJ2F/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-16 Thread Fred Rolland
Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy

On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero 
wrote:

> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
> re-attach them to clear any possible issues and try out the suggestions
> provided.
>
> thank you!
>
> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
> wrote:
>
>> I have the same locks, despite having blacklisted all local disks:
>>
>> # VDSM PRIVATE
>> blacklist {
>> devnode "*"
>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>> wwid
>> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
>> }
>>
>> If you have multipath reconfigured, do not forget to rebuild the
>> initramfs (dracut -f). It's a Linux issue, not an oVirt one.
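>> A sketch of that sequence, assuming the blacklist above is already in
>> /etc/multipath.conf (shown only as an illustration):
>>
>> # keep the "# VDSM PRIVATE" marker so vdsm-tool does not overwrite the file
>> systemctl restart multipathd
>> dracut -f    # rebuild the initramfs so the blacklist also applies at early boot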
>>
>> In your case you had something like this:
>>/dev/VG/LV
>>   /dev/disk/by-id/pvuuid
>>  /dev/mapper/multipath-uuid
>> /dev/sdb
>>
>> Linux will not allow you to work with /dev/sdb, when multipath is
>> locking the block device.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Thursday, April 25, 2019, at 8:30:16 AM GMT-4, Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>
>> Under Compute, Hosts, select the host that has the locks on /dev/sdb,
>> /dev/sdc, etc., select Storage Devices, and here you see a
>> small column with a lock icon showing for each row.
>>
>>
>> However, as a workaround, on the newly added hosts (3 total), I had to
>> manually modify /etc/multipath.conf and add the following at the end, as
>> this is what I noticed on the original 3-node setup.
>>
>> -
>> # VDSM REVISION 1.3
>> # VDSM PRIVATE
>> # BEGIN Added by gluster_hci role
>>
>> blacklist {
>> devnode "*"
>> }
>> # END Added by gluster_hci role
>> --
>> After this I restarted multipath, the lock went away and I was able to
>> configure the new bricks through the UI. However, my concern is what will
>> happen if I reboot the server: will the disks be read the same way by the OS?
>>
>> I'm also now able to expand the gluster setup with a new replica 3 volume if
>> needed, using http://host4.mydomain.com:9090.
>>
>>
>> thanks again
>>
>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
>> wrote:
>>
>> In which menu do you see it this way ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Wednesday, April 24, 2019, at 8:55:22 AM GMT-4, Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>
>> Strahil,
>> this is the issue I am seeing now
>>
>> [image: image.png]
>>
>> This is through the UI when I try to create a new brick.
>>
>> So my concern is: if I modify the filters on the OS, what impact will that
>> have after the server reboots?
>>
>> thanks,
>>
>>
>>
>> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>>
>> I have edited my multipath.conf to exclude local disks, but you need to
>> set '#VDSM private' as per the comments in the header of the file.
>> Otherwise, use the /dev/mapper/multipath-device notation - as you would
>> do with any Linux.
>>
>> Best Regards,
>> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>> >
>> > Thanks Alex, that makes more sense now. While trying to follow the
>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>> are locked and indicating "multipath_member", hence not letting me create
>> new bricks. And in the logs I see
>> >
>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
>> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume
>> '/dev/sdb' failed", "rc": 5}
>> > Same thing for sdc, sdd
>> >
>> > Should I manually edit the filters inside the OS, and what will be the
>> impact?
>> >
>> > thanks again.
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>>
>>
>>
>> --
>> Adrian Quintero
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>>
>>
>>
>> --
>> Adrian Quintero
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> 

[ovirt-users] Re: Old mailing list SPAM

2019-05-16 Thread Karli Sjöberg
On 2019-05-15 07:46, Markus Stockhausen wrote:
> Hi,
>
> does anyone currently get old mails of 2016 from the mailing list?

Yep, me too!

I thought it was just my email client or server acting up. Comforting to
know I'm not the only one getting them, at least :)

/K

> We are spammed with something like this from teknikservice.nu:
>
> ...
> Received: from mail.ovirt.org (localhost [IPv6:::1])by mail.ovirt.org
>  (Postfix) with ESMTP id A33EA46AD3;Tue, 14 May 2019 14:48:48 -0400 (EDT)
>
> Received: by mail.ovirt.org (Postfix, from userid 995)id D283A407D0;
> Tue, 14
>  May 2019 14:42:29 -0400 (EDT)
>
> Received: from bauhaus.teknikservice.nu (smtp.teknikservice.nu
> [81.216.61.60])
> (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))(No
>  client certificate requested)by mail.ovirt.org (Postfix) with ESMTPS id
>  BF954467FEfor ; Tue, 14 May 2019 14:36:54 -0400 (EDT)
>
> Received: by bauhaus.teknikservice.nu (Postfix, from userid 0)id
> 259822F504;
>  Tue, 14 May 2019 20:32:33 +0200 (CEST) <- 3 YEAR TIME WARP ?
>
> Received: from washer.actnet.nu (washer.actnet.nu [212.214.67.187])(using
>  TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))(No client
>  certificate requested)by bauhaus.teknikservice.nu (Postfix) with
> ESMTPS id
>  430FEDA541for ; Thu,  6 Oct 2016 18:02:51 +0200
> (CEST)
>
> Received: from lists.ovirt.org (lists.ovirt.org [173.255.252.138])(using
>  TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))(No client
>  certificate requested)by washer.actnet.nu (Postfix) with ESMTPS id
>  D75A82293FCfor ; Thu,  6 Oct 2016 18:04:11 +0200
>  (CEST)
> ...
>
> Markus
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XI3LV4GPACT7ILZ3BNJLHHQBEWI3HWLI/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XN5LQ4JP5L2FIOGU4J72NZE2XDHBMJL4/


[ovirt-users] Re: VM Windows on 4.2

2019-05-16 Thread Gal Zaidman
Have you followed:
https://ovirt.org/documentation/vmm-guide/chap-Installing_Windows_Virtual_Machines.html
 ?
you need to install the drivers

On Wed, May 15, 2019 at 7:01 PM  wrote:

> > You need to install Windows VirtIO drivers during the installation.
> > This link should help:
> https://pve.proxmox.com/wiki/Windows_10_guest_best_practices
> >
> > On May 15 2019, at 8:36 am, gpesoli(a)it.iliad.com wrote:
>
> Thanks for answer.
>
> I just installed VirtIO drivers on my engine:
>
> # ll /usr/share/virtio-win/*.iso
> -rw-r--r--. 1 root root 370821120 Mar 11 23:00
> /usr/share/virtio-win/virtio-win-0.1.164.iso
> lrwxrwxrwx. 1 root root22 Apr  3 10:50
> /usr/share/virtio-win/virtio-win.iso -> virtio-win-0.1.164.iso
>
> ...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2RZTKY42VOXQVU4G4NA5PC2PYGQUI7O/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NV5BHJ4GOJEFY7JDCZA6LFRVRNH77Y37/


[ovirt-users] Re: VM Windows on 4.2

2019-05-16 Thread fraegia
> Have you followed:
> https://ovirt.org/documentation/vmm-guide/chap-Installing_Windows_Virtual...
>  ?
> you need to install the drivers
> 
> On Wed, May 15, 2019 at 7:01 PM ...
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z72K5SEATXQ2EMAZRJ4UQ67XT4K2PX6P/


[ovirt-users] [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.4 First Release Candidate, as of May 16th, 2019.

This update is a release candidate of the fourth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used
in production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

Additional Resources:
* Read more about the oVirt 4.3.4 release highlights:
http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MYLTY4WWPBSEIJLXELJHIX5HFFXTNJ35/


[ovirt-users] Re: oVirt Open Source Backup solution?

2019-05-16 Thread Martin
Hi,

there are some :

https://github.com/zipurman/oVIRT_Simple_Backup 

https://github.com/wefixit-AT/oVirtBackup 

https://github.com/vacosta94/VirtBKP 

Just my 2 cents :).

BR!

> On 9 May 2019, at 00:09, mich...@wanderingmad.com wrote:
> 
> Is there a good low to no-cost solution to back up oVirt and the virtual 
> machines?  I've been unable to find something that will do a direct VM backup 
> instead of a backup agent installed on the VM
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6LHUU3EQVDYLP6I5NYO42SGKR2746ORN/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4ACW2U7XNFBVCSACNSSYCTEDNPINPUGO/


[ovirt-users] Wrong disk size in UI after expanding iscsi direct LUN

2019-05-16 Thread Bernhard Dick

Hi,

I've extended the size of one of my direct iSCSI LUNs. The VM is seeing 
the new size, but in the web interface the old size is still 
reported. Is there a way to update this information? I already took a 
look through the list archives, but there are only reports about updating the 
size the VM sees.


  Best regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/54YHISUA66227IAMI2UVPZRIXV54BAKA/


[ovirt-users] Re: Dropped RX Packets

2019-05-16 Thread Oliver Riesener

Hi Magnus,

I've had a bad **virtual** network card three times in the last five 
years. Yes, it's possible.


In my case, NFS services didn't work as expected, but other services were ok.

Today if this would happen again, i unplug and replug the VM nic. Like:

GUI::Compute::VirtualMachines::VMname::Network Interfaces::nicN
-> Edit CardStatus -> Unplugged :: OK
-> Edit CardStatus -> Plugged :: OK

HTH

Oliver

On 16.05.19 15:17, Magnus Isaksson wrote:

Hello all!

I'm having quite some trouble with VMs that have a large number of dropped 
packets on RX.
On top of this, customers complain about brief connection drops; for example, one 
customer has a SQL server and another server connecting to it, and it is 
randomly dropping connections. Before they moved their VMs to us they did not 
have any of these issues.

Does anyone have an idea of what this can be due to? And how can I fix it? It 
is starting to be a deal breaker for our customers on whether they will stay 
with us or not.

I was thinking of reinstalling the nodes with oVirt Node instead of the full 
CentOS; would this perhaps fix the issue?

The environment is:
Huawei x6000 with 4 nodes
Each node has an Intel X722 network card and connects with 10G (fiber) to a 
Juniper EX 4600. Storage via FC to an IBM FS900.
Each node is running a full CentOS 7.6 connecting to an Engine 4.2.8.2.

Regards
  Magnus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXGQSKYBUCFPDCBIQVAAZAWFQX54A2BD/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FQXYN3P2QD727ZKGCNDZCCOOJVJU52DU/


[ovirt-users] Re: Dropped RX Packets

2019-05-16 Thread Darrell Budic
Check your host for dropped packets as well. I had found that some of my older 
10G cards were setting smaller buffers than they could, and using ethtool to 
set tx and rx buffers to their max values significantly improved things for 
those cards. And look at your switch to be sure it/they are not dropping 
packets for some reason. 
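A sketch of the ethtool part (the interface name is an example; use the maximums reported by -g rather than blindly 4096):

ethtool -g em1                   # compare "Pre-set maximums" with "Current hardware settings"
ethtool -G em1 rx 4096 tx 4096   # raise the ring buffers towards the reported maximums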

If you’re using dual 10g links, how do you have them configured on the host?

> On May 16, 2019, at 9:38 AM, Oliver Riesener  
> wrote:
> 
> Hi Magnus,
> 
> I've had a bad **virtual** network card three times in the last five years. 
> Yes it' possible.
> 
> I my case, NFS services didn't work as expected, but other services were ok.
> 
> Today if this would happen again, i unplug and replug the VM nic. Like:
> 
> GUI::Compute::VirtualMachines::VMname::Network Interfaces::nicN
> -> Edit CardStatus -> Unplugged :: OK
> -> Edit CardStatus -> Plugged :: OK
> 
> HTH
> 
> Oliver
> 
> On 16.05.19 15:17, Magnus Isaksson wrote:
>> Hello all!
>> 
>> I'm having quite some trouble with VMs that have a large amount of dropped 
>> packets on RX.
>> This, plus customers complain about short dropped connections, for example 
>> one customer has a SQL server and an other serevr connecting to it, and it 
>> is randomly dropping connections. Before they moved their VM:s to us they 
>> did not have any of these issues.
>> 
>> Does anyone have an idea of what this can be due to? And how can i fix it? 
>> It is starting to be a deal breaker for our customers on whether they will 
>> stay with us or not.
>> 
>> I was thinking of reinstalling the nodes with oVirt Node, instead of the 
>> full CentOS, would this perhaps fix the issue?
>> 
>> The enviroment is:
>> Huawei x6000 with 4 nodes
>> Each node having Intel X722 network card and connecting with 10G (fiber) to 
>> a Juniper EX 4600. Storage via FC to a IBM FS900.
>> Each node is running a full CentOS 7.6 connecting to a Engine 4.2.8.2
>> 
>> Regards
>>  Magnus
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXGQSKYBUCFPDCBIQVAAZAWFQX54A2BD/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FQXYN3P2QD727ZKGCNDZCCOOJVJU52DU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WDQLQVXY3YIPFB4NGK4QVKQU7XXWOV7Z/


[ovirt-users] Re: oVirt Open Source Backup solution?

2019-05-16 Thread Jorick Astrego

On 5/16/19 4:12 PM, Derek Atkins wrote:
> Jorick Astrego  writes:
>
>> Maybe split it in 2 disks? One OS and one APP/DATA? You can then backup 
>> only one. 
>>  
>> I prefer to do this anyway as I then can just redeploy the OS and attach 
>> the second disk to get things back up and running. 
> Are you suggesting that /etc and /var should go onto their own disks?
> There is lots of configuration in /etc (which is usually in the root
> disk) that needs to be backed up.
>
> Also, different apps store configuration and data in different places,
> so saying "just put it on a second disk" can be hard.
>
> Sure, it works fine for /home -- but mysql?  imapd?  ...
>
> -derek

/etc for us is mostly generated by deployment and provisioning, in
production at least, so that is not a problem. All changes have to go
through puppet/ansible.

The /var I have been putting on a separate disk for a long time, and a
lot of VMs have separate disks for /data or /home.
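A minimal sketch of such a layout inside a guest (device names and mount points are examples):

# /etc/fstab fragment
/dev/vdb1   /var    xfs    defaults    0 0
/dev/vdc1   /data   xfs    defaults    0 0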

So this combination works for us, but it depends on your layout and
whether you have any provisioning/config management tools and procedures
running.

Regards,

Jorick Astrego





Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A   KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YEUDZEG6VDQMESWGQ4UJOKS77ACVB6ZW/


[ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Nir Soffer
On Thu, May 16, 2019 at 7:42 PM Strahil  wrote:

> Hi Sandro,
>
> Thanks for the update.
>
> I have just upgraded to RC1 (using gluster v6 here)  and the issue  I
> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
> still present.
>

What is this issue? Can you provide a link to the bug/mail about it?

Can you check if the 'dd' command executed during creation has been
> recently modified ?
>
> I've received update from Darrell  (also gluster v6) , but haven't
> received an update from anyone who is using gluster v5 -> thus I haven't
> opened a bug yet.
>
> Best Regards,
> Strahil Nikolov
> On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt
> 4.3.4 First Release Candidate, as of May 16th, 2019.
>
> This update is a release candidate of the fourth in a series of
> stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used
> in production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28
> is also included.
>
> See the release notes [1] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available[2]
>
> Additional Resources:
> * Read more about the oVirt 4.3.4 release highlights:
> http://www.ovirt.org/release/4.3.4/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.4/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/REDV54BH7CIIDRCRUPCUYN4TX5Z3SL6R/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABFECS5ES4MVL3UZC34GLIDN5PNDTNOR/


[ovirt-users] Re: Dropped RX Packets

2019-05-16 Thread Magnus Isaksson
Hello

@strahil 
The packet drops are frequent; every time I run "ip -s link" on the guest there 
are new dropped packets, while on the hosts it says "0" and in oVirt it says "0".

I can run tcpdump on hosts and guests, but I don't know how to capture the 
dropped packets with tcpdump.

There are no RX or TX errors anywhere, not on hosts, guests or switches.
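For reference, these are the kinds of counters being compared above (interface names are examples):

ip -s link show eth0                        # per-interface RX/TX drops and errors in the guest
ethtool -S ens2f0 | grep -iE 'drop|err'     # NIC/driver specific counters on the host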

The connection drops are completely random, sometimes after a few minutes and 
sometimes after a couple of hours, so they are really hard to narrow down. This may be 
an error in our customer's network; they are investigating it now, so I will 
come back about that issue if it still persists.

@Oliver
I tried this, unfortunately still same result, still dropping packets.

@Darell
I tried increasing the RX and TX buffer on the hosts, but the guests still drop 
packets.

I am using dual 10G, setup in Active-Backup going to two switches, but the 
second switch is now turned off during the testing to narrow this down.
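For reference, the active-backup state can be confirmed like this (the bond name is an example):

cat /proc/net/bonding/bond0    # shows the currently active slave and the link state of each leg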

Regards
 Magnus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TISY75JZ34AJAVV3B233WLCJ5PFBZRRL/


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread Andreas Elvers
AND HAVE BACKUPS OF THE ENGINE. But I suppose you already have automated 
backups of the engine. Right?

After raising the data center compatibility level to 4.3, backups taken with oVirt 4.2 are 
incompatible with 4.3!
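A minimal sketch of taking such a backup before touching the compatibility level (file names are examples):

engine-backup --mode=backup --scope=all \
  --file=/root/engine-backup-$(date +%F).tar.gz \
  --log=/root/engine-backup-$(date +%F).log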
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OK5TM7E2KNGQONN746WA57ID66IDKKY/


[ovirt-users] Re: seal a rhel8 vm for templating fails

2019-05-16 Thread Michal Skrivanek
> On 16 May 2019, at 18:32, Strahil  wrote:
>
> Can you try on a linux host to seal the VM via virt-sysprep ?
> Maybe virt-sysprep is not EL 8 ready...

Indeed it won’t work for el8 guests. Unfortunately an el7
libguestfs(so any virt- tool) can’t work with el8 filesystems. And
ovirt still uses el7 hosts
That’s going to be a limitation until 4.4

Thanks,
michal

>
> Best Regards,
> Strahil NikolovOn May 16, 2019 19:15, Nathanaël Blanchet  
> wrote:
>>
>> Hi,
>>
>> I was used to successfully seal some el7 vms when templating, but with
>> rhel8, it always fails with that logs:
>>
>> 2019-05-16 15:45:03,499+02 ERROR
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-55)
>> [db02e602-4b9d-4935-908c-e9e8c90a808b] Ending command
>> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
>> 2019-05-16 15:45:03,533+02 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-55)
>> [db02e602-4b9d-4935-908c-e9e8c90a808b] START, SetVmStatusVDSCommand(
>> SetVmStatusVDSCommandParameters:{vmId='b79d8d62-212a-4f62-b236-3be6f1ed251e',
>> status='Down', exitStatus='Normal'}), log id: 7e657121
>> 2019-05-16 15:45:03,538+02 INFO
>> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-55)
>> [db02e602-4b9d-4935-908c-e9e8c90a808b] FINISH, SetVmStatusVDSCommand,
>> log id: 7e657121
>> 2019-05-16 15:45:03,546+02 INFO
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-55)
>> [db02e602-4b9d-4935-908c-e9e8c90a808b] Lock freed to object
>> 'EngineLock:{exclusiveLocks='[rhel8.0=TEMPLATE_NAME,
>> 8fb93ef2-d8a1-4d95-afed-c37131312462=TEMPLATE,
>> 3ae1ad71-193b-4222-9727-159b402fef49=DISK]',
>> sharedLocks='[b79d8d62-212a-4f62-b236-3be6f1ed251e=VM]'}'
>> 2019-05-16 15:45:03,560+02 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-55)
>> [db02e602-4b9d-4935-908c-e9e8c90a808b] EVENT_ID:
>> USER_ADD_VM_TEMPLATE_SEAL_FAILURE(1,324), Failed to seal Template
>> rhel8.0 (VM: thym-rhel8).
>>
>> But I successfully make a el8 template without sealing, but the issue is
>> that subscription-manager doesn't activate properly when creating new
>> vms from it.
>>
>> ovirt 3.3.3
>>
>> --
>> Nathanaël Blanchet
>>
>> Supervision réseau
>> Pôle Infrastrutures Informatiques
>> 227 avenue Professeur-Jean-Louis-Viala
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OO7EGM25KWGV5J5V4UDOGEPIU3KJWTMW/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FG53F6UHZZGTF64MITRQIUVMHCXTCV5C/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUEQFHXWXBFAZTKDFD22IW25HOTKMZ6I/


[ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Gobinda Das
On Thu, 16 May 2019, 10:02 p.m. Sandro Bonazzola, 
wrote:

>
>
> On Thu, 16 May 2019 at 18:29, Strahil 
> wrote:
>
>> Hi Sandro,
>>
>> Thanks for the update.
>>
>> I have just upgraded to RC1 (using gluster v6 here)  and the issue  I
>> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
>> still present.
>>
> What is the error? Can I get the error log?
Maybe the engine and vdsm logs.

> Can you check if the 'dd' command executed during creation has been
>> recently modified ?
>>
>> I've received update from Darrell  (also gluster v6) , but haven't
>> received an update from anyone who is using gluster v5 -> thus I haven't
>> opened a bug yet.
>>
>
> Thanks for the feedback, I added a few people to the thread, hopefully
> they can help on this.
>
>
>
>> Best Regards,
>> Strahil Nikolov
>> On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.3.4 First Release Candidate, as of May 16th, 2019.
>>
>> This update is a release candidate of the fourth in a series of
>> stabilization updates to the 4.3 series.
>> This is pre-release software. This pre-release should not be used
>> in production.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> Experimental tech preview for x86_64 and s390x architectures for Fedora
>> 28 is also included.
>>
>> See the release notes [1] for installation / upgrade instructions and a
>> list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is already available[2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.4 release highlights:
>> http://www.ovirt.org/release/4.3.4/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.4/
>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>> 
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MZGF4UVYUBWY5PEG2LOYXBFPJ3ISDX7C/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Darrell Budic
I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

2019-05-16 10:25:08,158-0500 INFO  (jsonrpc/1) [vdsm.api] START 
connectStorageServer(domType=7, spUUID=u'----', 
conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', 
u'id': u'----', u'connection': 
u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': 
u'false', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], 
options=None) from=:::10.100.90.5,44732, 
flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, 
task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:48)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) 
[storage.StorageServer.MountConnection] Creating directory 
u'/rhev/data-center/mnt/glusterSD/10.50.3.12:_test' (storageServer:168)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/glusterSD/10.50.3.12:_test mode: None 
(fileUtils:199)
2019-05-16 10:25:08,306-0500 WARN  (jsonrpc/1) 
[storage.StorageServer.MountConnection] Using user specified 
backup-volfile-servers option (storageServer:275)
2019-05-16 10:25:08,306-0500 INFO  (jsonrpc/1) [storage.Mount] mounting 
10.50.3.12:/test at /rhev/data-center/mnt/glusterSD/10.50.3.12:_test (mount:204)
2019-05-16 10:25:08,453-0500 INFO  (jsonrpc/1) [IOProcessClient] (Global) 
Starting client (__init__:308)
2019-05-16 10:25:08,460-0500 INFO  (ioprocess/5389) [IOProcess] (Global) 
Starting ioprocess (__init__:434)
2019-05-16 10:25:08,473-0500 INFO  (itmap/0) [IOProcessClient] 
(/glusterSD/10.50.3.12:_test) Starting client (__init__:308)
2019-05-16 10:25:08,481-0500 INFO  (ioprocess/5401) [IOProcess] 
(/glusterSD/10.50.3.12:_test) Starting ioprocess (__init__:434)
2019-05-16 10:25:08,484-0500 INFO  (jsonrpc/1) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'status': 0, 'id': 
u'----'}]} from=:::10.100.90.5,44732, 
flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, 
task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:54)
2019-05-16 10:25:08,484-0500 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.connectStorageServer succeeded in 0.33 seconds (__init__:312)

2019-05-16 10:25:09,169-0500 INFO  (jsonrpc/7) [vdsm.api] START 
connectStorageServer(domType=7, spUUID=u'----', 
conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', 
u'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12', u'connection': 
u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': 
u'false', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], 
options=None) from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:48)
2019-05-16 10:25:09,180-0500 INFO  (jsonrpc/7) [vdsm.api] FINISH 
connectStorageServer return={'statuslist': [{'status': 0, 'id': 
u'd0ab6b05-2486-40f0-9b15-7f150017ec12'}]} from=:::10.100.90.5,44732, 
flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:54)
2019-05-16 10:25:09,180-0500 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:312)
2019-05-16 10:25:09,186-0500 INFO  (jsonrpc/5) [vdsm.api] START 
createStorageDomain(storageType=7, 
sdUUID=u'4037f461-2b6d-452f-8156-fcdca820a8a1', domainName=u'gTest', 
typeSpecificArg=u'10.50.3.12:/test', domClass=1, domVersion=u'4', 
block_size=512, max_hosts=250, options=None) from=:::10.100.90.5,44732, 
flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:48)
2019-05-16 10:25:09,492-0500 WARN  (jsonrpc/5) [storage.LVM] Reloading VGs 
failed (vgs=[u'4037f461-2b6d-452f-8156-fcdca820a8a1'] rc=5 out=[] err=['  
Volume group "4037f461-2b6d-452f-8156-fcdca820a8a1" not found', '  Cannot 
process volume group 4037f461-2b6d-452f-8156-fcdca820a8a1']) (lvm:442)
2019-05-16 10:25:09,507-0500 INFO  (jsonrpc/5) [storage.StorageDomain] 
sdUUID=4037f461-2b6d-452f-8156-fcdca820a8a1 domainName=gTest 
remotePath=10.50.3.12:/test domClass=1, block_size=512, alignment=1048576 
(nfsSD:86)
2019-05-16 10:25:09,521-0500 INFO  (jsonrpc/5) [IOProcessClient] 
(4037f461-2b6d-452f-8156-fcdca820a8a1) Starting client (__init__:308)
2019-05-16 10:25:09,528-0500 INFO  (ioprocess/5437) [IOProcess] 
(4037f461-2b6d-452f-8156-fcdca820a8a1) Starting ioprocess (__init__:434)
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 

[ovirt-users] seal a rhel8 vm for templating fails

2019-05-16 Thread Nathanaël Blanchet

Hi,

I used to successfully seal some el7 VMs when templating, but with 
rhel8, it always fails with these logs:


2019-05-16 15:45:03,499+02 ERROR 
[org.ovirt.engine.core.bll.AddVmTemplateCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) 
[db02e602-4b9d-4935-908c-e9e8c90a808b] Ending command 
'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure.
2019-05-16 15:45:03,533+02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) 
[db02e602-4b9d-4935-908c-e9e8c90a808b] START, SetVmStatusVDSCommand( 
SetVmStatusVDSCommandParameters:{vmId='b79d8d62-212a-4f62-b236-3be6f1ed251e', 
status='Down', exitStatus='Normal'}), log id: 7e657121
2019-05-16 15:45:03,538+02 INFO 
[org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) 
[db02e602-4b9d-4935-908c-e9e8c90a808b] FINISH, SetVmStatusVDSCommand, 
log id: 7e657121
2019-05-16 15:45:03,546+02 INFO 
[org.ovirt.engine.core.bll.AddVmTemplateCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) 
[db02e602-4b9d-4935-908c-e9e8c90a808b] Lock freed to object 
'EngineLock:{exclusiveLocks='[rhel8.0=TEMPLATE_NAME, 
8fb93ef2-d8a1-4d95-afed-c37131312462=TEMPLATE, 
3ae1ad71-193b-4222-9727-159b402fef49=DISK]', 
sharedLocks='[b79d8d62-212a-4f62-b236-3be6f1ed251e=VM]'}'
2019-05-16 15:45:03,560+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) 
[db02e602-4b9d-4935-908c-e9e8c90a808b] EVENT_ID: 
USER_ADD_VM_TEMPLATE_SEAL_FAILURE(1,324), Failed to seal Template 
rhel8.0 (VM: thym-rhel8).


I can successfully make an el8 template without sealing, but the issue is 
that subscription-manager doesn't activate properly when creating new 
VMs from it.


ovirt 3.3.3

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OO7EGM25KWGV5J5V4UDOGEPIU3KJWTMW/


[ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil
Hi Sandro,

Thanks for the update.

I have just upgraded to RC1 (using gluster v6 here) and the issue I detected 
in 4.3.3.7 - where gluster storage domain creation fails - is still present.

Can you check if the 'dd' command executed during creation has been recently 
modified?

I've received an update from Darrell (also gluster v6), but haven't received an 
update from anyone who is using gluster v5 -> thus I haven't opened a bug yet.

Best Regards,
Strahil Nikolov
On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 
> First Release Candidate, as of May 16th, 2019.
>
> This update is a release candidate of the fourth in a series of stabilization 
> updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used 
> in production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures 
> for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is 
> also included.
>
> See the release notes [1] for installation / upgrade instructions and a list 
> of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available[2]
>
> Additional Resources:
> * Read more about the oVirt 4.3.4 release 
> highlights:http://www.ovirt.org/release/4.3.4/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt 
> blog:http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.4/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> -- 
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/REDV54BH7CIIDRCRUPCUYN4TX5Z3SL6R/


[ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Sandro Bonazzola
Il giorno gio 16 mag 2019 alle ore 18:29 Strahil  ha
scritto:

> Hi Sandro,
>
> Thanks for the update.
>
> I have just upgraded to RC1 (using gluster v6 here)  and the issue  I
> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
> still present.
>
> Can you check if the 'dd' command executed during creation has been
> recently modified ?
>
> I've received update from Darrell  (also gluster v6) , but haven't
> received an update from anyone who is using gluster v5 -> thus I haven't
> opened a bug yet.
>

Thanks for the feedback, I added a few people to the thread, hopefully they
can help on this.



> Best Regards,
> Strahil Nikolov
> On May 16, 2019 11:21, Sandro Bonazzola  wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt
> 4.3.4 First Release Candidate, as of May 16th, 2019.
>
> This update is a release candidate of the fourth in a series of
> stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used
> in production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28
> is also included.
>
> See the release notes [1] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available[2]
>
> Additional Resources:
> * Read more about the oVirt 4.3.4 release highlights:
> http://www.ovirt.org/release/4.3.4/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.4/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3R76OJ2CEMUX5NYJ2XNE6RIOXD3B37TK/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-16 Thread Lucie Leistnerova

Hi Rik,

On 5/14/19 2:21 PM, Rik Theys wrote:


Hi,

It seems VM pools are completely broken since our upgrade to 4.3. Is 
anybody else also experiencing this issue?


I've tried to reproduce this issue. And I can use pool VMs as expected, 
no problem. I've tested clean install and also upgrade from 4.2.8.7.
Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with 
ovirt-web-ui-1.5.2-1.el7ev.noarch


Only a single instance from a pool can be used. Afterwards the pool 
becomes unusable due to a lock not being released. Once ovirt-engine 
is restarted, another (single) VM from a pool can be used.
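For reference, the acquire/release of that lock can be followed in the engine log like this (the VM id is the one from the excerpts below):

grep -E 'Lock (Acquired|freed).*d8a99676-d520-425e-9974-1b1efe6da8a5' \
    /var/log/ovirt-engine/engine.log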



Which users are running the VMs? What are the permissions?
Is each VM run by a different user? Were some VMs already running before 
the upgrade?

Please provide exact steps.


I've added my findings to bug 1462236, but I'm no longer sure the 
issue is the same as the one initially reported.


When the first VM of a pool is started:

2019-05-14 13:26:46,058+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
 log id: 2fb4f7f5
2019-05-14 13:26:46,058+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
sharedLocks=''}'

-> it has acquired a lock (lock1)

2019-05-14 13:26:46,247+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
 sharedLocks=''}'

-> it has acquired another lock (lock2)

2019-05-14 13:26:46,352+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: 
AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected :  ID: 
4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group 
VM_POOL_BASIC_OPERATIONS with role type USER
2019-05-14 13:26:46,393+02 INFO  
[org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6) 
[e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand 
internal: true. Entities affected :  ID: d8a99676-d520-425e-9974-1b1efe6da8a5 
Type: VMAction group MANIPULATE_PERMISSIONS with role type USER
2019-05-14 13:26:46,433+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user 
'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm 
'd8a99676-d520-425e-9974-1b1efe6da8a5'
2019-05-14 13:26:46,608+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
 log id: 67acc561
2019-05-14 13:26:46,608+02 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 67acc561
2019-05-14 13:26:46,719+02 INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running 
command:RunVmCommand internal: true. Entities affected :  ID: 
d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group RUN_VM with role type 
USER
2019-05-14 13:26:46,791+02 INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
UpdateVmDynamicDataVDSCommand( 
UpdateVmDynamicDataVDSCommandParameters:{hostId='null', 
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', 
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@6db8c94d'}), 
log id: 2c110e4
2019-05-14 13:26:46,795+02 INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
UpdateVmDynamicDataVDSCommand, return: , log id: 2c110e4
2019-05-14 13:26:46,804+02 INFO  
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-6) 
[e3c5745c-e593-4aed-ba67-b173808140e8] START, CreateVDSCommand( 
CreateVDSCommandParameters:{hostId='eec7ec2b-cae1-4bb9-b933-4dff47a70bdb', 
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', vm='VM [stud-c7-1]'}), log id: 
71d599f2
2019-05-14 13:26:46,809+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default 
task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
CreateBrokerVDSCommand(HostName = 

[ovirt-users] Re: seal a rhel8 vm for templating fails

2019-05-16 Thread Strahil
Can you try to seal the VM via virt-sysprep on a Linux host?
Maybe virt-sysprep is not EL8-ready yet...
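
For example, something along these lines on any host with the libguestfs
tools installed (just a rough sketch; the disk path is only an illustration,
point it at a copy of the template disk):

  yum install libguestfs-tools
  virt-sysprep -a /var/tmp/rhel8-test.qcow2

If that fails on a RHEL 8 image but works on an EL7 one, the problem is in
virt-sysprep/libguestfs rather than in oVirt.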

Best Regards,
Strahil Nikolov

On May 16, 2019 19:15, Nathanaël Blanchet  wrote:
>
> Hi, 
>
> I used to be able to successfully seal el7 VMs when templating, but with 
> rhel8 it always fails with these logs: 
>
> 2019-05-16 15:45:03,499+02 ERROR 
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-55) 
> [db02e602-4b9d-4935-908c-e9e8c90a808b] Ending command 
> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' with failure. 
> 2019-05-16 15:45:03,533+02 INFO 
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-55) 
> [db02e602-4b9d-4935-908c-e9e8c90a808b] START, SetVmStatusVDSCommand( 
> SetVmStatusVDSCommandParameters:{vmId='b79d8d62-212a-4f62-b236-3be6f1ed251e', 
> status='Down', exitStatus='Normal'}), log id: 7e657121 
> 2019-05-16 15:45:03,538+02 INFO 
> [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-55) 
> [db02e602-4b9d-4935-908c-e9e8c90a808b] FINISH, SetVmStatusVDSCommand, 
> log id: 7e657121 
> 2019-05-16 15:45:03,546+02 INFO 
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-55) 
> [db02e602-4b9d-4935-908c-e9e8c90a808b] Lock freed to object 
> 'EngineLock:{exclusiveLocks='[rhel8.0=TEMPLATE_NAME, 
> 8fb93ef2-d8a1-4d95-afed-c37131312462=TEMPLATE, 
> 3ae1ad71-193b-4222-9727-159b402fef49=DISK]', 
> sharedLocks='[b79d8d62-212a-4f62-b236-3be6f1ed251e=VM]'}' 
> 2019-05-16 15:45:03,560+02 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-55) 
> [db02e602-4b9d-4935-908c-e9e8c90a808b] EVENT_ID: 
> USER_ADD_VM_TEMPLATE_SEAL_FAILURE(1,324), Failed to seal Template 
> rhel8.0 (VM: thym-rhel8). 
>
> I can successfully make an el8 template without sealing, but the issue is 
> that subscription-manager doesn't activate properly when creating new 
> VMs from it. 
>
> ovirt 3.3.3 
>
> -- 
> Nathanaël Blanchet 
>
> Supervision réseau 
> Pôle Infrastrutures Informatiques 
> 227 avenue Professeur-Jean-Louis-Viala 
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55 
> Fax  33 (0)4 67 54 84 14 
> blanc...@abes.fr 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OO7EGM25KWGV5J5V4UDOGEPIU3KJWTMW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FG53F6UHZZGTF64MITRQIUVMHCXTCV5C/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-16 Thread Gianluca Cecchi
On Thu, May 16, 2019 at 6:32 PM Lucie Leistnerova 
wrote:

> Hi Rik,
> On 5/14/19 2:21 PM, Rik Theys wrote:
>
> Hi,
>
> It seems VM pools are completely broken since our upgrade to 4.3. Is
> anybody else also experiencing this issue?
>
> I've tried to reproduce this issue, and I can use pool VMs as expected, no
> problem. I've tested a clean install and also an upgrade from 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch
>
> Only a single instance from a pool can be used. Afterwards the pool
> becomes unusable due to a lock not being released. Once ovirt-engine is
> restarted, another (single) VM from a pool can be used.
>
> What users are running the VMs? What are the permissions?
> Is each VM run by a different user? Were some VMs already running before the
> upgrade?
> Please provide exact steps.
>
>
Hi, just an idea... could it be related in any way to the problem of disks
always being created as preallocated, reported by users using gluster as
backend storage?
What kind of storage domains are you using, Rik?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IBLEPP446ZONZG3C46OOHVU73CCF7LB/


[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-16 Thread Andreas Elvers
You can take a look at the RHEV 4.3 documentation.

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/upgrade_guide/index

There is another good read at:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/index

You should probably wait for 4.3.4 to arrive, since 4.3.3 is still a little bit 
rough around the edges. I can currently only create a new VM via templates, 
because the normal creation dialog throws an error when I change the datacenter. 
Otherwise the upgrade went very well.

If you are running off of oVirt Node NG, it is very straightforward.

I did:

1. Check if all nodes are using firewalld. You have to switch all nodes to use 
firewalld, because iptables is deprecated and removed in 4.3, as stated in the 
documentation. You have to re-install every node for that, so every node has to 
go through maintenance (see the quick check below).
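
A quick way to see what a node is currently running is plain systemd, nothing
oVirt-specific:

  systemctl is-active firewalld
  systemctl is-active iptables

The switch itself is done from the engine side (the cluster's firewall type is
set to firewalld there, if I recall correctly) followed by the re-install of
each host, one at a time, after putting it into maintenance.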

2. Upgrade the engine. First to 4.2.8, then to 4.3.x. (A consolidated command 
sketch follows the flow below.)

My upgrade flow for this is:

Engine Update START

Minor Upgrades first

enable global maintenance mode

login to engine
engine-upgrade-check
yum update "ovirt-*-setup*"
engine-setup
yum update
when success: disable global maintenance mode

Major Upgrade

enable global maintenance mode

login to engine
yum install 
http://resources.ovirt.org/pub/yum-repo/ovirt-release[releasenumber].rpm
engine-upgrade-check
yum update "ovirt-*-setup*"
engine-setup
remove the old ovirt release from /etc/yum.repos.d
yum update
when success: disable global maintenance mode

Engine Upgrade END
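
Put together, the minor upgrade block above looks roughly like this on a 
self-hosted engine deployment (just a sketch, not a replacement for the 
documentation; skip the hosted-engine commands if your engine is a standalone 
machine and use whatever you normally do for maintenance mode instead):

on one of the hosts:
hosted-engine --set-maintenance --mode=global

on the engine VM:
engine-upgrade-check
yum update "ovirt-*-setup*"
engine-setup
yum update

back on the host, when everything succeeded:
hosted-engine --set-maintenance --mode=none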

3. Upgrade the nodes. 

First to 4.2.8, then as specified in the ovirt release info (per-node steps are 
sketched below):

yum install 
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.3-1.el7.noarch.rpm
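
Per node, that boils down to roughly the following (my usual procedure, adjust 
to your environment; the reboot is needed to boot into the new image layer):

put the host into maintenance from the engine
yum install 
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.3-1.el7.noarch.rpm
reboot the node
activate the host again when it is back up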

4. Upgrade the Datacenter Compatibility to 4.3
This includes rebooting all VMs at one point. You can read about it in the 
documentation.

Then you're done.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBTRBHU6WMA3BGHAVX3BZKQF3PAQBC5H/