[ovirt-users] oVirt 4.3.2 Disk extended broken (UI)

2019-04-05 Thread Strahil Nikolov
Hi,
I have just extended the disk of one of my openSuSE VMs and noticed that 
although the disk is only 140GiB in the UI, the VM sees it as 180GiB.
I think this should not happen at all.
[root@ovirt1 ee8b1dce-c498-47ef-907f-8f1a6fe4e9be]# qemu-img info c525f67d-92ac-4f36-a0ef-f8db501102fa
image: c525f67d-92ac-4f36-a0ef-f8db501102fa
file format: raw
virtual size: 180G (193273528320 bytes)
disk size: 71G
Attaching some UI screen shots.
Note: I extended the disk via the UI by selecting an extension of 40GB (the 
old value in the UI was 100GB, so the UI now shows 140GB).
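
For anyone hitting the same mismatch, a quick way to cross-check the UI value 
against reality is to compare qemu-img's virtual size with what the guest 
reports (a hedged sketch; the image path and device name are placeholders):

  # on the host, against the volume file:
  qemu-img info /rhev/data-center/mnt/<storage>/<image>/<volume>
  # inside the guest:
  lsblk -b /dev/sda              # SIZE column is in bytes
  blockdev --getsize64 /dev/sda  # virtual size as the guest kernel sees it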
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YDIMSTFH74YPK7RKBKWKKPJ3TP3YI64B/


[ovirt-users] Re: Ovirt Host Replacement/Rebuild

2019-04-05 Thread Vincent Royer
I'd say yes; I blow away nodes and reinstall them often as a workaround for
various upgrade failures.

If you spend more than 20 minutes troubleshooting, it's more time-efficient to
just start over. A rough sketch of the flow I mean is below.
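
A hedged sketch (host and volume names are placeholders; the engine-side
steps happen in the UI):

  # 1. In the engine: put the dead host into Maintenance, then Remove it.
  # 2. Reinstall the OS / oVirt Node on the box and configure the network.
  # 3. In the engine: Compute > Hosts > New, and re-add it to the cluster.
  # For a gluster-backed hyperconverged node, check peer and heal state from
  # a surviving node afterwards:
  gluster peer status
  gluster volume heal engine info    # repeat for each volume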

On Fri, Apr 5, 2019, 4:42 PM Jim Kusznir  wrote:

> Hi all:
>
> I had an unplanned power outage (generator failed to start, power failure
> lasted 3 min longer than UPS batteries).  One node didn't survive the
> unplanned power outage.
>
> By that, I mean it kernel panics on boot, and I haven't been able to
> capture the KP or the first part of it (just the end), so I don't
> truly know what the root cause is.  I have validated that the hardware is
> just fine, so it's got to be OS corruption.
>
> Based on this, I was thinking that perhaps the easiest way to recover
> would simply be to delete the host from the cluster, reformat and reinstall
> this host, and then add it back to the cluster as a new host.  Is this in
> fact a good idea?  Are there any references to how to do this (the detailed
> steps so I don't mess it up)?
>
> My cluster is (was) a 3 node hyperconverged cluster with gluster used for
> the management node.  I also have a gluster share for VMs, but I use an NFS
> share from a NAS for that (which I will ask about in another post).
>
> Thanks for the help!
> --Jim
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y66A6Q3NOGD3BCQ4UVAZK5ATS4ZFPVYV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/72VHXP2RZM47URJMDMXKEHBF4Z7A5WYE/


[ovirt-users] Ovirt Host Replacement/Rebuild

2019-04-05 Thread Jim Kusznir
Hi all:

I had an unplanned power outage (generator failed to start, power failure
lasted 3 min longer than UPS batteries).  One node didn't survive the
unplanned power outage.

By that, I mean it kernel panics on boot, and I haven't been able to
capture the KP or the first part of it (just the end), so I don't
truly know what the root cause is.  I have validated that the hardware is
just fine, so it's got to be OS corruption.

Based on this, I was thinking that perhaps the easiest way to recover would
simply be to delete the host from the cluster, reformat and reinstall this
host, and then add it back to the cluster as a new host.  Is this in fact a
good idea?  Are there any references to how to do this (the detailed steps
so I don't mess it up)?

My cluster is (was) a 3 node hyperconverged cluster with gluster used for
the management node.  I also have a gluster share for VMs, but I use an NFS
share from a NAS for that (which I will ask about in another post).

Thanks for the help!
--Jim
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y66A6Q3NOGD3BCQ4UVAZK5ATS4ZFPVYV/


[ovirt-users] errors upgrading ovirt 4.3.1 to 4.3.2

2019-04-05 Thread Jason Keltz

Hi.

I have a few issues after a recent upgrade from 4.3.1 to 4.3.2:

1) Power management is no longer working.  I'm using a Dell iDRAC7. This 
has always worked previously.  When I click on the "Test" button, I get 
"Testing in progress.  It will take a few seconds. Please wait", but then 
it just sits there and never returns (a manual test from a host is 
sketched after this list).


2) After rekickstarting one of my hosts, when I click on it and choose 
"Host Console", I get "Authentication failed: invalid-hostkey".  If I 
click "Try again", I'm taken to a page with "404 - Page not found. Click 
here to continue".  The page not found is likely a bug.  If I visit 
cockpit directly on the host via its own URL, it works just fine.  Given 
that I deleted the host and re-added it to the engine, it's really not 
clear to me how to tell the engine to refresh.  I figured the problem 
would surely go away after rekickstarting the host, but it did not.


3) From time to time, I see the following error in the engine: 
"Uncaught exception occurred. Please try reloading the page. Details: 
(TypeError): oab (...) is null. Please have your administrator check the 
UI logs".  Another bug ...
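
For issue 1, it may help to rule the engine out by testing the fence device 
directly from one of the hosts. A hedged sketch with fence-agents (address 
and credentials are placeholders; an iDRAC7 usually answers IPMI-over-LAN):

  fence_ipmilan --ip=drac.example.org --username=root --password=secret \
                --lanplus --action=status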


Engine is standalone engine, not hosted.

Jason.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4JIEXM4WALJTISKCTZC7WSNX7IWSXSK/


[ovirt-users] I can't create disk with ovirt 4.3.2

2019-04-05 Thread siovelrm
Hello, I just installed oVirt 4.3.2 in Self-Hosted Engine mode, all the same 
as in previous versions. When I try to create a disk with a user other than 
admin, I get the following error:
"Error while executing action Add Disk to VM: Internal Engine Error"
This happens with every user except admin, even when those users are also 
SuperUser. Please, I need your help.
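
To rule out the UI, the same operation can be tried against the REST API as 
the non-admin user. A hedged sketch (engine URL, credentials, disk name, size 
and storage domain name are placeholders):

  curl -k -u 'jdoe@internal:password' \
       -H 'Content-Type: application/xml' \
       -X POST https://engine.example.org/ovirt-engine/api/disks \
       -d '<disk>
             <name>test_disk</name>
             <format>cow</format>
             <provisioned_size>1073741824</provisioned_size>
             <storage_domains>
               <storage_domain><name>data</name></storage_domain>
             </storage_domains>
           </disk>'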

Engine logs say the following:
2019-04-05 15:34:09,977-04 INFO 
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-13) 
[37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Running command: AddDiskCommand 
internal: false. Entities affected: ID: c76a5059-f891-496a-b45f-7ba7ea878ceb 
Type: Storage; Action group CREATE_DISK with role type USER
2019-04-05 15:34:10,002-04 WARN 
[org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] 
(default task-13) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Validation of action 
'AddImageFromScratch' failed for user jdoe@internal-authz. Reasons: 
VAR__TYPE__STORAGE__DOMAIN, 
NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
2019-04-05 15:34:10,070-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-13) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] EVENT_ID: USER_FAILED_ADD_DISK 
(2,023), Add-Disk operation failed (User: jdoe@internal-authz).
2019-04-05 15:34:10,432-04 INFO 
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-8) 
[37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Command 'AddDisk' id: 
'8b10a2b8-3a38-45a4-9c08-7e742eca001b' child commands 
'[bcfaa199-dee6-4ae4-9404-9b75cd8e9339]' executions were completed, status 
'FAILED'
2019-04-05 15:34:11,461-04 ERROR 
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) 
[37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Ending command 
'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2019-04-05 15:34:11,471-04 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) 
[37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Ending command 
'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' with 
failure.
2019-04-05 15:34:11,493-04 WARN 
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM is null - not 
unlocking
2019-04-05 15:34:11,523-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: 
USER_ADD_DISK_FINISHED_FAILURE (2,022), Add-Disk operation failed to complete.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABW6KRTVP3MI737FYLUV44V72I4ZAUL5/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread John Florian
Doh!  I am such an idiot !!!

First of all, I meant to say I upgraded to 4.3.2, not 4.3.3.  I only installed 
ovirt-release43.rpm on the engine.  I've gotten so used to the Upgrade Host 
feature in the GUI that I completely failed to think of doing this on each of 
the hosts.  Worse, I've got a vague sense of deja vu, like I've been in this 
same spot before, maybe with 4.1 -> 4.2.  These bigger upgrades are just 
infrequent enough that I forget important steps.

It seems like this could be handled more gracefully though.  Shouldn't this be 
caught and reported as a user-friendly alert in the GUI?  Also, I think it 
would be better if the release notes ...

Change """If you're upgrading from oVirt Engine 4.2.8 you just need to 
execute:"""
 to """If you're upgrading from oVirt Engine 4.2.8 you just need to execute on 
your Engine and each Host:""".
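
For reference, the commands I mean are roughly these (hedged; check the 
release notes for the exact package name):

  # on the engine AND on every host:
  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  yum update
  # then on the engine only:
  engine-setup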
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KKLSZPARCOLMNLYS3YEKWWZ5OCYTDLNI/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread John Florian
> What kind of storage are you using? local? 

iSCSI
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/APCJ6BIYABSE4VJDUTZ4Z5RFZLHDESV4/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil
Hi Simone,

> According to gluster administration guide:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
>  
> in the "when to bond" section we can read:
> network throughput limit of client/server << storage throughput limit
>
> 1 GbE (almost always)
> 10-Gbps links or faster -- for writes, replication doubles the load on the 
> network and replicas are usually on different peers to which the client can 
> transmit in parallel.
>
> So if you are using oVirt hyper-converged in replica 3 you have to transmit 
> everything two times over the storage network to sync it with other peers.
>
> I'm not really into those details, but if https://bugzilla.redhat.com/1673058 is 
> really like it's described, we even have a 5x overhead with current gluster 
> 5.x.
>
> This means that with a 1000 Mbps nic we cannot expect more than:
> 1000 Mbps / 2 (other replicas) / 5 (overhead in Gluster 5.x ???) / 8 (bits per 
> byte) = 12.5 MByte per second, and this is definitely low enough to have 
> sanlock failing, especially because we don't have just the sanlock load, as 
> you can imagine.
>
> I'd strongly advise moving to 10 Gigabit Ethernet (nowadays with a few 
> hundred dollars you can buy a 4/5 port 10GBASE-T copper switch plus 3 nics 
> and the cables, just for the gluster network) or bonding a few 1 Gigabit 
> Ethernet links.

I didn't know that.
So, with a 1 Gbit network, everyone should use replica 3 arbiter 1 volumes to 
minimize replication traffic.
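
For illustration, an arbiter volume of that kind is created roughly like this 
(a hedged sketch; hostnames and brick paths are placeholders). The third 
brick stores only metadata, so full data is shipped to just one other replica:

  gluster volume create engine replica 3 arbiter 1 \
      node1:/gluster/engine/brick \
      node2:/gluster/engine/brick \
      node3:/gluster/engine/brick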

Best Regards,
Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YW7VF3WOJO7BGAZJVPCUHHHGYJCR4NJX/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread Alex McWhirter
What kind of storage are you using? local? 

On 2019-04-05 12:26, John Florian wrote:

> Also, I see in the notification drawer a message that says: 
> 
> Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not be 
> synchronized. To synchronize them, please move them to maintenance and then 
> activate. 
> 
> However, when I navigate to Compute > Data Centers > Default, the Maintenance 
> option is greyed out.  Activate in the button bar is also greyed out, but it 
> does appear in the right-click context menu; selecting it shows "Error while 
> executing action: Cannot activate Storage. There is no active Host in the 
> Data Center.". 
> 
> I'm just stuck in an endless circle here.
> 
> On Fri, Apr 5, 2019 at 12:04 PM John Florian  wrote: 
> 
>> I am in a severe pinch here.  A while back I upgraded from 4.2.8 to 4.3.3 
>> and only had one step remaining: setting the cluster compat level to 4.3 
>> (from 4.2).  When I tried this it gave the usual warning that each VM would 
>> have to be rebooted to complete, but then came the first surprise: it told 
>> me this could not be completed until each host was in maintenance mode.  
>> Quirky, I thought, but I stopped all VMs and put both hosts into 
>> maintenance mode.  I then set the cluster to 4.3.  Things didn't want to 
>> become active again and I eventually noticed that I was being told the DC 
>> needed to be 4.3 as well.  I don't remember that from before, but oh well, 
>> that was easy. 
>> 
>> However, the DC and SD remain down.  The hosts are non-op.  I've powered 
>> everything off and started fresh but still wind up in the same state.  Hosts 
>> will look like they're active for a bit (green triangle) but then go non-op 
>> after about a minute.  It appears that my iSCSI sessions are active/logged 
>> in.  The one glaring thing I see in the logs is this in vdsm.log: 
>> 
>> 2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor] 
>> Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed 
>> (monitor:329)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 326, 
>> in _setupLoop
>> self._setupMonitor()
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 348, 
>> in _setupMonitor
>> self._produceDomain()
>> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in wrapper
>> value = meth(self, *a, **kw)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 366, 
>> in _produceDomain
>> self.domain = sdCache.produce(self.sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in 
>> produce
>> domain.getRealDomain()
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
>> getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
>> _realProduce
>> domain = self._findDomain(sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
>> _findDomain
>> return findMethod(sdUUID)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in 
>> _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist: 
>> (u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',) 
>> 
>> How do I proceed to get back operational?
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7XPDM3EFUJPXON3YCX3EK66NCMFI6SJ/___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BSR624W4NMQUOHHJIYX5LUDNY4XTF5QX/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread Strahil
Are you able to access your iSCSI via the /rhev/data-center/mnt... mount point ?
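
A few hedged checks along those lines (the SD UUID is the one from the 
vdsm.log below; the paths are the usual vdsm layout):

  iscsiadm -m session -P 3              # are the iSCSI sessions really logged in?
  ls -l /rhev/data-center/mnt/blockSD/  # does vdsm's block-SD tree show the domain?
  vgs | grep 07bb1bf8                   # an iSCSI SD is backed by a VG named after its UUID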

Best Regards,
Strahil Nikolov

On Apr 5, 2019 19:04, John Florian  wrote:
>
> I am in a severe pinch here.  A while back I upgraded from 4.2.8 to 4.3.3 and 
> only had one step remaining: setting the cluster compat level to 4.3 (from 
> 4.2).  When I tried this it gave the usual warning that each VM would have to 
> be rebooted to complete, but then came the first surprise: it told me this 
> could not be completed until each host was in maintenance mode.  Quirky, I 
> thought, but I stopped all VMs and put both hosts into maintenance mode.  I 
> then set the cluster to 4.3.  Things didn't want to become active again and I 
> eventually noticed that I was being told the DC needed to be 4.3 as well.  I 
> don't remember that from before, but oh well, that was easy.
>
> However, the DC and SD remain down.  The hosts are non-op.  I've powered 
> everything off and started fresh but still wind up in the same state.  Hosts 
> will look like they're active for a bit (green triangle) but then go non-op 
> after about a minute.  It appears that my iSCSI sessions are active/logged 
> in.  The one glaring thing I see in the logs is this in vdsm.log:
>
> 2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor] 
> Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed 
> (monitor:329)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 326, 
> in _setupLoop
>     self._setupMonitor()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 348, 
> in _setupMonitor
>     self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in wrapper
>     value = meth(self, *a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 366, 
> in _produceDomain
>     self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in 
> produce
>     domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
> getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
> _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
> _findDomain
>     return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in 
> _findUnfetchedDomain
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: 
> (u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',)
>
> How do I proceed to get back operational?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXKT2RFJSF3P75GCMX4WE34H64TRANDG/


[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread John Florian
Also, I see in the notification drawer a message that says:

Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not
be synchronized. To synchronize them, please move them to maintenance and
then activate.

However, when I navigate to Compute > Data Centers > Default, the
Maintenance option is greyed out.  Activate in the button bar is also
greyed out, but it does appear in the right-click context menu; selecting
it shows "Error while executing action: Cannot activate Storage. There is
no active Host in the Data Center.".

I'm just stuck in an endless circle here.

On Fri, Apr 5, 2019 at 12:04 PM John Florian  wrote:

> I am in a severe pinch here.  A while back I upgraded from 4.2.8 to 4.3.3
> and only had one step remaining: setting the cluster compat level to 4.3
> (from 4.2).  When I tried this it gave the usual warning that each VM would
> have to be rebooted to complete, but then came the first surprise: it told
> me this could not be completed until each host was in maintenance mode.
> Quirky, I thought, but I stopped all VMs and put both hosts into maintenance
> mode.  I then set the cluster to 4.3.  Things didn't want to become active
> again and I eventually noticed that I was being told the DC needed to be 4.3
> as well.  I don't remember that from before, but oh well, that was easy.
>
> However, the DC and SD remain down.  The hosts are non-op.  I've powered
> everything off and started fresh but still wind up in the same state.
> Hosts will look like they're active for a bit (green triangle) but then go
> non-op after about a minute.  It appears that my iSCSI sessions are
> active/logged in.  The one glaring thing I see in the logs is this in
> vdsm.log:
>
> 2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor]
> Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed
> (monitor:329)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 326, in _setupLoop
> self._setupMonitor()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 348, in _setupMonitor
> self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in
> wrapper
> value = meth(self, *a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 366, in _produceDomain
> self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
> in produce
> domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
> getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
> in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
> in _findDomain
> return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
> in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',)
>
> How do I proceed to get back operational?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7XPDM3EFUJPXON3YCX3EK66NCMFI6SJ/


[ovirt-users] All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread John Florian
I am in a severe pinch here.  A while back I upgraded from 4.2.8 to 4.3.3
and only had one step remaining: setting the cluster compat level to 4.3
(from 4.2).  When I tried this it gave the usual warning that each VM would
have to be rebooted to complete, but then came the first surprise: it told
me this could not be completed until each host was in maintenance mode.
Quirky, I thought, but I stopped all VMs and put both hosts into maintenance
mode.  I then set the cluster to 4.3.  Things didn't want to become active
again and I eventually noticed that I was being told the DC needed to be 4.3
as well.  I don't remember that from before, but oh well, that was easy.

However, the DC and SD remain down.  The hosts are non-op.  I've powered
everything off and started fresh but still wind up in the same state.
Hosts will look like they're active for a bit (green triangle) but then go
non-op after about a minute.  It appears that my iSCSI sessions are
active/logged in.  The one glaring thing I see in the logs is this in
vdsm.log:

2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor]
Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed
(monitor:329)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
326, in _setupLoop
self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
348, in _setupMonitor
self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in
wrapper
value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
366, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in
produce
domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in
_realProduce
domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in
_findDomain
return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in
_findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',)

How do I proceed to get back operational?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GJKJKVOKKZVTGKSNIU6RSOYUE5HUDWNN/


[ovirt-users] Re: VM bandwidth limitations

2019-04-05 Thread John Florian
On 4/4/19 7:03 AM, Dominik Holler wrote:
> On Sun, 10 Mar 2019 13:45:59 -0400
> John Florian  wrote:
>
>> In my oVirt deployment at home, I'm trying to minimize the amount of
>> physical HW and its 24/7 power draw.  As such I have the NFS server for
>> my domain virtualized.  This is not used for oVirt's SD, but rather the
>> NFS server's back-end storage comes from oVirt's SD.  To maximize the
>> performance of my NFS server, do I still need to use bonded NICs to
>> increase bandwidth like I would a physical server or does the
>> VirtIO-SCSI stuff magically make this unnecessary? 
> This depends on the scenario.
> Bonding two VirtIO vNICs connected to the same network would not
> increase the throughput, since a single vNIC has by default no
> artificial bandwidth limit.

Thank you, this is exactly what I wanted to know (and suspected).
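
For anyone who wants to verify this empirically, a quick hedged check is an 
iperf3 run between two VMs on the same logical network (the address is a 
placeholder):

  iperf3 -s                         # on the NFS-server VM
  iperf3 -c 192.0.2.10 -P 4 -t 30   # on a client: 4 parallel streams for 30 s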

-- 
John Florian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IC63X6GKXHOHZ7WXISRUM5MSXC3LWVP/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Simone Tiraboschi
On Fri, Apr 5, 2019 at 2:18 PM Strahil Nikolov 
wrote:

> >This definitely helps, but in my experience the network speed is really
> >the determining factor here. Can you describe your network configuration?
> >A 10 Gbps net is definitely fine here.
> >A few bonded 1 Gbps nics could work.
> >A single 1 Gbps nic could be an issue.
>
>
> I have a gigabit interface on my workstations and sadly I have no option
> for upgrade without switching the hardware.
> I have observed my network traffic for days with iftop and gtop and I have
> never reached my Gbit interface's maximum bandwidth, not even the half of
> it.
>
> Even when resetting my bricks (gluster volume reset-brick) and running a
> full heal, I do not observe more than 50MiB/s utilization. I am not sure
> if FUSE is using the network for accessing the local brick - but I hope
> that it is not the case.
>

GlusterFS is a scalable *network* filesystem: the network is always there.
You can use caching techniques to read from the local peer first, but sooner
or later you will have to compare it with data from other peers or sync the
data to other peers on writes.

According to gluster administration guide:
https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/

in the "when to bond" section we can read:
network throughput limit of client/server << storage throughput limit

1 GbE (almost always)
10-Gbps links or faster -- for writes, replication doubles the load on the
network and replicas are usually on different peers to which the client can
transmit in parallel.

So if you are using oVirt hyper-converged in replica 3 you have to transmit
everything two times over the storage network to sync it with other peers.

I'm not really into those details, but if https://bugzilla.redhat.com/1673058
is really like it's described, we even have a 5x overhead with current
gluster 5.x.

This means that with a 1000 Mbps nic we cannot expect more than:
1000 Mbps / 2 (other replicas) / 5 (overhead in Gluster 5.x ???) / 8 (bits
per byte) = 12.5 MByte per second, and this is definitely low enough to have
sanlock failing, especially because we don't have just the sanlock load, as
you can imagine.

I'd strongly advise moving to 10 Gigabit Ethernet (nowadays with a few
hundred dollars you can buy a 4/5 port 10GBASE-T copper switch plus 3 nics
and the cables, just for the gluster network) or bonding a few 1 Gigabit
Ethernet links.


> Checking disk performance - everything is in the expected ranges.
>
> I suspect that the Gluster v5 enhancements are increasing both network and
> IOPS requirements and my setup was not dealing with it properly.
>
>
> >It's definitely planned, see: https://bugzilla.redhat.com/1693998
> 
> >I'm not really sure about its time plan.
>
> I will try to get involved and provide feedback both to oVirt and Gluster
> dev teams.
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIXOGWSRZZTAYEQKIQJ65GPFT4CHHPTN/


[ovirt-users] oVirt 4.3.2 Importing VMs from a detached domain not keeping cluster info

2019-04-05 Thread Strahil Nikolov
Hello,
can someone tell me if this is an expected behaviour:
1. I have created a data storage domain exported by nfs-ganesha via NFS
2. Stopped all VMs on the storage domain
3. Set the domain to maintenance and detached it (without wipe)
3.2 All VMs are gone (which was expected)
4. Imported the existing data domain via Gluster
5. Went to the Gluster domain and imported all templates and VMs
5.2 Powered on some of the VMs, but some of them failed

The reason for the failure is that some of the re-imported VMs were
automatically assigned to the Default cluster, while they belonged to
another one.
Most probably this is not a supported activity, but can someone clarify it?
Thanks in advance.
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SMPIPRXOM6BVPJ7ELN6KVYZGI2WKYRY2/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
 
>This definitely helps, but in my experience the network speed is really
>the determining factor here. Can you describe your network configuration?
>A 10 Gbps net is definitely fine here.
>A few bonded 1 Gbps nics could work.
>A single 1 Gbps nic could be an issue.


I have a gigabit interface on my workstations and sadly I have no option for 
upgrade without switching the hardware. 
I have observed my network traffic for days with iftop and gtop and I have 
never reached my Gbit interface's maximum bandwidth, not even the half of it.
Even when resetting my bricks (gluster volume reset-brick) and running a full 
heal, I do not observe more than 50MiB/s utilization. I am not sure if FUSE is 
using the network for accessing the local brick - but I hope that it is not 
the case.
Checking disk performance - everything is in the expected ranges.
I suspect that the Gluster v5 enhancements are increasing both network and IOPS 
requirements and my setup was not dealing with it properly.

>It's definitely planned, see: https://bugzilla.redhat.com/1693998
>I'm not really sure about its time plan.
I will try to get involved and provide feedback both to oVirt and Gluster dev 
teams.
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSNSV63YJFY7LGFVMNYIZYQMNPGAAMCH/


[ovirt-users] Re: Upgrade 3.5 to 4.3

2019-04-05 Thread smirta

Dear all

 

Upgrading from 3.5 to 4.0 or 4.3 is not supported. You need to upgrade to 3.6 first. We've done this and everything worked just fine. Always check out the release notes and upgrade instructions before upgrading.

From https://www.ovirt.org/documentation/upgrade-guide/chap-Upgrading_to_oVirt_4.0.html : "Upgrading to version 4.0 can only be performed from version 3.6"

 

Best regards

Simon

 

 

Sent: Thursday, 4 April 2019 at 17:53
From: "Николаев Алексей" 
To: users , "Demeter Tibor" 
Subject: [ovirt-users] Re: Upgrade 3.5 to 4.3

Hi!

The best practice is to import the data domain into a new instance of ovirt-engine.

04.04.2019, 16:17, "Demeter Tibor" :

Hi All,

I began a very big project: I've started upgrading a cluster from 3.5 to 4.3...
It was a mistake :(
Since the upgrade, I can't start the host. The UI seems to be working fine.

[root@virt ~]# service vdsm-network start
Redirecting to /bin/systemctl start vdsm-network.service
Job for vdsm-network.service failed because the control process exited with error code. See "systemctl status vdsm-network.service" and "journalctl -xe" for details.

[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-04-04 15:03:44 CEST; 6s ago
  Process: 19325 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
  Process: 19313 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
 Main PID: 19325 (code=exited, status=1/FAILURE)

Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: return tool_command[cmd]["command"](*args)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 41, in restore_command
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: exec_restore(cmd)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 54, in exec_restore
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: raise EnvironmentError('Failed to restore the persisted networks')
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: EnvironmentError: Failed to restore the persisted networks
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service: main process exited, code=exited, status=1/FAILURE
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Failed to start Virtual Desktop Server Manager network restoration.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Unit vdsm-network.service entered failed state.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service failed.


 

Since then, the host does not start.


Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.764+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.965+0000: 17709: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17709: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed


 

[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-04-04 15:00:39 CEST; 2min 49s ago
  Process: 19079 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
  Process: 19045 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
 Main PID: 19079 (code=exited, status=1/FAILURE)


 

 

 

Also, I upgraded up to 4.0, but it seems the same. I can't reinstall and reactivate the host.

 

Originally it was an AIO installation. 

 

Please help me solve this problem.

 


Thanks in advance,

R
Tibor

 


 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org

[ovirt-users] Re: Need VM run once api

2019-04-05 Thread louisedazzle15
Especially if you are testing with specific hardware features, multiprocessor 
systems, and other critical variables. These restrictions depend on the 
specific VM, and you should check its documentation first before performing 
automated testing.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TDENIJKB6CEA2VG2YVJIS3DPMJZOX5B5/


[ovirt-users] Re: NPE for GetValidHostsForVmsQuery

2019-04-05 Thread Andrej Krejcir
Thanks for the info.

Here is the Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1696621


Andrej

On Fri, 5 Apr 2019 at 10:22,  wrote:

> Hi Andrej,
>
> I failed to mention a fact that is probably decisive. Prior to noticing
> the error, we upgraded the Cluster & Data Center compatibility version
> from 4.1 to 4.3, which caused ovirt-engine to automatically edit all VMs
> and modify their compatibility versions as well (with changes pending
> until next reboot).
>
> So if we powered down the VM, edited the VM, saved it (even without
> changes) and powered it up, migrations would work again.
>
> This happened with all affected machines.
>
> If you need some additional info, just ask.
>
> Thanks.
>
> El 2019-04-04 17:03, Andrej Krejcir escribió:
> > Hi,
> >
> > The NPE is because the CPU load of a VM is missing. It happens when
> > the VM statistics are not updated.
> >
> > This is definitely a bug, missing CPU load should not prevent
> > migration.
> > I will open a Bugzilla ticket.
> >
> > Can you share some more details about the VMs?
> > Does the NPE happen for all VMs or only some specific types?
> >
> > Thanks,
> > Andrej
> >
> > On Wed, 3 Apr 2019 at 13:45,  wrote:
> >
> >> Hi,
> >>
> >> We're running oVirt 4.3.2. When we click on the "Migrate" button
> >> over a
> >> VM, an error popup shows up and in the ovirt-engine log we see:
> >>
> >>2019-04-03 12:37:40,897+01 ERROR
> >> [org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default
> >> task-6)
> >> [478381f0-18e3-4c96-bcb5-aafd116d7b7a] Query
> >> 'GetValidHostsForVmsQuery'
> >> failed: null
> >>
> >> I'm attaching the full NPE.
> >>
> >> Could someone point out what could be the reason for the NPE?
> >>
> >> Thanks.___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ [1]
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/ [2]
> >> List Archives:
> >>
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWPQR34DDQMTDPUI2EZFA3VSHA77BET/
> >> [3]
> >
> >
> > Links:
> > --
> > [1] https://www.ovirt.org/site/privacy-policy/
> > [2] https://www.ovirt.org/community/about/community-guidelines/
> > [3]
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWPQR34DDQMTDPUI2EZFA3VSHA77BET/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D5HDG3GZJXTAC3BWXVK36XQGLBA462GS/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Simone Tiraboschi
On Fri, Apr 5, 2019 at 10:48 AM Strahil Nikolov 
wrote:

> Hi Simone,
>
> In a short mail chain in gluster-users, Amar confirmed my suspicion that
> Gluster v5.5 is performing a little slower than 3.12.15.
> As a result, the sanlock reservations take too much time.
>

Thanks for the report!


> I have updated my setup and uncached (I used lvm caching in writeback mode)
> my data bricks, and used the SSD for the engine volume. Now the engine is
> running quite well and no more issues were observed.
>

This definitely helps, but in my experience the network speed is really
the determining factor here.
Can you describe your network configuration?
A 10 Gbps net is definitely fine here.
A few bonded 1 Gbps nics could work.
A single 1 Gbps nic could be an issue.


> Can you share any thoughts about oVirt being updated to Gluster v6.x? I
> know that there are many hooks between vdsm and gluster and I'm not sure how
> vdsm will react to the new version.
>

It's definitely planned, see: https://bugzilla.redhat.com/1693998

I'm not really sure about its time plan.


>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K4DNJQNHHMI5TXLNHXI3XEUSTKSO7IAO/


[ovirt-users] Fwd: Re: VDI broker and oVirt

2019-04-05 Thread Jorick Astrego
Forward to the list.


 Forwarded Message 
Subject:Re: [ovirt-users] Re: VDI broker and oVirt
Date:   Fri, 05 Apr 2019 04:52:13 -0400
From:   a...@triadic.us



As far as official software, the best you'll find is the user portal.

There is also this...

https://github.com/nkovacne/ovirt-desktop-client

We used that as a code base for our own VDI connector, using smart card
PKCS#12 certs to auth to the oVirt API.

On Apr 5, 2019 4:08 AM, Jorick Astrego  wrote:

Hi,

I think you mean to ask about the connection broker to connect to
your VDI infrastructure?

Something like this:

Or

https://www.leostream.com/solution/remote-access-for-virtual-and-physical-workstations/

oVirt has the VM user portal https://github.com/oVirt/ovirt-web-ui ,
but I have never used a third-party connection broker myself, so I'm
not aware of any compatible with oVirt or RHEV...



On 4/4/19 9:10 PM, oquerej...@gmail.com
 wrote:

I have oVirt installed, with two hypervisors and oVirt Engine. I want to 
mount a VDI infrastructure, as cheap as possible, but robust and reliable. The 
question is which broker I can use. Thank you.
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 

Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLH7Y5HCOMBYL7QBPOGOPRTQY247GNUD/





Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts

Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A    KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede    BTW NL821234584B01












___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3CF4SKHPBC3KSWF6U6CHCOGBJ5M3N63R/


[ovirt-users] Re: Controller recomandation - LSI2008/9265

2019-04-05 Thread Strahil Nikolov
At least based on the specs, I would prefer the LSI9265-8i, as it supports hot 
spares, SSDs, and cache; I would set it up in RAID 0, but only with replica 3 
or replica 3 arbiter 1 volumes.
Best Regards,
Strahil Nikolov

On Friday, 5 April 2019 at 09:20:57 GMT+3, Leo David 
 wrote:  
 
 Thank you Strahil for that.
On Fri, Apr 5, 2019, 06:45 Strahil  wrote:


Adding Gluster users' mail list.
On Apr 5, 2019 06:02, Leo David  wrote:

Hi Everyone,
Any thoughts on this?

On Wed, Apr 3, 2019, 17:02 Leo David  wrote:

Hi Everyone,
For a hyperconverged setup started with 3 nodes and growing in time up to 12 
nodes, I have to choose between LSI2008 (JBOD) and LSI9265 (RAID). A Perc H710 
(RAID) might be an option too, but on a different chassis. There will not be 
many disks installed on each node, so the replication will be replica 3 
replicated-distributed volumes across the nodes, as:
node1/disk1  node2/disk1  node3/disk1
node1/disk2  node2/disk2  node3/disk2
and so on...
As I add nodes to the cluster, I intend to expand the volumes using the same 
rule. What would be the better way: to use JBOD cards (no cache), or a RAID 
card creating RAID 0 arrays (one for each disk) and therefore have a bit of 
RAID cache (512MB)? Is RAID caching a benefit to have underneath 
oVirt/Gluster, as long as I go for a "JBOD" installation anyway? Thank you 
very much!
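
For illustration, that layout maps to a distributed-replicated volume created 
roughly like this (a hedged sketch; hostnames and brick paths are 
placeholders, and brick order matters: each consecutive group of three bricks 
forms one replica set):

  gluster volume create data replica 3 \
      node1:/bricks/disk1 node2:/bricks/disk1 node3:/bricks/disk1 \
      node1:/bricks/disk2 node2:/bricks/disk2 node3:/bricks/disk2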
-- 
Best regards, Leo David


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STI7U7LTOXSSH6WUNHX63WDIF2LZ46K/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPQXXCANJNLY7NSBCPBGAL6EITRM5BO6/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
 Hi Simone,
a short mail chain in gluster-users Amar confirmed my suspicion that Gluster 
v5.5 is performing a little bit slower than 3.12.15 .In result the sanlock 
reservations take too much time.
I have updated my setup and uncached (used lvm caching in writeback mode) my 
data bricks and used the SSD for the engine volume.Now the engine is running 
quite well and no more issues were observed.
Can you share any thoughts about oVirt being updated to Gluster v6.x ? I know 
that there are any hooks between vdsm and gluster and I'm not sure how vdsm 
will react on the new version.
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CEQUSJEZMIA6R6TB6OHTFFA3ZA6FSM6B/


[ovirt-users] Re: How to fix ovn apparent inconsistency?

2019-04-05 Thread Gianluca Cecchi
On Fri, Apr 5, 2019 at 9:56 AM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

>
>
> Mind sharing the created ACLs ? (which I'm quite positive will be the
> default ones, but I just have to be sure). Can be done via "ovn-nbctl
> list acl" . With that I can check the ACLs assigned to the default
> group, and assure they are correct.
>
>
The question is: networks that already existed before the port security
feature was introduced in 4.3 seem to have inherited the "Enabled" option,
and this prevents communication between VMs on the same OVN network.
Is this expected?
Otherwise other people using OVN on 4.2 will have the same problem when
migrating to 4.3.
If I now create in 4.3.2 a new OVN-based network and select "Create an
external provider", I get "ovirt-provider-ovn" as the default External
Provider and "Enabled" as Network Port Security. Is this expected?
Is it expected that a new OVN network with default values (port security
Enabled) is created such that by default two VMs don't communicate unless I
set a special security group rule (which at this moment requires the REST
API)?
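
As a hedged stop-gap, port security can also be cleared directly in the OVN
northbound DB (the port name is a placeholder; ovn-nbctl's generic clear
command works on any column):

  ovn-nbctl list logical_switch_port      # find the port and its port_security
  ovn-nbctl clear logical_switch_port <port-name-or-uuid> port_security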

As far as ACLs currently in place are concerned, here they are for my
current environment.

[root@ovmgr1 ~]# ovn-nbctl list acl
_uuid   : 239f0fa4-a66e-4cce-8df2-05630f11e052
action  : drop
direction   : to-lport
external_ids: {description="drop all ingress ip traffic",
ovirt_port_group_id="79d3d3a0-7a57-4903-8646-f678ea53aeca"}
log : false
match   : "outport == @DropAll && ip"
meter   : []
name: ""
priority: 1000
severity: alert

_uuid   : 141aa336-0549-47d0-b09f-c2cb0dd78dd2
action  : allow-related
direction   : from-lport
external_ids: {description="automatically added allow all egress ip
traffic", ovirt_ethertype="IPv4",
ovirt_port_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616"}
log : false
match   : "inport == @Default && ip4"
meter   : []
name: ""
priority: 1001
severity: alert

_uuid   : ac7d5a16-a596-43dc-88ec-e9d47512e7ce
action  : drop
direction   : from-lport
external_ids: {description="drop all egress ip traffic",
ovirt_port_group_id="79d3d3a0-7a57-4903-8646-f678ea53aeca"}
log : false
match   : "inport == @DropAll && ip"
meter   : []
name: ""
priority: 1000
severity: alert

_uuid   : ef7f32f2-8b78-433f-a831-0e801c9d8b3e
action  : allow-related
direction   : to-lport
external_ids: {ovirt_ethertype="IPv4",
ovirt_port_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616",
ovirt_remote_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616"}
log : false
match   : "outport == @Default && ip4 && ip4.src ==
$pg_ip4_Default"
meter   : []
name: ""
priority: 1001
severity: alert

_uuid   : 70c7114b-1be6-49c1-9bbd-966c52751e79
action  : allow-related
direction   : from-lport
external_ids: {description="automatically added allow all egress ip
traffic", ovirt_ethertype="IPv6",
ovirt_port_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616"}
log : false
match   : "inport == @Default && ip6"
meter   : []
name: ""
priority: 1001
severity: alert

_uuid   : 264111cf-4f66-4b4c-b3c9-693bbca53a70
action  : allow-related
direction   : to-lport
external_ids: {ovirt_ethertype="IPv6",
ovirt_port_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616",
ovirt_remote_group_id="1fd8cacf-35cf-4aa3-b245-fec9c2e6e616"}
log : false
match   : "outport == @Default && ip6 && ip6.src ==
$pg_ip6_Default"
meter   : []
name: ""
priority: 1001
severity: alert
[root@ovmgr1 ~]#

 Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2VMHQBHEHHCBNIR5SNXG7KUXDYMNRQPN/


[ovirt-users] Re: NPE for GetValidHostsForVmsQuery

2019-04-05 Thread nicolas

Hi Andrej,

I failed to mention a fact that is probably decisive. Prior to noticing 
the error, we upgraded the Cluster & Data Center compatibility version 
from 4.1 to 4.3, which caused ovirt-engine to automatically edit all VMs 
and modify their compatibility versions as well (with changes pending 
until next reboot).


So if we powered down the VM, edited the VM, saved it (even without 
changes) and powered it up, migrations would work again.


This happened with all affected machines.

If you need some additional info, just ask.

Thanks.

El 2019-04-04 17:03, Andrej Krejcir escribió:

Hi,

The NPE is because the CPU load of a VM is missing. It happens when
the VM statistics are not updated.

This is definitely a bug, missing CPU load should not prevent
migration.
I will open a Bugzilla ticket. 

Can you share some more details about the VMs?
Does the NPE happen for all VMs or only some specific types?

Thanks,
Andrej

On Wed, 3 Apr 2019 at 13:45,  wrote:


Hi,

We're running oVirt 4.3.2. When we click on the "Migrate" button
over a
VM, an error popup shows up and in the ovirt-engine log we see:

   2019-04-03 12:37:40,897+01 ERROR
[org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default
task-6)
[478381f0-18e3-4c96-bcb5-aafd116d7b7a] Query
'GetValidHostsForVmsQuery'
failed: null

I'm attaching the full NPE.

Could someone point out what could be the reason for the NPE?

Thanks.___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/ [1]
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/ [2]
List Archives:


https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWPQR34DDQMTDPUI2EZFA3VSHA77BET/

[3]



Links:
--
[1] https://www.ovirt.org/site/privacy-policy/
[2] https://www.ovirt.org/community/about/community-guidelines/
[3]
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZWPQR34DDQMTDPUI2EZFA3VSHA77BET/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3QVBGUKKRYQ5OUWEXZ2DHD3SGLQN5UM/


[ovirt-users] Re: VDI broker and oVirt

2019-04-05 Thread Jorick Astrego
Hi,

I think you mean to ask about the connection broker to connect to your
VDI infrastructure?

Something like this:

Or
https://www.leostream.com/solution/remote-access-for-virtual-and-physical-workstations/

oVirt has the VM user portal https://github.com/oVirt/ovirt-web-ui , but
I have never used a third-party connection broker myself, so I'm not
aware of any compatible with oVirt or RHEV...



On 4/4/19 9:10 PM, oquerej...@gmail.com wrote:
> I have oVirt installed, with two hypervisors and oVirt Engine. I want to mount 
> a VDI infrastructure, as cheap as possible, but robust and reliable. The 
> question is which broker I can use. Thank you.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLH7Y5HCOMBYL7QBPOGOPRTQY247GNUD/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A    KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede    BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHSULIYRXUYFGDDFVQT7SAIKOBGVQASY/


[ovirt-users] Re: How to fix ovn apparent inconsistency?

2019-04-05 Thread Miguel Duarte de Mora Barroso
On Thu, Apr 4, 2019 at 2:04 PM Gianluca Cecchi
 wrote:
>
> On Thu, Apr 4, 2019 at 12:07 PM Miguel Duarte de Mora Barroso 
>  wrote:
>>
>>
>> > Questions:
>> > - what is the role of the "Network port security" option for an OVN 
>> > network?
>>
>> It means that newly created ports under that network will inherit the
>> port security value from the network - e.g. if the network's port
>> security attribute is active, so will the newly created port's port
>> security.
>>
>> Port security on a port means 2 things:
>>   #1 - security group rules *will* apply to the VM having that port attached
>>   #2 - only the specified mac address will be allowed to send/receive
>> through that port. MAC spoofing protection is applied.
>>
>> > - what is the meaning of "Undefined" option for it other than "Enabled" 
>> > and "Disabled"?
>>
>> It means that the network will inherit the value from the provider's
>> configuration - you can check what it translates to in
>> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
>
>
> Thanks for clarifications. Digging around RHV 4.2 vs 4.3beta docs I see now 
> that "Network Port Security" will be also one of the new features for it
> In 4.3 beta the third option is explictly defined as "Inherited" (reflecting 
> your explanation) and not "Undefined" as in current oVirt 4.3.2)
>
>>
>>
>> > - it seems I cannot edit the value for "Network port security" option of 
>> > an existing OVN network, is it correct?
>>
>> You cannot do it *through the UI*. You can use ansible / REST api to
>> update the network - or ports - port_security_enabled value.
>
>
>
>>
>>
>> I am working on creating a couple of playbooks for this; hopefully I
>> can provide those early next week. It would help to speed up this
>> process.
>>
>
> Indeed. In the OpenStack web management interface all the settings related to 
> security groups are simplified and intuitive, but here they are not...
> Also, it seems from the RHV 4.3 beta manual that creating security 
> groups themselves will not be possible through the web GUI...
>
>>
>> There is a notion of 'default' group, that ensures connectivity to all
>> VMs whose ports belong to that group - and all ports with active port
>> security, by default do.
>>
>> I'm not sure how you reached that situation, but let's first make sure
>> of a couple of things; please provider the output of:
>>   - ovn-nbctl list logical_switch_port # this will feature info of the
>> port security value, and of which groups the port belongs to - the
>> latter in the 'external_ids' column.
>>   - ovn-nbctl list port_group # this is where the security groups are
>> stored; it has associations to the ACLs belonging to the group, and of
>> the ports that are using it
>>   - ovn-nbctl list address_set # this is where the IPs per group are
>> stored. security groups are an L3 concept.
>>
>> A pastebin with the aforementioned info is welcome.
>
>
> See here:
> https://drive.google.com/file/d/1hgXMGttMgb0oaDEy5k6aWFdb01dYsjwq/view?usp=sharing

From the data you supply, everything looks as it should: both the
ports are members of the default port group, and both their IPs are
featured in the ip4 address set.

Mind sharing the created ACLs ? (which I'm quite positive will be the
default ones, but I just have to be sure). Can be done via "ovn-nbctl
list acl" . With that I can check the ACLs assigned to the default
group, and assure they are correct.




>
> Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLBMI2GVJPFJKCT52AQLIOGUOP3HLMGN/