[ovirt-users] Re: Upgrade 4.0 to 4.1: Global Maintenance Mode not recognized

2018-11-25 Thread gregor
The Engine is up but the host is stuck in NonResponsive.

The web interface is stuck at "handling non responsive host": "Validating"
is green and "Executing" is spinning.

Can anybody help?
I found no errors in the logs under /var/log/ovirt-hosted-engine-ha or in
journalctl.

regards
gregor


On 26.11.18 at 07:35, gregor wrote:
> I was able to change the maintenance state in the database to finish the
> upgrade with engine-setup.
> 
> These are the statements:
> 
> select vds_id,ha_global_maintenance from vds_statistics where vds_id =
> '706599d5-2dbc-400c-b9da-5b5906de6dbd';
> update vds_statistics set ha_global_maintenance = 't' where vds_id =
> '706599d5-2dbc-400c-b9da-5b5906de6dbd';
> 
> Now it's time to wait and see whether the host and engine can reboot and start the VMs.
> 
> regards
> gregor
> 
> On 26.11.18 at 06:07, gregor wrote:
>> Hello,
>>
>> I upgraded one host from 4.0 to 4.1 without problems. Now I have a
>> problem with another host, in another network, where the Global
>> Maintenance mode is not recognized by engine-setup.
>>
>> Is there a way to force the setup or set the maintenance mode inside the
>> database?
>>
>> One problem may be that the host was accidentally upgraded to 4.1
>> before I upgraded the engine.
>>
>> kind regards
>> gregor
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQOX7YHHW3GTYEOP5YMG6NF2OD4BHUC7/
>>


[ovirt-users] Re: Upgrade 4.0 to 4.1: Global Maintenance Mode not recognized

2018-11-25 Thread gregor
I was able to change the maintenance state in the database to finish the
upgrade with engine-setup.

These are the statements:

select vds_id,ha_global_maintenance from vds_statistics where vds_id =
'706599d5-2dbc-400c-b9da-5b5906de6dbd';
update vds_statistics set ha_global_maintenance = 't' where vds_id =
'706599d5-2dbc-400c-b9da-5b5906de6dbd';

Now it's time to wait and see whether the host and engine can reboot and start the VMs.

regards
gregor

On 26.11.18 at 06:07, gregor wrote:
> Hello,
> 
> I upgraded one host from 4.0 to 4.1 without problems. Now I have a
> problem with another host, in another network, where the Global
> Maintenance mode is not recognized by engine-setup.
> 
> Is there a way to force the setup or set the maintenance mode inside the
> database?
> 
> One problem may be that the host was accidentally upgraded to 4.1
> before I upgraded the engine.
> 
> kind regards
> gregor


[ovirt-users] Upgrade 4.0 to 4.1: Global Maintenance Mode not recognized

2018-11-25 Thread gregor
Hello,

I upgraded one host from 4.0 to 4.1 without problems. Now I have a
problem with another host, in another network, where the Global
Maintenance mode is not recognized by engine-setup.

Is there a way to force the setup or set the maintenance mode inside the
database?

One problem may be that the host was accidentally upgraded to 4.1
before I upgraded the engine.

kind regards
gregor
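For reference, the supported way to enter global maintenance is from a hosted-engine host itself, not the database. A sketch, assuming a standard self-hosted engine deployment (the exact status wording varies between versions):

```shell
# Put the hosted-engine cluster into global maintenance before running
# engine-setup (run on any hosted-engine host):
hosted-engine --set-maintenance --mode=global

# Verify the HA agents have picked up the flag before upgrading; look for
# the global maintenance marker in the output:
hosted-engine --vm-status

# After engine-setup finishes, leave global maintenance again:
hosted-engine --set-maintenance --mode=none
```

If the engine still does not see the flag, editing vds_statistics as above is a last resort; the flag is normally propagated through the HA agents' metadata on the shared storage.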


[ovirt-users] Re: VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding

2018-11-25 Thread Sahina Bose
On Thu, Nov 22, 2018 at 5:51 PM Marco Lorenzo Crociani wrote:
>
> Hi,
> I opened a bug on gluster because I have reading errors on files on a
> gluster volume:
> https://bugzilla.redhat.com/show_bug.cgi?id=1652548
>
> The files are many of the VM images of the oVirt DATA storage domain.
> oVirt pauses the VMs because of unknown storage errors.
> It's impossible to copy/clone or manage some snapshots of these VMs. The
> low-level errors are "stale file handle".
> The volume is distribute 2 replica 3 with sharding.
>
> Should I open a bug also on oVirt?

Thanks, this bug should do. I've requested info on the bug - could
you update it?

>
> Gluster 3.12.15-1.el7
> oVirt 4.2.6.4-1.el7
>
> Regards,
>
> --
> Marco Crociani


[ovirt-users] IPA Users via AD DC's

2018-11-25 Thread TomK

Hello,

I've configured LDAP via IPA in oVirt 4.x. It works for locally defined 
users in IPA but not for those mapped from the AD DC. So I have two questions:


1) Is there a format of the username and password I need to type in for 
this to work?  Or is retrieving AD DC mapped users not possible with 
oVirt right now?


2) Can I use two providers in oVirt simultaneously?  One IPA and the 
other AD?


--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.


[ovirt-users] NFS storage and importation

2018-11-25 Thread Alexis Grillon
(sorry for the double post earlier)
On 25/11/2018 at 18:19, Nir Soffer wrote:
> Hello,
> 
> Our cluster uses NFS storage (on each node as data, and on a NAS for
> export)
> 
> 
> Looks like you invented your own hyperconverge solution.
Well, when we looked at the network specs required for Gluster to work, we
initially thought this was the more reasonable solution. That may be true
for the VMs, but it was deeply wrong for the engine in any case.
Regarding the engine, we thought (once again) that the backup would
allow us to rebuild in case of a problem. Wrong again: the backup won't work
with a new installation, and even if it looks like it works for things
that haven't changed, it doesn't.

> 
> The NFS server on the node will always be accessible on the node, but if
> it is
> not accessible from other hosts, the entire DC may go down.
> 
> Also, you probably don't have any redundancy, so the failure of a single node
> causes downtime and data loss.
We have RAID and regular backups, but that's not good enough, of course.

> 
> Did you consider hyperconverge Gluster based setup?
> https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/
Yes, we will try to rebuild our cluster with that, at least for the engine.

>  
> 
> We currently have a huge problem on our cluster, and exporting VMs as OVA
> takes lots of time.
> 
> 
> Is the huge problem the slow export to OVA, or is there another problem?

At this time, almost all of our data is covered by backups or OVAs, but nearly
all VMs are down and can't be restarted (I explain why at the end of the
mail). Some OVA exports failed, probably (but not only) because of a lack of
free space on storage. So we have to rebuild a clean cluster without
destroying all the data we still have on our disks. (The import domain
function should do the trick, I hope.)

>  
> 
> I checked on the NFS storage, and it looks like there are files that might
> be OVAs in the images directory,
> 
> 
> I don't know the OVA export code, but I'm sure it does not save into the
> images directory. It probably creates a temporary volume for preparing and
> storing the exported OVA file.
> 
> Arik, how do you suggest we debug the slow export to OVA?
That might not be a bug; some of them are hundreds of GB, so it can be
"normal". Anyway, the files don't have the .ova extension (but the sizes
match the VMs).

>  
> 
> and OVF files in the master/vms
> 
> 
> OVF should be stored in OVF_STORE disks. Maybe you see files 
> created by an old version?
Well, there were files with the .ovf extension on NFS, but I might be wrong;
the paths look like this:
c3c17a66-52e8-42dc-9c09-3e667e4c7290/master/vms/0265bf6b-7bbb-44b1-87f6-7cb5552d12c2/0265bf6b-7bbb-44b1-87f6-7cb5552d12c2.ovf
but it may only be on export domains.

Shortly after my mail, and as you mention below, I heard about the
"import domain" function for storage domains, which makes me hope my mail
was moot; I'll try it in a few hours with real VMs inside.

>  
> 
> Can someone tell me if, when i install a new engine,
> there's a way to get this VMs back inside the new engine (like the
> import tools by example)
> 
> 
> What do you mean by "get these VMs back"?
> 
> If you mean importing all VMs and disks on storage into a new engine,
> yes, it should work. This is the basis for oVirt DR support.
Yes, thank you.
>  
> 
> ps: it should be said in the documentation to NEVER use a backup of an
> engine when it is on an NFS storage domain on a node. It looks like it's
> working, but all the data is out of sync with reality.
> 
> 
> Do you mean hosted engine stored on NFS storage domain served by
> one of the nodes?
Yes

> 
> Can you give more details on this problem?

ovirt version 4.2.7

I'll try to make it short, but it's a week's worth of stress and wrong
decisions.

We built our cluster with a few nodes, but our whole storage is on
the nodes (the reason we chose NFS), and we put our engine on one of
these nodes in an NFS share. We had regular backups. One day I saw that the
status of this node was deteriorated (in nodectl check), and it
recommended running lvs to check.

[ A small detail, if needed: the node has three disks merged in a
physical RAID 5. The node installation was standard oVirt
partitioning except for one thing: we shrank the / partition (without size
problems, it's more than 100 GB) to make a separate XFS partition to store
the VM data; this partition holds the shares for the engine, data (VMs) and ISO
(export is on a NAS). ]

When I checked with lvs, the data partition was 99.97% used (!) while
df said 55% (spoiler alert: df was right, but who cares).
A few days earlier it wasn't 99.97% but 99.99% (after a log collector run,
love the irony) and the whole node crashed, with the engine on it, of course.
I restarted the cluster on another node without too much trouble. Then
I looked at how to repair the node where the engine was stored.
It seemed there was no real solution to clean up the LVs (if that was what we
should have done), so I 

[ovirt-users] SPICE QXL Crashes Linux Guests

2018-11-25 Thread Alex McWhirter
I'm having an odd issue that I find hard to believe could be a bug rather 
than some kind of user error, but I'm at a loss for where else to look.


When booting a Linux ISO with QXL SPICE graphics, the boot hangs as soon 
as kernel modesetting kicks in. I tried with the latest Debian, Fedora, and 
CentOS. Sometimes it randomly works, but most often it does not. QXL / 
VGA over VNC works fine. However, if I wait a few minutes after starting the VM 
for the graphics to start, there are no issues and I can install as 
usual.


So after install, I reboot, and it hangs right after the graphics switch 
back to text mode with QXL SPICE, not with VNC. So I force power off, 
reboot, and wait a while for it to boot. If I did a text-only install, 
when I open a SPICE console it will hang after typing a few characters. 
If I did a graphical install, then as long as I waited long enough for X 
to start, it works perfectly fine.


I tried to capture some logs, but since the whole guest OS hangs it's 
rather hard to pull off. I did see an occasional error about the mouse 
driver, so that's really all I have to go on.


As for the SPICE client, I'm using virt-viewer on Windows 10 x64; I tried 
various versions of virt-viewer just to be sure, no change. I also have 
a large number of Windows guests with QXL SPICE. These all work with no 
issue. Having the guest agent installed in the Linux guest seems to make no 
difference.


There are no out-of-the-ordinary logs on the VDSM hosts, but I can 
provide anything you may need. It's not specific to any one host; I have 
10 VM hosts in the cluster, and they all do it. They are Westmere boxes if that 
makes a difference.


Any ideas on how I should approach this? VNC works well enough for text-only 
Linux guests, but not being able to reboot my GUI Linux guests 
without also closing my SPICE connection is a small pain.



As far as oVirt versions, I'm on the latest; this is a rather fresh 
install. I just set it up a few days ago, but I've been a long-time oVirt 
user. I am using a Squid SPICE proxy if that makes a difference.



[ovirt-users] Re: max number of snapshot per vm

2018-11-25 Thread Nir Soffer
On Fri, Nov 23, 2018 at 1:01 PM Nathanaël Blanchet  wrote:

> Hi,
>
> What are the best pratices about vm snapshots?
>

I think the general guideline is to keep only the snapshots you need, since every
snapshot has a potential performance cost.

qemu caches some image metadata in memory, so accessing data from an
image with 20 snapshots should be as efficient as an image with 2 snapshots, but
using more snapshots will consume more memory.

Kevin, do we have performance tests comparing VMs with different amount
of snapshots?

On the oVirt side, there is a little additional overhead for every snapshot. We
never measured this overhead, but I don't think it will be an issue in normal use.

With block storage, oVirt has a soft limit of 1300 volumes per storage domain,
so keeping more snapshots per VM means you can keep fewer VMs on the same
storage domain.
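To see what that limit means in practice, a small back-of-the-envelope sketch (the 1300-volume soft limit is from the paragraph above; the per-VM numbers are made-up examples):

```shell
# Each disk uses one volume on block storage, and each snapshot of a
# disk adds one more volume to the chain.
DISKS_PER_VM=2
SNAPSHOTS_PER_DISK=4
VOLUMES_PER_VM=$(( DISKS_PER_VM * (1 + SNAPSHOTS_PER_DISK) ))

# How many VMs of this shape fit under the 1300-volume soft limit:
echo $(( 1300 / VOLUMES_PER_VM ))   # 130
```

With no snapshots the same VM uses only 2 volumes, so 650 such VMs would fit; at 4 snapshots per disk the capacity drops to 130.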


> Is there a maximum number of snapshots per vm?
>

How many snapshots do you plan to keep?


>
> Has a high number of present snapshot an impact on the vm performance?
> ... on how long the snapshot is completed?
>

Taking a snapshot is fast - basically the time it takes to create a new
empty image, plus the time it takes to freeze the guest file systems before the
snapshot.

Before 4.2 this could be slow with block storage depending on the number of
snapshot in the VM:
https://bugzilla.redhat.com/1395941

Taking a snapshot with memory is not fast, depending on the amount of RAM in
the VM and how fast you can compress and write memory to storage.

Deleting a snapshot is not fast; the operation requires copying the data in the
snapshot to the previous snapshot. But if the snapshot is not too old, and the
VM did not write a lot of data to that snapshot, the operation is not very
long.

Reverting a VM to an older snapshot is pretty fast and requires no data
operations. It is basically the time to create a snapshot based on the snapshot
you want to revert to.

You can measure these operations on your system. Clone an existing VM and
try to add and remove snapshots with and without memory. You can also test
whether having a lot of snapshots causes noticeable performance issues with the
planned workload.

Nir


[ovirt-users] Re: NFS storage and importation

2018-11-25 Thread Nir Soffer
On Fri, Nov 23, 2018 at 1:12 PM AG  wrote:

> Hello,
>
> Our cluster uses NFS storage (on each node as data, and on a NAS for export)
>

Looks like you invented your own hyperconverge solution.

The NFS server on the node will always be accessible on the node, but if it is
not accessible from other hosts, the entire DC may go down.

Also, you probably don't have any redundancy, so the failure of a single node
causes downtime and data loss.

Did you consider hyperconverge Gluster based setup?
https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/


> We currently have a huge problem on our cluster, and exporting VMs as OVA
> takes lots of time.
>

Is the huge problem the slow export to OVA, or is there another problem?


I checked on the NFS storage, and it looks like there are files that might
be OVAs in the images directory,


I don't know the OVA export code, but I'm sure it does not save into the images
directory. It probably creates a temporary volume for preparing and storing the
exported OVA file.

Arik, how do you suggest we debug the slow export to OVA?


> and OVF files in the master/vms
>

OVF should be stored in OVF_STORE disks. Maybe you see files
created by an old version?


> Can someone tell me if, when i install a new engine,
> there's a way to get this VMs back inside the new engine (like the
> import tools by example)
>

What do you mean by "get these VMs back"?

If you mean importing all VMs and disks on storage into a new engine,
yes, it should work. This is the basis for oVirt DR support.


> ps: it should be said in the documentation to NEVER use a backup of an
> engine when it is on an NFS storage domain on a node. It looks like it's
> working, but all the data is out of sync with reality.
>

Do you mean hosted engine stored on NFS storage domain served by
one of the nodes?

Can you give more details on this problem?

Please also specify oVirt version you use.

Nir


[ovirt-users] Multi-level virtualization question?

2018-11-25 Thread mustafa . taha . mu95
Can we build an oVirt self-hosted engine inside an oVirt self-hosted engine?
Note that:
1) I have the hardware requirements for the second level of virtualization.
2) I enabled nested virtualization on the oVirt node.
But it doesn't work.
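For what it's worth, nested virtualization has to be enabled at every layer, not only in the oVirt node. A quick sanity check, assuming an Intel CPU (AMD uses kvm_amd and the svm flag):

```shell
# On the physical (L0) host: is nested virt enabled in the KVM module?
cat /sys/module/kvm_intel/parameters/nested     # 'Y' or '1' means enabled

# If not, enable it persistently (the module must be reloaded afterwards):
echo 'options kvm_intel nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf

# Inside the L1 VM (the virtual oVirt node): does the guest CPU expose VT-x?
grep -c vmx /proc/cpuinfo    # must be > 0, or the L2 engine VM cannot start
```

The L1 VM also needs a CPU type that passes vmx through; on an oVirt host running the L1 VM, the vdsm nested-virt hook (or host CPU pass-through) is the usual way to expose the flag.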