You can use 'hosted-engine' to access the VM over VNC.
The usual advice is to redeploy the engine and restore from backup.
You won't lose your VMs, and the restore will be fast.
Powering on the VMs manually is the tricky part. You can find each VM's configuration
file in the vdsm log on the host where t
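The VNC access mentioned above can be sketched roughly as follows. This is a hedged sketch, not a verified recipe: the guard line, the example host name, and port 5900 (the usual first VNC display) are assumptions; check your host's actual console port.

```shell
# Hedged sketch: set a temporary VNC console password for the engine VM,
# then connect with a VNC client from a workstation.
# Skips gracefully where the oVirt tooling is not installed.
command -v hosted-engine >/dev/null || { echo "run this on an oVirt hosted-engine host"; exit 0; }
hosted-engine --add-console-password
# then, from a workstation that can reach the host (host/port are assumptions):
# remote-viewer vnc://he-host.example.com:5900
```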
Oh yes, that is important to know ahead of time. Not so nice if they drop drivers.
Fortunately for now my Perc H710 (LSI MegaRAID SAS 2208) is still supported in
megaraid_sas linux module for RHEL 8.
Upgrading to oVirt 4.4 is really difficult. I had to take downtime for it to work
correctly.
After you deploy
On 5 July 2020 at 11:31:32 GMT+03:00, Erez Zarum wrote:
>We are using Dell SC (Storage) with iSCSI with oVirt, it is impossible
>to create a new Target Portal with a specific LUN so it's impossible to
>isolate the SE LUN from other LUNs that are in use by other Storage
>Domains.
>According to th
On 3 July 2020 at 11:30:58 GMT+03:00, Andrei Verovski wrote:
>Hi !
>
>I have 2-node oVirt 4.3 installation, with engine running as KVM guest
>on SuSE file server (not hosted engine).
>Nodes are manually installed on CentOS 7.x (further referred to as old
>nodes #1 and #2).
>
>I’m going to add 1 add
Yes, ovirt-ha-broker and ovirt-ha-agent take care of keeping the HostedEngine up
and running and, in case something goes bad, of migrating it away.
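A quick way to see those two services and the engine VM state is sketched below. Hedged: the `--no-pager` flag and the guard line are additions of mine; the service names come from the message above.

```shell
# Hedged sketch: inspect the HA services and engine VM state on a
# hosted-engine host. Skips gracefully where the tooling is not installed.
command -v hosted-engine >/dev/null || { echo "run this on an oVirt hosted-engine host"; exit 0; }
systemctl status ovirt-ha-agent ovirt-ha-broker --no-pager
hosted-engine --vm-status   # shows engine health and which host runs it
```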
Best Regards,
Strahil Nikolov
On 3 July 2020 at 7:53:14 GMT+03:00, Anton Louw via Users wrote:
>Hi Everybody,
>
>Thanks for all the responses. So I man
Usually the vdsm-network-init and vdsm-network services take care of the
network configuration.
The easiest way to resolve is to set the host in maintenance and then use
'reinstall' from the UI.
Best Regards,
Strahil Nikolov
On 2 July 2020 at 14:50:14 GMT+03:00, Erez Zarum wrote:
>Fresh insta
We use LDAP authentication to log in to the oVirt cluster. The admin account and
another user account can access the API with no problem, but my account cannot
access the API, even though it has SuperUser privileges like the accounts that
can already access the API.
Every time I try to access the API I get the following
I changed the BIOS Type setting in the cluster settings section to UEFI BIOS,
and the hosted engine does not start after rebooting. Before I made the
change, I looked in the engine at the /boot partition, which has the /efi
directory. Is there any way to change the engine settings manu
On Tue, Jul 7, 2020 at 5:05 PM Łukasz Kołaciński wrote:
> Dear ovirt community,
>
Hi Łukasz,
Adding de...@ovit.org since this topic is more appropriate for the devel
list.
> I am trying to use ovirt imageio api to receive changed blocks (dirty
> bitmap) on ovirt 4.4. Could anyone tell me how
Emy,
I was wondering how much if any improvement I'd see with Gluster storage
moving to oVirt 4.4/CentOS 8x (but have not made the switch yet myself).
You should keep in mind that your Perc controllers aren't supported by
CentOS 8 out of the box; upstream dropped support for many older controllers.
Yo
I had an issue like this where I was using a CentOS 7 (targetcli iSCSI)
server which accidentally had LVM enabled upon reboot;
LVM grabbed the RAID device and stopped targetcli from exporting the RAID
disk over iSCSI.
It only showed up after a power outage, and it took me a while to figure out
what ha
Dear Wodel Youchi,
Sorry for the delayed response. Answers below.
On Tue, Jul 7, 2020 at 4:17 PM wodel youchi wrote:
> Hi,
>
> Could someone give me some help.
>
> Regards.
>
> On Wed, 1 Jul 2020 at 15:45, wodel youchi wrote:
>
>> Hi,
>>
>> I need to create some Windows Templates for my
Dear ovirt community,
I am trying to use ovirt imageio api to receive changed blocks (dirty bitmap)
on oVirt 4.4. Could anyone tell me how to get them step by step? In the
documentation I saw the endpoint "GET /images/ticket-uuid/map". I don't know what
ticket-uuid is or how to generate it. I also
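For what it's worth, the ticket UUID is normally the last path segment of the transfer_url returned when an image transfer is created through the engine API/SDK. A hedged sketch of the request (the host name, the example ticket value, and port 54322 — the usual ovirt-imageio port — are assumptions, and the /map path is taken from the question above, not verified against a live setup):

```shell
# Build the map URL for an image-transfer ticket and show how it would be
# fetched. Only the echo runs here; the curl needs a live transfer.
TICKET="123e4567-e89b-12d3-a456-426614174000"   # hypothetical ticket UUID
BASE="https://ovirt-host.example.com:54322"     # assumed imageio endpoint
echo "${BASE}/images/${TICKET}/map"
# with a live transfer you would fetch it, e.g.:
# curl -k "${BASE}/images/${TICKET}/map"
```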
I am running a hosted engine using iSCSI. I had to reboot the array, so I took
down all the VMs and put the VM data storage domain into maintenance mode. I also
put the hosted engine into global maintenance ("hosted-engine --set-maintenance
--mode=global") and shut down the engine ("hosted-engine --vm-shutdown"). I also
stopped the
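The shutdown order described above can be recapped as the following sketch. The two maintenance/shutdown commands are quoted from the message; the status check and the guard line are additions of mine.

```shell
# Hedged recap of the shutdown order, as run on a hosted-engine host.
# Skips gracefully where the oVirt tooling is not installed.
command -v hosted-engine >/dev/null || { echo "run this on an oVirt hosted-engine host"; exit 0; }
hosted-engine --set-maintenance --mode=global   # stop HA agents restarting the engine
hosted-engine --vm-shutdown                     # cleanly shut down the engine VM
hosted-engine --vm-status                       # confirm state before storage work
```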
Hi !
Something went wrong with this list manager; since July 2 I have stopped
receiving e-mails. I can't log in with my old password, nor can I reset it.
Could the list admins check this?
Thanks in advance.
Andrei
___
Users mailing list -- user
I forgot to mention: after every failed deploy on a new host using oVirt 4.4,
you can run
/usr/share/ovirt-hosted-engine/scripts/ovirt-hosted-engine-cleanup to free up
disk space and remove the configs that setup did not finish.
Thanks,
Emy
Hi Erez,
Would you like to use a backup tool to back up your self-hosted engine from
iSCSI, then restore it to Gluster or NFS storage? That way you can
migrate as you want.
If you are looking for a backup tool, maybe you can try Vinchin Backup &
Recovery, which provides the solut
Hi Lukasz,
Thank you for testing the new Incremental backup feature.
For those kinds of questions, it is better to send the mail to +users
list so other people can learn/answer the question.
Please see my comments inline.
On Fri, 3 Jul 2020 at 15:55, Łukasz Kołaciński wrote:
> Dear Eyal Shen
Hi Miguel,
Firstly, you can back up your target VM with a backup tool, then export the
VM as you want. This may be a solution you can try.
If you are interested in a backup tool, I suggest Vinchin Backup &
Recovery.
Hope you solve your problem soon.
Thanks
Fresh installation of oVirt 4.4.0 (using the ISO): after restarting a host,
the host comes up, but the slave interfaces of a bond remain in the down state.
The only way to bring them up is to log in via IPMI and restart the
"NetworkManager" service.
Weird thing: after this, the host says that its netw
I found the problem.
The 3.x kernel in CentOS 7.8 is really too old and does not know how to
handle new SSD disks or RAID controllers with the latest BIOS
updates applied.
Booting an Arch Linux latest ISO image with kernel 5.7.6, or a CentOS 8.2 with
kernel 4.18, increase
On Thu, 2 Jul 2020 at 07:05, Anton Louw via Users <users@ovirt.org> wrote:
>
>
> Hi All,
>
>
>
> I had a space alert on one of my nodes this morning, and when I looked
> around, I saw that /var/log/ovirt-hosted-engine-ha/broker.log was sitting
> at around 30GB. Does anybody know i
Hello,
I experienced the same error.
There can be two problems:
1. Sometimes the oVirt 4.4 repository does not respond, either because it is
down or because the internet connection to it does not work correctly
(firewall, routing, etc.).
My solution for this was to clear the rpm package cache using: "dnf cl
I am using the command line (hosted-engine --deploy) for the install, no Cockpit.
I had problems with rpm metadata and the deploy failed, but yes, in your case
it might be IPv6 problems.
I am using IPv4 and have never tried IPv6 on oVirt.
We have a VM with many virtual drives and need to back it up as an OVA file.
Since this demands a lot of space, I mounted an NFS directory on the host, but I
get the following message when trying to export the OVA:
Error while executing action: Cannot export VM. Invalid target folder:
/mnt/shared2 on Host. You may re
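A common cause of "Invalid target folder" on NFS mounts is ownership: oVirt's vdsm process needs the target writable by vdsm:kvm (UID/GID 36). The sketch below is a hedged guess at a fix, not a confirmed diagnosis for this case; the mount point is taken from the error, and the exports line is an illustrative assumption.

```shell
# Hedged sketch: make the OVA target writable by vdsm:kvm (UID/GID 36).
# Guards skip gracefully when not run as root on the affected host.
[ "$(id -u)" -eq 0 ] || { echo "re-run as root on the oVirt host"; exit 0; }
[ -d /mnt/shared2 ] || { echo "mount point /mnt/shared2 not present here"; exit 0; }
chown 36:36 /mnt/shared2
chmod 0755 /mnt/shared2
# On the NFS server, export so root squash does not break that ownership, e.g.:
# /export/shared2 *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
```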
We are using Dell SC (Storage) with iSCSI with oVirt. It is impossible to
create a new Target Portal with a specific LUN, so it's impossible to isolate
the SE LUN from other LUNs that are in use by other Storage Domains.
According to the documentations this is not a best practice, while searching
Hi,
Could someone give me some help.
Regards.
On Wed, 1 Jul 2020 at 15:45, wodel youchi wrote:
> Hi,
>
> I need to create some Windows Templates for my client, and I am not quite
> familiar with the Windows sysprep installations.
> I read the "Sealing a Windows Virtual Machine for Deployment
Yes, I also had a lot of problems installing oVirt 4.4. I think it was not
tested enough.
I am upgrading from oVirt 4.3 to 4.4 using shared GlusterFS storage, which
makes things more difficult.
Regarding your error, I believe it is something with the oVirt 4.4 rpm
repository (sometimes it times out
@Martin, if needed I can raise an RFE for this. Just point me at where to do it,
and I will.
Thank you.
On 1 Jul 2020, at 03:33, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hi Martin,
On 1 Jul 2020, at 03:26, Martin Perina <mper...@redhat.com> wrote:
On Wed, Jul 1, 2020
Hi Everybody,
Thanks for all the responses. So I managed to truncate the log. I just want to
make sure: the ‘ovirt-ha-agent’ service is only for the hosted engine, correct?
I can still migrate machines between the hosts, so it should not impact
the VMs?
Thanks
Anton Louw
Cloud Engineer:
Hi !
I have 2-node oVirt 4.3 installation, with engine running as KVM guest on SuSE
file server (not hosted engine).
Nodes are manually installed on CentOS 7.x (further referred to as old nodes #1
and #2).
I’m going to add 1 additional node, and migrate system to CentOS 8.2 / oVirt
4.4.
Is this c
They’re just log files, so generally safe to delete. You may want to take a
look at the huge one though, see what’s up. I had a similar problem that turned
out to be a broken HA agent install, cleaned and reinstalled and it went back
to the same volume of logs as the others.
Now I need to check
Hi!
Thanks for your comments! I am unsure regarding your hunch that the oVirt Repo
is to blame, as from inside the engine after the botched deployment, absolutely
nothing outside of the engine and its host is reachable via IPv4/v6, and a
manual "dnf update" also produces a failure that the AppSt
I would recommend trying logrotate on it.
I had a similar issue with a corrupted logrotate state file, which led to the
vdsm log growing to 20 GB.
You can also use 'truncate -s 0 your.log' to wipe a log without removing it.
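The truncate approach can be tried safely on a scratch file first; the point is that the file (and any open handles on it) stays in place while its contents are dropped:

```shell
# Demonstrate wiping a log in place on a scratch file (not a real log).
log=$(mktemp)
echo "old log line" > "$log"
truncate -s 0 "$log"          # wipe contents; the file itself remains
wc -c < "$log"                # prints 0
rm -f "$log"
```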
Best Regards,
Strahil Nikolov
On 2 July 2020 at 8:00:27 GMT+03:00, Anton Louw via Users
Hi,
I think we ran into this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1802277 and
https://access.redhat.com/solutions/5174961 as we have exactly those problems
and log entries.
As there are now several VMs that cannot create (or delete) snapshots,
accordingly we cannot do backups on tha
Hi!
A short addendum:
I have now also tried to perform the installation using the oVirt Node
distribution as a basis, but that also ended with the same problem. So
it does not seem to be an issue with the underlying CentOS installation,
but rather with my general setup or parameters.
Regards