Hi, All. We have an oVirt 4.1 cluster set up using multiple paths to a single
iSCSI LUN for the data storage domain. I would now like to migrate to a hosted
engine.
I set up the new engine VM, shut down and backed up the old VM, and restored to
the new VM using engine-backup. After updating DNS
We have an oVirt 4.1 cluster with an iSCSI data domain. If I shut down the
entire cluster and just boot the hosts, none of the hosts log in to their iSCSI
sessions until the engine comes up. Without logging into the sessions, sanlock
doesn't obtain any leases and obviously none of the VMs start
t
sure if that would cause problems or if that was all that's needed to allow the
hosted engine to boot automatically on an iSCSI data domain.
Thanks again,
Devin
> 2017-03-10 15:22 GMT-03:00 Devin A. Bougie :
> We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down the
On Mar 11, 2017, at 10:59 AM, Chris Adams wrote:
> Hosted engine runs fine on iSCSI since oVirt 3.5. It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.
Thanks so much, Chris. It sounds like that is exactly what I was missing.
It wo
Hi, All. Is it still possible or supported to run a hosted engine without
using the oVirt Engine Appliance? In other words, to install our own OS on a
VM and have it act as a hosted engine? "hosted-engine --deploy" now seems to
insist on using the oVirt Engine Appliance, but if it's possible
Is it possible to set up a hosted engine using the OVS switch type instead of
Legacy? If it's not possible to start out as OVS, instructions for switching
from Legacy to OVS after the fact would be greatly appreciated.
Many thanks,
Devin
___
Users mail
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and
are able to import the domain. However, we cannot attach the domain to the
data center.
Just to make sure I'm not missing something basic, does the engine VM need to
be able to connect to the iSCSI target itself?
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi wrote:
> The engine should import it by itself once you add your first storage domain
> for regular VMs.
> No manual import actions are required.
It didn't seem to for us. I don't see it in the Storage tab (maybe I
shouldn't?). I can install a n
Hi Simone,
On Mar 21, 2017, at 4:06 PM, Simone Tiraboschi wrote:
> Did you already add your first storage domain for regular VMs?
> If also that one is on iSCSI, it should be connected through a different iSCSI
> portal.
Sure enough, once we added the data storage domain, the hosted-storage imported and
ine VM and stop vdsmd on the host.
- Manually change the switch type to ovs in
/var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
- Restart the host
After that, everything seems to be working and new hosts are correctly set up
with the OVS switch type.
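The steps above can be sketched roughly as follows. Note this is only a sketch: the JSON shape of the persisted net config and the "switch" key are my assumptions, and the edit is demonstrated on a temp copy so it can run anywhere. On a real host you would stop vdsmd first, edit /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt in place, then restart the host.

```shell
# Flip the persisted switch type from legacy to ovs (hypothetical file shape).
# Demonstrated on a temp copy of the assumed netconf JSON.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{"nic": "eno1", "bridged": true, "switch": "legacy"}
EOF

# The actual change: rewrite the switch type in place.
sed -i 's/"switch": "legacy"/"switch": "ovs"/' "$conf"
cat "$conf"
```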
Thanks,
Devin
> On Mar 16, 2017, at 4:0
Hi, All. Are there any recommendations or best practices WRT whether or not to
host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine
Appliance)? We have a hosted-engine 4.1.1 cluster up and running, and now just
have to decide where to serve the NFS ISO domain from.
Many
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David wrote:
> On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
> wrote:
>> Hi, All. Are there any recommendations or best practices WRT whether or not
>> to host an NFS ISO domain from the hosted-engine VM (running the oVirt
&
Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type.
Everything seems to be working great, except for live migration.
I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
Which results from vdsm as
We have a new 4.1.1 cluster set up. Migration of VMs that have a console /
graphics setup is failing. Migration of VMs that run headless succeeds.
The red flag in vdsm.log on the source is:
libvirtError: unsupported configuration: graphics 'listen' attribute
'192.168.55.82' must match 'addres
Just in case anyone else runs into this, you need to set
"migration_ovs_hook_enabled=True" in vdsm.conf. It seems the vdsm.conf created
by "hosted-engine --deploy" did not list all of the options, so I overlooked
this one.
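Spelled out as a config fragment, this would look something like the following; the [vars] section name is my guess at where the option belongs, so treat it as a sketch rather than the definitive syntax:

```ini
# /etc/vdsm/vdsm.conf — section name assumed, only the option itself is from the thread
[vars]
migration_ovs_hook_enabled = true
```

followed by restarting vdsmd on the host for the change to take effect.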
Thanks for all the help,
Devin
On Mar 27, 2017,
We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI
hosted_storage and VM data domain (same target, different LUNs). Everything
works fine, and I can configure iscsid and multipathd outside of the oVirt
engine to ensure redundancy with our iSCSI device. However, if I try
aneously lose both a
controller and a switch without impacting availability.
Thanks again!
Devin
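As a sketch of the kind of out-of-band multipathd tuning described above (all option values here are illustrative assumptions, not the poster's actual config): VDSM normally overwrites /etc/multipath.conf, so a private marker near the top of the file is needed for local edits to survive.

```ini
# /etc/multipath.conf (illustrative values only)
# VDSM PRIVATE
defaults {
    user_friendly_names no
    no_path_retry       4
}
```

The "# VDSM PRIVATE" comment is what tells VDSM to leave the file alone; without it, local changes can be clobbered on the next host configuration run.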
> On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi wrote:
>
>
>
> On 02 Apr 2017 05:20, "Devin A. Bougie" wrote:
> We have a new 4.1.1 cluster up and running with OVS swi
Where do I set the iSCSI iface to use when connecting to both the
hosted_storage and VM Data Domain? I believe this is related to the difficulty
I've had configuring iSCSI bonds within the oVirt engine as opposed to directly
in the underlying OS.
I've set "iscsi_default_ifaces = ovirtsan" in v
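As a config fragment, that setting would look something like the following; the [irs] section name is my assumption about where storage-related options live in vdsm.conf, and "ovirtsan" is the poster's iface name:

```ini
# /etc/vdsm/vdsm.conf — section name assumed
[irs]
iscsi_default_ifaces = ovirtsan
```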
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error
while executing action New SAN Storage Domain: Cannot zero out volume" error.
iscsid does login to the node, and the volumes appear to have been created.
However, I cannot use it to create or import a Data / iSCSI sto
Hi Maor,
On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote:
> Is your host working with SELinux enabled?
No, selinux is disabled. Sorry, I should have mentioned that initially.
Any other suggestions would be greatly appreciated.
Many thanks!
Devin
> - Original Message -
>>
>> Eve
Hi Maor,
On Oct 25, 2015, at 12:03 PM, Maor Lipchuk wrote:
> few questions:
> Which RHEL version is installed on your Host?
7.1
> Can you please share the output of "ls -l
> /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/"
[root@lnx84 ~]# ls -l /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/
total 0
lr
Hi Maor,
On Oct 26, 2015, at 1:50 AM, Maor Lipchuk wrote:
> Looks like zeroing out the metadata volume with a dd operation was working.
> Can you try to remove the Storage Domain and add it back again now
The Storage Domain disappears from the GUI and isn't seen by ovirt-shell, so
I'm not sure ho
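The dd-based zeroing mentioned above can be sketched like this. Everything here is illustrative: the real target would be a metadata LV under /dev/<vg-uuid>/ with the storage domain out of use, whereas this demo runs against a temp file so it is safe to execute anywhere.

```shell
# Zero out the first 1 MiB of a volume with dd (sizes and paths illustrative).
# On a real host the target would be something like /dev/<vg-uuid>/metadata.
vol=$(mktemp)
head -c 1048576 /dev/urandom > "$vol"   # stand-in for a dirty metadata volume

# conv=notrunc overwrites in place without truncating the target.
dd if=/dev/zero of="$vol" bs=1M count=1 conv=notrunc status=none
```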
out volume"
error.
If I try to import, I can log into the target but it doesn't show any "Storage
Name / Storage ID (VG Name)" to import.
Thanks again,
Devin
>
> Regards,
> Maor
>
>
>
> - Original Message -
>> From: "Devin A. Bo
Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run
libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock overhead,
but it looks like vdsmd / ovirt requires sanlock.
Thanks,
Devin
Hi Nir,
On Nov 6, 2015, at 5:02 AM, Nir Soffer wrote:
> On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie
> wrote:
>> Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run
>> libvirtd with virtlockd (lock_manager = "lockd") to avoid the s
On Nov 7, 2015, at 2:10 AM, Nir Soffer wrote:
>> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).
>
> There is no such dependency.
> Sanlock is using either an lv on block device (iscsi, fcp)
Thanks, Nir! I was thinking sanlock required a disk_lease_dir, which all the
Hello,
After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to switch from
QXL and SPICE to VGA and VNC, as EL9 dropped support for SPICE. However, we are
now unable to view the console using Windows, Linux, or macOS.
For example, after downloading and opening console.vv file with Re
ilo Morais wrote:
>
> Devin, good morning! All good? I hope so.
>
> Have you tried accessing VNC through noVNC?
>
> On Fri, Apr 7, 2023 at 16:22, Devin A. Bougie
> wrote:
> Hello,
>
> After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to sw
ks to Murilo for pointing us in the right direction.
Sincerely,
Devin
> On Apr 10, 2023, at 11:25 AM, Devin A. Bougie
> wrote:
>
> Hi Murilo,
>
> Yes, when trying with noVNC we get:
> Something went wrong, connection is closed
>
> This is getting pretty urgent for u
We've been struggling to install Windows Server 2022 on oVirt. We recently
upgraded to the latest oVirt 4.5 on EL9 hosts, but it didn't help.
In the past, we could boot a VM from the install CD, add the mass storage
drivers from the virt-io CD, and proceed from there. However, oVirt 4.3 didn'
This ended up being a problem with an old ISO from Microsoft. The ISO labeled
"Server2022-December2021" doesn't work, but a newly downloaded one, labeled
"Server2022-October2022", does work.
Devin
> On Apr 20, 2023, at 10:19 AM, Devin A. Bougie
> wrote:
>
Hi, All. We are attempting to migrate to a new storage domain for our oVirt
4.5.4 self-hosted engine setup, and are failing with "cannot import name
'Callable' from 'collections'"
Please see below for the errors on the console.
Many thanks,
Devin
--
hosted-engine --deploy --restore-from-f
> rpm -qa | grep python
> rpm -qa | grep ovirt
>
> On both hosts (old and new)
>
> On Wed, Oct 11, 2023 at 12:08, Devin A. Bougie
> wrote:
> Hi, All. We are attempting to migrate to a new storage domain for our oVirt
> 4.5.4 self-hosted engine setup,
h-2.17-1.el9.noarch
> ovirt-openvswitch-2.17-1.el9.noarch
> ovirt-imageio-daemon-2.5.0-1.el9.x86_64
> ovirt-openvswitch-ovn-common-2.17-1.el9.noarch
> python3-passlib-1.7.4-3.2.el9.noarch
> python3-libnmstate-2.2.15-2.el9_2.x86_64
> python3.11-libs-3.11.2-2.el9_2.2.x86_64
> p
-Dcom.redhat.fips=false
DisableFenceAtStartupInSec: 300 version: general" | cut -d' ' -f 2
up
300
--
Any additional questions or suggestions would be greatly appreciated.
Thanks again,
Devin
On Oct 14, 2023, at 12:30 PM, Gianluca Cecchi wrote:
On Sat, Oct 14, 2023 at
On Oct 15, 2023, at 5:59 AM, Gianluca Cecchi
> wrote:
>
> On Sat, Oct 14, 2023 at 7:05 PM Devin A. Bougie
> wrote:
> [snip]
> Any additional questions or suggestions would be greatly appreciated.
>
> Thanks again,
> Devin
>
>
> There is another FATAL line
ort
number are correct and that the Database service is up and running.
Connection to 192.168.222.25 closed.
Any new suggestions or more tests I can run would be greatly appreciated.
Thanks,
Devin
> On Oct 15, 2023, at 9:10 AM, Devin A. Bougie wrote:
>
> Hi Gianluca,
>
> Th
s=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
          |          |      |             |             | postgres=CTc/postgres
(3 rows)
postgres=#
--
> On Oct 25, 2023, at 12:40 PM, Gianluca Cecchi
> wrote:
>
>
> On Wed,
After a failed attempt at migrating our HostedEngine to a new iSCSI storage
domain, we're unable to restart the original HostedEngine.
Please see below for some details, and let me know what more information I can
provide. "Lnxvirt07" was the Host used to attempt the migration. Any help
wou
eferences the
old hosted engine VM and storage domain.
Thanks,
Devin
> On Oct 25, 2023, at 3:55 PM, Devin A. Bougie wrote:
>
> Thanks again, Gianluca.
>
> I'm currently ssh'd into the new local engine VM, and Postgres is running.
> However, an engine DB doesn't exi
ckup.bck
--provision-all-databases --scope=grafanadb
- delete the lock file, and proceed as usual
Without manual intervention, the Postgres db on the new engine VM was never
initialized or set up.
Thanks again for everyone's attention and advice.
Sincerely,
Devin
> On Nov 6, 2023, at 12:00 PM, Devi
I never resolved this, but was eventually able to restore our HostedEngine to a
new NFS storage domain.
Thanks,
Devin
> On Nov 1, 2023, at 12:31 PM, Devin A. Bougie wrote:
>
> After a failed attempt at migrating our HostedEngine to a new iSCSI storage
> domain, we're unab
Hi, All. We're having trouble updating our 4.5.4 cluster to 4.5.5. We're
running a self-hosted engine on fully updated AlmaLinux 9 hosts, and get the
following errors when trying to upgrade to 4.5.5.
Any suggestions would be greatly appreciated.
Many thanks,
Devin
--
[root@lnxvirt01 ~]
Hi, All. When upgrading an EL9 host from 4.5.4 to 4.5.5, I've found I need to
exclude the following packages to avoid the errors shown below:
*openvswitch*,*ovn*,centos-release-nfv-common
Is that to be expected, or am I missing a required repo or other upgrade step?
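One way to express those excludes persistently is a dnf.conf fragment (a sketch; whether to persist them or pass them per-run with `dnf upgrade -x 'openvswitch*' -x '*ovn*' -x centos-release-nfv-common` is a local choice):

```ini
# /etc/dnf/dnf.conf — persist the excludes listed above
[main]
excludepkgs=openvswitch*,*ovn*,centos-release-nfv-common
```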
I just wanted to clarify, a
Hi Zuhaimi,
I was able to update the selinux-policy packages using the link Sandro sent
(https://kojihub.stream.centos.org/koji/buildinfo?buildID=2). In our case,
we created a new local yum repository for our oVirt cluster that contains the
required packages specific to oVirt, but you shou
Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
We are running a 7-node oVirt 4.5.5-1.el8 self-hosted engine cluster, with all
of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node
started flapping between “Up” and “NonOperational,” with VMs in tur
and iface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm:
> Error while adding record: invalid parameter\n') (storageServer:580)
>
> Seems like some issue with iscsiadm calls.
> Might want to debug which calls i
ond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm:
> Error while adding record: invalid parameter\n') (storageServer:580)
> Can you try to run those commands manually on the host?
> And see what it gives :)
> On 7/06/2024
Hello,
We are attempting to move a hosted engine from an NFS to an iSCSI storage
domain, following the normal backup / restore procedure.
With a little intervention in the new hosted engine VM, everything seems to be
working with the new engine and the process gets to the point of trying to
cr
up to complete?
Thanks,
Devin
> On Sep 11, 2024, at 9:57 AM, Devin A. Bougie wrote:
>
> Hello,
>
> We are attempting to move a hosted engine from an NFS to an iSCSI storage
> domain, following the normal backup / restore procedure.
>
> With a little intervention in the new hoste
Because CentOS Stream 8 is EOL, you need to update the URLs for the repos in
/etc/yum.repos.d. For example and to start, change
/etc/yum.repos.d/CentOS-Ceph-Pacific.repo to use:
baseurl=http://vault.centos.org/$contentdir/$stream/storage/$basearch/ceph-pacific/
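As a fuller sketch of what the edited repo file might look like (only the baseurl line comes from the thread; the section name, repo name, and other keys are assumptions for illustration):

```ini
# /etc/yum.repos.d/CentOS-Ceph-Pacific.repo (illustrative layout)
[centos-ceph-pacific]
name=CentOS-$stream - Ceph Pacific
baseurl=http://vault.centos.org/$contentdir/$stream/storage/$basearch/ceph-pacific/
gpgcheck=0
enabled=1
```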
We used "--ansible-extra-vars=he_