cores you have, unless you enable "Count Threads As Cores" in the cluster
> configuration, and even then, the number of vCPUs is limited to the number of
> SMT threads you have.
>
>
> - Gilboa
>
>
> On Sun, Jun 4, 2023 at 6:17 PM David White via Users
I have a fully patched / up-to-date engine:
Software Version:4.5.4-1.el8
And a fully patched, up-to-date host.
[root@cha3-storage dwhite]# yum info ovirt-host
Last metadata expiration check: 1:33:40 ago on Sun 04 Jun 2023 09:28:39 AM EDT.
Installed Packages
Name : ovirt-host
Version
The whole point of oVirt is to provide a virtualization environment.
What are you trying to do, and what steps did you take? Please send us more
details on what you were expecting and what you tried to do.
Sent with Proton Mail secure email.
--- Original Message ---
On Sunday,
I just discovered that my default management network has the "VM Network"
checkbox toggled (on the page where you navigate to Network -> Networks and
select the management network).
There are no virtual machines associated with the management network.
Is it safe to simply edit the management
Take a look at https://github.com/silverorange/ovirt_ansible_backup. It works
like a charm for me, and produces an OVA.
I haven't spent the time (yet) to set this up to run via cron, but that's my
plan, probably in the near future.
Sent with Proton Mail secure email.
--- Original Message
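For what it's worth, a minimal sketch of the cron setup mentioned above (the
playbook path and log file are hypothetical; adjust to wherever you cloned the
repo):

# /etc/cron.d/ovirt-backup -- nightly at 02:00; paths are examples only
0 2 * * * root ansible-playbook /opt/ovirt_ansible_backup/backup.yml >> /var/log/ovirt-backup.log 2>&1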
Hi. I'm working on adding instructions to the documentation for how to migrate
from a self-hosted engine to a stand-alone manager. This will be the first time
I've contributed anything to this project (or any larger open source project
for that matter), so before I get too far in the weeds, I
You're probably looking for the instructions to manually install onto a "Stand
alone" environment. Yes, it can be done - it just needs to be installed on RHEL
8.6 derivatives.
See
shut them down before bringing them back online.
Hopefully this helps someone else!
-David
Sent with Proton Mail secure email.
--- Original Message ---
On Monday, September 19th, 2022 at 3:44 PM, David White via Users
wrote:
> Restarting the `vdsmd` service on 1 of the problema
, 2022 at 11:37 AM, David White via Users
wrote:
> I tried rebooting the engine to see if that would magically solve the problem
> (worth a try, right?). But as I expected, it didn't help.
>
> Now one of the hosts is in a "Non Responsive" state and the other is
> per
, but I need
to keep downtime at a minimum.
Sent with Proton Mail secure email.
--- Original Message ---
On Monday, September 19th, 2022 at 8:41 AM, David White via Users
wrote:
> Ok, now that I'm able to (re)deploy ovirt to new hosts, I now need to migrate
> VMs that are
Ok, now that I'm able to (re)deploy ovirt to new hosts, I now need to migrate
VMs that are running on hosts that are currently in an "unassigned" state in
the cluster.
This is the result of having moved the oVirt engine OUT of a hyperconverged
environment onto its own stand-alone system, while
> > Sent with Proton Mail secure email.
> >
> > --- Original Message ---
> > On Monday, September 19th, 2022 at 2:44 AM, Yedidyah Bar David
> > d...@redhat.com wrote:
> >
> > > Hi,
> >
> > > please see my reply to "[ovirt-users]
> please see my reply to "[ovirt-users] Error during deployment of
> ovirt-engine".
>
> Best regards,
>
> On Mon, Sep 19, 2022 at 5:02 AM David White via Users users@ovirt.org wrote:
>
> > I currently have a self-hosted engine that was restored from a ba
I currently have a self-hosted engine that was restored from a backup of an
engine that was originally in a hyperconverged state. (See
https://lists.ovirt.org/archives/list/users@ovirt.org/message/APQ3XBUM34TG76XGRBV6GIW62RP6MZOD/).
This was also an upgrade from ovirt 4.4 to ovirt 4.5.
There
cally tell it that it is no longer self-hosted, but
is instead stand-alone?
Sent with Proton Mail secure email.
--- Original Message ---
On Friday, August 19th, 2022 at 11:01 AM, David White via Users
wrote:
> Hi Paul,
> Thanks for the response.
>
> I think you're suggesting that I
> test it works as you will be able to restart the original engine if it
> doesn't work.
>
> Regards,
> Paul S.
>
>
>
>
>
> From: David White via Users
> Sent: 19 August 2022 15:27
> To: David White
> Cc: oVirt Users
> Subject: [
In other words, I want to migrate the Engine from a hyperconverged environment
into a stand-alone setup.
Sent with Proton Mail secure email.
--- Original Message ---
On Friday, August 19th, 2022 at 10:17 AM, David White via Users
wrote:
> Hello,
> I have just purchased a Sy
Hello,
I have just purchased a Synology SA3400 which I plan to use for my oVirt
storage domain(s) going forward. I'm currently using Gluster storage in a
hyperconverged environment.
My goal now is to:
- Use the Synology Virtual Machine manager to host the oVirt Engine on the
Synology
-
This workaround does work for me, for what it's worth.
Sent with Proton Mail secure email.
--- Original Message ---
On Tuesday, April 26th, 2022 at 8:09 AM, Sandro Bonazzola
wrote:
>
>
> On Mon, 25 Apr 2022 at 13:42, Alessandro De Salvo
> wrote:
>
> > Hi,
> >
--- Original Message ---
On Thursday, May 12th, 2022 at 7:08 AM, Sandro Bonazzola
wrote:
>
>
> On Thu, 12 May 2022 at 12:34, David White via Users
> wrote:
>
> > Hello, I followed some instructions I found in
> > https://www.ovirt.org/documen
Hello, I followed some instructions I found in
https://www.ovirt.org/documentation/upgrade_guide/ and
https://www.ovirt.org/download/install_on_rhel.html by doing the following:
883  subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
884  subscription-manager repos --enable
on Thu 05 May 2022 07:58:16 PM EDT.
Error: No Matches found
Is there a better way to run automated backups than this approach and/or using
qemu-img?
Sent with ProtonMail secure email.
--- Original Message ---
On Wednesday, May 4th, 2022 at 1:27 PM, David White via Users
wrote:
> I
I've recently been working with the qemu-img commands for some work that has
nothing to do with oVirt or anything inside an oVirt environment.
But learning and using these commands has given me an idea for automating
backups.
I believe that the following is true, but to confirm, would the
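For context, a minimal sketch of the kind of qemu-img backup step being
described (file names are made up, and this assumes the disk is not in active
use, or that you copy from a snapshot):

qemu-img convert -O qcow2 /path/to/vm-disk.qcow2 /backups/vm-disk-$(date +%F).qcow2
qemu-img info /backups/vm-disk-$(date +%F).qcow2   # sanity-check the copy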
Hello,
After an update from 4.4 to 4.5 on the Engine, I noticed that the 4.4 repos
still exist:
Is it safe to run "yum autoremove" on the Engine, followed by removing all of
the 4.4 repositories from /etc/yum.repos.d/ ?
ovirt-4.4
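A hedged sketch of that cleanup (review the autoremove list before confirming;
the repo file names may differ on your engine):

yum autoremove                            # inspect the proposed removals first
rm -i /etc/yum.repos.d/ovirt-4.4*.repo    # -i prompts before each file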
Hi Sandro,
I thought I had read somewhere on the list a few weeks ago that Gluster was
having problems with 4.5 and that it wasn't going to be supported. Did I
misunderstand?
I recall a few months ago that the announcement was made to deprecate Gluster
anyway, but I don't think the original plan
And when I said "I claim to be" I meant to say: I do NOT claim to be. :)
Sent with ProtonMail Secure Email.
--- Original Message ---
On Sunday, February 6th, 2022 at 2:07 PM, David White
wrote:
> At the risk of sounding like a Red Hat or IBM fanboy, I have decided to give
> Red
At the risk of sounding like a Red Hat or IBM fanboy, I have decided to give
Red Hat the benefit of the doubt here, and to not make any decisions about
switching off of oVirt until and unless an official announcement is made.
In the meantime, I know that I need to move off of Gluster (and I
I have had a lot of problems with gluster in my HCO environment, so I was
already leaning towards a storage migration at some point this year. My own plan
is to use 2x Synology NAS SA3400 devices and put them into an HA pair that then
exposes the storage as NFS (or whatever else I want it exposed
Is it safe to remove this package as well?
I noticed the following during the execution of engine-setup:
[ INFO ] DNF Unknown: ovirt-engine-extension-logger-log4j-1.1.1-1.el8.noarch
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Wednesday, January 26th, 2022 at 9:20
engine cannot connect
> to the vdsm and vdsm-gluster and this makes the whole situation worse.
>
> Best Regards,
> Strahil Nikolov
>
> > On Sat, Jan 22, 2022 at 23:15, David White via Users
> > wrote:
I have a Hyperconverged cluster with 4 hosts.
Gluster is replicated across 2 hosts, and a 3rd host is an arbiter node.
The 4th host is compute only.
I updated the compute-only node, as well as the arbiter node, early this
morning. I didn't touch either of the actual storage nodes. That said, I
This is affecting me as well on RHEL 8.5 hosts.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Wednesday, December 1st, 2021 at 2:25 AM, Gary Pedretty
wrote:
> Started happening for me yesterday. Was working just a few days ago.
>
> Gary
>
> [FAILED]
Hi team,
I saw that RHEL 8.5 was released yesterday, so I just put one of my hosts that
doesn't have local gluster storage into maintenance mode and again attempted an
update.
The update again failed through the oVirt web UI, and a `yum update` from the
command line again failed with the same
‐‐‐ Original Message ‐‐‐
On Saturday, October 23rd, 2021 at 3:35 AM, Strahil Nikolov via Users
wrote:
> That's why I put the user and the pass in single quotes... Like this:
> 'user@domain'@'pass'
>
> Best Regards,
> Strahil Nikolov
>
> >
I'm running into the same issue with a RHEL 8.4 host.
The following is on my host:
[root@cha2-storage dwhite]# yum repolist
Updating Subscription Management repositories.
repo id                                              repo name
ovirt-4.4
On Thu, Oct 21, 2021 at 2:17, David White via Users
> > wrote:
> > >
> > > > > > > > This seems strange to me. All of these tables are empty, unless
> > > > > > > > I'm doing something wrong (I'm new to Postgres).
> > > > > > > >
> > > > > > > >
> > > > > > engine-# SELECT * from cluster
> > > > > > engine-# select * from vm_pools
> > > > > > engine-# select * from vm_static
> > > > > >
> > > > > > Sent with ProtonMail Secure Email.
> > > >
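One note on the psql session above: `engine=#` is the prompt for a new
statement, while `engine-#` is a continuation prompt, so statements shown
against `engine-#` were likely never terminated with a semicolon and never
actually ran. For example:

engine=# SELECT * FROM vm_static;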
From Host 1, try this:
ssh root@localhost
And see if it prompts to add the host to your known_hosts file at that point.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Tuesday, October 19th, 2021 at 2:58 PM, wrote:
> ok I am setting up the gluster file system.
>
> I
Nir Soffer wrote:
> On Fri, Oct 15, 2021 at 11:38 AM David White via Users
> wrote:
>
> > Thank you very much.
> > I was able to (re)set the `engine` user's password in Postgres.
> > Unfortunately, I'm still having trouble unlocking the disks.
> >
> >
gt; >
> > > > ‐‐‐ Original Message ‐‐‐
> > > >
> > > > On Monday, October 18th, 2021 at 3:10 AM, Eyal Shenitzky
> > > > wrote:
> > > >
> > > > > The host cannot be set to maintenance if there
> > > FINISHED (success or failure - 9/10 in the DB).
> > >
> > > If there are image transfer sessions in the DB with a status different
> > > from those that I mentioned, you should see why they have a
> > > different status and finalize/clea
>
> > On Fri, Oct 15, 2021 at 11:38 AM David White via Users
> > wrote:
> >
> > > Thank you very much.
> > > I was able to (re)set the `engine` user's password in Postgres.
> > > Unfortunately, I'm still having trouble unlocking the disks.
>
Hi Brad,
What are you using for storage, and are you trying to setup both of your
physical servers into the same cluster?
In my scenario (a hyperconverged environment with Gluster), ideally, you need
two different networks, on separate network interfaces: One for your backend /
storage
run this SQL query on the
> engine:
>
> engine=# select * from async_tasks WHERE storage_pool_id = '123';
>
> Regards,
> Shani Leviim
>
> On Sun, Oct 17, 2021 at 12:14 PM Strahil Nikolov
> wrote:
>
> > Try unlock_entity.sh with '-t all -r'
> >
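For reference, a sketch of the full invocation on the engine host (this is the
usual location of the script, but verify locally):

/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -r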
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, October 15th, 2021 at 4:32 AM, David White via Users
wrote:
> Thank you very much.
> I was able to (re)set the `engine` user's password in Postgres.
> Unfortunately, I'm still having trouble unlocking the disks.
>
> The f
r ways but that's the simplest) to take effect.
>
> Best Regards,
> Strahil Nikolov
>
> > On Wed, Oct 13, 2021 at 2:49, David White via Users
> > wrote:
> [1] https://www.ovirt.org/develop/Using-oVirt-Engine-with-a-PostgreSQL-container.html
> [2] https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
>
> Regards,
> Shani Leviim
>
> On Thu, Oct 14, 2021 at 12:45 PM David White via Users
> wrote:
>
>
I am trying to put a host into maintenance mode, and keep getting this error:
Error while executing action: Cannot switch Host cha1-storage.my-domain.com to
Maintenance mode. Image transfer is in progress for the following (3) disks:
e0f46dc5-7f98-47cf-a586-4645177bd6a2,
: he_local
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Tuesday, October 12th, 2021 at 4:38 PM, David White via Users
wrote:
> > Check the scores of the systems via 'hosted-engine --vm-status'.
> That must be the problem.
>
> Host a has a score of 180
the engine shuts down from a, and comes back up on c.
>
> Check the scores of the systems via 'hosted-engine --vm-status'. Check vdsm
> logs on both hosts. Check the logs in the engine itself.
>
> Best Regards,
> Strahil Nikolov
>
> > On Tue, Oct 12, 2021 at 13:34, David White via Use
About a month ago, I completely rebuilt my oVirt cluster, as I needed to move
all my hardware from 1 data center to another with minimal downtime.
All my hardware is in the new data center (yay for HIPAA compliance and 24/7
access, unlike the old place!)
I originally built the cluster as a
I can't remember if I've asked this already, or if someone else has brought
this up.
I have noticed that gluster replication in a hyperconverged environment is very
slow.
I just (successfully, this time) added a brick to volume that was originally
built on a single-node Hyperconverged cluster.
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, September 3rd, 2021 at 4:10 AM, David White via Users
> wrote:
>
> > In this particular case, I have 1 (one) 250GB virtual disk..
> >
> > Sent with ProtonMail Secure Email.
> >
> > ‐‐‐ Original
at 4:10 AM, David White via Users
wrote:
> In this particular case, I have 1 (one) 250GB virtual disk..
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
>
> On Tuesday, August 31st, 2021 at 11:21 PM, Strahil Nikolov
> wrote:
>
>
very large ones.
>
> Best Regards,
> Strahil Nikolov
>
> Sent from Yahoo Mail on Android
>
> > On Thu, Aug 26, 2021 at 3:27, David White via Users
> > wrote: I have an HCI cluster running on Gluster storage. I exposed an NFS
> > share into oVirt as a storage doma
I have an HCI cluster running on Gluster storage. I exposed an NFS share into
oVirt as a storage domain so that I could clone all of my VMs (I'm preparing to
move physically to a new datacenter). I got 3-4 VMs cloned perfectly fine
yesterday. But then this evening, I tried to clone a big VM,
ter to put it into pastebin or provide a link to access these files?
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Sunday, August 22nd, 2021 at 3:58 AM, Liran Rotenberg
wrote:
> On Sun, Aug 22, 2021 at 3:42 AM David White via Users users@ovirt.org wrote:
>
> >
I have an unused 200GB partition that I'd like to use to copy / export / backup
a few VMs onto, so I mounted it to one of my oVirt hosts as /ova-images/, and
then ran "chown 36:36" on ova-images.
From the engine, I then tried to export an OVA to that directory.
Watching the directory with
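For anyone following along, the preparation described above amounts to
something like this (36:36 is the vdsm:kvm uid/gid pair that oVirt runs as; the
mount point comes from the message):

mkdir -p /ova-images
chown 36:36 /ova-images    # vdsm:kvm ownership, which vdsm needs to write here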
So it looks like I'm going to move to a new datacenter. I went into somewhere
cheap on a month-to-month contract earlier this year, and they've been a pain
to deal with. At the same time, I've grown a lot faster than expected, so I've
decided to move into a better, more reputable datacenter
rebuild all 20+ of those VMs.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
>
> On Friday, August 13th, 2021 at 2:41 PM, Nir Soffer nsof...@redhat.com wrote:
>
> > On Fri, Aug 13, 2021 at 9:13 PM David White via Users users@ovirt.org wrote:
> &g
‐‐‐
On Friday, August 13th, 2021 at 2:41 PM, Nir Soffer wrote:
> On Fri, Aug 13, 2021 at 9:13 PM David White via Users users@ovirt.org wrote:
>
> > Hello,
> >
> > It appears that my Manager / hosted-engine isn't working, and I'm unable to
> > get it to start.
>
Hello,
It appears that my Manager / hosted-engine isn't working, and I'm unable to get
it to start.
I have a 3-node HCI cluster, but right now, Gluster is only running on 1 host
(so no replication).
I was hoping to upgrade / replace the storage on my 2nd host today, but aborted
that
Thank you for all the responses.
Following Strahil's instructions, I *think* that I was able to reconstruct the
disk image. I'm just waiting for that image to finish downloading onto my local
machine, at which point I'll try to import into VirtualBox or something.
Fingers crossed!
Worst case
Hi Patrick,
This would be amazing, if possible.
Checking /gluster_bricks/data/data on the host where I've removed (but not
replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:
dom_md
images
If I go into the images directory, I think I see the
My hyperconverged cluster was running out of space.
The reason for that is a good problem to have - I've grown more in the last 4
months than in the past 4-5 years combined.
But the downside was, I had to go ahead and upgrade my storage, and it became
urgent to do so.
I began that process last
Thank you.
I'm doing some more research & reading on this to make sure I understand
everything before I do this work.
You wrote:
> If you rebuild the raid, you are destroying the brick, so after mounting it
> back, you will need to reset-brick. If it doesn't work for some reason, you
> can
this in the past when doing major maintenance on gluster
> volumes to err on the side of caution.
>
> On Sat, Jul 10, 2021 at 7:22 AM David White via Users wrote:
>
> > Hmm right as I said that, I just had a thought.
> > I DO have a "backup" server in place
of the nodes. Usually I use
> > 'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
> > - add this new brick (add-brick replica 3 arbiter 1) to the volume
> > - wait for the heals to finish
> >
> > Then repeat again for each volume.
> >
> > Adding the
wait for the heals to
> finish
>
> Then repeat again for each volume.
>
> Adding the new disks should be done later.
>
> Best Regards,
> Strahil Nikolov
>
> > On Sat, Jul 10, 2021 at 3:15, David White via Users
> > wrote: My current hyperconverged environm
My current hyperconverged environment is replicating data across all 3 servers.
I'm running critically low on disk space, and need to add space.
To that end, I've ordered 8x 800GB ssd drives, and plan to put 4 drives in 1
server, and 4 drives in the other.
What's my best option for
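A minimal sketch of the add-brick step Strahil describes above (volume name,
host, and brick path are hypothetical):

gluster volume add-brick data replica 3 arbiter 1 host3:/gluster_bricks/data/brick
gluster volume heal data info    # repeat until pending heal entries reach zero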
Hello,
Reading
https://www.ovirt.org/documentation/administration_guide/index.html#IPv6-networking-support-labels,
I see this tidbit:
- Dual-stack addressing, IPv4 and IPv6, is not supported
- Switching clusters from IPv4 to IPv6 is not supported.
If I'm understanding this correctly... does
I deployed a rootless Podman container on a RHEL 8 guest on Saturday (3 days
ago).
At the time, I remember seeing some selinux AVC "denied" messages related to
qemu-guest-agent and podman, but I didn't have time to look into it further. I
made a mental note to come back to it, because it
certificate.
(And I did / do use the "Test Connection" button)
Which logs would be helpful?
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Sunday, June 6th, 2021 at 2:52 AM, Yedidyah Bar David
wrote:
> On Sat, Jun 5, 2021 at 2:41 PM David White via
s
booting to that ISO fine.
Problem resolved.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, June 4, 2021 8:47 PM, David White via Users wrote:
> More details here:
>
> I just tested this (new) VM now on a CentOS 7 ISO.
> That worked perfectly f
If you plan on using CentOS going forward, I would recommend using (starting
with) stream, as CentOS 8 will be completely EOL at the end of this year.
That said, you can easily convert a CentOS 8 server to CentOS Stream by running
these commands:
dnf swap centos-linux-repos
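As I recall, the documented conversion is the swap followed by a distro-sync:

dnf swap centos-linux-repos centos-stream-repos
dnf distro-sync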
2021-06-04 20-47-14.png]
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, June 4, 2021 8:27 PM, David White via Users wrote:
> I uploaded the RHEL ISOs the same way I uploaded the Ubuntu ISOs:
> Navigate to Storage -> Disks and click Upload (so using t
?
>
> Best Regards,
> Strahil Nikolov
>
> > On Fri, Jun 4, 2021 at 12:25, David White via Users
> > wrote:
> > Ever since I deployed oVirt a couple months ago, I've been unable to boot
> > any VMs from a RHEL ISO.
> > Ubuntu works fine, as does Cen
4, 2021 12:29 PM, Nir Soffer wrote:
> On Fri, Jun 4, 2021 at 12:11 PM David White via Users users@ovirt.org wrote:
>
> > I'm trying to figure out how to keep a "broken" NFS mount point from
> > causing the entire HCI cluster to crash.
> > HCI is working beautifu
Ever since I deployed oVirt a couple months ago, I've been unable to boot any
VMs from a RHEL ISO.
Ubuntu works fine, as does CentOS.
I've tried multiple RHEL 8 ISOs on multiple VMs.
I've destroyed and re-uploaded the ISOs, and I've also destroyed and re-created
the VMs.
Every time I try to
I'm trying to figure out how to keep a "broken" NFS mount point from causing
the entire HCI cluster to crash.
HCI is working beautifully.
Last night, I finished adding some NFS storage to the cluster - this is storage
that I don't necessarily need to be HA, and I was hoping to store some
021 at 3:08 AM Nir Soffer nsof...@redhat.com wrote:
>
> > On Sat, May 22, 2021 at 8:20 PM David White via Users users@ovirt.org wrote:
> >
> > > Hello,
> > > Is it possible to use Ubuntu to share an NFS export with oVirt?
> > > I'm trying to setup a
Hello,
Is there documentation anywhere for adding a 4th compute-only host to an
existing HCI cluster?
I did the following earlier today:
- Installed RHEL 8.4 onto the new (4th) host
- Setup an NFS share on the host
- Attached the NFS share to oVirt as a new storage domain
- I then
y upgraded a 2nd host, onto which I was able to migrate VMs for
preparation of upgrading the 3rd host.
All seems well, for now, after I upgraded the 1st host a second time earlier
today.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Wednesday, May 26, 2021 2:55 PM, David
I have oVirt 4.4.6 running on my Engine VM.
I also have oVirt 4.4.6 on one of my hosts.
The other two hosts are still on oVirt 4.4.5.
My issue seems slightly different than the issue(s) other people have described.
I'm on RHEL 8 hosts.
My Engine VM is running fine on the upgraded host, as is
else is going on. It wants me to put all three hosts into
> > maintenance mode which is impossible.
> >
> > On Sat, May 22, 2021 at 8:36 PM David White via Users
> > wrote:
> >
> > > I have a 3-node hyperconverged cluster with Gluster filesystem r
I have a 3-node hyperconverged cluster with Gluster filesystem running on RHEL
8.3 hosts.
It's been stable on oVirt 4.4.5.
Today, I just upgraded the Engine to 4.4.6.
[Screenshot from 2021-05-22 20-29-23.png]
I then logged into the oVirt manager, navigated to Compute -> Clusters, and
clicked on
Hello,
Is it possible to use Ubuntu to share an NFS export with oVirt? I'm trying to
setup a Backup Domain for my environment.
I got to the point of actually adding the new Storage Domain.
When I click OK, I see the storage domain appear momentarily before
disappearing, at which point I get a
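When a new NFS domain shows up and then vanishes like this, it is often an
export permissions issue; a minimal /etc/exports sketch that oVirt is usually
happy with (the path is hypothetical):

/exports/backup *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)

Then chown 36:36 /exports/backup and run exportfs -ra to re-export.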
Hi Pavel,
At the risk of being somewhat lazy, could I ask you where the docs are for
installing the iDrac modules and getting power management setup? This is a
topic that I haven't explored yet, but probably need to.
I have 3x Dell R630s.
Sent with ProtonMail Secure Email.
‐‐‐ Original
So I have two switches.
All 3 of my HCI oVirt servers are connected to both switches.
1 switch serves the ovirtmgmt network (internal, gluster communication and
everything else on that subnet)
The other switch serves the "main" front-end network (Private).
It turns out that my datacenter
s using a linux bridge and maybe STP kicked in ?
> Do you know of any changes done in the network at that time ?
>
> Best Regards,
> Strahil Nikolov
>
> > On Tue, May 11, 2021 at 2:27, David White via Users
> > wrote:
interfaces are also bridged - and controlled - by oVirt itself.
Is it possible that oVirt took them down for some reason? I don't know what
that reason might be.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Monday, May 10, 2021 7:14 PM, David White via Users wrote:
>
ot too many backups
> running in parallel ?
>
> Best Regards,
> Strahil Nikolov
>
> > On Mon, May 10, 2021 at 19:13, David White via Users
> > wrote:
.anysubdomain.domain host1
> 10.10.10.11 host2.anysubdomain.domain host2
>
> Usually the hostname is defined for each peer in the /var/lib/glusterd/peers.
> Can you check the contents on all nodes ?
>
> Best Regards,
> Strahil Nikolov
>
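A quick way to run the check Strahil suggests (the path comes from his message;
each peer file holds uuid=, state=, and hostname1= lines):

cat /var/lib/glusterd/peers/*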
As part of my troubleshooting earlier this morning, I gracefully shut down the
ovirt-engine so that it would come up on a different host (can't remember if I
mentioned that or not).
I just verified forward DNS on all 3 of the hosts.
All 3 resolve each other just fine, and are able to ping each
This turned into quite a discussion. LOL.
A lot of interesting points.
Thomas said -->
> If only oVirt was a product rather than only a patchwork design!
I think Sandro already spoke into this a little bit, but I would echo what they
(he? she?) said. oVirt is an open source project, so there's
I discovered that the servers I purchased did not come with 10Gbps network
cards, like I thought they did. So my storage network has been running on a
1Gbps connection for the past week, since I deployed the servers into the
datacenter a little over a week ago. I purchased 10Gbps cards, and put
3.4" is my remote IP address in the above example.
Also note that I've had to enter in the remote IP address twice (once when
passing it in using the -x argument)
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Saturday, April 17, 2021 5:01 AM, David White via Users
I'm running into the issue described in this thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KT3B6N3UZ3DS3J6FV6OKQAXPNPTLZPOB/
In short, I have ssh to the datacenter. I can ssh to a public IP address with
the "-D 8080" option to forward local port 8080 act as a SOCKS
will be quite
> fine.
>
> Don't forget that latency kills gluster, so keep it as tight as possible,
> but at the same time keep them on separate hosts.
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, 16 April 2021 at 03:57:51 GMT+3, David White via Use