Ifconfig isn’t standard with CentOS 7. You should be using the ip command,
like ip addr, for example.
You can get ifconfig if you install the net-tools package but I would
advise against installing additional packages on node hosts.
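For reference, the rough net-tools to iproute2 mapping looks like this (interface names are examples, substitute your own):

```shell
ip addr show          # addresses on all interfaces (was: ifconfig -a)
ip link show          # link state                  (was: ifconfig)
ip -s link show lo    # per-NIC statistics          (substitute your NIC name)
ip route show         # routing table               (was: route -n)
```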
On Sun, Jun 30, 2019 at 11:13 AM wrote:
> Hello
Basic setup notes: 3 node HCI running oVirt 4.3.3 using nodeNG for hosts.
Storage is SSD backed with 10Gb network dedicated to gluster with jumbo
frames enabled.
The ovirt management network (which also acts as the VM network) is a 1Gb network.
Hosts are dell R720s w/ 256gb ram and E5-2690 procs
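For anyone replicating this setup, enabling jumbo frames on the dedicated gluster NIC looks roughly like the following (NIC name and peer IP are examples; verify the MTU end to end with a don't-fragment ping):

```shell
# /etc/sysconfig/network-scripts/ifcfg-p2p1  (example 10GbE NIC name):
#   MTU=9000
# Verify a 9000-byte path to a gluster peer (8972 = 9000 - 28 header bytes):
ping -M do -s 8972 -c 3 10.10.10.2   # example gluster-network peer IP
```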
Ovirt can be installed on as little as one node, but for a proper HA setup
you should have at least three nodes.
HCI uses gluster and needs a replica 3 or replica 3 arbiter 1
configuration. With an arbiter you need less storage, since the arbiter
brick only holds metadata, so you might be able to put a third server in.
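As a sketch, a replica 3 arbiter 1 volume is created like this (hostnames and brick paths are placeholders; the HCI cockpit wizard normally does this for you):

```shell
gluster volume create engine replica 3 arbiter 1 \
    host1:/gluster_bricks/engine/engine \
    host2:/gluster_bricks/engine/engine \
    host3:/gluster_bricks/engine/engine
# The third (arbiter) brick stores only metadata, so it needs far less space.
```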
Are each of your hosts able to resolve one another?
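A quick way to check, run on every host (hostnames are placeholders):

```shell
# Each host should resolve every peer to the expected address:
for h in host1.example.com host2.example.com host3.example.com; do
    getent hosts "$h" || echo "UNRESOLVED: $h"
done
```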
On Wed, Jun 12, 2019 at 5:00 AM PS Kazi wrote:
> ovirt Node version 4.3.3.1
> I am trying to configure 3 node Gluster storage and oVirt hosted engine
> but getting the following error:
>
> TASK [gluster.features/roles/gluster_hci : Check if valid
Have you looked at installing ovirt metrics store?
On Tue, Jun 11, 2019 at 12:56 PM Wesley Stewart wrote:
> Is there any way to get ovirt disk performance metrics into the web
> interface? It would be nice to see some type of IOPs data, so we can see
> which VMs are hitting our data stores the
I increased RX ring params on the interface and restarted networking on
each host. So far the error counts on all three hosts 1gig interfaces are
still at zero. Will see how it holds up
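For anyone wanting to do the same, the commands were along these lines (NIC name is an example; note that ring changes don't persist across reboots unless added to the ifcfg file):

```shell
ethtool -g em1                         # show current vs. hardware-max ring sizes
ethtool -G em1 rx 4096                 # raise the RX ring toward the hardware max
ethtool -S em1 | grep -iE 'drop|err'   # watch the error counters afterwards
# To persist, e.g. in /etc/sysconfig/network-scripts/ifcfg-em1:
#   ETHTOOL_OPTS="-G em1 rx 4096"
```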
On Thu, Jun 6, 2019 at 12:20 PM Jayme wrote:
I have a three node HCI setup on Dell R720s running the latest stable
version of 4.3.3
Each hosts has a 1gig link and a 10gig link. The 1gig is used for ovirt
management network and 10gig link is used for backend glusterFS traffic.
I haven't noticed before but after installing ovirt metrics
I just get a blank output from that command. I'm running oVirt Node NG
4.3.3.1 hosts
On Mon, Jun 3, 2019 at 3:48 PM Morris, Roy wrote:
> Jayme,
>
>
>
> Can you run the following command and report back with the results?
>
>
>
> #systemctl status selinux* -l
>
>
-1
(Connection reset by peer)
collectd and rsyslog are running, I tried restarting both. I can also
telnet to port 44514 on localhost and it's responding.
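The checks described amount to something like this (assumes systemd and iproute2; the port is the one from this thread):

```shell
systemctl is-active collectd rsyslog   # both should print "active"
ss -tlnp | grep 44514                  # confirm something is listening on the port
```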
On Mon, Jun 3, 2019 at 3:12 PM Jayme wrote:
I finally managed to get oVirt metrics store running. I loaded sample
dashboards, searches and visualizations in to Kibana from
/etc/ovirt-engine-metrics/dashboards-examples. When importing searches and
visualizations there are warnings about missing index patterns.
It appears that the "VM"
Lucie, that was it. Thank you very much!
On Mon, Jun 3, 2019 at 2:17 PM Lucie Leistnerova
wrote:
> Hi Jayme,
>
> there is a special option - Delete protection - in the General tab, that
> prevents deleting. Is it checked on the VM?
> On 6/3/19 4:46 PM, Jayme wrote:
Running latest stable 4.3. I have one particular VM that is shutdown but
options for removal are greyed out in the admin GUI. I tried starting it
and shutting it down again but the options are still not selectable.
However, they are for other VMs.
Any idea what is different about this VM that
> On Friday, May 31, 2019, 8:59:54 AM GMT-4, Jayme
> wrote:
When a VM is renamed a warning in engine gui appears with an exclamation
point stating "vm was started with a different name". Is there a way to
clear this warning? The VM has been restarted a few times since but it
doesn't go away.
Thanks!
Shirly,
No problem, I understand. I will provide all of the requested info in a
bug report. Thanks again for your help!
On Wed, May 29, 2019 at 11:44 AM Shirly Radco wrote:
> Hi Jayme,
>
> It's getting hard to debug your issue over the mailing list.
> Can you please open a bug
- "'results' in control_plane_pods"
- "'results' in control_plane_pods.results"
- control_plane_pods.results.results | length > 0
retries: 60
delay: 5
with_items:
- "{{ 'etcd' if inventory_hostname in groups['oo_etcd_to_config'] else
omit }}"
- api
- cont
ips.
On Tue, May 28, 2019 at 3:28 PM Edward Berger wrote:
> In my case it was a single bare metal host, so that would be equivalent to
> disabling iptables on the master0 VM you're installing to, in your ovirt
> scenario.
>
> On Tue, May 28, 2019 at 1:25 PM Jayme wrote:
I should also mention one more thing, I am attempting to install on an
internal domain, not externally accessible.
On Tue, May 28, 2019 at 2:25 PM Jayme wrote:
> Do you mean the iptables firewall on the server being installed to i.e.
> master0 or the actual oVirt host that the mast
"cluster".
>
> [1]https://github.com/gshipley/installcentos
>
> The other error I had with [1] was it was trying to install a couple of
> packages (zile and python2-pip) from EPEL with the repo disabled.
>
>
>
> On Tue, May 28, 2019 at 10:41 AM Jayme wrote:
>
-dispatcher.service                     enabled
NetworkManager-wait-online.service      enabled
NetworkManager.service                  enabled
On Tue, May 28, 2019 at 11:13 AM Jayme wrote:
> Shirly,
>
> I appreciate the help with this. Unfortunately I am still running in to
> the same problem. So far I've tried to install/
> On Mon, May 27, 2019 at 4:41 PM Jayme wrote:
>
>> I managed to get past that but am running in to another problem later in
>> the process on the control plane pod
27, 2019 at 9:35 AM Shirly Radco wrote:
> Hi Jayme,
>
> Thank you for reaching out.
> Please try rerunning the ansible playbook.
> If this doesn't work, try adding to the integ.ini in the metrics vm
> openshift_disable_check=docker_storage
> and rerun the ansible pl
I'm running in to this ansible error during oVirt metrics installation
(following procedures at:
https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html
)
This is happening late in the process, after successfully deploying the
installation VM and then running second
Is a paid Redhat subscription required to install oVirt metrics store?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
That is positive to hear. I'm starting to wonder if node NG was a good
choice for my hosts; it seems to be behind.
On Tue, May 14, 2019 at 1:09 PM Darrell Budic
wrote:
> Yep, so far so good. Feels like 3.12.15 again, stability wise ;)
>
>
> > On May 14, 2019, at 5:28 AM, Strahil wrote:
> >
>
torage space, and so on.
>
> It's basically set and forget.
>
> I just wrote my own little bash wrapper that will grab the output of the
> script and email it to me.
>
> Hope this helps,
>
> -- Peter
>
>
>
> On Mon, May 13, 2019 at 8:12 AM Jayme wrote:
>
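A minimal version of the wrapper Peter describes might look like this (the script path and recipient are placeholders, not from the original post; assumes a working mail command):

```shell
#!/bin/bash
# Sketch: capture a monitoring script's output and email it.
SCRIPT="/usr/local/bin/gluster-check.sh"   # placeholder path
RCPT="admin@example.com"                   # placeholder recipient

if [ -x "$SCRIPT" ]; then
    output=$("$SCRIPT" 2>&1)
    status=$?
    # Only send mail when the check failed or printed something.
    if [ "$status" -ne 0 ] || [ -n "$output" ]; then
        printf '%s\n' "$output" | mail -s "check exited $status" "$RCPT"
    fi
fi
```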
I use node NG as well, I just updated to 4.3.3 two days ago and I'm on
Gluster 5.5. Yum update on host node yields no updates available
On Mon, May 13, 2019 at 1:03 PM Darrell Budic
wrote:
> Ovirt just pulls in the gluster5 repos, if you upgrade now you should get
> gluster 5.6 on your nodes.
I've been having problems with my gluster Engine volume recently as well
after updating to latest stable 4.3.3. For the past few days I've seen a
random brick in the Engine volume go down and I have to force start it to
get it working again. Right now I'm seeing that there are unsynced entries
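For anyone hitting the same thing, the force-start sequence is roughly this ("engine" is the volume name from this thread):

```shell
gluster volume status engine        # identify the brick that is down
gluster volume start engine force   # restart the brick processes
gluster volume heal engine info     # watch the unsynced entries drain to zero
```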
I know that a robust API is provided for performing VM backup functions,
but it seems crazy to me that oVirt has not implemented a simple method to
backup VM to a storage domain such as an NFS export domain. Is this
functionality being left out for a reason (perhaps not wanting to step on
toes of
vProtect is not open source but if you have 10 VMs or less the license is
free. It's an enterprise grade backup solution that can backup oVirt/RHEV
VMs (including incremental backups), also handles restores (can mount on
the backup machine or restore to a new VM directly to oVirt), snapshot
From my understanding both are possible. Gluster storage can be completely
separate (at that point you would not be working with a hyperconverged
configuration). You should also be able to expand a hyperconverged setup
past 3 nodes but not by only 1 node, you'd need to add more nodes.
On Thu,
https://docs.gluster.org/en/latest/release-notes/5.5/
On Fri, Mar 22, 2019 at 1:53 AM Leo David wrote:
> Hi everyone,
> I have seen a lot of threads here regarding 4.3.x release regarding
> problems at different layers, most of them related to the underlying gluster
> storage.
> I would do an
Apparently a new version of gluster was just released that addresses the
issue that is causing the problems. I’d wait and make sure that whatever
version you are upgrading to has that new package.
On Fri, Mar 22, 2019 at 1:53 AM Leo David wrote:
> Hi everyone,
> I have seen a lot of threads here
Agree with Chris here, regular CentOS 7 hosts may be easier to manage in
this case. Not much persists when updating oVirt Node; some select
folders/files persist on updates, such as /etc and /root, but I'm not
sure how custom packages/rpms are handled. I believe there may be ways
you
I’ve recently tested out vProtect; it’s expensive but free for up to 10
vms. Works great with 4.3 and supports incremental backups which is very
handy. I’d recommend checking it out if you can. I’m really happy with it
thus far.
On Wed, Mar 20, 2019 at 4:32 PM wrote:
> Hello,
>
> I have set up
me to consider it, unfortunately. I’m also a little surprised
> that a major upstream issue like that bug hasn’t caused you to issue more
> warnings, it’s something that is going to affect everyone who’s upgrading a
> converged system. Any discussion on why more news wasn’t released about
ere a volume becomes unstable
> for a period of time after the upgrade, but then seems to settle down after
> a while. I've only witnessed this in the 4.3.x versions. I suspect it's
> more of a Gluster issue than oVirt, but troubling none the less.
>
> On Fri, 15 Mar 2019 at 09:37, Jaym
stabilized, prior to 4.3 I never had a single glusterFS issue or brick
offline on 4.2
On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola
wrote:
>
>
> On Fri, Mar 15, 2019 at 13:38, Jayme wrote:
>
>> I along with others had GlusterFS issues after 4.3 upgrades, the
I along with others had GlusterFS issues after 4.3 upgrades, the failed to
dispatch handler issue with bricks going down intermittently. After some
time it seemed to have corrected itself (at least in my environment) and I
hadn't had any brick problems in a while. I upgraded my three node HCI
Shane,
This may be possible, I'm sure others will chime in. I do think you'd save
yourself a lot of headaches if you were able to do three server
Hyper-converged infrastructure setup with GlusterFS in either replica 3 or
replica 3 arbiter 1. You will get a very good HA solution out of a 3 node
Not sure if this is the same bug I hit, but check ownership of the VM
images. There’s a bug in the 4.3 upgrade that changes ownership to root and
causes VMs to not start until you change it back to vdsm.
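The fix discussed elsewhere in this thread boiled down to something like the following (paths are illustrative; confirm which images are affected before changing ownership):

```shell
# Find image files the upgrade left owned by root (path is illustrative):
find /rhev/data-center -user root -ls
# Restore the expected ownership on an affected image directory:
chown -R vdsm:kvm /rhev/data-center/<storage-domain>/images/<image-id>/
```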
On Wed, Mar 6, 2019 at 4:57 AM Shawn Southern
wrote:
> After running 'hosted-engine --vm-start',
These are both reported bugs
On Fri, Mar 1, 2019 at 7:34 AM Stefano Danzi wrote:
> Hello,
>
> I've just upgrade to version 4.3.1 and I can see this message in gluster
> log of all my host (running oVirt Node):
>
> The message "E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker]
Also one more thing, did you make sure to setup the 10Gb gluster network in
ovirt and set migration and vm traffic to use the gluster network?
On Thu, Feb 28, 2019 at 6:11 PM Jayme wrote:
Check volumes in Ovirt admin and make sure the optimize volume for VM
storage is selected
I have a three node ovirt hci with ssd gluster backed storage and 10Gb
storage network and I write at around 50-60 megabytes per second from
within vms. Before I used the optimize for vm storage it was about
Lots of bug reports being logged about this one, here is one:
https://bugzilla.redhat.com/show_bug.cgi?id=1677319
On Tue, Feb 26, 2019 at 9:56 AM Endre Karlson
wrote:
> Hi we are seeing a high number of errors / failures within the logs and
> problems with our ovirt 4.3 cluster. IS there any
Personally I feel like raid on top of GlusterFS is too wasteful. It would
give you a few advantages such as being able to replace a failed drive at
raid level vs replacing bricks with Gluster.
In my production HCI setup I have three Dell hosts each with two 2Tb SSDs
in JBOD. I find this setup
I believe those are just default settings regardless of disk size you can
choose your own to fit your needs
On Sat, Feb 16, 2019 at 1:03 PM matteo fedeli wrote:
> Ok I understand! Thank you! and the LV Size? cockpit by default set me 100
> for engine and 500 for vmstore but my hdd is max
When I installed I selected only the first disk and used auto partitioning.
The other two disks can be just formatted with no partition tables and the
engine install will provision them.
On Sat, Feb 16, 2019 at 12:40 PM matteo fedeli wrote:
> RHV docs says that must be RAW disk, it means
You should only install Node to one disk, the boot/operating system disk.
The other disks for storage will be provisioned later during the
ovirt/engine installation. You will be able to choose which volumes the
data domains use during the installation.
On Sat, Feb 16, 2019 at 11:05 AM matteo
Leo,
Almost positive that it won’t update to the next major release until you
install the 4.3 repos manually. It should be easy to verify with the yum
update command; it won’t perform any action until you agree (as long as
you aren’t passing the -y flag)
On Sat, Feb 16, 2019 at 2:50 AM Leo David
Running an oVirt 4.3 HCI 3-way replica cluster with SSD backed storage.
I've noticed that my SSD writes (smart Total_LBAs_Written) are quite high
on one particular drive. Specifically I've noticed one volume is much much
higher total bytes written than others (despite using less overall space).
[2019-02-14 03:04:00.722382] I [addr.c:54:compare_addr_and_update]
0-/gluster_bricks/non_prod_b/non_prod_b: allowed = "*", received addr =
"10.11.0.221"
[2019-02-14 03:04:00.722466] I [login.c:110:gf_auth] 0-auth/login: allowed
user names: 7b741fe4-72ca-41ba-8efb-7add1e4fe6f3
Riesener <
oliver.riese...@hs-bremen.de> wrote:
>
> Hi Jayme,
>
> btw. in the past there was a long hunting for gluster problems on this
> list.
> as resolution, there was a failed single disk drive on one gluster host.
> the drive was direct connected without contro
e = Host0,
GlusterVolumeAdvancedDetailsVDSParameters:{hostId='771c67eb-56e6-4736-8c67-668502d4ecf5',
volumeName='non_prod_a'}), log id: 11c42649
On Thu, Feb 14, 2019 at 10:16 AM Sandro Bonazzola
wrote:
>
>
> On Thu, Feb 14, 2019 at 07:54, Jayme wrote:
Ron, well it looks like you're not wrong. Less than 24 hours after
upgrading my cluster I have a Gluster brick down...
On Wed, Feb 13, 2019 at 5:58 PM Jayme wrote:
> Ron, sorry to hear about the troubles. I haven't seen any gluster crashes
> yet *knock on wood*. I will monitor c
I have a three node HCI gluster which was previously running 4.2 with zero
problems. I just upgraded it yesterday. I ran in to a few bugs right away
with the upgrade process, but aside from that I also discovered other users
with severe GlusterFS problems since the upgrade to new GlusterFS
Ron, sorry to hear about the troubles. I haven't seen any gluster crashes
yet *knock on wood*. I will monitor closely. Thanks for the heads up!
On Wed, Feb 13, 2019 at 5:09 PM Ron Jerome wrote:
>
> >
> > Can you be more specific? What things did you see, and did you report
> bugs?
>
> I've
I can confirm that this worked. I had to shut down every single VM then
change ownership to vdsm:kvm of the image file then start VM back up.
On Wed, Feb 13, 2019 at 3:08 PM Simone Tiraboschi
wrote:
>
>
> On Wed, Feb 13, 2019 at 8:06 PM Jayme wrote:
>
I might be hitting this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
On Wed, Feb 13, 2019 at 1:35 PM Jayme wrote:
> This may be happening because I changed cluster compatibility to 4.3 then
> immediately after changed data center compatibility to 4.3 (before
> restarting
e442e-9989-11e8-b0e4-00163e4bf18a/1f2e9989-9ab3-43d5-971d-568b8feca918/images/d81a6826-dc46-44db-8de7-405d30e44d57/2d6d5f87-ccb0-48ce-b3ac-84495bd12d32',
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID':
'2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'diskType': 'file', 'alias':
'ua
'volumeID':
'2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'diskType': 'file', 'alias':
'ua-d81a6826-dc46-44db-8de7-405d30e44d57', 'discard': False}
On Wed, Feb 13, 2019 at 1:01 PM Jayme wrote:
> I think I just figured out what I was doing wrong. On edit cluster screen
> I was changing both the CPU
that you need to do this in two steps for it to work.
On Wed, Feb 13, 2019 at 12:57 PM Jayme wrote:
> Hmm interesting, I wonder how you were able to switch from SandyBridge
> IBRS to SandyBridge IBRS SSBD. I just attempted the same in both regular
> mode and in global maintenance mode and
Environment setup:
3 Host HCI GlusterFS setup. Identical hosts, Dell R720s w/ Intel E5-2690
CPUs
1 default data center (4.2 compat)
1 default cluster (4.2 compat)
Situation: I recently upgraded my three node HCI cluster from Ovirt 4.2 to
4.3. I did so by first updating the engine to 4.3 then
e mode. My
question is, is the change of the family type correct or is it a different
family type being selected due to a bug or some other issue?
On Wed, Feb 13, 2019 at 9:54 AM Jayme wrote:
> I'm also a bit unclear about updating to 4.3 cluster level. I have a 3
> host HCI setup which I ju
I'm also a bit unclear about updating to 4.3 cluster level. I have a 3
host HCI setup which I just updated to 4.3 manually. I upgraded engine to
4.3 then upgraded each of the three hosts to ovirt node 4.3. I still need
to change cluster/data centre compatibility level to 4.3. If I try to
You should not install node image within glusterfs. Use a separate drive
On Wed, Jan 30, 2019 at 8:31 AM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:
> I think you have to dedicate a whole disk for the node install, you maybe
> able to use customised partitioning but it's not
I upgraded oVirt to 4.2.8 and now I am spammed with the following message
in all host syslog. How can I stop/fix this error?
ovs-vsctl: ovs|1|db_ctl_base|ERR|no key "odl_os_hostconfig_hostid" in
Open_vSwitch record "." column external_ids
ansible the ovirt engine was upgraded/rebooted and all
three hosts were showing updates available.
On Fri, Jan 25, 2019 at 3:26 PM Jayme wrote:
> I have a three node HCI setup, running 4.2.7 and want to upgrade to
> 4.2.8. When I use ansible to perform the host updates for some reason it
I have a three node HCI setup, running 4.2.7 and want to upgrade to 4.2.8.
When I use ansible to perform the host updates for some reason it fully
updates one host then stops without error, it does not continue upgrading
the remaining two hosts. If I run it again it will proceed to upgrade the
I use a Mac to manage Ovirt and I find the best method is to set VNC on the
VM and choose the noVNC option; this way you can launch right from a web browser
On Sat, Nov 17, 2018, 5:18 AM, wrote:
> Dear Team,
>
>
>
> Please help me to access ovirt VDI on mac book, as I cannot find any
> remote viewer
is available I can do some more testing with it.
On Wed, Nov 14, 2018 at 3:16 PM Martin Perina wrote:
>
>
> On Wed, Nov 14, 2018 at 6:11 PM Jayme wrote:
>
>> I've been giving this a try but have been running in to a few issues.
>> Namely, it seems to upgrade the first
leaving the others untouched.
could it be because after the first host comes up from a reboot the gluster
healing prevents the other host statuses from being "up" thus ansible skips
over them?
On Wed, Nov 14, 2018 at 11:49 AM Martin Perina wrote:
> Hi Jayme,
>
> you can upgrad
I am having the same issue as well attempting to update oVirt node to
latest.
On Wed, Nov 14, 2018 at 11:07 AM Giulio Casella wrote:
> It's due to a update of collectd in epel, but ovirt repos contain also
> collectd-write_http and collectd-disk (still not updated). We have to
> wait for
I have a very standard three node HCI setup running the latest version of
oVirt 4.2. I've been having some problems updating hosts, the last update
that was released a few weeks ago would produce an InstallFailed error when updating
hosts. I was able to resolve this by rebooting the host first then
Is it possible to update oVirt HCI environment automatically with ansible?
If so are there any specific instructions or details on the process?
Thanks!
Tony, is there a reason why you wouldn't just do a three node hyperconverged
setup with self hosted engine? This is the best option for a three server
setup imo
On Tue, Nov 6, 2018, 8:37 AM Tony Brian Albers wrote:
> Hi guys,
>
> I have 3 machines that I'd like to test oVirt/gluster on.
>
> The idea is
With dell servers for example it's as simple as editing the host and
enabling power management then selecting the appropriate drac version and
entering login details. I'm not familiar with what supermicro uses for
remote management but there is likely an option there to support it
On Mon, Nov 5,
hy the error is so strange to me. I even tested ansible from
>> oVirt host to others and it works ok using ssh keys.
>>
>>
>> On Thu, Oct 25, 2018 at 13:43, Jayme wrote:
>>
>>> You should also make sure the host can ssh to itself and accept keys
You should also make sure the host can ssh to itself and accept keys
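A quick sanity check on the deploy host might look like this (assumes root key auth is what the hyperconverged deploy expects):

```shell
grep -i '^PermitRootLogin' /etc/ssh/sshd_config   # should allow root with keys
ssh-copy-id root@"$(hostname -f)"                 # install the key for ssh-to-self
ssh -o BatchMode=yes root@"$(hostname -f)" true \
    && echo "passwordless root ssh to self: OK"
```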
On Thu, Oct 25, 2018, 8:42 AM Jayme, wrote:
> Darn autocorrect, sshd config rather
>
> On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski, <
> jprokopow...@gmail.com> wrote:
>
>> Hi,
>>
Darn autocorrect, sshd config rather
On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski,
wrote:
> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to create oVirt hyperconverged on 3 hosts using
> cockpit wizard:
>
>
>
It looks to me like a fairly obvious ssh problem. Are the ssh keys setup
for root user and permitrootlogin yes in Asheville config?
On Thu, Oct 25, 2018, 7:29 AM Jarosław Prokopowski,
wrote:
> Hi,
>
> Please help! :-) I couldn't find any solution via google.
>
> I followed this document to
seeing... Storage costs cpu/ram cycles too
>
> On Sat, Oct 20, 2018 at 7:29 PM Donny Davis wrote:
>
>> I am not trying to be sarcastic here, but the host resources are
>> controlled by what you allocate to the vm... that is kinda how
>> virtualization works
>>
>>
I'm wondering how I can best limit the ability of VMs to overrun the load
on hosts. I have a fairly stock 4.2 HCI setup with three well spec'ed
servers, 10GbE/SSDs, plenty of RAM and CPU with only a handful of light
use VMs. I notice when the occasional demanding job is run on a VM I'm
seeing
It's no longer there; I use the noVNC option for accessing consoles in a
browser, works great
On Thu, Oct 11, 2018, 5:15 AM , wrote:
> Hello!
> Strange, but i have no spice-html5 option in vm console settings.
> https://prnt.sc/l4qz00
> Should i add a spice proxy for this?
>
> Version 4.2.6.4-1.el7
You should be using shared external storage or glusterfs; if gluster, you
should have other drives in the server to provision as gluster bricks
during the hyperconverged deployment
On Mon, Oct 8, 2018, 8:07 AM Stefano Danzi, wrote:
> Hi! It's the first time that I use node.
>
> I installed node
Upgrading the engine is fairly straightforward. What I do is place the
cluster in global maintenance mode, then on the engine VM yum update the
Ovirt packages and run the engine upgrade. After the upgrade I do a general
yum update on the engine VM to update other non-Ovirt packages
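For reference, the documented minor-upgrade sequence of that era looked roughly like this (command names follow the 4.2 upgrade guide; double-check against the release notes for your version):

```shell
hosted-engine --set-maintenance --mode=global   # on any host
# then, on the engine VM:
engine-upgrade-check          # confirm an upgrade is available
yum update ovirt\*setup\*     # pull in the new setup packages
engine-setup                  # run the actual engine upgrade
yum update                    # remaining non-oVirt packages
# finally, back on a host:
hosted-engine --set-maintenance --mode=none
```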
On Thu, Sep 20, 2018,
minik Holler wrote:
>
>> On Tue, 18 Sep 2018 19:10:48 -0300
>> Jayme wrote:
>>
>> > I changed engine and host ips to a totally different subnet. My
>>
>> The way to change the IP addresses of the hosts via oVirt UI is
>> Compute > Hosts >
I changed engine and host ips to a totally different subnet. My cluster is
up and engine is working but I'm seeing that network is out of sync. If I
attempt to sync the network it's changing the host ips back to the old
subnet. I assume I changed the IPs improperly. How can I update the Ovirt
I had similar problems until I clicked "optimize volume for vmstore" in the
admin GUI for each data volume. I'm not sure if this is what is causing
your problem here but I'd recommend trying that first. It is supposed to be
optimized by default but for some reason my ovirt 4.2 cockpit deploy did
Should also note that I am using replica 3 configuration with no arbiter
for extra redundancy.
On Wed, Sep 12, 2018 at 3:53 PM Jayme wrote:
> I am running a three server oVirt hyperconverged setup with JBOD disks.
> Two 2TB SSDs in JBOD per server. The configuration is working ver
I am running a three server oVirt hyperconverged setup with JBOD disks.
Two 2TB SSDs in JBOD per server. The configuration is working very well
for me so far.
On Wed, Sep 12, 2018 at 2:36 PM Donny Davis wrote:
> JBOD is on the drop down when you do the setup for the volumes
>
> On Wed, Sep
You don't really need both a data and a vmstore domain. Vmstore I believe
is meant to be the new ISO domain, but even it is not needed, as all data
domains act the same. You can use separate data and vmstore domains because
it will give you greater flexibility in terms of backing up the volumes so
you can choose
You do not need to define the gluster IPs or hostnames during the initial
deployment. You deploy first, then you set up the gluster network after.
Search for the up and running with Ovirt 4.2 glusterfs guide; it's slightly
dated but goes over how to set up the separate gluster network.
On Tue, Sep
I've been seeing these warnings myself, on the 1Gb ovirt management network
(glusterFS is on a 10GbE backend). I haven't correlated to network graphs
yet, but I don't
know what would be happening on my management network that would be
exhausting 1Gb network.
On Fri, Aug 31, 2018 at 3:27 AM Florian Schmid wrote:
Hello,
That video has good information but unfortunately it's about Site to Site
DR, not GlusterFS georeplication. I'm looking for information regarding
how to configure GlusterFS replication for use as disaster recovery.
On Tue, Aug 28, 2018 at 2:32 PM femi adegoke
wrote:
> That youtube
Is it expected that choosing georeplication --> new in the oVirt GUI does
nothing, or is that a bug?
On Tue, Aug 28, 2018, 2:02 PM femi adegoke, wrote:
> https://www.youtube.com/watch?v=UH8B7Nek0Nc
Is there an updated guide for setting up GlusterFS geo-replication? What I
am interested in is having another oVirt setup on a separate server with
glusterFS volume replicated to it. If my primary cluster went down I would
be able to start important VMs on the secondary oVirt build until I'm
Actually I just figured out what was causing this. I disabled root logins
in sshd config.
On Mon, Aug 20, 2018 at 2:28 PM Jayme wrote:
> I have a fairly recent three node HCI setup running 4.2.5. I've recently
> updated hosted engine to the latest version (including yum updates). Wh