eating the last space you got left ... so be quick :)
P.S.2: I hope you know that the only supported volume types are
'distributed-replicated' and 'replicated' :)
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020 at 11:02:10 GMT+2, supo...@logicworks.pt wrote:
Hello Gobinda,
I know that Gluster can easily convert a distributed volume to a replica volume,
so why is it not possible to first convert to replica and then add the nodes as
HCI?
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020 at 08:20:56 GMT+2, Gobinda Das wrote:
I found this in the SPAM folder ... maybe it's not relevant any more.
My guess is that you updated Chrome recently and they changed something :)
In my case (openSUSE Leap 15), it was just an ad-blocker, but I guess your
Chrome version could be newer.
Best Regards,
Strahil Nikolov
On
volume info
Best Regards,
Strahil Nikolov
On Thursday, 15 October 2020 at 16:55:27 GMT+3, wrote:
Hello,
I just added a second brick to the volume. Now I have 10% free, but still cannot
delete the disk. Still the same message:
VDSM command DeleteImageGroupVDS failed: Could not
Best Regards,
Strahil Nikolov
ose-local to 'yes'
- Any errors and warnings in the gluster logs ?
Best Regards,
Strahil Nikolov
On Thursday, 22 October 2020 at 13:59:04 GMT+3, wrote:
Hello,
For example, a Windows machine runs too slow; usually the disk is always at 100%.
The group virt setti
Hm... interesting case.
Have you tried to set it into maintenance ? Setting a domain to maintenance
forces oVirt to pick another domain for master.
Best Regards,
Strahil Nikolov
On Friday, 23 October 2020 at 19:34:19 GMT+3, supo...@logicworks.pt wrote:
When data (Master
Can you try to set the destination host into maintenance and then 'reinstall'
from the web UI drop down ?
Best Regards,
Strahil Nikolov
On Friday, 23 October 2020 at 18:00:07 GMT+3, Anton Louw via Users wrote:
Apologies, I should also add that the destination node
Most probably, but I have no clue.
You can set the host into maintenance and then activate it, so the volume gets
mounted properly.
Best Regards,
Strahil Nikolov
On Friday, 23 October 2020 at 03:16:42 GMT+3, Simon Scott wrote:
Hi Strahil,
All networking configs have
that is separate
from test :)
Best Regards,
Strahil Nikolov
On Thursday, 22 October 2020 at 14:00:52 GMT+3, supo...@logicworks.pt wrote:
Hello,
For example, a Windows machine runs too slow; usually the disk is always at 100%.
The group virt settings are these?:
performance.quick-read
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV from the host perspective.
Of course, I could be wrong and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
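If you want to check from the host side, the standard LVM commands should show
the layout (nothing oVirt-specific, just an illustration):
# the SAN LUN should show up as a PV
pvs -o pv_name,vg_name,pv_size
# and each VM disk should be an LV inside that VG
lvs -o lv_name,vg_name,lv_size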
Best Regards,
Strahil Nikolov
On Thursday, 22 October
Hi Didi,
thanks for the info - I learned it the hard way (trial & error) and so far it
was working.
Do we have an entry about that topic in the documentation ?
Best Regards,
Strahil Nikolov
On Thursday, 22 October 2020 at 08:27:08 GMT+3, Yedidyah Bar David wrote:
On
is quite important and missed.
Best Regards,
Strahil Nikolov
On Wednesday, 21 October 2020 at 22:35:21 GMT+3, Alex McWhirter wrote:
In my experience, the oVirt-optimized defaults are fairly sane. I may change a
few things like enabling read-ahead or increasing the shard size
Usually, oVirt uses the 'virt' group of settings.
What are your symptoms ?
Best Regards,
Strahil Nikolov
On Wednesday, 21 October 2020 at 16:44:50 GMT+3, supo...@logicworks.pt wrote:
Hello,
Can anyone help me with how I can improve the performance of GlusterFS to work
with oVirt
Have you checked the ovirt_host_network ansible module ?
It has a VLAN example, and I guess you can loop over all the VLANs.
Best Regards,
Strahil Nikolov
On Wednesday, 21 October 2020 at 11:12:53 GMT+3, kim.karga...@noroff.no wrote:
Hi all,
We have oVirt 4.3, with 11 hosts
I usually run the following (HostedEngine):
[root@engine ~]# su - postgres
-bash-4.2$ source /opt/rh/rh-postgresql10/enable
-bash-4.2$ psql engine
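Once connected you can look around with the usual psql built-ins before running
anything serious, e.g.:
engine=# \dt
engine=# \q
(\dt lists the engine's tables, \q quits - both are plain psql meta-commands.)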
How did you try to access the Engine's DB ?
Best Regards,
Strahil Nikolov
On Tuesday, 20 October 2020 at 17:00:37 GMT+3
the mkfs.xfs part.
https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
Best Regards,
Strahil Nikolov
On Tuesday, 20 October 2020 at 13:36:58 GMT+3, harryo...@gmail.com wrote:
Hi,
When I want to use ZFS for software RAID on my oVirt nodes instead of a
hardware RAID
I would go to the UI and identify the host with the 'SPM' flag.
Then you should check the vdsm logs on that host (/var/log/vdsm/).
Best Regards,
Strahil Nikolov
On Thursday, 15 October 2020 at 20:19:57 GMT+3, supo...@logicworks.pt wrote:
Hello,
When I Enable Gluster Service
What is the output of:
df -h /rhev/data-center/mnt/glusterSD/server_volume/
gluster volume status volume
gluster volume info volume
In the "df" you should see the new space or otherwise you won't be able to do
anything.
Best Regards,
Strahil Nikolov
On Thursday, 15 October 2020
istributed-replicate"
Nope, as far as I know - only when you have 3 copies of the data ('replica 3'
only).
Best Regards,
Strahil Nikolov
On Wed, Oct 14, 2020 at 7:34 AM C Williams wrote:
> Thanks Strahil !
>
> More questions may follow.
>
> Thanks Again For Your
Hi,
I would start with
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/
.
It might have some issues, as 4.4 is quite fresh and dynamic, but you just need
to ping the community for help over e-mail.
Best Regards,
Strahil Nikolov
migration.
Best Regards,
Strahil Nikolov
On Wednesday, 14 October 2020 at 12:42:44 GMT+3, Gilboa Davara wrote:
Hello all,
I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the use case for this cluster is somewhat different
f building gluster volumes, as the UI's primary focus is oVirt (quite
natural, right).
Best Regards,
Strahil Nikolov
On Wednesday, 14 October 2020 at 12:30:42 GMT+3, Jarosław Prokopowski wrote:
Thanks. I will get rid of multipath.
I did not set performance.strict-o-direct specific
only 'replica 3' volumes - just to keep that in
mind.
From my perspective, JBOD is suitable for NVMes/SSDs, while spinning disks
should be in a RAID of some type (maybe RAID10 for performance).
Best Regards,
Strahil Nikolov
On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams wrote:
/O, so you should not
lose any data.
Best Regards,
Strahil Nikolov
On Tuesday, 13 October 2020 at 16:35:26 GMT+3, Jarosław Prokopowski wrote:
Hi Nikolov,
Thanks for the very interesting answer :-)
I do not use any raid controller. I was hoping glusterfs would take care
I have seen a lot of users use anongid=36,anonuid=36,all_squash to force
the vdsm:kvm ownership on the system.
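A minimal sketch of such an export line (the exported path is made up; 36 is
the vdsm uid / kvm gid on the hypervisors):
/exports/data *(rw,sync,all_squash,anonuid=36,anongid=36)
Run 'exportfs -ra' after editing /etc/exports to apply it.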
Best Regards,
Strahil Nikolov
On Monday, 12 October 2020 at 21:40:42 GMT+3, Amit Bawer wrote:
On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer wrote
so it's suitable to start a VM during the Engine's downtime.
Best Regards,
Strahil Nikolov
On Monday, 12 October 2020 at 13:36:31 GMT+3, Budur Nagaraju wrote:
Hi
Is there a way to deploy VMs on the oVirt node without using the oVirt e
://access.redhat.com/solutions/3093891
Best Regards,
Strahil Nikolov
On Sunday, 11 October 2020 at 18:41:25 GMT+3, Jeremey Wise wrote:
I have a pair of nodes which service DNS / NTP / FTP / AD / Kerberos / IPLB etc..
ns01, ns02
These two "infrastructure" VMs have HAProxy and pace
n the same network ?
What about DNS resolution - do you have entries in /etc/hosts ?
Best Regards,
Strahil Nikolov
On Sunday, 11 October 2020 at 11:54:47 GMT+3, Simon Scott wrote:
Thanks Strahil.
I have found between 1 & 4 Gluster peer rpc-clnt-ping timer expired messages
Hi Jiri,
I already opened a feature request,
https://bugzilla.redhat.com/show_bug.cgi?id=1881457, that is about something
similar.
Can you check if your case was similar and update the request ?
Best Regards,
Strahil Nikolov
On Saturday, 10 October 2020 at 23:48:01 GMT+3, Jiří Sléžka
start flushing memory to disk
and when to block any process until all memory is flushed.
Best Regards,
Strahil Nikolov
On Saturday, 10 October 2020 at 18:18:55 GMT+3, Jarosław Prokopowski wrote:
Thanks Strahil
The data center is remote so I will definitely ask the lab g
I guess you tried to ssh to the HostedEngine and then ssh to the host, right?
Best Regards,
Strahil Nikolov
On Saturday, 10 October 2020 at 02:28:35 GMT+3, Gianluca Cecchi wrote:
On Fri, Oct 9, 2020 at 7:12 PM Martin Perina wrote:
>
>
> Could you please share wi
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience something like that situation, please share the logs
immediately and add the gluster mailing list - in order to get assistance with
the root cause.
Best Regards,
Strahil
Hi Simon,
I doubt the system needs tuning from a network perspective.
I guess you can run some 'screen's which are pinging another system and logging
everything to a file.
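Something like this, for example (host name and log path are made up):
screen -dmS netwatch bash -c 'ping -D other-node >> /var/log/ping-other-node.log 2>&1'
'ping -D' prefixes every reply with a timestamp, and 'screen -dmS' starts the
session detached; attach later with 'screen -r netwatch'.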
Best Regards,
Strahil Nikolov
On Friday, 9 October 2020 at 01:05:22 GMT+3, Simon Scott wrote:
Thanks
I have seen many "checks" that are "OK"...
Have you checked that backups are not used over the same network ?
I would disable the power management (fencing), so I can find out what has
happened to the systems.
Best Regards,
Strahil Nikolov
On Thursday, 8 October
in the virt group (/var/lib/glusterd/groups/virt - or something like
that).
Best Regards,
Strahil Nikolov
On Thursday, 8 October 2020 at 15:16:10 GMT+3, Jarosław Prokopowski wrote:
Hi Guys,
I had a situation 2 times where, due to an unexpected power outage, something
went wrong and VMs
?
- Have you checked the gluster cluster's logs for anything meaningful ?
Best Regards,
Strahil Nikolov
Hi Michael,
I'm running 4.3.10 and I can confirm that Opteron_G5 was not removed.
What is reported by 'virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf capabilities'
on both hosts ?
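To compare just the CPU model lists, something like this should do (same
authfile; 'cpu-models' is a standard virsh subcommand):
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf \
cpu-models x86_64 | grep -i opteron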
Best Regards,
Strahil Nikolov
On Wednesday, 7 October 2020 at 00:06:08 GMT+3
Hello All,
can someone send me the full link (not the short one) as my proxy is blocking
it :)
Best Regards,
Strahil Nikolov
On Tuesday, 6 October 2020 at 15:26:57 GMT+3, Sandro Bonazzola wrote:
Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7
I would put it in the yum.conf and export it as "http_proxy" & "https_proxy"
system variables.
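A minimal sketch (the proxy URL is made up):
# /etc/yum.conf
proxy=http://proxy.example.com:3128
# and in the shell environment:
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128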
Best Regards,
Strahil Nikolov
On Tuesday, 6 October 2020 at 12:39:22 GMT+3, Gianluca Cecchi wrote:
Hello,
I'm testing upgrade from 4.3.10 to 4.4.2
> And of course I want Gluster to switch between single node, replication and
> dispersion seamlessly and on the fly, as well as much better diagnostic tools.
Actually Gluster can switch from distributed to
replicated/distributed-replicated on the fly.
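For example, turning a single-brick distributed volume into a replica 3 one is
just an add-brick with the new replica count (volume name and brick paths are
made up):
gluster volume add-brick myvol replica 3 \
host2:/gluster_bricks/myvol/brick host3:/gluster_bricks/myvol/brick
gluster volume heal myvol full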
Best Regards,
Str
Regards,
Strahil Nikolov
On Saturday, 3 October 2020 at 16:50:24 GMT+3, Michael Jones wrote:
to get these two hosts into a cluster would I need to castrate them down
to Nehalem, or would I be able to botch the DB for the 2nd host from
"EPYC-IBPB" to "Opteron_G5"?
Have you tried to set the host into maintenance and then "Enroll Certificates"
from the UI ?
Best Regards,
Strahil Nikolov
On Friday, 2 October 2020 at 12:27:19 GMT+3, momokch--- via Users wrote:
hello everyone,
my ovirt-engine and host certificates are expired,
Verify that your host is really down (or at least rebooted) and then in the UI
you can 'confirm: Host has been rebooted' from the dropdown.
This should mark all your VMs as dead.
Best Regards,
Strahil Nikolov
On Friday, 2 October 2020 at 12:03:31 GMT+3, Vrgotic, Marko wrote:
What kind of setting do you want to change ?
Maybe I misunderstood you. The 'scheduling_policy' requires a predefined
scheduling policy and 'scheduling_policy_properties' allows you to override the
score of a setting (like 'Memory').
Best Regards,
Strahil Nikolov
On Thursday, 1
Based on
'https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html'
there are options 'scheduling_policy' & 'scheduling_policy_properties'.
Maybe they were recently introduced.
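An untested sketch of how that could look (cluster name and policy are
placeholders; it assumes you already obtained 'ovirt_auth' via the
ovirt.ovirt.ovirt_auth login task):
cat > set_cluster_policy.yml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: Set the cluster scheduling policy
      ovirt.ovirt.ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: Default
        scheduling_policy: evenly_distributed
EOF
ansible-playbook set_cluster_policy.yml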
Best Regards,
Strahil Nikolov
On Thursday, 1 October 2020 at 17:24:25 GMT
-and-updating-the-kernel
https://access.redhat.com/solutions/3710121
Best Regards,
Strahil Nikolov
On Thursday, 1 October 2020 at 16:12:52 GMT+3, Mike Lindsay wrote:
Hey Folks,
I've got a bit of a strange one here. I downloaded and installed
ovirt-node-ng-installer-4.4.2
ation.
Best Regards,
Strahil Nikolov
On Thursday, 1 October 2020 at 07:36:24 GMT+3, Jeremey Wise wrote:
I have for many years used gluster because..well. 3 nodes.. and so long as I
can pull a drive out.. I can get my data.. and with three copies.. I have much
higher chance of ge
Regards,
Strahil Nikolov
On Wednesday, 30 September 2020 at 22:55:40 GMT+3, Jeremey Wise wrote:
As the three servers are CentOS 8 minimal installs + oVirt HCI wizard to keep
them lean and mean... a couple of questions
1) which version of python would I need for this (note in script about
As I mentioned, I would use a systemd service to start the ansible play (or a
script running it).
Best Regards,
Strahil Nikolov
On Wednesday, 30 September 2020 at 22:15:17 GMT+3, Jeremey Wise wrote:
I would like to eventually go the Ansible route.. and was starting down that
path
}} after snapshot restore
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    state: running
    name: "{{ item }}"
  loop:
    - VM1
    - VM2
Yeah, you have to fix the tabulations (both Ansible and Python are a pain in
the ***).
Best Regards,
Strahil Nikolov
On Wednesday,
Also consider setting a reasonable 'TimeoutStartSec=' in your systemd service
file when you create the service...
Best Regards,
Strahil Nikolov
On Wednesday, 30 September 2020 at 20:18:01 GMT+3, Strahil Nikolov via Users wrote:
I would create an ansible playbook
If you can do it from the CLI - use the CLI, as it has far more control than
what the UI can provide.
Usually I use the UI for monitoring and basic stuff like starting/stopping a
brick or setting the 'virt' group via 'Optimize for Virt' (or whatever it
was called).
Best Regards,
Strahil Nikolov
. Test the playbook
4. Create a oneshot systemd service that starts after 'ovirt-engine.service' and
runs your playbook
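A rough sketch of such a unit (file name, playbook path and timeout are made
up; this ties in with the 'TimeoutStartSec=' note above):
# /etc/systemd/system/start-my-vms.service
[Unit]
Description=Start the VMs once the engine is up
After=ovirt-engine.service
Wants=ovirt-engine.service
[Service]
Type=oneshot
TimeoutStartSec=900
ExecStart=/usr/bin/ansible-playbook /root/start_vms.yml
[Install]
WantedBy=multi-user.target
Then 'systemctl daemon-reload && systemctl enable start-my-vms.service'.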
Best Regards,
Strahil Nikolov
On Wednesday, 30 September 2020 at 18:27:13 GMT+3, Jeremey Wise wrote:
When I have to shut down the cluster... UPS runs out, etc.. I need
In your case it seems reasonable, but you should test the 2 stripe sizes (128K
vs 256K) before running in production. The good thing about replica volumes is
that you can remove a brick, recreate it from the CLI and then add it back.
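For example (volume, host and brick path are made up; remove-brick with a
lowered replica count requires 'force'):
gluster volume remove-brick myvol replica 2 host3:/gluster_bricks/myvol/brick force
# re-create the brick filesystem with the stripe size under test, then:
gluster volume add-brick myvol replica 3 host3:/gluster_bricks/myvol/brick
gluster volume heal myvol full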
Best Regards,
Strahil Nikolov
On Wednesday, 30 September 2020
You can use this ansible module and assign your scheduling policy:
https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html
Best Regards,
Strahil Nikolov
On Wednesday, 30 September 2020 at 11:36:01 GMT+3, Kushagra Agarwal wrote:
I was hoping if i
Are you trying to use the same storage domain ?
I hope not, as this is not supposed to be done like that. As far as I remember -
you need fresh storage.
Best Regards,
Strahil Nikolov
On Tuesday, 29 September 2020 at 20:07:51 GMT+3, Sergey Kulikov wrote:
Hello, I'm trying
I got the same behaviour with the Adblock Plus add-on.
Try in incognito mode (or with disabled plugins / a fresh browser).
Best Regards,
Strahil Nikolov
On Tuesday, 29 September 2020 at 18:50:05 GMT+3, Philip Brown wrote:
I have an odd situation:
When I go to
https://ovengine
,
Strahil Nikolov
On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams wrote:
Hello,
We have decided to get a 6th server for the install. I hope to set up a 2x3
distributed replica 3.
So we are not going to worry about the "5 server" situation.
Thank You All For
low.
My VMs have 4 disks in a RAID0 (boot) and a striped LV (for "/").
Best Regards,
Strahil Nikolov
and having a
'replica 3 arbiter 1' volume.
Best Regards,
Strahil Nikolov
On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme wrote:
It might be possible to do something similar as described in the documentation
here:
https://access.redhat.com/documentation/en-us
You cannot have 2 IPs for 2 different FQDNs.
You have to use something like:
172.16.100.101 thor.penguinpages.local thor thorst
Fix your /etc/hosts or you should use DNS.
Best Regards,
Strahil Nikolov
On Monday, 28 September 2020 at 03:41:17 GMT+3, Jeremey Wise wrote:
when
- Memory Overcommitment Manager Daemon
Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor preset:
disabled)
Active: inactive (dead)
What is the status of mom-vdsm.service ?
Best Regards,
Strahil Nikolov
On Sunday, 27 September 2020 at 10:06:39 GMT+3, duhongyu wrote:
Regards,
Strahil Nikolov
On Saturday, 26 September 2020 at 21:44:28 GMT+3, matthew.st...@fujitsu.com wrote:
I have created a three host oVirt cluster using 4.4.2.
I created an ISO storage domain to hold my collection of ISO images, and then
decided to migrate
Hi Jeremey,
I am not sure that I completely understand the problem.
Can you provide the Host details page from UI and the output of:
'gluster pool list' & 'gluster peer status' from all nodes ?
Best Regards,
Strahil Nikolov
On Saturday, 26 September 2020 at 20:31:23 GMT+3, Jeremey
Importing is done from the UI (Admin portal) -> Storage -> Domains -> newly added
domain -> "Import VM" -> select the VM and you can import.
Keep in mind that it is easier to import if all VM disks are on the same
storage domain (I've opened an RFE for multi-domain import).
Since oVirt 4.4, the stage that deploys the oVirt node/host adds an LVM
filter in /etc/lvm/lvm.conf, which is the reason behind that.
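On a 4.4 host you can inspect (and regenerate) it like this - 'vdsm-tool
config-lvm-filter' is the vdsm subcommand that manages the filter:
grep -E '^[[:space:]]*filter' /etc/lvm/lvm.conf
vdsm-tool config-lvm-filter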
Best Regards,
Strahil Nikolov
On Friday, 25 September 2020 at 20:52:13 GMT+3, Staniforth, Paul wrote:
Thanks,
the gluster
Repeat the procedure, and remember never to wipe 2 nodes at a time :)
Good luck, and take a look at "Quick Start Guide - Gluster Docs".
Best Regards,
Strahil Nikolov
whatever libvirt puts it).
> 3) I know that you can back up the engine. If I had been a smart person, how
> does one back up and recover from this kind of situation? Does anyone have
> any guides or good articles on this?
https://www.ovirt.
rks without password.
The Engine is not running on the host; it is running in a VM called HostedEngine,
and that VM has to be able to reach the host over ssh.
Did you do any ssh hardening ?
Best Regards,
Strahil Nikolov
Have you checked the oVirt 2020 conference videos ?
There was a slot exactly on this topic - I think Ansible was used for automatic
upgrade.
I prefer the manual approach, as I have full control over the environment.
Best Regards,
Strahil Nikolov
On Friday, 25 September 2020 at 02:23:49
gine and issue a
'reboot' and the ovirt-ha-agent on one of the hosts will bring it up, or use
the 'hosted-engine' utility to shutdown and power up the VM.
About the engine not detecting a node up - check if the vdsm.service is running
on the node.
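i.e. something like:
hosted-engine --vm-status
hosted-engine --vm-shutdown    # graceful shutdown of the HE VM
hosted-engine --vm-start
systemctl status vdsmd         # the vdsm daemon on the node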
Best Regards,
Strahil Nikolov
Once a host is in oVirt, you should not change the network ... or that's what
I have been told.
You should remove the host from oVirt, do your configurations and then add the
host back.
Best Regards,
Strahil Nikolov
On Thursday, 24 September 2020 at 01:43:40 GMT+3, wodel youchi
I guess 'yum reinstall vdsm-gluster'.
Best Regards,
Strahil Nikolov
On Wednesday, 23 September 2020 at 22:07:58 GMT+3, Jeremey Wise wrote:
Trying to repair / clean up HCI deployment so it is HA and ready for
"production".
I have gluster now showing three bricks
As far as I know, there is automation to do it for you.
Best Regards,
Strahil Nikolov
On Wednesday, 23 September 2020 at 21:41:13 GMT+3, Vincent Royer wrote:
well that sounds like a risky nightmare. I appreciate your help.
Vincent Royer
/block/device'.
Once all volumes are again replica 3, just wait for the healing to finish
and you can proceed with the oVirt part.
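Healing progress can be watched with (volume name is made up):
gluster volume heal myvol info
gluster volume heal myvol statistics heal-count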
Best Regards,
Strahil Nikolov
On Wednesday, 23 September 2020 at 20:45:30 GMT+3, Vincent Royer wrote:
My confusion is that those docume
> Can someone show me where to get logs? The GUI log when I try to "activate"
> thor server - "Status of host thor was set to NonOperational." "Gluster
> command [] failed on server ." - is very unhelpful
In my setup, I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#
P.S.: Don't forget to run 'dracut -f', since the initramfs keeps a
local copy of lvm.conf.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 23:05
Most probably there is an option to tell it (I mean oVirt) the exact keys to be
used.
Yet, give the engine a gentle push and reboot it - just to be sure you are not
chasing a ghost.
I'm using self-signed certs and I can't help much in this case.
Best Regards,
Strahil Nikolov
On Tuesday
to use only gluster it could be far easier to set:
[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
}
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 22:12:21 GMT+3, Nir Soffer wrote:
On Tue, Sep 22, 2020
terfs 2.4T 535G 1.9T
23% /rhev/data-center/mnt/glusterSD/gluster1:_data
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 19:44:54 GMT+3, Jeremey Wise wrote:
Yes.
And at one time it was fine. I did a graceful shutdown.. and after booting it
always seems to now
oVirt 4.4 requires EL8.2, so no, you cannot go to 4.4 without upgrading the OS
to EL8.
Yet, you can still bump the version to 4.3.10, which is still EL7-based and
works quite well.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 17:39:52 GMT+3, wrote:
Hi
By the way, did you add the third host in oVirt ?
If not, maybe that is the real problem :)
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 17:23:28 GMT+3, Jeremey Wise wrote:
It's like oVirt thinks there are only two nodes in gluster replication
# Yet
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).
I guess some of the engine's internal processes crashed/looped and it doesn't
see the reality.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 16:27:25 GMT+3, Jeremey Wise wrote:
n the bugzilla.redhat.com for each OS type (for example 1 for
SLES/openSUSE and 1 for EL7/EL8-based).
Best Regards,
Strahil Nikolov
ick status.
https://github.com/gluster/gstatus is a good one to verify your cluster health,
yet a human's touch is priceless in any kind of technology.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 15:50:35 GMT+3, Jeremey Wise wrote:
when I posted last.. in the tre
dig deeper
if you suspect a network issue.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 15:50:35 GMT+3, Jeremey Wise wrote:
when I posted last.. in the thread I pasted a rolling restart. And... now it
is replicating.
oVirt is still showing wrong. BUT..
At around Sep 21 20:33 local time, you got a loss of quorum - that's not good.
Could it be a network 'hiccup' ?
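To dig around that window, something like this (adjust the timestamps):
journalctl --since '2020-09-21 20:25' --until '2020-09-21 20:40'
grep -i quorum /var/log/glusterfs/glusterd.log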
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 15:05:16 GMT+3, Jeremey Wise wrote:
I did.
Here are all three nodes with restart. I find it odd
referring
to the UI)
- Activate the host once it was moved to maintenance.
Wait for the host's HE score to recover (silver/gold crown in the UI) and then
proceed with the next one.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 14:55:35 GMT+3, Jeremey Wise wrote:
I
ond (or the overhead will be crazy).
Maybe you can extend the Gluster volume temporarily, till you manage to move
the VM away to bigger storage. Then you can reduce the volume back to its
original size.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 14:53:53 GMT+3, s
Hi Eyal,
thanks for the reply - all the proposed options make sense.
I have opened an RFE -> https://bugzilla.redhat.com/show_bug.cgi?id=1881457,
but can you verify that the product/team are the correct ones ?
Strahil Nikolov
On Tuesday, 22 September 2020 at 12:55:56 GMT
around and not a fix.
Are you using oVirt 4.3 or 4.4 ?
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 10:08:44 GMT+3, Vinícius Ferrão wrote:
Hi Strahil, yes I can't find anything recent either. You dug way further
than me; I found some regressions in the kernel
locations to identify the issue.
Anything interesting in libvirt's log for the HostedEngine.xml on the
destination host ?
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 05:37:18 GMT+3, ddqlo wrote:
Yes. I can. The host which does not host the HE could be r
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer, and restarting it won't affect the brick
processes.
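i.e.:
systemctl restart glusterd
gluster volume status    # the bricks should stay online throughout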
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020 at 01:43:36 GMT+3, Jeremey Wise wrote:
Start is not an option.
It notes two bricks
The interesting thing is that I don't find anything recent, but this one:
https://devblogs.microsoft.com/oldnewthing/20120511-00/?p=7653
Can you check if anything in the OS was updated/changed recently ?
Also check if the VM has nested virtualization enabled.
Strahil Nikolov
On
Usually libvirt's log might provide hints (yet, no clues) of any issues.
For example:
/var/log/libvirt/qemu/.log
Anything changed recently (maybe oVirt version was increased) ?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020 at 23:28:13 GMT+3, Vinícius Ferrão wrote:
Just select the volume and press "Start". It will automatically mark "force
start" and will fix itself.
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020 at 20:53:15 GMT+3, Jeremey Wise wrote:
oVirt engine shows one of the gluster servers
disks on them), but I guess that is not an option - right ?
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020 at 15:58:01 GMT+3, supo...@logicworks.pt wrote:
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a ded
For some OS versions, oVirt's behavior is accurate, but for other
versions it's not.
I think that it is more accurate to say that oVirt improperly calculates memory
for SLES 15/openSUSE 15.
I would open a bug at bugzilla.redhat.com.
Best Regards,
Strahil Nikolov
On
ns I would have
to import the VM the first time, just to delete it and import it again - so I
can get my VM disks from the storage...
Best Regards,
Strahil Nikolov
On Monday, 21 September 2020 at 11:47:04 GMT+3, Eyal Shenitzky wrote:
Hi Strahil,
Maybe those VMs have