I found this in the SPAM folder ... maybe it's not relevant any more.
My guess is that you updated Chrome recently and they changed something :)
In my case (openSUSE Leap 15), it was just an ad-blocker, but I guess your
Chrome version could be newer.
Best Regards,
Strahil Nikolov
So what is the output of "df" against:
- all bricks in the volume (all nodes)
- on the mount point in /rhev/mnt/
Usually, adding a new brick (per host) to a replica 3 volume should provide
you with more space.
Also what is the status of the volume:
gluster volume status
gluster volume info
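Those checks can be looped from the shell. A minimal sketch - the brick paths in the comment are placeholders, take the real ones from 'gluster volume info':

```shell
# report_space PATH...
# Runs "df -h" against every given brick / mount point and flags
# anything that cannot be stat'ed (e.g. a brick that is not mounted).
report_space() {
    for p in "$@"; do
        df -h "$p" 2>/dev/null || echo "WARN: cannot stat $p"
    done
}

# Hypothetical paths -- substitute your bricks and the /rhev mount:
#   report_space /gluster_bricks/data/brick /rhev/data-center/mnt/glusterSD/node1:_data
```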
Hello All,
I would like to learn more about OVN and especially the maximum MTU that I can
use in my environment.
Current Setup 4.3.10
Network was created via UI -> MTU Custom -> 8976 -> Create on External Provider
-> Connect to Physical Network
So my physical connection is MTU 9000 and I have
It seems that your e-mail went to spam.
I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM on that storage domain and verify performance ?
- Can you create/migrate same OS-type VM and check performance?
- What about running
Hm... interesting case.
Have you tried to set it into maintenance ? Setting a domain to maintenance
forces oVirt to pick another domain for master.
Best Regards,
Strahil Nikolov
On Friday, October 23, 2020, 19:34:19 GMT+3, supo...@logicworks.pt wrote:
When data (Master) is
Can you try to set the destination host into maintenance and then 'reinstall'
from the web UI drop down ?
Best Regards,
Strahil Nikolov
On Friday, October 23, 2020, 18:00:07 GMT+3, Anton Louw via Users wrote:
Apologies, I should also add that the destination node is a
Most probably , but I have no clue.
You can set the host into maintenance and then activate it, so the volume gets
mounted properly.
Best Regards,
Strahil Nikolov
On Friday, October 23, 2020, 03:16:42 GMT+3, Simon Scott wrote:
Hi Strahil,
All networking configs have
Virt settings are those:
[root@ovirt1 slow]# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
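To compare a volume's live options against this group file, you can loop over its key=value lines. This is only an illustrative sketch, not the author's method; the getter command is pluggable so it can wrap 'gluster volume get <VOLNAME> <key>':

```shell
# check_group GET_CMD GROUP_FILE
# GET_CMD must print the current value of the option name passed to it.
check_group() {
    get=$1; file=$2
    while IFS='=' read -r key val; do
        [ -n "$key" ] || continue
        actual=$("$get" "$key")
        [ "$actual" = "$val" ] || echo "MISMATCH: $key is '$actual', expected '$val'"
    done < "$file"
}

# Real use could look like this (volume name "data" is hypothetical):
#   gget() { gluster volume get data "$1" | awk 'NR==3 {print $2}'; }
#   check_group gget /var/lib/glusterd/groups/virt
```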
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV from the host's perspective.
Of course, I could be wrong and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
Best Regards,
Strahil Nikolov
On Thursday, October 22
Wed, Oct 21, 2020 at 9:16 PM Strahil Nikolov via Users
wrote:
> I usually run the following (HostedEngine):
>
> [root@engine ~]# su - postgres
>
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable
This is applicable to 4.3, on el7. For 4.4 this isn't needed.
Also, IIRC this
I agree with Alex.
Also, most of the kernel tunables proposed in that thread are also available in
the tuned profiles provided by the redhat-storage-server source rpm available
at ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/
Usually the alignment of XFS on top of the HW RAID
Usually, oVirt uses the 'virt' group of settings.
What are your symptoms ?
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020, 16:44:50 GMT+3, supo...@logicworks.pt wrote:
Hello,
Can anyone help me in how can I improve the performance of glusterfs to work
with oVirt?
Have you checked the ovirt_host_network ansible module ?
It has a VLAN example and I guess you can loop over all the VLANs.
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020, 11:12:53 GMT+3, kim.karga...@noroff.no wrote:
Hi all,
We have Ovirt 4.3, with 11 hosts, and
I usually run the following (HostedEngine):
[root@engine ~]# su - postgres
-bash-4.2$ source /opt/rh/rh-postgresql10/enable
-bash-4.2$ psql engine
How did you try to access the Engine's DB ?
Best Regards,
Strahil Nikolov
On Tuesday, October 20, 2020, 17:00:37 GMT+3,
The ansible role for Gluster expects raw devices and then deploys them the
conventional way (forget about ZoL with that role).
I think that you can create and mount your filesystems and deploy gluster all
by yourself - it's not so hard ... Just follow Gluster's official documentation
and skip the
I would go to the UI and identify the host with the 'SPM' flag.
Then you should check the vdsm logs on that host (/var/log/vdsm/)
Best Regards,
Strahil Nikolov
On Thursday, October 15, 2020, 20:19:57 GMT+3, supo...@logicworks.pt wrote:
Hello,
When I Enable Gluster Service
What is the output of:
df -h /rhev/data-center/mnt/glusterSD/server_volume/
gluster volume status volume
gluster volume info volume
In the "df" you should see the new space or otherwise you won't be able to do
anything.
Best Regards,
Strahil Nikolov
On Thursday, October 15, 2020,
>Please clarify what are the disk groups that you are referring to?
Either Raid5/6 or Raid10 with a HW controller(s).
>Regarding your statement "In JBOD mode, Red Hat support only 'replica 3'
>>volumes." does this also mean "replica 3" variants ex.
>"distributed-replicate"
Nope. As far as I
Hi,
I would start with
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/
.
It might have some issues as 4.4 is quite fresh and dynamic, but you just need
to ping the community for help over the e-mail.
Best Regards,
Strahil Nikolov
Hi Gilboa,
I think that storage domains need to be accessible from all nodes in the
cluster - and as yours will be using local storage and yet be in a 2-node
cluster that will be hard.
My guess is that you can try the following cheat:
Create a single brick gluster volume and do some
strict-o-direct just allows the app to define if direct I/O is needed and yes,
that could be a reason for your data loss.
The good thing is that the feature is part of the virt group and there is an
"Optimize for Virt" button somewhere in the UI. Yet, I prefer the manual
approach of building
Imagine you got a host with 60 spinning disks -> I would recommend you to split
it into groups of 10-12 disks, and these groups will represent several bricks (5-6).
Keep in mind that when you start using many bricks (some articles state hundreds,
but no exact number was given), you should consider
One recommendation is to get rid of the multipath for your SSD.
Replica 3 volumes are quite resilient and I'm really surprised it happened to
you.
For the multipath stuff , you can create something like this:
[root@ovirt1 ~]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
wwid
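For reference, a complete file of that shape could look like this (the wwid below is a made-up placeholder - pull the real one with "multipath -v4 | grep 'got wwid of'"):

```
blacklist {
    wwid "nvme.0000-placeholder-wwid-0000"
}
```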
I have seen a lot of users use anongid=36,anonuid=36,all_squash to force
the vdsm:kvm ownership on the system.
Best Regards,
Strahil Nikolov
On Monday, October 12, 2020, 21:40:42 GMT+3, Amit Bawer wrote:
On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer wrote:
>
>
> On
Hi Badur,
theoretically it's possible as oVirt is just a management layer.
You can use 'virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' as an alias
of virsh and then you will be able to "virsh define yourVM.xml" & "virsh start
yourVM".
Also it's suitable to start a
Well there are a lot of Red Hat solutions about that one.
You will need a user in ovirt that will be used to restart the VMs.
In my case, I called it 'fencerdrbd' and it has been granted 'UserRole'
permissions on the systems in the pacemaker cluster.
Here is my stonith device , but keep in
Hi Simon,
Usually it is the network, but you need real-world data. I would open screen
sessions and run ping continuously. Something like this:
while true; do echo -n "$(date) "; timeout -s 9 1 ping -c 1 ovirt2 | grep
icmp_seq; sleep 1; done | tee -a /tmp/icmp_log
Are all systems in the same
Hi Jiri,
I already opened a feature request -
https://bugzilla.redhat.com/show_bug.cgi?id=1881457 - that is about something
similar.
Can you check if your case was similar and update the request ?
Best Regards,
Strahil Nikolov
On Saturday, October 10, 2020, 23:48:01 GMT+3, Jiří Sléžka
Hi Jaroslaw,
That point was from someone else. I don't think that gluster has such a weak
point. The only weak point I have seen is the infrastructure it relies on top
of and, of course, the built-in limitations it has.
You need to verify the following:
- mount options are important . Using
I guess you tried to ssh to the HostedEngine and then ssh to the host , right ?
Best Regards,
Strahil Nikolov
On Saturday, October 10, 2020, 02:28:35 GMT+3, Gianluca Cecchi wrote:
On Fri, Oct 9, 2020 at 7:12 PM Martin Perina wrote:
>
>
> Could you please share with us all
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience something like that situation, please share the logs
immediately and add the gluster mailing list - in order to get assistance with
the root cause.
Best Regards,
Strahil
Hi Simon,
I doubt the system needs tuning from network perspective.
I guess you can run some 'screen'-s which are pinging another system and
logging everything to a file.
Best Regards,
Strahil Nikolov
On Friday, October 9, 2020, 01:05:22 GMT+3, Simon Scott wrote:
Thanks
2020, 22:43:34 GMT+3, Strahil Nikolov via Users wrote:
>Every Monday and Wednesday morning there are gluster connectivity timeouts
>>but all checks of the network and network configs are ok.
Based on this one I make the following conclusions:
1. The issue is recurring
2. You
Hi Jaroslaw,
it's more important to find the root cause of the data loss, as this is
definitely not supposed to happen (I have had several power outages myself
without issues).
Do you keep the logs ?
For now, check if your gluster settings (gluster volume info VOL) match the
settings in the
>Every Monday and Wednesday morning there are gluster connectivity timeouts
>>but all checks of the network and network configs are ok.
Based on this one I make the following conclusions:
1. The issue is recurring
2. You most probably have a network issue
Have you checked the following:
- are
Hi Michael,
I'm running 4.3.10 and I can confirm that Opteron_G5 was not removed.
What is reported by 'virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf capabilities'
on both hosts ?
Best Regards,
Strahil Nikolov
On Wednesday, October 7, 2020, 00:06:08 GMT+3,
Hello All,
can someone send me the full link (not the short one) as my proxy is blocking
it :)
Best Regards,
Strahil Nikolov
On Tuesday, October 6, 2020, 15:26:57 GMT+3, Sandro Bonazzola wrote:
Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7)
I would put it in the yum.conf and export it as "http_proxy" & "https_proxy"
system variables.
Best Regards,
Strahil Nikolov
On Tuesday, October 6, 2020, 12:39:22 GMT+3, Gianluca Cecchi wrote:
Hello,
I'm testing upgrade from 4.3.10 to 4.4.2 for a standalone manager with
>And of course I want Gluster to switch between single node, replication >and
>dispersion seamlessly and on the fly, as well as much better >diagnostic tools.
Actually Gluster can switch from distributed to
replicated/distributed-replicated on the fly.
Best Regards,
Strahil Nikolov
Hi Mike,
In order to add them to a single cluster, you should set them to Opteron_G5
(my FX-8350 is also there), until you replace the host with something more
modern.
Of course , you can have your hosts in separate clusters - but then you won't
be able to live migrate your VMs.
Best
Have you tried to set the host into maintenance and then "Enroll Certificates"
from the UI ?
Best Regards,
Strahil Nikolov
On Friday, October 2, 2020, 12:27:19 GMT+3, momokch--- via Users wrote:
hello everyone,
my ovirt-engine and host certificates are expired, is it any
Verify that your host is really down (or at least rebooted) and then in the UI
you can 'Confirm: Host has been rebooted' from the dropdown.
This should mark all your VMs as dead.
Best Regards,
Strahil Nikolov
On Friday, October 2, 2020, 12:03:31 GMT+3, Vrgotic, Marko wrote:
What kind of setting do you want to change ?
Maybe I misunderstood you. The 'scheduling_policy' requires a predefined
scheduling policy and 'scheduling_policy_properties' allows you to override the
score of a setting (like 'Memory').
Best Regards,
Strahil Nikolov
On Thursday, October 1
Based on
'https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html'
there are the options 'scheduling_policy' & 'scheduling_policy_properties'.
Maybe they were recently introduced.
Best Regards,
Strahil Nikolov
On Thursday, October 1, 2020, 17:24:25
Either use 'grub2-editenv' or 'grub2-editenv - unset kernelopts' +
'grub2-mkconfig -o /boot/grub2/grub.cfg'
CEPH requires at least 4 nodes to be "good".
I know that Gluster is not the "favourite child" for most vendors, yet it is
still optimal for HCI.
You can check
https://www.ovirt.org/develop/release-management/features/storage/cinder-integration.html
for cinder integration.
Best Regards,
In EL 8, there is no 'default' python. You can use both.
My choice would be ansible: APIs change, but the ansible modules are updated
along with them. If you create your own script, you will have to take care of
the updates, while with ansible you just update the relevant packages :)
Best
As I mentioned, I would use systemd service to start the ansible play (or a
script running it).
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 22:15:17 GMT+3, Jeremey Wise wrote:
i would like to eventually go ansible route.. and was starting down that
path
---
- name: Example
  hosts: localhost
  connection: local
  vars:
    ovirt_auth:
      username: 'admin@internal'
      password: 'pass'
      url: 'https://engine.localdomain/ovirt-engine/api'
      insecure: True
      ca_file: '/root/ansible/engine.ca'
  tasks:
    - name: Power on {{ outer_item }}
Also consider setting a reasonable 'TimeoutStartSec=' in your systemd service
file when you create the service...
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 20:18:01 GMT+3, Strahil Nikolov via Users wrote:
I would create an ansible playbook
If you can do it from the cli - use the cli, as it has far more control than
the UI provides.
Usually I use the UI for monitoring and basic stuff like starting/stopping a
brick or setting the 'virt' group via 'Optimize for Virt' (or whatever it
was called).
Best Regards,
Strahil Nikolov
I would create an ansible playbook that will be running from the engine:
1. Check the engine's health page via uri module and wait_for (maybe with a
regex)
The health page is: https://engine_FQDN/ovirt-engine/services/health
2. Use ansible ovirt_vm module to start your vms in the order you want
3.
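Step 1 could be sketched as a small poll loop. This assumes the health page body contains 'DB Up' when the engine is healthy - verify the exact string for your version before relying on it:

```shell
# wait_for_engine FETCH_CMD
# FETCH_CMD is any command printing the health-page body; the loop
# returns once that body reports the DB as up.
wait_for_engine() {
    until $1 | grep -q 'DB Up'; do
        sleep 10
    done
}

# Real use (hypothetical FQDN):
#   wait_for_engine 'curl -ks https://engine.example.com/ovirt-engine/services/health'
```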
In your case it seems reasonable, but you should test the 2 stripe sizes (128K
vs 256K) before running in production. The good thing about replica volumes is
that you can remove a brick, recreate it from the cli and then add it back.
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020
You can use this ansible module and assign your scheduling policy:
https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html
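A minimal task using that module might look like the following (cluster/data-center names and the policy are placeholders - check the module documentation for the parameters available in your ansible version):

```yaml
- name: Assign a scheduling policy (hypothetical names)
  ovirt_cluster:
    auth: "{{ ovirt_auth }}"
    name: Default
    data_center: Default
    scheduling_policy: evenly_distributed
```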
Best Regards,
Strahil Nikolov
On Wednesday, September 30, 2020, 11:36:01 GMT+3, Kushagra Agarwal wrote:
I was hoping if i
Are you trying to use the same storage domain ?
I hope not, as this is not supposed to be done like that. As far as I remember,
you need fresh storage.
Best Regards,
Strahil NIkolov
On Tuesday, September 29, 2020, 20:07:51 GMT+3, Sergey Kulikov wrote:
Hello, I'm trying to
I got the same behaviour with adblock plus add-on.
Try in incognito mode (or with disabled plugins/ new fresh browser).
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020, 18:50:05 GMT+3, Philip Brown wrote:
I have an odd situation:
When I go to
One important step is to align the XFS to the stripe size * stripe width. Don't
miss it or you might have issues.
Details can be found at:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
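The arithmetic itself is simple: the RAID stripe unit times the number of data disks gives the full stripe, and mkfs.xfs takes the same two numbers as su/sw. A sketch with illustrative values (128K stripe unit, 12-disk RAID6, i.e. 10 data disks - adjust for your layout):

```shell
su_kib=128       # hardware RAID stripe unit in KiB (assumed value)
data_disks=10    # RAID6 of 12 disks -> 12 - 2 parity = 10 data disks
full_stripe_kib=$((su_kib * data_disks))
echo "full stripe = ${full_stripe_kib} KiB"
# /dev/VG/LV is a placeholder device:
echo "mkfs.xfs -d su=${su_kib}k,sw=${data_disks} /dev/VG/LV"
```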
Best Regards,
More questions on this -- since I have 5 servers . Could the following work ?
Each server has (1) 3TB RAID 6 partition that I want to use for contiguous
storage.
Mountpoint for RAID 6 partition (3TB) /brick
Server A: VOL1 - Brick 1 directory
You can setup your bricks in such way , that each host has at least 1 brick.
For example:
Server A: VOL1 - Brick 1
Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
Server D: VOL2 - brick 1
The most optimal is to find a small system/VM for being an arbiter and
You cannot have 2 IPs for 2 different FQDNs.
You have to use something like:
172.16.100.101 thor.penguinpages.local thor thorst
Fix your /etc/hosts or you should use DNS.
Best Regards,
Strahil Nikolov
On Monday, September 28, 2020, 03:41:17 GMT+3, Jeremey Wise wrote:
when
In my case momd is static and not running:
[root@ovirt1 ~]# systemctl status mom-vdsm.service momd.service
● mom-vdsm.service - MOM instance configured for VDSM purposes
  Loaded: loaded (/usr/lib/systemd/system/mom-vdsm.service; enabled; vendor preset: enabled)
  Active: active (running) since
Actually the ISO domain is not necessary.
You can mount it via FUSE to a system and either use the python script (it was
mentioned several times in the mailing list) or the API/UI to upload your ISOs
to a data storage domain.
I think it is about time to get rid of the deprecated ISO domain.
Best
Hi Jeremey,
I am not sure that I completely understand the problem.
Can you provide the Host details page from UI and the output of:
'gluster pool list' & 'gluster peer status' from all nodes ?
Best Regards,
Strahil Nikolov
On Saturday, September 26, 2020, 20:31:23 GMT+3, Jeremey Wise
Importing is done from UI (Admin portal) -> Storage -> Domains -> Newly Added
domain -> "Import VM" -> select Vm and you can import.
Keep in mind that it is easier to import if all VM disks are on the same
storage domain (I've opened a RFE for multi-domain import).
Best Regards,
Strahil
Since oVirt 4.4 , the stage that deploys the oVirt node/host is adding an lvm
filter in /etc/lvm/lvm.conf which is the reason behind that.
Best Regards,
Strahil Nikolov
On Friday, September 25, 2020, 20:52:13 GMT+3, Staniforth, Paul wrote:
Thanks,
the gluster
>1 node I wiped it clean and the other I left the 3 gluster brick drives
>untouch.
If the last node from the original is untouched you can:
1. Go to the old host and use 'gluster volume remove-brick replica 1
wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2
>Question:
>1) Can someone point me to the manual on how to re-constitute a VM and >bring
>it back into oVirt where all "oVirt-engines" were redeployed. It is >only
>three or four VMs I typically care about (HA cluster and OCP >ignition/
>Ansible tower VM).
Ensure that the old Engine is
>"Error while executing action: Cannot add Host. Connecting to host via SSH
>>has failed, verify that the host is reachable (IP address, routable address
>>etc.) You may refer to the engine.log file for further details."
>Tested SSH between all nodes and works without password.
Engine is not
Have you checked the oVirt 2020 conference videos ?
There was a slot exactly on this topic - I think ansible was used for automatic
upgrade.
I prefer the manual approach , as I have full control over the environment.
Best Regards,
Strahil Nikolov
On Friday, September 25, 2020, 02:23:49
>How ,without reboot of hosting system, do I restart the oVirt engine?
>
># I tried below but do not seem to effect the virtual machine
>[root@thor iso]# systemctl restart ov
Wrong system - this is most probably your KVM host , not the VM hosting the
Engine. Usually the engine is defined during
Once a host is in oVirt , you should not change the network ... or that's what
I have been told.
You should remove the host from oVirt , do your configurations and then add the
host back.
Best Regards,
Strahil Nikolov
On Thursday, September 24, 2020, 01:43:40 GMT+3, wodel youchi
I guess 'yum reinstall vdsm-gluster'.
Best Regards,
Strahil Nikolov
On Wednesday, September 23, 2020, 22:07:58 GMT+3, Jeremey Wise wrote:
Trying to repair / clean up HCI deployment so it is HA and ready for
"production".
I have gluster now showing three bricks all green
As far as I know there is an automation to do it for you.
Best Regards,
Strahil Nikolov
On Wednesday, September 23, 2020, 21:41:13 GMT+3, Vincent Royer wrote:
well that sounds like a risky nightmare. I appreciate your help.
Vincent Royer
778-825-1057
SUSTAINABLE MOBILE ENERGY
Before you reinstall the node, you should use 'gluster volume remove-brick
<VOLUME> replica 2 ovirt_node:/path-to-brick force' to reduce the volume to
replica 2 (for example). Then you need to 'gluster peer detach ovirt_node'
in order to fully clean up the gluster TSP.
You will have to remove the bricks that
>1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"
>>how does one restart just the oVirt-engine?
ssh to HostedEngine VM and run one of the following:
- reboot
- systemctl restart ovirt-engine.service
>2) I now show in shell 3 nodes, each with the one brick for data,
In my setup , I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#
P.S.: Don't forget to 'dracut -f' due to the fact that the initramfs has a
local copy of the lvm.conf
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020,
Most probably there is an option to tell it (I mean oVirt) the exact keys to be
used.
Yet, give the engine a gentle push and reboot it - just to be sure you are not
chasing a ghost.
I'm using self-signed certs and I can't help much in this case.
Best Regards,
Strahil Nikolov
On Tuesday,
Obtaining the wwid is not exactly correct.
You can identify them via:
multipath -v4 | grep 'got wwid of'
Short example:
[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of
'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
oVirt uses the "/rhev/mnt..." mountpoints.
Do you have those (for each storage domain ) ?
Here is an example from one of my nodes:
[root@ovirt1 ~]# df -hT | grep rhev
gluster1:/engine fuse.glusterfs 100G 19G 82G 19%
oVirt 4.4 requires EL8.2, so no, you cannot go to 4.4 without upgrading the OS
to EL8.
Yet, you can still bump the version to 4.3.10, which is still EL7-based and
works quite well.
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 17:39:52 GMT+3, wrote:
Hi
By the way, did you add the third host in oVirt ?
If not, maybe that is the real problem :)
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 17:23:28 GMT+3, Jeremey Wise wrote:
Its like oVirt thinks there are only two nodes in gluster replication
# Yet
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).
I guess some of the engine's internal processes crashed/looped and it doesn't
see the reality.
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 16:27:25 GMT+3, Jeremey Wise wrote:
>Ok, may I know why you think it's only a bug in SLES?
I never claimed it is a bug in SLES, but a bug in oVirt's detection of proper
memory usage in SLES.
The behaviour you observe was normal for RHEL6/CentOS6/SLES11/openSUSE and
below, so it is normal for some OSes. In my oVirt 4.3.10, I see that
Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 bricks
up, but usually it was a UI issue; you can go to the UI and mark a "force start",
which will try to start any bricks that were down (it won't affect gluster) and
will wake up the UI task to verify the brick status again.
Usually I first start with:
'gluster volume heal <VOLNAME> info summary'
Anything that is not 'Connected' is bad.
Yeah, the abstraction is not so nice, but the good thing is that you can always
extract the data from the single node left (it will require playing a little
bit with the quorum of the
At around Sep 21 20:33 local time, you got a loss of quorum - that's not good.
Could it be a network 'hiccup' ?
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 15:05:16 GMT+3, Jeremey Wise wrote:
I did.
Here are all three nodes with restart. I find it odd ...
A replication issue could mean that one of the clients (FUSE mounts) is not
attached to all bricks.
You can check the amount of clients via:
gluster volume status all client-list
As a prevention , just do a rolling restart:
- set a host in maintenance and mark it to stop glusterd service (I'm
Any option to extend the Gluster Volume ?
Other approaches are quite destructive. I guess you can obtain the VM's XML
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.
virsh -c
> You can try to import the VM as partial, another option is to remove the VM
> that remained in the environment but
> keep the disks so you will be able to import the VM and attach the disks to
> it.
>
> On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users
> wrote:
>>
This looks much like my OpenBSD 6.6 under the latest AMD CPUs. KVM did not
accept a perfectly valid instruction, and it was a bug in KVM.
Maybe you can try to :
- power off the VM
- pick an older CPU type for that VM only
- power on and monitor in the next days
Do you have a cluster with different cpu
So, let's summarize:
- Cannot migrate the HE due to "CPU policy".
- HE's CPU is westmere - just like hosts
- You have enough resources on the second HE host (both CPU + MEMORY)
What is the Cluster's CPU type (you can check in UI) ?
Maybe you should enable debugging on various locations to
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer and it won't affect the brick processes.
Best Regards,
Strahil Nikolov
On Tuesday, September 22, 2020, 01:43:36 GMT+3, Jeremey Wise wrote:
Start is not an option.
It notes two bricks.
It is interesting that I can't find anything recent, but there is this one:
https://devblogs.microsoft.com/oldnewthing/20120511-00/?p=7653
Can you check if anything in the OS was updated/changed recently ?
Also check if the VM is with nested virtualization enabled.
Best Regards,
Strahil Nikolov
Usually libvirt's log might provide hints (yet, no clues) of any issues.
For example:
/var/log/libvirt/qemu/.log
Anything changed recently (maybe oVirt version was increased) ?
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 23:28:13 GMT+3, Vinícius Ferrão
Just select the volume and press "start". It will automatically mark "force
start" and will fix itself.
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise wrote:
oVirt engine shows one of the gluster servers having an issue. I
Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume
option.
You can power off the VM , then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.
Keep in mind that filling your bricks is bad and if you eat that reserve ,
For some OS versions, oVirt's behavior is accurate, but for other versions
it's not.
I think that it is more accurate to say that oVirt improperly calculates memory
for SLES 15/openSUSE 15.
I would open a bug at bugzilla.redhat.com .
Best Regards,
Strahil Nikolov
ll be able to import the VM and attach the disks to it.
On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users wrote:
> Hello All,
>
> I would like to ask how to proceed further.
>
> Here is what I have done so far on my ovirt 4.3.10:
> 1. Set in maintenance and detached my Glus
Have you tried to upload your qcow2 disks via the UI ?
Maybe you can create a blank VM (same size of disks) and then replace the
disk with your qcow2 from KVM (works only on file-based storage like
Gluster/NFS).
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020,
Why is your NVMe under multipath ? That doesn't make sense at all.
I have modified my multipath.conf to block all local disks. Also, don't forget
the '# VDSM PRIVATE' line somewhere at the top of the file.
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 09:04:28