As per
https://github.com/oVirt/ovirt-ansible-collection/commit/b506a125efc1901e7f15fbfcfe53b2c7352316b8
it was removed from the code.
Can you remove all 'debug' tasks and try again ?
Best Regards,Strahil Nikolov
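For example, something like this should list any leftover debug tasks (the collection path below is just the usual RPM install location - adjust if yours differs):
grep -rn "debug:" /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/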
On Wed, Jun 2, 2021 at 20:12, lejeczek via Users wrote: Hi
guys,
my
Most probably it does.
Can you try to restart the engine via :systemctl restart ovirt-engine
Best Regards,Strahil Nikolov
On Tue, Jun 1, 2021 at 17:27, jb wrote:
In https://github.com/gluster/gluster-ansible-infra there is an example with
vars:
  # Firewall setup
  gluster_infra_fw_ports:
    - 5900-6923/tcp
Maybe that's causing the problem ?
Best Regards,Strahil Nikolov
I thought that I sent it to all, but I was wrong
What is the output of ls -l
/rhev/data-center/mnt/glusterSD/onode1.example.org:_vmstore/3cf83851-1cc8-4f97-8960-08a60b9e25db/images/ad23c0db-1838-4f1f-811b-2b213d3a11cd/15259a3b-1065-4fb7-bc3c-04c5f4e14479
?
Best Regards,Strahil Nikolov
Off-topic: Those stupid Android apps are pure .
> (ps. Strahil, do it the same way almost everybody else does it -
you read from top to bottom, so write that way; this is not
a conversation just between us two.)
what is the output of 'gluster pool list' ?
Best Regards,Strahil Nikolov
On Mon, May 31, 2021 at 17:00, techbreak--- via Users wrote:
Actually I reinstalled CentOS7, Gluster and oVirt 4.3.10.
Now the situation is like that:
TASK [gluster.features/roles/gluster_hci : Create
So the Hypervisor is completely vanilla, without any alterations?
Do you get any output from 'rpm -V openssh-server' ?
Best Regards,Strahil Nikolov
On Sun, May 30, 2021 at 17:14, lejeczek via Users wrote:
On 29/05/2021 19:29, Strahil Nikolov wrote:
> Most probably it's related to
Most probably it's related to ssh. Did you alter your sshd config?
oVirt needs password-less access to root.
Best Regards,Strahil Nikolov
On Sat, May 29, 2021 at 14:30, lejeczek via Users wrote:
Hi guys
I'm trying to install HE on a KVM host and installer cannot
get pass this:
[ ERROR
Also, on each power-up of the VM, the XML file is stored in vdsm's logs.
Best Regards,Strahil Nikolov
On Fri, May 28, 2021 at 7:32, dhanaraj.ramesh--- via Users
wrote: https://access.redhat.com/solutions/795203
In case of RHEV, these files are not stored under /etc/libvirt/qemu
The vdsm
Maybe you can remove 6900/tcp from firewalld and try again ?
Best Regards,Strahil Nikolov
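For example, removing it with firewall-cmd would look roughly like this (add --zone=... if you are not using the default zone):
firewall-cmd --permanent --remove-port=6900/tcp
firewall-cmd --reload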
On Thu, May 27, 2021 at 19:43, Dominique D
wrote: it seems to be this problem
I tried to install it again with version 4.4.6-2021051809 and I get this
message.
[ INFO ] TASK
Verify that the engine's FQDN is resolvable and the A + PTR records are OK
Best Regards,Strahil Nikolov
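A quick check could look like this (the FQDN and IP below are placeholders):
dig +short engine.example.com      # A record - should return the engine IP
dig +short -x 192.0.2.10           # PTR record - should return the engine FQDN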
On Thu, May 27, 2021 at 17:57, Harry O wrote: Nope, it
didn't fix it, just typed in the wrong IP-address
What is the output of 'ip a s' from the host ?
Best Regards,Strahil Nikolov
On Tue, May 25, 2021 at 16:05, techbreak--- via Users wrote:
I've tried also with command line using "hosted-engine --deploy" and
following the guided tour, but then I end up with this error:
[ IN
Ah, that makes sense :)
I hope someone else with DR experience can share their thoughts.
Best Regards,Strahil Nikolov
On Tue, May 25, 2021 at 9:55, Simon Scott wrote: Just
to clarify...
Site A has volume data2 which geo-replicates to Site B
Site B has volume data1 which geo-replicates
. At
least for Gluster it is mandatory.
Can you bump the site A version to match B ?
Best Regards,Strahil Nikolov
Thanks Strahil,
I was trying to do this manually first and was under the impression that Red
Hat Ansible Engine was fairly expensive, something that we are trying to avoid,
hence
-on-linux-unix/
Best Regards,
Strahil Nikolov
On Monday, May 24, 2021, 23:07:40 GMT+3, David White via Users
wrote:
Thank you, Ritesh.
Those logs were perfect, and I immediately found the problem.
This was my fault.
I had disabled Root Login in /etc/ssh/sshd_config (because
Maybe you can try to run the ansible code from cockpit under the command
line... Sadly, I got no test env to take a look how to do it.
Best Regards,
Strahil Nikolov
On Monday, May 24, 2021, 20:42:22 GMT+3, Harry O
wrote:
It's same issue in other browsers
Hi Simon,
I think there was a DR Ansible role to make your life easier.
Have you checked section 3 from
https://www.ovirt.org/documentation/disaster_recovery_guide/ ?
Best Regards,
Strahil Nikolov
On Monday, May 24, 2021, 18:57:42 GMT+3, si...@justconnect.ie
wrote
By the way, if you use full-blown OS, you can try to downgrade cockpit version.
Best Regards,Strahil Nikolov
On Mon, May 24, 2021 at 1:32, Strahil Nikolov wrote:
It seems that there is an issue about it :
https://github.com/cockpit-project/cockpit/issues/14896
Can you extend the timeout
Regards,Strahil Nikolov
On Sun, May 23, 2021 at 22:41, Harry O wrote: I have
found some logs here, hope it is usable.
journalctl | grep ovirt:
May 23 21:25:17 hej1.5ervers.lan ovirt-ha-broker[72235]: ovirt-ha-broker
mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
will
need. Then you can just change the mount options in oVirt and append
'context= and it will work.
I remember another issue related to some ubuntu NFS server options, but I can't
recall it right now.
Best Regards,Strahil Nikolov
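As a rough sketch, the custom mount option on the storage domain would look something like this (the context value below is only an example - use whatever your SELinux policy requires):
context="system_u:object_r:nfs_t:s0"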
On Sat, May 22, 2021 at 20:20, David White via Users wrote:
Hello
I would start with 'journalctl' to identify why the cockpit dies. Most probably
the ansible code is interrupted.
Best Regards,Strahil Nikolov
On Fri, May 21, 2021 at 17:34, Harry O wrote: Do you know
what I mean?
FAILED - Data or Metadata usage is above threshold. Check the output of `lvs`
You most probably got the data/metadata filled up. Check and fix your LVM issues.
Best Regards,Strahil Nikolov
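For example (the VG/thin-pool names below are placeholders):
lvs -a -o +data_percent,metadata_percent           # check the fill levels
lvextend -L +10G myvg/mythinpool                   # grow the data space
lvextend --poolmetadatasize +1G myvg/mythinpool    # grow the metadata space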
On Wed, May 19, 2021 at 2:09, Eugène Ngontang wrote
it a try.
Best Regards,Strahil Nikolov
On Tue, May 18, 2021 at 16:09, Simon Scott wrote:
The output of the ansible run. I'm not sure if it's logged in that log file.
Usually, cockpit provides it directly.
Best Regards,Strahil Nikolov
On Mon, May 17, 2021 at 19:15, Harry O wrote: Would
that be this log file you need?
cat
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine
If you are running on EL8 -> it's SELinux. To verify that, stop the session
and use 'setenforce 0' on both source and destination.
To make it work with SELINUX , you will need to use 'sealert -a' extensively
(yum whatprovides '*/sealert').
Best Regards,Strahil Nikolov
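For reference, on EL8 that would be roughly:
setenforce 0                        # temporarily permissive, on both source and destination
yum install setroubleshoot-server   # the package that provides sealert
sealert -a /var/log/audit/audit.log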
Typo - Tha
Yes , it was just a way to get more info about the issue.
What is the output of the debug task ?
Best Regards,Strahil Nikolov
On Mon, May 17, 2021 at 15:58, Harry O wrote: The
error still persists after the change in the following; is this the wrong place? I
couldn't do it under the HCI setup:
/usr
g for VT-x capability is "vmx";
So you need to enable the virtualization for your system. For details check
your hardware vendor's documentation.
Best Regards,
Strahil Nikolov
On Sunday, May 16, 2021, 22:09:45 GMT+3, David Johnson
wrote:
Next question ... how do I enable
Enable VMX flag or you won't be able to run any VM on it.
Best Regards,
Strahil Nikolov
On Saturday, May 15, 2021, 05:04:38 GMT+3, David Johnson
wrote:
When I attempt to install a new host to my cluster from the GUI I get this set
of errors/warnings, and the install fails
dir /gluster_bricks//
Of course, if you HW Raid -> you will need to implement some alignments on LVM
& XFS level.
Best Regards,
Strahil Nikolov
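Just as an illustration (the stripe unit/width below are made-up numbers - derive them from your RAID geometry):
pvcreate --dataalignment 256K /dev/sdX
mkfs.xfs -d su=256k,sw=10 /dev/VG/brick_lv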
On Friday, May 14, 2021, 17:15:14 GMT+3, Harry O
wrote:
When I try to create a single disk brick via host view "sto
Yes, but the DEBUG should be above the '- include_tasks: auth_sso.yml' or it
might print the debug info far later in the flow.
Best Regards,
Strahil Nikolov
On Friday, May 14, 2021, 14:27:35 GMT+3, Harry O
wrote:
Like this?
- name: Detect VLAN ID
  shell: ip -d link show
Hi,
the info is not enough. Can you provide the error you receive? Most probably you
will need to enable some debug to identify the issue.
Best Regards,Strahil Nikolov
On Fri, May 14, 2021 at 12:44, Alessio B.
wrote: Hello, sorry if I continue in this thread but I have the same issue
If you want to update the engine, you have to run:
yum update ovirt\*setup\*
engine-setup
What are your lab's network settings? In 4.4+, by default, the
NetworkManager.service is used.
Best Regards,Strahil Nikolov
On Thu, May 13, 2021 at 16:33, Dominique D
wrote: I have a setup with 3 servers
I don't see 6900 in
https://github.com/gluster/glusterfs/blob/devel/extras/firewalld/glusterfs.xml
Best Regards,Strahil Nikolov
What type of storage (and brand) are you using for the storage ?
Best Regards,Strahil Nikolov
On Wed, May 12, 2021 at 21:01, Patrick Lomakin
wrote: The weekend went horribly. I created a second storage domain in order
to move all the virtual machines to a higher performance RAID array
each indentation level) :
- name: DEBUG
  debug:
    var: 'output'
Best Regards,Strahil Nikolov
On Wed, May 12, 2021 at 0:50, Harry O wrote: Thanks
Strahil, I think we found the relevant error in the logs:
[ 00:03 ] Make the engine aware that the external VM is stopped
[ 00:01 ]
You can check the Troubleshooting section of
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf .
Best Regards,Strahil Nikolov
On Tue, May 11, 2021 at 12:26, Harry O wrote: Hi,
In the second engine deployment run of the hyperconverged deployment I get a red "Ooops!" in
cockpi
ovirtmgmt is using a Linux bridge and maybe STP kicked in? Do you know of any
changes done in the network at that time?
Best Regards,Strahil Nikolov
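You can check whether STP is on for the bridge with something like:
cat /sys/class/net/ovirtmgmt/bridge/stp_state    # 0 = off, 1 = on
ip -d link show ovirtmgmt | grep -o 'stp_state [0-9]*'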
On Tue, May 11, 2021 at 2:27, David White via Users wrote:
instead of a remote one.
Theoretically congestion on storage network could be the root cause, but this
is usually a symptom and not the real problem. Maybe you got too many backups
running in parallel ?
Best Regards,Strahil Nikolov
On Mon, May 10, 2021 at 19:13, David White via Users wrote
Regards,Strahil Nikolov
On Mon, May 10, 2021 at 16:02, Ernest Clyde Chua
wrote:
create
a separate thread in the gluster mailing list.
Best Regards,Strahil Nikolov
For maximum safety, you create a new gluster layout and storage domain, and
slowly migrate the VMs into the new domain. If you do another workaround, you should
test it very carefully beforehand
Yes, I did - but I couldn't take a look into them.
Best Regards,Strahil Nikolov
On Mon, May 10, 2021 at 13:54, Marko Vrgotic
wrote:
Hm... are those tests done with sharding + full disk preallocation? If yes,
then this is quite interesting.
Storage migration should still be possible, as oVirt creates a snapshot and then
migrates the disks and consolidates them on the new storage location.
Best Regards,Strahil Nikolov
? Was this volume created by oVirt or
manually ?
Best Regards,Strahil Nikolov
It is because of a serious bug in cluster.lookup-optimize; it caused a few
VM image corruptions after new bricks were added. Although cluster.lookup-optimize
theoretically impacts all files, not just shards. However, after running many
Also, keep in mind that RHHI is using shards of 512 MB which reduces the shard
count.
I'm glad that in newer versions of Gluster there are no stalls.
Also, in Gluster v9 healing mechanisms - so we can see.
Anyway, what makes you think that sharding was your problem?
Best Regards,Strahil
brick
11. Wait for the healing to be done
Some sources:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration
Best Regards,Strahil Nikolov
On Sun, May 9, 2021 at 7:04, Ernest Clyde Ch
.
Yet I have one question. Did you use preallocated VM disks or thin
provisioning for the qcow2?
Best Regards,Strahil Nikolov
Description of problem:
Intermittent VM pauses and qcow image corruption after adding new bricks.
I suffered an image corruption issue on oVirt 4.3 caused
I can guide you through the manual approach, but I'm pretty sure that there is
an ansible role for that purpose.
Best Regards,Strahil Nikolov
On Sat, May 8, 2021 at 14:56, Ernest Clyde Chua
wrote: Good day,currently we have a 1 node host that also runs a gluster in 1
node distributed
In my opinion,
reinstalling the engine will never give an insight how the situation went so
bad.
Currently, at least the HE can live migrate between host1 & host3 and most
probably will power up properly in case of serious problems.
Yet, it's quite strange why the setup of host2 fails and on top
Usually, when you power off the Engine from inside, there should be no
issues. How do you power it off?
Best Regards,Strahil Nikolov
Yes, /etc/ovirt-hosted-engine-ha/agent.conf should be enough.
Best Regards,Strahil Nikolov
Please disregard the previous e-mail.
Best Regards,Strahil Nikolov
On Wed, May 5, 2021 at 1:06, Strahil Nikolov wrote:
Yes, /etc/ovirt-hosted-engine-ha/agent.conf should be enough.
Best Regards,Strahil Nikolov
Yes, /etc/ovirt-hosted-engine-ha/agent.conf should be enough.
Best Regards,Strahil Nikolov
It's quite interesting why the link is not created when it should be... Can you
paste the contents of the HA daemons' config where the shared storage should be
defined?
You can create the link if needed, but it won't solve the problem.
Best Regards,Strahil
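Usually the shared storage ends up referenced in the hosted-engine config, so something like this should show it (paths are the usual defaults):
grep -iE 'storage|domain' /etc/ovirt-hosted-engine/hosted-engine.conf
cat /etc/ovirt-hosted-engine-ha/agent.conf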
with the fix of host1 & host3.
I have no clue if engine-cleanup will affect the shared storage, but it's
possible - so use it as a last resort.
If you fail to add host2 , you can always reinstall it as host4 and try to add
it fresh.
Best Regards,Strahil Nikolov
On Fri, Apr 30, 2021 at 1
I think that you have to export the BM as OVA or convert it to KVM before being
able to import it via oVirt.
Best Regards,Strahil Nikolov
On Wed, Apr 28, 2021 at 12:36, hemaa--- via Users wrote:
Any leads please?
ch will reduce reordering of your I/O
requests and speed up on fast storage. NVMEs by default use that.
Best Regards,Strahil Nikolov
On Mon, Apr 26, 2021 at 22:43, penguin pages wrote:
"...Tuning Gluster with VDO bellow is quite difficult and the overhead of using
VDO could
reduce pe
tant for XFS.
Best Regards,Strahil Nikolov
On Mon, Apr 26, 2021 at 20:50, penguin pages wrote:
The response was that when I select the oVirt HCI storage volumes to deploy to
(with VDO enabled) which are a single 512GB SSD with only one small IDM VM
running. The IPI OCP 4.7 deployment f
I haven't seen your email on the gluster users' mailing list .
What was your problem with the performance ?
Best Regards,Strahil Nikolov
On Mon, Apr 26, 2021 at 17:30, penguin pages wrote:
It was on a support ticket / call I was having. I googled around and the only
article I found
you check the contents on all nodes ?
Best Regards,Strahil Nikolov
On Sat, Apr 24, 2021 at 21:57, David White via Users wrote:
Hi David,
let's start with the DNS. Check that both nodes resolve each other (both A
& PTR records).
If you set entries in /etc/hosts, check them out.
Also, check the output of 'hostname -s' & 'hostname -f' on both hosts.
Best Regards,Strahil
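For example (the host names and IP are placeholders):
dig +short node1.example.com && dig +short -x 192.0.2.11
getent hosts node1.example.com node2.example.com
hostname -s; hostname -f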
t safety is always
first. Yet, oVirt is not like other proprietary software - it lets you use
POSIX-compliant solutions, which allows you to use even a pure distributed
volume. It just won't let you shoot yourself in the foot with oVirt...
Best Regards,Strahil Nikolov
As far as I know oVirt node and Hosts have the same purpose and are
interchangeable (with some slight differences). It shouldn't be a problem at
all.
Best Regards,
Strahil Nikolov
On Tuesday, April 20, 2021, 11:27:08 GMT+3, KSNull Zero
wrote:
Hello!
We want to switch OS
Another workaround that comes to my mind: disable server quorum gluster option.
Of course all of these are just workarounds and not a real fix.
Best Regards,Strahil Nikolov
On Mon, Apr 19, 2021 at 13:43, Thomas Hoberg wrote:
Well, that's why I really want a theory of operation here
Regards,Strahil Nikolov
On Mon, Apr 19, 2021 at 12:57, Thomas Hoberg wrote: My
understanding is that in a HCI environment, the storage nodes should be rather
static, but that the pure compute nodes, can be much more dynamic or
opportunistic: actually those should/could even be switched
cannot go
over the volume size.
Best Regards,Strahil Nikolov
On Mon, Apr 19, 2021 at 11:46, Thomas Hoberg wrote: Hi
Strahil,
when you said "The Gluster documentation on the topic is quite extensive", I
wasn't quite sure if that was meant to be ironic: you typically are not.
At
Regards,
Strahil Nikolov
On Saturday, April 17, 2021, 00:01:59 GMT+3, Nir Soffer
wrote:
On Fri, Apr 16, 2021 at 5:20 PM wrote:
>
> Anyone?
>
> I know that RH 7.6+ and SLES 12.4+ kernel have support but I don't know if
> oVirt interface have it, you know? Becaus
put of gluster volume info and gluster volume status ?
Best Regards,
Strahil Nikolov
On Friday, April 16, 2021, 22:24:54 GMT+3, eev...@digitaldatatechs.com
wrote:
My setup is 3 CentOS 7 servers hyper-converged with a 4th standalone engine
server. My issue is oVirt is read
, so keep it as tight as possible, but
at the same time keep them on separate hosts.
Best Regards,
Strahil Nikolov
On Friday, April 16, 2021, 03:57:51 GMT+3, David White via Users
wrote:
> David, I’m curious what the use case is
This is for a customer who wants as much h
More like:
Prepare a new Linux host + a brick
Set the donor into maintenance and then remove it from oVirt & Gluster
Expand with the new brick (new host)
Put the big disks on the old donor and use it as a new host ...
Best Regards,Strahil Nikolov
On Fri, Apr 16, 2021 at 14:01, David White
Have you thought about copying the data via rsync/scp to the new disks
(assuming that you have similar machine) ?
Best Regards,Strahil Nikolov
Ah, you got free space on LVM. Just 'lvextend -r -l +xxx vg/lv' on all bricks
and you are good to go.
Best Regards,Strahil Nikolov
On Fri, Apr 16, 2021 at 12:39, David White via Users wrote:
Sorry, I meant to reply-all, to Strahil's most recent message.
So I'm doing so now.
In addition
but should work) to export your
Gluster volumes in a redundant and highly available way
Best Regards,
Strahil Nikolov
On Friday, April 16, 2021, 01:56:09 GMT+3, Jayme
wrote:
David, I’m curious what the use case is. Do you plan on using the disk with
three VMs at the same time
me) before searching the old bricks.
P.S: I think that the Engine's web interface fully supports that operation,
although I'm used to the cli.
Best Regards,
Strahil Nikolov
On Thursday, April 15, 2021, 20:05:50 GMT+3, David White via Users
wrote:
Is it possible to expand a
Usually confirming that the host was rebooted should be enough.
I guess you have to take a look in the engine's logs.
Best Regards,Strahil Nikolov
On Mon, Apr 5, 2021 at 15:10, tfe...@swissonline.ch
wrote:
th Red Hat and Oracle provide a paid, subscription-based
solution that might suit your needs better.
Best Regards,Strahil Nikolov
On Fri, Apr 2, 2021 at 1:15, Thomas Hoberg wrote: oVirt
may have started as a vSphere 'look-alike', but it graduated to a Nutanix
'clone', at least in terms of
Can you share the gluster brick logs (when the problems were happening)?
I think that it's important to solve this issue.
If you wish, we can move the topic to gluster-users mailing list.
Best Regards,Strahil Nikolov
It works, it doesn't fail.
Thanks.
>
> do you still fail to cre
I know that it's off topic, but if you have access via ssh you can try to
create a local admin user via AAA JDBC (there is an example at
https://www.ovirt.org/develop/release-management/features/infra/aaa-jdbc.html )
Best Regards,Strahil Nikolov
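A rough sketch of the CLI part, run on the engine machine (the user name and expiry date are just examples):
ovirt-aaa-jdbc-tool user add localadmin
ovirt-aaa-jdbc-tool user password-reset localadmin --password-valid-to="2030-01-01 00:00:00+0000"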
On Fri, Mar 26, 2021 at 10:22, Nicolás wrote
Usually, you should clean up the engine's volume or even use a new one.
Have you tried to write a file as the vdsm user ?
Best Regards,Strahil Nikolov
On Fri, Mar 26, 2021 at 21:49, Valerio Luccio wrote:
If you got a backup of /etc/glusterfs and /var/lib/glusterd you should be able
to restart gluster on that node (so no syncing will be needed).
Keep in mind that I haven't migrated yet, so I can't guarantee it will work ;)
Best Regards,Strahil Nikolov
On Thu, Mar 25, 2021 at 20:39, Jayme
in oVirt.
Best Regards,Strahil Nikolov
On Tue, Mar 23, 2021 at 19:01, Jayme wrote: I have a
fairly stock three node HCI setup running oVirt 4.3.9. The hosts are oVirt
node. I'm using GlusterFS storage for the self hosted engine and for some VMs.
I also have some other VMs running from
for engine- Hosted Engine - a VM that is running on one of the
hosts and will serve as an Engine
Best Regards,Strahil Nikolov
On Mon, Mar 22, 2021 at 12:14, Jayme wrote:
If all fuse clients are using gluster v8, yes.
Most probably, your oVirt nodes are the only clients, but you can always verify
with list-client volume command.
Best Regards,Strahil Nikolov
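Something along these lines should show the connected clients (the volume name is a placeholder):
gluster volume status myvol client-list
gluster volume status myvol clients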
On Fri, Mar 19, 2021 at 21:50, Jiří Sléžka wrote: Hello,
I have just upgraded my 2hosts+1arbiter
Did you do any ssh hardening on the Hypervisors ?
Best Regards,Strahil Nikolov
On Thu, Mar 18, 2021 at 16:17, penguin pages wrote:
Fresh install of CentOS 8 Stream. The Gluster wizard runs and deploys. The Engine
installs on the node.
I copy over /etc/hosts for primary site resources to ovirt
disabling the choose-local, so you should be OK.
Actually, even if the gluster brick died on that host (for example the RAID is
broken), the ovirt node can still use other Gluster bricks - just like oVirt
compute-only nodes.
Best Regards,
Strahil Nikolov
On Thursday, March 18, 2021, 12:00:29
Hi Miguel,
are you also using Gluster ?
If yes, it makes things more complicated. If you use iSCSI, NFS or SAN you
will be able to remove the host from the engine and then you can swap the OS
disks, reinstall and add it back. Best Regards,Strahil Nikolov
On Tue, Mar 16, 2021 at 21:08
You have to verify the NFS share's ownership (user + group) and
permissions. Usually, a lot of users set 'anonuid' & 'anongid' to 36 and then use
'all_squash' (check the man page for the exact spelling) to force all clients to be
using 36:36.
Best Regards,Strahil Nikolov
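An illustrative /etc/exports line (the path and network are placeholders):
/exports/ovirt 192.0.2.0/24(rw,sync,anonuid=36,anongid=36,all_squash)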
On Mon, Mar 15, 2021 at 2
You need to share more details. Have you checked 'journalctl' for any hints?
Also, check the vdsm log.
The NIC is just a NIC... it could affect vdsm's network service, but nothing more.
Best Regards,Strahil Nikolov
On Sun, Mar 14, 2021 at 20:26, Jason Alexander Hazen
Valliant-Saunders wrote
Just move it away (to be on the safe side) and trigger a full heal.
Best Regards,
Strahil Nikolov
On Wednesday, March 10, 2021, 13:01:21 GMT+2, Maria Souvalioti
wrote:
Should I delete the file and restart glusterd on the ov-no1 server?
Thank you very much
On 3/10/21 10
It seems that the affected file can be moved away on ov-no1.ariadne-t.local, as
the other 2 bricks "blame" the entry on ov-no1.ariadne-t.local .
After that , you will need to "gluster volume heal full" to
trigger the heal.
Best Regards,
Strahil Nikolov
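Roughly like this, run on ov-no1 (the brick path and file are placeholders, and the volume name is assumed to be the engine one):
mkdir -p /root/splitbrain-backup
mv /gluster_bricks/engine/engine/<path-to-the-blamed-file> /root/splitbrain-backup/
gluster volume heal engine full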
On Wednesday, March 10
It seems to me that ov-no1 didn't update the file properly.
What was the output of the gluster volume heal command ?
Best Regards,Strahil Nikolov
The output of the getfattr command on the nodes was the following:
Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex
/gluster_bricks/engine
The output of the command seems quite weird: 'getfattr -d -m . -e hex file'. Is
it the same on all nodes ?
Best Regards,Strahil Nikolov
On Tue, Mar 9, 2021 at 15:36, Maria Souvalioti
wrote:
Also check the status of the file on each brick with the getfattr command ( see
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ ) and
provide the output.
Best Regards,Strahil Nikolov
Thank you for your reply.
I'm trying that right now and I see it triggered
I was always running gluster ahead of oVirt's version. Just ensure that there
are no pending heals and always check the release notes before upgrading
gluster.
Best Regards,Strahil Nikolov
On Sun, Mar 7, 2021 at 9:15, Sketch wrote: Is the gluster
version on an oVirt host tied
setting into maintenance and detaching the storage domain and then remove the
LUN from SAN side and reboot the host ;)
Best Regards,
Strahil Nikolov
On Saturday, March 6, 2021, 10:59:20 GMT+2, Juhani Rautiainen
wrote:
This really might be the best way to go. But it's a bit
Regards,Strahil Nikolov
On Fri, Mar 5, 2021 at 15:52, Juhani Rautiainen
wrote: Hi!
I had already booted the first node so I tried this on the second
node. After cleaning up with dmsetup I ran the ansible script again. It
claimed success but multipathd was still checking for paths. I tried
If it's a VM image, just use dd to read the whole file:
dd if=VM_image of=/dev/null bs=10M status=progress
Best Regards,Strahil Nikolov
On Fri, Mar 5, 2021 at 15:48, Alex K wrote:
p the
VM after the restore to snap2 and you will lose that.
Best Regards,Strahil Nikolov
On Fri, Mar 5, 2021 at 8:32, dhanaraj.ramesh--- via Users
wrote: Hi Team,
when I want to commit the older snapshots I'm getting a warning stating "
Existing snapshots that were taken after this o
I think that I got such case before (4.2) and the only way I managed to update
the HE VM was to set the OVF update interval to 1 min and wait for an hour to
pass.
Best Regards,Strahil Nikolov
Most probably there is an LVM filter. As stated in /etc/multipath.conf, use
a special file to blacklist the local disks without modifying
/etc/multipath.conf
Best Regards,Strahil Nikolov
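Typically that means a small drop-in like this (the WWID below is a placeholder):
# /etc/multipath/conf.d/local.conf
blacklist {
    wwid "3600508b1001c1234567890abcdef1234"
}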
Hi All,
I have a server with a RAID1 disk for sda and RAID 5 disk for sdb.
Following default
bricks.
Best Regards,Strahil Nikolov
Thanks for this information. I guess I will follow the same approach as for the first
three nodes to add the next three.
But, sorry if I say something stupid, why 3 by 3?
My replication (default, btw) is to keep 3 copies. If I add a node, it can keep
3 copies