The symptoms are similar to a loss of quorum (like in a network
outage/disruption).
Check the gluster logs for any indication of the root cause. As you have only
one gigabit network, consider enabling the cluster.choose-local option, which
will make the FUSE client try to read from the local brick.
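Something along these lines should do it (the volume name "data" is just an
example, adjust it to your setup):

    # prefer reads from the local brick, then verify the setting
    gluster volume set data cluster.choose-local on
    gluster volume get data cluster.choose-local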
Just a point of clarification: for all of these hosts, one of these interfaces
is connected to my 1Gbps switch, and the other interface is connected to my
10Gbps switch.
For Host 1 specifically,
enp4s0f0 is physically connected to one switch.
eno1 is physically connected to the other.
But those
I'm not sure what to make of this, but looking at /var/log/messages on all 3
of the hosts, it appears that the kernel disabled my oVirt networks at the
exact same time on all 3 hosts.
This occurred twice this morning, once around 8am and again around 8:30am:
ovirtmgmt is the storage network.
Version: 4.3.10
I'm attempting to change the IP address, netmask and gateway of the ovirtmgmt
NIC of a host, but every time I reboot the host, the old
address/netmask/gateway re-assert themselves.
Where do I need to make the changes so they will be permanent?
I've modified
Found the answer:
Update /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt and reboot.
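For reference, the relevant fields in that JSON file look roughly like this
(the addresses below are made up, and the exact keys depend on your vdsm
version):

    "ipaddr": "10.0.0.50",
    "netmask": "255.255.255.0",
    "gateway": "10.0.0.1",

Edit them to the new values and reboot; vdsm should then restore the new
configuration.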
From: matthew.st...@fujitsu.com
Sent: Monday, May 10, 2021 3:18 PM
To: users@ovirt.org
Subject: [ovirt-users] Changing the ovirtmgmt IP address
Version: 4.3.10
I'm attempting to change the IP address,
ovirtmgmt is using a linux bridge and maybe STP kicked in? Do you know of any
changes done in the network at that time?
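A quick way to check whether STP is even enabled on that bridge (assuming a
standard linux bridge named ovirtmgmt):

    # 1 means STP is on, 0 means off
    cat /sys/class/net/ovirtmgmt/bridge/stp_state
    # or, if bridge-utils is installed:
    brctl showstp ovirtmgmt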
Best Regards,
Strahil Nikolov
On Tue, May 11, 2021 at 2:27, David White via Users wrote:
It is because of a serious bug in cluster.lookup-optimize; it caused VM image
corruption for me after a new brick was added. Although cluster.lookup-optimize
theoretically impacts all files, not just shards, after running many rounds of
verification tests, the corruption doesn't happen when sharding is disabled.
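If you want to rule that out, the option can be disabled per volume (the
volume name "data" is just an example):

    gluster volume set data cluster.lookup-optimize off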
A quote from:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-creating_replicated_volumes
Sharding has one supported use case: in the context of providing Red Hat
Gluster Storage as a storage domain for Red Hat Enterprise Virtualization,
The problem with sparse qcow2 images is that the Gluster shard xlator might
not cope with the random I/O nature of the workload, as it will have to create
a lot of shards in a short period of time (64MB shard size) for a small amount
of I/O (for example, 50 x 512-byte I/O requests could cause 50 shards to be
created).
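For reference, the configured shard size can be checked per volume (volume
name is a placeholder; the default is 64MB):

    gluster volume get data features.shard-block-size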
For the most safety, create a new gluster layout and storage domain, and
slowly migrate the VMs into the new domain. If you use any other workaround,
you should test it very carefully beforehand.
Good day,
The distributed volume was created manually. Currently I'm thinking of
creating a replica on the two new servers, where one server will hold 2 bricks
and be replaced later, then recreating the bricks so that the server hosting 2
bricks goes back to 1.
I found the image location
Hi,
I still can't connect to my VMs with the vmconsole proxy on my production
engine (other test and dev engines are OK).
The SSH key for the wanted user is available in the API:
Yes, I did - but I couldn't take a look into them.
Best Regards,
Strahil Nikolov
On Mon, May 10, 2021 at 13:54, Marko Vrgotic
wrote:
It's far simpler to:
- Create a new volume on the new hosts (replica volume)
- Create a storage domain from that volume
- Live storage migrate the VMs onto the new volume
- Destroy the old volume
- Reuse the bricks (don't forget to recreate the FS) from the old volume and
add them to the new one
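A rough sketch of the gluster side, with hypothetical host names, volume name
and brick paths (adjust the replica count and placement to your setup):

    gluster volume create newvol replica 3 \
        host1:/gluster_bricks/newvol/brick \
        host2:/gluster_bricks/newvol/brick \
        host3:/gluster_bricks/newvol/brick
    gluster volume start newvol
    # create the oVirt storage domain on newvol, live-migrate the VM disks,
    # destroy the old volume, re-create the FS on its bricks (e.g. mkfs.xfs),
    # and only then add them back:
    #   gluster volume add-brick newvol replica 3 <re-formatted bricks...>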
Best
Sorry, I replied to the wrong thread.
On Mon, May 10, 2021 at 6:11 PM Ernest Clyde Chua <
ernestclydeac...@gmail.com> wrote:
> Good day,
> the distributed volume was created manually.
> currently i'm thinking to create a replica on the two new servers which 1
> server will hold 2 bricks and
Good day,
the distributed volume was created manually.
Currently I'm thinking of creating a replica on the two new servers, where one
server will hold 2 bricks and be replaced later, then recreating the bricks so
that the server hosting 2 bricks goes back to 1.
Also, I found the image location
Hi Yedidyah and Strahil,
Just to double-check whether you received the issue request and log files.
-
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e:
Hi Strahil,
cluster.lookup-optimize has been turned on by default since, I think, Gluster
version 6, which corresponds to oVirt 4.3, so oVirt inherits this setting
regardless of the oVirt preset. My volumes are provisioned by the oVirt UI.
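To confirm what a given volume actually inherited, something like this should
show it (volume name is just a placeholder):

    gluster volume get data cluster.lookup-optimize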
Yes, in theory shards improve read performance; however, write
As we continue to develop oVirt 4.4, the Development and Integration teams
at Red Hat would value insights on how you are deploying the oVirt
environment.
Please help us hit the mark by completing this short survey. The survey will
close on *May 30th 2021*.
If you're managing multiple oVirt
Hm... are those tests done with sharding + full disk preallocation? If yes,
then this is quite interesting.
Storage migration should still be possible, as oVirt creates a snapshot and
then migrates the disks and consolidates them on the new storage location.
Best Regards,
Strahil Nikolov
Right, I re-tested again; the shard setting didn't interfere with the
migration. My previous test failure was caused by the root file privilege
reset bug in 4.3.
All my tests use sparse qcow files; I'm afraid I won't go into a preallocated
file comparison because it is not practical in our usage.
Indeed, the network is assigned to the cluster; however, it is not listed at all.