The problem with sparse qcow2 images is that the Gluster shard xlator might not
cope with the random I/O nature of the workload, as it will have to create a
lot of shards in a short period of time (at the 64 MB shard size) for small
I/O (for example, 50 x 512-byte I/O requests could cause 50 shards to be
created).
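As a rough illustration (a sketch of my own, not Gluster code; 64 MB is the
features.shard-block-size default), each write offset maps to shard index
offset // shard_size, so scattered small writes on a sparse image can each
land in a different, not-yet-created shard:

# Sketch only: estimate how many distinct shards a batch of small random
# writes would touch on a sparse image with 64 MB shards.
import random

SHARD_SIZE = 64 * 1024 * 1024  # assumed features.shard-block-size (64 MB)

def shards_touched(write_offsets, shard_size=SHARD_SIZE):
    # Each offset maps to shard index offset // shard_size; on a sparse
    # image the first touch of a shard forces the shard xlator to create it.
    return {off // shard_size for off in write_offsets}

# 50 x 512-byte writes scattered across a 1 TiB virtual disk.
offsets = [random.randrange(0, 1 << 40) for _ in range(50)]
print(len(shards_touched(offsets)), "shards may need to be created")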
Sorry, I replied to the wrong thread.
On Mon, May 10, 2021 at 6:11 PM Ernest Clyde Chua <
ernestclydeac...@gmail.com> wrote:
> Good day,
> the distributed volume was created manually.
> currently i'm thinking to create a replica on the two new servers which 1
> server will hold 2 bricks and
For maximum safety, create a new Gluster layout and storage domain, and slowly
migrate the VMs into the new domain. If you do any other workaround, you should
test it very carefully beforehand.
Right, I re-tested, and the shard setting did not interfere with the migration;
my previous test failure was caused by the root file privilege reset bug in 4.3.
All my tests use sparse qcow files; I'm afraid I won't be dragged into a
preallocated-file comparison because it is not practical in our usage.
Hm... are those tests done with sharding + full disk preallocation? If yes,
then this is quite interesting.
Storage migration should still be possible, as oVirt creates a snapshot and
then migrates the disks and consolidates them on the new storage location.
Best Regards,
Strahil Nikolov
Hi Strahil,
cluster.lookup-optimize has been enabled by default since, I think, Gluster
version 6, which corresponds to oVirt 4.3, so oVirt inherits this setting
regardless of its own presets. My volumes are provisioned by the oVirt UI.
Yes, in theory shards improve read performance, however write
Good day,
the distributed volume was created manually.
Currently I'm thinking of creating a replica on the two new servers, where one
server will hold 2 bricks, and replacing it later, then recreating the bricks
so that the server hosting 2 bricks goes back to 1.
Also, I found the image location
A quote from:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-creating_replicated_volumes
Sharding has one supported use case: in the context of providing Red Hat
Gluster Storage as a storage domain for Red Hat Enterprise Virtualization,
It is because of a serious bug involving cluster.lookup-optimize; it caused
corruption of a few VM images after a new brick was added. Although
cluster.lookup-optimize theoretically impacts all files, not just shards,
after running many rounds of verification tests the corruption does not
happen when sharding is disabled.
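In case anyone wants to check this on their own volumes, here is a
hypothetical helper of mine (the volume name is a placeholder) that just
wraps the stock gluster volume get/set CLI:

# Hypothetical helper (not from the thread): inspect and optionally disable
# cluster.lookup-optimize on a volume via the gluster CLI.
import subprocess

VOLUME = "data"  # placeholder volume name, adjust to your setup

def gluster(*args):
    return subprocess.run(["gluster", *args], check=True,
                          capture_output=True, text=True).stdout

print(gluster("volume", "get", VOLUME, "cluster.lookup-optimize"))
# To turn it off before adding bricks (test carefully first):
# gluster("volume", "set", VOLUME, "cluster.lookup-optimize", "off")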
Also, keep in mind that RHHI is using shards of 512 MB which reduces the shard
count.
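For scale, a quick back-of-the-envelope comparison (the 2 TiB image size is
my own example, not a figure from this thread):

# Shard count for a fully written image at the Gluster default (64 MB)
# versus an RHHI-style 512 MB shard size.
import math

def shard_count(disk_bytes, shard_bytes):
    return math.ceil(disk_bytes / shard_bytes)

disk = 2 * 1024**4                       # 2 TiB image (example)
print(shard_count(disk, 64 * 1024**2))   # 32768 shards at 64 MB
print(shard_count(disk, 512 * 1024**2))  # 4096 shards at 512 MB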
I'm glad that in newer versions of Gluster there are no stalls.
Also, in Gluster v9 there are new healing mechanisms - so we will see.
Anyway, what makes you think that sharding was your problem?
Best Regards,
Strahil
Hi Strahil,
I'm sorry to say I don't totally agree with the VM pause behavior you
mentioned during healing. The VM actually did not pause, but it healed very
slowly indeed, about 1 GB per minute, and VM sync write operations appeared
to slow down while healing was underway. It is comparable when sharding
I would still recommend sharding. Imagine that you have a 2 TB disk for a VM
and one of the oVirt hosts needs maintenance. When Gluster has to heal that
2 TB file, your VM won't be able to access the file for a very long time and
will fail. Sharding is important for having no-downtime maintenance.
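To put rough numbers on that (an illustration only; the ~1 GB/min heal rate
is the figure reported earlier in the thread, and the dirty-shard count is an
assumption of mine):

# Healing a monolithic 2 TiB image versus healing only the shards that were
# actually written while the host was down.
HEAL_RATE_GB_PER_MIN = 1.0              # heal rate reported earlier in thread

def heal_minutes(bytes_to_heal):
    return bytes_to_heal / (1024**3) / HEAL_RATE_GB_PER_MIN

whole_image = 2 * 1024**4               # monolithic 2 TiB file
dirty_shards = 200 * 64 * 1024**2       # assume 200 dirty 64 MB shards
print(f"monolithic: ~{heal_minutes(whole_image):.0f} min")   # ~2048 min
print(f"sharded:    ~{heal_minutes(dirty_shards):.0f} min")  # ~12 min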