[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-31 Thread Krutika Dhananjay
Adding back gluster-users
Comments inline ...

On Fri, Mar 29, 2019 at 8:11 PM Olaf Buitelaar wrote:

> Dear Krutika,
>
>
>
> 1. I’ve made 2 profile runs of around 10 minutes (see files
> profile_data.txt and profile_data2.txt). Looking at them, most time seems to
> be spent in the fsync and readdirp FOPs.
>
> Unfortunately I don’t have the profile info for the 3.12.15 version, so it’s
> a bit hard to compare.
>
> One additional thing I do notice: on one machine (10.32.9.5) the iowait time
> has increased a lot, from an average below 1% to around 12% after
> the upgrade.
>
> So my first suspicion was that lightning strikes twice and I now also have
> a bad disk, but that doesn’t appear to be the case, since all SMART statuses
> report OK.
>
> Also dd shows performance I would more or less expect;
>
> dd if=/dev/zero of=/data/test_file  bs=100M count=1  oflag=dsync
>
> 1+0 records in
>
> 1+0 records out
>
> 104857600 bytes (105 MB) copied, 0.686088 s, 153 MB/s
>
> dd if=/dev/zero of=/data/test_file  bs=1G count=1  oflag=dsync
>
> 1+0 records in
>
> 1+0 records out
>
> 1073741824 bytes (1.1 GB) copied, 7.61138 s, 141 MB/s
>
> dd if=/dev/urandom of=/data/test_file  bs=1024 count=1000000
>
> 1000000+0 records in
>
> 1000000+0 records out
>
> 1024000000 bytes (1.0 GB) copied, 6.35051 s, 161 MB/s
>
> dd if=/dev/zero of=/data/test_file  bs=1024 count=1000000
>
> 1000000+0 records in
>
> 1000000+0 records out
>
> 1024000000 bytes (1.0 GB) copied, 1.6899 s, 606 MB/s
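>
> (For comparison, the same dsync test could also be run against the gluster
> mount itself rather than the raw brick, e.g. something like;
>
> dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/test_file  bs=1G count=1  oflag=dsync
>
> the glusterSD mount path above is only an illustration, not the exact path
> on these hosts.)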
>
> When I disable this brick (service glusterd stop; pkill glusterfsd),
> performance in gluster is better, but not on par with what it was. Also, the
> CPU usage on the “neighbor” nodes which host the other bricks in the same
> subvolume increases quite a lot in this case, which I wouldn’t expect,
> since they shouldn’t have to handle much more work, except flagging shards
> to heal. Iowait also drops to idle once gluster is stopped, so it’s
> definitely gluster that is waiting on I/O.
>
>
>

So I see that FSYNC %-latency is on the higher side. And I also noticed you
don't have the direct-io options enabled on the volume.
Could you set the following options on the volume -
# gluster volume set <VOLNAME> network.remote-dio off
# gluster volume set <VOLNAME> performance.strict-o-direct on
and also disable choose-local -
# gluster volume set <VOLNAME> cluster.choose-local off
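
(Once these are set, the effective values can be verified with, for example -
# gluster volume get <VOLNAME> network.remote-dio
# gluster volume get <VOLNAME> performance.strict-o-direct
# gluster volume get <VOLNAME> cluster.choose-local
just an illustration of 'gluster volume get', in case it is useful.)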

let me know if this helps.

> 2. I’ve attached the mnt log and volume info, but I couldn’t find anything
> relevant in those logs. I think this is because we run the VMs with
> libgfapi;
>
> [root@ovirt-host-01 ~]# engine-config  -g LibgfApiSupported
>
> LibgfApiSupported: true version: 4.2
>
> LibgfApiSupported: true version: 4.1
>
> LibgfApiSupported: true version: 4.3
>
> And I can confirm the qemu process is invoked with the gluster:// address
> for the images.
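>
> (For anyone following along: this flag is normally toggled per cluster
> compatibility level with something like
>
> engine-config -s LibgfApiSupported=true --cver=4.2
> systemctl restart ovirt-engine
>
> quoting that from memory, so treat it as illustrative rather than exact.)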
>
> The message is logged in the /var/lib/libvirt/qemu/<vm name> file, which
> I’ve also included. For a sample case see around 2019-03-28 20:20:07,
>
> which has the error; E [MSGID: 133010]
> [shard.c:2294:shard_common_lookup_shards_cbk] 0-ovirt-kube-shard: Lookup on
> shard 109886 failed. Base file gfid = a38d64bc-a28b-4ee1-a0bb-f919e7a1022c
> [Stale file handle]
>

Could you also attach the brick logs for this volume?
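(They should be under /var/log/glusterfs/bricks/ on the brick hosts; grepping
them for the base file gfid should also narrow down which brick reported the
stale handle, e.g.
# grep -l a38d64bc-a28b-4ee1-a0bb-f919e7a1022c /var/log/glusterfs/bricks/*.log
though that is only a suggestion, the full logs are more useful.)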


>
> 3. Yes, I see multiple instances for the same brick directory, like;
>
> /usr/sbin/glusterfsd -s 10.32.9.6 --volfile-id
> ovirt-core.10.32.9.6.data-gfs-bricks-brick1-ovirt-core -p
> /var/run/gluster/vols/ovirt-core/10.32.9.6-data-gfs-bricks-brick1-ovirt-core.pid
> -S /var/run/gluster/452591c9165945d9.socket --brick-name
> /data/gfs/bricks/brick1/ovirt-core -l
> /var/log/glusterfs/bricks/data-gfs-bricks-brick1-ovirt-core.log
> --xlator-option *-posix.glusterd-uuid=fb513da6-f3bd-4571-b8a2-db5efaf60cc1
> --process-name brick --brick-port 49154 --xlator-option
> ovirt-core-server.listen-port=49154
>
>
>
> I’ve made an export of the output of ps from the time I observed these
> multiple processes.
>
> In addition to the brick_mux bug as noted by Atin, I might also have another
> possible cause: as oVirt moves nodes from non-operational or
> maintenance state to active/activating, it also seems to restart gluster.
> However, I don’t have direct proof for this theory.
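>
> (A quick way to spot such duplicates is to count brick processes per brick
> path, e.g.;
>
> ps -C glusterfsd -o args= | awk '{for(i=1;i<=NF;i++) if($i=="--brick-name") print $(i+1)}' | sort | uniq -c
>
> which is only a rough illustration, the ps export should show the same thing.)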
>
>
>

+Atin Mukherjee  ^^
+Mohit Agrawal   ^^

-Krutika

Thanks Olaf
>
> On Fri, 29 Mar 2019 at 10:03, Sandro Bonazzola wrote:
>
>>
>>
>> On Thu, 28 Mar 2019 at 17:48,  wrote:
>>
>>> Dear All,
>>>
>>> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
>>> previous upgrades from 4.1 to 4.2 etc. went rather smoothly, this one was a
>>> different experience. After first trying a test upgrade on a 3-node setup,
>>> which went fine, I headed on to upgrade the 9-node production platform,
>>> unaware of the backward compatibility issues between gluster 3.12.15 ->
>>> 5.3. After upgrading 2 nodes, the HA engine stopped and wouldn't start.
>>> Vdsm wasn't able to mount the engine storage domain, since /dom_md/metadata
>>> was missing or couldn't be accessed. R

[ovirt-users] Hosted-Engine constantly dies

2019-03-31 Thread Strahil Nikolov
Hi Guys,
As I'm still quite new to oVirt, I have some trouble finding the cause of
this one. My Hosted Engine (4.3.2) is constantly dying (even when Global
Maintenance is enabled). My interpretation of the logs indicates some lease
problem, but I don't get the whole picture yet.
I'm attaching the output of 'journalctl -f | grep -Ev "Started Session|session
opened|session closed"' after I have tried to power on the hosted engine
(hosted-engine --vm-start).
The nodes are fully updated and I don't see anything in the gluster v5.5 logs,
but I can double-check.
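(In case it helps with reading the attached journal: the lease side can also be
inspected with 'hosted-engine --vm-status' and 'sanlock client status', and the
HA agent/broker logs are normally under /var/log/ovirt-hosted-engine-ha/. Treat
the exact commands and paths as pointers from memory rather than a definitive
list.)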
Any hints are appreciated and thanks in advance.
Best Regards,
Strahil Nikolov

hosted-engine-crash
Description: Binary data


[ovirt-users] Re: Multiple "Active VM before the preview" snapshots

2019-03-31 Thread Bruckner, Simone
Dear all,

  does anyone have an idea how to address this?

Thank you and all the best,
Simone

-----Original Message-----
From: Bruckner, Simone
Sent: Wednesday, 27 March 2019 13:25
To: users@ovirt.org
Subject: [ovirt-users] Multiple "Active VM before the preview" snapshots

Hi,

  we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:

engine=# select snapshot_id, status, description from snapshots where vm_id = 
'40c0f334-dac5-42ad-8040-e2d2193c73c0';
             snapshot_id              |   status   |          description
--------------------------------------+------------+-------------------------------
 b77f5752-f1a4-454f-bcde-6afd6897e047 | OK         | Active VM
 059d262a-6cc4-4d35-b1d4-62ef7fe28d67 | OK         | Active VM before the preview
 d22e4f74-6521-45d5-8e09-332c05194ec3 | OK         | Active VM before the preview
 87d39245-bedf-4cf1-a2a6-4228176091d3 | IN_PREVIEW | base
(4 rows)

We cannot perform any snapshot, clone, or copy operations on those VMs. Is there
a way to get this cleared?
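
(For reference, a read-only query along these lines, using the same snapshots
table and the column names visible above, should list any other VMs carrying
more than one "Active VM*" snapshot:

engine=# select vm_id, count(*) from snapshots where description like 'Active VM%' group by vm_id having count(*) > 1;

Please double-check it before relying on it; it is only meant as a starting
point.)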

All the best,
Simone


[ovirt-users] HE - engine gluster volume - not mounted

2019-03-31 Thread Leo David
Hello Everyone,
Using a 4.3.2 installation, and after running through the HyperConverged Setup,
at the last stage it fails. It seems that the previously created "engine"
volume is not mounted under the "/rhev" path, therefore the setup cannot finish
the deployment.
Any idea which services are responsible for mounting the volumes on the
oVirt Node distribution? I'm thinking that maybe this particular one
failed to start for some reason...
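(In case it helps while waiting for answers, the pieces that are usually
involved can be checked with something like:
systemctl status glusterd vdsmd
gluster volume status engine
mount | grep rhev
This is a rough checklist rather than an authoritative list of the responsible
services.)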
Thank you very much !

-- 
Best regards, Leo David