+Krutika Dhananjay +Sachidananda URS
Adrian, can you provide more details on the performance issue you're seeing?
We have some scripts that collect data for analysis. Perhaps you can run
them and attach the results to a bug. The ansible scripts that do this are
still under review -
https://github.com/gluster/gluster-ansible-maintenance/pull/4
It would also be great if you could give us feedback on them.
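In the meantime, volume profiling usually captures enough to start with. A
minimal sketch using the standard gluster CLI (VOLNAME is a placeholder for
your actual volume name):

    # Enable profiling on the volume (adds some overhead while it runs)
    gluster volume profile VOLNAME start

    # Reproduce the slow workload, then dump per-brick FOP latency stats
    gluster volume profile VOLNAME info

    # Disable profiling once the data is captured
    gluster volume profile VOLNAME stop

Attaching the output of the "info" step to the bug would help narrow down
where the latency is coming from.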
On Thu, Aug 22, 2019 at 7:50 AM wrote:
> Hello,
> I have a hyperconverged setup using oVirt 4.3.5, and "Optimize for Virt
> Store" seems to fail on Gluster volumes.
> I am seeing poor performance and am trying to work out how I should tune
> Gluster for better performance.
> Can you suggest anything based on the following volume settings
> (parameters)?
>
> Option                                        Value
> ------                                        -----
> cluster.lookup-unhashed                       on
> cluster.lookup-optimize                       on
> cluster.min-free-disk                         10%
> cluster.min-free-inodes                       5%
> cluster.rebalance-stats                       off
> cluster.subvols-per-directory                 (null)
> cluster.readdir-optimize                      off
> cluster.rsync-hash-regex                      (null)
> cluster.extra-hash-regex                      (null)
> cluster.dht-xattr-name                        trusted.glusterfs.dht
> cluster.randomize-hash-range-by-gfid          off
> cluster.rebal-throttle                        normal
> cluster.lock-migration                        off
> cluster.force-migration                       off
> cluster.local-volume-name                     (null)
> cluster.weighted-rebalance                    on
> cluster.switch-pattern                        (null)
> cluster.entry-change-log                      on
> cluster.read-subvolume                        (null)
> cluster.read-subvolume-index                  -1
> cluster.read-hash-mode                        1
> cluster.background-self-heal-count            8
> cluster.metadata-self-heal                    off
> cluster.data-self-heal                        off
> cluster.entry-self-heal                       off
> cluster.self-heal-daemon                      on
> cluster.heal-timeout                          600
> cluster.self-heal-window-size                 1
> cluster.data-change-log                       on
> cluster.metadata-change-log                   on
> cluster.data-self-heal-algorithm              full
> cluster.eager-lock                            enable
> disperse.eager-lock                           on
> disperse.other-eager-lock                     on
> disperse.eager-lock-timeout                   1
> disperse.other-eager-lock-timeout             1
> cluster.quorum-type                           auto
> cluster.quorum-count                          (null)
> cluster.choose-local                          off
> cluster.self-heal-readdir-size                1KB
> cluster.post-op-delay-secs                    1
> cluster.ensure-durability                     on
> cluster.consistent-metadata                   no
> cluster.heal-wait-queue-length                128
> cluster.favorite-child-policy                 none
> cluster.full-lock                             yes
> diagnostics.latency-measurement               off
> diagnostics.dump-fd-stats                     off
> diagnostics.count-fop-hits                    off
> diagnostics.brick-log-level                   INFO
> diagnostics.client-log-level                  INFO
> diagnostics.brick-sys-log-level               CRITICAL
> diagnostics.client-sys-log-level              CRITICAL
> diagnostics.brick-logger                      (null)
> diagnostics.client-logger                     (null)
> diagnostics.brick-log-format                  (null)
> diagnostics.client-log-format                 (null)
> diagnostics.brick-log-buf-size                5
> diagnostics.client-log-buf-size               5
> diagnostics.brick-log-flush-timeout           120
> diagnostics.client-log-flush-timeout          120
> diagnostics.stats-dump-interval               0
> diagnostics.fop-sample-interval               0
> diagnostics.stats-dump-format                 json
> diagnostics.fop-sample-buf-size               65535
> diagnostics.stats-dnscache-ttl-sec            86400
> performance.cache-max-file-size               0
> performance.cache-min-file-size               0
> performance.cache-refresh-timeout             1
> performance.cache-priority
> performance.cache-size                        32MB
> performance.io-thread-count                   16
> performance.high-prio-threads                 16
> performance.normal-prio-threads               16
> performance.low-prio-threads                  32
> performance.least-prio-threads                1
> performance.enable-least-priority             on
> performance.iot-watchdog-secs                 (null)
> performance.iot-cleanup-disconnected-reqs     off
> performance.iot-pass-through                  false
> performance.io-cache-pass-through             false
> performance.cache-size                        128MB
> performance.qr-cache-timeout                  1
> performance.cache-invalidation                false
> performance.ctime-invalidation                false
> performance.flush-behind                      on
> performance.nfs.flush-behind                  on
> performance.write-behind-window-size          1MB
> performance.resync-failed-syncs-after-fsync   off
> performance.nfs.write-behind-window-size      1MB
> performance.strict-o-direct                   on
> performance.nfs.strict-o-direct               off
> performance.strict-write-ordering             off
> performance.nfs.strict-write-ordering         off
> performance.write-behind-trickling-writes     on
> performance.aggregate-size                    128KB
> performance.nfs.write-behind-trickling-writes on
> performance.lazy-open                         yes
> performance.read-after-open                   yes
> performance.open-behind-pass-through          false
> performance.read-ahead-page-count             4
> performance.read-ahead-pass-through           false
> performance.readdir-ahead-pass-through        false
> performan
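The "Optimize for Virt Store" step referred to above amounts to applying
Gluster's predefined "virt" option group. If the UI step keeps failing, the
same group can be applied manually from any host in the cluster; a sketch,
assuming a volume named "data" (substitute your own volume name):

    # Apply the virt option group (shipped in /var/lib/glusterd/groups/virt)
    gluster volume set data group virt

    # Dump all settings again to confirm; this is the command that produces
    # a listing like the one quoted above
    gluster volume get data all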