+Krutika Dhananjay <kdhan...@redhat.com> +Sachidananda URS <s...@redhat.com>


Adrian, can you provide more details on the performance issue you're seeing?
We do have some scripts that collect data for analysis. Perhaps you can run
them and attach the details to a bug report. The Ansible scripts that do
this are still under review -
https://github.com/gluster/gluster-ansible-maintenance/pull/4
It would also be great if you could give us feedback on them.
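
In the meantime, if you want a first look yourself, gluster's built-in
profiling is usually the first thing we ask for. A minimal sketch, with
<VOLNAME> as a placeholder for your volume name:

  # enable per-brick fop accounting on the volume
  gluster volume profile <VOLNAME> start
  # run the slow workload for a few minutes, then dump the statistics
  gluster volume profile <VOLNAME> info
  # turn accounting off again when done
  gluster volume profile <VOLNAME> stop

The 'info' output lists per-fop call counts and latencies for each brick,
which tells us whether the time is going to one fop (e.g. FSYNC, LOOKUP)
or one brick.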
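
On the "optimize for ovirt store" failure you mention below: as far as I
know, that action essentially applies gluster's predefined "virt" option
group, so as a workaround you could try applying it manually and then
re-check the effective options (again, <VOLNAME> is a placeholder):

  # apply the virt option group by hand
  gluster volume set <VOLNAME> group virt
  # verify which options took effect
  gluster volume get <VOLNAME> all

If the manual 'group virt' also fails, please include the exact error in
the bug as well.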

On Thu, Aug 22, 2019 at 7:50 AM <adrianquint...@gmail.com> wrote:

> Hello,
> I have a hyperconverged setup using ovirt 4.3.5 and the "optimize for
> ovirt store" seems to fail on gluster volumes.
> I am seeing poor performance and trying to see how should I tune gluster
> to give better performance.
> Can you provide any suggestions on the following volume
> settings(parameters)?
>
> Option     Value
> ------     -----
> cluster.lookup-unhashed    on
> cluster.lookup-optimize    on
> cluster.min-free-disk      10%
> cluster.min-free-inodes    5%
> cluster.rebalance-stats    off
> cluster.subvols-per-directory      (null)
> cluster.readdir-optimize           off
> cluster.rsync-hash-regex           (null)
> cluster.extra-hash-regex           (null)
> cluster.dht-xattr-name     trusted.glusterfs.dht
> cluster.randomize-hash-range-by-gfid       off
> cluster.rebal-throttle     normal
> cluster.lock-migration     off
> cluster.force-migration    off
> cluster.local-volume-name          (null)
> cluster.weighted-rebalance         on
> cluster.switch-pattern     (null)
> cluster.entry-change-log           on
> cluster.read-subvolume     (null)
> cluster.read-subvolume-index    -1
> cluster.read-hash-mode  1
> cluster.background-self-heal-count      8
> cluster.metadata-self-heal         off
> cluster.data-self-heal     off
> cluster.entry-self-heal    off
> cluster.self-heal-daemon           on
> cluster.heal-timeout    600
> cluster.self-heal-window-size   1
> cluster.data-change-log    on
> cluster.metadata-change-log        on
> cluster.data-self-heal-algorithm           full
> cluster.eager-lock         enable
> disperse.eager-lock        on
> disperse.other-eager-lock          on
> disperse.eager-lock-timeout     1
> disperse.other-eager-lock-timeout       1
> cluster.quorum-type        auto
> cluster.quorum-count       (null)
> cluster.choose-local       off
> cluster.self-heal-readdir-size     1KB
> cluster.post-op-delay-secs      1
> cluster.ensure-durability          on
> cluster.consistent-metadata        no
> cluster.heal-wait-queue-length  128
> cluster.favorite-child-policy      none
> cluster.full-lock          yes
> diagnostics.latency-measurement    off
> diagnostics.dump-fd-stats          off
> diagnostics.count-fop-hits         off
> diagnostics.brick-log-level        INFO
> diagnostics.client-log-level       INFO
> diagnostics.brick-sys-log-level    CRITICAL
> diagnostics.client-sys-log-level           CRITICAL
> diagnostics.brick-logger           (null)
> diagnostics.client-logger          (null)
> diagnostics.brick-log-format       (null)
> diagnostics.client-log-format      (null)
> diagnostics.brick-log-buf-size  5
> diagnostics.client-log-buf-size 5
> diagnostics.brick-log-flush-timeout     120
> diagnostics.client-log-flush-timeout    120
> diagnostics.stats-dump-interval 0
> diagnostics.fop-sample-interval 0
> diagnostics.stats-dump-format      json
> diagnostics.fop-sample-buf-size 65535
> diagnostics.stats-dnscache-ttl-sec      86400
> performance.cache-max-file-size 0
> performance.cache-min-file-size 0
> performance.cache-refresh-timeout       1
> performance.cache-priority
> performance.cache-size     32MB
> performance.io-thread-count     16
> performance.high-prio-threads   16
> performance.normal-prio-threads 16
> performance.low-prio-threads    32
> performance.least-prio-threads  1
> performance.enable-least-priority          on
> performance.iot-watchdog-secs      (null)
> performance.iot-cleanup-disconnected-reqs  off
> performance.iot-pass-through       false
> performance.io-cache-pass-through          false
> performance.cache-size     128MB
> performance.qr-cache-timeout    1
> performance.cache-invalidation     false
> performance.ctime-invalidation     false
> performance.flush-behind           on
> performance.nfs.flush-behind       on
> performance.write-behind-window-size       1MB
> performance.resync-failed-syncs-after-fsync  off
> performance.nfs.write-behind-window-size   1MB
> performance.strict-o-direct        on
> performance.nfs.strict-o-direct    off
> performance.strict-write-ordering          off
> performance.nfs.strict-write-ordering      off
> performance.write-behind-trickling-writes  on
> performance.aggregate-size         128KB
> performance.nfs.write-behind-trickling-writes  on
> performance.lazy-open      yes
> performance.read-after-open        yes
> performance.open-behind-pass-through       false
> performance.read-ahead-page-count       4
> performance.read-ahead-pass-through        false
> performance.readdir-ahead-pass-through     false
> performance.md-cache-pass-through          false
> performance.md-cache-timeout    1
> performance.cache-swift-metadata           true
> performance.cache-samba-metadata           false
> performance.cache-capability-xattrs        true
> performance.cache-ima-xattrs       true
> performance.md-cache-statfs        off
> performance.xattr-cache-list
> performance.nl-cache-pass-through          false
> features.encryption        off
> network.frame-timeout   1800
> network.ping-timeout    30
> network.tcp-window-size    (null)
> client.ssl         off
> network.remote-dio         off
> client.event-threads    4
> client.tcp-user-timeout 0
> client.keepalive-time   20
> client.keepalive-interval       2
> client.keepalive-count  9
> network.tcp-window-size    (null)
> network.inode-lru-limit 16384
> auth.allow         *
> auth.reject        (null)
> transport.keepalive     1
> server.allow-insecure      on
> server.root-squash         off
> server.all-squash          off
> server.anonuid  65534
> server.anongid  65534
> server.statedump-path      /var/run/gluster
> server.outstanding-rpc-limit    64
> server.ssl         off
> auth.ssl-allow     *
> server.manage-gids         off
> server.dynamic-auth        on
> client.send-gids           on
> server.gid-timeout      300
> server.own-thread          (null)
> server.event-threads    4
> server.tcp-user-timeout 42
> server.keepalive-time   20
> server.keepalive-interval       2
> server.keepalive-count  9
> transport.listen-backlog        1024
> transport.address-family           inet
> performance.write-behind           on
> performance.read-ahead     off
> performance.readdir-ahead          on
> performance.io-cache       off
> performance.open-behind    on
> performance.quick-read     off
> performance.nl-cache       off
> performance.stat-prefetch          on
> performance.client-io-threads      on
> performance.nfs.write-behind       on
> performance.nfs.read-ahead         off
> performance.nfs.io-cache           off
> performance.nfs.quick-read         off
> performance.nfs.stat-prefetch      off
> performance.nfs.io-threads         off
> performance.force-readdirp         true
> performance.cache-invalidation     false
> performance.global-cache-invalidation      true
> features.uss       off
> features.snapshot-directory        .snaps
> features.show-snapshot-directory           off
> features.tag-namespaces    off
> network.compression        off
> network.compression.window-size -15
> network.compression.mem-level   8
> network.compression.min-size    0
> network.compression.compression-level   -1
> network.compression.debug          false
> features.default-soft-limit        80%
> features.soft-timeout   60
> features.hard-timeout   5
> features.alert-time     86400
> features.quota-deem-statfs         off
> geo-replication.indexing           off
> geo-replication.indexing           off
> geo-replication.ignore-pid-check           off
> geo-replication.ignore-pid-check           off
> features.quota     off
> features.inode-quota       off
> features.bitrot    disable
> debug.trace        off
> debug.log-history          no
> debug.log-file     no
> debug.exclude-ops          (null)
> debug.include-ops          (null)
> debug.error-gen    off
> debug.error-failure        (null)
> debug.error-number         (null)
> debug.random-failure       off
> debug.error-fops           (null)
> nfs.disable        on
> features.read-only         off
> features.worm      off
> features.worm-file-level           off
> features.worm-files-deletable      on
> features.default-retention-period       120
> features.retention-mode    relax
> features.auto-commit-period     180
> storage.linux-aio          off
> storage.batch-fsync-mode           reverse-fsync
> storage.batch-fsync-delay-usec  0
> storage.owner-uid       36
> storage.owner-gid       36
> storage.node-uuid-pathinfo         off
> storage.health-check-interval   30
> storage.build-pgfid        off
> storage.gfid2path          on
> storage.gfid2path-separator        :
> storage.reserve 1
> storage.health-check-timeout    10
> storage.fips-mode-rchecksum        off
> storage.force-create-mode       0
> storage.force-directory-mode    0
> storage.create-mask     777
> storage.create-directory-mask   777
> storage.max-hardlinks   100
> features.ctime     on
> config.gfproxyd    off
> cluster.server-quorum-type         server
> cluster.server-quorum-ratio     0
> changelog.changelog        off
> changelog.changelog-dir    {{ brick.path }}/.glusterfs/changelogs
> changelog.encoding         ascii
> changelog.rollover-time 15
> changelog.fsync-interval        5
> changelog.changelog-barrier-timeout     120
> changelog.capture-del-path         off
> features.barrier           disable
> features.barrier-timeout        120
> features.trash     off
> features.trash-dir         .trashcan
> features.trash-eliminate-path      (null)
> features.trash-max-filesize        5MB
> features.trash-internal-op         off
> cluster.enable-shared-storage      disable
> locks.trace        off
> locks.mandatory-locking    off
> cluster.disperse-self-heal-daemon          enable
> cluster.quorum-reads       no
> client.bind-insecure       (null)
> features.shard     on
> features.shard-block-size          64MB
> features.shard-lru-limit        16384
> features.shard-deletion-rate    100
> features.scrub-throttle    lazy
> features.scrub-freq        biweekly
> features.scrub     false
> features.expiry-time    120
> features.cache-invalidation        off
> features.cache-invalidation-timeout     60
> features.leases    off
> features.lease-lock-recall-timeout      60
> disperse.background-heals       8
> disperse.heal-wait-qlength      128
> cluster.heal-timeout    600
> dht.force-readdirp         on
> disperse.read-policy       gfid-hash
> cluster.shd-max-threads 8
> cluster.shd-wait-qlength        10000
> cluster.locking-scheme     granular
> cluster.granular-entry-heal        enable
> features.locks-revocation-secs  0
> features.locks-revocation-clear-all        false
> features.locks-revocation-max-blocked   0
> features.locks-monkey-unlocking    false
> features.locks-notify-contention           no
> features.locks-notify-contention-delay  5
> disperse.shd-max-threads        1
> disperse.shd-wait-qlength       1024
> disperse.cpu-extensions    auto
> disperse.self-heal-window-size  1
> cluster.use-compound-fops          off
> performance.parallel-readdir       off
> performance.rda-request-size    131072
> performance.rda-low-wmark       4096
> performance.rda-high-wmark         128KB
> performance.rda-cache-limit        10MB
> performance.nl-cache-positive-entry        false
> performance.nl-cache-limit         10MB
> performance.nl-cache-timeout    60
> cluster.brick-multiplex    off
> cluster.max-bricks-per-process  250
> disperse.optimistic-change-log     on
> disperse.stripe-cache   4
> cluster.halo-enabled       False
> cluster.halo-shd-max-latency    99999
> cluster.halo-nfsd-max-latency   5
> cluster.halo-max-latency        5
> cluster.halo-max-replicas       99999
> cluster.halo-min-replicas       2
> features.selinux           on
> cluster.daemon-log-level           INFO
> debug.delay-gen    off
> delay-gen.delay-percentage         10%
> delay-gen.delay-duration        100000
> delay-gen.enable
> disperse.parallel-writes           on
> features.sdfs      off
> features.cloudsync         off
> features.ctime     on
> ctime.noatime      on
> feature.cloudsync-storetype        (null)
> features.enforce-mandatory-lock    off
>
>
> thank you!
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQ62FNFC7ZSOOBCFQBZYK7XDO6KXVGWL/
