On 07/07/2017 02:03 PM, Gianluca Cecchi wrote:
On Fri, Jul 7, 2017 at 10:15 AM, knarra <kna...@redhat.com> wrote:




    It seems I have to de-select the checkbox "Show available bricks
    from host" so that I can manually enter the directory of the bricks.

    I see that bricks are mounted in /gluster/brick3 and that is the
    reason nothing shows up in the "Brick Directory" drop-down field.
    If bricks were mounted under /gluster_bricks they would have been
    detected automatically. An RFE has been raised to detect bricks
    which are created manually.


I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I think I used the "default" path that was proposed inside the ovirt-gluster.conf file used to feed gdeploy...
I think it was based on this from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and this conf file
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf

Good that there is an RFE. Thanks
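
For future reference, if I understood the convention correctly, a brick mounted under /gluster_bricks should get auto-detected; a minimal sketch of what that layout would look like (device and directory names here are hypothetical):

# mount the brick filesystem under the path the UI scans
mkdir -p /gluster_bricks/brick1
mount /dev/sdb1 /gluster_bricks/brick1
# use a subdirectory of the mount point as the actual brick directory
mkdir -p /gluster_bricks/brick1/engine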



    BTW: I see that after creating a volume optimized for oVirt in the
    web admin GUI of 4.1.2, I get slightly different options for it
    compared to a pre-existing volume created in 4.0.5 during initial
    setup with gdeploy.

    NOTE: during the 4.0.5 setup I had Gluster 3.7 installed, while now
    I have Gluster 3.10 (manually updated from the CentOS Storage SIG)

    Running "gluster volume info" and then a diff of the output for
    the 2 volumes I have:
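
    The diff came from something like this (the volume names are
    hypothetical placeholders):

    gluster volume info newvol > new.txt
    gluster volume info oldvol > old.txt
    diff new.txt old.txt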

    new volume == <
    old volume == >

    < cluster.shd-max-threads: 8
    ---
    > cluster.shd-max-threads: 6
    13a13,14
    > features.shard-block-size: 512MB
    16c17
    < network.remote-dio: enable
    ---
    > network.remote-dio: off
    23a25
    > performance.readdir-ahead: on
    25c27
    < server.allow-insecure: on
    ---
    > performance.strict-o-direct: on

    Do I have to change anything for the newly created one?
    No, you do not need to change anything for the new volume. But if
    you plan to enable o-direct on the volume then you will have to
    turn off remote-dio.


OK.
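
Just to note it down for myself: if I ever want o-direct on the new volume, the switch would presumably be something like this (the volume name is a hypothetical placeholder):

gluster volume set myvol network.remote-dio off
gluster volume set myvol performance.strict-o-direct on
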
Again, in the ovirt-gluster.conf file I see there was this kind of setting for the Gluster volumes when running gdeploy:
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine
I'm now going to cross-check what the suggested values are for oVirt 4.1 and Gluster 3.10 combined...
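
If I read the conf file right, gdeploy effectively turns each key=value pair into a "gluster volume set" call, roughly like this (the volume name comes from the brick_dirs path, so treat it as a guess):

gluster volume set engine group virt
gluster volume set engine storage.owner-uid 36
gluster volume set engine storage.owner-gid 36
gluster volume set engine features.shard on
gluster volume set engine features.shard-block-size 512MB
# ...and so on for the remaining key/value pairs
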
Now the virt group sets the shard block size to the default, which is 4MB, and that is the suggested value. With 4MB shards we see that healing is much faster when granular entry heal is enabled on the volume.

I am not sure why the conf file sets the shard size again. Maybe this can be removed from the file.

Other than this, everything looks good to me.
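
To double-check what a volume actually ended up with, something like this should work (the volume name is a hypothetical placeholder):

gluster volume get myvol features.shard-block-size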

I was particularly worried by the difference in features.shard-block-size, but after reading this

http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure 512MB is the best choice for VM storage... I'm going to dig into it more eventually.

Thanks,
Gianluca

