On 06/08/2015 11:51 AM, Mathieu Chateau wrote:
This slide deck (maybe outdated) says that reads are also balanced in the replication scenario (slide 22):
http://www.gluster.org/community/documentation/images/8/80/GlusterFS_Architecture_%26_Roadmap-Vijay_Bellur-LinuxCon_EU_2013.pdf

Writes aside, would it be possible to have an option that uses the other brick only as a failover for reads & lookups?


Lookups have to be sent to both bricks because AFR uses the responses to determine whether one of the copies is stale (and then serves from the good copy). For reads, if a client is mounted on the same machine as a brick, reads are served from that brick automatically. You can also use the cluster.read-subvolume option to explicitly force the client to read from a particular brick:

`gluster volume set help`

<snip>

Option: cluster.read-subvolume
Default Value: (null)
Description: inode-read fops happen only on one of the bricks in replicate. Afr will prefer the one specified using this option if it is not stale. Option value must be one of the xlator names of the children. Ex: <volname>-client-0 till <volname>-client-<number-of-bricks - 1>

</snip>
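For example, setting the option would look something like this (a sketch; "gv0" and "gv0-client-0" are placeholders, so substitute your own volume name and the client xlator corresponding to the brick you want to prefer):

```shell
# The client xlator names follow the pattern <volname>-client-<N>,
# numbered in the order the bricks were listed at volume creation;
# "gluster volume info gv0" shows the bricks in that order.
# Prefer the first brick for inode-read fops, as long as it is not stale:
gluster volume set gv0 cluster.read-subvolume gv0-client-0
```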

Cordialement,
Mathieu CHATEAU
http://www.lotp.fr

2015-06-08 8:11 GMT+02:00 Ravishankar N <[email protected]>:



    On 06/08/2015 11:34 AM, Mathieu Chateau wrote:
    Hello Ravi,

    thanks for clearing things up.

    Anything on the roadmap that would help my case?



    I don't think it would be possible for clients to do I/O only on
    their local brick and yet expect the bricks' contents to stay in
    sync in real time.



    Cordialement,
    Mathieu CHATEAU
    http://www.lotp.fr

    2015-06-08 6:37 GMT+02:00 Ravishankar N <[email protected]>:



        On 06/06/2015 12:49 AM, Mathieu Chateau wrote:
        Hello,

        Sorry to bother you again, but I am still facing this issue.

        The client still talks to the "other side" instead of the
        node declared in fstab:
        prd-sta-sto01:/gluster-preprod
        /mnt/gluster-preprod glusterfs
        defaults,_netdev,backupvolfile-server=prd-sta-sto02 0 0

        I expect the client to use sto01, not sto02, as long as sto01 is available.

        Hi Mathieu,
        When you do lookups (`ls` etc.), they are sent to both bricks
        of the replica. If you write to a file, the write is also
        sent to both bricks. This is by design. Only reads are
        served from the local brick.
        -Ravi
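        To see which bricks a client is currently connected to, a quick sketch (assumptions about your setup: the FUSE client process is named glusterfs, glusterd listens on 24007, and brick ports start at 49152):

        ```shell
        # Established TCP connections held by the glusterfs client process;
        # in a two-brick replica you should see one connection per brick.
        ss -tnp | grep glusterfs
        ```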



        If I add a static route to break connectivity to sto02 and
        then run "df", it takes around 30 seconds before it responds.
        After that, everything works fine.

        Questions:

          * How can I force a client to stick to one specific
            (local) node as much as possible?
          * How can I tell which node a client is currently connected to?

        Thanks for your help :)


        Cordialement,
        Mathieu CHATEAU
        http://www.lotp.fr

        2015-05-11 7:26 GMT+02:00 Mathieu Chateau <[email protected]>:

            Hello,

            thanks for helping :)

            If a Gluster server is rebooted, is there any way to make
            the client fail back to that node after the reboot?

            How can I tell which node a client is using? I see TCP
            connections to both nodes.

            Regards,

            Cordialement,
            Mathieu CHATEAU
            http://www.lotp.fr

            2015-05-11 7:13 GMT+02:00 Ravishankar N <[email protected]>:



                On 05/10/2015 08:29 PM, Mathieu Chateau wrote:
                Hello,

                Short way: is there any way to define a preferred
                Gluster server?

                Long way:
                I have the following setup (version 3.6.3):

                Gluster A  <==> VPN <==> Gluster B

                Volume is replicated between A and B.

                They are in the same datacenter, connected by a
                1 Gb/s link with low latency (0.5 ms).

                I have Gluster clients in LANs A & B.

                When running "ls" on a big folder (~60k files), both
                Gluster nodes are used, so it takes 9 min instead of
                the 1 min it takes when only the local Gluster node
                is reachable.


                Lookups (and writes, of course) from clients are sent
                to both bricks because AFR uses the result of the
                lookup to select which brick to read from if there
                is a pending heal, etc.
                If the file is clean on both A and B, then reads are
                always served from the local brick, i.e. reads on
                clients mounted in A will be served from the brick
                in A (and likewise for B).

                Hope that helps,
                Ravi


                It's an HA setup; the application is present on both
                sides. I would like a master/master setup, but one
                that uses only the local node as much as possible.


                Regards,
                Mathieu CHATEAU
                http://www.lotp.fr


                _______________________________________________
                Gluster-users mailing list
                [email protected]  <mailto:[email protected]>
                http://www.gluster.org/mailman/listinfo/gluster-users







