Ok.

I am not sure deleting the file or re-doing the peer probe would be the right way to go.

Gluster-users can help you here.


On 05/21/2014 07:08 PM, Gabi C wrote:
Hello!


I haven't changed the IP, nor reinstalled the nodes. All nodes are updated via yum. All I can think of is that, after having some issue with gluster, from the WebGUI I deleted the VM, deactivated and detached the storage domains (I have 2), then, _manually_, from one of the nodes, removed the bricks, detached the peers, probed them, added the bricks again, brought the volume up, and re-added the storage domains from the WebGUI.
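Roughly, the manual part was along these lines; this is only a sketch, and 'myvol' plus the brick path /export/brick are placeholders, not the actual names used here:

    # sketch of the manual sequence described above (placeholder names)
    gluster volume stop myvol
    gluster volume remove-brick myvol replica 2 10.125.1.196:/export/brick force
    gluster peer detach 10.125.1.196
    gluster peer probe 10.125.1.196
    gluster volume add-brick myvol replica 3 10.125.1.196:/export/brick force
    gluster volume start myvol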


On Wed, May 21, 2014 at 4:26 PM, Kanagaraj <[email protected]> wrote:

    What are the steps which led to this situation?

    Did you re-install one of the nodes after forming the cluster, or
    reboot a node, which could have changed its IP?



    On 05/21/2014 03:43 PM, Gabi C wrote:
    On the affected node:

    gluster peer status
    Number of Peers: 3

    Hostname: 10.125.1.194
    Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
    State: Peer in Cluster (Connected)

    Hostname: 10.125.1.196
    Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
    State: Peer in Cluster (Connected)

    Hostname: 10.125.1.194
    Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
    State: Peer in Cluster (Connected)





    ls -la /var/lib/glusterd/peers/
    total 20
    drwxr-xr-x. 2 root root 4096 May 21 11:10 .
    drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
    -rw-------. 1 root root   73 May 21 11:10 85c2a08c-a955-47cc-a924-cf66c6814654
    -rw-------. 1 root root   73 May 21 10:52 c22e41b8-2818-4a96-a6df-a237517836d6
    -rw-------. 1 root root   73 May 21 11:10 d95558a0-a306-4812-aec2-a361a9ddde3e


    Should I delete d95558a0-a306-4812-aec2-a361a9ddde3e?
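    Before deleting anything, that file can be inspected to see which
    host it points at; in this gluster version a peer file is a small
    key=value file, roughly like the sketch below (the hostname line is
    a placeholder, not actual output from this node):

        cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
        # expected shape of the contents:
        # uuid=d95558a0-a306-4812-aec2-a361a9ddde3e
        # state=3
        # hostname1=<address this entry points at>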





    On Wed, May 21, 2014 at 12:00 PM, Kanagaraj <[email protected]> wrote:


        On 05/21/2014 02:04 PM, Gabi C wrote:
        Hello!

        I have an oVirt setup, 3.4.1, up to date, with the gluster
        package 3.5.0-3.fc19 on all 3 nodes. The glusterfs setup is
        replicated across 3 bricks. On 2 nodes 'gluster peer status'
        shows 2 peers connected, each with its UUID. On the third node
        'gluster peer status' shows 3 peers, of which two refer to the
        same node/IP but different UUIDs.

        On every node you can find the peers in /var/lib/glusterd/peers/.

        You can get the UUID of the current node using the command
        "gluster system:: uuid get".

        From this you can find which file is wrong in the above location.
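        A minimal sketch of that cross-check, run on each node in turn
        (the loop and its output formatting are illustrative, not from
        the thread):

            # the node's own UUID must NOT also appear as a file under peers/
            gluster system:: uuid get
            # print each peer file name together with its contents
            for f in /var/lib/glusterd/peers/*; do echo "== $f"; cat "$f"; done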

        [Adding [email protected]]


        What I have tried:
        - stopped the gluster volumes, put the 3rd node in maintenance,
        rebooted -> no effect;
        - stopped the volumes, removed the bricks belonging to the 3rd node,
        re-added them, started the volumes, but still no effect.


        Any ideas, hints?

        TIA







