Hi,

A client had some issues with their glusterfs cluster and botched an 
update. They currently have loads of broken symlinks in the .glusterfs 
subfolders of their bricks.

Like the second of the two symlinks below:

    lrwxrwxrwx 1 root root 55 Jul 14 2017 591c33db-e88b-4544-9372-bea4ec362360 -> ../../83/4a/834a8af8-95a4-4f8a-9810-4e4d829e87f9/394947
    lrwxrwxrwx 1 root root 74 Jul 17 2017 591c89df-e5ed-4c5e-8ed1-643bbdf1ac3a -> ../../bb/e3/bbe35c18-467e-46a2-96d1-3f3cf07d30b2/omgevingsvisie-gelderland

    ./gfid-resolver.sh /data/home/brick1/ 591c33db-e88b-4544-9372-bea4ec362360
    591c33db-e88b-4544-9372-bea4ec362360 Directory: /data/home/brick1/*******/web_root/shared/private/uploads/image/file/39494

    ./gfid-resolver.sh /data/home/brick1/ 591c89df-e5ed-4c5e-8ed1-643bbdf1ac3a
    591c89df-e5ed-4c5e-8ed1-643bbdf1ac3a Directory: ./gfid-resolver.sh: line 46: cd: /data/home/brick1//.glusterfs/59/1c/../../bb/e3/bbe35c18-467e-46a2-96d1-3f3cf07d30b2: Too many levels of symbolic links
    /root/omgevingsvisie-gelderland

The first link resolves fine; the second is broken.

    /data/home/brick1/.glusterfs/00/00
    [root@storage01 00]# for l in $(find . -type l); do
        cd "$(dirname "$l")"
        if [ ! -e "$(readlink "$(basename "$l")")" ]; then echo "$l"; fi
        cd - > /dev/null
    done
    ./0000d3e7-bdaf-4c57-aaf8-e8d8906d85ce
    ./0000b8dd-6982-4dc5-80ee-0d2226b8a274
    ./0000d173-3e12-41a2-973a-ca167f764b73
    ./00001420-a395-4f79-91db-690b009b8d3d
    ./00009a6d-e885-4856-a9a1-d44badb4bef5
    ./0000169f-1d61-4400-a13d-563df8dc78e1
    ./00009338-f7d9-4c33-8761-b0a7d0eaf6ef
    ./0000498f-3061-4c23-8633-410a21e54f60
    ./0000e61f-eba8-4534-88f4-84c01c9bd698
    ./00009cb9-7d55-4558-93f3-f79ab4c7938d
    ./0000dd8c-ee79-47c4-9f7d-ef3569698907
    ./0000403a-e9d8-4c76-9b62-72c396f34893
    ./00004ff2-a0f7-49d7-ac90-3eac37c6adba
    ./00002d40-3ed7-4d9b-8b72-1e4f0568be3d
    ./0000dcce-6649-4446-ac4c-9eee16d5b009
    ./0000c636-6d5a-4bd0-aeec-a9406af6f716
    ./0000a5d9-57ac-416f-95c8-aa40577c2f99
    ./000036c1-9ec2-4a1c-846f-8b00b88a3718
    ./00003e8f-a9d8-4006-89ca-372800a814a7
    ./00005244-aeb7-4151-b8d5-3e8cc4861080
    ./00003c97-ea76-4c81-be46-9e943aefecce
    ./00007ab9-c9a9-44b4-82d2-522a94270049
    ./00002ec6-ae91-4cae-ab99-db2a84deecaa
    ./0000778a-1489-4c43-bafc-bdcc134639dc
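As an aside, GNU find can do the same dangling-link check in one pass with `-xtype l` (match symlinks whose target can't be resolved), without the cd/readlink dance. A sketch against a throwaway directory, assuming GNU findutils; on the brick the starting point would be the .glusterfs tree instead:

```shell
# Demo: find dangling symlinks in one pass with GNU find's -xtype l.
# All paths here are throwaway demo paths, not real brick paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/83/4a"
touch "$tmp/83/4a/target"
ln -s 83/4a/target "$tmp/good-link"    # target exists -> resolves
ln -s bb/e3/missing "$tmp/broken-link" # target missing -> dangling
# -type l restricts the match to symlinks; -xtype l then keeps only
# those whose target cannot be resolved.
dangling=$(find "$tmp" -type l -xtype l)
echo "$dangling"
rm -rf "$tmp"
```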

Volume healing doesn't report any files to heal.

Currently things are wonky: the client reports files as missing that are 
actually there, and this causes processes to fail.

Also there are many of these errors in the client log:

[2018-02-13 10:25:48.377536] W [MSGID: 114031] 
[client-rpc-fops.c:2151:client3_3_seek_cbk] 0-home-client-0: remote 
operation failed [No such device or address]
[2018-02-13 10:25:48.386911] W [MSGID: 114031] 
[client-rpc-fops.c:2151:client3_3_seek_cbk] 0-home-client-0: remote 
operation failed [No such device or address]
[2018-02-13 10:26:19.251306] W [MSGID: 114031] 
[client-rpc-fops.c:2151:client3_3_seek_cbk] 0-home-client-0: remote 
operation failed [No such device or address]

Can I safely remove the orphaned links in .glusterfs? I haven't found a 
definitive answer yet, and I don't want to cause more issues.
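If it does turn out to be safe, my inclination would be to quarantine the dangling links rather than delete them outright, so the operation can be undone. A rough sketch, demoed here on a throwaway directory (the gfid name and all paths are made up; `-xtype l` assumes GNU findutils):

```shell
# Sketch: move dangling symlinks to a quarantine dir instead of rm'ing
# them, recording where each one came from. Demo uses a throwaway
# "brick" so nothing real is touched.
brick=$(mktemp -d)       # stands in for the real brick root
quarantine=$(mktemp -d)  # holding area outside the brick
mkdir -p "$brick/.glusterfs/59/1c"
# "deadbeef" is a hypothetical gfid file name for the demo.
ln -s ../../bb/e3/missing "$brick/.glusterfs/59/1c/deadbeef" # dangling
find "$brick/.glusterfs" -type l -xtype l | while read -r link; do
    # Record the original location and target so the move can be undone.
    printf '%s -> %s\n' "$link" "$(readlink "$link")" >> "$quarantine/manifest.txt"
    mv "$link" "$quarantine/"
done
cat "$quarantine/manifest.txt"
rm -rf "$brick"  # in a real run, keep $quarantine around for a while
```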





Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 

----------------

        Tel: 053 20 30 270      i...@netbulae.eu        Staalsteden 4-3A        KvK 08198180
        Fax: 053 20 30 271      www.netbulae.eu         7547 TA Enschede        BTW NL821234584B01

----------------

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
