Hi Phong, as long as we are talking about replicated volumes, here is what you have to do.
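Before touching anything, it is worth double-checking that the volume really is replicated and which brick is down. For example (hdviet is the volume name taken from your own output below):

gluster volume info hdviet | grep Type       # should say Replicate or Distributed-Replicate
gluster volume status hdviet                 # the dead brick shows Online = N and Port = N/A

In your case that is brick05:/export/hdd02/hdd02. Once you know which brick and which disk you are dealing with: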
1- Kill the process associated with that brick
2- Unmount the drive
3- Replace the drive
4- Format the drive
5- Mount the new drive
6- Set the volume-id attribute on the new brick:
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/VOL_NAME/info | cut -d= -f2 | sed 's/-//g') /brick/MOUNT_POINT
7- Restart glusterd so the process associated with that brick comes back
8- Let it heal
9- Take a coffee (a big one)

There is a rough sketch of the commands for each step below. If you need a more detailed process let me know!
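Roughly, the commands would look like this. I am using your failed brick (brick05:/export/hdd02/hdd02 on volume hdviet) as the example; the device name /dev/sdX, the mount point /export/hdd02 and the XFS options are assumptions on my side, so adjust them to your actual layout:

# 1. Kill the brick process if it is still running (get the PID from "gluster volume status";
#    in your case it already shows N/A, so there may be nothing to kill)
kill -15 BRICK_PID

# 2. Unmount the failed drive
umount /export/hdd02

# 3. Physically swap the disk, then
# 4. Format the new one (XFS with 512-byte inodes is the usual recommendation for Gluster bricks)
mkfs.xfs -i size=512 /dev/sdX

# 5. Mount it on the same mount point (update /etc/fstab if the device name changed)
#    and recreate the brick directory
mount /dev/sdX /export/hdd02
mkdir -p /export/hdd02/hdd02

# 6. Put the volume-id back on the brick directory
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/hdviet/info | cut -d= -f2 | sed 's/-//g') /export/hdd02/hdd02

# 7. Restart glusterd on brick05 so it spawns the brick process again
#    (or force-start the volume: gluster volume start hdviet force)
service glusterd restart

# 8. Trigger the self-heal and keep an eye on it
gluster volume heal hdviet full
gluster volume heal hdviet info

Step 9 you can handle on your own :-)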
2014-08-15 0:22 GMT-03:00 Phong Tran <[email protected]>:

> Hi, I have a problem, try to search how to fix. I have one brick failure
> in replicate volume. How can i replace with new hard disk?
>
> [root@brick05 ~]# gluster volume status
> Status of volume: hdviet
> Gluster process                        Port    Online    Pid
> ------------------------------------------------------------------------------
> Brick brick01:/export/hdd00/hdd00      49152   Y    2515
> Brick brick02:/export/hdd00/hdd00      49152   Y    2358
> Brick brick01:/export/hdd01/hdd01      49153   Y    2526
> Brick brick02:/export/hdd01/hdd01      49153   Y    2369
> Brick brick01:/export/hdd02/hdd02      49154   Y    2537
> Brick brick02:/export/hdd02/hdd02      49154   Y    2380
> Brick brick01:/export/hdd03/hdd03      49155   Y    2548
> Brick brick02:/export/hdd03/hdd03      49155   Y    2391
> Brick brick01:/export/hdd04/hdd04      49156   Y    2559
> Brick brick02:/export/hdd04/hdd04      49156   Y    2402
> Brick brick01:/export/hdd05/hdd05      49157   Y    2570
> Brick brick02:/export/hdd05/hdd05      49157   Y    2413
> Brick brick01:/export/hdd06/hdd06      49158   Y    2581
> Brick brick02:/export/hdd06/hdd06      49158   Y    2424
> Brick brick01:/export/hdd07/hdd07      49159   Y    2592
> Brick brick02:/export/hdd07/hdd07      49159   Y    2435
> Brick brick03:/export/hdd00/hdd00      49152   Y    2208
> Brick brick04:/export/hdd00/hdd00      49152   Y    2352
> Brick brick03:/export/hdd01/hdd01      49153   Y    2219
> Brick brick04:/export/hdd01/hdd01      49153   Y    2363
> Brick brick03:/export/hdd02/hdd02      49154   Y    2230
> Brick brick04:/export/hdd02/hdd02      49154   Y    2374
> Brick brick03:/export/hdd03/hdd03      49155   Y    2241
> Brick brick04:/export/hdd03/hdd03      49155   Y    2385
> Brick brick03:/export/hdd04/hdd04      49156   Y    2252
> Brick brick04:/export/hdd04/hdd04      49156   Y    2396
> Brick brick03:/export/hdd05/hdd05      49157   Y    2263
> Brick brick04:/export/hdd05/hdd05      49157   Y    2407
> Brick brick03:/export/hdd06/hdd06      49158   Y    2274
> Brick brick04:/export/hdd06/hdd06      49158   Y    2418
> Brick brick03:/export/hdd07/hdd07      49159   Y    2285
> Brick brick04:/export/hdd07/hdd07      49159   Y    2429
> Brick brick05:/export/hdd00/hdd00      49152   Y    2321
> Brick brick06:/export/hdd00/hdd00      49152   Y    2232
> Brick brick05:/export/hdd01/hdd01      49153   Y    2332
> Brick brick06:/export/hdd01/hdd01      49153   Y    2243
> *Brick brick05:/export/hdd02/hdd02     N/A     N    N/A*
> Brick brick06:/export/hdd02/hdd02      49154   Y    13976
> Brick brick05:/export/hdd03/hdd03      49155   Y    2354
> Brick brick06:/export/hdd03/hdd03      49155   Y    2265
> Brick brick05:/export/hdd04/hdd04      49156   Y    2365
> Brick brick06:/export/hdd04/hdd04      49156   Y    2276
> Brick brick05:/export/hdd05/hdd05      49157   Y    2376
> Brick brick06:/export/hdd05/hdd05      49157   Y    2287
> Brick brick05:/export/hdd06/hdd06      49158   Y    2387
> Brick brick06:/export/hdd06/hdd06      49158   Y    2298
> Brick brick05:/export/hdd07/hdd07      49159   Y    2398
> Brick brick06:/export/hdd07/hdd07      49159   Y    2309
> Brick brick07:/export/hdd00/hdd00      49152   Y    2357
> Brick brick08:/export/hdd00/hdd00      49152   Y    2261
> Brick brick07:/export/hdd01/hdd01      49153   Y    2368
> Brick brick08:/export/hdd01/hdd01      49153   Y    2272
> Brick brick07:/export/hdd02/hdd02      49154   Y    2379
> Brick brick08:/export/hdd02/hdd02      49154   Y    2283
> Brick brick07:/export/hdd03/hdd03      49155   Y    2390
> Brick brick08:/export/hdd03/hdd03      49155   Y    2294
> Brick brick07:/export/hdd04/hdd04      49156   Y    2401
> Brick brick08:/export/hdd04/hdd04      49156   Y    2305
> Brick brick07:/export/hdd05/hdd05      49157   Y    2412
> Brick brick08:/export/hdd05/hdd05      49157   Y    2316
> Brick brick07:/export/hdd06/hdd06      49158   Y    2423
> Brick brick08:/export/hdd06/hdd06      49158   Y    2327
> Brick brick07:/export/hdd07/hdd07      49159   Y    2434
> Brick brick08:/export/hdd07/hdd07      49159   Y    2338
> NFS Server on localhost                2049    Y    15604
> Self-heal Daemon on localhost          N/A     Y    15614
> NFS Server on brick04                  2049    Y    2443
> Self-heal Daemon on brick04            N/A     Y    2447
> NFS Server on brick03                  2049    Y    2300
> Self-heal Daemon on brick03            N/A     Y    2304
> NFS Server on brick02                  2049    Y    2449
> Self-heal Daemon on brick02            N/A     Y    2453
> NFS Server on 192.168.200.1            2049    Y    2606
> Self-heal Daemon on 192.168.200.1      N/A     Y    2610
> NFS Server on brick06                  2049    Y    14021
> Self-heal Daemon on brick06            N/A     Y    14028
> NFS Server on brick08                  2049    Y    2352
> Self-heal Daemon on brick08            N/A     Y    2356
> NFS Server on brick07                  2049    Y    2448
> Self-heal Daemon on brick07            N/A     Y    2452
>
> Task Status of Volume hdviet
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> [root@brick05 ~]# gluster volume info
>
> Volume Name: hdviet
> Type: Distributed-Replicate
> Volume ID: fe3a2ed8-d727-499b-9cc6-b11ffb80fc5d
> Status: Started
> Number of Bricks: 32 x 2 = 64
> Transport-type: tcp
> Bricks:
> Brick1: brick01:/export/hdd00/hdd00
> Brick2: brick02:/export/hdd00/hdd00
> Brick3: brick01:/export/hdd01/hdd01
> Brick4: brick02:/export/hdd01/hdd01
> Brick5: brick01:/export/hdd02/hdd02
> Brick6: brick02:/export/hdd02/hdd02
> Brick7: brick01:/export/hdd03/hdd03
> Brick8: brick02:/export/hdd03/hdd03
> Brick9: brick01:/export/hdd04/hdd04
> Brick10: brick02:/export/hdd04/hdd04
> Brick11: brick01:/export/hdd05/hdd05
> Brick12: brick02:/export/hdd05/hdd05
> Brick13: brick01:/export/hdd06/hdd06
> Brick14: brick02:/export/hdd06/hdd06
> Brick15: brick01:/export/hdd07/hdd07
> Brick16: brick02:/export/hdd07/hdd07
> Brick17: brick03:/export/hdd00/hdd00
> Brick18: brick04:/export/hdd00/hdd00
> Brick19: brick03:/export/hdd01/hdd01
> Brick20: brick04:/export/hdd01/hdd01
> Brick21: brick03:/export/hdd02/hdd02
> Brick22: brick04:/export/hdd02/hdd02
> Brick23: brick03:/export/hdd03/hdd03
> Brick24: brick04:/export/hdd03/hdd03
> Brick25: brick03:/export/hdd04/hdd04
> Brick26: brick04:/export/hdd04/hdd04
> Brick27: brick03:/export/hdd05/hdd05
> Brick28: brick04:/export/hdd05/hdd05
> Brick29: brick03:/export/hdd06/hdd06
> Brick30: brick04:/export/hdd06/hdd06
> Brick31: brick03:/export/hdd07/hdd07
> Brick32: brick04:/export/hdd07/hdd07
> Brick33: brick05:/export/hdd00/hdd00
> Brick34: brick06:/export/hdd00/hdd00
> Brick35: brick05:/export/hdd01/hdd01
> Brick36: brick06:/export/hdd01/hdd01
> *Brick37: brick05:/export/hdd02/hdd02*
> Brick38: brick06:/export/hdd02/hdd02
> Brick39: brick05:/export/hdd03/hdd03
> Brick40: brick06:/export/hdd03/hdd03
> Brick41: brick05:/export/hdd04/hdd04
> Brick42: brick06:/export/hdd04/hdd04
> Brick43: brick05:/export/hdd05/hdd05
> Brick44: brick06:/export/hdd05/hdd05
> Brick45: brick05:/export/hdd06/hdd06
> Brick46: brick06:/export/hdd06/hdd06
> Brick47: brick05:/export/hdd07/hdd07
> Brick48: brick06:/export/hdd07/hdd07
> Brick49: brick07:/export/hdd00/hdd00
> Brick50: brick08:/export/hdd00/hdd00
> Brick51: brick07:/export/hdd01/hdd01
> Brick52: brick08:/export/hdd01/hdd01
> Brick53: brick07:/export/hdd02/hdd02
> Brick54: brick08:/export/hdd02/hdd02
> Brick55: brick07:/export/hdd03/hdd03
> Brick56: brick08:/export/hdd03/hdd03
> Brick57: brick07:/export/hdd04/hdd04
> Brick58: brick08:/export/hdd04/hdd04
> Brick59: brick07:/export/hdd05/hdd05
> Brick60: brick08:/export/hdd05/hdd05
> Brick61: brick07:/export/hdd06/hdd06
> Brick62: brick08:/export/hdd06/hdd06
> Brick63: brick07:/export/hdd07/hdd07
> Brick64: brick08:/export/hdd07/hdd07
>
> [root@brick05 ~]# gluster peer status
> Number of Peers: 7
>
> Hostname: brick08
> Uuid: ae52d2c7-6966-4261-9d51-b789010c78c7
> State: Peer in Cluster (Connected)
>
> Hostname: brick06
> Uuid: 88910c4e-3b3c-4797-adfd-9236f161051a
> State: Peer in Cluster (Connected)
>
> Hostname: brick03
> Uuid: b30eb4f4-19a8-4309-9c14-02893a52f0b8
> State: Peer in Cluster (Connected)
>
> Hostname: brick04
> Uuid: b0cd18a8-b5b1-4bf2-b6d1-2803be86e955
> State: Peer in Cluster (Connected)
>
> Hostname: 192.168.200.1
> Uuid: 574acf46-22b0-45f4-a4d0-768417202bf5
> State: Peer in Cluster (Connected)
>
> Hostname: brick02
> Uuid: 068389b6-8f4c-4eaf-be91-f7aac490078b
> State: Peer in Cluster (Connected)
>
> Hostname: brick07
> Uuid: 7ff99e83-31fb-4eac-9b82-5a0e54feb761
> State: Peer in Cluster (Connected)
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
