Hi,

and thanks for your help, but I think the situation came across wrong.
The volume is dead and I could not reuse it. I reinstalled the OS and added
the storage disks that still hold the old volume's data. So there is actually
no vol1 that I can still use in GlusterFS; all I have is the old on-disk
structure with the .glusterfs directory and so on. I now want to migrate all
files from the old vol1 bricks to the newly created and working vol2. But when
I move the files directly from the brick directory on each node to the mounted
new vol2, the used disk size stays the same - the disk space is not freed up.
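To illustrate what I mean by moving directly (all paths are placeholders:
/data/brick1 for the old brick on a node, /mnt/vol2 for the mounted new vol2):

  # per node: move a folder from the old brick directory into the new volume
  mv /data/brick1/somefolder /mnt/vol2/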

What can I do? getfattr shows nothing. If I move one folder and then look into
.glusterfs, the folder seems to be removed, but df -h still shows the same
free disk space, so I am running into trouble.
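For reference, this is roughly what I look at after moving a folder
(/data/brick1 again stands for the real brick path):

  # does .glusterfs still hold hard links to the moved files?
  # gfid files with a link count of 1 would be such leftovers
  find /data/brick1/.glusterfs -type f -links 1 | head
  # free space on the brick - it stays the same
  df -h /data/brick1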

Thx
Taste

On 25.10.2021 21:04:10, Strahil Nikolov wrote:
> To be honest, I can't quite picture the problem.
> 
> When you reuse bricks you have two options:
> 1. Recreate the filesystem. It's simpler and easier.
> 2. Do the following (a consolidated sketch of the commands follows below
> this list):
>    - Delete all previously existing data in the brick, including the
>      .glusterfs subdirectory.
>    - Run # setfattr -x trusted.glusterfs.volume-id brick and
>      # setfattr -x trusted.gfid brick to remove those attributes from the
>      root of the brick.
>    - Run # getfattr -d -m . brick to examine the attributes still set on the
>      brick and take note of them.
>    - Run # setfattr -x attribute brick to remove each remaining attribute
>      relating to the GlusterFS file system; trusted.glusterfs.dht on a
>      distributed volume is one such example.
>    The extended attributes trusted.gfid and trusted.glusterfs.volume-id are
>    unique for every Gluster brick and are created the first time a brick is
>    added to a volume, which is why they have to be removed.
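> 
> For a brick at /data/brick1 (a placeholder path - adjust it), that sequence
> would roughly be:
> 
>   # wipe the old contents, including the internal metadata directory
>   rm -rf /data/brick1/.glusterfs /data/brick1/*
>   # drop the brick-identifying extended attributes
>   setfattr -x trusted.glusterfs.volume-id /data/brick1
>   setfattr -x trusted.gfid /data/brick1
>   # list whatever trusted.* attributes are still set ...
>   getfattr -d -m . /data/brick1
>   # ... and remove each remaining GlusterFS one, e.g. on a distributed volume:
>   setfattr -x trusted.glusterfs.dht /data/brick1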
> 
> As you still have a ".glusterfs" directory, you didn't reintegrate the brick.
> 
> The only other option I know is to use add-brick with the "force" option.
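> For example (host, volume name and brick path are placeholders):
> 
>   gluster volume add-brick vol1 node2:/data/brick1 force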
> 
> Can you provide a short summary (commands only) of how the issue happened,
> what you did, and what error is coming up?
> 
> 
> Best Regards,
> Strahil Nikolov 
> 
> On Wednesday, 20 October 2021 at 14:06:29 GMT+3, Taste-Of-IT
> <kont...@taste-of-it.de> wrote:
> 
> Hi,
> 
> I am now moving data from the dead vol1 to the new vol2, mounted via NFS.
> 
> The problem is that the used storage rises instead of staying the same as
> expected. Any idea? I think it has something to do with the .glusterfs
> directories on the dead vol1.
> 
> thx
> 
> Webmaster Taste-of-IT.de
> 
> On 29.08.2021 12:42:18, Strahil Nikolov wrote:
> > Best case scenario, you just mount via FUSE on the 'dead' node and start 
> > copying.
> > Yet, in your case you don't have enough space. I guess you can try on 2 VMs 
> > to simulate the failure, rebuild and then forcefully re-add the old brick. 
> > It might work, it might not ... at least it's worth trying.
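> > As a rough sketch (server name, volume and mount point are placeholders):
> > 
> >   # FUSE-mount the surviving volume on the rebuilt node, then copy onto it
> >   mount -t glusterfs node1:/vol1 /mnt/vol1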
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > Sent from Yahoo Mail on Android 
> >  
> > On Thu, Aug 26, 2021 at 15:27, Taste-Of-IT <kont...@taste-of-it.de> wrote:
> > Hi,
> > what do you mean? Copy the data from the dead node to the running node,
> > then add the newly installed node to the existing vol1 and run a rebalance
> > after that? If so, this is not possible, because node1 does not have enough
> > free space to take everything from node2.
> > 
> > thx
> > 
> > On 22.08.2021 18:35:33, Strahil Nikolov wrote:
> > > Hi,
> > > 
> > > the best way is to copy the files over the FUSE mount and later add the 
> > > bricks and rebalance.
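> > > Roughly, after the add-brick (volume name is a placeholder):
> > > 
> > >   gluster volume rebalance vol1 start
> > >   gluster volume rebalance vol1 status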
> > > Best Regards,
> > > Strahil Nikolov
> > > 
> > > Sent from Yahoo Mail on Android 
> > >  
> > > On Thu, Aug 19, 2021 at 23:04, Taste-Of-IT <kont...@taste-of-it.de> wrote:
> > > Hello,
> > > 
> > > I have two nodes with a distributed volume. The OS is on a separate disk,
> > > which crashed on one node. However, I could reinstall the OS, and the
> > > RAID 6 array that is used for the distributed volume was rebuilt. The
> > > question now is how to re-add the brick with its data back to the
> > > existing old volume.
> > > 
> > > If this is not possible, what about this idea: I create a new distributed
> > > vol2 across both nodes and move the files directly from the brick
> > > directory to the new volume via an NFS-Ganesha share?!
> > > 
> > > thx
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
