Actually, I just figured out what caused this. The filesystem for the brick on the rebooted node was not mounted before gluster started, so gluster saw an empty brick directory, and the other node concluded it should delete everything on the volume… yikes!
So, I'm curious whether this is a recoverable scenario, and also how I can prevent it in the future. I admit I didn't have the brick mount in fstab, but even so, can we rely on the order the mount points are written in fstab?

--
Chris LeBlanc

On Friday, June 8, 2012 at 12:25 AM, Chris LeBlanc wrote:

> I have a replicated gluster using nfs, and I purposely rebooted one of the
> nodes, which may have had trouble coming up. I don't know the exact details,
> but I'm curious about how to recover after a volume fails and just ends up
> with a directory called .landfill (which I presume is some mechanism to
> prevent wacko things from happening to your data).
>
> For now I copied what was in .landfill locally on one of the nodes, so it's
> just limping along.
>
> What should I do? Should I be concerned about the data integrity of what's
> in .landfill?
>
> --
> Chris LeBlanc
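In the meantime, one workaround I'm considering (just a sketch with hypothetical paths, not anything official) is a pre-start check that refuses to let gluster run unless the brick's filesystem is genuinely mounted, by exact-matching the mount point against /proc/mounts:

```shell
#!/bin/sh
# Sketch: guard against starting gluster on an unmounted brick mount point.
# The path below is a hypothetical example, not from this thread.
BRICK_MOUNT=/data/brick1

brick_is_mounted() {
    # /proc/mounts lists every mounted filesystem; field 2 is the mount point.
    # Exit 0 only if the given path is an exact match for a current mount.
    awk -v mp="$1" '$2 == mp { found=1 } END { exit !found }' /proc/mounts
}

if brick_is_mounted "$BRICK_MOUNT"; then
    echo "brick mounted at $BRICK_MOUNT -- safe to start gluster"
else
    echo "brick NOT mounted at $BRICK_MOUNT -- refusing to start gluster" >&2
fi
```

The related trick I've read about is to make the brick a subdirectory of the mount point (e.g. /data/brick1/gv0) rather than the mount point itself, so that if the mount fails the brick path simply doesn't exist and can't be mistaken for an empty-but-valid brick.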

