I just poked at the documentation, and it seems as though my replication
issues might have been somewhat addressed by the proactive self-heal feature
introduced in GlusterFS 3.3 (
http://www.gluster.org/community/documentation/index.php/About_GlusterFS_3.3
).
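
If I'm reading the release notes right, the 3.3 self-heal daemon also comes
with CLI commands for checking and triggering heals on a replicated volume.
Something along these lines (the volume name "myvol" below is just a
placeholder, and exact syntax may vary by release):

    # list files the self-heal daemon thinks need healing
    gluster volume heal myvol info

    # list entries currently in split-brain
    gluster volume heal myvol info split-brain

    # trigger a heal of files that need it, or a full crawl of the volume
    gluster volume heal myvol
    gluster volume heal myvol full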


On Fri, Jun 14, 2013 at 12:13 AM, Aaron Knister <[email protected]> wrote:

>  Hi Dustin,
>
> Disclaimer here: I've never personally run GlusterFS in production, but
> I've evaluated it and also observed it in use on a multi-hundred-terabyte
> installation that migrated to GlusterFS from Lustre (and has since migrated
> from GlusterFS to ZFS).
>
> My impression and observation is that overall it's quite a stable
> filesystem that generally performs well. The 2.x and earlier releases had
> some odd bugs that could cause bizarre permission issues; newer releases
> seem to have fixed those.
>
> One of the issues I had with it, and the reason I avoided it when
> evaluating it, was the lack of centralized coordination for replication.
> To give an example: client A is writing to a volume replicated across
> servers A and B. A network interruption occurs between the client and
> server A. The client finishes writing, but the file is now out of sync
> between servers A and B. This isn't detected or healed until the next time
> the file is accessed, at which point it is healed.
>
> Just my $.02.
>
> -Aaron
>
>
> On Thu, Jun 13, 2013 at 6:50 PM, Dustin Rice <[email protected]> wrote:
>
>>
>> Hello folks, just wanted to ask whether anyone out there is using
>> GlusterFS on their SLURM cluster. I'm looking to implement it myself and
>> am interested to hear about any great successes (or failures) with it.
>>
>> Thanks!
>>
>> --
>> ----------------------------
>> Dustin Rice
>> Systems Administrator - SNAP
>> University of Alaska Fairbanks
>> Ph: 907-474-7148
>> ----------------------------
>>
>
>
