Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Lindsay Mathieson
On 6 January 2017 at 08:42, Michael Watters wrote:
> Have you done comparisons against Lustre? From what I've seen, Lustre
> performance is 2x faster than a replicated gluster volume.

No Lustre packages for Debian, and I really dislike installing from source for production.

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Gandalf Corvotempesta
On 05 Jan 2017 at 6:33 PM, "Joe Julian" wrote:
> That's still not without its drawbacks, though I'm sure my instance is
> pretty rare. Ceph's automatic migration of data caused a cascading
> failure and a complete loss of 580 TB of data due to a hardware bug. If
> it had been on

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Joe Julian
On 01/05/17 11:32, Gandalf Corvotempesta wrote:
> On 05 Jan 2017 at 2:00 PM, "Jeff Darcy" wrote:
>> There used to be an idea called "data classification" to cover this
>> kind of case. You're right that setting arbitrary goals for arbitrary

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Jeff Darcy
> Both Ceph and Lizard manage this automatically.
> If you want, you can add a single disk to a working cluster and
> automatically the whole cluster is rebalanced transparently with no
> user intervention

This relates to the granularity problem I mentioned earlier. As long as we're not splitting
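For contrast, expansion in Gluster is a manual, brick-granular operation. A minimal sketch of the equivalent steps, assuming a replica-3 volume named gv0 and hypothetical server/brick paths:

    # Bricks must be added in multiples of the replica count
    gluster volume add-brick gv0 replica 3 \
        server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1

    # Rebalancing is a separate, explicitly triggered step
    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status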

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Michael Watters
Have you done comparisons against Lustre? From what I've seen, Lustre performance is 2x faster than a replicated gluster volume.

On 1/4/17 5:43 PM, Lindsay Mathieson wrote:
> Hi all, just wanted to mention that since I had sole use of our
> cluster over the holidays and a complete set of backups :) I decided
> to test some alternate cluster software and do some stress testing.
> Stress testing involved multiple soft and *hard* resets of individual
> servers and hard
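If anyone wants to run that comparison like-for-like, the same fio job can be pointed at each mount in turn; a minimal sketch, with hypothetical mount points and job parameters:

    # Random-write test against a Gluster mount (paths and parameters
    # are hypothetical examples)
    fio --name=randwrite --rw=randwrite --bs=4k --size=1G --numjobs=4 \
        --ioengine=libaio --group_reporting --directory=/mnt/gluster

    # Repeat with --directory=/mnt/lustre and compare the reported
    # bandwidth and IOPS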

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Gandalf Corvotempesta
On 05 Jan 2017 at 2:00 PM, "Jeff Darcy" wrote:
> There used to be an idea called "data classification" to cover this
> kind of case. You're right that setting arbitrary goals for arbitrary
> objects would be too difficult. However, we could have multiple pools
> with different
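Pending something like data classification, one way to approximate multiple pools with different redundancy today is simply separate volumes on the same servers; a minimal sketch, with hypothetical volume, server, and brick names:

    # A replica-3 volume for critical data
    gluster volume create vms replica 3 \
        s1:/bricks/vms s2:/bricks/vms s3:/bricks/vms

    # A dispersed (erasure-coded) volume for bulk data, tolerating the
    # loss of one brick
    gluster volume create bulk disperse 3 redundancy 1 \
        s1:/bricks/bulk s2:/bricks/bulk s3:/bricks/bulk

    gluster volume start vms
    gluster volume start bulk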

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Jeff Darcy
> Gluster (3.8.7) coped perfectly - no data loss, no maintenance
> required, each time it came up by itself with no hand-holding and
> started healing nodes, which completed very quickly. VMs on gluster
> auto-started with no problems, I/O load while healing was OK. I felt
> quite confident in it.
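For anyone repeating this kind of test, the self-heal progress described above can be watched from the standard CLI; a minimal sketch, assuming a volume named gv0:

    # List entries still pending heal on each brick
    gluster volume heal gv0 info

    # Summary counts per brick
    gluster volume heal gv0 statistics heal-count

    # Confirm nothing ended up in split-brain after the hard resets
    gluster volume heal gv0 info split-brain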

[Gluster-users] Cheers and some thoughts

2017-01-04 Thread Lindsay Mathieson
Hi all, just wanted to mention that since I had sole use of our cluster over the holidays and a complete set of backups :) I decided to test some alternate cluster software and do some stress testing. Stress testing involved multiple soft and *hard* resets of individual servers and hard
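A hard reset of this kind can be simulated without pulling power, e.g. via the Linux magic SysRq interface; a minimal sketch, run as root on the target node (the method used here isn't specified, so this is illustrative):

    # Enable the magic SysRq interface
    echo 1 > /proc/sys/kernel/sysrq

    # 'b' reboots immediately without syncing or unmounting
    # filesystems, approximating a hard power reset
    echo b > /proc/sysrq-trigger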