On 6 January 2017 at 08:42, Michael Watters wrote:
> Have you done comparisons against Lustre? From what I've seen Lustre
> performance is 2x faster than a replicated gluster volume.
No Lustre packages for Debian, and I really dislike installing from source
for production.
On 5 Jan 2017 at 6:33 PM, "Joe Julian" wrote:
That's still not without its drawbacks, though I'm sure my instance is
pretty rare. Ceph's automatic migration of data caused a cascading failure
and a complete loss of 580 TB of data due to a hardware bug. If it had been
on
On 01/05/17 11:32, Gandalf Corvotempesta wrote:
On 5 Jan 2017 at 2:00 PM, "Jeff Darcy" wrote:
There used to be an idea called "data classification" to cover this
kind of case. You're right that setting arbitrary goals for arbitrary
> Both Ceph and LizardFS manage this automatically.
> If you want, you can add a single disk to a working cluster, and the
> whole cluster is rebalanced transparently with no user intervention.
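The transparent rebalancing described above can be illustrated with a toy consistent-hash ring (a simplification — Ceph actually uses CRUSH, and the node/object names below are hypothetical): adding one disk relocates only roughly its share of the objects, not the whole data set.

```python
# Toy consistent-hash ring, NOT Ceph's real CRUSH algorithm: shows that
# adding a fourth "disk" moves only about a quarter of the objects.
import bisect
import hashlib

def h(key: str) -> int:
    # Stable hash of a string onto the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each node gets several virtual points on the ring to smooth the split.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def locate(self, key):
        # An object lands on the first node point clockwise from its hash.
        i = bisect.bisect(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]

keys = [f"obj-{i}" for i in range(10000)]
before = Ring(["disk-a", "disk-b", "disk-c"])
after = Ring(["disk-a", "disk-b", "disk-c", "disk-d"])
moved = sum(1 for k in keys if before.locate(k) != after.locate(k))
print(f"{moved / len(keys):.0%} of objects moved")  # roughly 1/4, not 100%
```

The point is the granularity: only the objects whose ring segment the new disk takes over have to migrate, which is why such systems can rebalance online without user intervention.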
This relates to the granularity problem I mentioned earlier. As long as
we're not splitting
Have you done comparisons against Lustre? From what I've seen Lustre
performance is 2x faster than a replicated gluster volume.
On 1/4/17 5:43 PM, Lindsay Mathieson wrote:
> Hi all, just wanted to mention that since I had sole use of our
> cluster over the holidays and a complete set of
On 5 Jan 2017 at 2:00 PM, "Jeff Darcy" wrote:
There used to be an idea called "data classification" to cover this
kind of case. You're right that setting arbitrary goals for arbitrary
objects would be too difficult. However, we could have multiple pools
with different
> Gluster (3.8.7) coped perfectly - no data loss, no maintenance required,
> each time it came up by itself with no hand holding and started healing
> nodes, which completed very quickly. VMs on gluster auto-started with
> no problems, i/o load while healing was ok. I felt quite confident in it.
Hi all, just wanted to mention that since I had sole use of our cluster
over the holidays and a complete set of backups :) I decided to test some
alternate cluster software and do some stress testing.
Stress testing involved multiple soft and *hard* resets of individual
servers and hard