On 01/05/17 11:32, Gandalf Corvotempesta wrote:
On 05 Jan 2017 2:00 PM, "Jeff Darcy" <[email protected] <mailto:[email protected]>> wrote:

    There used to be an idea called "data classification" to cover this
    kind of case.  You're right that setting arbitrary goals for arbitrary
    objects would be too difficult.  However, we could have multiple pools
    with different replication/EC strategies, then use a translator like
    the one for tiering to control which objects go into which pools based
    on some kind of policy.  To support that with a relatively small
    number of nodes/bricks we'd also need to be able to split bricks into
    smaller units, but that's not really all that hard.


IMHO one of the biggest drawbacks in Gluster is the way it manages bricks.
The ability to add one server at a time, without having to rebalance manually (or similar), would be useful.

Both Ceph and LizardFS manage this automatically.
If you want, you can add a single disk to a working cluster and the whole cluster rebalances transparently, with no user intervention.

This is really useful and much less error-prone than having to rebalance everything manually.
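For context, the manual workflow being described looks roughly like the sketch below (the volume name "gv0", the host "server4", and the brick path are made-up examples; the commands themselves are the standard Gluster CLI steps for expanding a distributed volume):

```shell
# Add the new server to the trusted storage pool
gluster peer probe server4

# Attach its brick to the existing volume
# ("gv0" and the path are example names)
gluster volume add-brick gv0 server4:/data/brick1

# Start the rebalance by hand -- this is the step that
# Ceph/LizardFS perform automatically on expansion
gluster volume rebalance gv0 start

# Poll until the rebalance completes
gluster volume rebalance gv0 status
```

Until the rebalance finishes, existing data stays on the old bricks and only the layout for new files includes the added brick.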


That's still not without its drawbacks, though I'm sure my case is pretty rare. Ceph's automatic migration of data caused a cascading failure and a complete loss of 580 TB of data due to a hardware bug. If it had been on Gluster, none of it would have been lost.
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
