We strongly second that and would love some more documentation.  We've had 
experience with clustering setups in Cassandra and Riak, but haven't gotten a 
decent cluster working in OrientDB despite multiple attempts.  Given all of 
the forum and issue postings describing problems with clustering, our 
conclusion for now is that OrientDB's clustering is not really ready for 
production use.  Luckily, at this point our project allows us to simply run 
multiple independent OrientDB instances as a form of clustering & fault 
tolerance, so we're sticking with it and hoping they can make it more 
reliable and better documented in the future.
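For anyone taking the same interim route, here's a rough sketch of what we mean by "independent instances as a form of clustering".  This is not OrientDB's own replication, just a client-side fan-out over the REST API (POST /document/&lt;db&gt;), counting the write as good when a majority of instances acknowledge it.  Hostnames, database name, and credentials below are made-up placeholders:

```python
# Client-side fan-out write to several *independent* OrientDB instances
# via the REST API, with a simple majority rule. A sketch, not a real
# replication protocol -- there is no conflict resolution or read repair.
import base64
import json
import urllib.request

# Hypothetical hosts; adjust to your own deployment.
INSTANCES = ["http://db1:2480", "http://db2:2480", "http://db3:2480"]

def majority_ok(acks, total):
    """True when more than half of the instances acknowledged the write."""
    return acks > total // 2

def write_everywhere(record, db="mydb", user="admin", password="admin"):
    """POST the record to every instance; return True on majority ack."""
    acks = 0
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    for base in INSTANCES:
        req = urllib.request.Request(
            f"{base}/document/{db}",
            data=json.dumps(record).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Basic {auth}"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status in (200, 201):
                    acks += 1
        except OSError:
            pass  # that instance is down; the others may still take the write
    return majority_ok(acks, len(INSTANCES))
```

Obviously this trades consistency for availability (divergent instances never reconcile), which is acceptable for our workload but won't be for everyone.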

On Wednesday, December 3, 2014 11:12:10 AM UTC-7, Colin wrote:
>
> I know everyone is extremely busy, but would anyone have a moment to write 
> a quick explanation of how the new clustering/partitioning scheme works in 
> 2.0?
>
> I'm very confused right now and don't understand how data is partitioned 
> and replicated when multiple servers are involved, if each OrientDB server 
> on startup creates a cluster named after each of the active nodes.  How is 
> the data partitioned?  How does the automatic sharding (using 
> minimumclusters) create cluster_0, cluster_1, cluster_2, if we also have 
> cluster_node1 and cluster_node2 being created?
>
> From my experiments, it seems that a master cluster node is elected, that 
> writes almost always go to it, and that the local cluster (without the 
> node name appended) is no longer used.
>
> Thanks!
>
> -Colin
>

-- 
You received this message because you are subscribed to the Google Groups 
"OrientDB" group.