I certainly won't get drawn into a public debate about which database is faster 
per node, but I would recommend you never take such claims at face value.

Eric


On Apr 10, 2013, at 3:02 PM, Tom Zeng <[email protected]> wrote:

> Thanks, Eric, that's very helpful.  7 was mentioned at the last Riak DC 
> meetup, not as the minimum but for better performance, while I was chatting 
> with a couple of Basho devs about performance benchmarking and about Riak 
> being quite a bit slower than Mongo on a single node.
> 
> 
> On Wed, Apr 10, 2013 at 5:56 PM, Eric Redmond <[email protected]> wrote:
> 
> 
> On Apr 10, 2013, at 2:26 PM, Tom Zeng <[email protected]> wrote:
> 
>> Hi list,
>> 
>> We have a production installation with only 3 nodes, running 1.2.1.  I'd 
>> appreciate some facts to convince IT to increase the number of nodes to 7 
>> and upgrade to 1.3.  I've heard people from Basho mention 7 nodes as ideal 
>> for production a couple of times. Can someone explain why 7? Are 4 or 5 
>> nodes good enough?
> 
> I'm not sure where you heard the number 7 as a minimum, unless it was for a 
> specific use case. In general the minimum recommended number is 5 nodes.
> 
> Running with only 3 nodes isn't a great idea. Since a core purpose of Riak is 
> to remain available in the face of outages, a 3-node cluster cannot tolerate 
> any outage: losing a node leaves you below the default replication value 
> (N=3). This is so important, in fact, that we recommend 5 nodes solely to act 
> as a buffer: if 1 of the 5 goes down, the remaining 4 are still above that 
> inflexible 3-node floor (though dangerously close to it). Even if you do not 
> upgrade to 1.3, you really need at least 5 nodes.
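> 
> A quick sanity check (a sketch; the bucket name, host, and port here are 
> assumptions, not from your setup) is to read the bucket properties over the 
> HTTP API and confirm your effective N value:
> 
> ```shell
> # Assumes Riak's HTTP interface on localhost:8098 and a bucket named "mybucket".
> curl -s http://localhost:8098/buckets/mybucket/props
> # The JSON response includes the bucket's n_val (3 by default unless customized).
> ```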
> 
> There are many benefits to upgrading to 1.3, but one of the most compelling 
> from an operations point of view is active anti-entropy (AAE). Rather than 
> waiting on read-repair to fix inconsistent values (which is passive), AAE 
> routinely attempts to keep all node values in sync. This can be a godsend if 
> a node goes down, since you don't need to force read-repair when you bring the 
> node back up by reading every key... you just let your cluster actively 
> self-heal.
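> 
> If you do upgrade, AAE ships with 1.3. A minimal app.config fragment to make 
> sure it is enabled looks roughly like this (a sketch of the stock layout; 
> check your own riak_kv section rather than pasting this verbatim):
> 
> ```erlang
> %% In the riak_kv section of app.config (Riak 1.3+):
> {riak_kv, [
>     %% {on, []} enables active anti-entropy; {off, []} falls back to
>     %% passive read-repair only.
>     {anti_entropy, {on, []}}
> ]}
> ```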
> 
> 
>> Also on the 3 nodes, the file sizes for the bitcask directories vary quite 
>> a bit: 21GB, 14GB, and 20GB. Could the node with only 14GB be missing 
>> something, or is such a big difference expected?
> 
> There are several reasons the sizes could differ: values may not yet (or 
> ever) be replicated (depending on your N and W values); files may not have 
> been compacted; some keys may have been deleted but not yet reaped...
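> 
> To compare apples to apples across the 3 nodes, check the on-disk size of the 
> bitcask directory on each (the path below is the common default and is an 
> assumption; adjust it to wherever your platform_data_dir points):
> 
> ```shell
> # Run on each node; -s summarizes the directory, -h prints human-readable sizes.
> du -sh /var/lib/riak/bitcask
> ```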
> 
>> Thanks,
>> Tom
>> 
>> -- 
>> Tom Zeng
>> Director of Engineering
>> Intridea, Inc. | www.intridea.com
>> [email protected]
>> (o) 888.968.4332 x519
>> (c) 240-643-8728
>> _______________________________________________
>> riak-users mailing list
>> [email protected]
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
