I'm testing the example code that dispatches a web request from misultin into a riak_core ring of vnodes. It works fantastically when all nodes are up! :)
Doing "ab -k -c 200 -n 10000 http://localhost:3000/" yields a none-to-shabby performance (dispatching at random into all available vnodes on two separate riak_core processes): Concurrency Level: 200 Time taken for tests: 1.446 seconds Complete requests: 10000 Failed requests: 0 Write errors: 0 Keep-Alive requests: 10000 Total transferred: 1600480 bytes HTML transferred: 120036 bytes Requests per second: 6914.04 [#/sec] (mean) Time per request: 28.927 [ms] (mean) Time per request: 0.145 [ms] (mean, across all concurrent requests) Transfer rate: 1080.64 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 1.0 0 12 Processing: 4 28 9.8 27 78 Waiting: 4 28 9.8 27 78 Total: 4 28 10.1 27 83 Percentage of the requests served within a certain time (ms) 50% 27 66% 31 75% 34 80% 36 90% 41 95% 47 98% 53 99% 58 100% 83 (longest request) If I were really zealous, I'd set up haproxy to load balance between these two misultin servers and get double failover. I'm trying to catch the situation of going into the console of one of my nodes and hitting "CTL-C" to kill that process. I'm not sure what the best way is to handle this. Check before I dispatch to make sure the node is up? Keep a watch of some other kind that, when it sees that node go down and if it's trying to dispatch to that node, it tries to find another one? Essentially, I'm trying to prevent misultin from completely bailing on the request because the sync_spawn_command blows up trying to do a gen_server:call to a non-existent node. I'd like to retry to dispatch to a different node if one happens to have crashed while I'm serving requests (I don't want to loose a request, essentially). Thanks! Jon Brisbin http://about.me/jonbrisbin