But you have to be careful when using socket.io-clusterhub (https://github.com/fent/socket.io-clusterhub). There can be a race condition, just as in an ordinary multithreaded C++ environment: when you set or change a variable in the first worker, and the next request goes to the second worker, that worker may still see the old value, because the synchronization event may not have been handled yet (maybe the second request was just too fast :) )
On Monday, March 12, 2012 at 6:48:04 PM UTC+1, Evan wrote:
>
> This is a very similar problem to what I've been fighting as well (also for a game).
> Personally, I didn't feel any of the packages I had found met my needs, so I created this: http://actionherojs.com/.
>
> I'm working on support for both pure-node distributed data and for a common task queue. It's still pre V1, but I am always happy to have feedback.
>
> On Friday, March 9, 2012 8:54:44 AM UTC-6, Murat T. wrote:
>>
>> Hi,
>>
>> I have been developing a game since last month, and there are a couple of things that bother me.
>>
>> I am using socket.io and storing all the data in redis. However, most of my data is temporary and doesn't need to go into redis. If I restart the node process, that temporary information in redis must be deleted anyway. The reason I am storing it in redis is that I want to make sure it can scale in the future. The current design allows that.
>>
>> However, I have been doing some small benchmarks and noticed that I won't need to scale node to multiple machines. If I can have 8/16 cores in one server and fork workers using the clustering mechanism, everything should be more than enough. So I want to eliminate redis and store everything in JavaScript objects, which is fine for me, since I don't need to save any state. (I do have some things to save and will still use redis for those cases, but for most cases I don't need it.)
>>
>> The main reason is that I read and write a lot of small data, and I need to write lots of code to do that, which I wouldn't need if only I could use simple JavaScript objects.
>>
>> If I eliminate redis, I will have some objects which may have more than 100,000 elements. I have been testing the performance of interprocess communication, and it takes almost a second to send large objects between children (if I am not doing something wrong).
>>
>> So, is there any way to simplify the app or improve this performance? Do I have to use a key-value store for sharing large objects across other nodes? I just want to have a couple of objects, read and write a lot of data, and share them between processes.
>>
>> Thanks in advance,

--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/nodejs?hl=en
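The near-one-second cost Murat describes is plausible: messages passed between cluster workers with `process.send()` are serialized (JSON in Node of that era) and copied into the receiving process, so the cost grows with object size. A rough sketch of just the serialization half of that cost, for an object shaped like the one described (the object layout here is made up for illustration):

```javascript
// Build an object with 100,000 entries, roughly like a game's state table.
const bigObject = {};
for (let i = 0; i < 100000; i++) {
  bigObject['key' + i] = { x: i, y: i * 2, name: 'player' + i };
}

const start = Date.now();
const wire = JSON.stringify(bigObject); // what the sender must do before IPC
const copy = JSON.parse(wire);          // what the receiving worker must do
console.log('JSON round-trip:', Date.now() - start, 'ms,', wire.length, 'bytes');
```

Timing just the stringify/parse round-trip (before any actual pipe I/O) already shows a per-message cost proportional to the whole object, which is why sending small per-key updates between workers, or keeping the large object in one place and querying it, tends to beat shipping the full object on every change.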
