Yariv Sadan wrote:
> I think it's highly unlikely that you'll have a key collision given
> that most webapps can expect at most a few thousand sessions and
> 8.39299e+17 is a very big number :)
>
> In production code, though, I would make this code more efficient:
>
> length(lists:seq($a, $z) ++ lists:seq($A, $Z) ++ lists:seq($0, $9)).
>
> It should lookup each element from a fixed size tuple.
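For illustration, the fixed-tuple lookup might look something like this. This is only a sketch: the module and function names are mine, not ErlyWeb's, and it uses the modern rand module.

```erlang
%% Sketch of key generation via a fixed-tuple character lookup.
%% Module and function names are hypothetical, not from ErlyWeb.
-module(keygen).
-export([random_key/1]).

%% Adjacent string literals are concatenated at compile time;
%% list_to_tuple gives O(1) indexed access via element/2.
-define(CHARS,
        list_to_tuple("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789")).

%% Pick Len characters by indexing into the 62-element tuple,
%% avoiding a list traversal per character.
random_key(Len) ->
    Size = tuple_size(?CHARS),
    [element(rand:uniform(Size), ?CHARS) || _ <- lists:seq(1, Len)].
```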
In production code, if you're really worried about efficiency, you'd take the time, all three seconds of it, to add

-define(LETTERS, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789").

at the top of the module, if I understand you correctly.

This little corner of the code attracted my attention because it's something I initially did really badly - I was more worried about other things - and I started to wonder whether there's an "optimal" key length versus the CPU usage needed to generate it. The same problem also shows up in a lot of other places, such as cookie generation. I was thinking last night that it might be better to start with a key length of 5 octets, then if a collision occurs try 6, then 7, and so on. This would keep the keys short, but if you've got images of several KB flying around, plus all the overhead of HTTP, 5 versus 10 or 20 bytes is neither here nor there.

Getting back on topic: any reason you used the mnesia:dirty_* functions? It looks as if a transaction should be able to handle the cases and ifs without a problem. I only mention this because it seems that continuations rely on side effects, and I've been meaning to see how Haskell handles this with Monads and Arrows (I'm not up on Haskell at all). Would it be possible to separate out the functional and non-functional parts, a la the OTP gen_* modules? If the functional parts of the code were separated out, then the state could be serialised and written to disk after a short timeout so as not to consume resources, brought back when needed, or deleted entirely after a longer timeout. Unrelated to this, you could have multiple machines running, with connections being forwarded to the correct machine, i.e. the one on which the session exists.

The other thing I was thinking of was a rough capacity model: there is one process per component instance, hence the maximum number of processes equals the maximum number of sessions times the maximum number of components (ignoring the birth and death of sessions and components). For example, 10,000 sessions with 20 components each would mean 200,000 processes.
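The grow-on-collision idea above could be sketched like this. The existence check is a caller-supplied fun (e.g. wrapping an mnesia session lookup); all names here are mine, not ErlyWeb's.

```erlang
%% Sketch of collision-driven key growth: start at 5 octets and
%% retry with one more octet whenever the generated key collides.
%% Names are hypothetical, not from ErlyWeb.
-module(shortkey).
-export([new_key/1]).

-define(CHARS,
        list_to_tuple("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789")).

%% Exists is a fun(Key) -> boolean(), e.g. a wrapper around a
%% session-table lookup. Keys stay short until collisions force
%% them longer.
new_key(Exists) ->
    new_key(Exists, 5).

new_key(Exists, Len) ->
    Key = random_key(Len),
    case Exists(Key) of
        true  -> new_key(Exists, Len + 1);
        false -> Key
    end.

random_key(Len) ->
    Size = tuple_size(?CHARS),
    [element(rand:uniform(Size), ?CHARS) || _ <- lists:seq(1, Len)].
```

A caller might pass something like fun(K) -> session_store:exists(K) end, where session_store is whatever lookup the application already has.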
Anyone care to fill in some realistic numbers or supply a better model?

Anyway, I'd be interested in seeing ErlyWeb scale horizontally across machines in a manner which minimises the use of hardware such as external load balancers.

Jeff.

--
You received this message because you are subscribed to the Google Groups "erlyweb" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/erlyweb?hl=en
