I guess it's a bit late, but Thrift <https://thrift.apache.org/> sounds like a better fit. You also get an almost complete client and server for free.
This <https://thriftpy.readthedocs.org/en/latest/> is an awesome example in Python, although it's not representative, because ThriftPy does more than most implementations.

On Friday, November 21, 2014 at 6:31:12 PM UTC, [email protected] wrote:
>
> Thanks for the reply,
> I will move forward with this approach then.
>
> On Monday, November 17, 2014 at 10:55:42 UTC+1, [email protected] wrote:
>>
>> Hi,
>>
>> I'm currently evaluating beanstalkd for our project, and since we need a
>> job queue it seems to be a pretty good fit.
>> The biggest benefit for me would be that it allows us to scale pretty
>> easily. However, scaling is also a problem for services that don't just
>> process a job and store the result somewhere.
>>
>> We run several query services that are currently contacted via HTTP from
>> the webserver; they compute a result and return it to the webserver, which
>> then renders it into HTML to serve a web client. Basically an RPC
>> workflow.
>>
>> Since processing these queries is sometimes computation-intensive, I'm
>> looking for a way to scale through load balancing.
>> Doing this with a queue and beanstalkd looks like a simple and promising
>> solution. Replies would be handled through one-time queues with a unique
>> id.
>>
>> However, it could be that this is just another problem that looks like a
>> nail for my beanstalkd hammer.
>> So, is this a good idea? What are the performance problems compared to
>> HTTP? Are one-time queues problematic?
>>
>> thanks
>>

--
You received this message because you are subscribed to the Google Groups "beanstalk-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/beanstalk-talk.
For more options, visit https://groups.google.com/d/optout.
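For what it's worth, the "one-time reply queue keyed by a unique id" design quoted above can be sketched with nothing but the Python stdlib. Here `queue.Queue` stands in for beanstalkd tubes (with beanstalkd you would `put` into a shared request tube and `reserve` on a per-request reply tube instead); all names below are hypothetical, not from any beanstalkd client library:

```python
import queue
import threading
import uuid

request_queue = queue.Queue()  # shared "request tube"; adding workers scales throughput

def worker():
    """Pull jobs, compute a result, push it onto the job's one-time reply queue."""
    while True:
        job = request_queue.get()
        if job is None:                     # shutdown sentinel
            break
        result = job["payload"].upper()     # stand-in for the real computation
        job["reply_queue"].put({"id": job["id"], "result": result})
        request_queue.task_done()

def call(payload, timeout=5.0):
    """RPC-style call: enqueue a job, then block on its one-time reply queue."""
    reply_queue = queue.Queue(maxsize=1)    # one-time reply "tube"
    job_id = str(uuid.uuid4())
    request_queue.put({"id": job_id, "payload": payload,
                       "reply_queue": reply_queue})
    reply = reply_queue.get(timeout=timeout)
    assert reply["id"] == job_id            # the id ties reply to request
    return reply["result"]

# Two workers standing in for two load-balanced query services.
for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()

print(call("hello"))  # → HELLO
```

With real beanstalkd the webserver would name the reply tube after the job id and `watch` it with a deadline, which also gives you the timeout behavior the original poster was asking about.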
