Do any of you have experience using cyclone for handling the Redis-to-client side of things?
iain

On Tue, Sep 29, 2015 at 5:32 PM, Iain Duncan <[email protected]> wrote:

> Thanks everyone. To clarify for Jonathan, we're using RabbitMQ as our bus for jobs between web services and worker processes, while I'm thinking of Redis as a fast way to get messages back to client apps, mostly because the cost of a request checking Redis is so low. I could see our use of RabbitMQ for ESB-ish stuff growing in the future. (We're basically slowly breaking a massive monolith out into distributed services, for a variety of good reasons.) The intention is that the design and tool choices should be able to grow with us as we add more use cases for the bus and the server-push callback scenario. Right now we're only dealing with file processing, but we could later be adding more.
>
> I mostly asked about Erlang because I really don't want to do Node, and it seems like the most common way to get lots of notifications to lots of clients is Node + Redis or Erlang + Redis, should we outgrow Pyramid + Redis for pushing events back. We do have some situations where there is the potential for everyone to be on the app at the exact same time (aligned with a real-world event, for example). I plan to write it all so it's easy to replace any component.
>
> It's certainly going to be easiest, for the prototype, to do short polling from the Angular app to a Redis store accessed through standard Pyramid. But I also don't mind writing that part in a different language, because it's such a small job (receive request to check for events, check store, return JSON).
>
> Thanks everyone for weighing in,
> iain
>
> On Tue, Sep 29, 2015 at 4:49 PM, Jonathan Vanasco <[email protected]> wrote:
>
>> I'm a firm believer in devising an upgradable proof-of-concept system.
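The prototype endpoint Iain describes (receive request to check for events, check store, return JSON) is small enough to sketch. A minimal sketch, assuming a per-user Redis list keyed like "events:<user_id>" -- the key scheme, the function names, and the dict standing in for a real Redis client are all illustrative assumptions, not details from the thread:

```python
import json

# Stand-in for a Redis client: a dict of per-user event lists.
# A real deployment would use redis-py against the same key scheme,
# e.g. LRANGE + DEL in a MULTI/EXEC, or LPOP in a loop.
class FakeRedis:
    def __init__(self):
        self._lists = {}

    def rpush(self, key, value):
        # Worker side: append an event to the user's list.
        self._lists.setdefault(key, []).append(value)

    def lpop_all(self, key):
        # Endpoint side: drain and return everything pending.
        return self._lists.pop(key, [])


def check_events(store, user_id):
    """The whole short-polling job: check store, return JSON."""
    events = store.lpop_all("events:%s" % user_id)
    return json.dumps({"events": events})
```

A worker that finishes processing a file would `rpush` a serialized event onto the same key; the Angular app polls the endpoint and gets `{"events": [...]}` back, empty most of the time, which is why the per-request cost stays so low.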
>> What drew us to this approach is that we could prove it out relatively quickly, and then just scale it out as needed in a rather performant manner -- making tradeoffs against dev time, dev-ops time, and performance.
>>
>> We haven't yet hit a concurrency where the routes need to be split out -- and when we do, we'll be able to build and deploy a dedicated service in a matter of minutes. I would love to be in the situation where a dedicated service in a more performant technology is needed -- but this looks scalable to a decent level. I also forgot: if you wanted to shave a bit of time off and stay in Python, you could also look at Falcon for something like this -- it handles large concurrencies of tiny API payloads really well.
>>
>> I looked into Erlang a while back -- a close friend is a really big contributor to both the Erlang and Python core libraries and is on both their conference circuits, so he's been a sounding board and evangelist for me. If you were doing chat, Erlang would be perfect, but for what you're talking about, it sounds like overkill. Our web spiders and a lot of our internal services would be best off in Erlang, as would some of our SOA components -- but all of those concerns involve higher-concurrency operations with lots of blocking events... and that blocking is our bottleneck. You basically just have a very simple read/reply need against a single source -- one that is going to be serving mostly from an in-memory database.
>>
>> My 2¢ is this: when it comes to the different polling options, it's entirely a UX issue. Short polling is a pretty crappy and un-savvy solution from a tech perspective, but it's usually "good enough" and has more pros than cons in many situations. When you're dealing with a file upload, the user hits "upload" and is accustomed to waiting a few seconds. If you toss a little animated gif in there, you're good for 5-10 seconds before they worry.
>> If you put in a percentage counter with feedback, you can expand that window further. With that experience, you're in a good spot for using short polling. You don't *need* to give a faster response; it's just nicer if you do. So you can use this really clunky short-polling technique that is easy to implement, and no one really notices or cares.
>>
>> On Tuesday, September 29, 2015 at 6:19:02 PM UTC-4, Iain Duncan wrote:
>>>
>>> Thanks Jonathan, your solution is what I was leaning toward for the proof-of-concept -- good to know it's somewhat performant too. I don't know enough about fast-response-time situations to really know the pros and cons of short polling, long polling, and websockets.
>>>
>>> Did you also look into Erlang at all? I think all our complex domain logic will stay in Python, with some apps/services done in Pyramid and some in Django, but we are looking into adding more messaging to it, and I wonder if for sending stuff back we might want to look at Erlang + Redis or Erlang + RabbitMQ.
>>
>> --
>> You received this message because you are subscribed to the Google Groups "pylons-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at http://groups.google.com/group/pylons-discuss.
>> For more options, visit https://groups.google.com/d/optout.
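The UX budget Jonathan describes (a few seconds with a spinner, longer with a progress bar) can be made concrete on the client as a poll schedule: poll quickly while the user is actively watching, then back off so an idle tab isn't hammering the server. A sketch -- the interval values, cap, and budget are illustrative assumptions, not numbers from the thread:

```python
def poll_intervals(initial=1.0, factor=1.5, cap=10.0, budget=60.0):
    """Yield sleep intervals for a short-polling loop: fast at first
    (the user just hit "upload" and is watching the spinner), then
    backing off toward `cap` until `budget` seconds have elapsed."""
    elapsed = 0.0
    interval = initial
    while elapsed < budget:
        yield interval
        elapsed += interval
        interval = min(interval * factor, cap)
```

The client would sleep for each yielded interval, hit the check-events URL, and stop as soon as events arrive or the budget runs out -- at which point it can fall back to a "still working..." state or a manual refresh.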
