On Feb 7, 2014, at 11:12 AM, Chris Behrens <[email protected]> wrote:

> 
> On Feb 7, 2014, at 8:21 AM, Jesse Noller <[email protected]> wrote:
> 
>> It seems that baking concurrency models into the individual clients / 
>> services adds some opinionated choices that may not scale, or fit the needs 
>> of a large-scale deployment. This is one of the things I've noticed looking 
>> at the client tools - don't dictate a concurrency backend; treat it as 
>> producer/consumer/message passing and you end up with something that can 
>> potentially scale out a lot more. 
> 
> I agree, and I think we should do this with our own clients. However, on the 
> service side, there are a lot of 3rd party modules that would need the 
> support as well. libvirt, xenapi, pyamqp, qpid, kombu (which sits on 
> pyamqp), etc., come to mind as the top possibilities.
> 
> I was also going to change direction in this reply and say that we should 
> back up and come up with a basic set of requirements. In this thread, I think 
> I’ve only seen arguments against various technology choices without a clear 
> list of our requirements. Since Chuck has posted in the meantime, I’m going 
> to start listing what I view as some of our requirements in reply to him.
> 
> - Chris
> 

Definitely +1 - we’re talking solutions and technology when the requirements 
aren’t clear. I mean, heck - some might want to use Celery/etc., some might 
want to use cloud queues and other things on the server side. I’d like to help 
get us to a point where we could enable operators to swap in the client-side 
and server-side pieces that work best for their topology.
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev