Hi, just some oslo.messaging thoughts about running multiple nova-scheduler processes (this can also apply to any other daemon acting as an rpc server),

nova-scheduler uses service.Service.create() to create an rpc server, which is identified by a 'topic' and a 'server' (the oslo.messaging.Target). Creating multiple workers, as [1] does, will result in all workers sharing the same identity. This is because the 'server' attribute is usually set to the hostname, to make our lives easier. With rabbitmq for example, the 'server' attribute of the oslo.messaging.Target is used as part of a queue name, so you usually end up with the following queues:

    scheduler
    scheduler.scheduler-node-1
    scheduler.scheduler-node-2
    scheduler.scheduler-node-3
    ...

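For context, here is a rough sketch of how such a per-host rpc server is typically created with oslo.messaging (the SchedulerEndpoint class is illustrative, not the actual nova-scheduler code):

    import socket

    import oslo_messaging as messaging
    from oslo_config import cfg

    class SchedulerEndpoint(object):
        # Illustrative endpoint; the real methods live in the scheduler manager.
        def select_destinations(self, ctxt, **kwargs):
            pass

    transport = messaging.get_transport(cfg.CONF)
    # 'server' is set to the hostname, so every worker forked from this
    # process ends up consuming from the same scheduler.<hostname> queue.
    target = messaging.Target(topic='scheduler', server=socket.gethostname())
    server = messaging.get_rpc_server(transport, target,
                                      endpoints=[SchedulerEndpoint()],
                                      executor='eventlet')
    server.start()
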
Keeping things as-is means that messages going to scheduler.scheduler-node-1 will be processed by whichever worker is ready first. You will not be able to tell the workers apart from the amqp point of view. The side effect is that if a worker gets stuck (because of a bug or whatever) and stops consuming messages, we will not be able to see it. Any of the other workers will keep making scheduler-node-1 look alive and will consume new messages, even if all of them except one are dead/stuck.

So I think that each rpc server (each worker) should have a different 'server', to get amqp queues like this:

    scheduler
    scheduler.scheduler-node-1-worker-1
    scheduler.scheduler-node-1-worker-2
    scheduler.scheduler-node-1-worker-3
    scheduler.scheduler-node-2-worker-1
    scheduler.scheduler-node-2-worker-2
    scheduler.scheduler-node-3-worker-1
    scheduler.scheduler-node-3-worker-2
    ...

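A minimal sketch of what I mean (the worker_id argument and the exact naming scheme are just assumptions, the actual wiring in [1] may differ):

    import socket

    import oslo_messaging as messaging

    def build_worker_rpc_server(transport, topic, worker_id, endpoints):
        # Hypothetical helper: give each forked worker its own Target
        # 'server' by appending the worker index to the hostname, so each
        # worker gets its own scheduler.<host>-worker-<n> queue and can be
        # observed individually from the amqp side.
        server_name = '%s-worker-%d' % (socket.gethostname(), worker_id)
        target = messaging.Target(topic=topic, server=server_name)
        return messaging.get_rpc_server(transport, target, endpoints,
                                        executor='eventlet')
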
Cheers,


[1] https://review.openstack.org/#/c/159382/
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
