Ok, news so far:

It works like magic. Nova has the option:
[glance]
host=127.0.0.1

And I do not need to cheat with endpoint resolution (my initial plan was to resolve the glance endpoint to 127.0.0.1 with /etc/hosts magic). The normal glance-api replies to external client requests (image-create/download/list/etc.), and the local glance-apis (one per compute node) are used to connect to swift.
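For reference, a minimal glance-api.conf sketch for such a per-compute instance might look like the fragment below. This is an assumption about the setup, not from the original post: hostnames and credentials are placeholders, and the exact section for the swift store options varies between glance releases (some put them in `[DEFAULT]`, later ones in `[glance_store]`):

```ini
[DEFAULT]
# per-compute glance-api: no image cache, swift as backing store
default_store = swift
# point at the 'official' glance-registry (placeholder hostname)
registry_host = glance-registry.example.com

[glance_store]
# placeholder swift/keystone credentials
swift_store_auth_address = http://keystone.example.com:5000/v2.0/
swift_store_user = service:glance
swift_store_key = SECRET
swift_store_container = glance
```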

Glance registry runs in normal mode (only on the 'official' API servers).

I don't see any reason why we should centralize all traffic to swift through special dedicated servers, investing in fast CPUs and 10G links.

With this solution the glance-api CPU load is distributed evenly across all compute nodes, and overall snapshot traffic (measured on ports) was cut down by a factor of three!

Why didn't I think of this earlier?

On 01/16/2015 12:20 AM, George Shuklin wrote:
Hello everyone.

One more thing in the light of small OpenStack deployments.

I really dislike the triple network load caused by current glance snapshot operations. When a compute node makes a snapshot, it works with the files locally, then it sends them to glance-api, and (if the glance API is backed by swift) glance sends them on to swift. Basically, for each 100 GB disk there is 300 GB of network operations. It is especially painful for glance-api, which needs more CPU and network bandwidth than we want to spend on it.
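One way to read the 100 GB → 300 GB figure (this accounting is my assumption, not spelled out in the post): count the bytes crossing the compute and glance-api network ports, and treat the swift side as identical in both layouts. With a local glance-api the upload happens over loopback, so only the final push to swift crosses the wire:

```python
# Rough per-snapshot traffic accounting (GB), assumed numbers.
disk_gb = 100

# Centralized glance-api: bytes on compute and glance-api ports
compute_tx = disk_gb   # compute -> glance-api upload
glance_rx = disk_gb    # glance-api receives the upload
glance_tx = disk_gb    # glance-api -> swift
centralized = compute_tx + glance_rx + glance_tx

# Local glance-api on the compute node: the upload stays on
# loopback; only the push to swift crosses the network
local = disk_gb        # compute (via local glance-api) -> swift

print(centralized)          # prints 300
print(centralized // local) # prints 3
```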

So the idea: put a glance-api (without cache) on each compute node.

To help a compute node reach the proper glance, the endpoint points to an FQDN, and on each compute node that FQDN resolves to localhost (where a glance-api lives). Plus a normal glance-api on the API/controller node to serve dashboard/API clients.
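For illustration, the resolution trick could be sketched like this (hostname and address are placeholders, not from the original message):

```
# /etc/hosts on each compute node: glance FQDN -> loopback
127.0.0.1      glance.example.com

# /etc/hosts (or DNS) everywhere else: the 'official' glance-api
203.0.113.10   glance.example.com
```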

I haven't tested it yet.

Any ideas on possible problems/bottlenecks? And how many glance-registry instances do I need for this?


_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
