We do not use centralized storage (all instances run on local drives), and I just can't express my happiness about this. Every time monitoring sends me a '** PROBLEM ALERT bla-bla-bla', I know it is not a big deal. Just one server.

I do not want to turn gray prematurely. Even a quick glance at https://www.google.com/search?q=ceph+crash+corruption gives me a strong feeling that I don't want to centralize my points of failure.

Btw: if I sold the nodes designated for Ceph as normal compute nodes, it would be more effective than selling only the storage space from them (and buying more compute nodes for the actual work).

On 01/16/2015 12:31 AM, Abel Lopez wrote:
That specific bottleneck can be solved by running glance on ceph, and running ephemeral instances on ceph as well. Snapshots become a quick backend operation then. But you've built your installation on a house of cards.
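For reference, the usual wiring looks roughly like this (a sketch using the stock Juno-era option names; the pool and user names are placeholders): glance stores images in an rbd store, and nova boots ephemeral disks from the same cluster.

  # glance-api.conf -- images live in Ceph
  [DEFAULT]
  # Expose direct rbd locations so nova can do copy-on-write clones.
  show_image_direct_url = True

  [glance_store]
  default_store = rbd
  rbd_store_pool = images                    # placeholder pool
  rbd_store_user = glance                    # placeholder cephx user
  rbd_store_ceph_conf = /etc/ceph/ceph.conf

  # nova.conf -- ephemeral disks in the same cluster
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms                      # placeholder pool
  images_rbd_ceph_conf = /etc/ceph/ceph.conf

A snapshot then becomes a clone inside the cluster instead of a byte-for-byte copy over the network.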

On Thursday, January 15, 2015, George Shuklin <[email protected]> wrote:

    Hello everyone.

    One more thing in the context of a small OpenStack deployment.

    I really dislike the triple load caused by the current glance
    snapshot operations. When a compute node takes a snapshot, it
    first handles the files locally, then sends them to glance-api,
    and (if the glance API is backed by swift) glance sends them on
    to swift. Basically, each 100 GB disk generates 300 GB of data
    movement. It is especially painful for glance-api, which needs
    more CPU and network bandwidth than we want to spend on it.

    So, the idea: put a glance-api (with caching disabled) on each
    compute node.
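    Roughly like this in glance-api.conf on each compute node (an
    untested sketch; the option names are from a stock Juno-era
    config, and the controller FQDN is a placeholder):

        [DEFAULT]
        # Only the local compute needs to reach this glance-api.
        bind_host = 127.0.0.1
        # All the lightweight glance-api instances share one registry.
        registry_host = controller.example.com

        [paste_deploy]
        # Plain keystone pipeline; the "+cachemanagement" variant
        # would re-enable the image cache we are trying to avoid.
        flavor = keystone

        [glance_store]
        # Same backend as the normal glance-api on the controller.
        default_store = swift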

    To steer each compute node to the proper glance, the endpoint
    points to an FQDN, and on each compute node that FQDN resolves
    to localhost (where the local glance-api lives). Plus a normal
    glance-api on the API/controller node to serve dashboard/API
    clients.
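    For example (glance.internal.example.com and the addresses are
    placeholders; the keystone endpoint for glance is registered
    once against that name):

        # /etc/hosts on every compute node: the shared FQDN
        # resolves to the local glance-api.
        127.0.0.1    glance.internal.example.com

        # /etc/hosts (or DNS) on the API/controller node: the same
        # FQDN resolves to the controller's own address.
        192.0.2.10   glance.internal.example.com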

    I haven't tested it yet.

    Any ideas on possible problems/bottlenecks? And how many
    glance-registry instances do I need for this?


_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
