Actually I have a similar idea, and I plan to work on it in L via a nova-spec (is it worth a spec?).
But this idea does not come from this bug; it comes from other cases:

1. Currently the user must specify 'on_shared_storage' and 'block_migration' when calling evacuate and live_migration. Once we track shared storage, the user no longer needs to specify those parameters. The scheduler can also give priority to hosts that share storage with the previous host.

2. Currently nova-compute does not release resources for a stopped instance, and does not reschedule when the stopped instance is started again. Implementing this requires checking whether the instance is on shared storage, which makes the code very complex. Once the scheduler tracks shared storage, we can implement this cleanly. There would be an option controlling whether to reschedule a stopped instance when it is not on shared storage, because block migration is wasteful.

3. Other intelligent scheduling.

The basic idea is to add a new column to the compute_node table that stores an ID identifying a storage backend. If two compute nodes have the same storage ID, they are on the same shared storage. The ID would be generated differently for each storage type, e.g. NFS, Ceph, LVM, and so on.

Thanks
Alex

2015-02-25 22:08 GMT+08:00 Gary Kotton <gkot...@vmware.com>:
> Hi,
> There is an issue with the statistics reported when a nova compute driver
> has shared storage attached. That is, there may be more than one compute
> node reporting on the shared storage. A patch has been posted -
> https://review.openstack.org/#/c/155184. The direction here was to add an
> extra parameter to the dictionary that the driver returns for the resource
> utilization. The DB statistics calculation would take this into account and
> then do calculations accordingly.
> I am not really in favor of the approach for a number of reasons:
>
> 1. Over the last few cycles we have been making a move to trying to
> better define the data structures and models that we use. More specifically
> we have been moving to object support.
> 2. A change in the DB layer may break this support.
> 3. We are trying to have versioning of the various blobs of data that are
> passed around.
>
> My thinking is that the resource tracker should be aware that the compute
> node has shared storage, and the changes should be done there. I do not
> think that the compute node should rely on the changes being done in the DB
> layer - that may be on a different host and even run a different version.
>
> I understand that this is a high or critical bug, but I think that we
> need to discuss it more and try to have a more robust model.
> Thanks
> Gary
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
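To make the storage-ID idea described above concrete, here is a rough sketch. This is not Nova code: the function name, the set of backends, and the identifying fields per backend are all illustrative assumptions. The only point it demonstrates is that each storage type derives a deterministic ID from whatever uniquely identifies the shared backend, so two compute nodes on the same storage compute the same ID.

```python
# Illustrative sketch only (not Nova code): derive a stable storage ID so
# that two compute nodes backed by the same shared storage report the same
# value in the hypothetical new compute_node column.
import hashlib


def storage_id(backend, **info):
    """Return a deterministic ID for a storage backend.

    backend: 'nfs', 'ceph', or 'lvm' (hypothetical set).
    info:    backend-specific identifying fields (hypothetical names).
    """
    if backend == "nfs":
        # Same export server and path => same shared storage.
        key = "nfs:%s:%s" % (info["host"], info["export_path"])
    elif backend == "ceph":
        # A Ceph cluster is already uniquely identified by its fsid.
        key = "ceph:%s" % info["fsid"]
    elif backend == "lvm":
        # Local LVM is not shared, so scope the ID to this node.
        key = "lvm:%s:%s" % (info["hostname"], info["vg_name"])
    else:
        raise ValueError("unknown backend %r" % backend)
    return hashlib.sha1(key.encode("utf-8")).hexdigest()


# Two nodes mounting the same NFS export compute the same ID, so the
# scheduler can tell they share storage:
a = storage_id("nfs", host="filer1", export_path="/vol/nova")
b = storage_id("nfs", host="filer1", export_path="/vol/nova")
assert a == b
```

With something like this in place, evacuate and live_migration could infer 'on_shared_storage' by comparing the source and target nodes' IDs instead of asking the user.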