On 25 February 2014 15:44, Chris Friesen <[email protected]> wrote:
> On 02/25/2014 05:15 AM, John Garbutt wrote:
>> On 24 February 2014 22:14, Chris Friesen <[email protected]> wrote:
>>> What happens if we have a shared-storage instance that we try to
>>> migrate and fail and end up rolling back? Are we going to end up with
>>> messed-up networking on the destination host because we never actually
>>> cleaned it up?
>>
>> I had some WIP code up to clean that up, as part of the move to
>> conductor; it's massively confusing right now.
>>
>> Looks like a bug to me.
>>
>> I suspect the real issue is that some parts of:
>>     self.driver.rollback_live_migration_at_destination(context, instance,
>>         network_info, block_device_info)
>> need more information about whether there is shared storage being used
>> or not.
>
> What's the timeframe on the move to conductor?
Not before Juno now :( It got cut just before Havana shipped, and so it
needed a complete rewrite once Icehouse opened, which didn't get completed
in time. Sorry. I have a better approach now, but it needs coding up
properly.

> I'm looking at fixing up the resource tracking over a live migration
> (currently we just rely on the audit fixing things up whenever it gets
> around to running), but to make that work properly I need to
> unconditionally run rollback code on the destination.

OK, ouch. That's worth doing.

I have added a new bug tag "live-migrate", and I would love to see all
these bugs people are finding documented under that tag. I want to spend
some time working through some of them as I go into bug-fixing mode.

John

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
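[Editor's note] The point raised above, that the destination rollback needs to know whether shared storage is in use, can be sketched as below. This is a minimal illustration only: the `is_shared_storage` parameter and the returned action list are hypothetical, not the real Nova virt driver signature, which at the time of this thread took only `(context, instance, network_info, block_device_info)`.

```python
def rollback_live_migration_at_destination(context, instance, network_info,
                                           block_device_info,
                                           is_shared_storage=False):
    """Clean up the destination host after a failed live migration.

    ``is_shared_storage`` is a hypothetical flag: when True, the instance
    disks live on storage visible to both hosts, so the rollback must tear
    down networking and volume connections but must never destroy the
    disks, which the source host is still using.
    """
    actions = ["teardown_networking", "remove_volume_connections"]
    if not is_shared_storage:
        # Disks were copied to the destination, so they are safe to delete.
        actions.append("destroy_disks")
    return actions
```

With such a flag, the unconditional rollback mentioned above could run on the destination in both the shared and non-shared cases without risking data loss.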
