* Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> * Zhangbo (Oscar) (oscar.zhan...@huawei.com) wrote:
> > Thank you David and Jason!
> >
> > BTW, I noticed that VMware did similar work to ours, but the situation
> > is a little different: they proposed postcopy (under the name
> > QuickResume) in vSphere 4.1, but they replaced it with SDPS (similar to
> > CPU-THROTTLE) from vSphere 5 onwards; do you know the reason behind this?
> > Reference:
> > https://qianr.wordpress.com/2013/10/14/vmware-vm-live-migration-vmotion/
> > It's said that they had already introduced shared storage to avoid
> > losing the guest when the network connection is lost. So what was their
> > reason for dropping QuickResume?
>
> Hmm, that's a good summary; I'd not seen any detail of how VMware's
> systems worked before.
> I'm not exactly sure, but I think that's saying that they store the
> outstanding pages that haven't been transferred in a file on disk,
> so that they could be used to recover the VM later if the network failed.
> It's not clear to me from that description whether they do that only when
> the network fails, or as part of the normal postcopy flow.
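To make the "outstanding pages in a file" idea concrete, here's a hedged sketch of what writing such a recovery file could look like. This is purely illustrative; it is not VMware's or QEMU's actual format, and the record layout, the `sent[]` bitmap, and the function name are all assumptions:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch, not VMware's or QEMU's actual format: write each
 * page that hasn't yet been transferred into a recovery file as a
 * (page_index, page_data) record, so that a postcopy interrupted by a
 * network failure could later be completed from the file instead of from
 * the source.  Returns the number of records written, or -1 on error. */
static long write_recovery_file(FILE *f, const uint8_t *ram,
                                const bool *sent, size_t npages,
                                size_t page_size)
{
    long count = 0;

    for (size_t i = 0; i < npages; i++) {
        if (sent[i]) {
            continue;               /* destination already has this page */
        }
        uint64_t idx = i;
        if (fwrite(&idx, sizeof(idx), 1, f) != 1 ||
            fwrite(ram + i * page_size, page_size, 1, f) != 1) {
            return -1;              /* I/O error */
        }
        count++;
    }
    return count;
}
```

The file only grows with the number of pages still outstanding, which would explain how VMware could keep the on-disk chunk small relative to full guest RAM.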
I was thinking about this a bit more; recovering from a failed network
connection for postcopy might be doable.  I can kind of see how to do it
*without* using a disk intermediate; with a disk it's a bit trickier.

Assuming the network connection is lost after we enter postcopy mode,
we've also lost a bit of state, because the source doesn't know exactly
which pages the destination has already received.  This is also something
that we don't actually keep track of on the destination; we leave it up
to the source to tell us when we're finished.

What we could do is:
  a) When the network connection fails, make sure we don't kill the
     destination, and go into some form of paused mode.
  b) Also make sure the source doesn't lose the migration state.
  c) Now, get the destination to listen for a connection (something like
     migrate_incoming, but we don't want to reset the state, and we do
     need the ability to specify a different network setup).
  d) Tell the source to connect to the destination again.
  e) The source does *not* carry on any background transfer - it only
     transfers pages that the destination asks for.
  f) We start a recovery thread on the destination that just walks all of
     memory, reading one byte from each page; it should get stuck on any
     outstanding pages and cause those pages to be requested.
  g) The source must send a requested page even if it had previously sent
     it, because it might have been lost in network buffers.
  h) Once that recovery thread finishes, we know we've received all
     pages, so we're good.

That all sounds doable; the tricky bit is making sure the destination
copes with the failure well enough to be able to recover; it mustn't
cause an exit or a cleanup of the migration state when the network
errors.  Also, if a device tries to access a page of memory, it had
better not block the monitor, since we'll need that to recover.

Could we do it to file, like that description of VMware?
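The recovery thread in step (f) could be as simple as the following sketch. This is not actual QEMU code, just an illustration of the idea under the assumption that guest RAM is registered with userfaultfd, so that a read of a not-yet-received page blocks in the fault handler, which then requests the page from the source:

```c
#include <stddef.h>
#include <stdint.h>

/* Hedged sketch of step (f): walk all of guest RAM, reading one byte per
 * page.  On a userfaultfd-registered region, reading a page that never
 * arrived blocks until the fault handler has fetched it from the source;
 * when the walk returns, every page is known to be present.  The xor
 * accumulator exists only to stop the compiler optimising the reads
 * away; ram_base/ram_size/page_size are assumed parameters. */
static uint8_t recovery_walk(volatile const uint8_t *ram_base,
                             size_t ram_size, size_t page_size)
{
    uint8_t sink = 0;

    for (size_t off = 0; off < ram_size; off += page_size) {
        sink ^= ram_base[off];  /* touches the page; may fault and block */
    }
    return sink;
}
```

Combined with rule (g) - the source resends any requested page even if it thinks it already sent it - this walk also mops up pages lost in network buffers, since those look identical to never-sent pages from the destination's point of view.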
Again the tricky bit is to do with the pages that may or may not have
been lost in the packet buffers.

If we did a migrate-to-file/snapshot then that would have all of memory,
rather than the nice small chunk that article describes, but we could
recover from a whole migrate-to-file if we did something special to make
the postcopy load from that (like the userfault-driven loadvm work people
have done).

To keep the file small we would have to be smarter.  We'd have to include
all the pages that we *know* we haven't sent, but we'd also have to
include pages that might not have been received; e.g. resend, say, the
last ~20MByte (enough for a few packet buffers?) into that file.  Then
the destination would have to do the recovery thread like above, and also
only load pages it was asked for.

One trick to make this easier would be to have something that loaded this
recovery file and pretended to be a source VM; then we could use the
sequence above on the destination side without any changes.

Dave

> > Are there any other prices we need to pay to have postcopy?
>
> The precopy phase in postcopy mode is slower than normal precopy
> migration, but I think that's the only other penalty.
>
> Dave
>
> > -----Original Message-----
> > From: Jason J. Herne [mailto:jjhe...@linux.vnet.ibm.com]
> > Sent: January 7, 2016 3:43
> > To: Dr. David Alan Gilbert; Zhangbo (Oscar)
> > Cc: zhouyimin Zhou(Yimin); Zhanghailiang; Yanqiangjun; Huangpeng
> > (Peter); email@example.com; Herongguang (Stephen); Linqiangmin;
> > Huangzhichao; Wangyufei (James)
> > Subject: Re: [Qemu-devel] What's the advantages of POSTCOPY over
> > CPU-THROTTLE?
> >
> > On 01/06/2016 04:57 AM, Dr. David Alan Gilbert wrote:
> > > * Zhangbo (Oscar) (oscar.zhan...@huawei.com) wrote:
> > >> Hi all:
> > >>     Postcopy is suitable for migrating guests which have large
> > >> page change rates.  It
> > >>     1 makes the guest run at the destination ASAP.
> > >>     2 makes the downtime of the guest small enough.
> > >>     If we don't take the 1st advantage into account, then its
> > >> benefit seems similar to CPU-THROTTLE: both of them keep the
> > >> guest's downtime small during migration.
> > >>
> > >>     CPU-THROTTLE makes the guest's dirty-page rate *smaller than
> > >> the network bandwidth*, in order to make the number of pages to
> > >> send in each iteration converge and achieve a small-enough downtime
> > >> in the last iteration.
> > >>     If we adopt POSTCOPY here, the guest's dirty-page rate would
> > >> *become equal to the bandwidth*, because we have to fetch its
> > >> memory from the source side, via the network.
> > >>     Both of them introduce performance degradation of the guest,
> > >> which may in turn make the downtime larger.
> > >>
> > >>     So, here comes the question: if we just compare POSTCOPY with
> > >> CPU-THROTTLE on their ability to decrease downtime, POSTCOPY seems
> > >> to have no advantage over CPU-THROTTLE; is that right?
> > >>
> > >>     Meanwhile, are there any other benefits of POSTCOPY besides
> > >> the two mentioned above?
> > >
> > > It's a good question, and they do both try to help solve the same
> > > problem.
> > > One problem with cpu-throttle is whether you can throttle the CPU
> > > enough to get the dirty rate below the rate of the network, and the
> > > answer to that is very workload dependent.  On a large, many-core
> > > VM, even a little bit of CPU can dirty a lot of memory.  Postcopy
> > > is guaranteed to finish the migration, irrespective of the workload.
> > >
> > > Postcopy is pretty fine-grained, in that only threads that are
> > > accessing pages that are still on the source are blocked; since it
> > > allows the use of async page faults, that means it's even finer
> > > grained than the vCPU level, so many threads come back up to full
> > > performance pretty quickly even if there are a few pages left.
> >
> > Good answer Dave.  FWIW, I completely agree.
> > Using cpu throttling can help the situation, depending on the
> > workload.  Postcopy will *always* work.
> > One possible side effect of postcopy is loss of the guest if the
> > network connection dies during the postcopy phase of migration.  This
> > should be a very rare occurrence, however.  So both methods have
> > their uses.
> >
> > --
> > -- Jason J. Herne (jjhe...@linux.vnet.ibm.com)
>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK