> > Note that migration between CPUs of different vendors is not a
> > supported use case (it will always depend on specific models,
> > kernel versions, etc.), so we can only justify not adding it to the
> > new default model if it doesn't make life worse for everybody else.
> >
> > And I'd be a bit careful to jump to general conclusions just from
> > one forum post.
>
> It seems like you were the one adding the flag ;)

Yes, I know ^_^ (It's the default on RHEV too, but they don't support
AMD<->Intel migration either.)
> and the LWN-archived mail linked in the commit message says
>
> > Ticket locks have an inherent problem in a virtualized case, because
> > the vCPUs are scheduled rather than running concurrently (ignoring
> > gang scheduled vCPUs). This can result in catastrophic performance
> > collapses when the vCPU scheduler doesn't schedule the correct "next"
> > vCPU, and ends up scheduling a vCPU which burns its entire timeslice
> > spinning. (Note that this is not the same problem as lock-holder
> > preemption, which this series also addresses; that's also a problem,
> > but not catastrophic).
>
> "catastrophic performance collapses" doesn't sound very promising :/

I have found another thread here:
https://lore.kernel.org/all/0484ea3f-4ba7-4b93-e976-098c57171...@redhat.com/
where Paolo has done a benchmark showing only a 3% difference. But yes,
it is still slower anyway.

At minimum, it could be interesting to expose the flag in the GUI, for
users who really need Intel<->AMD migration.
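For context, the pathology the quoted mail describes comes from the strict
FIFO ordering a ticket lock enforces: if the vCPU holding the "next" ticket
is descheduled by the host, every other waiter spins uselessly for its whole
timeslice. A minimal illustrative sketch (not kernel code; names are mine) of
such a ticket lock, whose busy-wait loop is exactly what a paravirtualized
spinlock (the kvm_pv_unhalt hint) replaces with a halt-and-kick:

```c
#include <stdatomic.h>

/* Illustrative ticket lock. Tickets force FIFO ordering: waiter N+1
 * cannot enter before waiter N, so if the vCPU holding ticket N is
 * preempted by the host, all later waiters spin for nothing. */
typedef struct {
    atomic_uint next_ticket;   /* ticket handed out to each new waiter */
    atomic_uint now_serving;   /* ticket currently allowed to enter    */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
    unsigned me = atomic_fetch_add(&l->next_ticket, 1);
    while (atomic_load(&l->now_serving) != me)
        ;  /* busy-wait: on a preempted holder this burns the timeslice;
              a PV spinlock would instead halt here and wait for a kick */
}

static void ticket_unlock(ticket_lock_t *l)
{
    atomic_fetch_add(&l->now_serving, 1);  /* hand the lock to the next ticket */
}
```

This is only meant to make the quoted "burns its entire timeslice spinning"
concrete; the real kernel implementation differs in many details.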