On Tue, Jun 23, 2015 at 06:13:24PM +0100, Daniel P. Berrange wrote:
> On Tue, Jun 23, 2015 at 06:47:16PM +0200, Andreas Färber wrote:
> > On 23.06.2015 at 18:42, Daniel P. Berrange wrote:
> > > On Tue, Jun 23, 2015 at 06:33:05PM +0200, Michael S. Tsirkin wrote:
> > >> On Tue, Jun 23, 2015 at 05:25:55PM +0100, Daniel P. Berrange wrote:
> > >>> On Tue, Jun 23, 2015 at 06:15:51PM +0200, Andreas Färber wrote:
> > >>>> On 23.06.2015 at 17:58, Eduardo Habkost wrote:
> > >>>> I've always advocated remaining backwards-compatible and only making CPU
> > >>>> model changes for new machines. You among others felt that was not
> > >>>> always necessary, and now you're using the lack thereof as an argument
> > >>>> to stop using QEMU's CPU models at all? That sounds convoluted...
> > >>>
> > >>> Whether QEMU changed the CPU for existing machines, or only for new
> > >>> machines, is actually not the core problem. Even if we only changed
> > >>> the CPU in new machines, that would still be an unsatisfactory situation,
> > >>> because we want to be able to access different versions of
> > >>> the CPU without the machine type changing, and access different versions
> > >>> of the machine type without the CPU changing. IOW, it is the fact that the
> > >>> changes in CPU are tied to changes in machine type that is the core
> > >>> problem.
> > >>
> > >> But that's because we are fixing bugs. If CPU X used to work on
> > >> hardware Y in machine type A and stopped in machine type B, this is
> > >> because we have determined that it's the right thing to do for the
> > >> guests and the users. We don't break stuff just for fun.
> > >> Why do you want to bring back the bugs we fixed?
> > >
> > > Huh, I never said we wanted to bring back bugs. This is about allowing
> > > libvirt to fix the CPU bugs in a way that is independent of the machine
> > > types and portable across the hypervisors we deal with.
> > > We're absolutely still going to fix CPU model bugs and ensure a
> > > stable guest ABI.
> >
> > No, that's contradictory! Through the -x.y machines we leave bugs in the
> > old models *exactly* to assure a stable guest ABI. Fixes are only
> > applied to new machines, thus I'm pointing out that you should not use a
> > new CPU model with an old machine type.
>
> I'm not saying that libvirt would ever allow a silent guest ABI change.
> Given a libvirt XML config, the guest ABI will never change without an
> explicit action on the part of the app/user to change the XML.
>
> This is all about dealing with the case where the app/user consciously
> needs/wants to opt in to a guest ABI change, e.g. they wish
> to make use of some bug fix or feature improvement in the new machine
> type, but they do *not* wish to have the CPU model changed, because
> of some CPU model change that is incompatible with their hosts' CPUs.
> Conversely, they may wish to get access to a new CPU model, but not
> wish to have the rest of the guest ABI change. In both cases the user
> is explicitly opting in to the ABI change with knowledge of what
> this might mean for the guest OS. Currently we are tying users'
> hands by forcing CPU and machine types to change in lockstep.
>
> Regards,
> Daniel
Can we have a specific example, please? It's hard to understand the facts
based on such generic statements.

> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
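[Editorial note: a minimal sketch of the kind of independent selection Daniel is arguing for, in terms of plain QEMU command-line options. The `-machine` and `-cpu` flags are standard QEMU options; the specific machine-type and CPU-model names below are illustrative, and the crux of the thread is that in QEMU at the time the CPU model definitions were adjusted via machine-type compat properties, so the two choices were not in fact independent.]

```sh
# Keep the old machine type (guest ABI frozen for the rest of the
# virtual hardware) while opting in to a newer CPU model definition:
qemu-system-x86_64 -machine pc-i440fx-2.2 -cpu Haswell-noTSX disk.img

# Conversely, take the new machine type but keep the older CPU model:
qemu-system-x86_64 -machine pc-i440fx-2.4 -cpu Haswell disk.img
```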