Rusty Russell wrote:
> Grrr.... Andi refused to take my "rdmsr64" patch which moved to a
> function-like interface for MSRs, dismissing it as pointless churn.
>
> paravirt_ops cleanups changed a macro to an inline and spotted this
> kvm bug.
>
> Signed-off-by: Rusty Russell <[EMAIL PROTECTED]>
>
> diff -r 47c6ee74a5c5 drivers/kvm/vmx.c
> --- a/drivers/kvm/vmx.c	Thu Mar 22 12:57:44 2007 +1100
> +++ b/drivers/kvm/vmx.c	Thu Mar 22 13:38:24 2007 +1100
> @@ -1127,7 +1127,7 @@ static int vmx_vcpu_setup(struct kvm_vcp
>  		u64 data;
>  		int j = vcpu->nmsrs;
>
> -		if (rdmsr_safe(index, &data_low, &data_high) < 0)
> +		if (rdmsr_safe(index, data_low, data_high) < 0)
>  			continue;
>  		if (wrmsr_safe(index, data_low, data_high) < 0)
>  			continue;
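Just to make the disagreement concrete: the whole question is which
calling convention rdmsr_safe is supposed to have.  Here is a throwaway
userspace sketch of the two flavours (builds with gcc, since it uses
statement expressions; the names rdmsr_safe_ptr / rdmsr_safe_lval are
invented and the MSR read is faked, so this is not code from either
tree):

#include <stdio.h>
#include <stdint.h>

/* Pointer-taking flavour: the macro dereferences its arguments,
 * so callers write rdmsr_safe(msr, &low, &high). */
#define rdmsr_safe_ptr(msr, pl, ph) \
	({ (void)(msr); *(pl) = 0x11u; *(ph) = 0x22u; 0; })

/* Lvalue-taking flavour: the macro assigns to its arguments directly,
 * so callers write rdmsr_safe(msr, low, high). */
#define rdmsr_safe_lval(msr, l, h) \
	({ (void)(msr); (l) = 0x11u; (h) = 0x22u; 0; })

int main(void)
{
	uint32_t lo = 0, hi = 0;

	/* Pointer style: caller hands over the addresses of its locals. */
	if (rdmsr_safe_ptr(0xc0000080u, &lo, &hi) == 0)
		printf("pointer style: lo=%#x hi=%#x\n",
		       (unsigned)lo, (unsigned)hi);

	/* Lvalue style: caller passes the locals themselves. */
	if (rdmsr_safe_lval(0xc0000080u, lo, hi) == 0)
		printf("lvalue style:  lo=%#x hi=%#x\n",
		       (unsigned)lo, (unsigned)hi);

	return 0;
}

The pointer style at least makes it obvious at the call site that the
arguments are outputs; either way, the call site and the header have to
agree on which flavour is in use, and the call site is exactly what the
hunk above changes.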
My rdmsr_safe (x86_64, i386 is similar/same) is

#define rdmsr_safe(msr,a,b) \
	({ int ret__;						\
	   asm volatile ("1: rdmsr\n"				\
			 "2:\n"					\
			 ".section .fixup,\"ax\"\n"		\
			 "3: movl %4,%0\n"			\
			 " jmp 2b\n"				\
			 ".previous\n"				\
			 ".section __ex_table,\"a\"\n"		\
			 " .align 8\n"				\
			 " .quad 1b,3b\n"			\
			 ".previous":"=&bDS" (ret__), "=a"(*(a)), "=d"(*(b))\
			 :"c"(msr), "i"(-EIO), "0"(0));		\
	   ret__; })

Which seems quite happy to accept pointers to the values.  With that
definition, passing the plain u32s instead of their addresses would have
the macro dereferencing a non-pointer, which shouldn't even compile.
The one in asm/i386/paravirt.h has a similar calling convention.

-- 
Do not meddle in the internals of kernels, for they are subtle and
quick to panic.