Avi Kivity wrote:
> Alexander Graf wrote:
>> X86 CPUs need some magic to happen in order to enable the virtualization
>> extensions on them. This magic can have unpleasant consequences for users,
>> like blocking other VMMs from working (vmx) or using invalid TLB
>> entries (svm).
>>
>> Currently KVM activates virtualization when the respective kernel module
>> is loaded. This blocks us from autoloading KVM modules without breaking
>> other VMMs.
>>
>> To mitigate this problem at least a bit, this patch introduces on-demand
>> activation of virtualization. This means that, instead of enabling it when
>> the module is loaded, virtualization is enabled on creation of the first
>> virtual machine and disabled on removal of the last one.
>>
>> So using this, KVM can be easily autoloaded, while keeping other
>> hypervisors usable.
>>
>> v2 adds return values to the non-x86 hardware_enable implementations and
>> adds the IA64 change
[snip]
>
>> @@ -660,6 +674,8 @@ static void kvm_destroy_vm(struct kvm *kvm)
>> mmu_notifier_unregister(&kvm->mmu_notifier, kvm->mm);
>> #endif
>> kvm_arch_destroy_vm(kvm);
>> + if (atomic_dec_and_test(&kvm_usage_count))
>> + on_each_cpu(hardware_disable, NULL, 1);
>> mmdrop(mm);
>> }
>>
>
> And again. I suggest returning to spinlocks (and placing the
> duplicated disable code in a function).
OK.
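Something along these lines maybe (just a sketch, not the final patch;
hardware_enable_all(), hardware_disable_all() and kvm_count_lock are names
I'm inventing here, and propagating errors from the per-cpu enable is a
separate question, see below):

static DEFINE_SPINLOCK(kvm_count_lock);

/* first VM created: enable virtualization on all CPUs */
static void hardware_enable_all(void)
{
	spin_lock(&kvm_count_lock);
	if (atomic_inc_return(&kvm_usage_count) == 1)
		on_each_cpu(hardware_enable, NULL, 1);
	spin_unlock(&kvm_count_lock);
}

/* last VM destroyed: disable virtualization on all CPUs */
static void hardware_disable_all(void)
{
	spin_lock(&kvm_count_lock);
	if (atomic_dec_and_test(&kvm_usage_count))
		on_each_cpu(hardware_disable, NULL, 1);
	spin_unlock(&kvm_count_lock);
}

kvm_create_vm() and kvm_destroy_vm() would then just call these instead of
open-coding the count handling, so the enable and disable paths are no
longer duplicated.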
>
>>
>> -static void hardware_enable(void *junk)
>> +static void hardware_enable(void *_r)
>> {
>> int cpu = raw_smp_processor_id();
>> + int r;
>>
>> if (cpu_isset(cpu, cpus_hardware_enabled))
>> return;
>> + r = kvm_arch_hardware_enable(NULL);
>> + if (_r)
>> + *((int*)_r) = r;
>> + if (r) {
>> + printk(KERN_INFO "kvm: enabling virtualization on "
>> + "CPU%d failed\n", cpu);
>> + return;
>> + }
>> +
>> cpu_set(cpu, cpus_hardware_enabled);
>> - kvm_arch_hardware_enable(NULL);
>> }
>>
>
> We'll be in a nice fix if we can only enable virtualization on some
> processors; that's the reason hardware_enable() was originally
> specified as returning void.
>
> I don't see an easy way out, but it's hardly a likely event.
I don't think there's any way we can circumvent that.
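The best we can probably do is notice the failure and back out instead of
running with virtualization enabled on only a subset of the CPUs. A sketch
(hardware_enable_failed is an invented name; the argument goes back to being
plain junk):

static atomic_t hardware_enable_failed;

static void hardware_enable(void *junk)
{
	int cpu = raw_smp_processor_id();
	int r;

	if (cpu_isset(cpu, cpus_hardware_enabled))
		return;

	r = kvm_arch_hardware_enable(NULL);
	if (r) {
		/* count the failure instead of racily writing through _r */
		atomic_inc(&hardware_enable_failed);
		printk(KERN_INFO "kvm: enabling virtualization on "
				 "CPU%d failed\n", cpu);
		return;
	}

	cpu_set(cpu, cpus_hardware_enabled);
}

Whoever did the on_each_cpu() could then check hardware_enable_failed
afterwards and, if it is non-zero, disable everything again and fail the VM
creation (e.g. with -EBUSY) rather than limp along on some of the CPUs.
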
Something I've been meaning to ask for a while: how does suspend/resume
work? I only see a single suspend/resume hook, and it disables
virtualization on the currently running CPU only. Why don't we have to loop
over all CPUs to enable/disable them?
At least for suspend-to-disk this seems pretty necessary.
>> printk(KERN_INFO "kvm: disabling virtualization on CPU%d\n",
>> cpu);
>> - smp_call_function_single(cpu, hardware_disable, NULL, 1);
>> + if (atomic_read(&kvm_usage_count))
>> + smp_call_function_single(cpu, hardware_disable,
>> + NULL, 1);
>> break;
>> case CPU_ONLINE:
>> printk(KERN_INFO "kvm: enabling virtualization on CPU%d\n",
>> cpu);
>> - smp_call_function_single(cpu, hardware_enable, NULL, 1);
>> + if (atomic_read(&kvm_usage_count))
>> + smp_call_function_single(cpu, hardware_enable,
>> + NULL, 1);
>> break;
>>
> case CPU_UP_CANCELED:
>
> Are these called at a point where processes can't run? Otherwise
> there's a race here.
Yes:

static struct notifier_block kvm_cpu_notifier = {
	.notifier_call = kvm_cpu_hotplug,
	.priority = 20, /* must be > scheduler priority */
};
>
>> static int kvm_resume(struct sys_device *dev)
>> {
>> - hardware_enable(NULL);
>> + if (atomic_read(&kvm_usage_count))
>> + hardware_enable(NULL);
>> return 0;
>> }
>>
>
> Move the test to hardware_enable()? It's repeated too often.
What do we do about the on_each_cpu(hardware_enable) cases? We couldn't
tell when to activate/deactivate virtualization then, as that is
semantically bound to the number of VMs.
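One way to avoid repeating the test without breaking those cases might be a
conditional wrapper used only by the hotplug and resume paths, so the count
check lives in exactly one place (name invented, just thinking out loud):

static void hardware_enable_if_in_use(void *junk)
{
	/* only re-enable virtualization if at least one VM exists */
	if (atomic_read(&kvm_usage_count))
		hardware_enable(junk);
}

kvm_resume() and the CPU_ONLINE notifier case would call this, while the
first-VM transition keeps calling hardware_enable() directly via
on_each_cpu().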
Alex