On 06/16/2010 07:55 PM, Dave Hansen wrote:
On Wed, 2010-06-16 at 11:48 +0300, Avi Kivity wrote:

+static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
+{
+	kvm->arch.n_used_mmu_pages += nr;
+	kvm_total_used_mmu_pages += nr;

Needs an atomic operation, since there's no global lock here.  To avoid
bouncing this
On 06/15/2010 04:55 PM, Dave Hansen wrote:

Note: this is the real meat of the patch set.  It can be applied up
to this point, and everything will probably be improved, at least
a bit.

Of slab shrinkers, the VM code says:

 * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is
 * querying the cache size, so a fastpath for that case is appropriate.
On Wed, 2010-06-16 at 11:48 +0300, Avi Kivity wrote:
On 06/15/2010 04:55 PM, Dave Hansen wrote:

+/*
+ * This value is the sum of all of the kvm instances'
+ * kvm->arch.n_used_mmu_pages values.  We need a global,
+ * aggregate version in order to make the slab shrinker
+ * faster.
+ */