On Sunday 06 April 2008 00:53:39 Balaji Rao wrote:
On Friday 04 April 2008 01:46:21 pm Balaji Rao wrote:
Hi Rusty,
I hit a bug in virtio_ring.c:218 when I was stressing virtio_net using
kvm with -smp 4.
static void vring_disable_cb(struct virtqueue *_vq)
{
struct
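For reference, a minimal sketch of the shape of vring_disable_cb() in the 2.6.25-era drivers/virtio/virtio_ring.c, reconstructed from memory rather than from the truncated quote above; the exact body in the reporter's tree may differ:

static void vring_disable_cb(struct virtqueue *_vq)
{
    struct vring_virtqueue *vq = to_vvq(_vq);

    /* Hint to the host that we do not want used-buffer interrupts. */
    vq->vring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}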
Arjun wrote:
Hi Folks,
A fellow student and I wish to run some experiments with KVM.
Specifically, we would like to examine
KVM's guest paging/swapping mechanism, make some changes and run some
tests. After a brief search through
the docs and code, we would greatly appreciate help with
On Sunday 06 April 2008 12:56:33 pm Rusty Russell wrote:
On Sunday 06 April 2008 00:53:39 Balaji Rao wrote:
On Friday 04 April 2008 01:46:21 pm Balaji Rao wrote:
Hi Rusty,
I hit a bug in virtio_ring.c:218 when I was stressing virtio_net using
kvm with -smp 4.
static void
Marcelo Tosatti wrote:
Fixes loadvm/savevm on SMP.
Signed-off-by: Marcelo Tosatti [EMAIL PROTECTED]
Index: kvm-userspace.io/qemu/hw/apic.c
===
--- kvm-userspace.io.orig/qemu/hw/apic.c
+++ kvm-userspace.io/qemu/hw/apic.c
@@
Zhang, Xiantao wrote:
Compared with V9, just fixed indentation issues in patch 12. I put
the patchset in
git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git
kvm-ia64-mc10. Please help to review.
Especially, the first two patches (TR Management patch and
Marcelo Tosatti wrote:
In the -incoming case the apic regs are not initialized and therefore
bogus.
Signed-off-by: Marcelo Tosatti [EMAIL PROTECTED]
Index: kvm-userspace.io/qemu/qemu-kvm-x86.c
===
---
Marcelo Tosatti wrote:
Otherwise a signal can be received in userspace and a vcpu goes back
to the kernel while it should stay still.
Signed-off-by: Marcelo Tosatti [EMAIL PROTECTED]
Index: kvm-userspace.io/qemu/qemu-kvm.c
===
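The problem being fixed can be sketched in a few lines. This is only an illustration of the idea with made-up names (vcpu_loop, stopped), not the actual qemu-kvm patch: after a signal bounces the vcpu thread out of KVM_RUN, the loop has to re-check the stop flag instead of blindly re-entering the kernel.

#include <errno.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

struct vcpu {
    int fd;                 /* fd returned by KVM_CREATE_VCPU */
    volatile bool stopped;  /* set by the I/O thread around savevm/loadvm */
};

static void vcpu_loop(struct vcpu *v)
{
    while (!v->stopped) {
        int r = ioctl(v->fd, KVM_RUN, 0);
        if (r == -1 && errno == EINTR)
            continue;       /* the while condition re-checks v->stopped */
        /* ... handle the various exit reasons here ... */
    }
}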
Anthony Liguori wrote:
This patch introduces a gfn_to_pfn() function and corresponding functions like
kvm_release_pfn_dirty(). Using these new functions, we can modify the x86
MMU to no longer assume that it can always get a struct page for any given
gfn.
We don't want to eliminate
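The intent of the new interface can be sketched roughly as follows; this is a simplification written from memory of what kvm_main.c did around that time, with an illustrative error value, not the patch itself:

#include <linux/kvm_host.h>
#include <linux/mm.h>

static pfn_t gfn_to_pfn_sketch(struct kvm *kvm, gfn_t gfn)
{
    unsigned long hva = gfn_to_hva(kvm, gfn);   /* memslot lookup: gfn -> host virtual address */
    struct page *page[1];
    int npages;

    down_read(&current->mm->mmap_sem);
    npages = get_user_pages(current, current->mm, hva, 1, 1, 0, page, NULL);
    up_read(&current->mm->mmap_sem);
    if (npages != 1)
        return bad_pfn;                          /* illustrative "no struct page" value */
    return page_to_pfn(page[0]);
}

static void kvm_release_pfn_dirty_sketch(pfn_t pfn)
{
    if (pfn_valid(pfn)) {                        /* only RAM pfns have a struct page behind them */
        struct page *page = pfn_to_page(pfn);
        SetPageDirty(page);
        put_page(page);
    }
}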
Kevin O'Connor wrote:
I have been working on a port of bochs bios to gcc. This port is
nearly complete. The new code does not rely on bcc or dev86.
Instead, it uses standard gcc and gas. It should compile on any
recent Linux distribution.
I'm sending this email because I understand kvm
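For readers unfamiliar with how 16-bit BIOS code can be built with a 32-bit toolchain: the usual trick is the gas .code16gcc directive, which makes 32-bit compiler output executable in real mode by adding operand/address size prefixes. A minimal illustration of the technique (my own sketch, assuming gcc -m32 -ffreestanding; not code from Kevin's tree):

/* Put gas into 16-bit mode while still accepting gcc's 32-bit output. */
__asm__(".code16gcc");

static inline void outb(unsigned char value, unsigned short port)
{
    __asm__ __volatile__("outb %b0, %w1" : : "a"(value), "Nd"(port));
}

void bios_debug_char(char c)
{
    outb(c, 0x402);   /* 0x402 is the Bochs BIOS debug/info port */
}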
On Sun, Apr 06, 2008 at 03:05:11PM +0300, Avi Kivity wrote:
Kevin O'Connor wrote:
I have been working on a port of bochs bios to gcc.
While moving away from the horror that is bcc is a blessing, the way to
really benefit from it is to have this code replace the original bochs
bios. This
The registration site for KVM developer forum 2008 is now open
(http://kforum.qumranet.com/KVMForum/register_now.php )
The participation fee is US$ 695 for early bird subscribers up to May
1st, 2008. After May 1st 2008, the participation fee is US$ 790.
Developers whose registration fee is not
I want to pamper myself today, but I do not want to burn a hole in my pocket,
so I tried this. http://www.fleckoatin.com/
On 06/04/2008, Anthony Liguori [EMAIL PROTECTED] wrote:
Blue Swirl wrote:
To support Sparc IOMMU and DMA controller
I need a way to call a series of different translation functions
depending on the bus where we are. For the byte swapping case the
memcpy functions must be dynamic as well.
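One way to express the requirement is a per-bus set of DMA callbacks, so that each bus along the path can supply its own address translation and its own copy routine (which is where the byte swapping would live). This is an illustrative sketch with made-up names, not an existing qemu interface:

#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

struct dma_ops {
    dma_addr_t (*translate)(void *opaque, dma_addr_t addr);
    void       (*copy_to_guest)(void *opaque, dma_addr_t addr,
                                const void *buf, size_t len);
};

struct dma_bus {
    struct dma_bus *parent;    /* next bus towards system memory, NULL at the top */
    const struct dma_ops *ops;
    void *opaque;
};

static void dma_write(struct dma_bus *bus, dma_addr_t addr,
                      const void *buf, size_t len)
{
    /* Let every bus between the device and system memory translate the
       address (IOMMU, DMA controller, ...). */
    dma_addr_t phys = addr;
    for (struct dma_bus *b = bus; b != NULL; b = b->parent)
        phys = b->ops->translate(b->opaque, phys);

    /* The device's own bus decides how the bytes are copied, e.g. with swapping. */
    bus->ops->copy_to_guest(bus->opaque, phys, buf, len);
}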
andrzej zaborowski wrote:
On 06/04/2008, Anthony Liguori [EMAIL PROTECTED] wrote:
Blue Swirl wrote:
To support Sparc IOMMU and DMA controller
I need a way to call a series of different translation functions
depending on the bus where we are. For the byte swapping case the
memcpy
Hi Dennis,
thanks a lot for your reply, the problem really was that I didn't know
that the virtio_pci module is also needed.
I'm not using LVM in guests, but it's good to know a recent version would be
needed for such a case.
BR
nik.
On Sun, 6 Apr 2008, Dennis Jacobfeuerborn wrote:
Nikola Ciprich
The big item (in more ways than one) for this release is the addition of
s390 support. As it is not actually provided in the tarball, you will
need to use git to fetch it. You will also need a mainframe.
On x86, the most interesting change is the separation of timer and I/O
completion
Hi,
I spent some time trying to tune the performance of a KVM guest using kernel
compilation as a kind of benchmark (I'm using virtual machines for
compiling a lot, so it's a good benchmark for me in general)
Host machine: 2x quad core XEON E5420 @ 2.50GHz, 4GB RAM, 2.6.24 + kvm-64
guest
On Sun, 2008-04-06 at 21:56 +0200, Nikola Ciprich wrote:
Hi,
I spent some time trying to tune the performance of a KVM guest using kernel
compilation as a kind of benchmark (I'm using virtual machines for
compiling a lot, so it's a good benchmark for me in general)
Host machine: 2x quad core
Nikola Ciprich wrote:
Hi,
I spent some time trying to tune the performance of a KVM guest using kernel
compilation as a kind of benchmark (I'm using virtual machines for
compiling a lot, so it's a good benchmark for me in general)
Host machine: 2x quad core XEON E5420 @ 2.50GHz, 4GB RAM, 2.6.24 +
Hi Anthony!
Anthony Liguori wrote:
I would think you should get about 70% of native with what you've done
above. I've not seen instabilities with CONFIG_KVM_CLOCK myself.
Setting up a hugetlbfs mount and using -mem-path may give you a bit of
a bump too but I'd be surprised if it was more
Nikola Ciprich wrote:
Hi Anthony!
Anthony Liguori wrote:
I would think you should get about 70% of native with what you've done
above. I've not seen instabilities with CONFIG_KVM_CLOCK myself.
Setting up a hugetlbfs mount and using -mem-path may give you a bit of
a bump too but I'd
Anthony Liguori wrote:
You won't see a gain with tmpfs. Make sure you reserve huge pages
first. For a 1GB guest, you'll need something like:
echo 540 > /proc/sys/vm/nr_hugepages
When you create a VM, you need a bit more memory than 1GB for
per-guest overhead. That's why I reserve 540
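The reason -mem-path helps is that qemu can then back guest RAM with a file created on the hugetlbfs mount instead of anonymous memory, so the guest's pages come out of the reserved huge-page pool. A rough sketch of that allocation pattern (illustrative names only, not qemu's actual code; size is assumed to be a multiple of the huge page size):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* mem_path would be something like /hugepages, mounted beforehand with
   "mount -t hugetlbfs none /hugepages". */
static void *alloc_guest_ram(const char *mem_path, size_t size)
{
    char path[256];
    void *ram;
    int fd;

    snprintf(path, sizeof(path), "%s/kvm-guest-ram.XXXXXX", mem_path);
    fd = mkstemp(path);
    if (fd < 0)
        return NULL;
    unlink(path);                 /* keep the mapping alive, drop the name */

    if (ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }
    ram = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return ram == MAP_FAILED ? NULL : ram;
}

For a 1GB guest with 2MB huge pages this is also why a bit more than 512 pages (the 540 above) gets reserved up front, to cover the per-guest overhead mentioned in the quote.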
Nikola Ciprich wrote:
Anthony Liguori wrote:
You won't see a gain with tmpfs. Make sure you reserve huge pages
first. For a 1GB guest, you'll need something like:
echo 540 > /proc/sys/vm/nr_hugepages
When you create a VM, you need a bit more memory than 1GB for
per-guest overhead.
Complements 64173d009c1f4d163c425b14aa650df5b982428a to avoid:
kvm-65/qemu/hw/apic.c: In function `apic_mem_readl':
kvm-65/qemu/hw/apic.c:592: warning: 'val' might be used uninitialized in this
function
Signed-off-by: Carlo Marcelo Arenas Belon [EMAIL PROTECTED]
---
qemu/hw/apic.c | 1 +
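The warning comes from the common switch-without-default shape of an MMIO read helper. A generic sketch of the pattern and the fix (illustrative only, not the actual one-line patch):

#include <stdint.h>

static uint32_t mmio_readl_sketch(uint32_t addr)
{
    uint32_t val = 0;            /* initialize so every path returns a defined value */

    switch ((addr >> 4) & 0xff) {
    case 0x02:
        val = 0x12345678;        /* placeholder register contents */
        break;
    /* ... other registers ... */
    }
    return val;                  /* without the initialization gcc warns here */
}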
Avi,
Thanks for your response.
Regarding the query on Guest Swapping, I'm referring to the mechanism that
the KVM host can use to swap out a guest's pages. Since a guest OS will have
its own swapping mechanism, how will the host ensure that if it chooses
to swap out a guest's page, it will
As a note, the DMA controllers in the ARM system-on-chips can
byte-swap, do 90-degree rotation of 2D arrays and transparency (probably
intended for image blitting, but still available on any kind of
transfers), etc., and most importantly issue interrupts on reaching
different points of a transfer.
Foreign workers: specifics of labour relations and taxation.
New developments in the legislation
The seminar will take place on 10 April 2008, Moscow
Programme:
1. Status of a foreign citizen in the Russian Federation. Residents and non-residents. Registration at
the place of residence.
2. Participation of foreigners in labour
Nikola Ciprich wrote:
Hi,
I spent some time trying to tune the performance of a KVM guest using kernel
compilation as a kind of benchmark (I'm using virtual machines for
compiling a lot, so it's a good benchmark for me in general)
Host machine: 2x quad core XEON E5420 @ 2.50GHz, 4GB RAM, 2.6.24 +
On Sat, 5 Apr 2008, Andrea Arcangeli wrote:
In short, when working with single pages it's a waste to block the
secondary-mmu page fault, because it's zero cost to invalidate_page
before put_page. Not even GRU needs to do that.
That depends on what the notifier is being used for. Some
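The ordering being described can be sketched like this (purely illustrative, with a hypothetical hook name; not the mmu-notifier or EMM patches themselves):

#include <linux/mm.h>

/* Hypothetical hook into whatever secondary-MMU notifier scheme is in use. */
void secondary_mmu_invalidate_page(struct mm_struct *mm, unsigned long address);

static void unmap_single_page(struct mm_struct *mm, unsigned long address,
                              struct page *page)
{
    /* ... the primary (CPU) pte has already been cleared and flushed ... */

    /* Invalidate the secondary mapping before the final reference drop:
       by the time the page could be freed it is already unreachable from
       the secondary MMU, so its page fault path never has to block. */
    secondary_mmu_invalidate_page(mm, address);
    put_page(page);
}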
On Sat, 5 Apr 2008, Andrea Arcangeli wrote:
+ rcu_assign_pointer(mm->emm_notifier, e);
+ mm_unlock(mm);
My mm_lock solution makes all rcu serialization an unnecessary
overhead, so you should remove it like I already did in #v11. If that
weren't the case, then mm_lock wouldn't be a
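The RCU point can be made concrete with a small sketch (hypothetical names, not the actual #v11 code): if registration takes the same mm-wide lock that every walk of the notifier list takes, readers and writers can never overlap, so plain list operations suffice and rcu_assign_pointer() buys nothing.

#include <linux/list.h>
#include <linux/mm_types.h>

/* Hypothetical notifier object; my_notifier_list is likewise a hypothetical
   field assumed to exist in struct mm_struct for this illustration. */
struct my_notifier {
    struct list_head list;
    /* ... callbacks ... */
};

static void my_notifier_register(struct mm_struct *mm, struct my_notifier *n)
{
    mm_lock(mm);                               /* the mm-wide lock from the quoted series */
    list_add(&n->list, &mm->my_notifier_list); /* plain list op, no RCU publish needed */
    mm_unlock(mm);
}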