Avi Kivity wrote:
Okay, I committed the patch without the flood count == 5.
I've continued testing the RHEL3 guests with the flood count at 3, and I
am right back to where I started. With the patch and the flood count at
3, I had 2 runs totaling around 24 hours that looked really good. Now,
Avi Kivity wrote:
Not so fast... the patch updates the flood count to 5. Can you check
if a lower value still works? Also, whether updating the flood count to
5 (without the rest of the patch) works?
Unconditionally bumping the flood count to 5 will likely cause a performance regression.
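For readers without the patch handy: the "flood count" is the threshold in KVM's write-flood heuristic. If the same guest page-table page keeps taking emulated writes while the last-updated pte is never seen as accessed, KVM assumes the page has been recycled as plain data and unshadows it. Below is a toy, standalone model of that idea; the names and structure are invented here (the real logic lives around kvm_mmu_pte_write() in mmu.c), so treat it only as a sketch.

/*
 * Toy model of the write-flood heuristic, NOT the real mmu.c code.
 * FLOOD_THRESHOLD is the "flood count" being tuned between 3 and 5.
 */
#include <stdbool.h>
#include <stdio.h>

#define FLOOD_THRESHOLD 3   /* value David is testing; the patch bumps it to 5 */

struct flood_state {
    unsigned long last_gfn;   /* guest frame that took the last pte write */
    int write_count;          /* consecutive writes without an access seen */
};

/* true when the writes look like a flood and the shadow page should be
 * dropped (unshadowed) rather than emulated write after write */
static bool pte_write_is_flood(struct flood_state *s, unsigned long gfn,
                               bool last_pte_accessed)
{
    if (gfn == s->last_gfn && !last_pte_accessed) {
        if (++s->write_count >= FLOOD_THRESHOLD)
            return true;
    } else {
        s->last_gfn = gfn;
        s->write_count = 1;
    }
    return false;
}

int main(void)
{
    struct flood_state s = { 0 };

    /* if last_pte_accessed is always false (see later in the thread),
     * repeated writes to one gfn trip the flood detector quickly */
    for (int i = 0; i < 6; i++)
        printf("write %d to gfn 0x1000 -> flood=%d\n", i + 1,
               pte_write_is_flood(&s, 0x1000, false));
    return 0;
}

Roughly, a lower threshold unshadows kscand's page tables sooner (hurting the RHEL3 guest), while unconditionally raising it means more trapped, emulated writes before a genuinely recycled page gets unshadowed, which is the regression risk mentioned above.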
That does the trick with kscand.
Do you have recommendations for clock source settings? For example, in my
test case for this patch the guest gained 73 seconds (ahead of real time)
after only 3 hours, 5 minutes of uptime.
thanks,
david
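For scale, that drift works out to roughly 6,600 ppm, i.e. the guest gains about 24 seconds per hour. A trivial standalone check of the arithmetic (numbers taken from the message above):

#include <stdio.h>

int main(void)
{
    double gained = 73.0;                   /* seconds ahead of real time */
    double uptime = 3 * 3600 + 5 * 60;      /* 3 h 5 min = 11100 s */

    printf("drift: %.0f ppm, %.1f s gained per hour\n",
           gained / uptime * 1e6, gained / uptime * 3600);
    return 0;
}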
Avi Kivity wrote:
David S. Ahern wrote:
Another tidbit for you guys as I make my way through various
permutations:
I installed the RHEL3 hugemem kernel and the guest behavior is *much*
better.
System time still has some regular hiccups that are higher than xen
and esx
(e.g., 1 minute
to capture some kind of workload that exhibits the
problem. It will be a couple of days.
david
Daniel P. Berrange wrote:
On Wed, Apr 30, 2008 at 07:39:53AM -0600, David S. Ahern wrote:
Alternatively, during the hang on a restart I can kill the guest, and then on
restart choose the normal 32-bit smp kernel and the guest boots just fine. At
this point I can shut down the guest and restart with the hugemem kernel, and it
boots just fine.
david
David S. Ahern wrote:
Hi Marcelo
David S. Ahern wrote:
Avi Kivity wrote:
David S. Ahern wrote:
I added the traces and captured data over another apparent lockup of
the guest.
This seems to be representative of the sequence (pid/vcpu removed).
(+4776) VMEXIT [ exitcode = 0x, rip = 0xc016127c ]
) is what the 0xfffb63b0 corresponds to in the guest. Any
ideas?
Also, the expensive page fault occurs on errorcode = 0x000b (PAGE_FAULT
trace data). What does the 4th bit in 0xb mean? bit 0 set means
PFERR_PRESENT_MASK is set, and bit 1 means PT_WRITABLE_MASK. What is bit 3?
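Architecturally (and in the PFERR_* masks KVM mirrors), the page-fault error code bits are: bit 0 present, bit 1 write, bit 2 user/supervisor, bit 3 reserved-bit violation, bit 4 instruction fetch. Read literally, 0x000b is a write to a present page with the reserved-bit flag also set. A small standalone decoder for the values showing up in these traces (the macro names below are local to the example):

#include <stdio.h>

#define PF_PRESENT (1u << 0)   /* page was present */
#define PF_WRITE   (1u << 1)   /* access was a write */
#define PF_USER    (1u << 2)   /* access came from user mode */
#define PF_RSVD    (1u << 3)   /* reserved bit set in a paging-structure entry */
#define PF_FETCH   (1u << 4)   /* instruction fetch */

static void decode(unsigned int ec)
{
    printf("0x%04x:%s%s%s%s%s\n", ec,
           ec & PF_PRESENT ? " present"  : " not-present",
           ec & PF_WRITE   ? " write"    : " read",
           ec & PF_USER    ? " user"     : " supervisor",
           ec & PF_RSVD    ? " rsvd-bit" : "",
           ec & PF_FETCH   ? " fetch"    : "");
}

int main(void)
{
    decode(0x000b);   /* the expensive fault discussed above */
    decode(0x0003);   /* present + write */
    decode(0x0009);   /* present + rsvd-bit */
    return 0;
}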
david
David S. Ahern wrote:
Avi Kivity wrote:
Ah! The flood detector is not seeing the access through the
kmap_atomic() pte, because that access has gone through the emulator.
last_updated_pte_accessed(vcpu) will never return true.
Can you verify that last_updated_pte_accessed(vcpu) indeed always
returns false?
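To make the question concrete: as described above, last_updated_pte_accessed() comes down to testing the hardware Accessed flag (bit 5) on the pte/spte recorded at the last pte write. If the kscand writes always arrive through the kmap_atomic() mapping, and hence through the emulator, that flag is never observed set and the flood counter is never rescued. A standalone illustration of the bit test (this is not the mmu.c code, only the architectural bit):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_ACCESSED_MASK (1ull << 5)   /* architectural Accessed bit in a pte */

static bool pte_accessed(uint64_t pte)
{
    return (pte & PT_ACCESSED_MASK) != 0;
}

int main(void)
{
    uint64_t pte_young = 0x0000000001234067ull;  /* P|RW|US|A|D set */
    uint64_t pte_clean = 0x0000000001234007ull;  /* P|RW|US set, A clear */

    printf("accessed(young) = %d\n", pte_accessed(pte_young));
    printf("accessed(clean) = %d\n", pte_accessed(pte_clean));
    return 0;
}

Verifying the hypothesis would then just mean counting true vs. false returns at the call site and confirming the true count stays at zero during a kscand run.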
() were both run 214,270 times,
most of them relatively quickly.
Note: I bumped the scheduling priority of the qemu threads to RR 1 so that few
host processes could interrupt them.
david
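On the "RR 1" note: that is the SCHED_RR real-time class at priority 1, which chrt -r -p 1 <pid> sets from the shell. A minimal standalone equivalent (needs root or CAP_SYS_NICE; pass the qemu thread/process id you want to protect):

#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    struct sched_param sp = { .sched_priority = 1 };   /* "RR 1" */
    pid_t pid;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid = (pid_t)atoi(argv[1]);

    /* move the target into the SCHED_RR class so ordinary (SCHED_OTHER)
     * host processes cannot preempt it */
    if (sched_setscheduler(pid, SCHED_RR, &sp) < 0) {
        perror("sched_setscheduler");
        return 1;
    }
    return 0;
}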
Avi Kivity wrote:
David S. Ahern wrote:
I added the traces and captured data over another apparent lockup of the guest.
This seems to be representative of the sequence (pid/vcpu removed).
(+4776) VMEXIT [ exitcode = 0x, rip = 0xc016127c ]
(+ 0) PAGE_FAULT [ errorcode = 0x0003, virt = 0x
DOH. I had the 2 new ones backwards in the formats file.
thanks for pointing that out,
david
Liu, Eric E wrote:
I mean the value of PTE_WRITE that you write in the formats file (0x00020016)
should be the same as the KVM_TRC_PTE_WRITE you define in kvm.h, but right now
it is 0x00020015. If not, what you
I am trying to add a trace marker and the data is coming out all 0's. e.g.,
0 (+ 0) PTE_WRITE vcpu = 0x0001 pid = 0x240d [ gpa = 0x gpte = 0x ]
Patch is attached. I know the data is non-zero as I added an if check before
calling the trace to
inline.
vcpu = 0x pid = 0x11ea [ errorcode = 0x0009, virt = 0xfffb6d30 ]
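Spelling out the mismatch Eric is pointing at: each kvmtrace record carries a numeric event ID, and the userspace formats file maps that ID to a print template; if the ID listed in the formats file differs from the KVM_TRC_* value the kernel actually logs, the parser applies the wrong (or no) template and the fields come out as zeros. A trivial standalone illustration (the two hex values are the ones quoted above; everything else is invented for the example):

#include <stdio.h>

#define KVM_TRC_PTE_WRITE_IN_KERNEL  0x00020015u  /* what kvm.h defines */
#define PTE_WRITE_IN_FORMATS_FILE    0x00020016u  /* what the formats file had */

int main(void)
{
    if (KVM_TRC_PTE_WRITE_IN_KERNEL != PTE_WRITE_IN_FORMATS_FILE)
        printf("mismatch: kernel logs 0x%08x but formats file decodes 0x%08x\n",
               KVM_TRC_PTE_WRITE_IN_KERNEL, PTE_WRITE_IN_FORMATS_FILE);
    return 0;
}

That also matches David's "I had the 2 new ones backwards" above: the two new event IDs were swapped between kvm.h and the formats file.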
Avi Kivity wrote:
David S. Ahern wrote:
I have been looking at RHEL3 based guests lately, and to say the least the
performance is horrible. Rather than write a long tome on what I've done and
observed, I'd like to find out if anyone has some insights or known problem
areas running 2.4 guests. The short of it is that % system time