Hi all,
We currently adopt a static RID partitioning solution to virtualize RIDs.
Why do we need to virtualize RIDs?
Our working assumption is that a purge-all is very expensive.
If we don't virtualize RIDs, we need a purge-all whenever a VCPU switch happens.
We ran the following test to see how many
On Thu, May 24, 2007 at 02:02:32PM +0800, Xu, Anthony wrote:
We currently adopt a static RID partitioning solution to virtualize RIDs.
Why do we need to virtualize RIDs?
Our working assumption is that a purge-all is very expensive.
If we don't virtualize RIDs, we need a purge-all whenever a VCPU
On Thu, May 24, 2007 at 11:42:56AM +0900, Isaku Yamahata wrote:
elilo tries the physical address given by its ELF header first,
then tries to relocate the image only when the administrator explicitly
allows elilo to relocate, with a warning message.
I only took a look at the elilo code, so I don't
implemented XENMEM_maximum_gpfn for domain save/restore with domain memmap.
--
yamahata
# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1179733667 -32400
# Node ID c4006953d0d12b2d06347c2f1c6f34ff49e0651f
# Parent 03e04b861d91b78fbbf392b118dabfbbd61fe75f
implemented XENMEM_maximum_gpfn for domain save/restore with domain memmap.
--
yamahata
# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1179994495 -32400
# Node ID 1f567858b31f3694658def5bbae7c0289682b7f7
# Parent c4006953d0d12b2d06347c2f1c6f34ff49e0651f
support save/restore with domain memmap.
PATCHNAME:
Isaku Yamahata wrote:
On Thu, May 24, 2007 at 11:42:56AM +0900, Isaku Yamahata wrote:
elilo tries the physical address given by its ELF header first,
then tries to relocate the image only when the administrator explicitly
allows elilo to relocate, with a warning message.
I only took a look at
On Thu, May 24, 2007 at 04:25:47PM +0800, Xu, Anthony wrote:
From: Isaku Yamahata
Sent: May 24, 2007 14:49
Could you explain your test in detail?
I suppose KB = Kernel Bench = measuring kernel compile time, right?
How are the domains created? There are four cases with/without the patches.
Did you run
From: Isaku Yamahata
Sent: May 24, 2007 17:00
To: Xu, Anthony
Cc: Xen-ia64-devel
Subject: Re: [Xen-ia64-devel]RID virtualization discussion
We have tested the following cases.
There are 6 physical processors,
and local_purge_all is executed about 2,000 times per second on each processor.
Dom0(1vcpu) +
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
The problem is that no matter whether I build a PAE or a non-PAE kernel,
the result is the same: after decompressing the kernel, the screen goes black,
the cursor flashes at the bottom left of the screen, and it hangs.
Radek Antoniuk wrote:
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
the problem is that no matter if I do a PAE kernel or non-PAE, the
result is after decompressing kernel, the screen goes black, the cursor
flashes on the screen
Jürgen Groß wrote:
Radek Antoniuk wrote:
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
the problem is that no matter if I do a PAE kernel or non-PAE, the
result is after decompressing kernel, the screen goes black, the cursor
On Thu, May 24, 2007 at 10:34:22AM +0200, Jes Sorensen wrote:
I.e., on SN2 we don't have anything below 0x30, and we also encode
the node ID in the address, so on multi-node systems the addresses go up.
Ok.
dom0 needs the metaphysical addresses or nothing can work, as we use the node
Akio Takebe wrote:
Hi, Jes
Just a single node, 2 sockets
Xen on ia64 creates one vcpu for dom0 by default.
If you want to create more vcpus,
add dom0_max_vcpus=N to the boot parameters of xen.
I see the below in your log. Xen got the information about your node.
(XEN) Brought up 2 CPUs
Whoops,
Isaku Yamahata wrote:
On Thu, May 24, 2007 at 10:34:22AM +0200, Jes Sorensen wrote:
dom0 needs the metaphysical addresses or nothing can work, as we use the
node info for talking to the PCI controllers, memory controllers (IPIs
and TLB flushes), etc.
I don't understand why 'must match' here.
On Wed, May 23, 2007 at 01:36:45PM +0900, Akio Takebe wrote:
Hi, Horms
After kexec, what kernel do you use?
Xen or linux kernel?
I think either you don't call iosapic_register_intr(),
or the arguments of iosapic_register_intr() are wrong.
Can you check that efi.hcdp is passed to the second kernel
On Thu, May 24, 2007 at 11:38:19AM +0200, Jes Sorensen wrote:
Isaku Yamahata wrote:
On Thu, May 24, 2007 at 10:34:22AM +0200, Jes Sorensen wrote:
dom0 needs the metaphysical addresses or nothing can work as we use the
node info for talking to the PCI controllers, memory controllers (IPIs
On Thu, May 24, 2007 at 12:28:46PM +0200, Jes Sorensen wrote:
Isaku Yamahata wrote:
On Thu, May 24, 2007 at 11:38:19AM +0200, Jes Sorensen wrote:
P=M will break the current grant table API. It means all of the virtual
I/O devices (balloon, vbd, vnif, ...) will be broken.
What do you think about
On Thursday, 24 May 2007, Radek Antoniuk wrote:
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
the problem is that no matter if I do a PAE kernel or non-PAE, the
result is after decompressing kernel, the screen goes black, the
Dietmar Hahn wrote:
On Thursday, 24 May 2007, Radek Antoniuk wrote:
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
the problem is that no matter if I do a PAE kernel or non-PAE, the
result is after decompressing kernel, the
On Thursday, 24 May 2007, Radek Antoniuk wrote:
Dietmar Hahn wrote:
On Thursday, 24 May 2007, Radek Antoniuk wrote:
Hi!
I've got xen from http://xenbits.xensource.com/ext/xen-ia64-unstable.hg,
compiled everything and..
the problem is that no matter if I do a PAE kernel
I'm not sure about Debian; I only use SLES (SuSE)!
The standard SuSE distribution doesn't install the extended elilo.efi binary;
I had to compile and install it myself.
The other thing is the right parameters to set up the serial console, where
you can see all the Xen (hypervisor) messages.
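For anyone setting this up, a typical entry in elilo.conf for the extended elilo looks roughly like the following. This is a sketch only: the file names, root device, and baud rate are assumptions and must match your system.

```
# Hypothetical elilo.conf entry booting Xen with a serial console.
# "com1=..." and the first "console=" are hypervisor options; everything
# after "--" is passed on to the dom0 kernel.
image=vmlinuz-xen
        label=xen
        vmm=xen.gz
        initrd=initrd-xen
        append="com1=115200,8n1 console=com1 -- console=ttyS0 root=/dev/sda2"
```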
Isaku Yamahata wrote:
On Thu, May 24, 2007 at 12:28:46PM +0200, Jes Sorensen wrote:
I don't agree here.
I think fixing it with P=M requires much bigger effort than
paravirtualizing the files under the sn/ directory.
Looking around the sn/ directory, I found that the address conversion
functions are
>>>>> "Isaku" == Isaku Yamahata <[EMAIL PROTECTED]> writes:
Isaku> 9 / 14
Isaku,
This is really awful :-( The Xen/ia64 codebase has already had the
benefit of being formatted sanely; please don't migrate sane code over
to the broken style just because it's being moved around.
Best regards,
Jes
Hi,
Guess a patch speaks a million words :)
This one Lindents the dom_fw_foo files to match the more reasonable
formatting they had prior to being split up.
It reduces the overall file size by about 15% and makes it a lot easier
to apply patches that were generated against dom_fw.c before the
We discussed this a little bit at Xen Summit, but we didn't leave
with a plan to move forward. Jes is now to the point where he's got
Altix booting to some extent and we need to be in agreement on what NUMA
support in Xen/ia64 is going to look like.
First, there are a couple ways that
On Wed, 2007-05-23 at 08:46 +0200, Jes Sorensen wrote:
Alex Williamson wrote:
On Tue, 2007-05-22 at 15:09 +0200, Jes Sorensen wrote:
Hi,
We need some more SAL calls emulated on SN2.
Are we getting to a point where we should expand the machine
vectors when running on Xen to
On Wed, 2007-05-23 at 20:08 +0900, Akio Takebe wrote:
Hi,
This patch cleans up the following warning.
(XEN) mm.c:497:d0 Warning: UC to WB for mpaddr=
Hi Akio,
Is this cleanup ok?
Signed-off-by: Akio Takebe [EMAIL PROTECTED]
Signed-off-by: Alex Williamson [EMAIL PROTECTED]
---
On Thu, 2007-05-24 at 17:46 +0200, Jes Sorensen wrote:
Hi,
Guess a patch speaks a million words :)
This one Lindents the dom_fw_foo files to match the more reasonable
formatting they had prior to being split up.
It reduces the overall file size by about 15% and makes it a lot easier
to
On Wed, 2007-05-23 at 11:08 +0900, Kouya SHIMURA wrote:
Hi,
With this patch:
* Xen correctly emulates ld.s for HVM.
* The original memory attribute is preserved in vcpu->arch.vtlb.
Without this, Xen infrequently calls panic_domain() by mistake for Windows.
Applied. Thanks,
Alex
--
On Wed, 2007-05-23 at 10:58 +0200, Jes Sorensen wrote:
Hi,
Small patch to not call scrub_heap_pages() when running on Medusa.
We can't use the running_on_sim flag for this, as that flag impacts too
many other things and makes it fail for us.
Applied. Thanks,
Alex
--
Alex
On Wed, 2007-05-23 at 20:54 +0900, Isaku Yamahata wrote:
Fix an assertion in vmx_init_env().
With debug=y (i.e., without NDEBUG), xen/ia64 panics.
Applied. Thanks,
Alex
--
Alex Williamson HP Open Source Linux Org.
On Thu, 2007-05-24 at 17:18 +0900, Isaku Yamahata wrote:
implemented XENMEM_maximum_gpfn for domain save/restore with domain memmap.
Applied both of these. Thanks,
Alex
--
Alex Williamson HP Open Source Linux Org.
On Thu, May 24, 2007 at 04:20:07PM -0600, Alex Williamson wrote:
this one that converts to Linux style. Are there other opinions on this
before we set a precedent? Thanks,
I don't insist on any particular style.
If Linux style under the ia64 directory is the consensus,
I will follow it.
If the
Hi all:
I am pleased to say that NVRAM support has been checked into the Xen staging tree
(changeset 15132:17f6163ae930c007d13abc1e3dbc06a624fb5a21).
Thanks for all your efforts on pushing Cambridge to pay attention to
it. We will release the corresponding GFW soon. Thx ☺
Good good study, day day up
Alex started a new thread with the subject "How to support NUMA?".
Let's move the discussion there.
The patch itself looks OK to me, except for the comment.
Probably posting your local patches would help discussions greatly.
Some of them might be easily merged, others might need discussion.
On Thu, May 24,
Hi, Alex
Is this cleanup ok?
It's OK, thanks.
Best Regards,
Akio Takebe
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
Hi, Wing
Hi all:
I am pleased to say that NVRAM support has been checked into the Xen staging tree
(changeset 15132:17f6163ae930c007d13abc1e3dbc06a624fb5a21).
Thanks for all your efforts on pushing Cambridge to pay attention to it.
We will release the corresponding GFW soon. Thx ☺
Good news! Thank