Re: [PATCH] fix dom0 builder TAKE 2(was Re: [Xen-ia64-devel] [PATCH] fix dom0 builder)

2006-10-16 Thread Isaku Yamahata
Hi Alex.

If CONFIG_FLATMEM=y is enabled (the xenLinux default), it seems
plausible that the total is about 5GB.
Does disabling CONFIG_FLATMEM and enabling CONFIG_DISCONTIGMEM=y
(or CONFIG_SPARSEMEM=y) instead make a difference?
I also noticed that the xenLinux default config sets
CONFIG_VIRTUAL_MEM_MAP=n.
Since dom0 may see the underlying machine memory layout with this patch,
should we revise the default Linux config?
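
For illustration, the suggested settings would look roughly like this in the dom0 kernel .config (a sketch only; actual option names and dependencies vary by kernel version):

```
# CONFIG_FLATMEM is not set
CONFIG_DISCONTIGMEM=y
# or alternatively: CONFIG_SPARSEMEM=y
CONFIG_VIRTUAL_MEM_MAP=y
```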

Thanks.

On Sun, Oct 15, 2006 at 10:35:42PM -0600, Alex Williamson wrote:
 On Mon, 2006-10-16 at 12:34 +0900, Isaku Yamahata wrote:
  Looks good and much better than mine.
 
Thanks. Unfortunately it doesn't behave very well.  A zx1 box has
 memory from 0-1GB, and the rest lives above 4GB.  Therefore, if I
 boot with dom0_mem=5G, I should have 2GB of memory for dom0 (0-1G,
 4-5G).  Here's what the memory map looks like:
 
 (XEN) dom mem: type=13, attr=0x8008, 
 range=[0x-0x1000) (4KB)
 (XEN) dom mem: type=10, attr=0x8008, 
 range=[0x1000-0x2000) (4KB)
 (XEN) dom mem: type= 6, attr=0x8008, 
 range=[0x2000-0x3000) (4KB)
 (XEN) dom mem: type= 7, attr=0x0008, 
 range=[0x3000-0x000a) (628KB)
 (XEN) dom mem: type=11, attr=0x0003, 
 range=[0x000a-0x000c) (128KB)
 (XEN) dom mem: type= 5, attr=0x8001, 
 range=[0x000c-0x0010) (256KB)
 (XEN) dom mem: type= 7, attr=0x0008, 
 range=[0x0010-0x0400) (63MB)
 (XEN) dom mem: type= 2, attr=0x0008, 
 range=[0x0400-0x0813a000) (65MB)
 (XEN) dom mem: type= 7, attr=0x0008, 
 range=[0x0813a000-0x3f5e4000) (884MB)
 (XEN) dom mem: type= 5, attr=0x8008, 
 range=[0x3f5e4000-0x3fac) (4MB)
 (XEN) dom mem: type= 7, attr=0x0008, 
 range=[0x3fb0-0x3fb08000) (32KB)
 (XEN) dom mem: type= 4, attr=0x0008, 
 range=[0x3fb08000-0x3fb2c000) (144KB)
 (XEN) dom mem: type= 9, attr=0x0008, 
 range=[0x3fb2c000-0x3fb38000) (48KB)
 (XEN) dom mem: type= 6, attr=0x8008, 
 range=[0x3fb38000-0x4000) (4MB)
 (XEN) dom mem: type=11, attr=0x0001, 
 range=[0x8000-0xfe00) (2016MB)
 (XEN) dom mem: type=11, attr=0x8001, 
 range=[0xfed0-0x0001) (19MB)
 (XEN) dom mem: type= 7, attr=0x0008, 
 range=[0x0001-0x00014000) (1024MB)
 (XEN) dom mem: type= 6, attr=0x8008, 
 range=[0x00027fffe000-0x00028000) (8KB)
 (XEN) dom mem: type= 5, attr=0x8008, 
 range=[0x0040ffd6a000-0x0040ffda6000) (240KB)
 (XEN) dom mem: type= 5, attr=0x8008, 
 range=[0x0040ffe12000-0x0040ffe8) (440KB)
 (XEN) dom mem: type= 6, attr=0x8008, 
 range=[0x0040fffc-0x0041) (256KB)
 (XEN) dom mem: type=11, attr=0x0001, 
 range=[0x0800-0x1000) (8388608MB)
 (XEN) dom mem: type=12, attr=0x8001, 
 range=[0x0003fc00-0x0004) (64MB)
 
 I see about 2GB of memory in there, so we're OK so far.  Dom0 boots up, and
 I see:
 
 Memory: 1948096k/2064384k available (10405k code, 115072k reserved, 4256k 
 data, 288k init)
 
 Everything is still as expected.  Then I login to the console and run
 'free':
 
              total       used       free     shared    buffers     cached
 Mem:       5227808    3372608    1855200          0       5408      31552
 -/+ buffers/cache:    3335648    1892160
 Swap:            0          0          0
 
 This is the first sign that something is really wrong.  When I tried
 'xend start', the OOM killer took over my system and never gave it back.
 Any thoughts?  Thanks,
 
   Alex
 
 -- 
 Alex Williamson HP Open Source  Linux Org.
 
 
 ___
 Xen-ia64-devel mailing list
 Xen-ia64-devel@lists.xensource.com
 http://lists.xensource.com/xen-ia64-devel

-- 
yamahata



[Xen-ia64-devel] [PATCH] tlbflush_clock

2006-10-16 Thread Isaku Yamahata

tlbflush_clock
This patch introduces a Xen compile-time option, xen_ia64_tlbflush_clock=y.

-- 
yamahata
# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1160981399 -32400
# Node ID 0b16566e129874f29708d8b5c40c8542649a7bcb
# Parent  fcd746cf4647e06b8e88e620c29610ba43e3ad7c
tlbflush_clock
This patch introduces xen compile time option, xen_ia64_tlbflush_clock=y.
PATCHNAME: tlbflush_clock

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r fcd746cf4647 -r 0b16566e1298 xen/arch/ia64/Rules.mk
--- a/xen/arch/ia64/Rules.mkSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/Rules.mkMon Oct 16 15:49:59 2006 +0900
@@ -9,6 +9,7 @@ xen_ia64_pervcpu_vhpt   ?= y
 xen_ia64_pervcpu_vhpt  ?= y
 xen_ia64_tlb_track ?= y
 xen_ia64_tlb_track_cnt ?= n
+xen_ia64_tlbflush_clock?= y
 
 ifneq ($(COMPILE_ARCH),$(TARGET_ARCH))
 CROSS_COMPILE ?= /usr/local/sp_env/v2.2.5/i686/bin/ia64-unknown-linux-
@@ -52,6 +53,9 @@ ifeq ($(xen_ia64_tlb_track_cnt),y)
 ifeq ($(xen_ia64_tlb_track_cnt),y)
 CFLAGS += -DCONFIG_TLB_TRACK_CNT
 endif
+ifeq ($(xen_ia64_tlbflush_clock),y)
+CFLAGS += -DCONFIG_XEN_IA64_TLBFLUSH_CLOCK
+endif
 ifeq ($(no_warns),y)
 CFLAGS += -Wa,--fatal-warnings -Werror -Wno-uninitialized
 endif
diff -r fcd746cf4647 -r 0b16566e1298 xen/arch/ia64/linux-xen/tlb.c
--- a/xen/arch/ia64/linux-xen/tlb.c Sat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/linux-xen/tlb.c Mon Oct 16 15:49:59 2006 +0900
@@ -111,7 +111,10 @@ local_flush_tlb_all (void)
 local_flush_tlb_all (void)
 {
unsigned long i, j, flags, count0, count1, stride0, stride1, addr;
-
+#if defined(XEN)
+   /* increment flush clock before mTLB flush */
+   u32 flush_time = tlbflush_clock_inc_and_return();
+#endif
	addr    = local_cpu_data->ptce_base;
	count0  = local_cpu_data->ptce_count[0];
	count1  = local_cpu_data->ptce_count[1];
@@ -128,6 +131,10 @@ local_flush_tlb_all (void)
}
local_irq_restore(flags);
ia64_srlz_i();  /* srlz.i implies srlz.d */
+#if defined(XEN)
+   /* update after mTLB flush. */
+   tlbflush_update_time(__get_cpu_var(tlbflush_time), flush_time);
+#endif
 }
 EXPORT_SYMBOL(local_flush_tlb_all);
 
diff -r fcd746cf4647 -r 0b16566e1298 xen/arch/ia64/xen/Makefile
--- a/xen/arch/ia64/xen/MakefileSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/MakefileMon Oct 16 15:49:59 2006 +0900
@@ -30,3 +30,4 @@ obj-y += xencomm.o
 
 obj-$(crash_debug) += gdbstub.o
 obj-$(xen_ia64_tlb_track) += tlb_track.o
+obj-$(xen_ia64_tlbflush_clock) += flushtlb.o
diff -r fcd746cf4647 -r 0b16566e1298 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.cSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/domain.cMon Oct 16 15:49:59 2006 +0900
@@ -80,35 +80,52 @@ ia64_disable_vhpt_walker(void)
	ia64_set_pta(VHPT_SIZE_LOG2 << 2);
 }
 
-static void flush_vtlb_for_context_switch(struct vcpu* vcpu)
+static void flush_vtlb_for_context_switch(struct vcpu* prev, struct vcpu* next)
 {
 	int cpu = smp_processor_id();
-	int last_vcpu_id = vcpu->domain->arch.last_vcpu[cpu].vcpu_id;
-	int last_processor = vcpu->arch.last_processor;
-
-	if (is_idle_domain(vcpu->domain))
+	int last_vcpu_id = next->domain->arch.last_vcpu[cpu].vcpu_id;
+	int last_processor = next->arch.last_processor;
+
+	if (!is_idle_domain(prev->domain))
+		tlbflush_update_time(prev->domain->arch.last_vcpu[cpu].tlbflush_timestamp,
+		                     tlbflush_current_time());
+
+	if (is_idle_domain(next->domain))
 		return;
-
-	vcpu->domain->arch.last_vcpu[cpu].vcpu_id = vcpu->vcpu_id;
-	vcpu->arch.last_processor = cpu;
-
-	if ((last_vcpu_id != vcpu->vcpu_id &&
+
+	next->domain->arch.last_vcpu[cpu].vcpu_id = next->vcpu_id;
+	next->arch.last_processor = cpu;
+	if ((last_vcpu_id != next->vcpu_id &&
 	     last_vcpu_id != INVALID_VCPU_ID) ||
-	    (last_vcpu_id == vcpu->vcpu_id &&
+	    (last_vcpu_id == next->vcpu_id &&
 	     last_processor != cpu &&
 	     last_processor != INVALID_PROCESSOR)) {
+#ifdef CONFIG_XEN_IA64_TLBFLUSH_CLOCK
+		u32 last_tlbflush_timestamp =
+			next->domain->arch.last_vcpu[cpu].tlbflush_timestamp;
+#endif
+		int vhpt_is_flushed = 0;
 
 		// if the vTLB implementation was changed,
 		// the following must be updated as well.
-		if (VMX_DOMAIN(vcpu)) {
+		if (VMX_DOMAIN(next)) {
 			// currently vTLB for vt-i domain is per vcpu,
 			// so no flushing is needed.
-		} else if (HAS_PERVCPU_VHPT(vcpu->domain)) {
+		} else if (HAS_PERVCPU_VHPT(next->domain)) {
 			// nothing to do
 		} else {
-			local_vhpt_flush();
-		}
-		local_flush_tlb_all();
+   if 

[Xen-ia64-devel] [PATCH] micro optimize __domain_flush_vtlb_track_entry

2006-10-16 Thread Isaku Yamahata

Micro-optimize __domain_flush_vtlb_track_entry:
try to use a local purge instead of a global purge when possible.

-- 
yamahata
# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1160981797 -32400
# Node ID a8882084d080692b970edb835af909cbe8aa5f96
# Parent  0b16566e129874f29708d8b5c40c8542649a7bcb
micro optimize __domain_flush_vtlb_track_entry.
try to use local purge instead of global purge when possible.
PATCHNAME: micro_optimizedomain_flush_vtlb_track_entry

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r 0b16566e1298 -r a8882084d080 xen/arch/ia64/xen/vhpt.c
--- a/xen/arch/ia64/xen/vhpt.c  Mon Oct 16 15:49:59 2006 +0900
+++ b/xen/arch/ia64/xen/vhpt.c  Mon Oct 16 15:56:37 2006 +0900
@@ -381,7 +381,8 @@ __domain_flush_vtlb_track_entry(struct d
struct vcpu* v;
int cpu;
int vcpu;
-
+   int local_purge = 1;
+   
	BUG_ON((vaddr >> VRN_SHIFT) != VRN7);
/*
 * heuristic:
@@ -414,17 +415,35 @@ __domain_flush_vtlb_track_entry(struct d
 
/* Invalidate VHPT entries.  */
vcpu_flush_vhpt_range(v, vaddr, PAGE_SIZE);
+
+		/*
+		 * current->processor == v->processor
+		 * is racy. we may see old v->processor and
+		 * a new physical processor of v might see old
+		 * vhpt entry and insert tlb.
+		 */
+		if (v != current)
+			local_purge = 0;
}
} else {
for_each_cpu_mask(cpu, entry-pcpu_dirty_mask) {
/* Invalidate VHPT entries.  */
cpu_flush_vhpt_range(cpu, vaddr, PAGE_SIZE);
+
+			if (d->vcpu[cpu] != current)
+				local_purge = 0;
}
}
-   /* ptc.ga has release semantics. */
 
/* ptc.ga  */
-   ia64_global_tlb_purge(vaddr, vaddr + PAGE_SIZE, PAGE_SHIFT);
+   if (local_purge) {
+		ia64_ptcl(vaddr, PAGE_SHIFT << 2);
+   perfc_incrc(domain_flush_vtlb_local);
+   } else {
+   /* ptc.ga has release semantics. */
+   ia64_global_tlb_purge(vaddr, vaddr + PAGE_SIZE, PAGE_SHIFT);
+   perfc_incrc(domain_flush_vtlb_global);
+   }
 
if (swap_rr0) {
vcpu_set_rr(current, 0, old_rid);
diff -r 0b16566e1298 -r a8882084d080 xen/include/asm-ia64/perfc_defn.h
--- a/xen/include/asm-ia64/perfc_defn.h Mon Oct 16 15:49:59 2006 +0900
+++ b/xen/include/asm-ia64/perfc_defn.h Mon Oct 16 15:56:37 2006 +0900
@@ -115,6 +115,8 @@ PERFCOUNTER_CPU(domain_flush_vtlb_all,  
 PERFCOUNTER_CPU(domain_flush_vtlb_all,  domain_flush_vtlb_all)
 PERFCOUNTER_CPU(vcpu_flush_tlb_vhpt_range,  vcpu_flush_tlb_vhpt_range)
PERFCOUNTER_CPU(domain_flush_vtlb_track_entry,  domain_flush_vtlb_track_entry)
+PERFCOUNTER_CPU(domain_flush_vtlb_local,domain_flush_vtlb_local)
+PERFCOUNTER_CPU(domain_flush_vtlb_global,   domain_flush_vtlb_global)
 PERFCOUNTER_CPU(domain_flush_vtlb_range,domain_flush_vtlb_range)
 
 // domain.c

Re: [Xen-ia64-devel] [Q] about PCI front/IA64

2006-10-16 Thread Atsushi SAKAI
Hi, Tristan 

Thank you for your comments.

I am looking at the code of arch/i386/pci/pcifront.c and irq-xen.c for x86.
(IA64 does not have them)

The [EMAIL PROTECTED]/i386/pci/irq-xen.c is pci_sal_read/write for IA64.
(DomU/VTI cannot access the PCI configuration space)


Thanks
Atsushi SAKAI


On Friday, 13 October 2006 13:52, Atsushi SAKAI wrote:
 I'm just looking through the source code of the PCI front/back driver.
 But I did not understand the interface of the Para/IA64 kernel.
 (x86 has a driver at linux-sparse/arch/x86/driver/pci,
 but IA64 does not have such.)

 If anyone knows the interface, please let me know. Thanks.
pcifront implements a 'driver' to manage a pci bus.
pciback is a PCI driver, almost like any standard PCI driver.

I do not really understand your question.  Please, be more specific.

Tristan.









Re: [Xen-ia64-devel] [Q] about PCI front/IA64

2006-10-16 Thread Tristan Gingold
On Monday, 16 October 2006 11:57, Atsushi SAKAI wrote:
 Hi, Tristan

 Thank you for your comments.

 I am looking at the code of arch/i386/pci/pcifront.c and irq-xen.c for x86.
 (IA64 does not have them)

 The [EMAIL PROTECTED]/i386/pci/irq-xen.c is pci_sal_read/write
 for IA64. (DomU/VTI cannot access the PCI configuration space)
Hi,

I am not 100% sure, but I think the routing table is extracted from the ACPI
tables.
DomU doesn't really need this because it uses event channels.

DomVTI should use pci_sal_read/write, which is emulated by qemu.

Tristan.



RE: [Xen-ia64-devel] [PATCH] Xen panics when domvti is destroyed

2006-10-16 Thread Kouya SHIMURA
Hi Alex,

Could you apply the attached patch?
There is no difference between Anthony's patch and my old one,
because all vcpus are stopped completely.

Thanks,
Kouya

Signed-off-by: Kouya SHIMURA [EMAIL PROTECTED]
Signed-off-by: Anthony Xu  [EMAIL PROTECTED] 

diff -r fcd746cf4647 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.cSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/domain.cMon Oct 16 20:42:45 2006 +0900
@@ -342,7 +342,7 @@ void relinquish_vcpu_resources(struct vc
 
 void free_vcpu_struct(struct vcpu *v)
 {
-   if (VMX_DOMAIN(v))
+	if (v->domain->arch.is_vti)
vmx_relinquish_vcpu_resources(v);
else
relinquish_vcpu_resources(v);

Xu, Anthony writes:
  Hi Kouya,
  
  Good catch!
  
  I think the root cause is: when a VTI domain is destroyed, the vti flag in
  the vcpu structure is not set while the vti flag in the domain structure is
  set, so Xen mistakes this vcpu for a domU vcpu, and the issue appears.
  
  Yes, your patch can fix this issue, but it seems it may incur a memory leak.
  Maybe the following small modification is needed.
  
  Anthony
  
  --- a/xen/arch/ia64/xen/domain.c Sun Oct 08 18:55:12 2006 -0600
  +++ b/xen/arch/ia64/xen/domain.c Tue Oct 10 19:06:44 2006 +0900
  @@ -341,9 +341,11 @@ void relinquish_vcpu_resources(struct vc
   
   void free_vcpu_struct(struct vcpu *v)
   {
  -if (VMX_DOMAIN(v))
  -vmx_relinquish_vcpu_resources(v);
  -else
   +if (v->domain->arch.is_vti) {
  +vmx_relinquish_vcpu_resources(v);
  +} else
   relinquish_vcpu_resources(v);
   
   free_xenheap_pages(v, KERNEL_STACK_SIZE_ORDER);
  
  
  
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Kouya SHIMURA
  Sent: October 10, 2006 18:31
  To: xen-ia64-devel@lists.xensource.com
  Subject: [Xen-ia64-devel] [PATCH] Xen panics when domvti is destroyed
  
  Hi,
  
  I got the following panic message when I destroyed a domvti which has
  2 vcpus, where the 2nd vcpu is not booted yet. This panic occurs from cset
  11745. The attached patch fixes it.
  
  (XEN) ia64_fault, vector=0x1e, ifa=0xf414802a, 
  iip=0xf4030ef0, i
  psr=0x121008226018, isr=0x0a06
  (XEN) Unaligned Reference.
  (XEN) d 0xf7d5c080 domid 0
  (XEN) vcpu 0xf7d3 vcpu 0
  (XEN)
  (XEN) CPU 1
  (XEN) psr : 121008226018 ifs : 840b ip  : 
  [f4030ef1]
  (XEN) ip is at free_domheap_pages+0x131/0x7f0
  (XEN) unat:  pfs : 0206 rsc : 0003
  (XEN) rnat: 0206 bsps: 0003 pr  : 05569aab
  (XEN) ldrs:  ccv :  fpsr: 0009804c0270033f
  (XEN) csd :  ssd : 
  (XEN) b0  : f4074930 b6  : f4034130 b7  : a00100067a90
  (XEN) f6  : 1003e000670a4 f7  : 1003ecccd
  (XEN) f8  : 1003e000c2d06 f9  : 10001c000
  (XEN) f10 : 100099c1aaa0e9000 f11 : 1003e04e0
  (XEN) r1  : f4316d10 r2  :  r3  : f7d37fe8
  (XEN) r8  : 0040 r9  :  r10 : 
  (XEN) r11 : 0009804c0270033f r12 : f7d37930 r13 : f7d3
  (XEN) r14 :  r15 : 00019c29 r16 : 0001
  (XEN) r17 : f7ff7510 r18 : f7a0eb00 r19 : f7ff7500
  (XEN) r20 : f7a0eb08 r21 : f414803a r22 : f414802a
  (XEN) r23 : f40e8200 r24 : 001008226018 r25 : f411f420
  (XEN) r26 : f4119300 r27 :  r28 : fff1
  (XEN) r29 : 8000 r30 : 0001 r31 : f414802a
  (XEN)
  (XEN) Call Trace:
  (XEN)  [f4099b40] show_stack+0x80/0xa0
  (XEN) sp=f7d37560 
  bsp=f7d310d0
  (XEN)  [f406b050] ia64_fault+0x280/0x670
  (XEN) sp=f7d37730 
  bsp=f7d31098
  (XEN)  [f4096b00] ia64_leave_kernel+0x0/0x310
  (XEN) sp=f7d37730 
  bsp=f7d31098
  (XEN)  [f4030ef0] free_domheap_pages+0x130/0x7f0
  (XEN) sp=f7d37930 
  bsp=f7d31040
  (XEN)  [f4074930] pervcpu_vhpt_free+0x30/0x50
  (XEN) sp=f7d37930 
  bsp=f7d31020
  (XEN)  [f40505d0] relinquish_vcpu_resources+0x50/0xf0
  (XEN) sp=f7d37930 
  bsp=f7d30ff0
  (XEN)  [f4050700] free_vcpu_struct+0x90/0xc0
  (XEN) sp=f7d37930 
  bsp=f7d30fd0
  (XEN)  [f401e380] free_domain+0x50/0x90
  (XEN) sp=f7d37930 
  bsp=f7d30fa0
  (XEN)  [f401f100] domain_destroy+0x2e0/0x320
  (XEN)  

[Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-16 Thread DOI Tsunehisa
Hi all,

  We've ported the PV-on-HVM drivers to IPF, but I think that only a few
people have tried them so far. So let me describe how to use them.

  I also attach several patches related to PV-on-HVM.

+ fix-warning.patch
  - warning fix for HVM PV driver
+ notsafe-comment.patch
  - add not-SMP-safe comment about PV-on-HVM
  - to take Isaku's suggestion.
+ pv-backport.patch (preliminary)
  - the current HVM PV driver supports only 2.6.16 or 2.6.16.* kernels
  - this is a preliminary patch for backporting to kernels before
    2.6.16
  - we tested only that it compiles on RHEL4.

[Usage of PV-on-HVM]

  1) get the xen-ia64-unstable.hg tree (after cs:11805) and build it.

  2) create a guest system image.
 - simply, install guest system on VT-i domain

  3) build linux-2.6.16 kernel for guest system
 - get linux-2.6.16 kernel source and build

  4) change guest kernel in the image to linux-2.6.16 kernel
 - edit config file of boot loader

  5) build PV-on-HVM drivers
 # cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
 # sh mkbuildtree
 # make -C /usr/src/linux-2.6.16 M=$PWD modules

  6) copy the drivers to guest system image
 - mount guest system image with lomount command.
 - copy the drivers to guest system image
   # cp -p */*.ko guest_system...

  7) start VT-i domain

  8) attach drivers
domvti# insmod xen-platform-pci.ko
domvti# insmod xenbus.ko
domvti# insmod xen-vbd.ko
domvti# insmod xen-vnif.ko

  9) attach devices with xm block-attach/network-attach
 - this operation is the same as for domU

Thanks,
- Tsunehisa Doi
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID 199aa46b3aa2bd3e9e684344e000d4ad40177541
# Parent  bf0a6f241c5eb7bea8b178b490ed32178c7b5bff
warning fix for HVM PV driver

Signed-off-by: Tsunehisa Doi [EMAIL PROTECTED]

diff -r bf0a6f241c5e -r 199aa46b3aa2 
linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:00:12 
2006 +0900
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:20:16 
2006 +0900
@@ -36,8 +36,10 @@ unsigned long
 unsigned long
 xencomm_vaddr_to_paddr(unsigned long vaddr)
 {
+#ifndef CONFIG_VMX_GUEST
struct page *page;
struct vm_area_struct *vma;
+#endif
 
if (vaddr == 0)
return 0;
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID bf0a6f241c5eb7bea8b178b490ed32178c7b5bff
# Parent  fcd746cf4647e06b8e88e620c29610ba43e3ad7c
Add not-SMP-safe comment about PV-on-HVM

Signed-off-by: Tsunehisa Doi [EMAIL PROTECTED]

diff -r fcd746cf4647 -r bf0a6f241c5e xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.cSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/mm.cMon Oct 16 20:00:12 2006 +0900
@@ -400,6 +400,7 @@ gmfn_to_mfn_foreign(struct domain *d, un
 
// This function may be called from __gnttab_copy()
// during destruction of VT-i domain with PV-on-HVM driver.
+   // ** FIXME: This is not SMP-safe yet about p2m table. **
	if (unlikely(d->arch.mm.pgd == NULL)) {
		if (VMX_DOMAIN(d->vcpu[0]))
			return INVALID_MFN;
diff -r fcd746cf4647 -r bf0a6f241c5e xen/arch/ia64/xen/vhpt.c
--- a/xen/arch/ia64/xen/vhpt.c  Sat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/vhpt.c  Mon Oct 16 20:00:12 2006 +0900
@@ -216,6 +216,7 @@ void vcpu_flush_vtlb_all(struct vcpu *v)
   grant_table share page from guest_physmap_remove_page()
   in arch_memory_op() XENMEM_add_to_physmap to realize
   PV-on-HVM feature. */
+   /* FIXME: This is not SMP-safe yet about p2m table */
/* Purge vTLB for VT-i domain */
thash_purge_all(v);
}
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID 7089d11a9e0b723079c83697c529970d7b3b0750
# Parent  199aa46b3aa2bd3e9e684344e000d4ad40177541
Modify for PV-on-HVM backport

diff -r 199aa46b3aa2 -r 7089d11a9e0b 
linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:20:16 
2006 +0900
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:21:40 
2006 +0900
@@ -22,6 +22,10 @@
 #include <asm/page.h>
 #include <asm/xen/xencomm.h>
 
+#ifdef HAVE_COMPAT_H
+#include <compat.h>
+#endif
+
 static int xencomm_debug = 0;
 
 static unsigned long kernel_start_pa;
diff -r 199aa46b3aa2 -r 7089d11a9e0b 
linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c
--- a/linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c  Mon Oct 16 
20:20:16 2006 +0900
+++ b/linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c  Mon Oct 16 
20:21:40 2006 +0900
@@ -48,6 +48,10 @@
 #include <asm/hypervisor.h>
 #include <asm/maddr.h>
 
+#ifdef HAVE_COMPAT_H
+#include <compat.h>
+#endif
+
 #define BLKIF_STATE_DISCONNECTED 0
 #define BLKIF_STATE_CONNECTED1
 #define BLKIF_STATE_SUSPENDED2
@@ -468,6 +472,27 @@ int blkif_ioctl(struct inode *inode, str
  command, (long)argument, 

Re: [PATCH] fix dom0 builder TAKE 2(was Re: [Xen-ia64-devel] [PATCH] fix dom0 builder)

2006-10-16 Thread Alex Williamson
On Mon, 2006-10-16 at 15:06 +0900, Isaku Yamahata wrote:
 Hi Alex.
 
 If CONFIG_FLATMEM=y is enabled (the xenLinux default), it seems
 plausible that the total is about 5GB.
 Does disabling CONFIG_FLATMEM and enabling CONFIG_DISCONTIGMEM=y
 (or CONFIG_SPARSEMEM=y) instead make a difference?
 I also noticed that the xenLinux default config sets
 CONFIG_VIRTUAL_MEM_MAP=n.
 Since dom0 may see the underlying machine memory layout with this patch,
 should we revise the default Linux config?

Hi Isaku,

   Yes, I think we'll need to switch to discontig/sparsemem for the dom0
kernel if the memory map is going to reflect the bare-metal hardware
memory layout.  Unfortunately, just switching to discontig with virtual
memmap does not solve the problem I'm seeing.  The dom0 kernel reports
the correct amount of memory early in bootup.  Perhaps the balloon
driver is causing this(?)  Thanks,

Alex

-- 
Alex Williamson HP Open Source  Linux Org.




Re: [Xen-ia64-devel] [PATCH 2/12]MCA handler support for Xen/ia64 TAKE 2

2006-10-16 Thread Alex Williamson
On Tue, 2006-10-10 at 20:02 +0900, SUZUKI Kazuhiro wrote:

 +#ifdef XEN
 + // 5. VHPT
 +#if VHPT_ENABLED
 + mov r24=VHPT_SIZE_LOG2<<2
 + movl r22=VHPT_ADDR
 + mov r21=IA64_TR_VHPT
...

 +#ifdef XEN
 + // 5. VHPT
 +#if VHPT_ENABLED
 + mov r24=VHPT_SIZE_LOG2<<2
 + movl r22=VHPT_ADDR

Hi Kaz,

   VHPT_ADDR was just removed from the tree in this patch:

http://xenbits.xensource.com/ext/xen-ia64-unstable.hg?cs=685586518b2e

Can you send me a patch to apply on top of this one that removes
VHPT_ADDR?  Thanks,

Alex

-- 
Alex Williamson HP Open Source  Linux Org.




[Xen-ia64-devel] [Patch] Add buffer IO mechanism for Xen/VTi domain.

2006-10-16 Thread Zhang, Xiantao
This patch adds a buffered IO mechanism for the Xen/VTi domain. It catches up
with the Xen/IA32 side. The current implementation can accelerate a Windows
guest's dense IO operations at boot time.
I divided it into two parts. One is only related to Qemu, and the other
is the main body.
Signed-off-by: Zhang Xiantao [EMAIL PROTECTED]
Thanks & Best Regards
-Xiantao

OTC,Intel Corporation



[Xen-ia64-devel] [Patch]Add buffer IO mechanism for Xen/VTi domain[Part 2]

2006-10-16 Thread Zhang, Xiantao
Main part. 
Signed-off-by: Zhang xiantao [EMAIL PROTECTED]
Thanks & Best Regards
-Xiantao

OTC,Intel Corporation



buffer_io.patch
Description: buffer_io.patch

[Xen-ia64-devel] VTI also can not be created in rawhide.

2006-10-16 Thread You, Yongkang
Hi all, 

When I tried to create a VTI domain on kernel-xen-2.6.18-1.2784.fc6, I met
another strange issue.

It reports that the disk image does not exist. If I don't give the disk
option, it reports that the guest firmware (kernel) does not exist. But they
are all in the path. :(

Did you meet this? My xen is xen-3.0.2-44.

Best Regards,
Yongkang (Kangkang) 永康



RE: [Xen-ia64-devel] VTI also can not be created in rawhide.

2006-10-16 Thread You, Yongkang
Sorry for sending this to the wrong mailing list. :(
I should have sent it to fedora-ia64-xen.

Best Regards,
Yongkang (Kangkang) 永康

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of You,
Yongkang
Sent: October 17, 2006 11:40
To: xen-ia64-devel@lists.xensource.com
Subject: [Xen-ia64-devel] VTI also can not be created in rawhide.

Hi all,

When I tried to create a VTI domain on kernel-xen-2.6.18-1.2784.fc6, I met
another strange issue.

It reports that the disk image does not exist. If I don't give the disk
option, it reports that the guest firmware (kernel) does not exist. But they
are all in the path. :(

Did you meet this? My xen is xen-3.0.2-44.

Best Regards,
Yongkang (Kangkang) 永康

