Re: [Xen-ia64-devel] Re: PATCH: all registers in vcpu_guest_context
Quoting Alex Williamson [EMAIL PROTECTED]: On Mon, 2007-05-07 at 18:03 -0600, Alex Williamson wrote: On Sun, 2007-05-06 at 06:43 +0200, Tristan Gingold wrote: Hi, I have fixed a stupid bug (I missed a few registers, including sp!). I can now save/restore, although I get bad mpa values in the p2m_expose area after restoring (but I think p2m_expose is incompatible with save/restore - Isaku, please confirm or deny). Applied. Thanks for the fix. Hi Tristan, I just noticed another problem: checker doesn't build on x86_32/64 with this. Yes, I saw this yesterday during a cross-make. Could you take a look? Thanks, I won't be able to fix this before 2 or 3 weeks (I am away). The fix is not trivial because mkheader.py doesn't handle unions yet. A simple work-around is not to build ia64.h on x86. Sorry for this issue. Tristan. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
[Xen-ia64-devel] VTD is coming
Hi all, This link is the VTD spec: http://download.intel.com/technology/computing/vptech/Intel(r)_VT_for_Direct_IO.pdf VTD is used to translate GPA (issued by a device) to HPA. This way dom0 can assign a device to a domU or VTi-domain. Then the domU or VTi-domain can directly access this device using GPA; this will definitely improve performance greatly. For example, dom0 assigns a NIC to a domU. The domU then commands the NIC to read data from main memory at a GPA address using DMA; when the NIC's DMA reads main memory with that GPA, it is VTD that translates the GPA to an HPA. There is no doubt we should introduce VTD in XEN/IPF. To support VTD, we need to provide VTD a page table from which it can get GPA-to-HPA translations. We already have the P2M, which describes the GPA-to-HPA mapping for every domU or VTI-domain, so it is natural for VTD to use the P2M as its page table. However, the P2M uses a 16K page size on IPF/xen, while VTD uses a 4K page size (or 2M ...). There are two solutions. 1. Use a separate page table for VTD. Pro: Maybe it is the simpler way (I'm not sure). Con: 1. Wastes some memory. 2. Xen needs to synchronize the VTD page table and the P2M when a foreign map or page swapping happens. 2. Change the P2M to the VTD page table format. Pro: Only one table describes the GPA-to-HPA mapping. Con: We need to change the related parts inside xen, because the VTD page table uses a 4K page size. I prefer the second solution, which is a cleaner way. What's your opinion? Thanks, Anthony ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
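Anthony's 16K-vs-4K point can be made concrete: one 16K P2M entry covers exactly the GPA range of four consecutive 4K VT-d entries, so unifying the tables means every 16K mapping must be expressible as four 4K mappings. A minimal sketch of that expansion (the constants match the page sizes discussed; the helper name is ours, not Xen code):

```c
#include <assert.h>
#include <stdint.h>

#define P2M_PAGE_SHIFT  14   /* 16K pages used by the P2M on IPF/xen */
#define VTD_PAGE_SHIFT  12   /* 4K pages used by the VT-d table */
#define SUBPAGES (1u << (P2M_PAGE_SHIFT - VTD_PAGE_SHIFT))  /* = 4 */

/* Fill 'out' with the 4K-granule HPAs that back one 16K P2M mapping.
 * Hypothetical illustration of the size mismatch, not Xen code. */
static void expand_p2m_entry(uint64_t hpa_16k, uint64_t out[SUBPAGES])
{
    for (unsigned i = 0; i < SUBPAGES; i++)
        out[i] = hpa_16k + ((uint64_t)i << VTD_PAGE_SHIFT);
}
```

This is the bookkeeping cost of solution 1 (a separate table): every P2M update must fan out into four VT-d entries and be kept in sync.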
Re: [Xen-ia64-devel] VTD is coming
Hi Anthony. It's an interesting topic. At first, let me say I haven't taken a close look into the VT-d specification or the x86 VT-d code yet. You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel have a separate page table for VTD. I don't know their current development status, though. Making it arch-generic, it might be reused. pros: easy and fast. Probably appropriate as a 1st step. Potentially we can have common code for both x86 and ia64. cons: The resulting code might be ugly. Later, when we want cleaner code/performance, we might have to reimplement as a 2nd step. thanks. -- yamahata ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
[Xen-ia64-devel] [PATCH] implement XENMEM_machine_memory_map on ia64.
implement XENMEM_machine_memory_map on ia64. This is necessary for kexec/kdump for xen/ia64. kexec-tools needs to know the real machine's memory map. -- yamahata # HG changeset patch # User [EMAIL PROTECTED] # Date 1178790575 -32400 # Node ID bf12ab84848a384f9744d751605296225ac906b5 # Parent eabda101b0c54ac51d4a7d335e45144425ca2fda implement XENMEM_machine_memory_map on ia64. This is necessary for kexec/kdump for xen/ia64. kexec-tools needs to know real machine's memory map. PATCHNAME: implement_xenmem_machine_memory_map Signed-off-by: Isaku Yamahata [EMAIL PROTECTED] diff -r eabda101b0c5 -r bf12ab84848a linux-2.6-xen-sparse/arch/ia64/xen/xcom_hcall.c --- a/linux-2.6-xen-sparse/arch/ia64/xen/xcom_hcall.c Tue May 08 13:12:52 2007 -0600 +++ b/linux-2.6-xen-sparse/arch/ia64/xen/xcom_hcall.c Thu May 10 18:49:35 2007 +0900 @@ -226,6 +226,8 @@ xencomm_hypercall_memory_op(unsigned int { XEN_GUEST_HANDLE(xen_pfn_t) extent_start_va[2]; xen_memory_reservation_t *xmr = NULL, *xme_in = NULL, *xme_out = NULL; + xen_memory_map_t *memmap = NULL; + XEN_GUEST_HANDLE(void) buffer; int rc; switch (cmd) { @@ -254,6 +256,14 @@ xencomm_hypercall_memory_op(unsigned int (((xen_memory_exchange_t *)arg)->out); break; + case XENMEM_machine_memory_map: + memmap = (xen_memory_map_t *)arg; + xen_guest_handle(buffer) = xen_guest_handle(memmap->buffer); + set_xen_guest_handle(memmap->buffer, + (void *)xencomm_create_inline(xen_guest_handle(memmap->buffer))); + break; + default: printk("%s: unknown memory op %d\n", __func__, cmd); return -ENOSYS; @@ -274,6 +284,10 @@ xencomm_hypercall_memory_op(unsigned int xen_guest_handle(extent_start_va[0]); xen_guest_handle(xme_out->extent_start) = xen_guest_handle(extent_start_va[1]); + break; + + case XENMEM_machine_memory_map: + xen_guest_handle(memmap->buffer) = xen_guest_handle(buffer); break; } diff -r eabda101b0c5 -r bf12ab84848a linux-2.6-xen-sparse/arch/ia64/xen/xcom_mini.c --- a/linux-2.6-xen-sparse/arch/ia64/xen/xcom_mini.c Tue May 08 13:12:52 2007 -0600 +++
b/linux-2.6-xen-sparse/arch/ia64/xen/xcom_mini.c Thu May 10 18:49:35 2007 +0900 @@ -238,6 +238,19 @@ xencomm_mini_hypercall_memory_op(unsigne argsize = sizeof (xen_add_to_physmap_t); break; + case XENMEM_machine_memory_map: + { + xen_memory_map_t *memmap = (xen_memory_map_t *)arg; + argsize = sizeof(*memmap); + rc = xencomm_create_mini(xc_area, nbr_area, + xen_guest_handle(memmap->buffer), + memmap->nr_entries, &desc); + if (rc) + return rc; + set_xen_guest_handle(memmap->buffer, (void *)desc); + break; + } + default: printk("%s: unknown mini memory op %d\n", __func__, cmd); return -ENOSYS; diff -r eabda101b0c5 -r bf12ab84848a xen/arch/ia64/xen/mm.c --- a/xen/arch/ia64/xen/mm.c Tue May 08 13:12:52 2007 -0600 +++ b/xen/arch/ia64/xen/mm.c Thu May 10 18:49:35 2007 +0900 @@ -2142,6 +2142,37 @@ arch_memory_op(int op, XEN_GUEST_HANDLE( break; } + case XENMEM_machine_memory_map: + { + struct xen_memory_map memmap; + struct xen_ia64_memmap_info memmap_info; + XEN_GUEST_HANDLE(char) buffer; + + if (!IS_PRIV(current->domain)) + return -EINVAL; + if (copy_from_guest(&memmap, arg, 1)) + return -EFAULT; + if (memmap.nr_entries < sizeof(memmap_info) + ia64_boot_param->efi_memmap_size) + return -EINVAL; + + memmap.nr_entries = sizeof(memmap_info) + ia64_boot_param->efi_memmap_size; + memset(&memmap_info, 0, sizeof(memmap_info)); + memmap_info.efi_memmap_size = ia64_boot_param->efi_memmap_size; + memmap_info.efi_memdesc_size = ia64_boot_param->efi_memdesc_size; + memmap_info.efi_memdesc_version = ia64_boot_param->efi_memdesc_version; + + buffer = guest_handle_cast(memmap.buffer, char); + if (copy_to_guest(buffer, (char *)&memmap_info, sizeof(memmap_info)) || copy_to_guest_offset(buffer, sizeof(memmap_info), (char *)__va(ia64_boot_param->efi_memmap), ia64_boot_param->efi_memmap_size) || copy_to_guest(arg, &memmap, 1)) + return -EFAULT; + return 0; + } + default: return -ENOSYS; } diff -r eabda101b0c5 -r bf12ab84848a xen/include/public/arch-ia64.h --- a/xen/include/public/arch-ia64.h Tue May 08 13:12:52 2007 -0600
+++ b/xen/include/public/arch-ia64.h Thu May 10 18:49:35 2007 +0900 @@ -317,6 +317,21 @@ struct arch_vcpu_info { }; typedef struct arch_vcpu_info arch_vcpu_info_t; +/* + * This structure is used for the magic page in the domain pseudo-physical + * address space and for the result of XENMEM_machine_memory_map. + * As the XENMEM_machine_memory_map result, + * xen_memory_map::nr_entries indicates the size in bytes + * including struct xen_ia64_memmap_info, not the number of entries. + */ +struct xen_ia64_memmap_info { +uint64_t efi_memmap_size; /* size of EFI memory map */ +
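As the header comment above stresses, the `nr_entries` field returned by XENMEM_machine_memory_map is a byte count that includes the `xen_ia64_memmap_info` header, not an entry count. A sketch of how a consumer such as kexec-tools might interpret the result (the struct mirrors the patch; the two helpers are hypothetical):

```c
#include <stdint.h>

/* Header that precedes the EFI descriptors in the returned buffer
 * (field layout follows the patch; descriptors follow the struct). */
struct xen_ia64_memmap_info {
    uint64_t efi_memmap_size;     /* size of the EFI memory map, bytes */
    uint64_t efi_memdesc_size;    /* size of one EFI memory descriptor */
    uint32_t efi_memdesc_version;
};

/* Number of EFI descriptors that follow the header. */
static uint64_t memmap_desc_count(const struct xen_ia64_memmap_info *mi)
{
    return mi->efi_memmap_size / mi->efi_memdesc_size;
}

/* Check a caller-supplied buffer size against the byte-count
 * convention: header plus the whole EFI map must fit. */
static int memmap_buffer_big_enough(uint64_t nr_entries_bytes,
                                    const struct xen_ia64_memmap_info *mi)
{
    return nr_entries_bytes >= sizeof(*mi) + mi->efi_memmap_size;
}
```

This mirrors the hypervisor-side `-EINVAL` check in the mm.c hunk, just seen from the guest side.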
[Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed
Hi, Usually ASSERT() is (void)0. Therefore if VPD allocation fails due to xenheap shortage or fragmentation, a NULL pointer access occurs in vmx_final_setup_guest(). This patch fixes it. BTW, I succeeded in creating 60 UP domains, but I failed to create many SMP domains because of xenheap shortage. I failed in the following environment: - 55 domains, and - 5 vcpus each. If we would like to support many domains and many vcpus, I think that we should expand the xenheap. I think that the simplest method is changing the page size (PS) of ITR[0] and DTR[0] to 256MB. Do you have good ideas? Signed-off-by: Masaki Kanno [EMAIL PROTECTED] Best regards, Kan alloc_vpd.patch Description: Binary data ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
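The shape of the fix Kan describes can be sketched in a few lines: because ASSERT() compiles to `(void)0` in release builds, an allocation failure must be handled with an explicit NULL check that returns -ENOMEM, instead of letting the NULL dereference happen later. This is a simplified stand-in, not the actual Xen code:

```c
#include <errno.h>
#include <stdlib.h>

struct vpd { char area[64]; };   /* stand-in for the real VPD */

/* Allocate the VPD; propagate -ENOMEM on failure rather than
 * relying on ASSERT(), which is a no-op in release builds. */
static int setup_vpd(struct vpd **out)
{
    struct vpd *vpd = malloc(sizeof(*vpd));
    if (vpd == NULL)
        return -ENOMEM;          /* explicit check instead of ASSERT() */
    *out = vpd;
    return 0;
}
```

The caller can then fail domain setup cleanly instead of crashing in vmx_final_setup_guest().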
[Xen-ia64-devel] [PATCH] Fix allocate_rid_range()
Hi, I found a bug in allocate_rid_range(). Even though there is a free ridblock_owner[] entry, allocate_rid_range() cannot allocate it. Signed-off-by: Masaki Kanno [EMAIL PROTECTED] Best regards, Kan alloc_rid.patch Description: Binary data ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] Re: PATCH: all registers in vcpu_guest_context
On Thu, 2007-05-10 at 09:33 +0200, [EMAIL PROTECTED] wrote: Quoting Alex Williamson [EMAIL PROTECTED]: I just noticed another problem, checker doesn't build on x86_32/64 with this: Yes, I saw this yesterday during a cross-make. Could you take a look? Thanks, I won't be able to fix this before 2 or 3 weeks (I am away). The fix is not trivial because mkheader.py doesn't handle unions yet. A simple work-around is not to build ia64.h on x86. Sorry for this issue. Hi Tristan, Is something like the patch below what you had in mind? Thanks, Alex Signed-off-by: Alex Williamson [EMAIL PROTECTED] --- diff -r eabda101b0c5 xen/include/public/foreign/mkheader.py --- a/xen/include/public/foreign/mkheader.py Tue May 08 13:12:52 2007 -0600 +++ b/xen/include/public/foreign/mkheader.py Thu May 10 12:39:21 2007 -0600 @@ -1,7 +1,7 @@ #!/usr/bin/python import sys, re; -from structs import structs, defines; +from structs import unions, structs, defines; # command line arguments arch = sys.argv[1]; @@ -110,6 +110,16 @@ input = re.compile("/\*(.*?)\*/", re.S).sub("", input) input = re.compile("\n\s*\n", re.S).sub("\n", input); +# add unions to output +for union in unions: + regex = "union\s+%s\s*\{(.*?)\n\};" % union; + match = re.search(regex, input, re.S) + if None == match: + output += "#define %s_has_no_%s 1\n" % (arch, union); + else: + output += "union %s_%s {%s\n};\n" % (union, arch, match.group(1)); +output += "\n"; + # add structs to output for struct in structs: regex = "struct\s+%s\s*\{(.*?)\n\};" % struct; @@ -135,6 +145,10 @@ for define in defines: replace = define + "_" + arch; output = re.sub("\\b%s\\b" % define, replace, output); +# replace: unions +for union in unions: + output = re.sub("\\b(union\s+%s)\\b" % union, "\\1_%s" % arch, output); + # replace: structs + struct typedefs for struct in structs: output = re.sub("\\b(struct\s+%s)\\b" % struct, "\\1_%s" % arch, output); diff -r eabda101b0c5 xen/include/public/foreign/reference.size --- a/xen/include/public/foreign/reference.size Tue May 08 13:12:52 2007 -0600 +++ b/xen/include/public/foreign/reference.size Thu May 10 12:53:04 2007 -0600 @@ -7,7 +7,8 @@ cpu_user_regs | 68 200 496 xen_ia64_boot_param | - - 96 ia64_tr_entry | - - 32 -vcpu_extra_regs | - - - +vcpu_tr_regs | - - 512 +vcpu_guest_context_regs | - - 21872 vcpu_guest_context | 2800 5168 21904 arch_vcpu_info | 24 16 0 vcpu_time_info | 32 32 32 diff -r eabda101b0c5 xen/include/public/foreign/structs.py --- a/xen/include/public/foreign/structs.py Tue May 08 13:12:52 2007 -0600 +++ b/xen/include/public/foreign/structs.py Thu May 10 12:49:41 2007 -0600 @@ -1,4 +1,7 @@ # configuration: what needs translation + +unions = [ "vcpu_cr_regs", + "vcpu_ar_regs" ]; structs = [ "start_info", "trap_info", @@ -6,7 +9,8 @@ structs = [ "start_info", "cpu_user_regs", "xen_ia64_boot_param", "ia64_tr_entry", -"vcpu_extra_regs", +"vcpu_tr_regs", +"vcpu_guest_context_regs", "vcpu_guest_context", "arch_vcpu_info", "vcpu_time_info", ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] PATCH: rewrite vcpu_get_psr
On Wed, 2007-05-09 at 06:10 +0200, Tristan Gingold wrote: Hi, a stripped-down version of a previous patch: reimplement vcpu_get_psr. Should be a noop, I don't think performance should be affected. Unfortunately... with patch: real 6m0.483s user 21m10.680s sys 1m18.470s without patch: real 4m47.767s user 16m55.060s sys 0m53.240s :( Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel][PATCH]get guest os type
Hi Anthony, Cool, I'm glad this works. Couple minor comments... On Wed, 2007-05-09 at 17:11 +0800, Xu, Anthony wrote: diff -r eabda101b0c5 xen/arch/ia64/vmx/mmio.c --- a/xen/arch/ia64/vmx/mmio.c Tue May 08 13:12:52 2007 -0600 +++ b/xen/arch/ia64/vmx/mmio.c Wed May 09 16:10:28 2007 +0800 @@ -188,6 +188,13 @@ int vmx_ide_pio_intercept(ioreq_t *p, u6 #define TO_LEGACY_IO(pa) (((pa)>>12<<2)|((pa)&0x3)) +static inline void set_os_type(VCPU *v, u64 type) +{ + if (type > OS_BASE && type < OS_END) + v->domain->arch.vmx_platform.gos_type = type; +} I think a gdprintk at some level that won't typically get printed would be appropriate here. @@ -210,7 +217,9 @@ static void legacy_io_access(VCPU *vcpu, p->df = 0; p->io_count++; + + if (dir == IOREQ_WRITE && p->addr == OS_TYPE_PORT) + set_os_type(v, *val); Should we 'return' here? Any chance Intel could also implement the GFW hooks for this in http://xenbits.xensource.com/ext/efi-vfirmware.hg? Thanks, Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] PATCH: define guest_mode (instead of user_mode)
On Wed, 2007-05-09 at 05:33 +0200, Tristan Gingold wrote: On Tue, May 08, 2007 at 10:57:42AM -0600, Alex Williamson wrote: On Tue, 2007-05-08 at 15:26 +0200, Tristan Gingold wrote: diff -r 8519e5db6510 -r 8e5083feaa52 xen/arch/ia64/vmx/vmx_process.c --- a/xen/arch/ia64/vmx/vmx_process.c Tue May 08 15:07:51 2007 +0200 +++ b/xen/arch/ia64/vmx/vmx_process.c Tue May 08 15:24:57 2007 +0200 @@ -164,7 +164,7 @@ vmx_ia64_handle_break (unsigned long ifa if (iim == 0) vmx_die_if_kernel(Break 0 in Hypervisor., regs, iim); -if (!user_mode(regs)) { +if (ia64_psr(regs)->cpl == 0) { Why is this first one a special case? ie. why not !guest_mode(regs) same as the next one? Thanks, This is VTi code. In my opinion, guest_mode makes sense only in PV mode. Here we are testing whether kernel code is executing and not whether the guest is. Ok. Applied. Thanks, Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] [PATCH] quieten lookup_domain_mpa() when domain is dying.
On Wed, 2007-05-09 at 15:15 +0900, Isaku Yamahata wrote: quieten lookup_domain_mpa() when domain is dying. message clean up in lookup_domain_mpa(). It is possible that current != d. This patch addresses the bug 944. http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=944 How to reproduce 1. create domU 2. From dom0, 'ping -f domU ' 3. 'xm destroy domU' Applied. I can still get some of those errors continuously rebooting a domain, but they're definitely getting better. Thanks, Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] [PATCH] implement XENMEM_machine_memory_map on ia64.
On Thu, 2007-05-10 at 18:51 +0900, Isaku Yamahata wrote: implement XENMEM_machine_memory_map on ia64. This is necessary for kexec/kdump for xen/ia64. kexec-tools needs to know real machine's memory map. Applied. Thanks, Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed
If we would like to support many domains and many vcpus, I think that we should expand the xenheap. I think that the simplest method is changing the page size (PS) of ITR[0] and DTR[0] to 256MB. Do you have good ideas? Agree, we should expand the xenheap if we want to support more domains/vcpus. Anthony -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Masaki Kanno Sent: May 10, 2007 20:01 To: xen-ia64-devel@lists.xensource.com Subject: [Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed Hi, Usually ASSERT() is (void)0. Therefore if VPD allocation fails due to xenheap shortage or fragmentation, a NULL pointer access occurs in vmx_final_setup_guest(). This patch fixes it. BTW, I succeeded in creating 60 UP domains, but I failed to create many SMP domains because of xenheap shortage. I failed in the following environment: - 55 domains, and - 5 vcpus each. If we would like to support many domains and many vcpus, I think that we should expand the xenheap. I think that the simplest method is changing the page size (PS) of ITR[0] and DTR[0] to 256MB. Do you have good ideas? Signed-off-by: Masaki Kanno [EMAIL PROTECTED] Best regards, Kan ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
[Xen-ia64-devel] Xen/IA64 Healthiness Report -Cset#15048
Xen/IA64 Healthiness Report All the cases in this Cset have passed. Testing Environment: Platform: Tiger4 Processor: Itanium 2 Processor Logical processor number: 8 (2 processors with dual core) PAL version: 8.47 Service OS: RHEL4u3 IA64 SMP with 2 VCPUs VTI Guest OS: RHEL4u2 RHEL4u3 XenU Guest OS: RHEL4u2 Xen IA64 Unstable tree: 15048:7d8acd319d5b Xen Schedule: credit VTI Guest Firmware: Flash.fd.2007.04.11 Summary Test Report: Total cases: 18 Passed: 18 Failed: 0
Case Name               Status  Case Description
Pv                      pass
Win_PV                  pass
Four_SMPVTI_Coexist     pass    4 VTI (mem=256, vcpus=2)
Two_UP_VTI_Co           pass    2 UP_VTI (mem=256)
One_UP_VTI              pass    1 UP_VTI (mem=256)
One_UP_XenU             pass    1 UP_xenU (mem=256)
SMPVTI_LTP              pass    VTI (vcpus=4, mem=512) run LTP
SMPVTI_and_SMPXenU      pass    1 VTI + 1 xenU (mem=256, vcpus=2)
Two_SMPXenU_Coexist     pass    2 xenU (mem=256, vcpus=2)
One_SMPVTI_4096M        pass    1 VTI (vcpus=2, mem=4096M)
SMPVTI_Network          pass    1 VTI (mem=256, vcpus=2) and 'ping'
SMPXenU_Network         pass    1 XenU (vcpus=2) and 'ping'
One_SMP_XenU            pass    1 SMP xenU (vcpus=2)
One_SMP_VTI             pass    1 SMP VTI (vcpus=2)
SMPVTI_Kernel_Build     pass    VTI (vcpus=4) and do kernel build
Four_SMPVTI_Coexist     pass    4 VTI domains (mem=256, vcpus=2)
SMPVTI_Windows          pass    SMPVTI Windows (vcpus=2)
SMPWin_SMPVTI_SMPxenU   pass    SMPVTI Linux/Windows XenU
UPVTI_Kernel_Build      pass    1 UP VTI and do kernel build
Notes: The last stable changeset: 15044:eabda101b0c5 Best Regards Liuqing ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] VTD is coming
You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel have a separate page table for VTD. I don't know their current development status, though. Making it arch-generic, it might be reused. Even in ia32/xen, they are planning to merge these two tables. If the host processor is ia32e (64-bit), the processor page table and the vtd page table have a similar format, so it is reasonable and easy to merge these two page tables. If the host processor is ia32 (32-bit), these two page tables can't be merged, because the P2M is used as a shadow page table (which is accessed by hardware) when the guest is in protected mode with paging disabled, and the processor page table and the vtd page table are very different. XEN will decide whether to use one page table or separate page tables at boot time, when it can find out the underlying processor type. On the IPF/xen side, the P2M is not accessed by hardware, so we can merge these two page tables from the beginning. Thanks, Anthony -----Original Message----- From: Isaku Yamahata [mailto:[EMAIL PROTECTED] Sent: May 10, 2007 17:29 To: Xu, Anthony Cc: xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel] VTD is coming Hi Anthony. It's an interesting topic. At first, let me say I haven't taken a close look into the VT-d specification or the x86 VT-d code yet. You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel have a separate page table for VTD. I don't know their current development status, though. Making it arch-generic, it might be reused. pros: easy and fast. Probably appropriate as a 1st step. Potentially we can have common code for both x86 and ia64. cons: The resulting code might be ugly. Later, when we want cleaner code/performance, we might have to reimplement as a 2nd step. thanks. -- yamahata ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel][PATCH]get guest os type
The revised one per your comments is attached. Any chance Intel could also implement the GFW hooks for this in http://xenbits.xensource.com/ext/efi-vfirmware.hg? We'll look at this. Thanks, Anthony -----Original Message----- From: Alex Williamson [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 5:42 To: Xu, Anthony Cc: xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel][PATCH]get guest os type guest_os_type2.patch Description: guest_os_type2.patch ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] VTD is coming
On Fri, May 11, 2007 at 09:19:45AM +0800, Xu, Anthony wrote: You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel have a separate page table for VTD. I don't know their current development status, though. Making it arch-generic, it might be reused. Even in ia32/xen, they are planning to merge these two tables. If the host processor is ia32e (64-bit), the processor page table and the vtd page table have a similar format, so it is reasonable and easy to merge these two page tables. If the host processor is ia32 (32-bit), these two page tables can't be merged, because the P2M is used as a shadow page table (which is accessed by hardware) when the guest is in protected mode with paging disabled, and the processor page table and the vtd page table are very different. XEN will decide whether to use one page table or separate page tables at boot time, when it can find out the underlying processor type. Do you mean nested page table (NPT) by 'processor page table'? ia32 VT-d code would be very specific to ia32, so it seems very difficult to have arch-generic code. It would be reasonable to go for an ia64-specific VT-d implementation. -- yamahata ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] VTD is coming
-----Original Message----- From: Isaku Yamahata [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 10:33 To: Xu, Anthony Cc: xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel] VTD is coming Do you mean nested page table (NPT) by 'processor page table'? Processor page table refers to the shadow page table which is used to emulate the guest page table; this is not related to NPT. ia32 VT-d code would be very specific to ia32, so it seems very difficult to have arch-generic code. It would be reasonable to go for an ia64-specific VT-d implementation. Yes, we need to prepare for VTD/IPF. I think the first step is to change the P2M to the vtd page table format. The P2M is related to several components inside XEN, so it might take a while; it is better if this can be done earlier. Then we can leverage code from IA32 as much as possible after the vtd/ia32 code is checked in. I think we can reuse most of the code, especially the code in qemu and the control panel. Finally, we can debug the vtd code when a VTD/IPF platform is available. Thanks, Anthony -- yamahata ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] VTD is coming
Hi, Anthony I have a question. Do we need to register with VT-d not only the page tables that contain DMA pages but all of the page tables? If yes, do we need to disable DMA even when we change a page table not related to DMA remapping? Best Regards, Akio Takebe ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] VTD is coming
From: Akio Takebe [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 10:54 To: Xu, Anthony; Isaku Yamahata Cc: xen-ia64-devel@lists.xensource.com Subject: RE: [Xen-ia64-devel] VTD is coming Hi, Anthony I have a question. Do we need to register with VT-d not only the page tables that contain DMA pages but all of the page tables? We don't know which pages the guest OS will use as DMA pages, so we let the vtd page table translate all physical addresses belonging to the guest. If yes, do we need to disable DMA even when we change a page table not related to DMA remapping? We needn't and can't. The vtd page table is maintained by xen. When xen changes the vtd page table, the changed entries should not be used by any DMA operation. What xen needs to do is flush the corresponding IO-TLBs. Do you find scenarios where race conditions exist? Best Regards, Akio Takebe ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
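The update-then-flush ordering Anthony describes can be sketched with a toy model: write the new VT-d PTE first, then invalidate the IOTLB entries that may still cache the old translation, so a device walk after the flush can only observe the new mapping. Everything here (a single global PTE, a one-entry IOTLB, the function names) is an illustrative assumption, not Xen or VT-d code:

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t vtd_pte;        /* one PTE, standing in for a whole table */
static uint64_t iotlb_cached;   /* translation the IOTLB currently caches */
static bool iotlb_valid;

static void iotlb_flush(void) { iotlb_valid = false; }

/* Xen-side update: write the PTE, then flush the IOTLB, in that
 * order, so no device can keep using the stale translation. */
static void vtd_update_mapping(uint64_t new_hpa)
{
    vtd_pte = new_hpa;
    iotlb_flush();
}

/* Device-side translation: hit the IOTLB, or walk and refill on miss. */
static uint64_t device_translate(void)
{
    if (!iotlb_valid) {
        iotlb_cached = vtd_pte;
        iotlb_valid = true;
    }
    return iotlb_cached;
}
```

The cost Akio is asking about is exactly the `iotlb_flush()` call: DMA is never disabled, but every page-table change pays an IOTLB invalidation.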
Re: [Xen-ia64-devel] VTD is coming
On Fri, May 11, 2007 at 10:44:48AM +0800, Xu, Anthony wrote: Do you mean nested page table (NPT) by 'processor page table'? Processor page table refers to the shadow page table which is used to emulate the guest page table; this is not related to NPT. I meant EPT (Extended Page Tables) in Intel terminology. Probably I need to read the VT-d spec more carefully. ia32 VT-d code would be very specific to ia32, so it seems very difficult to have arch-generic code. It would be reasonable to go for an ia64-specific VT-d implementation. Yes, we need to prepare for VTD/IPF. I think the first step is to change the P2M to the vtd page table format. The P2M is related to several components inside XEN, so it might take a while; it is better if this can be done earlier. The VT-d spec says only 4 bits in a vtd page table entry are available for software. On the other hand, we're using many more bits than 4. So at this moment I'm not sure which is better: to unify the P2M with the VT-d table, or to have separate tables. Anyhow, it would be necessary to make the P2M VT-d friendly somehow. Then we can leverage code from IA32 as much as possible after the vtd/ia32 code is checked in. I think we can reuse most of the code, especially the code in qemu and the control panel. Finally, we can debug the vtd code when a VTD/IPF platform is available. Sounds reasonable. -- yamahata ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
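The constraint Isaku raises is that a unified table only leaves a handful of software-available bits per entry for Xen's own metadata. A sketch of packing metadata into a 4-bit software field (the bit positions and names here are illustrative assumptions, not the actual VT-d PTE layout):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 4-bit software-available field in a page-table entry.
 * Illustrates the "only 4 bits for software" constraint, not the
 * real VT-d entry format. */
#define PTE_SW_SHIFT 52
#define PTE_SW_MASK  (0xfULL << PTE_SW_SHIFT)

static uint64_t pte_set_sw(uint64_t pte, unsigned val)
{
    assert(val < 16);            /* anything wider no longer fits */
    return (pte & ~PTE_SW_MASK) | ((uint64_t)val << PTE_SW_SHIFT);
}

static unsigned pte_get_sw(uint64_t pte)
{
    return (unsigned)((pte & PTE_SW_MASK) >> PTE_SW_SHIFT);
}
```

If the P2M currently keeps more than 4 bits of per-entry state in spare PTE bits, that state would have to move to a side structure before the tables could be unified, which is the trade-off being weighed.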
[Xen-ia64-devel] [Xen-ia64][PATCH] _OSI interface for Tristan's GFW
This patch enables the _OSI interface in the DSDT table, so Xen can know which OS is running on the guest. BTW: Because my EDK2 environment is broken, I didn't compile it, but I think it's ok. It requires ASL compiler version 3.0. Signed-off-by: Zhang Xin [EMAIL PROTECTED] Good good study, day day up! ^_^ -Wing (zhang xin) OTC, Intel Corporation gfw_OSI.patch Description: gfw_OSI.patch ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
Re: [Xen-ia64-devel] [Xen-ia64][PATCH] _OSI interface for Tristan's GFW
On Fri, 2007-05-11 at 11:09 +0800, Zhang, Xing Z wrote: This patch enable _OSI interface in DSDT table. So Xen can know which OS running on guest We'll have to wait for Tristan to get back to check this into the efi-vfirmware tree, but thank you for sending this out. Alex -- Alex Williamson HP Open Source Linux Org. ___ Xen-ia64-devel mailing list Xen-ia64-devel@lists.xensource.com http://lists.xensource.com/xen-ia64-devel
RE: [Xen-ia64-devel] VTD is coming
Hi, Anthony. Thank you for your explanation. I have a question: do we need to expose not only the tables that include DMA pages but all of the page tables to VT-d? We don't know which pages the guest OS will use as DMA pages, so we let the VT-d page table translate all physical addresses belonging to the guest. If yes, do we need to disable DMA even when we change a page table not related to DMA remapping? We needn't and can't. The VT-d page table is maintained by Xen. When Xen changes the VT-d page table, the changed entries should not be used by DMA operations; what Xen needs to do is flush the corresponding IOTLBs. Thanks, I understand. Another question: Xen doesn't know which DMA pages are used by the guest, so how can Xen protect the DMA pages? (Sorry, I'll read the VT-d spec much more.) Did you find scenarios where race conditions exist? No, I was worried about performance at the time of changing the page table. Best Regards, Akio Takebe
Re: [Xen-ia64-devel] Re: PATCH: all registers in vcpu_guest_context
On Thu, May 10, 2007 at 12:56:16PM -0600, Alex Williamson wrote: Hi Tristan, Is something like the patch below what you had in mind? Thanks, Thank you very much. After more thinking, I thought I could move the union into the structure and make the union anonymous. But your code is more general. Tristan.
Re: [Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed
On Fri, May 11, 2007 at 09:06:11AM +0800, Xu, Anthony wrote: If we would like to support many domains and many vcpus, I think we should expand the xenheap. I think the simplest method is changing the PS of ITR[0] and DTR[0] to a 256MB page size. Do you have better ideas? Agreed, we should expand the xenheap if we want to support more domains/vcpus. IIRC the xenheap is 64MB now. Does that mean each domain uses about 1MB? (That seems big.) Or is the initial allocation what matters? We could increase the xenheap, but it would be good to have some figures first. Tristan.
Re: [Xen-ia64-devel] VTD is coming
On Fri, May 11, 2007 at 09:19:45AM +0800, Xu, Anthony wrote: You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel use a separate page table for VT-d. I don't know their current development status, though. If made arch-generic, they might be reusable. BTW, using VT-d tables for the p2m makes Xen Intel-locked (because p2m tables are visible from the domains). Is VT-d the only IOMMU on ia64? I don't know the answer, but we should ask the vendors (HP, Fujitsu, Hitachi, NEC, Unisys...). Tristan.
Re: [Xen-ia64-devel] VTD is coming
On Fri, 2007-05-11 at 07:04 +0200, Tristan Gingold wrote: On Fri, May 11, 2007 at 09:19:45AM +0800, Xu, Anthony wrote: You seem to be thinking about an IPF-specific implementation, right? The x86 VT-d patches which were posted to xen-devel use a separate page table for VT-d. I don't know their current development status, though. If made arch-generic, they might be reusable. BTW, using VT-d tables for the p2m makes Xen Intel-locked (because p2m tables are visible from the domains). Is VT-d the only IOMMU on ia64? I don't know the answer, but we should ask the vendors (HP, Fujitsu, Hitachi, NEC, Unisys...). Good point, Tristan. I think we should treat VT-d as one of potentially several I/O virtualization abstractions. Thanks, Alex -- Alex Williamson HP Open Source & Linux Org.
RE: [Xen-ia64-devel] VTD is coming
From: Isaku Yamahata [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 11:10 To: Xu, Anthony Cc: xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel] VTD is coming On Fri, May 11, 2007 at 10:44:48AM +0800, Xu, Anthony wrote: Do you mean nested page tables (NPT) by 'processor page table'? The processor page table refers to the shadow page table, which is used to emulate the guest page table; this is not related to NPT. I meant EPT (Extended Page Tables) in Intel terminology. Probably I need to read the VT-d spec more carefully. I know EPT; I don't mean that. When the guest is in protected mode with paging disabled, the linear address is equal to the physical address. At that time, Xen temporarily uses the P2M as the shadow page table (the machine cr3 points to this page table) to emulate this mode. And there is another dedicated shadow page table to emulate protected mode with paging enabled. Maybe when EPT/NPT is introduced, the P2M, EPT, and VT-d could share the same page table; I don't know the details of EPT. The VT-d spec says only 4 bits in a VT-d page table entry are available for software. On the other hand, we're using many more bits than 4. So at this moment I'm not sure which is better: to unify the P2M with the VT-d table, or to have separate tables. In any case, it would be necessary to make the P2M VT-d friendly somehow. I'm also thinking about this issue. There are 4 bits, which means 16 values; I think that is enough, since we currently use fewer than 10 values. We can compact these into 4 bits. Thanks, Anthony
Re: [Xen-ia64-devel] PATCH: rewrite vcpu_get_psr
On Thu, May 10, 2007 at 02:15:34PM -0600, Alex Williamson wrote: On Wed, 2007-05-09 at 06:10 +0200, Tristan Gingold wrote: Hi, a stripped-down version of a previous patch: reimplement vcpu_get_psr. Should be a no-op; I don't think performance should be affected. Unfortunately... It's a little bit puzzling. I didn't think the performance of mov =psr.l was so important! Have you tested my other patch? It should be independent from this one. Thanks for all your work, Tristan.
RE: [Xen-ia64-devel] VTD is coming
From: Akio Takebe [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 11:58 To: Xu, Anthony; Isaku Yamahata Cc: xen-ia64-devel@lists.xensource.com; Akio Takebe Subject: RE: [Xen-ia64-devel] VTD is coming Another question: Xen doesn't know which DMA pages are used by the guest, so how can Xen protect the DMA pages? (Sorry, I'll read the VT-d spec much more.) Per the VT-d spec, there is a VT-d page table for every device. If two devices belong to the same domain, they can share a VT-d page table. Xen doesn't need to protect the DMA pages; they are already protected by the separate per-domain VT-d page tables.
RE: [Xen-ia64-devel] VTD is coming
From: Tristan Gingold [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 13:05 To: Xu, Anthony Cc: Isaku Yamahata; xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel] VTD is coming BTW, using VT-d tables for the p2m makes Xen Intel-locked (because p2m tables are visible from the domains). If the domain only reads the p2m table, I think that is OK. Definitely the domain can't directly modify the P2M table; it needs to call a hypercall to modify it. Is VT-d the only IOMMU on ia64? I have no idea. I don't know the answer, but we should ask the vendors (HP, Fujitsu, Hitachi, NEC, Unisys...). Tristan.
RE: [Xen-ia64-devel] VTD is coming
Maybe when EPT/NPT is introduced, the P2M, EPT, and VT-d could share the same page table; I don't know the details of EPT. BTW, there is no EPT/NPT on IPF due to the different architecture. Thanks, Anthony -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Xu, Anthony Sent: May 11, 2007 13:13 To: Isaku Yamahata Cc: xen-ia64-devel@lists.xensource.com Subject: RE: [Xen-ia64-devel] VTD is coming [...]
RE: [Xen-ia64-devel] VTD is coming
From: Alex Williamson [mailto:[EMAIL PROTECTED] Sent: May 11, 2007 13:05 To: Tristan Gingold Cc: Xu, Anthony; Isaku Yamahata; xen-ia64-devel@lists.xensource.com Subject: Re: [Xen-ia64-devel] VTD is coming BTW, using VT-d tables for the p2m makes Xen Intel-locked (because p2m tables are visible from the domains). Is VT-d the only IOMMU on ia64? I don't know the answer, but we should ask the vendors (HP, Fujitsu, Hitachi, NEC, Unisys...). Good point, Tristan. I think we should treat VT-d as one of potentially several I/O virtualization abstractions. Thanks, Agreed, we need to leave room for other IOMMU implementations when we introduce VT-d. Thanks, Anthony
RE: [Xen-ia64-devel] PATCH: rewrite vcpu_get_psr
Tristan Gingold Sent: May 11, 2007 13:23 To: Alex Williamson Cc: Xen-ia64-devel Subject: Re: [Xen-ia64-devel] PATCH: rewrite vcpu_get_psr Unfortunately... It's a little bit puzzling. I didn't think the performance of mov =psr.l was so important!

+	if (!PSCB(vcpu, metaphysical_mode))
+		newpsr.i64 |= IA64_PSR_DT | IA64_PSR_RT | IA64_PSR_IT;
-	if (PSCB(vcpu, metaphysical_mode))
-		newpsr.dt = 0;

The old code above would be translated to:

	if (PSCB(vcpu, metaphysical_mode))
		newpsr.i64 &= ~IA64_PSR_DT;

Thanks, Anthony