[Xen-ia64-devel] [PATCH] Fix typo

2009-06-02 Thread Masaki Kanno
Hi,

Typo fix.
  occured -> occurred

Signed-off-by: Masaki Kanno kanno.mas...@jp.fujitsu.com

Best regards,
 Kan



typo_occured.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel] [RFC] necessary of `handling regNaT fault' message

2008-04-22 Thread Masaki Kanno
Hi all,

I'm not sure whether the check is required, but I think that the 
message is noisy.  So I propose replacing the printk() 
with gdprintk(XENLOG_DEBUG, ...).  What do you think?

Best regards,
 Kan

Fri, 11 Apr 2008 17:21:34 +0900 (JST), KUWAMURA Shin'ya wrote:

Hi,

Running the i386 version of LMbench on IA32-EL in dom0 generates many
messages (tens of lines per second):
  (XEN) ia64_handle_reflection: handling regNaT fault

Is the check required?

I attached a patch that removes the printk().

How to reproduce:
1. build LMbench on an i386 machine
2. run par_mem on ia64
   cd <LMbench directory>/bin/i686-pc-linux-gnu
   ./par_mem -L 128 -M 16M

Best Regards,
-- 
  KUWAMURA Shin'ya





Re: [Xen-ia64-devel] New error trying to create a domain (using latest xend-unstable)

2008-03-18 Thread Masaki Kanno
Hi Kayvan,

Could you define cpus in the domain configuration file?
 e.g. cpus = "0-3"

And could you show the result of xm info?

Best regards,
 Kan

Tue, 18 Mar 2008 01:23:25 -0700, Kayvan Sylvan wrote:

Hi everyone,

I am getting the following error on an HP superdome when trying to xm 
create an HVM domain. I am stumped about what to do next to debug this. 
This exact same configuration has worked on Intel and NEC machines. Any 
help is greatly appreciated.

# xm create -c PSI52.hvm
Using config file ./PSI52.hvm.
Error: (22, 'Invalid argument')

The config file is pretty simple:

import os, re
arch_libdir = 'lib'
loader = '/root/efi-vfirmware.hg/binaries/xenia64-gfw.bin'
builder='hvm'
memory = 2048
shadow_memory = 96
name = 'PSI52'
vcpus=4
vif = [ 'type=ioemu, mac=00:15:60:04:d5:10, bridge=xenbr0' ]
disk = [ 'file:/root/nfs/sd3p2-PSI52.img,hda,w', 'file:/root/Xen/isos/ML5.2-IA64.iso,hdc:cdrom,r' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm.debug'
boot='d'
sdl=0
vnc=1
vnclisten='0.0.0.0'
vncdisplay=52
vncpasswd='xxxyyyzzz'
stdvga=0
serial='pty'

Some more info (from xm dmesg):

(XEN) Xen version 3.3-unstable ([EMAIL PROTECTED]) (gcc version 3.4.6 
20060404
 (Red Hat 3.4.6-3)) Tue Mar 18 01:17:19 GMT 2008
(XEN) Latest ChangeSet: Fri Mar 14 15:07:45 2008 -0600 17209:8c921adf4833
(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\xen-3.3.gz  com1=115200,8n1 dom0_mem=2048M
(XEN) xen image pstart: 0x400, xenheap pend: 0xc00

The following is from /var/log/xen/xend.log when this error occurs:

[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:84) XendDomainInfo.create(
['vm', ['name', 'PSI52'], ['memory', 2048], ['shadow_memory', 96], ['vcpus'
, 4], ['on_xend_start', 'ignore'], ['on_xend_stop', 'ignore'], ['image', [
'hvm', ['loader', '/root/efi-vfirmware.hg/binaries/xenia64-gfw.bin'], [
'device_model', '/usr/lib/xen/bin/qemu-dm.debug'], ['pae', 1], ['vcpus', 4]
, ['boot', 'd'], ['fda', ''], ['fdb', ''], ['timer_mode', 0], ['localtime',
 0], ['serial', 'pty'], ['stdvga', 0], ['isa', 0], ['nographic', 0], [
'soundhw', ''], ['vnc', 1], ['vncdisplay', 52], ['vncunused', 1], [
'vnclisten', '0.0.0.0'], ['sdl', 0], ['display', 'localhost:10.0'], [
'xauthority', '/root/.Xauthority'], ['rtc_timeoffset', '0'], ['monitor', 0]
, ['acpi', 1], ['apic', 1], ['usb', 0], ['usbdevice', ''], ['keymap', ''], 
['pci', []], ['hpet', 0], ['guest_os_type', 'default'], ['hap', 1], [
'vncpasswd', '']]], ['device', ['vbd', ['uname', 'file:/root/nfs/
sd3p2-PSI52.img'], ['dev', 'hda'], ['mode', 'w']]], ['device', ['vbd', [
'uname', 'file:/root/Xen/isos/ML5.2-IA64.iso'], ['dev', 'hdc:cdrom'], [
'mode', 'r']]], ['device', ['vif', ['bridge', 'xenbr0'], ['mac', '00:15:60:
04:d5:10'], ['type', 'ioemu')
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1846) XendDomainInfo.
constructDomain
[2008-03-18 08:21:58 5348] DEBUG (balloon:132) Balloon: 14419536 KiB free; 
need 2048; done.
[2008-03-18 08:21:58 5348] DEBUG (XendDomain:445) Adding Domain: 3
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1949) XendDomainInfo.
initDomain: 3 256
[2008-03-18 08:21:58 5348] DEBUG (image:237) Stored a VNC password for vfb 
access
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: boot, val: d
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: fda, val: None
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: fdb, val: None
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: soundhw, val: None
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: localtime, val: 0
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: serial, val: pty
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: std-vga, val: 0
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: isa, val: 0
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: acpi, val: 1
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: usb, val: 0
[2008-03-18 08:21:58 5348] DEBUG (image:553) args: usbdevice, val: None
[2008-03-18 08:21:58 5348] ERROR (XendDomainInfo:2059) XendDomainInfo.
initDomain: exception occurred
Traceback (most recent call last):
  File "/usr/lib/python/xen/xend/XendDomainInfo.py", line 2001, in _initDomain
    xc.vcpu_setaffinity(self.domid, v, cpumask)
Error: (22, 'Invalid argument')
[2008-03-18 08:21:58 5348] ERROR (XendDomainInfo:440) VM start failed
Traceback (most recent call last):
  File "/usr/lib/python/xen/xend/XendDomainInfo.py", line 420, in start
    XendTask.log_progress(31, 60, self._initDomain)
  File "/usr/lib/python/xen/xend/XendTask.py", line 209, in log_progress
    retval = func(*args, **kwds)
  File "/usr/lib/python/xen/xend/XendDomainInfo.py", line 2062, in _initDomain
    raise VmError(str(exn))
VmError: (22, 'Invalid argument')
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:2176) XendDomainInfo.
destroy: domid=3
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:2193) XendDomainInfo.
destroyDomain(3)
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1757) Destroying device 
model
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1764) Releasing devices

[Xen-ia64-devel] Re: [Xen-devel] [IA64] Weekly benchmark results [ww10]

2008-03-14 Thread Masaki Kanno
Hi,

create_workqueue() in the RHEL4U2 kernel has an upper limit on the length 
of its first argument.  The limit is 10 characters.  The name in the 
following code is too long. 

  accel_watch_workqueue = create_workqueue("accel_watch");

Best regards,
 Kan

Fri, 14 Mar 2008 17:27:38 +0900 (JST), KUWAMURA Shin'ya wrote:

Hi,

I report a benchmark result of this week on IPF using
ia64/xen-unstable and ia64/linux-2.6.18-xen.

On DomVTi, a kernel panic occurred when xen-vnif.ko was loaded. The
issue was fixed by reverting the following patch:
  http://xenbits.xensource.com/ext/ia64/linux-2.6.18-xen.hg/rev/c48f54365060

TEST ENVIRONMENT
Machine  : Tiger4
Kernel   : 2.6.18.8-xen
Changeset: 17205:716a637722e4 (ia64/xen-unstable)
  471:ba72914de93a   (ia64/linux-2.6.18-xen)
  78:9e4b5bb76049(efi-vfirmware)
Dom0 OS  : RHEL4 U2 (2P)
DomU OS  : RHEL4 U2 (8P, using tap:aio)
DomVTi OS: RHEL4 U2 (8P, with PV-on-HVM drivers)
Scheduler: credit

TEST RESULTS
  DomU:
unixbench4.1.0: Pass
bonnie++-1.03 : Pass
ltp-full-20070930 : Pass
iozone3_191   : Pass
lmbench-3.0-a5: Pass
  DomVTi:
unixbench4.1.0: Pass
bonnie++-1.03 : Pass
ltp-full-20070930 : Pass
iozone3_191   : Pass
lmbench-3.0-a5: Pass

Best regards,
KUWAMURA and Fujitsu members

___
Xen-devel mailing list
[EMAIL PROTECTED]
http://lists.xensource.com/xen-devel




[Xen-ia64-devel] [PATCH] Fix xm shutdown/reboot for HVM domain of IA64

2008-01-31 Thread Masaki Kanno
Hi,

xc.domain_destroy_hook() is called twice when we execute the 
xm shutdown/reboot command on an HVM domain without PV drivers. 
The first call is from shutdown() in XendDomainInfo.py. 
The second call is from destroyDomain() in XendDomainInfo.py. 
The first call is not necessary, so this patch removes it. 

A discussion about this patch is as follows. 
 http://lists.xensource.com/archives/html/xen-ia64-devel/2008-01/msg00232.html

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



destroy_hook.patch
Description: Binary data

Re: [Xen-ia64-devel] [PATCH] Fix the domain reference counting

2008-01-31 Thread Masaki Kanno
Hi,

I tested the patch.  With the patch, guest domains no longer existed in 
the hypervisor after destroying them with the xm destroy command. 
Without the patch, guest domains still existed in the hypervisor 
afterwards, as follows. 

(XEN) *** Serial input - Xen (type 'CTRL-a' three times to switch input to 
DOM0)
(XEN) 'q' pressed - dumping domain info (now=0x69:8D16B138)
snip
(XEN) General information for domain 1:
(XEN) refcnt=1 nr_pages=-5 xenheap_pages=5 dirty_cpus={}
(XEN) handle=0c83ca66-94a2-9d78-fbe2-dc363be859eb vm_assist=
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports  { }
(XEN) dump_pageframe_info not implemented
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU15 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU1: CPU5 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU2: CPU10 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU3: CPU2 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)

Best regards,
 Kan

Thu, 31 Jan 2008 14:01:04 +0900, Isaku Yamahata wrote:

Fix the domain reference counting broken by pages allocated from the domheap
for the shared page and hyperregister page.
Calling share_xen_page_with_guest() with a domain-heap page is wrong: it
increments domain->xenpages, which is never decremented. Thus the domain
refcount never drops to 0, so destroy_domain() is never called.
This patch makes the allocation come from the xenheap again.

The other way to fix it would be to work around domain->xenpages and the
page reference count somehow, but that would be very ugly. The right way
would be to enhance the Xen page allocator to be aware of this kind of page
in addition to the xenheap and domheap, but we don't want to touch the
common code.
And given that the limitation on the xenheap of xen/ia64 is much relaxed,
it probably isn't necessary to be so nervous about not allocating those
pages from the xenheap.
If it happened to be necessary to allocate those pages from the domheap,
we could address it at that time. For now, just allocate them from the
xenheap.

-- 
yamahata





RE: [Xen-ia64-devel] [Q] About xc.domain_destroy_hook

2008-01-30 Thread Masaki Kanno
Hi Wing,

Thanks for your reply. 
I removed the line from XendDomainInfo.py, then I tested the following 
commands.  I didn't see the error message with any of them. 
 - xm shutdown
 - xm reboot
 - xm destroy
 - shutdown on guest OS
 - reboot on guest OS

I will send a patch to solve the problem. 

Best regards,
 Kan

Thu, 31 Jan 2008 13:52:21 +0800, Zhang, Xing Z wrote:

Hi Kan:
  When I implemented NVRAM, I found there were many different shutdown paths
 for an HVM domain, so I added the hook on each flow. 
  Maybe the Xend code has since merged some shutdown paths.  You can try 
 removing it to see whether NVRAM still works.  If it works fine, I think it is removable. Thx.

Good good study,day day up ! ^_^
-Wing(zhang xin)

OTC,Intel Corporation
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
Behalf Of Masaki Kanno
Sent: January 31, 2008 9:19
To: xen-ia64-devel@lists.xensource.com
Subject: [Xen-ia64-devel] [Q] About xc.domain_destroy_hook

Hi,

I have a question.

XendDomainInfo.py:
    def shutdown(self, reason):
        """Shutdown a domain by signalling this via xenstored."""
        log.debug('XendDomainInfo.shutdown(%s)', reason)
        <snip>
        # HVM domain shuts itself down only if it has PV drivers
        if self.info.is_hvm():
            hvm_pvdrv = xc.hvm_get_param(self.domid,
                                         HVM_PARAM_CALLBACK_IRQ)
            if not hvm_pvdrv:
                code = REVERSE_DOMAIN_SHUTDOWN_REASONS[reason]
                xc.domain_destroy_hook(self.domid)    # here!
                log.info("HVM save:remote shutdown dom %d!",
                         self.domid)
                xc.domain_shutdown(self.domid, code)

[Q] This line is not needed, is it?


When I tested the xm shutdown command for an HVM domain, I saw the
following error message in xend-debug.log.

  Nvram save successful!
  ERROR Internal error: Save to nvram fail!
   (9 = Bad file descriptor)

The same message was also seen with the xm reboot command.

  Nvram save successful!
  ERROR Internal error: Save to nvram fail!
   (9 = Bad file descriptor)

I think that xc.domain_destroy_hook() is called twice.





[Xen-ia64-devel] [PATCH] Close nvram file when rebooting HVM domain

2007-12-10 Thread Masaki Kanno
Hi,

On ia64, when I rebooted an HVM domain, the nvram file for the HVM domain 
was not closed.  After I repeatedly rebooted the HVM domain, many of the 
nvram file descriptors were still open, as follows. 

# ps aux | grep xend
root  4215  0.0  0.2  81600 11840 ?S10:00   0:00 python 
/usr/sbin/xend start
root  4221  0.3  0.7 223664 29136 ?Sl   10:00   0:03 python 
/usr/sbin/xend start
root  5189  0.0  0.0  60432  1824 pts/0S+   10:17   0:00 grep xend
# ls -l /proc/4221/fd | grep nvram
lrwx------ 1 root root 64 Dec 11 10:05 23 -> /var/lib/xen/nvram/nvram_HVMdomain.1
lrwx------ 1 root root 64 Dec 11 10:14 24 -> /var/lib/xen/nvram/nvram_HVMdomain.1
lrwx------ 1 root root 64 Dec 11 10:14 25 -> /var/lib/xen/nvram/nvram_HVMdomain.1
lrwx------ 1 root root 64 Dec 11 10:16 26 -> /var/lib/xen/nvram/nvram_HVMdomain.1
lrwx------ 1 root root 64 Dec 11 10:08 4 -> /var/lib/xen/nvram/nvram_HVMdomain.1

This patch closes the nvram file for an HVM domain when the domain is 
rebooted. 


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



close_nvram_file.patch
Description: Binary data

Re: [Xen-ia64-devel] PATCH: remove is_vti

2007-12-03 Thread Masaki Kanno
Hi Tristan,

 I have two questions.

The general answer is: this is for backward compatibility.  I think we could
remove that in the future.

Thanks for your answer.

Best regards,
 Kan





Re: [Xen-ia64-devel] PATCH: remove is_vti

2007-12-02 Thread Masaki Kanno
Hi Tristan,

I have two questions.

diff -r 32ec5dbe2978 -r c5ffe6158794 xen/arch/ia64/xen/dom0_ops.c
--- a/xen/arch/ia64/xen/dom0_ops.c Fri Nov 30 08:54:33 2007 -0700
+++ b/xen/arch/ia64/xen/dom0_ops.c Mon Dec 03 06:45:23 2007 +0100
@@ -89,7 +89,7 @@ long arch_do_domctl(xen_domctl_t *op, XE
 
     if (ds->flags & XEN_DOMAINSETUP_query) {
         /* Set flags.  */
-        if (d->arch.is_vti)
+        if (is_hvm_domain (d))
             ds->flags |= XEN_DOMAINSETUP_hvm_guest;
         /* Set params.  */
         ds->bp = 0;       /* unknown.  */
@@ -104,12 +104,13 @@ long arch_do_domctl(xen_domctl_t *op, XE
         ret = -EFAULT;
     }
     else {
-        if (ds->flags & XEN_DOMAINSETUP_hvm_guest) {
+        if (is_hvm_domain (d)
+            || (ds->flags & XEN_DOMAINSETUP_hvm_guest)) {

Why should we check both flags?
Is the XEN_DOMAINSETUP_hvm_guest flag for the xm restore command?


             if (!vmx_enabled) {
                 printk("No VMX hardware feature for vmx domain.\n");
                 ret = -EINVAL;
             } else {
-                d->arch.is_vti = 1;
+                d->is_hvm = 1;

Why should we set the is_hvm flag?
I think that the is_hvm flag was already set in domain_create().


                 xen_ia64_set_convmem_end(d, ds->maxmem);
                 ret = vmx_setup_platform(d);
             }

Best regards,
 Kan

Mon, 3 Dec 2007 06:46:19 +0100, [EMAIL PROTECTED] wrote:

Hi,

in fact is_vti is a duplicate of is_hvm.  This patch removes the ia64 is_vti
flag.

Tristan.





[Xen-ia64-devel] [PATCH][RFC] Fix error message for xm create command

2007-08-27 Thread Masaki Kanno
Hi,

When I tested the xm create command, I saw the following error message. 
I expected the error message Error: (12, 'Cannot allocate memory') 
because I intentionally caused a memory shortage in the test, 
but the error message was different from my expectation. 

# xm create /xen/HVMdomain.1
Using config file /xen/HVMdomain.1.
Error: an integer is required

I looked at xend.log to examine why the wrong error message was 
shown.  (Could you see the attached xend.log?) 
xend had the error message Error: (12, 'Cannot allocate memory') 
first, but changed it to Error: an integer is required partway 
through.  I'm not sure why an exception occurred in the logging 
processing.  But when I applied the attached patch, I confirmed that 
the error message I expected was shown.  The patch does not 
call xc.domain_destroy_hook() if self.domid is None. 

Could you comment on the patch?  I'd like to solve this problem 
because I think that users want the correct error message. 


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



xend.log
Description: Binary data


domain_destroy_hook.patch
Description: Binary data

Re: [Xen-ia64-devel] [BUG]Dom0 panic if dom0_mem field is notexplicitly set in elilo.conf

2007-08-23 Thread Masaki Kanno
Hi Wing,

No, I haven't seen it. 

[EMAIL PROTECTED] ~]# xm info
host   : tiger201.soft.fujitsu.com
release: 2.6.18-xen
version: #1 SMP Fri Aug 17 19:38:19 JST 2007
machine: ia64
nr_cpus: 16
nr_nodes   : 1
sockets_per_node   : 4
cores_per_socket   : 2
threads_per_core   : 2
cpu_mhz: 797
hw_caps: 
::::::::
total_memory   : 8166
free_memory: 4020
node_to_cpu: node0:0-15
xen_major  : 3
xen_minor  : 0
xen_extra  : -unstable
xen_caps   : xen-3.0-ia64 xen-3.0-ia64be hvm-3.0-ia64 
xen_scheduler  : credit
xen_pagesize   : 16384
platform_params: virt_start=0xf000
xen_changeset  : Thu Aug 16 13:46:50 2007 -0600 15760:049d4baa9965
cc_compiler: gcc version 4.1.1 20070105 (Red Hat 4.1.1-52)
cc_compile_by  : root
cc_compile_domain  : soft.fujitsu.com
cc_compile_date: Mon Aug 20 15:45:26 JST 2007
xend_config_format : 4
[EMAIL PROTECTED] ~]# cat /boot/efi/efi/redhat/elilo.conf 
prompt
timeout=20
#default=linux
#default=kanno
default=kanno-test
relocatable

image=vmlinuz-2.6.18-8.el5
        label=linux
        initrd=initrd-2.6.18-8.el5.img
        read-only
        append="rhgb quiet root=LABEL=/"

image=../xen/vmlinuz-2.6.18-xen-kanno
        label=kanno
        vmm=../xen/xen.gz-kanno
        initrd=../xen/initrd-2.6.18-xen.img-kanno
        read-only
        append="loglvl=all guest_loglvl=all -- 3 console=tty0 rhgb quiet root=LABEL=/"

image=../xen/vmlinuz-2.6.18-xen-kanno
        label=kanno-test
        vmm=../xen/xen.gz-kanno-test
        initrd=../xen/initrd-2.6.18-xen.img-kanno
        read-only
        append="loglvl=all guest_loglvl=all -- 3 console=tty0 rhgb quiet root=LABEL=/"

Best regards,
 Kan


Thu, 23 Aug 2007 13:36:01 +0800, Zhang, Xing Z wrote:

Hi All:
   I found that with the current changeset, if you don't explicitly set
the dom0_mem field in elilo.conf, a kernel panic is raised.  The error
info is shown below.
   I am not sure which changeset introduced this bug, but CSET15410 is
fine.  Did anyone meet this?


PID hash table entries: 4096 (order: 12, 32768 bytes)
Console: colour VGA+ 80x25
Kernel panic - not syncing: Failed to setup Xen contiguous region
 1Unable to handle kernel NULL pointer dereference (address
)
swapper[0]: Oops 11012296146944 [1]
Modules linked in:

Pid: 0, CPU 0, comm:  swapper
psr : 1210084a2010 ifs : 8389 ip  : [a001001300d1]
Not
tainted
ip is at kmem_cache_alloc+0x131/0x2e0
unat:  pfs : 4793 rsc : 0007
rnat:  bsps:  pr  : 5989
ldrs:  ccv :  fpsr: 0009804c8a70433f
csd :  ssd : 
b0  : a0010004a2c0 b6  : a0010004b0a0 b7  : a00100014d70
f6  : 1003e5b51e7d1b1a1644c f7  : 1003e9e3779b97f4a7c16
f8  : 1003e0a0010003c32 f9  : 1003e007f
f10 : 1003e0379 f11 : 1003e6db6db6db6db6db7
r1  : a00101146090 r2  : 0001 r3  : fff1
r8  : fff04c18 r9  :  r10 : 0001
r11 :  r12 : a00100d0f6a0 r13 : a00100d08000
r14 : 0001 r15 : fff1 r16 : 
r17 : 1200 r18 : a00100d08018 r19 : a00100d0f72c
r20 : a00100d0f728 r21 :  r22 : 
r23 : a00100d08f24 r24 : a0010003c300 r25 : a001
r26 : a00100d93aa0 r27 : a00100f47600 r28 : a00100d8dff0
r29 : a00100d8dfe0 r30 : 01eb r31 : 0003c340





[PATCH] Fix libxc and pm_timer (Was: [Xen-ia64-devel] Maybe domain_destroy() was not called?)

2007-08-23 Thread Masaki Kanno
Tue, 21 Aug 2007 09:27:45 +0900, Masaki Kanno wrote:

Hi all,

I tested the xm create command with the latest xen-ia64-unstable and the 
attached patch.  The attached patch intentionally causes a contiguous 
memory shortage in VHPT allocation for an HVM domain.  With this test, 
I wanted to confirm that the release of domain resources works 
correctly when HVM domain creation fails, but I could not confirm it. 
domain_destroy() did not seem to be called. 
The following messages are the result of the test.  A different RID 
was allocated whenever I created an HVM domain. 
Where do you think the bug hides? 

 (XEN) domain.c:546: arch_domain_create:546 domain 1 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002fd35 hash_size 512
 (XEN) regionreg.c:193: ### domain f40fc080: rid=8-c mp_rid
=2000
 (XEN) domain.c:583: arch_domain_create: domain=f40fc080
 (XEN) vpd base: 0xf7be, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 2 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f6f8c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f4109380: rid=c-10 
mp_rid=3000
 (XEN) domain.c:583: arch_domain_create: domain=f4109380
 (XEN) vpd base: 0xf7b9, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 3 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f676c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f7bf1380: rid=10-14 
mp_rid=4000
 (XEN) domain.c:583: arch_domain_create: domain=f7bf1380
 (XEN) vpd base: 0xf7b5, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt


Hi,

I found two bugs in this problem. 

Bug.1:
 copy_from_GFW_to_nvram() in libxc forgot to munmap() when the NVRAM data 
 was invalid.  It also forgot free() and close(). 
 Bug.1 is solved by munmap_nvram_page.patch. 

I tried the test again after Bug.1 was solved, but the hypervisor 
panicked during the test.  The following messages are the result of the 
test. 

(XEN) domain.c:546: arch_domain_create:546 domain 2 pervcpu_vhpt 1
(XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
(XEN) tlb_track.c:115: hash 0xf002fad0 hash_size 512
(XEN) regionreg.c:193: ### domain f40fc080: rid=8-c mp_rid=2000
(XEN) domain.c:583: arch_domain_create: domain=f40fc080
(XEN) *** xen_handle_domain_access: exception table lookup failed, 
iip=0xf403f530, addr=0x0, spinning...
(XEN) d 0xf7c5c080 domid 0
(XEN) vcpu 0xf7c4 vcpu 0
(XEN) 
(XEN) CPU 0
(XEN) psr : 101008226018 ifs : 858d ip  : [f403f530]
(XEN) ip is at timer_softirq_action+0x170/0x2e0
(XEN) unat:  pfs : 058d rsc : 0003
(XEN) rnat: 4000 bsps: f7c47e20 pr  : 006a9969
(XEN) ldrs:  ccv :  fpsr: 0009804c0270033f
(XEN) csd :  ssd : 
(XEN) b0  : f403f4f0 b6  : f4038b80 b7  : a00100018570
(XEN) f6  : 1003e01b932157960 f7  : 1003e000281bd3682
(XEN) f8  : 0 f9  : 0
(XEN) f10 : 0 f11 : 0
(XEN) r1  : f438ca40 r2  : 007da3766757 r3  : f7c47fe8
(XEN) r8  : 0001 r9  :  r10 : 
(XEN) r11 : 0009804c0270033f r12 : f7c47e00 r13 : f7c4
(XEN) r14 :  r15 : f40fc9b0 r16 : 0001
(XEN) r17 : f7ceaf18 r18 : 0002 r19 : 0001
(XEN) r20 : f7ceb508 r21 : f40fc9b8 r22 : 0001
(XEN) r23 : 0001 r24 : f7ceaf18 r25 : f7c47e28
(XEN) r26 :  r27 :  r28 : 
(XEN) r29 :  r30 :  r31 : f4400100
(XEN) 
(XEN) Call Trace:
(XEN)  [f40af150] show_stack+0x80/0xa0
(XEN) sp=f7c478b0 bsp=f7c41668
(XEN)  [f4087640] panic_domain+0x120/0x170
(XEN) sp=f7c47a80 bsp=f7c41600
(XEN)  [f407ada0] ia64_do_page_fault+0x6b0/0x6c0
(XEN) sp=f7c47bc0 bsp=f7c41568
(XEN)  [f40a7f40] ia64_leave_kernel+0x0/0x300
(XEN) sp=f7c47c00 bsp=f7c41568
(XEN)  [f403f530] timer_softirq_action+0x170/0x2e0
(XEN) sp=f7c47e00 bsp=f7c41500

RE: [PATCH] Fix libxc and pm_timer (Was: [Xen-ia64-devel] Maybe domain_destroy() was not called?)

2007-08-23 Thread Masaki Kanno
Hi Wing,

Thanks for your review and reply. 

Best regards,
 Kan

Fri, 24 Aug 2007 09:47:43 +0800, Zhang, Xing Z wrote:

 

-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf 
Of Masaki Kanno
Sent: August 24, 2007 0:19
To: xen-ia64-devel@lists.xensource.com
Subject: [PATCH] Fix libxc and pm_timer (Was: [Xen-ia64-devel] 
Maybe domain_destroy() was not called?)

Tue, 21 Aug 2007 09:27:45 +0900, Masaki Kanno wrote:

Hi all,

I tested the xm create command with the latest xen-ia64-unstable and the 
attached patch.  The attached patch intentionally causes a contiguous 
memory shortage in VHPT allocation for an HVM domain.  With this test, 
I wanted to confirm that the release of domain resources works 
correctly when HVM domain creation fails, but I could not confirm it. 
domain_destroy() did not seem to be called. 
The following messages are the result of the test.  A different RID 
was allocated whenever I created an HVM domain. 
Where do you think the bug hides? 

 (XEN) domain.c:546: arch_domain_create:546 domain 1 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002fd35 hash_size 512
 (XEN) regionreg.c:193: ### domain f40fc080: 
rid=8-c mp_rid
=2000
 (XEN) domain.c:583: arch_domain_create: domain=f40fc080
 (XEN) vpd base: 0xf7be, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 2 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f6f8c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f4109380: rid=c-10 
mp_rid=3000
 (XEN) domain.c:583: arch_domain_create: domain=f4109380
 (XEN) vpd base: 0xf7b9, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 3 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f676c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f7bf1380: 
rid=10-14 
mp_rid=4000
 (XEN) domain.c:583: arch_domain_create: domain=f7bf1380
 (XEN) vpd base: 0xf7b5, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt


Hi,

I found two bugs in this problem. 

Bug.1:
 copy_from_GFW_to_nvram() in libxc forgot to munmap() when the NVRAM data 
 was invalid.  It also forgot free() and close(). 
 Bug.1 is solved by munmap_nvram_page.patch. 

I tried the test again after Bug.1 was solved, but the hypervisor 
panicked during the test.  The following messages are the result of the 
test. 

Thanks for correcting me.  This case reminds me again how important 
careful error handling is.
I will stay vigilant in my future coding.  Thanks again.




Re: [Xen-ia64-devel] [PATCH] Fix unaligned reference of QEMU

2007-08-21 Thread Masaki Kanno
Hi Duan,

I think that you should post the patch to xen-devel too.

Best regards,
 Kan

Tue, 21 Aug 2007 14:58:21 +0800, Duan, Ronghui wrote:

Hi

In the current changeset, qemu copies data using memcpy_words, which
copies 4 bytes at a time if the length is larger than 4. This causes
unaligned references and leads to a performance downgrade. The issue was
met in the rtl8139 emulator. This patch fixes it.

 

Send package from guest:

  Version    size    speed     time
  Original   226MB   1.3MB/s   2:54
  memcpy     226MB   2.8MB/s   1:22

Receive package:

  Version    size    speed     time
  Original   226MB   1.7MB/s   2:10
  memcpy     226MB   6.0MB/s   0:38

 

Thanks

 

Signed-off-by: Duan Ronghui   [EMAIL PROTECTED]

 

 






[Xen-ia64-devel] Maybe domain_destroy() was not called?

2007-08-20 Thread Masaki Kanno
Hi all,

I tested the xm create command with the latest xen-ia64-unstable and the 
attached patch.  The attached patch intentionally causes a contiguous 
memory shortage in VHPT allocation for an HVM domain.  With this test, 
I wanted to confirm that the release of domain resources works 
correctly when HVM domain creation fails, but I could not confirm it. 
domain_destroy() did not seem to be called. 
The following messages are the result of the test.  A different RID 
was allocated whenever I created an HVM domain. 
Where do you think the bug hides? 

 (XEN) domain.c:546: arch_domain_create:546 domain 1 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002fd35 hash_size 512
 (XEN) regionreg.c:193: ### domain f40fc080: rid=8-c mp_rid=2000
 (XEN) domain.c:583: arch_domain_create: domain=f40fc080
 (XEN) vpd base: 0xf7be, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 2 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f6f8c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f4109380: rid=c-10 
mp_rid=3000
 (XEN) domain.c:583: arch_domain_create: domain=f4109380
 (XEN) vpd base: 0xf7b9, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt
 (XEN) domain.c:546: arch_domain_create:546 domain 3 pervcpu_vhpt 1
 (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
 (XEN) tlb_track.c:115: hash 0xf002f676c000 hash_size 512
 (XEN) regionreg.c:193: ### domain f7bf1380: rid=10-14 
mp_rid=4000
 (XEN) domain.c:583: arch_domain_create: domain=f7bf1380
 (XEN) vpd base: 0xf7b5, vpd size:65536
 (XEN) No enough contiguous memory(16384KB) for init_domain_vhpt

Best regards,
 Kan



test_vhpt.patch
Description: Binary data

Re: [Xen-ia64-devel] [RFC] Extended I/O port space support

2007-06-07 Thread Masaki Kanno
Hi Alex,

I have a small comment.  The patch mixes tabs and white space for 
indentation.  It affects six lines.  I commented on them in the patch. 

Best regards,
 Kan


diff -r 0cf6b75423e9 linux-2.6-xen-sparse/arch/ia64/pci/pci.c
--- a/linux-2.6-xen-sparse/arch/ia64/pci/pci.c Mon Jun 04 14:17:54 2007 -0600
+++ b/linux-2.6-xen-sparse/arch/ia64/pci/pci.c Thu Jun 07 11:38:41 2007 -0600
@@ -164,6 +164,11 @@ new_space (u64 phys_base, int sparse)
   i = num_io_spaces++;
   io_space[i].mmio_base = mmio_base;
   io_space[i].sparse = sparse;
+
+#ifdef CONFIG_XEN
+  if (is_initial_xendomain())
+  HYPERVISOR_add_io_space(phys_base, sparse, i);
+#endif
 
   return i;
 }
diff -r 0cf6b75423e9 linux-2.6-xen-sparse/include/asm-ia64/hypercall.h
--- a/linux-2.6-xen-sparse/include/asm-ia64/hypercall.h	Mon Jun 04 14:17:54 2007 -0600
+++ b/linux-2.6-xen-sparse/include/asm-ia64/hypercall.h	Thu Jun 07 08:37:01 2007 -0600
@@ -387,6 +387,15 @@ xencomm_arch_hypercall_fpswa_revision(st
 {
   return _hypercall2(int, ia64_dom0vp_op,
  IA64_DOM0VP_fpswa_revision, arg);
+}
+
+static inline int
+HYPERVISOR_add_io_space(unsigned long phys_base,
+  unsigned long sparse,
+  unsigned long space_number)
+{
+  return _hypercall4(int, ia64_dom0vp_op, IA64_DOM0VP_add_io_space,
+ phys_base, sparse, space_number);
 }
 
 // for balloon driver
diff -r 0cf6b75423e9 xen/arch/ia64/xen/dom0_ops.c
--- a/xen/arch/ia64/xen/dom0_ops.c Mon Jun 04 14:17:54 2007 -0600
+++ b/xen/arch/ia64/xen/dom0_ops.c Thu Jun 07 15:52:28 2007 -0600
@@ -363,6 +363,40 @@ dom0vp_fpswa_revision(XEN_GUEST_HANDLE(u
 return 0;
 }
 
+static unsigned long
+dom0vp_add_io_space(struct domain *d, unsigned long phys_base,
+unsigned long sparse, unsigned long space_number)
+{
+unsigned int fp, lp;
+
+printk("%s(%d, 0x%016lx, %ssparse, %d)\n", __func__, d->domain_id,
+   phys_base, sparse ? "" : "not-", space_number);
+/*
+ * Registering new io_space roughly based on linux
+ * arch/ia64/pci/pci.c:new_space()
+ */
+if (phys_base == 0)
+return 0;  /* legacy I/O port space */
+
+/*
+ * We may need a valid bit in the io_space struct so we
+ * can initialize these more asynchronously.  But this makes
+ * sure we register spaces in lock step with dom0.
+ */
+if (space_number > MAX_IO_SPACES || space_number != num_io_spaces)
+return -EINVAL;
+
+io_space[space_number].mmio_base = phys_base;
+io_space[space_number].sparse = sparse;
+
+num_io_spaces++;
+
+fp = space_number << IO_SPACE_BITS;
+lp = fp | 0x;
+
+return ioports_permit_access(d, fp, lp);
+}
+
 unsigned long
 do_dom0vp_op(unsigned long cmd,
  unsigned long arg0, unsigned long arg1, unsigned long arg2,
@@ -419,6 +453,9 @@ do_dom0vp_op(unsigned long cmd,
 ret = dom0vp_fpswa_revision(hnd);
 break;
 }
+case IA64_DOM0VP_add_io_space:
+ret = dom0vp_add_io_space(d, arg0, arg1, arg2);
+break;
 default:
 ret = -1;
   printk(unknown dom0_vp_op 0x%lx\n, cmd);
diff -r 0cf6b75423e9 xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.c   Mon Jun 04 14:17:54 2007 -0600
+++ b/xen/arch/ia64/xen/mm.c   Thu Jun 07 15:55:58 2007 -0600
@@ -886,69 +886,123 @@ assign_domain_page(struct domain *d,
 }
 
 int
-ioports_permit_access(struct domain *d, unsigned long fp, unsigned long lp)
-{
+ioports_permit_access(struct domain *d, unsigned int fp, unsigned int lp)
+{
+struct io_space *space;
+unsigned long mmio_start, mmio_end, mach_start;
 int ret;
-unsigned long off;
-unsigned long fp_offset;
-unsigned long lp_offset;
-
+
+printk("%s(%d, 0x%x, 0x%x)\n", __func__, d->domain_id, fp, lp);
+
+if (IO_SPACE_NR(fp) >= num_io_spaces) {
+dprintk(XENLOG_WARNING, "Unknown I/O Port range 0x%x - 0x%x\n", fp, lp);
+return -EFAULT;
+}
+
+/*
+ * The ioport_cap rangeset tracks the I/O port address including
+ * the port space ID.  This means port space IDs need to match
+ * between Xen and the guest.  This is also a requirement because
+ * the hypercall to pass these port ranges only uses a u32.
+ */
 ret = rangeset_add_range(d->arch.ioport_caps, fp, lp);
 if (ret != 0)
 return ret;
 
-/* Domain 0 doesn't virtualize IO ports space. */
-if (d == dom0)
+space = &io_space[IO_SPACE_NR(fp)];
+
+/* Domain 0 doesn't virtualize legacy IO ports space. */
+if (d == dom0 && space == &io_space[0])
 return 0;
 
-fp_offset = IO_SPACE_SPARSE_ENCODING(fp) & ~PAGE_MASK;
-lp_offset = PAGE_ALIGN(IO_SPACE_SPARSE_ENCODING(lp));
-
-for (off = fp_offset; off <= lp_offset; off += PAGE_SIZE)
-(void)__assign_domain_page(d, IO_PORTS_PADDR + off,
-   __pa(ia64_iobase) + off, ASSIGN_nocache);
+fp = IO_SPACE_PORT(fp);
+

Re: [Xen-ia64-devel] Small fix for vpd size

2007-05-17 Thread Masaki Kanno
Hi Xiantao,

What is the version number of the new PAL? 
Does the old PAL need 128K for the VPD?

Best regards,
 Kan

New pal has fixed vpd size issue, so change its size to 64K. 
Xiantao 

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel




RE: [Xen-ia64-devel] Small fix for vpd size

2007-05-17 Thread Masaki Kanno
Hi Xiantao,

Thank you. That makes me feel relieved. 

 Kan

Hi Kan,
   This issue is just in an early-stage PAL for internal use, and it 
doesn't exist in any release version, so you can use it safely with 
any version you got. 
Xiantao



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] [PATCH][RFC] allow conflicted rid allocation for 64 domain

2007-05-14 Thread Masaki Kanno
Hi Isaku,

We have an environment for creating 60 domains, but we do not have 
an environment for creating more domains than that. 
Could you give us time to prepare?  We will try your patches. 

Best regards,
 Kan and Fujitsu-team

It seems that we're approaching 64 domain limit.
Here's the experimental patch which removes it.

I don't think that just reducing rid bits given to guest OS is 
enough because presumably we need domains more than 128, 256 or
512 ... eventually.
So my approach is to allow rid regions to conflict.
I haven't test it with 64 domains because my environment
doesn't have enough memory to boot so many domains.
But I tested it with dom_rid_bits=23 xen boot option.

Thanks.
-- 
yamahata

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel




Re: [Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed

2007-05-11 Thread Masaki Kanno
Hi Tristan,

On Fri, May 11, 2007 at 09:06:11AM +0800, Xu, Anthony wrote:
 If we would like to support many domains and many vcpus,
 I think that we should expand xenheap.
 I think that the simplest method is changing PS of ITR[0]
 and DTR[0] to 256M byte.  Do you have good ideas?
 
 Agree, we should expand xenheap if we want to support more domain/vcpu.
IIRC xenheap is 64MB now.  Does it mean that each domain uses about 1MB ?
(Seems to be big).  Or the initial allocation is important ?

Sorry, my explanation was not enough.
When I tested creating 60 UP domains, about 33 MB of xenheap 
remained. 
I think that we can probably create 60 SMP domains with 4 vcpus 
each, but I have not tested it.

Thanks,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] [PATCH] Return ENOMEM if VPD allocation failed

2007-05-10 Thread Masaki Kanno
Hi,

Usually ASSERT() is (void)0.  Therefore, if VPD allocation 
fails due to xenheap shortage or fragmentation, a NULL pointer 
access occurs in vmx_final_setup_guest(). 
This patch fixes it. 


BTW, I succeeded in creating 60 UP domains.  But I failed to 
create many SMP domains due to xenheap shortage.  I failed 
in the following environment: 
 - 55 domains, and
 - each with 5 vcpus

If we would like to support many domains and many vcpus, 
I think that we should expand the xenheap. 
I think that the simplest method is changing the PS of ITR[0] 
and DTR[0] to 256M bytes.  Do you have better ideas?


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



alloc_vpd.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH] Fix allocate_rid_range()

2007-05-10 Thread Masaki Kanno
Hi,

I found a bug in allocate_rid_range().  Even though there is a free 
entry in ridblock_owner[], allocate_rid_range() cannot allocate it.


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



alloc_rid.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel][Patch] Add two PAL calls which fix SMP windows installation crashing bug

2007-03-26 Thread Masaki Kanno
Hi Wing,

Is the bug 928 solved by this patch?
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=928

Best regards,
 Kan

Add two PAL calls -- PAL_LOGICAL_TO_PHYSICAL and PAL_FIXED_ADDR.

These PAL calls are invoked by SMP windows installation code.

The patch fix SMP windows installation crashing bug.

 

Signed-off-by, Xu, Anthony  [EMAIL PROTECTED] 

Signed-off-by, Zhang Xin  [EMAIL PROTECTED] 

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation

 


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel




[Xen-ia64-devel] [PATCH] Improve error message when HVM domain creation failed

2007-03-21 Thread Masaki Kanno
Hi,

Currently, when the VHPT or vTLB cannot be allocated for an HVM 
domain, the xm create command shows the following error message 
because the return value of vmx_final_setup_guest() is -1 (EPERM).

# xm create /xen/HVMdomain.1
Using config file /xen/HVMdomain.1.
Error: (1, 'Operation not permitted')


This patch changes the return value of the following functions 
to -12 (ENOMEM).  Therefore the error message is changed to 
"Cannot allocate memory". 

 - vmx_final_setup_guest()
 - init_domain_tlb()
 - init_domain_vhpt()

# xm create /xen/HVMdomain.1
Using config file /xen/HVMdomain.1.
Error: (12, 'Cannot allocate memory')


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



improve_error_message.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH][Bug 900] Fix xc_vcpu_{set/get}affinity

2007-02-15 Thread Masaki Kanno
Hi,

I fixed the Xen-bugzilla No.900.

http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=900


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



xc_vcpu_affinity.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] Re: [Xen-devel] [PATCH][Bug 900] Fix xc_vcpu_{set/get}affinity

2007-02-15 Thread Masaki Kanno
Hi,

Could you apply this patch? Or do you have comments?
If this issue is not solved, Xen/ia64 will not be able to set affinity 
for virtual CPUs in Xen 3.0.5. 

Best regards,
 Kan

Hi,

I fixed the Xen-bugzilla No.900.

http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=900


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan


___
Xen-devel mailing list
[EMAIL PROTECTED]
http://lists.xensource.com/xen-devel


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH][Bug 900] Fix xc_vcpu_{set/get}affinity

2007-02-15 Thread Masaki Kanno
Hi Alex,

Thanks for your information. That makes me feel relieved. 

Best regards,
 Kan

On Fri, 2007-02-16 at 14:06 +0900, Masaki Kanno wrote:
 Hi,
 
 Could you apply this patch? Or do you have comments?
 If this issue is not solved, Xen/ia64 will not be able to set affinity 
 for virtual CPUs in Xen 3.0.5. 

   It's in the xen-unstable.hg staging tree, should show up whenever
that gets unstuck again.

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] [PATCH 3/4][IA64][HVM] Windows crashdump support (take 2): Linux arch_ia64 side

2007-02-12 Thread Masaki Kanno
[3/4] Linux arch_ia64 side. 



xm_trigger.linux_arch_ia64.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH 2/4][IA64][HVM] Windows crashdump support (take 2): Xen arch_ia64 side

2007-02-12 Thread Masaki Kanno
[2/4] Xen arch_ia64 side. 



xm_trigger.xen_arch_ia64.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH 0/4][IA64][HVM] Windows crashdump support (take 2)

2007-02-12 Thread Masaki Kanno
Hi,

I changed these patches to the following command syntax.  Currently, 
only INIT is available on Xen/ia64. 

  xm trigger Domain nmi|reset|init [VCPU]


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]
Signed-off-by: Akio Takebe [EMAIL PROTECTED]
Signed-off-by: Zhang Xin [EMAIL PROTECTED]

Best regards,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-24 Thread Masaki Kanno

On Wed, Jan 24, 2007 at 02:27:42PM +0900, Masaki Kanno wrote:
[...]
 Hi Tristan and Keir and all,
 
 Thanks for your idea and comments.
 I will remake and resend these patches in the following command syntax. 
 
  xm trigger Domain VCPU init|reset|nmi
 xm trigger Domain init|reset|nmi [VCPU]
is slightly better.  By default VCPU is 0.

Okay, I will do that.

 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-23 Thread Masaki Kanno

On 23/1/07 2:40 am, Masaki Kanno [EMAIL PROTECTED] wrote:

 On an ia64 machine, there is an INIT switch.  Pushing the INIT
 switch generates an INIT interrupt, and the INIT interrupt
 triggers Windows to collect a crashdump.  (I attach a screenshot
 of the Windows crashdump.)

Does ia64 Windows generate a dump if you send it an NMI? That would be a
more generic mechanism that would allow x86 HVM to share the same domctl or
hvm_op. Otherwise we need a send_init hypercall for ia64 and a send_nmi
hypercall for x86, or we need a more vague generic name like
trigger_dump_switch (which actually is rather attractive now I think about
it).

Hi Keir,

No, ia64 Windows does not generate the crashdump on an NMI. 
On an ia64 machine, the NMI interrupt and the INIT interrupt 
are generated by different mechanisms. 
I do not insist on the name 'send_init'.  Rather, I think that 
'trigger_dump_switch', which you named, is better. 

Best regards,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-23 Thread Masaki Kanno

 It just seems a lot of plumbing for something not very useful, at least to
 users. You can dump memory via 'xm dump' after all if that's your goal (or
 will be able to when Isaku's patches go in). Would anyone else in the ia64
 community like to speak up for these patches?
 
 I think there is a big difference between 'xm dump' and a dump done by 
 domU.  Detailed error analysis might be possible only with a dump taken 
 via domU, as the dump analysis tools will in most cases work only on the 
 domU-specific dump format.
 Outside the Linux world :-) (e.g. in most UNIX flavors) crash dump analysis
 is very useful!
 I would strongly support Masaki's patches.

If that's the aim then perhaps the xm command should be given a better name.
'xm os-init' is rather meaningless outside the context of physical INIT
buttons on ia64 systems -- not a very generic concept to export to a VM
management tool! Perhaps 'xm os-dump' with a guest-specific backend
implementation in xend (default to fail with an error message).

Hi,

Thanks for your idea. 
I have also thought about more generic command names.  I enumerate 
them, together with your idea, below.

 A) xm os-dump
 B) xm dump-trigger
 C) xm dump-core --trigger
 D) xm dump-switch

Which command name do you prefer?

Best regards,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-23 Thread Masaki Kanno

On Tue, Jan 23, 2007 at 08:06:23AM +, Keir Fraser wrote:
 On 23/1/07 2:40 am, Masaki Kanno [EMAIL PROTECTED] wrote:
 
  On an ia64 machine, there is an INIT switch.  Pushing the INIT
  switch generates an INIT interrupt, and the INIT interrupt
  triggers Windows to collect a crashdump.  (I attach a screenshot
  of the Windows crashdump.)
 
 Does ia64 Windows generate a dump if you send it an NMI? That would be a
 more generic mechanism that would allow x86 HVM to share the same domctl or
 hvm_op. Otherwise we need a send_init hypercall for ia64 and a send_nmi
 hypercall for x86, or we need a more vague generic name like
 trigger_dump_switch (which actually is rather attractive now I think 
 about
 it).
I support the feature too.

I think being able to post an NMI/INIT is a good feature.  It allows you
to test the feature if your bare hardware doesn't have it.


Hi Tristan,

Thanks for your comments. 

Maybe os-init is not the best name.

Maybe os-init is not the best command name, as you say.  If you have an 
idea for a command name, could you send it?

Maybe the balance between code in the hypervisor (very small) and code in xm
(larger) is not very good, but difficult to improve!

I examined it myself.  I should have checked the target domain type (HVM 
or PV) in the hypervisor.  I will send a patch that moves the target-domain 
check into the hypervisor.  It should not be difficult to improve.

Best regards,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-23 Thread Masaki Kanno

On Tue, Jan 23, 2007 at 04:36:57PM +, Keir Fraser wrote:
 On 23/1/07 16:32, Tristan Gingold [EMAIL PROTECTED] wrote:
 
  Maybe os-init is not the best command name as you say. If you have idea
  of command name, could you send it?
  something like
  xm trigger init|reset|nmi
 
 This could be acceptable, mapping to a domctl command with type enumeration
 argument. Specifying vcpu would indeed make sense. It's not clear that such
 a flexible interface would really be needed (maybe xm os-dump vcpu would
 suffice, mapping to the usual 'hardware dump switch' method for the
 particular architecture?) but it's better than 'xm os-init' which I
 definitely dislike, and maybe the extra flexibility could turn out to be
 useful.
For sure this is an expert-only command.
I don't really like the xm os-dump name, because the action of init/nmi is
up to the domain.  It may or may not dump.

Hi Tristan and Keir and all,

Thanks for your idea and comments.
I will remake and resend these patches in the following command syntax. 

 xm trigger Domain VCPU init|reset|nmi

Best regards,
 Kan



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] [PATCH 5/5][IA64][HVM] Windows crashdump support: Tools arch_ia64 side

2007-01-22 Thread Masaki Kanno
[5/5] Tools arch_ia64 side. 



xm_osinit.tools_arch_ia64.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH 2/5][IA64][HVM] Windows crashdump support: Xen arch_ia64 side

2007-01-22 Thread Masaki Kanno
[2/5] Xen arch_ia64 side. 



xm_osinit.xen_arch_ia64.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH 1/5][IA64][HVM] Windows crashdump support: Xen common side

2007-01-22 Thread Masaki Kanno
[1/5] Xen common side. 



xm_osinit.xen_common.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH 3/5][IA64][HVM] Windows crashdump support: Linux arch_ia64 side

2007-01-22 Thread Masaki Kanno
[3/5] Linux arch_ia64 side. 



xm_osinit.linux_arch_ia64.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel] Xen crash when creating VTI in some machines.

2007-01-18 Thread Masaki Kanno
Hi Yongkang,

Hmmm... It is a strange issue; I think so, too. 
I am trying to reproduce it, but I have not been able to hit it 
so far. 
With my patch (changeset 13434), the creation of an HVM domain does not 
call pervcpu_vhpt_free().  Which function is at f7b30080?

Best regards,
 Kan


Hi all,

This is a strange issue, because some machines do not hit this problem, 
including our daily regular testing machine. 

But we found that Xen might crash when creating a VTI domain on 
some other machines.  The serial console keeps reporting:
...
(XEN)  [f4080300] pervcpu_vhpt_free+0x30/0x50
(XEN) sp=f7bffe00 bsp=
f7bf93e8
(XEN)  [f7b30080] ???
(XEN) sp=f7bffe00 bsp=
f7bf93e8
...


We found this issue at least in 13438 and 13465. Does anybody meet this 
issue too?

Best Regards,
Yongkang (Kangkang)


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel




RE: [Xen-ia64-devel] Xen crash when creating VTI in some machines.

2007-01-18 Thread Masaki Kanno
Hi Wing and Yongkang,

Oh! I am sorry that it was caused by my patch. 
I will review my patch. 

Best regards,
 Kan

Hi Kan,
   f7b30080 is an address belonging to free_domheap_pages().  I found 
that if your patch is reverted, the issue does not occur.  Could you 
review your patch?  Maybe something is missing. 

Good good study,day day up ! ^_^
-Wing(zhang xin)
 
OTC,Intel Corporation



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] Question about time_interpolator_get_counter() in RHEL4

2007-01-15 Thread Masaki Kanno
Hi Alex,

Could you comment on this issue? 

When we tested RHEL4 on an SMP domVTi, we met a hang of RHEL4. 
We examined this issue with xenctx.  As a result, the VCPUs of 
the domVTi seemed to be looping in the following functions: 
 - One VCPU was looping in time_interpolator_get_counter(). 
 - The other VCPUs were looping in fsys_gettimeofday(). 

When we were examining this issue further, we found your patch about 
time_interpolator_get_counter(). 

  [PATCH] optimize writer path in time_interpolator_get_counter()
  http://lkml.org/lkml/2005/8/1/134

May I ask a question, although I do not understand your patch well 
enough?  Do you think the issue that we met and the issue that your 
patch fixed are the same? 

BTW, RHEL4 with your patch applied passed our test! 

Best regards,
 Kan and Akio



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] [PATCH] Fix freeing of privregs structure for domVTi

2007-01-15 Thread Masaki Kanno
Hi,

I made a patch to modify initialization of VCPU of dom0/domU. 

1) This patch moved some processing out of vcpu_initialise() and 
   added a new function, vcpu_late_initialise(). 
   It executes the following initializations for the VCPUs of 
   dom0/domU: 
- Allocate the VHPT 
- Allocate the privregs area and assign its pages into the 
  guest pseudo-physical address space 
- Set the tlbflush_timestamp

   It is executed in the following sequence. 

   dom0:
 start_kernel()
   ->domain_create()
   ->alloc_vcpu(VCPU0)
 ->alloc_vcpu_struct(VCPU0)
 ->vcpu_initialise(VCPU0)
   ->vcpu_late_initialise(VCPU0)
 
   ->construct_dom0
 ->alloc_vcpu(other VCPUs)
   ->alloc_vcpu_struct(other VCPUs)
   ->vcpu_initialise(other VCPUs)
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   ->fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 ->arch_set_info_guest(other VCPUs)
   ->vcpu_late_initialise(other VCPUs)

   domU:
 do_domctl(XEN_DOMCTL_createdomain)
   ->domain_create()
 
 do_domctl(XEN_DOMCTL_max_vcpus)
   ->alloc_vcpu(all VCPUs)
 ->alloc_vcpu_struct(all VCPUs)
 ->vcpu_initialise(all VCPUs)
 
 do_domctl(XEN_DOMCTL_setvcpucontext)
   ->set_info_guest(VCPU0)
 ->arch_set_info_guest(VCPU0)
   ->vcpu_late_initialise(VCPU0)
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   ->fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 ->arch_set_info_guest(other VCPUs)
   ->vcpu_late_initialise(other VCPUs)


2) This patch modified domain_set_shared_info_va(). 
   Currently, initialization of arch.privregs->interrupt_mask_addr 
   of all VCPUs is executed in domain_set_shared_info_va(). 
   However, the allocation of the privregs area now happens later 
   because of the change in 1). 
   Therefore, this patch moved the initialization of 
   arch.privregs->interrupt_mask_addr into the following sequence. 

   dom0 and domU:
 ia64_hypercall(FW_HYPERCALL_SET_SHARED_INFO_VA)
   ->domain_set_shared_info_va()
 Initialize interrupt_mask_addr of VCPU0
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   ->fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 ->arch_set_info_guest(other VCPUs)
   ->vcpu_late_initialise(other VCPUs)
   Initialize interrupt_mask_addr of other VCPUs


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan

Hi kanno,

 
 I'll try it. Could you wait a few days?
 
That's ok.
Thanks for taking this

Anthony

Masaki Kanno wrote on January 12, 2007 16:18:
 Hi Anthony,
 
 Masaki Kanno wrote on January 11, 2007 16:24:
 Hi,
 
 Hi Kanno,
 
 Good catch!
 
 I have below comment.
 
 The root cause is that vhpt and privregs for xeno are allocated at
 the XEN_DOMCTL_max_vcpus hypercall; at that time d->arch.is_vti is not
 set yet. 
 When d->arch.is_vti is set, the vhpt and privregs allocated for xeno are
 released, and vhpt and privregs for VTI are allocated at this time.
 
 This logic is a little bit ugly.
 
 I also think so.
 
 Can we postpone vhpt and privregs allocation until d->arch.is_vti is
 set? 
 
 I'll try it. Could you wait a few days?
 
 Best regards,
  Kan
 
 One place is at xen/arch/ia64/xen/arch_set_info_guest(),
 
 if (d->arch.is_vti)
 vmx_final_setup_guest(v);
 else {
 /* TODO: we can move the vhpt and privregs logic for xeno here. */
 
 }
 
 What's your opinioin?
 
 Thanks,
 Anthony
 
 
 When I repeated creation and destruction of domVTi, I found a bug.
 It is memory leak of privregs structure.
 This patch fixes the bug.
 
 Signed-off-by: Masaki Kanno [EMAIL PROTECTED]
 
 Best regards,
  Kan


vcpu_initialise.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

RE: [Xen-ia64-devel] [PATCH] Fix freeing of privregs structure for domVTi

2007-01-15 Thread Masaki Kanno
Sorry, I forgot to attach the test results.

 - The test of booting Xen and booting a 16-CPU SMP dom0 passed. 
 - The repeated creation/destruction test of guest domains 
   passed, and I was not able to find a memory leak in the xenheap. 
  -- domU-UP
  -- domU-SMP(16 CPUs)
  -- domVTi-UP
  -- domVTi-SMP(16 CPUs)

Best regards,
 Kan

Hi,

I made a patch to modify initialization of VCPU of dom0/domU. 

1) This patch moved some processing from vcpu_initialise() and 
   added a new function vcpu_late_initialise(). 
   It executes the following initializations for VCPU of 
   dom0/domU. 
- Allocate the VHPT 
- Allocate the privregs area and assign these pages into 
  guest pseudo physical address space. 
- Set the tlbflush_timestamp.

   It is executed in the following sequence. 

   dom0:
 start_kernel()
   -domain_create()
   -alloc_vcpu(VCPU0)
 -alloc_vcpu_struct(VCPU0)
 -vcpu_initialise(VCPU0)
   -vcpu_late_initialise(VCPU0)
 
   -construct_dom0
 -alloc_vcpu(othe VCPUs)
   -alloc_vcpu_struct(other VCPUs)
   -vcpu_initialise(other VCPUs)
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   -fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 -arch_set_info_guest(other VCPUs)
   -vcpu_late_initialise(other VCPUs)

   domU:
 do_domctl(XEN_DOMCTL_createdomain)
   -domain_create()
 
 do_domctl(XEN_DOMCTL_max_vcpus)
   -alloc_vcpu(all VCPUs)
 -alloc_vcpu_struct(all VCPUs)
 -vcpu_initialise(all VCPUs)
 
 do_domctl(XEN_DOMCTL_setvcpucontext)
   -set_info_guest(VCPU0)
 -arch_set_info_guest(VCPU0)
   -vcpu_late_initialise(VCPU0)
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   -fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 -arch_set_info_guest(other VCPUs)
   -vcpu_late_initialise(other VCPUs)


2) This patch modified the domain_set_shared_info_va(). 
   Currently, initialization of arch.privregs-interrupt_mask_addr 
   of all VCPUs is executed in domain_set_shared_info_va(). 
   However, allocation of privregs area is late by modified of 1). 
   Therefore, this patch modified initialization of 
   arch.privregs-interrupt_mask_addr to the following sequence. 

   dom0 and domU:
 ia64_hypercall(FW_HYPERCALL_SET_SHARED_INFO_VA)
   -domain_set_shared_info_va()
 Initialize interrupt_mask_addr of VCPU0
 
 ia64_hypercall(FW_HYPERCALL_IPI)
   -fw_hypercall_ipi(XEN_SAL_BOOT_RENDEZ_VEC)
 -arch_set_info_guest(other VCPUs)
   -vcpu_late_initialise(other VCPUs)
   Initialize interrupt_mask_addr of other VCPUs


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan

Hi kanno,

 
 I'll try it. Could you wait a few days?
 
That's ok.
Thanks for taking this

Anthony

Masaki Kanno write on 2007年1月12日 16:18:
 Hi Anthiny,
 
 Masaki Kanno write on 2007年1月11日 16:24:
 Hi,
 
 Hi Kanno,
 
 Good catch!
 
 I have below comment.
 
  The root cause is that vhpt and privregs for xeno are allocated at
  the XEN_DOMCTL_max_vcpus hypercall, at which time d->arch.is_vti is
  not set yet. 
  When d->arch.is_vti is set, the vhpt and privregs allocated for xeno
  are released, and vhpt and privregs for VTi are allocated at this time.
 
 This logic is a little bit ugly.
 
 I also think so.
 
  Can we postpone vhpt and privregs allocation until d->arch.is_vti
  is set? 
 
 I'll try it. Could you wait a few days?
 
 Best regards,
  Kan
 
  One place is at xen/arch/ia64/xen/arch_set_info_guest,
  
  if (d->arch.is_vti)
      vmx_final_setup_guest(v);
  else {
      /* TODO: We can move the vhpt and privregs logic for xeno here. */
  }
  
  What's your opinion?
 
 Thanks,
 Anthony
 
 
 When I repeated creation and destruction of domVTi, I found a bug.
 It is memory leak of privregs structure.
 This patch fixes the bug.
 
 Signed-off-by: Masaki Kanno [EMAIL PROTECTED]
 
 Best regards,
  Kan

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel




Re: [Xen-ia64-devel] Question about time_interpolator_get_counter() in RHEL4

2007-01-15 Thread Masaki Kanno
Hi Alex,

Thanks for your detailed information. 
It's impressive that you remember it after more than a year. 

Best regards,
 Kan

On Mon, 2007-01-15 at 19:02 +0900, Masaki Kanno wrote:
 Hi Alex,
 
 Could you comment on this issue? 
 
 When we tested RHEL4 on an SMP domVTi, we encountered a hang of 
 RHEL4. We examined this issue with xenctx. As a result, each 
 VCPU of the domVTi seemed to be looping in the following functions: 
  - One VCPU is looping in time_interpolator_get_counter(). 
  - The other VCPUs are looping in fsys_gettimeofday(). 
 
 When we were examining this issue further, we found your patch about 
 time_interpolator_get_counter(). 
 
   [PATCH] optimize writer path in time_interpolator_get_counter()
   http://lkml.org/lkml/2005/8/1/134

Hi Kan,

   That sounds like the right scenario.  Both of these code paths are
using seqlocks to allow multiple CPUs to try to update the last_cycle
value via a cmpxchg.  The optimization I added allows the CPU holding
the write lock to update the last_cycle counter and exit without
competing with the other CPUs in the cmpxchg.  This significantly
reduces the contention caused by the cmpxchg and maintains correctness
because the readers (on the fsys_gettimeofday path) cannot exit while a
write seqlock is held.  I only saw a true live-lock in this code segment
with a prototype processor, but the possibility for it certainly exists.
I could see the scheduling of vCPUs potentially making this problem more
likely.  Thanks,

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.
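
The writer-path optimization Alex describes can be modeled with a toy
seqlock. This is an illustrative sketch, not the kernel's
time_interpolator code; the memory-barrier placement is simplified and
uses GCC builtins:

```c
#include <assert.h>

/* Toy seqlock: the writer (who is exclusive) may update last_cycle
 * with a plain store instead of competing in a cmpxchg loop, because
 * lock-free readers retry whenever the sequence count is odd or has
 * changed under them. */
static volatile unsigned seq;
static volatile unsigned long last_cycle;

static void writer_update(unsigned long now)
{
    seq++;                      /* odd: readers will retry       */
    __sync_synchronize();
    if (last_cycle < now)
        last_cycle = now;       /* plain store -- no cmpxchg     */
    __sync_synchronize();
    seq++;                      /* even again: readers may exit  */
}

static unsigned long reader_get(void)
{
    unsigned s;
    unsigned long v;
    do {
        while ((s = seq) & 1)   /* writer active: spin           */
            ;
        v = last_cycle;
        __sync_synchronize();
    } while (s != seq);         /* raced with a writer: retry    */
    return v;
}
```

The key point is that the reader cannot return a torn value: any
concurrent writer forces it back through the loop.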





RE: [Xen-ia64-devel] [PATCH] Fix freeing of privregs structure for domVTi

2007-01-12 Thread Masaki Kanno
Hi Anthony,

Masaki Kanno wrote on January 11, 2007 16:24:
 Hi,

Hi Kanno,

Good catch!

I have below comment.

The root cause is that vhpt and privregs for xeno are allocated at the
XEN_DOMCTL_max_vcpus hypercall, at which time d->arch.is_vti is not set yet.
When d->arch.is_vti is set, the vhpt and privregs allocated for xeno are
released, and vhpt and privregs for VTi are allocated at this time.

This logic is a little bit ugly.

I also think so.

Can we postpone vhpt and privregs allocation until d->arch.is_vti is set?

I'll try it. Could you wait a few days?

Best regards,
 Kan

One place is at xen/arch/ia64/xen/arch_set_info_guest,

if (d->arch.is_vti)
    vmx_final_setup_guest(v);
else {
    /* TODO: We can move the vhpt and privregs logic for xeno here. */
}

What's your opinion?

Thanks,
Anthony

 
 When I repeated creation and destruction of domVTi, I found a bug.
 It is memory leak of privregs structure.
 This patch fixes the bug.
 
 Signed-off-by: Masaki Kanno [EMAIL PROTECTED]
 
 Best regards,
  Kan




[Xen-ia64-devel] [PATCH] Fix freeing of privregs structure for domVTi

2007-01-11 Thread Masaki Kanno
Hi,

When I repeated creation and destruction of domVTi, I found a bug. 
It is memory leak of privregs structure. 
This patch fixes the bug. 

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



free_privregs.patch
Description: Binary data

[Xen-ia64-devel] VTi domain creating failed

2006-12-20 Thread Masaki Kanno
Hi,

When I tried to create the VTi domain, xm create command showed 
the following error message. 

  Error: (1, 'Internal error', 'Error constructing guest OS')

And, the following message was shown in the serial console at 
that time. 

  (XEN) xencomm_paddr_to_maddr: called with bad memory address: 0x1e68 - 
iip=a0010006fcd0


Reproduction procedure:
 - Create and destroy the VTi domain with attached file 
   domVTi-test.sh.

Attached files:
 - domVTi-test.sh
This file is the easy shell script to repeat the creation 
and the destruction of the VTi domain. 
 - rhel4u2-10.124.50.197-5GB-shared-VTi.conf
My domain configuration file.
 - create-destroy.log
This file is the console log when the error message is 
shown. 
 - xendmesg.txt
This file is the serial console log when the error message 
is shown.

Best regards,
 Kan



domVTi-test.sh
Description: Binary data


rhel4u2-10.124.50.197-5GB-shared-VTi.conf
Description: Binary data


create-destroy.log
Description: Binary data
(XEN) PID hash table entries: 4096 (order: 12, 131072 bytes)
(XEN) Console: colour VGA+ 80x25
(XEN) arch_boot_vcpu: vcpu 1 awaken
(XEN) vcpu_get_lrr0: Unmasked interrupts unsupported
(XEN) vcpu_get_lrr1: Unmasked interrupts unsupported
(XEN) arch_boot_vcpu: vcpu 2 awaken
(XEN) vcpu_get_lrr0: Unmasked interrupts unsupported
(XEN) vcpu_get_lrr1: Unmasked interrupts unsupported
(XEN) arch_boot_vcpu: vcpu 3 awaken
(XEN) vcpu_get_lrr0: Unmasked interrupts unsupported
(XEN) vcpu_get_lrr1: Unmasked interrupts unsupported
(XEN) domain.c:470: arch_domain_create:470 domain 1 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf000701b hash_size 512 
(XEN) ### domain f7b60080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7b60080
(XEN) vpd base: 0xf7a0, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 2 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf0007020c000 hash_size 512 
(XEN) ### domain f7b00080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7b00080
(XEN) vpd base: 0xf7ae, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 3 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf001f7e14000 hash_size 512 
(XEN) ### domain f7b30080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7b30080
(XEN) vpd base: 0xf7ae, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 4 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf001f7bc8000 hash_size 512 
(XEN) ### domain f7b48080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7b48080
(XEN) vpd base: 0xf7ac, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 5 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf001f7bf hash_size 512 
(XEN) ### domain f7a18080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7a18080
(XEN) vpd base: 0xf7ac, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 6 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf001f7dd hash_size 512 
(XEN) ### domain f7a08080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7a08080
(XEN) vpd base: 0xf7aa, vpd size:65536
(XEN) Allocate domain vhpt at 0xf002e400
(XEN) Allocate domain vtlb at 0xf002e510
(XEN) ivt_base: 0xf401
(XEN) domain.c:470: arch_domain_create:470 domain 7 pervcpu_vhpt 1
(XEN) tlb_track_allocate_entries:69 allocated 256 num_entries 256 num_free 256
(XEN) tlb_track_create:115 hash 0xf001f7bf hash_size 512 
(XEN) ### domain f7a78080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7a78080
(XEN) vpd base: 0xf7aa, vpd size:65536
(XEN) Allocate 

[Xen-ia64-devel] [RFC] [PATCH 2/4][IA64][HVM] Windows 2003 server crashdump support

2006-12-04 Thread Masaki Kanno
[2/4]

Xen common side.



xm_osinit_xen_common_side.patch
Description: Binary data

[Xen-ia64-devel] [RFC] [PATCH 3/4][IA64][HVM] Windows 2003 server crashdump support

2006-12-04 Thread Masaki Kanno
[3/4]

Xen arch/ia64 side.



xm_osinit_xen_arch-ia64_side.patch
Description: Binary data

[Xen-ia64-devel] [RFC] [PATCH 4/4][IA64][HVM] Windows 2003 server crashdump support

2006-12-04 Thread Masaki Kanno
[4/4]

Linux side.



xm_osinit_linux_side.patch
Description: Binary data

[Xen-ia64-devel] [RFC] [PATCH 1/4][IA64][HVM] Windows 2003 server crashdump support

2006-12-04 Thread Masaki Kanno
[1/4]

Tools side.



xm_osinit_tools_side.patch
Description: Binary data

[Xen-ia64-devel] Re: [Xen-devel] Re: Question about [Bug 747] [IPF-xen] xm listreports Error: Device 769 not connected

2006-11-30 Thread Masaki Kanno
Hi Ewan,

Thanks for your reply.

Best regards,
 Kan

On Thu, Nov 30, 2006 at 01:57:50PM +0900, Masaki Kanno wrote:

 Hi Ewan,
 
 http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=747
 
 
 [EMAIL PROTECTED] changed:
 
What|Removed |Added
 
 
  Status|NEW |RESOLVED
  Resolution||FIXED
 
 
 
 
 --- Comment #7 from [EMAIL PROTECTED]  2006-11-29 03:44 ---
 xm list seems to be working again now.
 
 Could you tell me which changeset this issue was fixed in?

I'm not sure, I'm afraid.  It might have been this one, but neither Ali nor I
can remember precisely.

changeset:   12328:de7c20b6eaae7b9ac71eb3b63b5bff5b6d6a5220
user:Alastair Tse [EMAIL PROTECTED]
date:Thu Nov 09 16:05:00 2006 +
files:   tools/python/xen/xend/XendDomain.py
description:
[XEND] Ignore dying domains when refreshing list of domains.

Also cleanup stored configuration if managed domain is deleted.

Signed-off-by: Alastair Tse [EMAIL PROTECTED]


Ewan.

___
Xen-devel mailing list
[EMAIL PROTECTED]
http://lists.xensource.com/xen-devel




[Xen-ia64-devel] Question about [Bug 747] [IPF-xen] xm list reports Error: Device 769 not connected

2006-11-29 Thread Masaki Kanno
Hi Ewan,

http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=747


[EMAIL PROTECTED] changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED




--- Comment #7 from [EMAIL PROTECTED]  2006-11-29 03:44 ---
xm list seems to be working again now.

Could you tell me which changeset this issue was fixed in?

Best regards,
 Kan





Re: [Xen-ia64-devel] MCA patches causing Xen to hang on sn2

2006-11-27 Thread Masaki Kanno
Hi Jes,

For the Xen boot option, you have to specify nomca on the left side of the 
append line (before the --).
For the Dom0 boot option, you have to specify nomca on the right side of the 
append line.

Example:
 append=nomca com2=115200,8n1 console=com2 -- nomca console=tty0 
console=ttyS1,115200,8n1 rhgb root=/dev/sda2
    Xen boot options ++ Dom0 boot options 

If you cannot start up Xen/Dom0, please stop salinfod in Dom0.
If you still cannot start up Xen/Dom0, could you send the serial log to 
me?

Best regards,
 Kan

SUZUKI Kazuhiro wrote:
 Hi Jes,
 
 I tried this and attached the output below. I was wondering why we
 seem to allocate pages to MCA handlers on 64 processors even if we
 only boot 8, but thats a detail.
 
   It's OK. The mca_data is allocated up to NR_CPUS(=64) in MCA
 initialization routine. 
 
And I think that your system will boot up if nomca is specified
 in boot parameters.
 This didn't make any difference :(
 
   Please confirm that you specified nomca not only in the 
 Domain0 boot parameters but also for Xen. And please send me a boot log
 with nomca too.
 
 Thanks,
 KAZ

Hi Kaz,

I just did 'nomca' on the boot line. Can you explain to me how I do
it for dom0 too?

Thanks,
Jes






[Xen-ia64-devel] [PATCH] Fix xencomm for xm mem-set command

2006-11-11 Thread Masaki Kanno
Hi,

When I tested the xm mem-set and xm mem-max commands, I found a bug 
in xencomm.  For the error message, please refer to the attached file 
xm-memset2006.log.  My operations were as follows. 
  1. Boot Xen/dom0. (dom0_mem is default.)
  2. xm mem-set Domain-0 400
  3. xm mem-max Domain-0 400
  4. xm mem-set Domain-0 512

When the balloon driver calls the memory_op hypercall, xencomm overwrites 
a hypercall parameter, namely extent_start of xen_memory_reservation_t. 
However, xencomm does not restore the parameter it overwrote.  As a 
result, the hypercall at line 200 of the balloon driver fails.

linux-2.6-xen-sparse/drivers/xen/balloon/balloon.c
 167 static int increase_reservation(unsigned long nr_pages)
 168 {
 169 unsigned long  pfn, i, flags;
 170 struct page   *page;
[snip]
 190 set_xen_guest_handle(reservation.extent_start, frame_list);
 191 reservation.nr_extents   = nr_pages;
 192 rc = HYPERVISOR_memory_op(
 193 XENMEM_populate_physmap, reservation);
 194 if (rc  nr_pages) {
 195 if (rc  0) {
 196 int ret;
 197
 198 /* We hit the Xen hard limit: reprobe. */
 199 reservation.nr_extents = rc;
 200 ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation,
 201 reservation);
 202 BUG_ON(ret != rc);
 203 }
[snip]


This patch saves and restores the hypercall parameter within xencomm.
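
A minimal model of the fix, assuming nothing about the real xencomm
internals (the struct and function names here are illustrative, though
the field names follow the quoted balloon-driver code): save the guest's
handle, substitute the translated one for the duration of the call, and
restore it before returning:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for xen_memory_reservation_t. */
struct reservation {
    unsigned long *extent_start;   /* guest handle overwritten by xencomm */
    unsigned long  nr_extents;
};

static unsigned long xencomm_view[8];  /* pretend translated address */

static long xcom_memory_op(struct reservation *r)
{
    unsigned long *saved = r->extent_start;   /* save the caller's pointer */
    long rc;

    r->extent_start = xencomm_view;    /* what the old code left behind   */
    rc = (long)r->nr_extents;          /* placeholder for the real op     */
    r->extent_start = saved;           /* the added restore step          */
    return rc;
}
```

With the restore in place, the follow-up XENMEM_decrease_reservation call
at line 200 sees the same valid guest handle as the first hypercall.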

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



xm-memset2006.log
Description: Binary data


xcom_memory_op.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] Fix time services of EFI emulation

2006-11-10 Thread Masaki Kanno
Hi,

When I shut down two DomUs at the same time, the following error occurs 
in BS_code.
The root cause: during the shutdown process a DomU writes the hwclock, 
which Xen emulates with the GetTime EFI runtime call, and in the above 
case two GetTime calls are executed at the same time without any 
serialization.

(XEN) d 0xf7be0080 domid 35
(XEN) d 0xf7bd8080 domid 36
(XEN) vcpu 0xf7b5 vcpu 0
(XEN) vcpu 0xf7c3 vcpu 0
(XEN) 
(XEN) CPU 5
(XEN) 
(XEN) CPU 4
(XEN) psr : 121008226018 ifs : 838c ip  : [f0007f755521]
(XEN) psr : 121008226018 ifs : 838c ip  : [f0007f755521]
(XEN) ip is at ???
(XEN) unat:  pfs : 038c rsc : 
(XEN) ip is at ???
(XEN) unat:  pfs : 038c rsc : 
[snip]

Type   StartEnd   # Pages  Attributes
BS_data-0FFF  0001 0009
available  1000-6FFF  0006 0009
BS_data7000-8FFF  0002 0009
available  9000-00081FFF  0079 0009
RT_data00082000-00083FFF  0002 8009
available  00084000-00084FFF  0001 0009
BS_data00085000-0009  001B 0009
RT_code000C-000F  0040 8009
available  0010-0FF7  FE80 000B
BS_data0FF8-0FFF  0080 000B
available  1000-7D8F  0006D900 000B
BS_code7D90-7F97  2080 000B
available  7F98-7F9F  0080 000B
[snip]


This patch serializes the execution of the following EFI runtime services:
  - GetTime
  - SetTime
  - GetWakeTime
  - SetWakeTime
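
The serialization can be sketched as a single lock taken around each
emulated time service. The lock primitive below is a toy spinlock built
on GCC builtins, standing in for Xen's own lock; the names and the
rtc_seconds state are illustrative, not the real emulation:

```c
#include <assert.h>

static volatile int efi_time_lock;   /* 0 = free, 1 = held */

static void time_lock(void)
{
    while (__sync_lock_test_and_set(&efi_time_lock, 1))
        ;   /* spin until the other vCPU's time service finishes */
}

static void time_unlock(void)
{
    __sync_lock_release(&efi_time_lock);
}

static long rtc_seconds;   /* stands in for the platform RTC state */

/* GetTime emulation: reads the clock under the lock, so two DomUs
 * shutting down at once can no longer race in the firmware path. */
static long efi_get_time(void)
{
    long t;
    time_lock();
    t = rtc_seconds;
    time_unlock();
    return t;
}

/* SetTime emulation: same lock, so get and set are mutually exclusive. */
static void efi_set_time(long t)
{
    time_lock();
    rtc_seconds = t;
    time_unlock();
}
```

GetWakeTime/SetWakeTime would take the same lock, making all four time
services mutually exclusive.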

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



time_services_lock.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] Remove unused function in efi.c

2006-11-10 Thread Masaki Kanno
Hi,

This patch removes an unused function in efi.c.

Signed-off-by: Kouya Shimura [EMAIL PROTECTED]
Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



remove_unused.patch
Description: Binary data

Re: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI

2006-11-06 Thread Masaki Kanno
Hi Zhang,

We were looking forward to your patches. 
We have a question and a comment.

Question:
  How can we inject the INIT interrupt into domVTi?
  We made an xm os-init command and tested your patch.
  However, we think the INIT handler of the domVTi was not executed.
  We attach two files.
   - merge.patch : It is a patch that we tested.
   - xenctx.log  : It is a cpu-context of domVTi after we test.

Comment:
  We think that the TODO line is unnecessary.

@@ -404,7 +419,7 @@ static void deliver_ipi (VCPU *vcpu, uin
 break;
 case 5: // INIT
 // TODO -- inject guest INIT-- This!
-panic_domain (NULL, Inject guest INIT!\n);
+vmx_inject_guest_pal_init(vcpu);
 break;
 case 7: // ExtINT
 vmx_vcpu_pend_interrupt (vcpu, 0);

Best regards,
 Kan and Akio

This patch add guest PAL_INIT support for IPI

 

Signed-off-by, Zhang Xin  [EMAIL PROTECTED]

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation

 




merge.patch
Description: Binary data


xenctx.log
Description: Binary data

[Xen-ia64-devel] [PATCH] Fix a bug in INIT handler

2006-10-30 Thread Masaki Kanno
Hi,

I found a bug in the INIT handler.  This bug sometimes occurs under 
the following conditions.
 1. Create a domVTi
 2. Run a user program on domVTi
 3. Push to INIT switch

When this bug occurs, Xen shows error messages. (e.g.: attached 
file INIT_handler_fault_messages.txt)

This bug occurs if a vCPU of a domVTi runs on a pCPU where an 
INIT interruption has not occurred yet.  This is because the 
arch._thread.on_ustack member in the vcpu structure is always 
zero and, accordingly, ar.bspstore is not switched to the Xen RBS 
in MINSTATE_START_SAVE_MIN_PHYS.

This patch adds a check of the ipsr.vm bit to 
MINSTATE_START_SAVE_MIN_PHYS for domVTi.  If the ipsr.vm bit is 1, 
ar.bspstore is switched to the Xen RBS.
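
Since MINSTATE_START_SAVE_MIN_PHYS is really an assembly macro, the
following is only a C model of the decision it now makes; the bit
position and the on_xen_rbs flag are illustrative, not the real macro:

```c
#include <assert.h>

#define IPSR_VM_BIT (1UL << 46)   /* illustrative position of psr.vm */

static int on_xen_rbs;   /* models whether ar.bspstore points at Xen RBS */

/* Old code keyed only on on_ustack, which stays 0 on a pCPU that has
 * never taken an INIT; the fix also honors ipsr.vm, so an interrupted
 * VTi guest context always gets the Xen RBS. */
static void start_save_min_phys(unsigned long ipsr, int on_ustack)
{
    if (on_ustack || (ipsr & IPSR_VM_BIT))
        on_xen_rbs = 1;   /* switch ar.bspstore to the Xen RBS */
    else
        on_xen_rbs = 0;
}
```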


Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan

(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) Delaying for 5 seconds...
(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) Delaying for 5 seconds...
(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) Delaying for 5 seconds...
(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) Delaying for 5 seconds...
(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) Delaying for 5 seconds...
(XEN) Entered OS INIT handler. PSP=fff301a0
(XEN) NaT bits  
(XEN) Delaying for 5 seconds...
(XEN) pr0815
(XEN) b0f0007fb1c070 ???
(XEN) ar.rsc
(XEN) cr.iipf0007fb39552 ???
(XEN) cr.ipsr   141008620030
(XEN) cr.ifs850a
(XEN) xip   f40501c1 startup_cpu_idle_loop+0x391/0x420
(XEN) xpsr  121008626030
(XEN) xfs   8590
(XEN) b18000fff7b850 ???
(XEN) 
(XEN) static registers r0-r15:
(XEN)  r0- 3  f4316e00 f7d8fe00 0590
(XEN)  r4- 7 73e8  0002 8000
(XEN)  r8-11 fffe   
(XEN) r12-15 f7d8fd60 f7d88000  101008622030
(XEN) 
(XEN) bank 0:
(XEN) r16-19 f7d88e68 0308  
(XEN) r20-23 0009804c8a70433f f40501c0 0001 
(XEN) r24-27 0031140c00c0  0590 0003
(XEN) r28-31 f40501c0 121008626030 8590 0815
(XEN) 
(XEN) bank 1:
(XEN) r16-19 0004 0815  02ab
(XEN) r20-23 101008620030 f409c100 001008622030 8100
(XEN) r24-27 8100 0003 f411f59c 
(XEN) r28-31  0003  
(XEN) Backtrace of current vcpu (vcpu_id 3)
(XEN) 
(XEN) Call Trace:
(XEN)  [f40501c1] startup_cpu_idle_loop+0x391/0x420
(XEN) sp=f7d8fd60 bsp=f7d88e10
(XEN) 
(XEN) PAL cache flush success

(XEN) NaT bits  
(XEN) pr0815
(XEN) VHPT Translation.
(XEN) b0f0007fb1c070 d 0xf7cb4080 domid 1
(XEN) ???
(XEN) vcpu 0xf7c88000 vcpu 0
(XEN) ar.rsc
(XEN) 
(XEN) CPU 7
(XEN) cr.iipf0007fb39552 psr : 1018080a2030 ifs : 
8002 ip  : [f4099000]
(XEN) ???
(XEN) ip is at save_switch_stack+0x0/0x1e0
(XEN) cr.ipsr   141008620030
(XEN) unat:  pfs : c206 rsc : 000c
(XEN) cr.ifs850a
(XEN) rnat:  bsps:  pr  : 0555c865
(XEN) xip   f40501c1 ldrs:  ccv : 
 fpsr: 0009804c0270033f
(XEN) startup_cpu_idle_loop+0x391/0x420
(XEN) csd :  ssd : 
(XEN) b0  : 0409b800 b6  : 2009cf70 b7  : f409bc40
(XEN) f6  : 0 f7  : 0
(XEN) xpsr  121008626030
(XEN) f8  : 10019b271ad20 f9  : 1003efffe7960
(XEN) xfs   8590
(XEN) f10 : 1003edb09 f11 : 10008e9e3e30014d46727
(XEN) b18000fff7b850 r1  : f4316e00 r2  : 
1018080a2030 r3  : f409bc10
(XEN) ???
(XEN) 
(XEN) static registers r0-r15:
(XEN) r8  : f0007fb0 r9  :  r10 : 
(XEN)  r0- 3  f4316e00 f7d9fe00 0590
(XEN) r11 : 0009804c0270033f r12 : f7faccf0 r13 : f7c88000
(XEN)  r4- 7 75d0  0002 8000
(XEN) r14 : c206 r15 : 084b r16 : db09
(XEN)  r8-11 fffe   
(XEN) r17 :  r18 :  r19 : 202b8000
(XEN) r12-15 f7d9fd60 f7d98000

[Xen-ia64-devel] [PATCH] xenctx shows more registers for ia64

2006-10-24 Thread Masaki Kanno
Hi,

This patch makes xenctx show more user registers for ia64.
Tested with domU/domVTi on ia64.

A sample is shown below.
# ./xenctx 1 0
 iip:   e810  
 ipsr:  1012087a6010   b0:a00100068a70
 b6:a0010014ff60   b7:e800
 cr_ifs:850a   ar_unat:   
 ar_pfs:8209   ar_rsc:0008
 ar_rnat:      ar_bspstore:   a00100c19030
 ar_fpsr:   0009804c8a70433f   event_callback_ip: a00100067a20
 pr:0005aa85   loadrs:0078
 iva:   a0018000   dcr:   7e04

 r1:  a001010369a0
 r2:  1000   r3:  8209
 r4:     r5:  
 r6:     r7:  
 r8:  a00100068a70   r9:  0100
 r10:    r11: 00050ac5
 sp:  a00100c1fd80   tp:  a00100c18000
 r14: 0001   r15: 
 r16: fff04c18   r17: a00100c1fdb0
 r18: a00100c1fdb1   r19: a00100c1fe90
 r20: a00100c1fe10   r21: 
 r22: 0001   r23: 
 r24: a00100e5a448   r25: a00100c18f10
 r26: 0030   r27: 
 r28: 001d   r29: 
 r30:    r31: 

 itr: P ridva   paps  ed pl ar a d makey
 [0]  1 05 a001 00400 1a  64M 1  2  3  1 1 0 WB  00
 [1]  1 07 e000 0 18  16M 1  2  3  1 1 0 WB  00
 [2]  0 00  0 00  0  0  0  0 0 0 WB  00
 [3]  0 00  0 00  0  0  0  0 0 0 WB  00
 [4]  0 00  0 00  0  0  0  0 0 0 WB  00
 [5]  0 00  0 00  0  0  0  0 0 0 WB  00
 [6]  0 00  0 00  0  0  0  0 0 0 WB  00
 [7]  0 00  0 00  0  0  0  0 0 0 WB  00

 dtr: P ridva   paps  ed pl ar a d makey
 [0]  1 05 a001 00400 1a  64M 1  2  3  1 1 0 WB  00
 [1]  1 07  1 10  64K 1  2  3  1 1 0 WB  00
 [2]  1 07 e000 0 18  16M 1  2  3  1 1 0 WB  00
 [3]  0 00  0 00  0  0  0  0 0 0 WB  00
 [4]  0 00  0 00  0  0  0  0 0 0 WB  00
 [5]  0 00  0 00  0  0  0  0 0 0 WB  00
 [6]  0 00  0 00  0  0  0  0 0 0 WB  00
 [7]  0 00  0 00  0  0  0  0 0 0 WB  00


Signed-off-by: Akio Takebe [EMAIL PROTECTED]
Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



xenctx_more_showregs.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] Move console_start_sync()

2006-10-23 Thread Masaki Kanno
Hi,

This patch moves console_start_sync() before the first message in 
ia64_init_handler(), and it cleans up ia64_init_handler().

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



move_console_sync_and_cleanup.patch
Description: Binary data

[Xen-ia64-devel] [PATCH][RFC] New command: xm pcpu-list

2006-09-22 Thread Masaki Kanno
Hi all,

I would like to propose a new command, xm pcpu-list, that reports the 
physical CPU configuration.
Suppose Xen runs on a machine with many physical CPUs installed.  It is 
useful for users to know the configuration of physical CPUs so that 
they can allocate VCPUs efficiently; this command offers the means 
to do so.

I started this patch on ia64 machines, simply because that is what 
I have.

I would like to make this command work on x86 and powerpc 
machines as well.  Unfortunately, I don't have any with dual-core 
and multi-thread features, and I don't have much information on 
them either.  I would appreciate any help in making the command 
work on x86 and powerpc.

Best regards,
 Kan


cf. 
# xm pcpu-list
PCPU  Node  Socket  CoreThread State
   0 00x001802 0 0 online
   1 00x001803 0 0 online
   2 00x001800 1 0 online
   3 00x001801 1 0 online
   4 00x001802 1 0 online
   5 00x001803 1 0 online
   6 00x001800 0 1 online
   7 00x001801 0 1 online
   8 00x001802 0 1 online
   9 00x001803 0 1 online
  10 00x001800 1 1 online
  11 00x001801 1 1 online
  12 00x001802 1 1 online
  13 00x001803 1 1 online
  14 00x001800 0 0 online
  15 00x001801 0 0 online
# xm info
host   : tiger154
release: 2.6.16.13-xen
version: #1 SMP Fri Sep 22 11:28:14 JST 2006
machine: ia64
nr_cpus: 16
nr_nodes   : 1
sockets_per_node   : 4
cores_per_socket   : 2
threads_per_core   : 2
cpu_mhz: 1595
hw_caps: 
::::::::
total_memory   : 8166
free_memory: 7586
xen_major  : 3
xen_minor  : 0
xen_extra  : -unstable
xen_caps   : xen-3.0-ia64 hvm-3.0-ia64
xen_pagesize   : 16384
platform_params: virt_start=0xe800
xen_changeset  : Thu Sep 21 15:35:45 2006 -0600 11460:
da942e577e5e
cc_compiler: gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)
cc_compile_by  : root
cc_compile_domain  : 
cc_compile_date: Fri Sep 22 11:23:42 JST 2006
xend_config_format : 2



pcpu-list.patch
Description: Binary data

Re: [Xen-ia64-devel] pickled code

2006-09-20 Thread Masaki Kanno
Hi Jes,

When I created a domU, Xen does the panic with your patch.
Because _domain of page_info structure was changed to u64, 
type_info of page_info structure is not 8 bytes alignment.

Best regards,
 Kan

Kernel command line:  root=/dev/hda1 ro nomca nosmp xencons=tty0 console=tty0 3
PID hash table entries: 2048 (order: 11, 65536 bytes)
lookup_domain_mpa: d 0xf7de0080 id 1 current 0xf7db8000 id 0
(XEN) lookup_domain_mpa: bad mpa 0xc019064 (= 0x2000)
(XEN) Warning: UC to WB for mpaddr=c019064
008226018, isr=0x0a06
(XEN) Unaligned Reference.
(XEN) d 0xf4290080 domid 0
(XEN) vcpu 0xf4268000 vcpu 0
(XEN) 
(XEN) CPU 0
(XEN) psr : 121008226018 ifs : 8994 ip  : [f4067191]
(XEN) ip is at get_page_type+0xf1/0x300
(XEN) unat:  pfs : 0ea3 rsc : 0003
(XEN) rnat:  bsps:  pr  : 0002aa69
(XEN) ldrs:  ccv :  fpsr: 0009804c0270033f
(XEN) csd :  ssd : 
(XEN) b0  : f4029e30 b6  : f40290a0 b7  : a00100068510
(XEN) f6  : 08000 f7  : 1003e6db6db6db6db6db7
(XEN) f8  : 1003e0002085a f9  : 1003e
(XEN) f10 : 100079cd9967f8c00 f11 : 1003e0139
(XEN) r1  : f43168d0 r2  : e0001fb5fd90 r3  : e0001fb5fd91
(XEN) r8  : 0001 r9  :  r10 : 
(XEN) r11 : 09e9 r12 : f426f920 r13 : f4268000
(XEN) r14 : e001 r15 : 07de0080 r16 : 
(XEN) r17 : 07de00808002 r18 : 07de0080 r19 : 1fff
(XEN) r20 : f426f928 r21 : 8000 r22 : 
(XEN) r23 :  r24 : f426fe20 r25 : f426fe28
(XEN) r26 :  r27 :  r28 : 
(XEN) r29 : 0001 r30 :  r31 : f7de3828
(XEN) 
(XEN) Call Trace:
(XEN)  [f4098140] show_stack+0x80/0xa0
(XEN) sp=f426f550 bsp=f42690c8
(XEN)  [f406c300] ia64_fault+0x280/0x670
(XEN) sp=f426f720 bsp=f4269090
(XEN)  [f4095100] ia64_leave_kernel+0x0/0x310
(XEN) sp=f426f720 bsp=f4269090
(XEN)  [f4067190] get_page_type+0xf0/0x300
(XEN) sp=f426f920 bsp=f4268fe8
(XEN)  [f4029e30] do_grant_table_op+0x1090/0x18d0
(XEN) sp=f426f920 bsp=f4268f00
(XEN)  [f405d0e0] ia64_hypercall+0x4f0/0xe00
(XEN) sp=f426f940 bsp=f4268ea0
(XEN)  [f406c840] ia64_handle_break+0x150/0x2e0
(XEN) sp=f426fdf0 bsp=f4268e68
(XEN)  [f4095100] ia64_leave_kernel+0x0/0x310
(XEN) sp=f426fe00 bsp=f4268e68
(XEN) 
(XEN) 
(XEN) Panic on CPU 0:
(XEN) Fault in Xen.
(XEN) 
(XEN) 
(XEN) Reboot in five seconds...


Hi,

I found another interesting issue in the code - the way the 'pickle'
functions work just cannot be right. There is no way one should ever
try and truncate the output of __pa() to u32 or expect to be able to
run __va() on a u32 and obtain any level of usable output.

I have to admit I have zero clue what the pickle code is trying to
achieve, but I am at least fairly confident that something needs to
be done in this space :(

Cheers,
Jes






[Xen-ia64-devel] [PATCH] Add xen boot option dom0_vcpus_pin

2006-09-11 Thread Masaki Kanno
Hi,

This patch adds a Xen boot option that pins Domain-0 VCPUs. 
Please refer to the attached files for the test results.

 vcpu1.txt : No specified option.
 vcpu2.txt : Specified option dom0_vcpus_pin.
 vcpu3.txt : Specified options dom0_max_vcpus=2 and dom0_vcpus_pin.
 vcpu4.txt : Specified option dom0_max_vcpus=2.
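
As a rough illustration of the option's effect (the names and the
"any cpu" encoding below are our simplification, not Xen's cpumask
code): when dom0_vcpus_pin appears on the Xen command line, VCPU i of
Domain-0 is pinned to physical CPU i; otherwise each VCPU may run on
any CPU, matching the vcpu1.txt vs vcpu2.txt results:

```c
#include <assert.h>
#include <string.h>

enum { MAX_VCPUS = 4, ANY_CPU = -1 };

static int opt_dom0_vcpus_pin;        /* set while parsing boot options */
static int affinity[MAX_VCPUS];       /* per-VCPU CPU affinity          */

/* Crude stand-in for Xen's boot-option parsing. */
static void parse_boot_options(const char *cmdline)
{
    opt_dom0_vcpus_pin = strstr(cmdline, "dom0_vcpus_pin") != NULL;
}

/* Pin vcpu i to pcpu i when the option is set; otherwise allow any CPU. */
static void place_dom0_vcpus(void)
{
    for (int i = 0; i < MAX_VCPUS; i++)
        affinity[i] = opt_dom0_vcpus_pin ? i : ANY_CPU;
}
```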

Signed-off-by: Kouya SHIMURA [EMAIL PROTECTED]
Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



opt_dom0_vcpus_pin.patch
Description: Binary data
# xm dmesg
EFI v1.10 by INTEL: SALsystab=0x7fe4c8c0 ACPI=0x7ff95000 ACPI 2.0=0x7ff94000 
MPS=0x7ff93000 SMBIOS=0xf
ERROR: I/O base address must be specified.
 __  ___  ___ __ _  
 \ \/ /___ _ __   |___ / / _ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \|_ \| | | |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | |  ___) | |_| |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_| |(_)___/\__,_|_| |_|___/\__\__,_|_.__/|_|\___|

 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable (root@) (gcc version 3.4.4 20050721 (Red Hat 
3.4.4-2)) Tue Sep 12 00:43:11 JST 2006
 Latest ChangeSet: Sun Sep 10 15:34:14 2006 -0600 11447:a1988768828d

(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\../xen/xen.gz-kan  
dom0_mem=512M com2=115200,8n1 console=com2 
[snip]

# xm vcpu-list
Name  ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0   0 00   r--  29.8  any cpu
# 
# xm dmesg
EFI v1.10 by INTEL: SALsystab=0x7fe4c8c0 ACPI=0x7ff95000 ACPI 2.0=0x7ff94000 
MPS=0x7ff93000 SMBIOS=0xf
ERROR: I/O base address must be specified.
 __  ___  ___ __ _  
 \ \/ /___ _ __   |___ / / _ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \|_ \| | | |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | |  ___) | |_| |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_| |(_)___/\__,_|_| |_|___/\__\__,_|_.__/|_|\___|

 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable (root@) (gcc version 3.4.4 20050721 (Red Hat 
3.4.4-2)) Tue Sep 12 00:43:11 JST 2006
 Latest ChangeSet: Sun Sep 10 15:34:14 2006 -0600 11447:a1988768828d

(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\../xen/xen.gz-kan  
dom0_mem=512M dom0_vcpus_pin com2=115200,8n1 console=com2 
[snip]

# xm vcpu-list
Name  ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0   0 00   r--  12.7  0
# 
# xm dmesg
EFI v1.10 by INTEL: SALsystab=0x7fe4c8c0 ACPI=0x7ff95000 ACPI 2.0=0x7ff94000 
MPS=0x7ff93000 SMBIOS=0xf
ERROR: I/O base address must be specified.
 __  ___  ___ __ _  
 \ \/ /___ _ __   |___ / / _ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \|_ \| | | |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | |  ___) | |_| |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_| |(_)___/\__,_|_| |_|___/\__\__,_|_.__/|_|\___|

 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable (root@) (gcc version 3.4.4 20050721 (Red Hat 
3.4.4-2)) Tue Sep 12 00:43:11 JST 2006
 Latest ChangeSet: Sun Sep 10 15:34:14 2006 -0600 11447:a1988768828d

(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\../xen/xen.gz-kan  
dom0_mem=512M dom0_max_vcpus=2 dom0_vcpus_pin com2=115200,8n1 console=com2 
[snip]

# xm vcpu-list
Name  ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0   0 00   r--  11.5  0
Domain-0   0 11   -b-   7.5  1
# 
# xm dmesg
EFI v1.10 by INTEL: SALsystab=0x7fe4c8c0 ACPI=0x7ff95000 ACPI 2.0=0x7ff94000 
MPS=0x7ff93000 SMBIOS=0xf
ERROR: I/O base address must be specified.
 __  ___  ___ __ _  
 \ \/ /___ _ __   |___ / / _ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \|_ \| | | |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | |  ___) | |_| |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_| |(_)___/\__,_|_| |_|___/\__\__,_|_.__/|_|\___|

 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable (root@) (gcc version 3.4.4 20050721 (Red Hat 
3.4.4-2)) Tue Sep 12 00:43:11 JST 2006
 Latest ChangeSet: Sun Sep 10 15:34:14 2006 -0600 11447:a1988768828d

(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\../xen/xen.gz-kan  
dom0_mem=512M

[Xen-ia64-devel] [PATCH] hwclock support

2006-09-04 Thread Masaki Kanno
Hi,

This patch supports the hwclock on Domain-0.
It adds the following EFI runtime service emulations for the 
hwclock support.
 1. SetTime
 2. Get/SetWakeupTime

Test result:
1. Results of SetTime
 # hwclock --show
 Sun 03 Sep 2006 01:12:42 PM JST  -0.242322 seconds
 # hwclock --set --date=03 Sep 2006 15:00
 # hwclock --show
 Sun 03 Sep 2006 03:00:05 PM JST  -0.281547 seconds
 # hwclock --systohc
 # hwclock --show
 Sun 03 Sep 2006 01:14:42 PM JST  -0.676491 seconds

2. Results of Get/SetWakeupTime
 Because I didn't find any applications that use these EFI 
 runtime services, I didn't test them. 
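As an aside, any SetTime emulation has to sanity-check the guest-supplied time before programming the RTC. A minimal standalone sketch of that check (hypothetical struct and function names following the EFI_TIME field layout, not the actual fw_emul.c code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simplified EFI_TIME; field ranges follow the EFI spec. */
struct efi_time {
    uint16_t year;   /* 1900 .. 9999 */
    uint8_t  month;  /* 1 .. 12 */
    uint8_t  day;    /* 1 .. 31 */
    uint8_t  hour;   /* 0 .. 23 */
    uint8_t  minute; /* 0 .. 59 */
    uint8_t  second; /* 0 .. 59 */
};

/* Return 1 if every field is in range, 0 otherwise. */
static int efi_time_valid(const struct efi_time *t)
{
    return t->year >= 1900 && t->year <= 9999 &&
           t->month >= 1 && t->month <= 12 &&
           t->day >= 1 && t->day <= 31 &&
           t->hour <= 23 && t->minute <= 59 && t->second <= 59;
}
```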

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



hwclock.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [PATCH] Remove warning message in fw_emul.c

2006-08-28 Thread Masaki Kanno
Hi,

This patch removes warning message in fw_emul.c.

Tested by booting Domain-0 and running the efibootmgr on 
Domain-0.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



remove_warning.patch
Description: Binary data

Re: [Xen-ia64-devel] vcpu-pin bug to dom0 ?

2006-08-28 Thread Masaki Kanno
Hi Yongkang,

Hi all,

I found a strange issue: if I only assign 1 or 2 vcpus to dom0, I cannot 
use vcpu-pin to pin vcpu 0. It reports: Invalid argument.
 But it works if I let Xen0 see all vcpus when booting.

For example, in IA32, set (dom0-cpus 1)
After Xen0 boot up, xm vcpu-p 0 0 0 will see errors.
If setting (dom0-cpus 0), above command works.

In IA64, set dom0_max_vcpus=2 (totally have 16 vcpus)
After Xen0 boot up, xm vcpu-p 0 0 0 will see errors.
But xm vcpu-p 0 1 0 works.


I think that you can solve this problem by applying the following 
patch and entering xm vcpu-pin 0 0 0 from two consoles at the 
same time...  It needs many retries :-)

diff -r 5b9ff5e8653a xen/common/domctl.c
--- a/xen/common/domctl.c   Sun Aug 27 06:56:01 2006 +0100
+++ b/xen/common/domctl.c   Mon Aug 28 18:01:28 2006 +0900
@@ -380,13 +380,6 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domc
 
 if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
 {
-if ( v == current )
-{
-ret = -EINVAL;
-put_domain(d);
-break;
-}
-
 xenctl_cpumap_to_cpumask(
 new_affinity, op->u.vcpuaffinity.cpumap);
 ret = vcpu_set_affinity(v, new_affinity);


But if Domain-0 has one virtual CPU, this problem cannot be solved 
even by applying this patch. If you are using the CREDIT scheduler, 
'xm vcpu-pin 0 0 0' fails at the following line.


static int
csched_vcpu_set_affinity(struct vcpu *vc, cpumask_t *affinity)
{
unsigned long flags;
int lcpu;

if ( vc == current )
{
/* No locking needed but also can't move on the spot... */
if ( !cpu_isset(vc->processor, *affinity) )
return -EBUSY;    This!

vc->cpu_affinity = *affinity;
}
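The failing check can be reduced to a tiny standalone model (simplified hypothetical types, not the Xen source): a vCPU that is currently running can only accept a new affinity mask that still contains the pCPU it is running on, otherwise the scheduler refuses with -EBUSY.

```c
#include <assert.h>

/* Simplified model of the credit scheduler check quoted above:
 * moving the currently running vCPU off its pCPU "on the spot"
 * is refused with -EBUSY (16). */
#define EBUSY 16

static int set_affinity_of_running_vcpu(int cur_pcpu,
                                        unsigned long new_affinity)
{
    if (!(new_affinity & (1UL << cur_pcpu)))
        return -EBUSY;  /* no locking needed but can't move on the spot */
    return 0;           /* mask still covers cur_pcpu, accept it */
}
```

This is why 'xm vcpu-pin 0 0 0' only succeeds when the command happens to be handled while vCPU0 is already on pCPU0.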


Hi Keir,
Do you have good ideas to solve this problem?


Best regards,
 Kan

Best Regards,
Yongkang (Kangkang)



Re: [Xen-ia64-devel] vcpu-pin bug to dom0 ?

2006-08-28 Thread Masaki Kanno

Hi Yongkang,

Hi all,

I found a strange issue: if I only assign 1 or 2 vcpus to dom0, I cannot 
use vcpu-pin to pin vcpu 0. It reports: Invalid argument.
 But it works if I let Xen0 see all vcpus when booting.

For example, in IA32, set (dom0-cpus 1)
After Xen0 boot up, xm vcpu-p 0 0 0 will see errors.
If setting (dom0-cpus 0), above command works.

In IA64, set dom0_max_vcpus=2 (totally have 16 vcpus)
After Xen0 boot up, xm vcpu-p 0 0 0 will see errors.
But xm vcpu-p 0 1 0 works.


I think that you can solve this problem by applying the following 
patch and entering xm vcpu-pin 0 0 0 from two consoles at the 
same time...  It needs many retries :-)


Hi Yongkang,

Sorry, that is an expected behavior, my patch is far from perfect.
As Keir noted, we need a scheduler fix to correctly solve this.

Not apply my patch:
 +---+--+---+
 | Target domain | pCPU that vCPU processing| Result|
 |   | 'xm vcpu-pin' command works  |   |
 +---+--+---+
 | Domain-0  | == Target pCPU   | Error(22) |
 |   +--+---+
 |   | != Target pCPU   | Error(22) |
 +---+--+---+
 | Domain-U  |  -   | OK|
 +---+--+---+

Apply my patch:
 +---+--+---+
 | Target domain | pCPU that vCPU processing| Result|
 |   | 'xm vcpu-pin' command works  |   |
 +---+--+---+
 | Domain-0  | == Target pCPU   | OK|
 |   +--+---+
 |   | != Target pCPU   | Error(16) |
 +---+--+---+
 | Domain-U  |  -   | OK|
 +---+--+---+


Best regards,
 Kan


diff -r 5b9ff5e8653a xen/common/domctl.c
--- a/xen/common/domctl.c   Sun Aug 27 06:56:01 2006 +0100
+++ b/xen/common/domctl.c   Mon Aug 28 18:01:28 2006 +0900
@@ -380,13 +380,6 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domc
 
 if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
 {
-if ( v == current )
-{
-ret = -EINVAL;
-put_domain(d);
-break;
-}
-
 xenctl_cpumap_to_cpumask(
 new_affinity, op->u.vcpuaffinity.cpumap);
 ret = vcpu_set_affinity(v, new_affinity);


But if Domain-0 has one virtual CPU, this problem cannot be solved 
even by applying this patch. If you are using the CREDIT scheduler, 
'xm vcpu-pin 0 0 0' fails at the following line.


static int
csched_vcpu_set_affinity(struct vcpu *vc, cpumask_t *affinity)
{
unsigned long flags;
int lcpu;

if ( vc == current )
{
/* No locking needed but also can't move on the spot... */
if ( !cpu_isset(vc->processor, *affinity) )
return -EBUSY;    This!

vc->cpu_affinity = *affinity;
}


Hi Keir,
Do you have good ideas to solve this problem?


Best regards,
 Kan

Best Regards,
Yongkang (Kangkang)



Re: [Xen-ia64-devel] [RFC] About problem of xm vcpu-pin

2006-08-19 Thread Masaki Kanno
Hi Alex,

OK, I'm sending a patch by the next email.

Best regards,
 Kan


On Fri, 2006-08-18 at 22:19 +0900, Masaki Kanno wrote:

 I have tested the xm vcpu-pin command with the latest changeset 11045.
 Currently the problem does not happen. Do you think that this 
 problem has been settled?
 
 Also, I removed the processing that pins Domain-0 vCPU0 to pCPU0 
 and tried booting Xen and Domain-0. No problems happened.
 May I post a patch that removes the pinning of Domain-0 vCPU0 to pCPU0?
 

Hi Kan,

   Sounds good to me.  As Tristan mentioned in that thread, the early
pinning is outdated.  We should remove it and try to expose any
remaining bugs.  Thanks,

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.





[Xen-ia64-devel] [PATCH] efibootmgr support

2006-08-16 Thread Masaki Kanno
Hi,

This patch supports the efibootmgr on Domain-0.
This patch adds the following EFI runtime service emulations 
for the efibootmgr support.
 - GetVariable
 - GetNextVariableName
 - SetVariable

Tested by booting Domain-0 and running the efibootmgr on 
Domain-0.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



efi_variable.patch
Description: Binary data

Re: [Xen-ia64-devel] xen with more that one processor!

2006-08-09 Thread Masaki Kanno
Hi Rodrigo,

Please set dom0_max_vcpus=4 in elilo.conf. 

Best regards,
 Kan

Hi!

I`m trying to boot dom0 and domU with 4 processors...
I set in the elilo.conf max_cpus=4 but when I boot dom0, my /proc/cpuinfo
just shows 1 processor!

How can I set 4 processors correctly?

Thanks!

---text/plain---


RE: [Xen-ia64-devel] Why Xen/ia64 pins vCPU0 of Domain-0 on pCPU0?

2006-07-31 Thread Masaki Kanno
Hi Kevin and Tristan,

Thanks for your information.

FYI, I attach log when Domain-0 crashed.

Best regards,
 Kan

Hi all,

I have a basic question. Is there any particular reason why Xen/ia64
pins vCPU0 of Domain-0 on pCPU0? (cf. xensetup.c)

'xm vcpu-pin Domain-0 0 X' command crashes Domain-0. I thought that
Domain-0 should reject this command on Xen/ia64, if there was a reason
for pinning vCPU0.

Best regards,
 Kan


No reason, I think. That should be a bug related to dom0 migration...

Thanks,
Kevin
 __  ___  ___ __ _  
 \ \/ /___ _ __   |___ / / _ \_   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \|_ \| | | |__| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | |  ___) | |_| |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_| |(_)___/\__,_|_| |_|___/\__\__,_|_.__/|_|\___|

 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable (root@) (gcc version 3.4.4 20050721 (Red Hat 
3.4.4-2)) Mon Jul 31 13:58:24 JST 2006
 Latest ChangeSet: Thu Jul 27 10:43:34 2006 -0600 10831:2d73714911c2

(XEN) xen image pstart: 0x400, xenheap pend: 0x800
(XEN) find_memory: efi_memmap_walk returns max_page=bffed
(XEN) Before heap_start: f4132958
(XEN) After heap_start: f415
(XEN) Init boot pages: 0x1d8 - 0x400.
(XEN) Init boot pages: 0x800 - 0x7fa0.
(XEN) Init boot pages: 0x7fe98000 - 0x7ff4.
(XEN) Init boot pages: 0x1 - 0x1c000.
(XEN) Init boot pages: 0x28000 - 0x2fd9af000.
(XEN) Init boot pages: 0x2fe814f18 - 0x2fedf4008.
(XEN) Init boot pages: 0x2fedf4068 - 0x2fedf7f4a.
(XEN) Init boot pages: 0x2fedf7fc3 - 0x2fedfb000.
(XEN) Init boot pages: 0x2fef398f9 - 0x2fef4e008.
(XEN) Init boot pages: 0x2fef4e998 - 0x2ffe14000.
(XEN) Init boot pages: 0x2ffe8 - 0x2fffb4000.
(XEN) System RAM: 8169MB (8366000kB)
(XEN) size of virtual frame_table: 20480kB
(XEN) virtual machine to physical table: f3a00098 size: 4144kB
(XEN) max_page: 0xbffed
(XEN) Xen heap: 62MB (64192kB)
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) ACPI: RSDP (v002 INTEL ) @ 
0x7ff94000
(XEN) ACPI: XSDT (v001 INTEL  SR870BN4 0x01072002 MSFT 0x00010013) @ 
0x7ff94090
(XEN) ACPI: FADT (v003 INTEL  SR870BN4 0x01072002 MSFT 0x00010013) @ 
0x7ff94138
(XEN) ACPI: MADT (v001 INTEL  SR870BN4 0x01072002 MSFT 0x00010013) @ 
0x7ff94230
(XEN) ACPI: DSDT (v001  Intel SR870BN4 0x MSFT 0x010d) @ 
0x
(XEN) SAL 3.1: Intel Corp   SR870BN4
 version 3.0
(XEN) SAL Platform features: BusLock IRQ_Redirection
(XEN) SAL: AP wakeup using external interrupt vector 0xf0
(XEN) No logical to physical processor mapping available
(XEN) avail:0x1180c600, 
status:0x600,control:0x1180c000, vm?0x0
(XEN) No VT feature supported.
(XEN) cpu_init: current=f40d4000
(XEN) vhpt_init: vhpt paddr=0x1fffe, end=0x1fffe
(XEN) iosapic_system_init: Disabling PC-AT compatible 8259 interrupts
(XEN) ACPI: Local APIC address e800fee0
(XEN) ACPI: LSAPIC (acpi_id[0x00] lsapic_id[0xc0] lsapic_eid[0x18] enabled)
(XEN) CPU 0 (0xc018) enabled (BSP)
(XEN) ACPI: LSAPIC (acpi_id[0x01] lsapic_id[0xc2] lsapic_eid[0x18] enabled)
(XEN) CPU 1 (0xc218) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x02] lsapic_id[0xaa] lsapic_eid[0x00] disabled)
(XEN) CPU 2 (0xaa00) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x03] lsapic_id[0xaa] lsapic_eid[0x00] disabled)
(XEN) CPU 3 (0xaa00) disabled
(XEN) ACPI: IOSAPIC (id[0x0] address[fec0] gsi_base[0])
(XEN) ACPI: IOSAPIC (id[0x1] address[fec1] gsi_base[24])
(XEN) ACPI: IOSAPIC (id[0x2] address[fec2] gsi_base[48])
(XEN) ACPI: IOSAPIC (id[0x3] address[fec3] gsi_base[72])
(XEN) ACPI: IOSAPIC (id[0x4] address[fec4] gsi_base[96])
(XEN) ACPI: IOSAPIC (id[0x5] address[fec5] gsi_base[120])
(XEN) ACPI: IOSAPIC (id[0x6] address[fec6] gsi_base[144])
(XEN) ACPI: PLAT_INT_SRC (low level type[0x3] id[0x00c0] eid[0x18] 
iosapic_vector[0x1e] global_irq[0x16]
(XEN) PLATFORM int CPEI (0x3): GSI 22 (level, low) - CPU 0 (0xc018) vector 30
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) register_intr: changing vector 39 from IO-SAPIC-edge to IO-SAPIC-level
(XEN) 2 CPUs available, 4 CPUs total
(XEN) MCA related initialization done
(XEN) setup_vector(254): handler=f4109d90, flags=102
(XEN) setup_vector(239): handler=f4109d90, flags=102
(XEN) CPU 0: base freq=199.457MHz, ITC ratio=15/2, ITC freq=1495.927MHz
(XEN) Time init:
(XEN)  System Time: 1559607ns
(XEN)  scale:   AB219863
(XEN) Boot processor id 0x0/0xc018
(XEN) num_online_cpus=1, max_cpus=64
(XEN) cpu_init: current=f429
(XEN) vhpt_init: vhpt 

Re: [Xen-ia64-devel] [RFC] MCA handler support for Xen/ia64

2006-07-31 Thread Masaki Kanno
Hi Alex,

Thanks for your comments and informations.
We start to make patch.

Best regards,
 You, Kaz, and Kan

On Fri, 2006-07-28 at 21:12 +0900, Masaki Kanno wrote:
 Hi Alex,
 
 We are awfully sorry to have kept you waiting for a long time.

Hi Kan,

   No problem, thanks for your thorough investigation into this.

 Our design is to inject all CMC/CPEs into dom0 vcpu0. I think this is 
 sufficient because our goal of this initial support is logging of 
 hardware error, not recovery. See detailed flow below.

   This looks good to me.  Queuing and tracking the interrupts could get
complicated, but I can't think of a better way to do it without going
back the the previous design of storing the error records in Xen.  Also
note that not all platforms support CPE interrupt, so you may need to
invent a slightly different flow for that case.  I would assume in this
case that Xen would poll each CPU for SAL_GET_STATE_INFO.  If it get
back an error log it adds that pCPU to a queue, and the next time dom0
calls SAL_GET_STATE_INFO it gets directed to the correct pCPU to re-read
the error log and clear it (much like your existing interrupt model).

What about clearing error records?
 
 By our new design, Xen issues SAL_CLEAR_STATE_INFO synchronizing with 
 SAL_CLEAR_STATE_INFO that dom0 issues.

   I like this approach better.

Do you plan to support CMC and CPE throttling in Xen
 
 Yes, our design is supported CMC and CPE throttling in Xen and dynamic 
 polling intervals. We think that Xen must not fall or slow down with 
 hot CMC and CPE interruption.

   Great!

It may be overly complicated to support CPEI on dom0
 
 Thanks for your advice. As for MADT and IOSAPIC, we are not well 
 informed. We hope for advice from you and everyone.
 Your advice modifies Linux/kernel(mca.c) of dom0, doesn't it? If so, 
 we modify Linux/kernel of dom0, and CPE supports polling mode only.

I would start out with the easier case of letting dom0 poll for CPE
records.  This should require no change to dom0 MCA code, just make sure
a CPEI vector is not reported to dom0 via the ACPI MADT.

   We can then later investigate optimizations to make this more
efficient.  If we do something like a virtual IOSAPIC to deliver the CPE
interrupt, there shouldn't be any changes necessary to the dom0 MCA
code.  We just need to see how hard this would be (it may be easy).

 BTW, new member Kaz has joined our team.

   Welcome Kaz!  Thanks,

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.





Re: [Xen-ia64-devel] [RFC] MCA handler support for Xen/ia64

2006-07-28 Thread Masaki Kanno
] |  | |
||  A |  | |
||  | |  | |
+|--|-|--|-+
||  +-+|Hardware
|||  |||
|V|  V||
| +-pCPU0-++-pCPU1-+  ||
| |   |---|   |  ||
| +---++-+-+  ||
||||
|  +-+-+  ||
|  |record1+--+|
|  +---+   |
+--+

  Step8: Dom0 issues(17) SAL_CLEAR_STATE_INFO. 

+--+
| +-CMC/CPE handler--+ |dom0
| |  | |
| | SAL_CLEAR_STATE_INFO | |
| | (17) | |
| |  |   | |
| +--|---+ |
+--(trap)--+
|| |Xen
|V |
| +-vCPU0-+|
| |   | +---+  |
| |  --- pCPU1 |  |
| |   | +---+  |
| +---+ |status |  |
|   +---+  |
+--+
| +-pCPU0-++-pCPU1-+   |Hardware
| |   ||   |   |
| +---++-+-+   |
|| |
|  +-+-+   |
|  |record1|   |
|  +---+   |
+--+

  Step9: Xen traps this SAL call.
 If the pCPU to clear SAL record is not the same as the 
 vCPU, Xen issues(18) IPI for another pCPU, Xen on 
 another pCPU issues(19) SAL call.
 Xen frees(20) pCPU1 information.

+--+
| +-CMC/CPE handler--+ |dom0
| |  | |
| +--+ |
+--+
| +-vCPU0-+|Xen
| |   ||
| |   |   (20) |
| |   ||
| |   ||
| +---+|
|SAL_CLEAR_STATE_INFO  |
| send IPI  (19)   |
|   (18)  A  | |
+||--|-+
|||  | |Hardware
|V|  V |
| +-pCPU0-++-pCPU1-+   |
| |   |---|   |   |
| +---++---+   |
+--+
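The Step9 routing can be summarized in a small standalone sketch (hypothetical helper names, not the real Xen code): if the SAL record to clear lives on the pCPU currently handling dom0's call, clear it directly; otherwise send an IPI so the owning pCPU issues SAL_CLEAR_STATE_INFO itself.

```c
#include <assert.h>

static int cleared_on[4];   /* test instrumentation: per-pCPU clear count */

/* clear the record on the pCPU we are already running on */
static void sal_clear_state_local(int pcpu) { cleared_on[pcpu]++; }

/* stand-in for "send IPI; the target pCPU runs the SAL call" */
static void sal_clear_state_via_ipi(int pcpu) { cleared_on[pcpu]++; }

static void emulate_clear_state_info(int cur_pcpu, int record_pcpu)
{
    if (record_pcpu == cur_pcpu)
        sal_clear_state_local(record_pcpu);
    else
        sal_clear_state_via_ipi(record_pcpu);
}
```

Either path ends with the record cleared on the pCPU that owns it, matching the (18)/(19) arrows in the diagram.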

   What about clearing error records?  We need to be careful that error
records read by Xen and cleared before being passed to dom0 are volatile
and could be lost if the system crashes or if dom0 doesn't retrieve
them.  It's best to only clear the log after the error record has been
received by dom0 and dom0 issues a SAL_CLEAR_STATE_INFO.  This will get
complicated if we need to clear error records on all pCPUs in response
to a SAL_CLEAR_STATE_INFO on dom0 vCPU0.


By our new design, Xen issues SAL_CLEAR_STATE_INFO synchronizing with 
SAL_CLEAR_STATE_INFO that dom0 issues.

   Do you plan to support CMC and CPE throttling in Xen (ie. switching
between interrupt driven and polling handlers under load) and dynamic
polling intervals?


Yes, our design is supported CMC and CPE throttling in Xen and dynamic 
polling intervals. We think that Xen must not fall or slow down with 
hot CMC and CPE interruption.

   It may be overly complicated to support CPEI on dom0 (fake MADT
entries, trapping IOSAPIC write, maybe an entirely virtual IOSAPIC in
order to describe a valid GSI for the CPEI, etc...).  Probably best to
start out with just letting dom0 poll for CPE records.  Thanks,


Thanks for your advice. As for MADT and IOSAPIC, we are not well 
informed. We hope for advice from you and everyone.
Your advice modifies Linux/kernel(mca.c) of dom0, doesn't it? If so, 
we modify Linux/kernel of dom0, and CPE supports polling mode only.


BTW, new member Kaz has joined our team.

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.

Best regards,
 Yutaka Ezaki(You)
 Kazuhiro Suzuki(Kaz)
 Masaki Kanno(Kan)





[Xen-ia64-devel] [RFC] MCA handler support for Xen/ia64

2006-07-12 Thread Masaki Kanno
Hi all,

This is a design memo of the MCA handler for Xen/ia64. 
We hope many reviews and many comments. 

1. Basic design
 - The MCA/CMC/CPE handler of the Xen/ia64 makes use of Linux code
   as much as possible.

 - The CMC/CPE interruption is injected to dom0 for logging. 
   This interruption is not injected to domU or domVTI. 

 - If the MCA interruption is a TLB check, the MCA handler 
   changes the MCA to a CMC interruption and injects it to dom0.
   This interruption is not injected to domU or domVTi.

 - If the MCA interruption is not a TLB check, the MCA handler 
   does not try to recover, and Xen/ia64 reboots.

2. Detail design

2.1 Initialization of MCA handler
 The processing sequence is basically as follows. 
   1) Clear the Rendez checkin flag for all cpus.
   2) Register the rendezvous interrupt vector with SAL.
   3) Register the wakeup interrupt vector with SAL.
   4) Register the Xen/ia64 MCA handler with SAL.
   5) Configure the CMCI/P vector and handler. Interrupts 
  for CMC are per-processor, so AP CMC interrupts are 
  setup in smp_callin() (smpboot.c).
   6) Setup the MCA rendezvous interrupt vector.
   7) Setup the MCA wakeup interrupt vector.
   8) Setup the CPEI/P handler.
   9) Initialize the areas set aside by the Xen/ia64 to 
  buffer the platform/processor error states for 
  MCA/CMC/CPE handling.
  10) Read the MCA error record for logging (by Dom0) if 
  Xen has been rebooted due to an unrecoverable MCA.

2.2 MCA handler (TLB error only)
 The processing sequence is basically as follows. 
   1) Get processor state parameter on existing PALE_CHECK.
  And purge TR and TC, and reload TR. 
   2) Call the ia64_mca_handler().
   3) Wait for checkin of slave processors.
   4) Wakeup all the processors which are spinning in the 
  rendezvous loop. 
   5) Get the MCA error record. 
  And hold the MCA error record into Xen/ia64 for logging
  by dom0.
   6) Clear the MCA error record. 
   7) Inject the external interruption of CMC to dom0.
   8) Set IA64_MCA_CORRECTED to the ia64_sal_os_state struct. 
   9) Return to the SAL and resume the interrupted processing. 

2.3 MCA handler (TLB error and the others error)
 The processing sequence is basically as follows. 
   1) Get processor state parameter on existing PALE_CHECK.
  And purge TR and TC, and reload TR. 
   2) Call the ia64_mca_handler().
   3) Wait for checkin of slave processors.
   4) Wakeup all the processors which are spinning in the 
  rendezvous loop. 
   5) Get the MCA error record. 
  And save the MCA error record into Xen/ia64 for logging 
  by dom0 after reboot. [*1]
   6) Return to the SAL and reboot the Xen/ia64.

2.4 MCA handler (Not TLB error)
 The processing sequence is basically as follows. 
   1) Get processor state parameter on existing PALE_CHECK.
   2) Call the ia64_mca_handler().
   3) Wait for checkin of slave processors.
   4) Wakeup all the processors which are spinning in the 
  rendezvous loop. 
   5) Get the MCA error record. 
  And save the MCA error record into Xen/ia64 for logging 
  by dom0 after reboot. [*1]
   6) Return to the SAL and reboot the Xen/ia64.

2.5 CMC handler
 The processing sequence is basically as follows. 
   1) Call the ia64_mca_cmc_int_handler() from the 
  __do_IRQ() in the ia64_handle_irq().
   2) Get the MCA error record. 
  And save the MCA error record into Xen/ia64 for logging 
  by dom0 after reboot. [*1]
   3) Inject the external interruption of CMC to dom0.

2.6 CPE handler
  Same as CMC.

2.7 SAL emulation for Dom0/DomU/DomVTI
  The following SAL emulation procedures are added.
   - SAL_SET_VECTORS
   - SAL_GET_STATE_INFO
   - SAL_GET_STATE_INFO_SIZE
   - SAL_CLEAR_STATE_INFO
   - SAL_MC_SET_PARAMS

Note:
 [*1]: Actually, read the MCA error record again after the 
   Xen/ia64 rebooted and log it with dom0.

Best regards,
 Yutaka Ezaki
 Masaki Kanno

 




[Xen-ia64-devel] [PATCH] Add carriage return to printk/printf

2006-06-01 Thread Masaki Kanno
Hi,

When I tested Xen/ia64, I encountered with printk()/printf() that 
carriage return(CR) was forgotten. 
This patch adds the CR to printk()/printf().

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



add_CR.patch
Description: Binary data

RE: [Xen-ia64-devel] [PATCH 1/2] [RESEND] FPSWA emulation support

2006-05-25 Thread Masaki Kanno
Hi Dan/Alex,

Thanks for your information and advice.
I will correct and resend this patch.
 - I remove handle_fpu_swa_for_xen(). and,
 - When an FP error occurs in Xen, Xen does the panic.

Best regards,
 Kan

Why do we need handle_fpu_swa_for_xen()?  Xen should 
 never hit an FP
 error that requires this support.  Thanks,
 
 Same as you, I think that Xen should never hit an FP error 
 that requires 
 this support. However, the time when FPSWA is necessary for 
 Xen may come.
 Therefore I'd like to leave handle_fpu_swa_for_xen().

I have to disagree.  An FP error that requires handling
by the FPSWA is very rare, even in applications that do
lots of floating point processing.

Linux/ia64 does no floating point processing, Xen/x86 does
no floating point processing, and Xen/ia64 shouldn't either.
If some developer adds floating point code into Xen/x86,
guests will break.  (This happened once last year --
someone added floating point code to compute some statistics.)

Linux/ia64 uses floating point registers only for integer
multiply/divide (and? fast copy?).  The same is true of Xen/ia64.
If any Xen/ia64 developer adds any floating point code
in Xen that could possibly cause an FPSWA, I'd rather see
a panic so that code could be removed or fixed.

Last, it is not Linux or Xen style to add code that might
possibly be used sometime in the future.  A kernel and a
hypervisor should be as small as possible and adding
even a few hundred bytes of code that maybe will be
executed sometime in the future (or, in my opinion, never
should be) is bad style.  At best, the code should be
surrounded by an #if 0, and I don't think even that
is a good idea.

Just my opinion... others may disagree.
Dan




[Xen-ia64-devel] [PATCH 1/2] [RESEND*2] FPSWA emulation support

2006-05-25 Thread Masaki Kanno
1/2



FPSWA-emulation.patch
Description: Binary data

Re: [Xen-ia64-devel] [PATCH 1/2] [RESEND] FPSWA emulation support

2006-05-24 Thread Masaki Kanno
Hi Alex,

On Tue, 2006-05-23 at 11:33 -0600, Alex Williamson wrote:
 On Fri, 2006-05-19 at 00:37 +0900, Masaki Kanno wrote:
  Hi,
  
  I resend the patch which reflected comment.
 
 Hi Kan,
 
Sorry for the late comment, but could you consolidate
 handle_fpu_swa_for_domain() and handle_fpu_swa_for_xen()?  I only see a
 few lines that are different and a lot of replicated code.  Thanks,

Hi Kan,

   Why do we need handle_fpu_swa_for_xen()?  Xen should never hit an FP
error that requires this support.  Thanks,

Same as you, I think that Xen should never hit an FP error that requires 
this support. However, the time when FPSWA is necessary for Xen may come.
Therefore I'd like to leave handle_fpu_swa_for_xen().

Best regards,
 Kan


   Alex

-- 
Alex Williamson HP Linux  Open Source Lab





[Xen-ia64-devel] [PATCH 1/2] FPSWA emulation support

2006-05-18 Thread Masaki Kanno

1/2



FPSWA-emulation.patch
Description: Binary data

[Xen-ia64-devel] [PATCH 2/2] FPSWA emulation support

2006-05-18 Thread Masaki Kanno

[2/2]



FPSWA-hypercall.patch
Description: Binary data

[Xen-ia64-devel] [PATCH 0/2] FPSWA emulation support

2006-05-18 Thread Masaki Kanno
Hi all,

These patches supports FPSWA emulation to the dom0/domU.

Patch's summary:
 - [1/2] This patch support FPSWA emulation.
  -- If the FP fault/trap occurred in the dom0/domU, Xen calls FPSWA.
   --- If FPSWA succeeds, Xen doesn't inject the FP fault/trap to 
   the dom0/domU. Xen resumes the next instruction in the dom0/domU.
   --- If FPSWA fails, Xen injects the FP fault/trap to the dom0/domU, 
   and saves the fpswa_ret_t to the struct arch_vcpu.
  -- If the FP fault/trap occurred in Xen, Xen calls FPSWA.
   --- If FPSWA succeeds, Xen resumes the next instruction in Xen.
   --- If FPSWA fails, Xen panics.
  -- A trap_init() initializes *fpswa_interface.
  -- A fpswa.h copied from the Linux/kernel.

 - [2/2] This patch support FPSWA hypercall to the dom0/domU.
  -- FPSWA hypercall uses 2 bundles in the hypercall patch table.
 1st bundle for the pseude-entry-point, 2nd bundle for the 
 hypercall patch.
  -- The set_virtual_address_map emulation of EFI changes the 
 fpswa_interface_t and the pseude-entry-point to virtual address.
  -- When the Linux/kernel on the dom0/domU received the FP 
 fault/trap, the Linux/kernel calls FPSWA. And it's a hypercall 
 to Xen. Xen returns the fpswa_ret_t saved in the struct 
 arch_vcpu to the vcpu r8-r11 of the dom0/domU.
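The decision flow described in patch 1/2 can be sketched in a few lines of standalone C (fpswa_call() and the enum are illustrative stand-ins, not the patch's actual symbols): try FPSWA first, and only reflect the FP fault/trap into the domain when FPSWA fails.

```c
#include <assert.h>

enum fp_action { RESUME_NEXT_INSN, INJECT_TO_DOMAIN };

static int fpswa_status;   /* stand-in for the real FPSWA result, 0 = success */

static int fpswa_call(void) { return fpswa_status; }

static enum fp_action handle_domain_fp_fault(void)
{
    if (fpswa_call() == 0)
        return RESUME_NEXT_INSN;  /* no fault injected into the guest */
    /* the real code saves fpswa_ret_t into struct arch_vcpu here, so a
     * later FPSWA hypercall can return it in r8-r11 (elided) */
    return INJECT_TO_DOMAIN;
}
```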

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan





Re: [Xen-ia64-devel] [PATCH 0/2] FPSWA emulation support

2006-05-18 Thread Masaki Kanno
Hi Tristan,

Thanks for your comment.

On Thursday 18 May 2006 12:36, Masaki Kanno wrote:
 Hi all,

 These patches supports FPSWA emulation to the dom0/domU.

 Patch's summary:
[...]
   -- If the FP fault/trap occurred in Xen, Xen call FPSWA.
--- If FPSWA succeed, Xen resume next instruction in Xen.
--- If FPSWA fail, Xen does the panic.
Why not panic'ing on every Xen FP fault ?
Within Xen, we should only use fp for bit copying/clearing.

The time when FPSWA is necessary for Xen may come. But I don't 
know what time it is. :-)

Best regards,
 Kan


Tristan.




Re: [Xen-ia64-devel] [PATCH 1/2] FPSWA emulation support

2006-05-18 Thread Masaki Kanno
Hi Tristan,

On Thursday 18 May 2006 12:36, Masaki Kanno wrote:
Very minor comments:

[...]
+static void
+xen_increment_iip(struct pt_regs *regs)
+{
+  struct ia64_psr *ipsr = (struct ia64_psr *)&regs->cr_ipsr;
+  if (ipsr->ri == 2) { ipsr->ri=0; regs->cr_iip += 16; }
+  else ipsr->ri++;
+  return;
+}
There is already a vcpu_increment_iip.  Maybe you should merge both: 
vcpu_increment_iip may call regs_increment_iip.
(rename btw).

OK, I will merge these functions.
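For reference, one possible shape of the merged helper (the name regs_increment_iip follows Tristan's suggestion; the struct here is a simplified stand-in for struct pt_regs): advance the 2-bit slot number ri, moving cr_iip to the next 16-byte bundle after slot 2.

```c
#include <assert.h>

struct fake_regs {
    unsigned long cr_iip;   /* address of the current instruction bundle */
    unsigned int  ri;       /* slot within the bundle: 0, 1 or 2 */
};

static void regs_increment_iip(struct fake_regs *regs)
{
    if (regs->ri == 2) {
        regs->ri = 0;
        regs->cr_iip += 16;  /* IA-64 instruction bundles are 16 bytes */
    } else {
        regs->ri++;
    }
}
```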


[...]
+static unsigned long
+handle_fpu_swa_for_domain (int fp_fault, struct pt_regs *regs, unsigned 
long isr)
+{
+  struct vcpu *v = current;
+  IA64_BUNDLE bundle;
+  IA64_BUNDLE __get_domain_bundle(UINT64);
+  unsigned long fault_ip;
+  fpswa_ret_t ret;
+
+  fault_ip = regs->cr_iip;
+  if (!fp_fault && (ia64_psr(regs)->ri == 0))
+  fault_ip -= 16;
A comment is required here.

OK, I will add a comment.


[...]
+  if (ret.status) {
+  PSCBX(v, fpswa_ret.status) = ret.status;
+  PSCBX(v, fpswa_ret.err0)   = ret.err0;
+  PSCBX(v, fpswa_ret.err1)   = ret.err1;
+  PSCBX(v, fpswa_ret.err2)   = ret.err2;
A single assignment should be ok: PSCBX(v, fpswa_ret) = ret;

OK, I will correct it.


Tristan.


Best regards,
 Kan





[Xen-ia64-devel] [PATCH 1/2] [RESEND] FPSWA emulation support

2006-05-18 Thread Masaki Kanno
Hi,

I am resending the patch with the review comments incorporated.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



FPSWA-emulation.patch
Description: Binary data

[Xen-ia64-devel] [PATCH 2/2] [RESEND] FPSWA emulation support

2006-05-18 Thread Masaki Kanno
Hi,

I am resending the patch with the review comments incorporated.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



FPSWA-hypercall.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] Remove warning (process.c)

2006-05-10 Thread Masaki Kanno
Hi,

This patch removed warning messages from process.c.
I tested compilation only.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



remove.warning.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] [RESEND*2] SetVirtualAddressMap emulation support

2006-05-09 Thread Masaki Kanno
Hi,

I have incorporated the review comments into this patch.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan



efi_set_virt_addr_map.patch
Description: Binary data

Re: [Xen-ia64-devel] [PATCH] [RESEND] SetVirtualAddressMap emulationsupport

2006-05-09 Thread Masaki Kanno
Hi Tristan,

Sorry, I have not forgotten your idea, but I postponed moving all the 
firmware functions out of dom_fw.c for the following reasons:
 - Your idea was magnificent to me, so I moved out only the EFI part to 
   give priority to the FPSWA support work.
 - As for the PAL functions, vmx/pal_emul.c already exists, and I 
   hesitate to judge on a coordination policy with it.

I want to realize your idea after finishing the FPSWA support work. 
Is that all right with you?

BTW, may I add a copyright notice when I create a new file?

Best regards,
 Kan

On Monday, 8 May 2006 at 08:32, Masaki Kanno wrote:
 Hi Kevin,

 Thanks for your comment.
 I will correct the patch and send it again.
BTW, I think it is a good idea to create a new file for firmware.  I'd 
prefer to put all fw functions (i.e. the pal/sal/efi emulators) there, 
because dom_fw.c is already rather big.

(Just my 2 cents :-)
Tristan.




Re: [Xen-ia64-devel] [PATCH] [RESEND] SetVirtualAddressMap emulationsupport

2006-05-09 Thread Masaki Kanno
Hi Tristan,

 BTW, may I write copyright when I made a new file?
My point of view: I don't really like copyright in file headers because they 
are not updated and the real copyright belongs to all the contributors.  But I
won't reject it.

Thank you.
Because the new file mostly moves code from hypercall.c, 
I have decided not to add a copyright notice.

Best regards,
 Kan





Re: [Xen-ia64-devel] [PATCH] [RESEND*2] SetVirtualAddressMap emulation support

2006-05-09 Thread Masaki Kanno
Hi Alex,

I'd like you to apply this patch. Or do you have any comments about 
it?

Best regards,
 Kan

Hi,

I have incorporated the review comments into this patch.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan




[Xen-ia64-devel] [PATCH] [RESEND] SetVirtualAddressMap emulation support

2006-05-01 Thread Masaki Kanno
Hi,

I have incorporated the review comments into this patch.
I confirmed that GetTime(), ResetSystem() and SetVirtualAddressMap() 
emulation worked.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan


efi_set_virt_addr_map.patch
Description: Binary data

[Xen-ia64-devel] [PATCH] Fixed print_md

2006-05-01 Thread Masaki Kanno
Hi,

This patch fixes the domain memory information that print_md() 
outputs. For an example of the print_md() output, please refer to 
the attached file (print_md.txt).

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan


print_md.patch
Description: Binary data


print_md.txt
Description: Binary data

[Xen-ia64-devel] [PATCH] Removed warning messages

2006-05-01 Thread Masaki Kanno
Hi,

This patch removed warning messages.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan


remove_warning.patch
Description: Binary data

Re: [Xen-ia64-devel] Installing XEN/ia64 on Debian

2006-04-27 Thread Masaki Kanno
Hi,

This procedure is for RHEL4, but please refer to it.

http://lists.xensource.com/archives/html/xen-ia64-devel/2006-03/msg00362.html

Best regards,
 Kan

Rodrigo Lord wrote:
Hello!

I'm trying to install XEN/ia64 on my Debian!
So, I'm using Mercurial and downloaded in the link below:

http://xenbits.xensource.com/ext/xen-ia64-unstable.hg

Now, should I install normally? (as a x86)
Is it a patch? Or should I apply a patch?
I would like some information to complete my installation on my Debian
system!

(Sorry for my bad english)

Thanks!


Re: [Xen-ia64-devel] [PATCH] Remove FORCE_CRASH from alt_itlb_miss

2006-04-25 Thread Masaki Kanno
Hi Anthony,

Thanks for your detailed information.
I was worried that my patch caused a mismatch between Xen and the Linux kernel.
When I examined the itc with an ITP, I confirmed that the FPSWA code was 
inserted with a 16 KB page size. As you suggested, I think it is 
good to use a bigger page size so that we reduce TLB misses.


Hi Alex,

Should I remake my patch and send it again? Or will you first apply 
my patch to the xen-ia64-unstable tree, and I will send a patch that 
sets the page size later?


Best regards,
 Kan

Xu, Anthony wrote:
From: Masaki Kanno
Sent: 2006-04-25 23:27
To: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] [PATCH] Remove FORCE_CRASH from alt_itlb_miss

Hi Anthony,

Please explain this to me in detail.

For supporting discontiguous memory, ps in the region register is always 16K.
There are two implicit parameters for the itc instruction:
one is cr.itir, whose cr.itir.ps field determines the TLB page size;
the other is cr.ifa, indicating the faulting virtual address.
When a dtlb miss happens,
cr.itir.ps = rr.ps (now this is 16K)
But in the identity mapping, we can use a bigger page size to reduce TLB miss faults.
Following is pseudo code for this:
cr.itir.ps = IA64_GRANULE_SHIFT
itc.d  // insert the TLB entry
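The pseudo code above can be modeled in C to show why a larger itir.ps helps: one inserted entry then covers a whole granule instead of a single 16K page. The constants are assumptions for illustration (Linux/ia64 uses a 16MB granule, i.e. IA64_GRANULE_SHIFT == 24, versus the 14-bit region page size):

```c
#include <assert.h>
#include <stdint.h>

#define RR_PS_SHIFT        14   /* 16K region page size (assumed) */
#define IA64_GRANULE_SHIFT 24   /* 16M identity-mapping granule (assumed) */

/* Base address of the page of size (1 << page_shift) covering ifa;
 * this is the range the inserted TLB entry would translate. */
static uint64_t page_base(uint64_t ifa, unsigned page_shift)
{
    return ifa & ~(((uint64_t)1 << page_shift) - 1);
}
```

A 16M entry thus covers 1024 of the 16K pages, so far fewer identity-mapping TLB misses are taken.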

I applied this patch and the FPSWA support patch to Xen and tested them.
I ran LTP on dom0 and the floating-point tests succeeded,
so I sent this patch.
Your patch is correct. :-) 

Best regards,
 Kan

Xu, Anthony wrote:
One comment:
since the page size of region 7 is 16K now,
this patch makes the identity mapping based on 16K.
Can we align with the Linux kernel by using a 16M identity mapping?

Thanks,
-Anthony

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Masaki
Kanno
Sent: 2006-04-24 19:39
To: xen-ia64-devel@lists.xensource.com
Subject: [Xen-ia64-devel] [PATCH] Remove FORCE_CRASH from alt_itlb_miss

Hi,

This patch removed FORCE_CRASH from alt_itlb_miss handler.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan





Re: [Xen-ia64-devel] alt_itlb_miss?

2006-04-24 Thread Masaki Kanno
Hi Kevin,

Thanks for your explanation. 
Sorry, I'd like you to explain this once again. Please look at the 
figure below.

1) Instruction TLB Fault ---+
|
 +--+
 |
 +--- ENTRY(iltb_miss)
/* Check ifa (It was VHPT_CCHAIN_LOOKUP before here) */
mov r16 = cr.ifa
extr.u r17=r16,59,5
cmp.eq p6,p0=0x1e,r17
   (p6) br.cond.spnt late_alt_itlb_miss -+
cmp.eq p6,p0=0x1d,r17|
   (p6) br.cond.spnt late_alt_itlb_miss ---+ |
   | |
   | |
2) Alternate Instruction TLB Fault ---+| |
  || |
 ++| |
 | | |
 +--- ENTRY(alt_itlb_miss)| |
mov r16=cr.ifa | |
   | |
   late_alt_itlb_miss: ---+-+

/* Check cpl */
cmp.ne p8,p0=r0,r23
or r19=r17,r19
or r19=r19,r18
   (p8) br.cond.spnt page_fault

  + /* Check ifa with my patch */
  + extr.u r22=r16,59,5
  + cmp.ne p8,p0=0x1e,r22
  +(p8) br.cond.spnt 1f --+
  |
itc.i r19 |
mov pr=r31,-1 |
rfi   |
  |
  +1: ---+
  + FORCE_CRASH

In case 1), I think the FORCE_CRASH and the ifa check are unnecessary, 
according to your explanation.
In case 2), I think the FORCE_CRASH and the ifa check are necessary, 
because Xen might otherwise use a wrong address.
In case 2), does Xen trust only the cpl?
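The `extr.u r17=r16,59,5` test in the figure can be modeled in C: it extracts bits 63..59 of the faulting address, i.e. the 3-bit region number plus the top two offset bits. Reading 0x1e and 0x1d as Xen-internal sub-areas of region 7 is an inference from this discussion, not a statement from the manuals:

```c
#include <assert.h>
#include <stdint.h>

/* Bits 63..59 of an address: region number (3 bits) plus the two
 * uppermost offset bits, as extracted by extr.u r17=r16,59,5. */
static unsigned top5(uint64_t ifa)
{
    return (unsigned)((ifa >> 59) & 0x1f);
}

/* Model of the branch condition in the figure: only these two
 * sub-areas of region 7 take the late_alt_itlb_miss path. */
static int goes_to_late_alt_itlb_miss(uint64_t ifa)
{
    unsigned r = top5(ifa);
    return r == 0x1e || r == 0x1d;
}
```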

Best regards,
 Kan

Tian, Kevin wrote:
From: Masaki Kanno [mailto:[EMAIL PROTECTED]
Sent: 2006-04-21 18:56

Hi Kan,

   Thanks, this looks like exactly what we need.  If there are no
other
comments, please send me this patch w/ a Signed-off-by and we can
get
it
in tree.  BTW, glad to hear you're working on the FPSWA issue and
are
making good progress!  Thanks,

Alex

Seems OK. One small comment is that we may also remove
FORCE_CRASH completely, since the assumption behind adding that
check no longer holds. Actually VHPT_CCHAIN_LOOKUP
already checks the VMM area to decide whether to jump
to the alt_itlb_miss handler. In this case, simply removing the
FORCE_CRASH line can also work. :-)

If an alt_itlb fault occurs, we need the ifa check and FORCE_CRASH,
don't we?
So I don't need to change my patch, do I?


The check is already made before jumping to alt_itlb_miss. 
Also, architecturally there's no limitation preventing an uncacheable 
instruction from falling into that category. So I think there's no need 
for the FORCE_CRASH there, right? :-)

Thanks,
Kevin





[Xen-ia64-devel] [PATCH] Remove FORCE_CRASH from alt_itlb_miss

2006-04-24 Thread Masaki Kanno
Hi,

This patch removed FORCE_CRASH from alt_itlb_miss handler.

Signed-off-by: Masaki Kanno [EMAIL PROTECTED]

Best regards,
 Kan


remove_FORCE_CRASH.patch
Description: Binary data
  1   2   >