Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-11-15 Thread Kasai Takanori

Hi Yongkang,


You (yongkang.you) said:

The trace back message happened when I just inserted the vnif driver into
VTI, when I used 2 NICs for VTI (1 is IOEMU, the other is VNIF).


 I've seen the `netif_release_rx_bufs: fix me for copying receiver.'
message sometimes when detaching vnif, but I've not seen a domain-vti
crash.

The crash happened if I also use an IOEMU NIC for the VTI domain. The VTI vif
config should look like:
vif = [ 'type=ioemu, bridge=xenbr0', '' ]


 Thanks. I'll try it.


I was able to confirm that this problem occurs in the same environment
(changeset 11810).
We will continue to investigate this problem.


Sorry for the delayed reply.
I have confirmed that this problem has already been corrected, as of the following changesets.
The crash no longer happens even if an IOEMU NIC is used for the VTI domain.




・xen-unstable.hg   : cs 12048
・xen-ia64-unstable.hg  : cs 12062

[NET] front: Clean up error handling. 
This eliminates the earlier workaround patch for an observed crash.




However, the IOEMU NIC is registered in xenwatch as a VNIF.
As a result, a crash happens if 'xm network-detach' is executed for the IOEMU NIC,
because the IOEMU NIC is shut down as if it were a VNIF.


I think the IOEMU NIC should not be registered in xenwatch in the first place.
Because the same issue also exists on the x86 architecture,
I'll bring it up with the xen-devel community.


Thanks,

--
Takanori Kasai




___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-11-15 Thread You, Yongkang
HI Takanori,

Thank you for the investigation and bug fixing. As you mentioned, the x86 side has
also hit this problem recently, because of some recent mechanism changes. We will
wait for your patches. :)

Best Regards,
Yongkang (Kangkang) 永康

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Kasai
Takanori
Sent: 2006年11月16日 12:10
To: xen-ia64-devel
Subject: Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF







Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-11-15 Thread Kasai Takanori

Hi Yongkang,


You (yongkang.you) said:
Thank you for the investigation and bug fixing. As you mentioned, X86 side 
also
meets this problem recently, because there are some mechanism changing. Wait 
for your patches. :)


Thank you for your reply.

I have not been able to correct this problem yet;
I only wanted to confirm the correction method with the community.

When the mechanism is changed, could you correct this problem?
If so, roughly when is the correction scheduled?

Best regards,

--
Takanori Kasai 






RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-11-15 Thread You, Yongkang
Hi Takanori,

I cannot correct this problem yet.
I wanted only to confirm the correction method to the community.

When the mechanism is changed, could you correct this problem?
In this case, is it scheduled to be corrected about when?

I saw an email from He, Qing last month. He raised the question, but there
seems to have been no response yet. It is:
http://lists.xensource.com/archives/html/xen-devel/2006-10/msg01333.html

Best Regards,
Yongkang (Kangkang) 永康



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-20 Thread Kasai Takanori

Hi Yongkang,


You (yongkang.you) said:

The trace back message happened when I just inserted the vnif driver into
VTI, when I used 2 NICs for VTI (1 is IOEMU, the other is VNIF).


 I've seen the `netif_release_rx_bufs: fix me for copying receiver.'
message sometimes when detaching vnif, but I've not seen a domain-vti
crash.

The crash happened if I also use an IOEMU NIC for the VTI domain. The VTI vif
config should look like:
vif = [ 'type=ioemu, bridge=xenbr0', '' ]


 Thanks. I'll try it.


I was able to confirm that this problem occurs in the same environment
(changeset 11810).
We will continue to investigate this problem.


There are two files called net/core/dev.c.
One is the kernel's own file, and the other is the version added for VNIF.
The VNIF version should be used when VNIF is initialized.
However, when an IOEMU NIC is defined in the config file for the VTI domain,
the kernel's own file is used.
I have not identified the cause yet.

--- log file ---
# kernel: netfront: Initializing virtual ethernet driver.
# kernel: vif vif-0: 2 parsing device/vif/0/mac
# kernel: netif_release_rx_bufs: fix me for copying receiver.
# net.agent[6461]: remove event not handled
# net.agent[6463]: remove event not handled
# kernel: netfront: device eth2 has copying receive path.
# kernel: kernel BUG at net/core/dev.c:3073!   <--- this one
# kernel: xenwatch[6329]: bugcheck! 0 [1]

--- linux-2.6-xen-sparse/net/core/dev.c ---
# void netdev_run_todo(void)
# {
# ...
#     if (list_empty(&net_todo_list))        <=== line 3073 : does not match the BUG
#         goto out;
# ...

--- linux-2.6.16.29/net/core/dev.c ---
# int unregister_netdevice(struct net_device *dev)
# {
#     struct net_device *d, **dp;
# ...
#     BUG_ON(dev->reg_state != NETREG_REGISTERED);   <=== line 3073 : matches the BUG
# ...
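The check above can be scripted in a couple of lines (hypothetical helper names, not part of the thread; the idea is simply to print line 3073 of each candidate tree's dev.c and see which one contains the BUG_ON()):

```shell
# Hypothetical helpers for the check above: pull FILE:LINE out of the BUG
# message, then print that line from each candidate net/core/dev.c.
bug_site() {
    # $1 = a "kernel BUG at FILE:LINE!" message; prints FILE:LINE
    expr "$1" : '.*kernel BUG at \([^!]*\)!'
}
line_at() {
    # $1 = source file, $2 = line number
    sed -n "${2}p" "$1"
}
# e.g. line_at linux-2.6.16.29/net/core/dev.c 3073 prints the BUG_ON()
# statement, while the same line in linux-2.6-xen-sparse does not.
```

A non-BUG statement at the reported line is how one concludes that file was not the one compiled in.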


You (yongkang.you) said:
2. After inserting the VNIF driver, the VTI kernel often produces a call trace
when running applications (kudzu, vi etc.).


This problem has not occurred in our operation so far.
How many times has it occurred on your side?
Could you give me a little more detail?


You (yongkang.you) said:

Thanks for tracking the issue.
I redid the testing, but I didn't meet the kudzu call trace again.
I will track it in the future. If I find more problems,
I will let you know.


Thank you.

Best Regards,
-  Takanori Kasai 






Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread Doi . Tsunehisa
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of DOI Tsunehisa
Sent: 2006年10月16日 20:31
 To: xen-ia64-devel
 Subject: [Xen-ia64-devel] Please try PV-on-HVM on IPF
 
 Hi all,
 
   We've ported the PV-on-HVM drivers to IPF, but I think only a few
 people have tried them. So I will describe how to use them.
 
   And I attach several patches about PV-on-HVM.
 
 + fix-warning.patch
   - warning fix for HVM PV driver
 + notsafe-comment.patch
   - add not-SMP-safe comment about PV-on-HVM
   - to take Isaku's suggestion.
 + pv-backport.patch (preliminary)
   - current HVM PV driver for only 2.6.16 or 2.6.16.* kernel
   - this is preliminary patch for backporting to before 2.6.16
 kernel
   - we have only compile-tested it on RHEL4.
 
 [Usage of PV-on-HVM]
 
   1) get the xen-ia64-unstable.hg tree (after cs:11805) and build it.
 
   2) create a guest system image.
  - simply, install guest system on VT-i domain
 
   3) build linux-2.6.16 kernel for guest system
  - get linux-2.6.16 kernel source and build
 
   4) change guest kernel in the image to linux-2.6.16 kernel
  - edit config file of boot loader
 
   5) build PV-on-HVM drivers
  # cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
  # sh mkbuildtree
  # make -C /usr/src/linux-2.6.16 M=$PWD modules
 
   6) copy the drivers to guest system image
  - mount guest system image with lomount command.
  - copy the drivers to guest system image
# cp -p */*.ko guest_system...
 
   7) start VT-i domain
 
   8) attach drivers
 domvti# insmod xen-platform-pci.ko
 domvti# insmod xenbus.ko
 domvti# insmod xen-vbd.ko
 domvti# insmod xen-vnif.ko
 
   9) attach devices with xm block-attach/network-attach
  - this operation is same for dom-u
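Steps 5) through 8) above can be condensed into a small shell sketch (the default paths and the module list are taken from the steps; treat them as assumptions and adjust for your tree):

```shell
# Sketch of steps 5)-8); default paths are assumptions, adjust for your setup.
build_pv_drivers() {
    xen_tree=${1:-$HOME/xen-ia64-unstable.hg}
    ksrc=${2:-/usr/src/linux-2.6.16}
    drvdir="$xen_tree/unmodified_drivers/linux-2.6"
    if [ ! -d "$drvdir" ] || [ ! -d "$ksrc" ]; then
        echo "prerequisites missing"
        return 1
    fi
    # Step 5): generate the build tree and compile the modules out-of-tree
    (cd "$drvdir" && sh mkbuildtree && make -C "$ksrc" M="$PWD" modules)
}

# Step 8): inside the VT-i domain, load order matters -- the platform PCI
# driver and xenbus must come before the vbd/vnif device drivers.
load_pv_drivers() {
    for m in xen-platform-pci.ko xenbus.ko xen-vbd.ko xen-vnif.ko; do
        insmod "$m" || return 1
    done
}
```

Run `build_pv_drivers` on the build host, copy the resulting .ko files into the guest image (step 6), and run `load_pv_drivers` from the directory holding the modules inside the VT-i domain.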
 
 Thanks,
 - Tsunehisa Doi
 
 



RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread You, Yongkang

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: 2006年10月18日 15:56
To: You, Yongkang
Cc: DOI Tsunehisa; xen-ia64-devel
Subject: Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

  Hi Yongkang,

  Thank you for your report.

  Does it output this trace back message when you detach vnif with
xm network-detach command ?
No. I didn't use the above command to detach vnif. Usually I use the following
steps to enable the VNIF NIC:
1. create VTI with vif = [ '' ]
2. After VTI boots up, insert the 3 related PV drivers to enable VNIF.
3. If there is an /etc/sysconfig/network-scripts/ifcfg-eth0, I need not do
anything; VNIF eth0 will be brought up.

The trace back message happens when I just insert the vnif driver into VTI,
when I use 2 NICs for VTI (1 is IOEMU, the other is VNIF).


  I've seen the `netif_release_rx_bufs: fix me for copying receiver.'
message sometimes when detaching vnif, but I've not seen a domain-vti
crash.
The crash happened if I also use an IOEMU NIC for the VTI domain. The VTI vif
config should look like:
vif = [ 'type=ioemu, bridge=xenbr0', '' ]


  What is the guest OS version ?
The VTI guest OS is RHEL4u3.

Thanks,
- Tsunehisa



Best Regards,
Yongkang (Kangkang) 永康



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread Doi . Tsunehisa
You (yongkang.you) said:
  Does it output this trace back message when you detach vnif with
xm network-detach command ?
 No. I didn't use above command to detach vnif. Usually I used following
 ways to enable vnif NIC.
 1. create VTI with vif = [ '' ]
 2. After VTI boot up, insert 3 related PV drivers to enable VNIF.
 3. If there is a /etc/sysconfig/network-script/ifcfg-eth0, I need not do 
 anything, VNIF eth0 will be brought up.
 
 The trace back message will happen when I just inserted vnif driver into 
 VTI, when I used 2 NICs for VTI (1 is IOEMU, the other is VNIF).

  I've looked `netif_release_rx_bufs: fix me for copying receiver.'
message sometimes at vnif detaching. But I've not met domain-vti
crash.
 The crashing happened if I also use IOEMU NIC for VTI domain. The VTI vif
 config should like:
 vif = [ 'type=ioemu, bridge=xenbr0', '' ]

  Thanks. I'll try it.

  What is the guest OS version ?
 The VTI guest OS is RHEL4u3.

  Did you change kernel to linux-2.6.16 ?

- Tsunehisa



RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread You, Yongkang
  What is the guest OS version ?
 The VTI guest OS is RHEL4u3.

  Did you change kernel to linux-2.6.16 ?
Yes. Otherwise I could not even insert xen-platform.ko and xenbus.ko. ;)

Best Regards,
Yongkang (Kangkang) 永康



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread Doi . Tsunehisa
You (Tristan.Gingold) said:
   On RHEL4 U2 kernel (Linux version 2.6.9-22.EL), xen-platform-pci
 includes `PCI Bus :00', but it should be the leaf node, I think.
 To insmod xen-platform-pci, this iomem space was conflicted with
 the inner device, thus it can't be attached.
 At least RHEL4 U2 is coherent!

  Yes.

 But I do not understand where 'c200-c2000fff : PCI Bus :00' comes
 from.  According to your lspci, there is nothing here.  Maybe it was added
 by the ACPI table ?

  Thank you. I'll try to look.

- Tsunehisa Doi



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-18 Thread Kasai Takanori
Hi Yongkang, 


My name is Takanori Kasai.
I received your report and am investigating the following problem.


You (yongkang.you) said:
The trace back message happened when I just inserted the vnif driver into
VTI, when I used 2 NICs for VTI (1 is IOEMU, the other is VNIF).



 I've seen the `netif_release_rx_bufs: fix me for copying receiver.'
message sometimes when detaching vnif, but I've not seen a domain-vti
crash.

The crash happened if I also use an IOEMU NIC for the VTI domain. The VTI vif
config should look like:
vif = [ 'type=ioemu, bridge=xenbr0', '' ]


 Thanks. I'll try it.


I was able to confirm that this problem occurs in the same environment
(changeset 11810).
We will continue to investigate this problem.




You (yongkang.you) said:
2. After inserting the VNIF driver, the VTI kernel often produces a call trace
when running applications (kudzu, vi etc.).


This problem has not occurred in our operation so far.
How many times has it occurred on your side?

Could you give me a little more detail?

Thanks,

- Takanori Kasai







Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Doi . Tsunehisa
Hi all,

I (Doi.Tsunehisa) said:
   We've ported the PV-on-HVM drivers to IPF, but I think only a few
 people have tried them. So I will describe how to use them.
 
   And I attach several patches about PV-on-HVM.
 
 + fix-warning.patch
   - warning fix for HVM PV driver
 + notsafe-comment.patch
   - add not-SMP-safe comment about PV-on-HVM
   - to take Isaku's suggestion.
 + pv-backport.patch (preliminary)
   - current HVM PV driver for only 2.6.16 or 2.6.16.* kernel
   - this is preliminary patch for backporting to before 2.6.16
 kernel
   - we have only compile-tested it on RHEL4.

  Today, I was testing my preliminary patch on RHEL4 U2.

  But xen-platform-pci.ko could not be attached with the original RHEL4 U2
kernel, so I investigated it and found some strange behavior in the PCI
configuration.

  I tested in two environments, linux-2.6.16.29 and the original RHEL4 U2
kernel. The output of the lspci command is the same in both environments.

   The output is as follows:

[Both environment of VT-i domain]
# lspci -v
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
Flags: fast devsel

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
Flags: medium devsel

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 
(prog-if 80 [Master])
Flags: bus master, fast devsel, latency 0
I/O ports at 1100 [size=16]

00:02.0 VGA compatible controller: Cirrus Logic GD 5446 (prog-if 00 [VGA])
Flags: bus master, fast devsel, latency 0
Memory at c000 (32-bit, prefetchable) [size=32M]
Memory at c300 (32-bit, non-prefetchable) [size=4K]

00:03.0 Class ff80: Unknown device 5853:0001 (rev 01)
Flags: fast devsel
I/O ports at 1000 [disabled] [size=256]
Memory at c200 (32-bit, prefetchable) [disabled] [size=16M]

  There is an `Unknown device' in this output. It's the pseudo-device for
PV-on-HVM, called `xen-platform-pci'.

  Next, I looked at the IO-memory maps in /proc/iomem. The IO-memory maps
differ between the two environments.

[linux-2.6.16.29 on VT-i domain]
# cat /proc/iomem
-0009 : System RAM
000a-000b : reserved
000c-000f : reserved
0010-03ff : System RAM
0400-04bf3fff : System RAM
  0400-046d34bf : Kernel code
  046d34c0-04bf373f : Kernel data
04bf4000-3c458fff : System RAM
    ... (entries deleted) ...
3ff7-3fffdfff : System RAM
3fffe000-3fff : reserved
c000-c1ff : :00:02.0
c200-c2ff : :00:03.0
c300-c3000fff : :00:02.0
e000-e033dcf7 : PCI Bus :00 I/O Ports -0cf7
e0340d00-e3ff : PCI Bus :00 I/O Ports 0d00-

[RHEL4 U2 on VT-i domain]
# cat /proc/iomem
c000-c1ff : PCI Bus :00
  c000-c1ff : :00:02.0
c200-c2ff : :00:03.0
  c200-c2000fff : PCI Bus :00
c300-c3000fff : :00:02.0

  In the RHEL4 U2 environment, only the above entries are shown. This is
strange, and I think it's the reason the xen-platform-pci.ko module can't
be attached.

  I don't understand what is happening. Does anyone know the reason?

Thanks,
- Tsunehisa Doi




Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Doi . Tsunehisa
  Sorry, I forgot to write about my test environment.

  I tested with xen-ia64-unstable.hg (cs:11810).




Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Tristan Gingold
On Tuesday 17 October 2006 at 14:30, [EMAIL PROTECTED] wrote:
 Hi all,

 I (Doi.Tsunehisa) said:
[...]
I do not really understand what is strange for you.
The output is not the same, but:
* it may be due to the kernel version
* the xen pseudo-device appears in both versions.

What is the error message when you try to insmod xen-platform-pci?
Have you ever succeeded in insmoding it on RHEL 4?

Tristan.



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Doi . Tsunehisa
  Hi Tristan,

  Thank you for your comment.

You (Tristan.Gingold) said:
 [linux-2.6.16.29 on VT-i domain]
 # cat /proc/iomem
 -0009 : System RAM
 000a-000b : reserved
 000c-000f : reserved
 0010-03ff : System RAM
 0400-04bf3fff : System RAM
   0400-046d34bf : Kernel code
   046d34c0-04bf373f : Kernel data
 04bf4000-3c458fff : System RAM
 deleted *
 3ff7-3fffdfff : System RAM
 3fffe000-3fff : reserved
 c000-c1ff : :00:02.0
 c200-c2ff : :00:03.0
 c300-c3000fff : :00:02.0
 e000-e033dcf7 : PCI Bus :00 I/O Ports -0cf7
 e0340d00-e3ff : PCI Bus :00 I/O Ports 0d00-

 [RHEL4 U2 on VT-i domain]
 # cat /proc/iomem
 c000-c1ff : PCI Bus :00
   c000-c1ff : :00:02.0
 c200-c2ff : :00:03.0
   c200-c2000fff : PCI Bus :00
 c300-c3000fff : :00:02.0

   On RHEL4 U2 environment, I could show only this message. It's
 strange, and it's the reason that xen-platform-pci.ko module can't
 be attached, I think.

   I don't understand what is happend. Does anyone know the reason ?
 I do not really understand what is strange for you.
 The output is not the same but:
 * it may be due to kernel version
 * the xen pseudo device appears in both version.

  I think the difference to notice is:

 [linux-2.6.16.29 on VT-i domain]
 c200-c2ff : :00:03.0

 [RHEL4 U2 on VT-i domain]
 c200-c2ff : :00:03.0
   c200-c2000fff : PCI Bus :00

  On the RHEL4 U2 kernel (Linux version 2.6.9-22.EL), the xen-platform-pci
region contains a nested `PCI Bus :00' resource, but it should be a leaf
node, I think. When insmoding xen-platform-pci, this iomem space conflicts
with the inner resource, so the driver can't be attached.

 What is the error message when you try to insmod xen-platform-pci ?
 Have you ever succeed to insmod it in RHEL 4 ?

  I've never succeeded in insmoding it with the original RHEL 4 kernel.

  The error message is as follows:

# insmod xen-platform-pci.ko
PCI: Enabling device :00:03.0 (0010 - 0013)
:MEM I/O resource 0xc200 @ 0x100 busy
xen-platform-pci: probe of :00:03.0 failed with error -16

   This message is output by the following code:

[unmodified_drivers/linux-2.6/platform-pci/platform-pci.c]
   181  static int __devinit platform_pci_init(struct pci_dev *pdev,
   182                                         const struct pci_device_id *ent)
   183  {
   184          int i, ret;
   185          long ioaddr, iolen;
   186          long mmio_addr, mmio_len;
   187
   ...
   202
   203          if (request_mem_region(mmio_addr, mmio_len, DRV_NAME) == NULL)
   204          {
   205                  printk(KERN_ERR ":MEM I/O resource 0x%lx @ 0x%lx busy\n",
   206                         mmio_addr, mmio_len);
   207                  return -EBUSY;
   208          }

  I tried inserting checking code before request_mem_region() to show the
resource tree, and that is how I found the reason it can't be attached.
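The conflict can also be illustrated outside the kernel with a small sketch (this is not Xen code; the helper name is hypothetical, and the full-width addresses are assumed values for the abbreviated RHEL4 U2 listing quoted earlier):

```shell
# Sketch: list nested /proc/iomem entries overlapping a requested range.
# A non-empty result is what makes request_mem_region() return NULL and
# the probe fail with -EBUSY.
busy_children() {
    listing=$1
    start=$(printf '%d' "0x$2"); end=$(printf '%d' "0x$3")
    while IFS= read -r line; do
        case $line in
        '  '*)                             # child entries are indented
            set -- $line                   # $1 = start-end, rest = ": name"
            s=$(printf '%d' "0x${1%-*}"); e=$(printf '%d' "0x${1#*-}")
            if [ "$s" -le "$end" ] && [ "$e" -ge "$start" ]; then
                shift 2; echo "$*"         # print the conflicting name
            fi
            ;;
        esac
    done < "$listing"
}
```

Run against the RHEL4 U2 listing with the 16M BAR of 00:03.0 as the requested range, it prints the stray nested `PCI Bus` resource; against the linux-2.6.16.29 listing, where that region has no child, it prints nothing.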

- Tsunehisa Doi



Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Tristan Gingold
On Tuesday 17 October 2006 at 15:12, [EMAIL PROTECTED] wrote:
   Hi Tristan,

   Thank you for your comment.

 You (Tristan.Gingold) said:
 
  I do not really understand what is strange for you.
  The output is not the same but:
  * it may be due to kernel version
  * the xen pseudo device appears in both version.

   I think the difference to notice is:

  [linux-2.6.16.29 on VT-i domain]
  c200-c2ff : :00:03.0
 
  [RHEL4 U2 on VT-i domain]
  c200-c2ff : :00:03.0
c200-c2000fff : PCI Bus :00
OK, it is now clearer.
Sorry I missed it!

   On RHEL4 U2 kernel (Linux version 2.6.9-22.EL), xen-platform-pci
 includes `PCI Bus :00', but it should be the leaf node, I think.
 To insmod xen-platform-pci, this iomem space was conflicted with
 the inner device, thus it can't be attached.
At least RHEL4 U2 is coherent!
But I do not understand where 'c200-c2000fff : PCI Bus :00' comes 
from.  According to your lspci, there is nothing here.  Maybe it was added by 
the ACPI table ?

This is strange indeed :-(

Tristan.




RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread You, Yongkang
Hi Tsunehisa,

I have tried your patches and tried the modules in VTI domains.
VBD has no problem; I can mount the VBD hard disk xvda successfully.
But the VNIF module has a problem. After I insmod the VNIF driver, the VTI
domain crashed.

My vnif config: vif = [ 'type=ioemu, bridge=xenbr0', ' ' ]
BTW, I rebuilt the VTI kernel with 2.6.16.

Following is the log:
===
[EMAIL PROTECTED] ~]# insmod xen-platform-pci.ko
PCI: Enabling device :00:03.0 (0010 - 0013)
Grant table initialized
[EMAIL PROTECTED] ~]# insmod xenbus.ko
[EMAIL PROTECTED] ~]# insmod xen-vnif.ko
[EMAIL PROTECTED] ~]# vif vif-0: 2 parsing device/vif/0/mac
netif_release_rx_bufs: fix me for copying receiver.
kernel BUG at net/core/dev.c:3073!
xenwatch[3970]: bugcheck! 0 [1]
Modules linked in: xen_vnif xenbus xen_platform_pci sunrpc binfmt_misc dm_mod 
thermal processor fan container button
Pid: 3970, CPU 0, comm: xenwatch
psr : 1010081a6018 ifs : 838b ip  : [a001005eec40]    Not tainted
ip is at unregister_netdevice+0x1a0/0x580
unat:  pfs : 038b rsc : 0003
rnat: a00100a646c1 bsps: 0007 pr  : 6941
ldrs:  ccv :  fpsr: 0009804c8a70433f
csd :  ssd : 
b0  : a001005eec40 b6  : a001000b79c0 b7  : a001bbc0
f6  : 1003e00a0 f7  : 1003e20c49ba5e353f7cf
f8  : 1003e04e2 f9  : 1003e0fa0
f10 : 1003e3b9aca00 f11 : 1003e431bde82d7b634db
r1  : a00100b34120 r2  : 0002 r3  : 00104000
r8  : 0026 r9  : 0001 r10 : e1014644
r11 : 0003 r12 : e25b7da0 r13 : e25b
r14 : 4000 r15 : a0010086f558 r16 : a0010086f560
r17 : e1d9fde8 r18 : e1d98030 r19 : e1014638
r20 : 0073 r21 : 0003 r22 : 0002
r23 : e1d98040 r24 : e1014608 r25 : e1014d80
r26 : e1014d60 r27 : 0073 r28 : 0073
r29 :  r30 :  r31 : 

Call Trace:
 [a00100011df0] show_stack+0x50/0xa0
sp=e25b7910 bsp=e25b12e0
 [a001000126c0] show_regs+0x820/0x840
sp=e25b7ae0 bsp=e25b1298
 [a00100037030] die+0x1d0/0x2e0
sp=e25b7ae0 bsp=e25b1250
 [a00100037180] die_if_kernel+0x40/0x60
sp=e25b7b00 bsp=e25b1220
 [a001000373d0] ia64_bad_break+0x230/0x480
sp=e25b7b00 bsp=e25b11f0
 [a001c3c0] ia64_leave_kernel+0x0/0x280
sp=e25b7bd0 bsp=e25b11f0
 [a001005eec40] unregister_netdevice+0x1a0/0x580
sp=e25b7da0 bsp=e25b1198
 [a001005ef050] unregister_netdev+0x30/0x60
sp=e25b7da0 bsp=e25b1178
 [a002000c5cd0] close_netdev+0x90/0xc0 [xen_vnif]
sp=e25b7da0 bsp=e25b1140
 [a002000c7870] backend_changed+0x1030/0x1080 [xen_vnif]
sp=e25b7da0 bsp=e25b10a8
 [a002000e5160] otherend_changed+0x160/0x1a0 [xenbus]
sp=e25b7dc0 bsp=e25b1068
 [a002000e3e70] xenwatch_handle_callback+0x70/0x100 [xenbus]
sp=e25b7dc0 bsp=e25b1040
 [a002000e4230] xenwatch_thread+0x330/0x3a0 [xenbus]
sp=e25b7dc0 bsp=e25b1018
 [a001000b6e20] kthread+0x180/0x200
sp=e25b7e20 bsp=e25b0fd8
 [a001000141b0] kernel_thread_helper+0xd0/0x100
sp=e25b7e30 bsp=e25b0fb0
 [a00194c0] start_kernel_thread+0x20/0x40
sp=e25b7e30 bsp=e25b0fb0
 BUG: xenwatch/3970, lock held at task exit time!
 [a002000f0cf8] {xenwatch_mutex}
.. held by:  xenwatch: 3970 [e25b, 110]
... acquired at:   xenwatch_thread+0x1e0/0x3a0 [xenbus]

Best Regards,
Yongkang (Kangkang) 永康

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of DOI
Tsunehisa
Sent: 2006年10月16日 20:31
To: xen-ia64-devel
Subject: [Xen-ia64-devel] Please try PV-on-HVM on IPF

Hi all,

  We've ported PV-on-HVM drivers for IPF. But I think that
only few tries it. Thus, I try to describe to use it.

  And I attach several patches about PV-on-HVM.

+ fix-warning.patch
  - warning fix for HVM PV driver
+ notsafe-comment.patch
  - add not-SMP-safe comment about PV-on-HVM
  - to take Isaku's suggestion.
+ pv-backport.patch (preliminary)

RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread You, Yongkang
EM~. If I didn't use both ioemu and vnif at the same time, I can use vnif. :)
The workable vnif config: vnif=[ ' ' ] 

But I also meet a kernel call trace when running kudzu, after insmod of the vnif module.

The call trace log:
=
Modules linked in: xen_vnif xenbus xen_platform_pci sunrpc binfmt_misc dm_mod 
thermal processor fan container button

Pid: 4061, CPU 0, comm:kudzu
psr : 121008026038 ifs : 8003 ip  : [a00100461621] Not tainted
ip is at serial_in+0x301/0x340
unat:  pfs : 060f rsc : 0003
rnat: a00100a646c1 bsps: 0007 pr  : 05566aa5
ldrs:  ccv :  fpsr: 0009804c8a70033f
csd :  ssd : 
b0  : a00100462ff0 b6  : a00100462f80 b7  : a0010004cfe0
f6  : 1003e000b421fa3e8 f7  : 1003e018f
f8  : 1003e000b421fa259 f9  : 1003e0001
f10 : 0fffdc8c0 f11 : 1003e
r1  : a00100b34120 r2  : 00ff r3  : a00100a646a8
r8  : 0100 r9  : 0080 r10 : a00100a645d4
r11 : c000e00fe3f8 r12 : eb15fbe0 r13 : eb158000
r14 : 03fa r15 : 000fe3fa r16 : a00100a645d4
r17 : a0010094de00 r18 : 0001 r19 : 0002
r20 : 03f8 r21 : c000e00fe3fa r22 : c000e000
r23 : a0010094de00 r24 : 03fa r25 : 0001
r26 : a0010094de08 r27 : a0010094de00 r28 : 
r29 : a0010094de00 r30 : 03fa r31 : 00ff

Call Trace:
 [a00100011df0] show_stack+0x50/0xa0
sp=eb15f830 bsp=eb1595d0
 [a001000126c0] show_regs+0x820/0x840
sp=eb15fa00 bsp=eb159588
 [a001000d2a30] softlockup_tick+0x150/0x180
sp=eb15fa00 bsp=eb159558
 [a0010009e210] do_timer+0x990/0x9c0
sp=eb15fa10 bsp=eb159510
 [a00100035fa0] timer_interrupt+0x200/0x340
sp=eb15fa10 bsp=eb1594d0
 [a001000d30b0] handle_IRQ_event+0x90/0x120
sp=eb15fa10 bsp=eb159490
 [a001000d3290] __do_IRQ+0x150/0x3e0
sp=eb15fa10 bsp=eb159440
 [a00100010e60] ia64_handle_irq+0xa0/0x140
sp=eb15fa10 bsp=eb159418
 [a001c3c0] ia64_leave_kernel+0x0/0x280
sp=eb15fa10 bsp=eb159418
 [a00100461620] serial_in+0x300/0x340
sp=eb15fbe0 bsp=eb159400
 [a00100462ff0] serial8250_interrupt+0x70/0x200
sp=eb15fbe0 bsp=eb159398
 [a001000d30b0] handle_IRQ_event+0x90/0x120
sp=eb15fbf0 bsp=eb159358
 [a001000d3400] __do_IRQ+0x2c0/0x3e0
sp=eb15fbf0 bsp=eb159308
 [a00100010e60] ia64_handle_irq+0xa0/0x140
sp=eb15fbf0 bsp=eb1592e0
 [a001c3c0] ia64_leave_kernel+0x0/0x280
sp=eb15fbf0 bsp=eb1592e0 

Best Regards,
Yongkang (Kangkang) 永康

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of You,
Yongkang
Sent: 2006年10月17日 22:27
To: DOI Tsunehisa; xen-ia64-devel
Subject: RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

Hi Tsunehisa,

I have tried your patch and tried the modules in VTI domains.
VBD has no problem; I can mount the VBD hard disk xvda successfully.
But the VNIF module has a problem: after I tried to insmod the VNIF driver, the VTI
domain crashed.

My vnif config: vif= [ 'type=ioemu, bridge=xenbr0', ' ' ]
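For context, that vif line would sit in a fuller xm domain config, which is plain Python syntax. A minimal hypothetical sketch (every value besides the vif list is a made-up example, not taken from this thread):

```python
# Hypothetical xm config for a VT-i (HVM) domain, e.g. /etc/xen/vti-guest.
# The two-entry vif list gives the guest one emulated (ioemu) NIC and one
# paravirtual (VNIF) NIC; the empty string requests a default-configured
# PV interface.
kernel = "/usr/lib/xen/boot/guest_firmware.bin"   # example path
builder = "hvm"
memory = 512
name = "vti-guest"
disk = ["file:/var/images/vti.img,hda,w"]         # example image
vif = ["type=ioemu, bridge=xenbr0", ""]
```

It is this two-entry vif list (IOEMU plus VNIF on the same domain) that triggers the crash being discussed.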
BTW, I remake the VTI kernel with 2.6.16.

Following is the log:
=======================================================================
[EMAIL PROTECTED] ~]# insmod xen-platform-pci.ko
PCI: Enabling device :00:03.0 (0010 - 0013)
Grant table initialized
[EMAIL PROTECTED] ~]# insmod xenbus.ko
[EMAIL PROTECTED] ~]# insmod xen-vnif.ko
[EMAIL PROTECTED] ~]# vif vif-0: 2 parsing device/vif/0/mac
netif_release_rx_bufs: fix me for copying receiver.
kernel BUG at net/core/dev.c:3073!
xenwatch[3970]: bugcheck! 0 [1]
Modules linked in: xen_vnif xenbus xen_platform_pci sunrpc binfmt_misc
dm_mod thermal processor fan container button
Pid: 3970, CPU 0, comm: xenwatch
psr : 1010081a6018 ifs : 838b ip  : [a001005eec40] Not tainted
ip is at unregister_netdevice+0x1a0/0x580
unat:  pfs : 038b rsc : 0003
rnat: a00100a646c1 bsps: 0007 pr  : 6941
ldrs:  ccv :  fpsr: 0009804c8a70433f
csd

RE: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread You, Yongkang
HI Tsunehisa,

More testing results based on changeset 11810:
--
1. I did a continuous KB (kernel build) on the VTI VBD disk 35 times; all passed. The KB speed on
VBD is a little faster than on the IOEMU disk.
2. I did a lot of scp transfers of a 1G file over VTI VNIF; all passed. The scp speed is
extraordinarily higher than without VNIF.
3. UP and SMP (2 vcpus) VTI can both use PV drivers.

Issues:
--
I noticed there are many kernel unaligned access messages reported by VTI after
inserting the vnif module.

Two issues reported yesterday:
---
1. If an IOEMU NIC is also used, inserting the VNIF driver will cause the VTI domain to crash. IA32
doesn't have this issue.
2. After inserting the VNIF driver, the VTI kernel often produces call traces when running
applications (kudzu, vi, etc.).

Best Regards,
Yongkang (Kangkang) 永康

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of DOI
Tsunehisa
Sent: 2006年10月16日 20:31
To: xen-ia64-devel
Subject: [Xen-ia64-devel] Please try PV-on-HVM on IPF

Hi all,

  We've ported PV-on-HVM drivers for IPF. But I think that
only a few people have tried them. Thus, I will describe how to use them.

  And I attach several patches about PV-on-HVM.

+ fix-warning.patch
  - warning fix for HVM PV driver
+ notsafe-comment.patch
  - add not-SMP-safe comment about PV-on-HVM
  - to take Isaku's suggestion.
+ pv-backport.patch (preliminary)
  - current HVM PV driver for only 2.6.16 or 2.6.16.* kernel
  - this is preliminary patch for backporting to before 2.6.16
kernel
  - we have only compile-tested it on RHEL4.

[Usage of PV-on-HVM]

  1) get the xen-ia64-unstable.hg tree (after cs:11805) and build it.

  2) create a guest system image.
 - simply, install guest system on VT-i domain

  3) build linux-2.6.16 kernel for guest system
 - get linux-2.6.16 kernel source and build

  4) change guest kernel in the image to linux-2.6.16 kernel
 - edit config file of boot loader

  5) build PV-on-HVM drivers
 # cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
 # sh mkbuildtree
 # make -C /usr/src/linux-2.6.16 M=$PWD modules

  6) copy the drivers to guest system image
 - mount guest system image with lomount command.
 - copy the drivers to guest system image
   # cp -p */*.ko guest_system...

  7) start VT-i domain

  8) attach drivers
domvti# insmod xen-platform-pci.ko
domvti# insmod xenbus.ko
domvti# insmod xen-vbd.ko
domvti# insmod xen-vnif.ko

  9) attach devices with xm block-attach/network-attach
 - this operation is the same as for dom-u
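Steps 5) through 8) above can be collapsed into a short command sequence. This is only a sketch: the `XENTREE` and `KERNELDIR` paths are assumptions to adjust to your layout, and the insmod lines must run inside the VT-i guest, not on dom0.

```shell
# On the build host: build the PV-on-HVM modules against a 2.6.16 tree.
XENTREE=$HOME/xen-ia64-unstable.hg      # assumed checkout location
KERNELDIR=/usr/src/linux-2.6.16         # assumed guest kernel source

cd "$XENTREE/unmodified_drivers/linux-2.6"
sh mkbuildtree
make -C "$KERNELDIR" M="$PWD" modules

# Copy the resulting *.ko files into the guest image (e.g. via lomount),
# then, inside the VT-i guest, load them in dependency order:
insmod xen-platform-pci.ko   # platform PCI glue first
insmod xenbus.ko             # xenbus, needed by the frontends
insmod xen-vbd.ko            # block frontend
insmod xen-vnif.ko           # network frontend
```

The ordering matters because xenbus depends on the platform PCI device, and both frontends depend on xenbus.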

Thanks,
- Tsunehisa Doi

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-17 Thread Alex Williamson
On Mon, 2006-10-16 at 21:31 +0900, DOI Tsunehisa wrote:
 
   And I attach several patches about PV-on-HVM.
 
 + fix-warning.patch
   - warning fix for HVM PV driver
 + notsafe-comment.patch
   - add not-SMP-safe comment about PV-on-HVM
   - to take Isaku's suggestion.

   I applied the two above.  I assume the one below is not meant for
upstream/not ready.  Thanks,

Alex

 + pv-backport.patch (preliminary)
   - current HVM PV driver for only 2.6.16 or 2.6.16.* kernel
   - this is preliminary patch for backporting to before 2.6.16
 kernel
   - we tested only compiling on RHEL4.
 

-- 
Alex Williamson, HP Open Source & Linux Org.


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] Please try PV-on-HVM on IPF

2006-10-16 Thread DOI Tsunehisa
Hi all,

  We've ported PV-on-HVM drivers for IPF. But I think that
only a few people have tried them. Thus, I will describe how to use them.

  And I attach several patches about PV-on-HVM.

+ fix-warning.patch
  - warning fix for HVM PV driver
+ notsafe-comment.patch
  - add not-SMP-safe comment about PV-on-HVM
  - to take Isaku's suggestion.
+ pv-backport.patch (preliminary)
  - current HVM PV driver for only 2.6.16 or 2.6.16.* kernel
  - this is preliminary patch for backporting to before 2.6.16
kernel
  - we have only compile-tested it on RHEL4.

[Usage of PV-on-HVM]

  1) get the xen-ia64-unstable.hg tree (after cs:11805) and build it.

  2) create a guest system image.
 - simply, install guest system on VT-i domain

  3) build linux-2.6.16 kernel for guest system
 - get linux-2.6.16 kernel source and build

  4) change guest kernel in the image to linux-2.6.16 kernel
 - edit config file of boot loader

  5) build PV-on-HVM drivers
 # cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
 # sh mkbuildtree
 # make -C /usr/src/linux-2.6.16 M=$PWD modules

  6) copy the drivers to guest system image
 - mount guest system image with lomount command.
 - copy the drivers to guest system image
   # cp -p */*.ko guest_system...

  7) start VT-i domain

  8) attach drivers
domvti# insmod xen-platform-pci.ko
domvti# insmod xenbus.ko
domvti# insmod xen-vbd.ko
domvti# insmod xen-vnif.ko

  9) attach devices with xm block-attach/network-attach
 - this operation is the same as for dom-u

Thanks,
- Tsunehisa Doi
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID 199aa46b3aa2bd3e9e684344e000d4ad40177541
# Parent  bf0a6f241c5eb7bea8b178b490ed32178c7b5bff
warning fix for HVM PV driver

Signed-off-by: Tsunehisa Doi [EMAIL PROTECTED]

diff -r bf0a6f241c5e -r 199aa46b3aa2 
linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:00:12 
2006 +0900
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:20:16 
2006 +0900
@@ -36,8 +36,10 @@ unsigned long
 unsigned long
 xencomm_vaddr_to_paddr(unsigned long vaddr)
 {
+#ifndef CONFIG_VMX_GUEST
struct page *page;
struct vm_area_struct *vma;
+#endif
 
if (vaddr == 0)
return 0;
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID bf0a6f241c5eb7bea8b178b490ed32178c7b5bff
# Parent  fcd746cf4647e06b8e88e620c29610ba43e3ad7c
Add not-SMP-safe comment about PV-on-HVM

Signed-off-by: Tsunehisa Doi [EMAIL PROTECTED]

diff -r fcd746cf4647 -r bf0a6f241c5e xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.cSat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/mm.cMon Oct 16 20:00:12 2006 +0900
@@ -400,6 +400,7 @@ gmfn_to_mfn_foreign(struct domain *d, un
 
// This function may be called from __gnttab_copy()
// during destruction of VT-i domain with PV-on-HVM driver.
+   // ** FIXME: This is not SMP-safe yet about p2m table. **
 if (unlikely(d->arch.mm.pgd == NULL)) {
 if (VMX_DOMAIN(d->vcpu[0]))
return INVALID_MFN;
diff -r fcd746cf4647 -r bf0a6f241c5e xen/arch/ia64/xen/vhpt.c
--- a/xen/arch/ia64/xen/vhpt.c  Sat Oct 14 18:10:08 2006 -0600
+++ b/xen/arch/ia64/xen/vhpt.c  Mon Oct 16 20:00:12 2006 +0900
@@ -216,6 +216,7 @@ void vcpu_flush_vtlb_all(struct vcpu *v)
   grant_table share page from guest_physmap_remove_page()
   in arch_memory_op() XENMEM_add_to_physmap to realize
   PV-on-HVM feature. */
+   /* FIXME: This is not SMP-safe yet about p2m table */
/* Purge vTLB for VT-i domain */
thash_purge_all(v);
}
# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID 7089d11a9e0b723079c83697c529970d7b3b0750
# Parent  199aa46b3aa2bd3e9e684344e000d4ad40177541
Modify for PV-on-HVM backport

diff -r 199aa46b3aa2 -r 7089d11a9e0b 
linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:20:16 
2006 +0900
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/xencomm.c  Mon Oct 16 20:21:40 
2006 +0900
@@ -22,6 +22,10 @@
 #include <asm/page.h>
 #include <asm/xen/xencomm.h>
 
+#ifdef HAVE_COMPAT_H
+#include "compat.h"
+#endif
+
 static int xencomm_debug = 0;
 
 static unsigned long kernel_start_pa;
diff -r 199aa46b3aa2 -r 7089d11a9e0b 
linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c
--- a/linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c  Mon Oct 16 
20:20:16 2006 +0900
+++ b/linux-2.6-xen-sparse/drivers/xen/blkfront/blkfront.c  Mon Oct 16 
20:21:40 2006 +0900
@@ -48,6 +48,10 @@
 #include <asm/hypervisor.h>
 #include <asm/maddr.h>
 
+#ifdef HAVE_COMPAT_H
+#include "compat.h"
+#endif
+
 #define BLKIF_STATE_DISCONNECTED 0
 #define BLKIF_STATE_CONNECTED    1
 #define BLKIF_STATE_SUSPENDED    2
@@ -468,6 +472,27 @@ int blkif_ioctl(struct inode *inode, str
  command, (long)argument,