Re: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Doi . Tsunehisa
You (anthony.xu) said:
   BTW, in my experience, the vector is not set in the VIOSAPIC at the
 HVMOP_set_param hypercall. Thus I'll implement finding the GSI at the
 interrupt injection phase.
 
 In this case,
 
 Can we call set_callback_irq with the hardware irq inside Qemu rather
 than in platform_pci, just after platform_pci is initialized in Qemu?
 
 That seems to resolve this issue.

 What's your opinion about this?

  Sorry, I don't know this issue in detail. I think that the guest
OS keeps the VIOSAPIC interrupt mask register set until it configures
its own vector. I assume that the guest OS might change the vector for
such hardware while it is active.

 BTW, I found that we use viosapic_set_irq to pend the platform_pci
 interrupt. That may not be correct, because the platform_pci interrupt
 behaves like an edge-triggered interrupt, while the VIOSAPIC entry for
 this irq is programmed as level-triggered since it is a PCI device.
 I'll fix this.
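
(For illustration only: a minimal sketch of the kind of edge-style
delivery being discussed here, pulsing the line through viosapic_set_irq,
whose signature is quoted later in this thread. The wrapper name is
hypothetical and this is not necessarily the actual fix.)

    /* Sketch: deliver the platform_pci event as a pulse -- assert then
     * immediately deassert -- so a level-programmed VIOSAPIC entry does
     * not leave the line pending.  viosapic_set_irq() is the existing
     * helper; viosapic_pulse_pci_irq() is an invented name. */
    static void viosapic_pulse_pci_irq(struct domain *d, int irq)
    {
        viosapic_set_irq(d, irq, 1);   /* assert (edge) */
        viosapic_set_irq(d, irq, 0);   /* deassert */
    }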

  Thank you.

- Tsunehisa Doi

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Doi . Tsunehisa
Hi Anthony,

You (anthony.xu) said:
   Sorry, I don't know this issue in detail. I think that the guest
 OS keeps the VIOSAPIC interrupt mask register set until it configures
 its own vector. I assume that the guest OS might change the vector for
 such hardware while it is active.
 
 Hi Doi,
 
 This issue comes from your point that guest Linux uses a vector, while
 there is no function like vector_to_irq.

  Sorry, I might have misunderstood your comment.

  Yes, I think so.

 Because the hardware irq for platform_pci will not be changed,
 we can call set_callback_irq with the hardware irq inside Qemu.
 Thus, the platform_pci driver doesn't need to call set_callback_irq.

 Yes, the guest OS can change the vector for the platform_pci hardware
 irq, but the hardware irq itself is not changed, and the HV knows how
 to translate the hardware irq through the VIOSAPIC, and the HV still
 uses viosapic_set_irq to pend the platform_pci interrupt.
 
 So I think it works.
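
(For illustration: a rough sketch of what is being suggested here --
Qemu itself setting the callback irq through the HVM parameter
interface. xc_set_hvm_param() and HVM_PARAM_CALLBACK_IRQ are assumed;
the exact libxc interface available in this tree may differ, and irq 28
is the fixed platform_pci hardware irq mentioned later in this thread.)

    /* Sketch only: after platform_pci is initialised in Qemu, tell the
     * hypervisor which hardware irq (GSI) to use for event-channel
     * callbacks, instead of having the PV-on-HVM driver do it. */
    #include <xenctrl.h>

    static void set_platform_pci_callback_irq(int xc_handle, uint32_t domid)
    {
        int gsi = 28;   /* assumed fixed hardware irq for platform_pci */

        xc_set_hvm_param(xc_handle, domid, HVM_PARAM_CALLBACK_IRQ, gsi);
    }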

  Do you mean that we have to modify the qemu code to solve this issue?

Thanks,
- Tsunehisa Doi



[Xen-ia64-devel] Re: [Xen-devel] unnecessary VCPU migration happens again

2006-12-06 Thread Emmanuel Ackaouy
Hi Anthony.

Could you send xentrace output for scheduling operations
in your setup?

Perhaps we're being a little too aggressive spreading
work across sockets. We do this on vcpu_wake right now.

I'm not sure I understand why HVM VCPUs would block
and wake more often than PV VCPUs though. Can you
explain?

If you could gather some scheduler traces and send
results, it will give us a good idea of what's going
on and why. The multi-core support is new and not
widely tested so it's possible that it is being
overly aggressive or perhaps even buggy.

Emmanuel.


On Fri, Dec 01, 2006 at 06:11:32PM +0800, Xu, Anthony wrote:
 Emmanuel,
 
 I found that unnecessary VCPU migration happens again.
 
 
 My environment is,
 
 IPF, two sockets, two cores per socket, 1 thread per core.
 
 There are 4 cores in total.
 
 There are 3 domains, all UP, so there are 3 VCPUs in total.
 
 One is domain0; the other two are VTI domains.
 
 I found there are lots of migrations.
 
 
 This is caused by the code segment below in the function csched_cpu_pick.
 When I comment out this code segment, there is no migration in the
 above environment.
 
 
 
 Here is a short analysis of this code.
 
 This code handles multi-core and multi-threading, which is very good.
 If two VCPUs run on LPs that belong to the same core, performance is
 bad, so if there are free LPs, we should let these two VCPUs run on
 different cores.
 
 This code may work well with para-domains, because a para-domain is
 seldom blocked; it may block only when the guest calls the halt
 instruction. This means that if an idle VCPU is running on an LP,
 there is no non-idle VCPU running on this LP. In this environment,
 I think the code below should work well.
 
 
 But in an HVM environment, an HVM VCPU is blocked by IO operations.
 That is to say, if an idle VCPU is running on an LP, an HVM VCPU may
 be blocked and will run on this LP when it is woken up.
 In this environment, the code below causes unnecessary migrations.
 I think this defeats the goal of this code segment.
 
 On the IPF side, migration is time-consuming, so it causes some
 performance degradation.
 
 
 I have a proposal, though it may not be a good one.
 
 We can change the meaning of an idle LP:
 
 an idle LP means an idle VCPU is running on this LP, and there is no
 VCPU blocked on this LP (i.e. no VCPU that will run on this LP when it
 is woken up). A sketch of this check follows the quoted code below.
 
 
 
 --Anthony
 
 
 /*
  * In multi-core and multi-threaded CPUs, not all idle execution
  * vehicles are equal!
  *
  * We give preference to the idle execution vehicle with the most
  * idling neighbours in its grouping. This distributes work across
  * distinct cores first and guarantees we don't do something stupid
  * like run two VCPUs on co-hyperthreads while there are idle cores
  * or sockets.
  */
 while ( !cpus_empty(cpus) )
 {
     nxt = first_cpu(cpus);
 
     if ( csched_idler_compare(cpu, nxt) < 0 )
     {
         cpu = nxt;
         cpu_clear(nxt, cpus);
     }
     else if ( cpu_isset(cpu, cpu_core_map[nxt]) )
     {
         cpus_andnot(cpus, cpus, cpu_sibling_map[nxt]);
     }
     else
     {
         cpus_andnot(cpus, cpus, cpu_core_map[nxt]);
     }
 
     ASSERT( !cpu_isset(nxt, cpus) );
 }
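
(A minimal sketch of the "idle LP" check proposed above. The names are
partly invented for illustration: is_idle_vcpu() and
per_cpu(schedule_data, ...) exist in the scheduler, but
cpu_has_blocked_vcpu() is a hypothetical predicate, not an existing
Xen function.)

    /* Count an LP as idle for placement purposes only if its idle VCPU
     * is currently running AND no blocked VCPU is expected to wake back
     * onto it. */
    static inline int csched_lp_truly_idle(unsigned int cpu)
    {
        return is_idle_vcpu(per_cpu(schedule_data, cpu).curr) &&
               !cpu_has_blocked_vcpu(cpu);   /* hypothetical */
    }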



Re: [Xen-ia64-devel][Patch] fix vti broken for 12795

2006-12-06 Thread Alex Williamson
On Wed, 2006-12-06 at 13:54 +0800, Zhang, Xing Z wrote:
 Hi Alex:
 
  This patch uses Isaku’s new foreign mapping interface. It can
 boot VTI again.

   Applied, thanks,

Alex

-- 
Alex Williamson        HP Open Source & Linux Org.




Re: [Xen-ia64-devel] VTI Windows installation and booting HowTo

2006-12-06 Thread Alex Williamson
On Thu, 2006-11-16 at 23:32 +0800, You, Yongkang wrote:

 3. In EFI shell, go to fs0: ; run cp efi\microsoft\winnt50\boot . 
 
 4. Go to msutil\ ; run nvrboot; 
 
   input I -> boot -> ENTER -> Q; exit the EFI shell; continue. 
 
 5. Windows will then perform the installation. It takes about 5 hours to
 complete. The VTI domain still hits the slow CD-ROM issue, which makes
 VTI domain installation slow. In recent changesets VTI performance
 degrades a lot, so the installation might be even slower.

   I've tried this many times, and I always seem to get stuck here.  I
do step #4 above, exit from the EFI shell, and get a quick text flash that
looks like it's selecting and booting Windows 2003, then my VNC session
goes blank.  It runs for a little over an hour consuming 100% of a vcpu,
then according to xend.log the domain reboots.  I'm using the 12.01
GFW with xen-ia64-unstable.hg tip (12796).  It repeats the same thing
again if I re-attempt from step 4.  Thanks,

Alex

-- 
Alex Williamson        HP Open Source & Linux Org.




[Xen-ia64-devel] Re: [PATCH] Re: [Xen-devel] Re: [PATCH 2/2] PV framebuffer

2006-12-06 Thread Atsushi SAKAI
Hi, Markus

Thank you.
I confirmed that it runs on IA64.

Thanks
Atsushi SAKAI

Atsushi SAKAI [EMAIL PROTECTED] writes:

 Hi, Markus

 Thank you for your suggestion.
 Would you please post a sample vfb config file,
 or any documentation for the latest version?

 Anyway, your vfb patch makes large changes compared to the previous post;
 for example, xencons in the config is removed in the latest version.

 Another example: your following suggestion in the config file
 seems to be vfbif = xx, not vfb = xx.

 Thanks
 Atsushi SAKAI

name = parafat
memory = 384
disk = [ 'file:/spare/parafat,xvda,w', ]
vif = [ 'mac=00:16:3e:34:a2:c0, bridge=xenbr0', ]
vfb = [ 'type=sdl' ]
uuid = 535c7fc5-3de0-370c-05ea-e6b91383ffa1
bootloader=/usr/bin/pygrub
vcpus=1
on_reboot   = 'restart'
on_crash= 'restart'










[Xen-ia64-devel] Xen/IA64 Healthiness Report -Cset#12796

2006-12-06 Thread Zhang, Jingke
Xen/IA64 Healthiness Report

Several issues:
1. VTI Linux domains boot slowly if 'serial=pty' is enabled. 
2. SMPVTI_LTP performed slowly. 

Except for the SMPVTI_LTP case, all the other nightly cases pass in
manual testing.

Testing Environment:

Platform: Tiger4
Processor: Itanium 2 Processor
Logical Processors: 8 (2 processors with Dual Core)
PAL version: 8.47
Service OS: RHEL4u3 IA64 SMP with 2 VCPUs
VTI Guest OS: RHEL4u2 & RHEL4u3
XenU Guest OS: RHEL4u2
Xen IA64 Unstable tree: 12796:d901f2fe8c25
Xen Schedule: credit
VTI Guest Firmware Flash.fd.2006.12.01 MD5:
09a224270416036a8b4e6f8496e97854

Summary Test Report:
---------------------------------------------------------------
  Total cases: 16
  Passed:      15
  Failed:       1

Case Name              Status  Case Description
Four_SMPVTI_Coexist    pass    4 VTI (mem=256, vcpus=2)
Two_UP_VTI_Co          pass    2 UP_VTI (mem=256)
One_UP_VTI             pass    1 UP_VTI (mem=256)
One_UP_XenU            pass    1 UP_xenU (mem=256)
SMPVTI_LTP             fail    VTI (vcpus=4, mem=512) run LTP
SMPVTI_and_SMPXenU     pass    1 VTI + 1 xenU (mem=256, vcpus=2)
Two_SMPXenU_Coexist    pass    2 xenU (mem=256, vcpus=2)
One_SMPVTI_4096M       pass    1 VTI (vcpus=2, mem=4096M)
SMPVTI_Network         pass    1 VTI (mem=256, vcpus=2) and 'ping'
SMPXenU_Network        pass    1 XenU (vcpus=2) and 'ping'
One_SMP_XenU           pass    1 SMP xenU (vcpus=2)
One_SMP_VTI            pass    1 SMP VTI (vcpus=2)
SMPVTI_Kernel_Build    pass    VTI (vcpus=4) and do kernel build
Four_SMPVTI_Coexist    pass    4 VTI domains (mem=256, vcpus=2)
SMPVTI_Windows         pass    SMP VTI Windows (vcpus=2)
SMPWin_SMPVTI_SMPxenU  pass    SMP VTI Linux/Windows & XenU
UPVTI_Kernel_Build     pass    1 UP VTI and do kernel build
Notes:
---------------------------------------------------------------
The last stable changeset:
---------------------------------------------------------------
12014:9c649ca5c1cc

Thanks,
Zhangjingke



RE: [Xen-ia64-devel] VTI Windows installation and booting HowTo

2006-12-06 Thread You, Yongkang

   I've tried this many times, and I always seem to get stuck here.  I
do step #4 above, exit from the EFI shell, and get a quick text flash that
looks like it's selecting and booting Windows 2003, then my VNC session
goes blank.  It runs for a little over an hour consuming 100% of a vcpu,
then according to xend.log the domain reboots.  I'm using the 12.01
GFW with xen-ia64-unstable.hg tip (12796).  It repeats the same thing
again if I re-attempt from step 4.  Thanks,


Hi Alex,

From your description, I think all the steps are correct. Maybe there are some
bugs in recent changesets. I haven't tried it recently. Jingke did a
Windows installation test on the 11.10 GFW + changeset 12525 (xen-ia64-unstable)
for Kouya's acceleration patch. The installation and booting were successful
in 50 minutes.

We will try a Windows installation on the tip and give you feedback again.

Could it be related to a different Win2k3 version? I met this issue (100% vcpu
consumption) before, when trying to create a 4-vcpu VTI Windows domain. Anthony
has already fixed that issue several weeks ago.

Best Regards,
Yongkang (Kangkang) 永康



RE: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Xu, Anthony
[EMAIL PROTECTED] wrote on 2006-12-06 17:45:
 Hi Anthony,
 
   Do you mean that we have to modify the qemu code to solve this issue?

Doi,

I think it's a clean way to handle both Windows and Linux guests on the IPF
side, and it is a very small modification in Qemu.

But IA32 may not be able to use this method, because there may be two hardware
irqs for platform_pci: one for the PIC (i8259, irq 10 or 11), the other for the
IOAPIC (maybe irq 28).
On the IPF side, there is only an IOSAPIC, so there is only one unchanged
hardware irq (28) for the platform_pci device.



--Anthony


 
 Thanks,
 - Tsunehisa Doi



Re: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Doi . Tsunehisa
Hi Anthony,

 This issue comes from your point that guest Linux uses a vector, while
 there is no function like vector_to_irq.
 
   Sorry, I might have misunderstood your comment.

  I've been thinking more about your comment.

  We may be able to call set_callback_irq inside qemu, but I don't
think that it's a good solution for this issue, because qemu doesn't
have such an interface in the current implementation, and the HV can't
know whether the PV-on-HVM driver has been initialized or not, I think.

  I've been thinking that we can get the GSI for platform_pci from the
Device ID in the HV, if the mapping between devid and gsi is fixed.

  There is a hvm_pci_intx_gsi() macro in xen/arch/ia64/vmx/viosapic.c:

[xen/arch/ia64/vmx/viosapic.c]----------------------------------------
#define hvm_pci_intx_gsi(dev, intx)  \
    (((((dev) << 2) + ((dev) >> 3) + (intx)) & 31) + 16)


void viosapic_set_pci_irq(struct domain *d, int device, int intx, int level)
{
    int irq;
    irq = hvm_pci_intx_gsi(device, intx);

    viosapic_set_irq(d, irq, level);
}
----------------------------------------------------------------------

  It seems that the device-to-gsi mapping is fixed. If that's correct,
we can get the GSI in the HV from the device ID, which is notified from
the PV driver with set_callback_irq.
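
(For illustration, a worked example of the fixed mapping above; the
device number is hypothetical.)

    /* If platform_pci were device 2, INTA (intx 0):
     *   hvm_pci_intx_gsi(2, 0) = (((2 << 2) + (2 >> 3) + 0) & 31) + 16
     *                          = (8 & 31) + 16
     *                          = 24
     * so the HV could pend GSI 24 with viosapic_set_irq(d, 24, level)
     * once the PV driver reports device 2 via set_callback_irq. */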

  What do you think about this?

Thanks,
- Tsunehisa Doi



Re: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Doi . Tsunehisa
Hi Anthony,

You (anthony.xu) said:
   Do you mean that we have to modify the qemu code to solve this issue?
 
 Doi,
 
 I think it's a clean way to handle both Windows and Linux guests on the
 IPF side, and it is a very small modification in Qemu.
 
 But IA32 may not be able to use this method, because there may be two
 hardware irqs for platform_pci: one for the PIC (i8259, irq 10 or 11),
 the other for the IOAPIC (maybe irq 28).
 On the IPF side, there is only an IOSAPIC, so there is only one unchanged
 hardware irq (28) for the platform_pci device.

  If the IA32 platform cannot use this method, we must find another
approach to resolve this issue, I think.

Thanks,
- Tsunehisa Doi



RE: [Xen-ia64-devel][PATCH]Change to new interrupt deliver mechanism

2006-12-06 Thread Xu, Anthony
[EMAIL PROTECTED] wrote on 2006-12-07 10:37:
 Hi Anthony,
   I've been thinking that we can get the GSI for platform_pci from the
 Device ID in the HV, if the mapping between devid and gsi is fixed.
 
   There is a hvm_pci_intx_gsi() macro in xen/arch/ia64/vmx/viosapic.c:
 
 [xen/arch/ia64/vmx/viosapic.c]----------------------------------------
 #define hvm_pci_intx_gsi(dev, intx)  \
     (((((dev) << 2) + ((dev) >> 3) + (intx)) & 31) + 16)
 
 
 void viosapic_set_pci_irq(struct domain *d, int device, int intx, int level)
 {
     int irq;
     irq = hvm_pci_intx_gsi(device, intx);
 
     viosapic_set_irq(d, irq, level);
 }
 ----------------------------------------------------------------------
 
   It seems that the device-to-gsi mapping is fixed. If that's correct,
 we can get the GSI in the HV from the device ID, which is notified from
 the PV driver with set_callback_irq.
 
   What do you think about this?

That's a good solution for the IPF side.

It may not work for the IA32 side, if the APIC is disabled.

So we can use #ifdef.
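
(A rough sketch of the #ifdef idea above; the function name and the
IA32 fallback behaviour are assumptions for illustration, not code
from the thread.)

    /* Sketch only: derive the callback GSI from the parameter passed by
     * set_callback_irq, per architecture as suggested above. */
    static int callback_param_to_gsi(unsigned long param)
    {
    #ifdef __ia64__
        return hvm_pci_intx_gsi(param, 0);   /* param = PCI device id, INTA */
    #else
        return (int)param;                   /* IA32: assume param is the irq */
    #endif
    }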

Anthony

 
 Thanks,
 - Tsunehisa Doi



[Xen-ia64-devel] RE: [Xen-devel] unnecessary VCPU migration happens again

2006-12-06 Thread Xu, Anthony
Hi,

Thanks for your reply. Please see embedded comments.


Petersson, Mats wrote on 2006-12-06 22:14:
 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]] On Behalf Of Emmanuel Ackaouy
 Sent: 06 December 2006 14:02
 To: Xu, Anthony
 Cc: [EMAIL PROTECTED]; xen-ia64-devel
 Subject: Re: [Xen-devel] unnecessary VCPU migration happens again
 
 Hi Anthony.
 
 Could you send xentrace output for scheduling operations
 in your setup?
I'm not sure xentrace works on the IPF side. I'm trying it.

 
 Perhaps we're being a little too aggressive spreading
 work across sockets. We do this on vcpu_wake right now.

I think the logic below also spreads work.

1. In csched_load_balance, the code segment below sets the _VCPUF_migrating
flag on peer_vcpu, as the comment says:
/*
 * If we failed to find any remotely queued VCPUs to move here,
 * see if it would be more efficient to move any of the running
 * remote VCPUs over here.
 */


/* Signal the first candidate only. */
if ( !is_idle_vcpu(peer_vcpu) &&
     is_idle_vcpu(__runq_elem(spc->runq.next)->vcpu) &&
     __csched_running_vcpu_is_stealable(cpu, peer_vcpu) )
{
    set_bit(_VCPUF_migrating, &peer_vcpu->vcpu_flags);
    spin_unlock(&per_cpu(schedule_data, peer_cpu).schedule_lock);

    CSCHED_STAT_CRANK(steal_loner_signal);
    cpu_raise_softirq(peer_cpu, SCHEDULE_SOFTIRQ);
    break;
}


2. When this peer_vcpu is scheduled out, the migration happens:

void context_saved(struct vcpu *prev)
{
    clear_bit(_VCPUF_running, &prev->vcpu_flags);

    if ( unlikely(test_bit(_VCPUF_migrating, &prev->vcpu_flags)) )
        vcpu_migrate(prev);
}

From this logic, migration happens frequently if the number of VCPUs
is less than the number of logical CPUs.


Anthony.



 
 I'm not sure I understand why HVM VCPUs would block
 and wake more often than PV VCPUs though. Can you
 explain?
 
 Whilst I don't know any of the facts of the original poster, I can
 tell you why HVM and PV guests have differing number of scheduling
 operations...
 
 Every time you get an IOIO/MMIO vmexit that leads to a qemu-dm
 interaction, you'll get a context switch. So for an average IDE block
 read/write (for example) on x86, you get 4-5 IOIO intercepts to send
 the command to qemu, then an interrupt is sent to the guest to
 indicate that the operation is finished, followed by a 256 x 16-bit
 IO read/write of the sector content (which is normally just one IOIO
 intercept unless the driver is stupid). This means around a dozen
 or so schedule operations to do one disk IO operation.
 
 The same operation in PV (or using PV driver in HVM guest of course)
 would require a single transaction from DomU to Dom0 and back, so only
 two schedule operations.
 
 The same problem occurs of course for other hardware devices such as
 network, keyboard, mouse, where a transaction consists of more than a
 single read or write to a single register.



What I want to highlight is this:

when an HVM VCPU executes an IO operation, the HVM VCPU is blocked by
the HV until the IO operation has been emulated by Qemu; then the HV
wakes the HVM VCPU up again.

A PV VCPU, in contrast, is not blocked by the PV driver.


I can give the scenario below.

There are two sockets, two cores per socket.

Assume dom0 is running on socket1 core1,
vti1 is running on socket1 core2,
vti2 is running on socket2 core1,
and socket2 core2 is idle.

If vti2 is blocked by an IO operation, then socket2 core1 becomes idle.
That means both cores in socket2 are idle,
while dom0 and vti1 are running on the two cores of socket1.

Then the scheduler will try to spread dom0 and vti1 across these two
sockets, and migration happens. This is unnecessary.



 
 
 If you could gather some scheduler traces and send
 results, it will give us a good idea of what's going
 on and why. The multi-core support is new and not
 widely tested so it's possible that it is being
 overly aggressive or perhaps even buggy.
 




Re: [Xen-ia64-devel] eepro100 for vti (ie network under windows)

2006-12-06 Thread Kouya SHIMURA
Hi, Tristan
Thank you for information.

I tried it, but unfortunately it doesn't help Windows 2k3.
It adds two new network cards:
 * eepro100
 * dp8381x
Windows 2k3 seems to have drivers for neither, although the NICs are
recognized (I confirmed this through Device Manager).

Thanks,
Kouya

Tristan Gingold writes:
  Hi,
  
  has anyone tried to use this:
  http://lists.gnu.org/archive/html/qemu-devel/2006-12/msg00015.html
  
  It may be a solution to enable network on EFI and Windows.
  
  Tristan.
  




Re: [Xen-ia64-devel] eepro100 for vti (ie network under windows)

2006-12-06 Thread Tristan Gingold
On Thu, Dec 07, 2006 at 12:44:33PM +0900, Kouya SHIMURA wrote:
 Hi, Tristan
 Thank you for information.
 
 I tried it, but unfortunately it doesn't help Windows 2k3.
 It adds two new network cards:
  * eepro100
  * dp8381x
 Windows 2k3 seems to have drivers for neither, although the NICs are
 recognized (I confirmed this through Device Manager).
Hi,

Thank you for trying.
I was pretty sure Windows had a driver for the eepro100.  One of Bull's
systems used an eepro100 (i.e. not an eepro1000) and Windows could use
the network on it.

However, there may be several eepro100 variants with a few differences.

Tristan.



[Xen-ia64-devel][PATCH]Implement irq redirection of IOSAPIC

2006-12-06 Thread Xu, Anthony
Implement irq redirection of IOSAPIC.


Regards
Anthony



iosapic_redirection.patch
Description: iosapic_redirection.patch

[Xen-ia64-devel][PATCH] fix warning

2006-12-06 Thread Xu, Anthony
Fix warning

Regards
Anthony



warning_fix.patch
Description: warning_fix.patch

[Xen-ia64-devel][PATCH] Send events to VTI domain through level triggered irq

2006-12-06 Thread Xu, Anthony
Send events to VTI domain through level triggered irq

Regards,
Anthony


send_irq_event.patch
Description: send_irq_event.patch