[XenPPC] Re: xencomm porting and inline handles

2006-09-27 Thread Tristan Gingold
On Tuesday 26 September 2006 at 20:23, Hollis Blanchard wrote:
 On Tue, 2006-09-26 at 10:04 +0200, Tristan Gingold wrote:
  After more work, inline xencomm is not that magic: it doesn't work for
  modules, which are loaded in virtual memory.  So I have to use mini
  xencomm at least for modules.

 What's the problem with modules? Their text/data isn't physically
 contiguous, but where exactly is the problem?
Inline xencomm only works for physically contiguous areas because only the
base address is passed.  Therefore it doesn't work for modules.
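
To make that concrete, here is a minimal sketch of the two handle flavours,
modelled on the historical Linux/Xen xencomm interface (XENCOMM_INLINE_FLAG
and the descriptor layout follow the old include/xen/interface/xencomm.h;
treat the details as illustrative):

#include <linux/types.h>

#define XENCOMM_INLINE_FLAG	(1UL << 63)

/* Inline handle: the guest-physical base address with a tag bit set.
 * The hypervisor computes paddr + offset for every access, which is
 * only valid if the whole buffer is physically contiguous. */
static inline unsigned long xencomm_inline_handle(unsigned long paddr)
{
	return paddr | XENCOMM_INLINE_FLAG;
}

/* Descriptor ("mini") handle: an explicit list of the physical pages
 * backing the buffer, so virtually contiguous but physically scattered
 * memory, such as vmalloc'ed module text and data, still works. */
struct xencomm_desc {
	u32 magic;
	u32 nr_addrs;
	u64 address[0];		/* one guest-physical address per page */
};

A module buffer fails the inline case because consecutive virtual pages may
map to arbitrary physical frames, so paddr + offset can cross into unrelated
memory.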

Tristan.




Re: [XenPPC] Help with JS21 disk solution

2006-09-27 Thread Segher Boessenkool

  If you need inv_all here, you have a bug elsewhere...


 I agree, I'm just trying to corner the beast :)


 Ok, this seems to work, it's pretty solid, so somehow our invalidation
 logic is sufficient for network but not for disk activity.  One theory is
 that disk uses short-lived TCE entries rather than batching them as the
 network path does.


 So we have a workaround, and later we can investigate the single-entry
 issue.


Do you map the DART table as M=1 or M=0?  U3 should use M=0
(and needs logic to flush the data to main memory), while U4
should use M=1...
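
For the M=0 (U3) case, that flush logic usually amounts to pushing the
just-written entries out of the data cache so the non-snooping DART fetch
sees them in main memory.  A minimal sketch, assuming 128-byte cache lines
as on the 970; the helper name is invented:

#include <linux/types.h>

/* Flush a range of freshly written DART entries to main memory with
 * dcbf, then order with sync, so a non-coherent table fetch (the U3
 * case) reads current data.  970-class CPUs have 128-byte lines. */
void dart_flush_range(void *start, size_t len)
{
	unsigned long addr = (unsigned long)start & ~127UL;
	unsigned long end = (unsigned long)start + len;

	for (; addr < end; addr += 128)
		asm volatile("dcbf 0,%0" : : "r"(addr) : "memory");
	asm volatile("sync" : : : "memory");
}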


Segher




Re: [XenPPC] Help with JS21 disk solution

2006-09-27 Thread Jimi Xenidis


On Sep 27, 2006, at 8:37 AM, Segher Boessenkool wrote:


   If you need inv_all here, you have a bug elsewhere...


  I agree, I'm just trying to corner the beast :)


  Ok, this seems to work, it's pretty solid, so somehow our invalidation
  logic is sufficient for network but not for disk activity.  One theory is
  that disk uses short-lived TCE entries rather than batching them as the
  network path does.


  So we have a workaround, and later we can investigate the single-entry
  issue.


 Do you map the DART table as M=1 or M=0?  U3 should use M=0
 (and needs logic to flush the data to main memory), while U4
 should use M=1...


We are running in real mode, so there is no mapping.
We use normal writes and flush the cache.
After we flush everything, we then go after the IO regs to invalidate,
which syncs the hell out of the processor.


I'm considering using our cache-inhibited (CI) IO ops to update the DART
table just to see if it makes a difference.
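
For comparison, a rough sketch of the two update strategies being weighed
here; dart_flush_range() matches the flush helper sketched earlier in the
thread, and dart_inv_all() is a stand-in for whatever IO-register write
triggers the invalidation (both hypothetical names):

#include <linux/types.h>
#include <asm/io.h>

void dart_flush_range(void *start, size_t len);	/* sketched above */
void dart_inv_all(void);	/* hypothetical: poke the DART inv reg */

/* Current path: normal cacheable store, flush it out, then invalidate. */
static void dart_update_cached(u32 *entry, u32 val)
{
	*entry = val;				/* lands in the data cache */
	dart_flush_range(entry, sizeof(*entry));
	dart_inv_all();				/* the heavyweight sync */
}

/* Experiment: cache-inhibited (CI) store straight to memory, assuming
 * the table is reachable through a CI mapping; nothing stays cached. */
static void dart_update_ci(u32 __iomem *entry, u32 val)
{
	out_be32(entry, val);
}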

-JX



[XenPPC] Re: xencomm porting and inline handles

2006-09-27 Thread Hollis Blanchard
On Wed, 2006-09-27 at 08:19 +0200, Tristan Gingold wrote:
 On Tuesday 26 September 2006 at 20:23, Hollis Blanchard wrote:
  On Tue, 2006-09-26 at 10:04 +0200, Tristan Gingold wrote:
   After more work, inline xencomm is not that magic: it doesn't work for
   modules, which are loaded in virtual memory.  So I have to use mini
   xencomm at least for modules.
 
  What's the problem with modules? Their text/data isn't physically
  contiguous, but where exactly is the problem?
 Inline xencomm only works for physically contiguous areas because only the
 base address is passed.  Therefore it doesn't work for modules.

I understand that; please explain exactly what about the modules isn't
working.

For example, the stack used in kernel modules is still physically
contiguous, so using stack-allocated data structures should work fine.
However, making hypercalls directly with global data structures
wouldn't work. That said, the inline code is only being used for the
hypercalls that could be made early. Is that the problem? Please
identify the specific issue(s).
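
In code terms, the distinction might look like the hypothetical guard below.
This is a sketch, not the actual xencomm implementation: xencomm_mini_handle()
is a made-up name for the descriptor-based fallback, and the vmalloc-range
check stands in for "this buffer may be physically scattered":

#include <linux/mm.h>
#include <asm/io.h>

#define XENCOMM_INLINE_FLAG	(1UL << 63)	/* as in the earlier sketch */

unsigned long xencomm_mini_handle(void *buf, unsigned long bytes);

static unsigned long xencomm_map(void *buf, unsigned long bytes)
{
	unsigned long vaddr = (unsigned long)buf;

	/* Module text/data live in the vmalloc area: virtually contiguous
	 * but physically scattered, so inline encoding is unsafe there. */
	if (vaddr >= VMALLOC_START && vaddr < VMALLOC_END)
		return xencomm_mini_handle(buf, bytes);

	/* Directly mapped kernel memory, including a module's on-stack
	 * buffers, is physically contiguous, so inline encoding works. */
	return virt_to_phys(buf) | XENCOMM_INLINE_FLAG;
}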

-- 
Hollis Blanchard
IBM Linux Technology Center

