[EMAIL PROTECTED] wrote:

Waaay back in October I sent some patches for passing additional
attributes to the dma_map_* routines:

http://marc.info/?l=linux-kernel&m=119137949604365&w=2

This is something needed for ia64 Altix NUMA machines (as described
in that thread). Several folks objected to the approach I used - it
was [...]
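For readers following the thread, the shape of the interface being
proposed is roughly the sketch below. This is an illustration only, not
the actual patch: the names struct dma_attrs, DMA_ATTR_BARRIER,
dma_set_attr() and dma_map_single_attrs() are assumptions inferred from
the discussion, and the real definitions live in the patch series
linked above.

	/*
	 * Sketch only: the shape of an attrs-extended mapping API.
	 * Every name here is an illustrative assumption.
	 */
	#include <linux/device.h>
	#include <linux/dma-mapping.h>

	struct dma_attrs {
		unsigned long flags;	/* bitmask of DMA_ATTR_* flags */
	};

	/* A device write to a region mapped with this attribute flushes
	 * all of that device's earlier DMA writes to other regions. */
	#define DMA_ATTR_BARRIER	(1UL << 0)

	static inline void dma_set_attr(unsigned long attr,
					struct dma_attrs *attrs)
	{
		attrs->flags |= attr;
	}

	/* Like dma_map_single(), but passes the extra attributes down
	 * to the architecture's dma-mapping implementation. */
	dma_addr_t dma_map_single_attrs(struct device *dev, void *cpu_addr,
					size_t size,
					enum dma_data_direction dir,
					struct dma_attrs *attrs);

A driver wanting barrier semantics on a streaming mapping would then
declare a struct dma_attrs, call dma_set_attr(DMA_ATTR_BARRIER, &attrs),
and hand &attrs to dma_map_single_attrs().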
On Tue, 2007-12-18 at 12:07 -0800, [EMAIL PROTECTED] wrote:

> On Tue, Dec 18, 2007 at 05:50:42PM +0100, Stefan Richter wrote:
> > Do I understand correctly?: A device and the CPUs communicate via two
> > separate memory areas: A data buffer and a status FIFO. The NUMA
> > interconnect may reorder accesses of the device to the areas. (Write
> > accesses? Read [...]
>
> Reorderings are possible on reads and writes. Things get synced up by
> either an interrupt or a write to a memory region with a "barrier
> attribute". Memory allocated with dma_alloc_coherent() gets the
> barrier attribute. The idea here is to allow memory allocated with [...]
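To make the ordering hazard concrete, here is a minimal driver-side
sketch, assuming a hypothetical device that DMAs a payload into a data
buffer and then writes a completion entry into a status FIFO. The
struct and function names are invented for illustration; only the
dma_alloc_coherent() behavior (its memory carrying the barrier
attribute on Altix) comes from the thread.

	/*
	 * Sketch, not a real driver: why the status FIFO must live in
	 * "barrier" memory on an interconnect that can reorder DMA.
	 */
	#include <linux/dma-mapping.h>
	#include <linux/types.h>

	static void process_data(void *buf, size_t len); /* hypothetical consumer */

	struct completion_entry {
		u32 valid;	/* device sets this last */
		u32 length;	/* bytes DMA'd into the data buffer */
	};

	struct my_ring {
		struct completion_entry *fifo;	/* from dma_alloc_coherent(),
						 * so on Altix it carries the
						 * barrier attribute */
		void *data;			/* streaming buffer, mapped
						 * with dma_map_single() */
		unsigned int head;
	};

	static int my_poll_completion(struct my_ring *ring)
	{
		struct completion_entry *e = &ring->fifo[ring->head];

		if (!e->valid)
			return -EAGAIN;

		/*
		 * The FIFO is barrier memory, so the interconnect pushed
		 * the device's data-buffer writes to memory before the
		 * write that set e->valid. Without the barrier attribute
		 * the CPU could see e->valid != 0 while ring->data still
		 * holds stale bytes.
		 */
		process_data(ring->data, e->length);
		e->valid = 0;
		ring->head++;
		return 0;
	}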
On Tue, Dec 18, 2007 at 09:59:24PM +0100, Stefan Richter wrote:

> From its purpose it sounds like you need this only for few special
> memory regions which would typically be mapped by dma_map_single()

We need the _sg versions too, as Roland already mentioned.

> and furthermore that [...]

> So that would be option 3) of yours, though without your attrs
> parameter. Do you expect the need for even more flags for other kinds
> of special behavior?

I was hoping to keep the option of adding additional flags, but for [...]

> However, your older patch series looks like you want this behavior also
> in areas which are mapped by dma_map_sg(), do you? Still, adding two
> functions of the kind like above, if necessary, might still be
> preferable to changing the call parameters of existing functions or to
> overloading [...]
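The _sg variant being asked for would presumably mirror the
single-mapping sketch earlier in the thread, along the lines of the
following, with the same caveat that the names are illustrative
assumptions rather than the actual patch:

	/* Sketch: scatter-gather counterpart of the
	 * dma_map_single_attrs() sketch above; same illustrative
	 * assumptions (struct dma_attrs, DMA_ATTR_* flags). */
	#include <linux/scatterlist.h>

	int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
			     int nents, enum dma_data_direction dir,
			     struct dma_attrs *attrs);

Passing a struct rather than a bare flags word is what keeps the door
open for adding more flags later without another signature change,
which appears to be the point of the "additional flags" remark above.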
On Fri, 2007-12-21 at 10:00 -0800, [EMAIL PROTECTED] wrote:

> On Fri, Dec 21, 2007 at 07:56:25AM +1100, Benjamin Herrenschmidt wrote:
> > ...
> > Can't you just have a primitive to sync things up that you call
> > explicitly from your driver after fetching a new status entry?
>
> Well, the only mechanisms I know to get things synced are the ones
> I mentioned before: [...]
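For contrast, Benjamin's suggestion amounts to something like the
hypothetical primitive below: instead of tagging the mapping at map
time, the driver would issue an explicit flush after observing a new
status entry. The dma_flush_from_device() name and its semantics are
invented from his one-line description and are not an existing kernel
interface; the example reuses struct my_ring and process_data() from
the earlier sketch.

	/*
	 * Hypothetical alternative: an explicit sync primitive, called
	 * after the driver sees a new status entry and before it reads
	 * the data buffer. Name and signature are illustrative only.
	 */
	void dma_flush_from_device(struct device *dev);

	static int my_poll_completion_explicit(struct device *dev,
					       struct my_ring *ring)
	{
		struct completion_entry *e = &ring->fifo[ring->head];

		if (!e->valid)
			return -EAGAIN;

		/* Force the device's in-flight DMA to memory before
		 * touching the data buffer. */
		dma_flush_from_device(dev);

		process_data(ring->data, e->length);
		e->valid = 0;
		ring->head++;
		return 0;
	}

The rough trade-off, as far as the truncated replies allow one to tell:
an explicit primitive pays its cost on every status fetch and requires
each driver to know about the platform's reordering, while the map-time
attribute confines the knowledge to the mapping call.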