David Brownell wrote:
> unlike the slab allocator bug(s) I pointed out. (And which
> Manfred seems to have gone silent on.)
Which bugs?
If you enable FORCED_DEBUG, the allocator will stress-test the slab
users. Just use kmem_cache_create() to create an HW_CACHEALIGN cache
with 4-byte objects. You'll notice that the objects are not cache line
aligned, even without FORCED_DEBUG.
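Roughly like this (a sketch from memory against the 2.4 slab API; the
flag is spelled SLAB_HWCACHE_ALIGN there, and the cache and function
names are made up):

	#include <linux/kernel.h>
	#include <linux/slab.h>
	#include <linux/cache.h>

	static kmem_cache_t *tiny_cache;

	static void check_tiny_alignment(void)
	{
		void *obj;

		/* 4-byte objects, hardware cache alignment requested */
		tiny_cache = kmem_cache_create("tiny", 4, 0,
				SLAB_HWCACHE_ALIGN, NULL, NULL);
		if (!tiny_cache)
			return;

		obj = kmem_cache_alloc(tiny_cache, SLAB_KERNEL);

		/* per the behaviour described above, this fires */
		if (obj && ((unsigned long) obj & (L1_CACHE_BYTES - 1)))
			printk("object %p not cache line aligned\n", obj);

		if (obj)
			kmem_cache_free(tiny_cache, obj);
		kmem_cache_destroy(tiny_cache);
	}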
And I think you'll agree that it would be foolish to align 32-byte
objects on the 128-byte cache lines of a P4, no?
>
> /* pci consistent pages allocated in units of LOGICAL_PAGE_SIZE, layout:
> * - pci_page (always in the 'slab')
> * - bitmap (with blocks_per_page bits)
> * - blocks (starting at blocks_offset)
> *
> * this can easily be optimized, but the best fix would be to
> * make this just a bus-specific front end to mm/slab.c logic.
^^^^
> */
Adding that new front end was already on my todo list for 2.5, but it
means modifying half of mm/slab.c.
> extern struct pci_pool *
> pci_pool_create (const char *name, struct pci_dev *pdev,
>		int size, int align, int flags)
> [...]
>
>	if (align < L1_CACHE_BYTES)
>		align = L1_CACHE_BYTES;
>
Why?
If a caller really needs L1 alignment, he can use
	pci_pool_create(,, max(my_align, L1_CACHE_BYTES),);
Why hardcode L1_CACHE_BYTES?
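Spelled out, for a hypothetical caller (my_align, size, flags and the
"mydev" name are made up):

	struct pci_pool *pool;
	int align = my_align;

	if (align < L1_CACHE_BYTES)
		align = L1_CACHE_BYTES;
	pool = pci_pool_create("mydev", pdev, size, align, flags);

That keeps the policy in the one driver that actually knows it wants
L1 alignment.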
> /* Convert a DMA mapping to its cpu address (as returned by pci_pool_alloc).
> * Don't assume this is cheap, although on some platforms it may be simple
> * macros adding a constant to the DMA handle.
> */
> extern void *
> pci_pool_dma_to_cpu (struct pci_pool *pool, dma_addr_t handle);
Do lots of drivers need the reverse mapping? It wasn't on my todo list
yet.
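I assume the intended use is a completion path where the hardware hands
back only the DMA address, something like this (mydev_complete is
hypothetical, and I'm assuming the pool also grows a pci_pool_free()
counterpart):

	static void mydev_complete(struct pci_pool *pool, dma_addr_t dma)
	{
		void *cpu = pci_pool_dma_to_cpu(pool, dma);

		/* ... inspect the buffer at cpu ... */
		pci_pool_free(pool, cpu, dma);
	}

If that's the common case, drivers that already track the cpu pointer
alongside the handle wouldn't need it.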
--
Manfred