The concept of a memory pool cache is different from the CPU's architectural cache. Unlike on some DSPs, on modern CPUs and systems it is not a common use case to assign addresses in the CPU cache and lock them down as memory, especially because the data buffers are too large for that.

The memory pool cache is a LIFO that stores object pointers; it tries to reduce the memory footprint and to reduce CPU cache conflicts and evictions (it always tries to reuse the most recently used memory first).

Only when the memory itself is accessed will the corresponding cache line be checked and, if needed, loaded. A CPU will not load a whole buffer (2 KB, for example) into its cache on its own, without READ / WRITE / FLUSH operations touching it.
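
As an example (an untested sketch; the pool name and sizes here are arbitrary), getting pointers from a cache-enabled mempool does not touch the object bodies; only dereferencing a pointer pulls its cache lines in:

#include <string.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

#define NB_OBJS  4096
#define OBJ_SIZE 2048                 /* 2 KB objects, as in the example above */
#define CACHE_SZ 256                  /* per-lcore mempool cache */

int main(int argc, char **argv)
{
	void *objs[32];

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	struct rte_mempool *mp = rte_mempool_create("test_pool",
			NB_OBJS, OBJ_SIZE, CACHE_SZ, 0,
			NULL, NULL, NULL, NULL,
			rte_socket_id(), 0);
	if (mp == NULL)
		return -1;

	/* This only dequeues pointers (from the per-lcore cache first,
	 * then the common ring); the 2 KB object bodies are not pulled
	 * into the CPU cache here. */
	if (rte_mempool_get_bulk(mp, objs, 32) == 0) {
		/* Only this access makes the CPU fetch the cache lines
		 * of objs[0] that are actually touched. */
		memset(objs[0], 0, 64);

		rte_mempool_put_bulk(mp, objs, 32);
	}
	return 0;
}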


BR. Bing

From: [email protected] <[email protected]>
Sent: Thursday, April 25, 2024 12:24 AM
To: [email protected]
Subject: question about MemPool

Hello,

When doing a rte_mempool_get_bulk() with a cache-enabled mempool, objects are first retrieved from the cache and then from the common pool, which I assume is sitting in shared memory (DDR or L3?). Wouldn't accessing the objects from the mempool in shared memory itself pull those objects into the processor cache? Can this be avoided?

Thanks,
Vince
