Hi,

The documentation says:
 * @param cache_size
 *   If cache_size is non-zero, the rte_mempool library will try to
 *   limit the accesses to the common lockless pool, by maintaining a
 *   per-lcore object cache. This argument must be lower or equal to
 *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. *It is advised to
 *   choose cache_size to have "n modulo cache_size == 0": if this is
 *   not the case, some elements will always stay in the pool and will
 *   never be used.* The access to the per-lcore table is of course
 *   faster than the multi-producer/consumer pool. The cache can be
 *   disabled if the cache_size argument is set to 0; it can be useful to
 *   avoid losing objects in cache.

Could someone please explain the highlighted sentence: how does the cache size affect the objects inside the ring?

Also, if I'm sharing a pool between different cores, can it happen that a core sees the pool as empty even though it still has objects in it?

Thanks,
Roy