Previously, the maximum size of a state that could be allocated from a state pool was one block. This has caused us various issues, particularly with shaders, which are potentially very large. We've also hit issues with render passes that have a large number of attachments when we go to allocate the block of surface state. This commit effectively removes the restriction on the maximum size of a single state. (There is still a limit of 1 MB imposed by a fixed-length bucket array.)
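As an illustrative aside (not part of the patch): the 1 MB ceiling comes from a fixed-length array of power-of-two size buckets. A sketch of the size-to-bucket mapping, with assumed constants and names (the real table lives in anv_allocator.c), might look like:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a fixed-length, power-of-two bucket table.  The constants
 * and names below are assumptions for illustration only.
 */
#define MIN_STATE_SIZE_LOG2 6u   /* smallest bucket: 64 B */
#define MAX_STATE_SIZE_LOG2 20u  /* largest bucket: 1 MB -> the hard cap */
#define NUM_BUCKETS (MAX_STATE_SIZE_LOG2 - MIN_STATE_SIZE_LOG2 + 1)

static unsigned
size_to_bucket(uint32_t size)
{
   assert(size > 0 && size <= (1u << MAX_STATE_SIZE_LOG2));
   if (size <= (1u << MIN_STATE_SIZE_LOG2))
      return 0;
   /* Index of the smallest power-of-two bucket that fits `size`. */
   unsigned log2 = 32 - __builtin_clz(size - 1);
   return log2 - MIN_STATE_SIZE_LOG2;
}
```

Because the array has a fixed number of entries, any allocation larger than the top bucket simply has nowhere to go, which is where the 1 MB limit mentioned above comes from.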
For states larger than the block size, we just grab a large block off of the block pool rather than sub-allocating. When we go to allocate some chunk of state and the current bucket does not have any free states, we try to pull a chunk from some larger bucket and split it up. This should improve memory usage when a client occasionally allocates a large block of state.

This commit is inspired by some similar work done by Juan A. Suarez Romero <jasua...@igalia.com>.
---
 src/intel/vulkan/anv_allocator.c | 43 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/src/intel/vulkan/anv_allocator.c b/src/intel/vulkan/anv_allocator.c
index 7a687b7..68c389c 100644
--- a/src/intel/vulkan/anv_allocator.c
+++ b/src/intel/vulkan/anv_allocator.c
@@ -650,6 +650,12 @@ anv_fixed_size_state_pool_alloc_new(struct anv_fixed_size_state_pool *pool,
    struct anv_block_state block, old, new;
    uint32_t offset;
 
+   /* If our state is large, we don't need any sub-allocation from a block.
+    * Instead, we just grab whole (potentially large) blocks.
+    */
+   if (state_size >= block_size)
+      return anv_block_pool_alloc(block_pool, state_size);
+
 restart:
    block.u64 = __sync_fetch_and_add(&pool->block.u64, state_size);
 
@@ -702,6 +708,43 @@ anv_state_pool_alloc_no_vg(struct anv_state_pool *pool,
       goto done;
    }
 
+   /* Try to grab a chunk from some larger bucket and split it up */
+   for (unsigned b = bucket + 1; b < ANV_STATE_BUCKETS; b++) {
+      int32_t chunk_offset;
+      if (anv_free_list_pop(&pool->buckets[b].free_list,
+                            &pool->block_pool.map, &chunk_offset)) {
+         unsigned chunk_size = anv_state_pool_get_bucket_size(b);
+
+         if (chunk_size > pool->block_size &&
+             state.alloc_size < pool->block_size) {
+            assert(chunk_size % pool->block_size == 0);
+            /* We don't want to split giant chunks into tiny chunks.  Instead,
+             * break anything bigger than a block into block-sized chunks and
+             * then break it down into bucket-sized chunks from there.  Return
+             * all but the first block of the chunk to the block bucket.
+             */
+            const uint32_t block_bucket =
+               anv_state_pool_get_bucket(pool->block_size);
+            anv_free_list_push(&pool->buckets[block_bucket].free_list,
+                               pool->block_pool.map,
+                               chunk_offset + pool->block_size,
+                               pool->block_size,
+                               (chunk_size / pool->block_size) - 1);
+            chunk_size = pool->block_size;
+         }
+
+         assert(chunk_size % state.alloc_size == 0);
+         anv_free_list_push(&pool->buckets[bucket].free_list,
+                            pool->block_pool.map,
+                            chunk_offset + state.alloc_size,
+                            state.alloc_size,
+                            (chunk_size / state.alloc_size) - 1);
+
+         state.offset = chunk_offset;
+         goto done;
+      }
+   }
+
    state.offset = anv_fixed_size_state_pool_alloc_new(&pool->buckets[bucket],
                                                       &pool->block_pool,
                                                       state.alloc_size,
-- 
2.5.0.400.gff86faf
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
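
For readers skimming the second hunk: the two-stage splitting arithmetic can be sketched in isolation. In this sketch the free-list plumbing is replaced by plain counters, and all names are illustrative, not the actual anv API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for a free list: we only track how many
 * entries get pushed onto it, not the intrusive-list plumbing.
 */
struct toy_bucket {
   unsigned free_count;
};

/* Split a popped chunk of `chunk_size` bytes so the caller keeps the
 * first `alloc_size`-byte state.  Mirrors the two-stage split in the
 * patch: anything bigger than a block is first broken into blocks, and
 * only one block is shattered into bucket-sized states.
 */
static void
split_chunk(uint32_t chunk_size, uint32_t alloc_size, uint32_t block_size,
            struct toy_bucket *block_bucket, struct toy_bucket *state_bucket)
{
   if (chunk_size > block_size && alloc_size < block_size) {
      assert(chunk_size % block_size == 0);
      /* Return all but the first block of the chunk to the block bucket. */
      block_bucket->free_count += (chunk_size / block_size) - 1;
      chunk_size = block_size;
   }

   assert(chunk_size % alloc_size == 0);
   /* Return all but the first state of the remaining chunk. */
   state_bucket->free_count += (chunk_size / alloc_size) - 1;
}
```

With a 16 KB chunk, 4 KB blocks, and 64 B states, this pushes 3 whole blocks back to the block-sized bucket and 63 states to the requested bucket, rather than shattering all 16 KB into 256 tiny states at once.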