Hello,

This is my first email to the mailing list, so if I did something wrong, please tell me and I will not do it again.

I'm having a problem when using an rte_hash table shared between processes.

I have the following code to create the hash table in the primary process, and to get the existing table from the secondary process:

static void
init_connections_hash(struct rte_hash **connections_hash, int max_connections)
{
    struct rte_hash_parameters hash_params = {
        .name = HASH_NAME,
        .entries = max_connections,
        .key_len = sizeof(struct connection),
        .hash_func = rte_hash_crc,
        .hash_func_init_val = 0,
        .socket_id = SOCKET_ID_ANY,
        .extra_flag = RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD | RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF
    };
    *connections_hash = rte_hash_create(&hash_params);
    if (*connections_hash == NULL)
    {
        rte_exit(EXIT_FAILURE, "Failed to create TCP connections hash table\n");
    }
}

static void
get_existing_connections_hash(struct rte_hash **connections_hash)
{
    *connections_hash = rte_hash_find_existing(HASH_NAME);
    if (*connections_hash == NULL)
    {
        rte_exit(EXIT_FAILURE, "Failed to get existing TCP connections hash table\n");
    }
}

And in main, when initializing the process:

if (proc_type == RTE_PROC_PRIMARY) {
    init_connections_hash(&connections_hash, MAX_CONNECTIONS_IN_LIST);
} else {
    get_existing_connections_hash(&connections_hash);
}

What I think is happening: when I create the table with .hash_func = rte_hash_crc (I also tested with rte_jhash), the library stores the function's memory address, which is valid in the primary process but is not necessarily mapped at the same address in the secondary process.

So when I call rte_hash_lookup_data, it calls rte_hash_hash(h, key); for reference, that function is:

hash_sig_t
rte_hash_hash(const struct rte_hash *h, const void *key)
{
    /* calc hash result by key */
    return h->hash_func(key, h->key_len, h->hash_func_init_val);
}

But h->hash_func holds an address that is only valid in the primary process, and when it is called in the secondary process I get a segmentation fault.

I tried overwriting h->hash_func with rte_hash_crc, and for my application it worked like a charm, so I really do think the problem is the function's address not being valid in the secondary process.

This only happens with the hash function itself. I added some items in the primary process, and in the secondary process I iterated the table with rte_hash_iterate; I could read everything in it without any problem, so the table itself was shared correctly between the processes.
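For reference, the iteration in the secondary process looked roughly like this. This is only a sketch: `connections_hash` is the handle returned by rte_hash_find_existing above, and struct connection is my own key type. As far as I can tell it works across processes because rte_hash_iterate only walks the bucket arrays in shared memory and never dereferences h->hash_func:

```c
/* Sketch: iterate the shared table from the secondary process.
 * Requires <rte_hash.h>; connections_hash comes from
 * rte_hash_find_existing(). */
const void *key;
void *data;
uint32_t iter = 0;

while (rte_hash_iterate(connections_hash, &key, &data, &iter) >= 0) {
    const struct connection *conn = key;
    /* ... read conn and data here ... */
}
```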

The solution I found was to use rte_hash_lookup_with_hash_data (computing the signature myself) instead of rte_hash_lookup_data. It works as I intended, but then it seems pointless to initialize the hash with .hash_func = some_hash_func when using multiple processes.
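In case it helps, a minimal sketch of that workaround, assuming the same struct connection key and the create parameters above (rte_hash_crc with init value 0): the signature is computed directly in the calling process, so the stored h->hash_func pointer is never dereferenced.

```c
/* Sketch: precompute the signature in the caller so lookup never
 * goes through the (process-local) h->hash_func pointer.
 * Requires <rte_hash.h> and <rte_hash_crc.h>. */
struct connection key;
/* ... fill in the key fields ... */
void *data;

/* Must match the create parameters: same key_len, same init value. */
hash_sig_t sig = rte_hash_crc(&key, sizeof(key), 0);

int ret = rte_hash_lookup_with_hash_data(connections_hash, &key, sig, &data);
if (ret >= 0) {
    /* data points to the value stored for this connection */
}
```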

So I would like to know whether I did something wrong when creating or using the table, whether this is a bug in the library, or whether this workaround is really the intended way to use the hash table from multiple processes.

Thanks
