> +int lg_odp_pktio_lookup(const char *dev, pktio_entry_t *pktio_entry)
> +{
> +     int ret = -1;
> +     char ipc_shm_name[ODPH_RING_NAMESIZE];
> +     size_t ring_size;
> +
> +     if (!(lg_odp_shm_lookup_pktio(dev) == 0 &&
> +           odp_shm_lookup_ipc("shm_packet_pool") == 0)) {
> +             ODP_DBG("pid %d unable to find ipc object: %s.\n",
> +                     getpid(), dev);
> +             goto error;
> +     }

So, the role of the new odp_shm_lookup_ipc() call is just to check whether a 
shm has been created. I'm against adding xxx_ipc() versions of existing API 
calls. You should be able to just use the normal lookup. Maybe ODP (global 
init) needs to be informed that the application runs in multi-process mode, so 
that ODP internal data structures are mapped in a way that lets multiple 
processes look them up.

See e.g. the "Multi-process Sample Application" chapter in the DPDK sample 
applications guide; it gives a cleaner framework for handling the 
multi-process case.

> +
> +     ODP_DBG("pid %d odp_shm_lookup_ipc found shared object\n",
> +             getpid());
> +     ring_size =  PKTIO_IPC_ENTRIES * sizeof(void *) +
> +             sizeof(odph_ring_t);
> +
> +     memset(ipc_shm_name, 0, ODPH_RING_NAMESIZE);
> +     memcpy(ipc_shm_name, dev, strlen(dev));
> +     memcpy(ipc_shm_name + strlen(dev), "_r", 2);
> +
> +     /* allocate shared memory for buffers needed to be produced */
> +     odp_shm_t shm = odp_shm_reserve(ipc_shm_name, ring_size,
> +                     ODP_CACHE_LINE_SIZE,
> +                     ODP_SHM_PROC_NOCREAT);

This should be replaced by the normal (multi-process capable) lookup.

> +
> +     pktio_entry->s.ipc_r = odp_shm_addr(shm);
> +     if (!pktio_entry->s.ipc_r) {
> +             ODP_DBG("pid %d unable to find ipc ring %s name\n",
> +                     getpid(), dev);
> +             goto error;
> +     }
> +
> +     memcpy(ipc_shm_name + strlen(dev), "_p", 2);
> +     /* allocate shared memory for produced ring buffer handlers. Those
> +      * buffers will be cleaned up after they are produced by the other
> +      * process.
> +      */
> +     shm = odp_shm_reserve(ipc_shm_name, ring_size,
> +                     ODP_CACHE_LINE_SIZE,
> +                     ODP_SHM_PROC_NOCREAT);

Same thing here.

-Petri

> +
> +     pktio_entry->s.ipc_p = odp_shm_addr(shm);
> +     if (!pktio_entry->s.ipc_p) {
> +             ODP_DBG("pid %d unable to find ipc ring %s name\n",
> +                     getpid(), dev);
> +             goto error;
> +     }
> +
> +     pktio_entry->s.type = ODP_PKTIO_TYPE_IPC;
> +
> +     /* Get info about remote pool */
> +     struct pktio_info *pinfo = lng_map_shm_pool_info(dev,
> +                                     ODP_SHM_PROC_NOCREAT);
> +     pktio_entry->s.ipc_pool_base = map_remote_pool(pinfo);
> +     pktio_entry->s.ipc_pkt_size = pinfo->shm_pkt_size;
> +
> +     /* @todo: to simplify the linux-generic implementation we create a
> +      * pool for packets from the IPC queue. On receive, the
> +      * implementation copies packets to that pool. Later we can try to
> +      * reuse the original pool without copying packets.
> +      */
> +     pktio_entry->s.pkt_sock.pool = lg_odp_alloc_and_create_pool(
> +             pinfo->shm_pkt_pool_size, pinfo->shm_pkt_size);
> +
> +     ret = 0;
> +     ODP_DBG("%s OK.\n", __func__);
> +error:
> +     /* @todo free shm on error (api not implemented yet) */
> +     return ret;
> +}
> +


_______________________________________________
lng-odp mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/lng-odp
