On Tue, Jan 5, 2010 at 5:32 AM, Eli Cohen <[email protected]> wrote:
> Add support for RoCEE device binding and IP --> GID resolution. Path resolving
> and multicast joining are implemented within cma.c by filling the responses and
> pushing the callbacks to the cma work queue. IP->GID resolution always yields
> IPv6 link local addresses - remote GIDs are derived from the destination MAC
> address of the remote port. Multicast GIDs are always mapped to multicast MACs
> as is done in IPv6. Some helper functions are added to ib_addr.h. IPv4
> multicast is enabled by translating IPv4 multicast addresses to IPv6 multicast
> as described in
> http://www.mail-archive.com/[email protected]/msg02134.html.
>
> Signed-off-by: Eli Cohen <[email protected]>
> ---
>  drivers/infiniband/core/cma.c  |  261 ++++++++++++++++++++++++++++++++++++++--
>  drivers/infiniband/core/ucma.c |   45 ++++++-
>  include/rdma/ib_addr.h         |   98 +++++++++++++++-
> 3 files changed, 385 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index fbdd731..e8e28ae 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
[snip...]
> @@ -1707,6 +1740,78 @@ static int cma_resolve_iw_route(struct rdma_id_private *id_priv, int timeout_ms)
> return 0;
> }
>
> +static int cma_resolve_rocee_route(struct rdma_id_private *id_priv)
> +{
> + struct rdma_route *route = &id_priv->id.route;
> + struct rdma_addr *addr = &route->addr;
> + struct cma_work *work;
> + int ret;
> +       struct sockaddr_in *src_addr = (struct sockaddr_in *)&route->addr.src_addr;
> +       struct sockaddr_in *dst_addr = (struct sockaddr_in *)&route->addr.dst_addr;
> + struct net_device *ndev = NULL;
> +
> + if (src_addr->sin_family != dst_addr->sin_family)
> + return -EINVAL;
> +
> + work = kzalloc(sizeof *work, GFP_KERNEL);
> + if (!work)
> + return -ENOMEM;
> +
> + work->id = id_priv;
> + INIT_WORK(&work->work, cma_work_handler);
> +
> + route->path_rec = kzalloc(sizeof *route->path_rec, GFP_KERNEL);
> + if (!route->path_rec) {
> + ret = -ENOMEM;
> + goto err1;
> + }
> +
> + route->num_paths = 1;
> +
> + rocee_mac_to_ll(&route->path_rec->sgid, addr->dev_addr.src_dev_addr);
> + rocee_mac_to_ll(&route->path_rec->dgid, addr->dev_addr.dst_dev_addr);
> +
> + route->path_rec->hop_limit = 2;
> + route->path_rec->reversible = 1;
> + route->path_rec->pkey = cpu_to_be16(0xffff);
> + route->path_rec->mtu_selector = 2;
> +
> + if (addr->dev_addr.bound_dev_if) {
> +               ndev = dev_get_by_index(&init_net, addr->dev_addr.bound_dev_if);
> + if (!ndev)
> + return -ENODEV;
> + }
> +
> + if (ndev)
> + route->path_rec->mtu = rocee_get_mtu(ndev->mtu);
> + route->path_rec->rate_selector = 2;
> + if (ndev)
> + route->path_rec->rate = rocee_get_rate(ndev);
The rocee_get_rate routine seems merely to get the (local) device rate,
so this looks like it only works in a homogeneous (single-speed) subnet.
What about a heterogeneous one (either different-speed links or links
negotiated down)? What happens if a link internal to the subnet is
slower than the endpoints? Isn't this important for setting a proper
static rate control?
The same concern may apply to other path-related parameters if they can
vary along the path.
-- Hal
[snip...]
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html