Jack,
On Mon, 2008-04-28 at 14:38 +0300, Jack Morgenstein wrote:
> mlx4-core: enable changing default max HCA resource limits.
>
> Enable module-initialization time modification of default HCA
> maximum resource limits via module parameters, as is done in mthca.
>
> Specify the log of the parameter value, rather than the value itself,
> to avoid the hidden side effect of rounding values up to the next power of 2.
This is much needed; thanks!
One minor comment:
In places where there are reserved resources (QPs, SRQs, others?),
should it be ensured that the parameter values are above the logs of those
reserved amounts, so the user doesn't shoot themselves in the foot by accident?
Or perhaps say a little more about the valid ranges in the mod param descriptions?
-- Hal
> Signed-off-by: Jack Morgenstein <[EMAIL PROTECTED]>
>
> ---
>
> Roland,
> This patch was first posted on Oct 16, 2007 (but got overlooked).
>
> I'm reposting its current incarnation, which applies to the OFED 1.4 driver
> as is currently on the OpenFabrics server (based on Kernel 2.6.25-rc7).
>
> Please queue up for kernel 2.6.26.
> Thanks!
> Jack
>
> Index: ofed_kernel/drivers/net/mlx4/main.c
> ===================================================================
> --- ofed_kernel.orig/drivers/net/mlx4/main.c	2007-10-29 10:22:34.771753000 +0200
> +++ ofed_kernel/drivers/net/mlx4/main.c	2007-10-29 11:03:17.939875000 +0200
> @@ -85,6 +85,56 @@ static struct mlx4_profile default_profi
> .num_mtt = 1 << 20,
> };
>
> +static struct mlx4_profile mod_param_profile = { 0 };
> +
> +module_param_named(log_num_qp, mod_param_profile.num_qp, int, 0444);
> +MODULE_PARM_DESC(log_num_qp, "log maximum number of QPs per HCA");
> +
> +module_param_named(log_num_srq, mod_param_profile.num_srq, int, 0444);
> +MODULE_PARM_DESC(log_num_srq, "log maximum number of SRQs per HCA");
> +
> +module_param_named(log_rdmarc_per_qp, mod_param_profile.rdmarc_per_qp, int, 0444);
> +MODULE_PARM_DESC(log_rdmarc_per_qp, "log number of RDMARC buffers per QP");
> +
> +module_param_named(log_num_cq, mod_param_profile.num_cq, int, 0444);
> +MODULE_PARM_DESC(log_num_cq, "log maximum number of CQs per HCA");
> +
> +module_param_named(log_num_mcg, mod_param_profile.num_mcg, int, 0444);
> +MODULE_PARM_DESC(log_num_mcg, "log maximum number of multicast groups per HCA");
> +
> +module_param_named(log_num_mpt, mod_param_profile.num_mpt, int, 0444);
> +MODULE_PARM_DESC(log_num_mpt,
> +		 "log maximum number of memory protection table entries per HCA");
> +
> +module_param_named(log_num_mtt, mod_param_profile.num_mtt, int, 0444);
> +MODULE_PARM_DESC(log_num_mtt,
> +		 "log maximum number of memory translation table segments per HCA");
> +
> +static void process_mod_param_profile(void)
> +{
> + default_profile.num_qp = (mod_param_profile.num_qp ?
> + 1 << mod_param_profile.num_qp :
> + default_profile.num_qp);
> + default_profile.num_srq = (mod_param_profile.num_srq ?
> + 1 << mod_param_profile.num_srq :
> + default_profile.num_srq);
> + default_profile.rdmarc_per_qp = (mod_param_profile.rdmarc_per_qp ?
> + 1 << mod_param_profile.rdmarc_per_qp :
> + default_profile.rdmarc_per_qp);
> + default_profile.num_cq = (mod_param_profile.num_cq ?
> + 1 << mod_param_profile.num_cq :
> + default_profile.num_cq);
> + default_profile.num_mcg = (mod_param_profile.num_mcg ?
> + 1 << mod_param_profile.num_mcg :
> + default_profile.num_mcg);
> + default_profile.num_mpt = (mod_param_profile.num_mpt ?
> + 1 << mod_param_profile.num_mpt :
> + default_profile.num_mpt);
> + default_profile.num_mtt = (mod_param_profile.num_mtt ?
> + 1 << mod_param_profile.num_mtt :
> + default_profile.num_mtt);
> +}
> +
> static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
> {
> int err;
> @@ -514,6 +564,7 @@ static int __devinit mlx4_init_hca(struc
> goto err_stop_fw;
> }
>
> + process_mod_param_profile();
> profile = default_profile;
>
> icm_size = mlx4_make_profile(dev, &profile, &dev_cap, &init_hca);
>
> _______________________________________________
> general mailing list
> [email protected]
> http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
>
> To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general