On Fri, Oct 16, 2009 at 04:08:24PM -0700, Vasu Dev wrote:
> -v3
>  Moves the max_queue_depth initialization to after slave_configure is
> called, instead of after slave_alloc is done. Also adjusts the
> max_queue_depth check to skip ramp up when the current queue_depth
> is already >= max_queue_depth.
> 
> Signed-off-by: Vasu Dev <[email protected]>

Looks good to me. I ran some tests on s390 and the queue depth counter
increased until hitting the defined maximum.
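
For anyone skimming, the adjusted check described in the changelog amounts
to something like the sketch below (my own reading, not the actual hunk;
max_queue_depth and the last ramp-up timestamp are passed in as parameters
only because I'm guessing at where the patch keeps them):

#include <linux/jiffies.h>
#include <scsi/scsi_device.h>

/* Return true if the queue depth may be ramped up now. */
static bool example_should_ramp_up(struct scsi_device *sdev,
                                   int max_queue_depth,
                                   unsigned long last_ramp_up)
{
        /* skip ramp up once the current depth has reached the maximum */
        if (sdev->queue_depth >= max_queue_depth)
                return false;

        /* otherwise only ramp up after the configured period has elapsed */
        return time_after_eq(jiffies,
                             last_ramp_up + sdev->queue_ramp_up_period);
}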

> @@ -779,6 +781,36 @@ static struct device_attribute sdev_attr_queue_depth_rw =
>              sdev_store_queue_depth_rw);
> 
>  static ssize_t
> +sdev_show_queue_ramp_up_period(struct device *dev,
> +                            struct device_attribute *attr,
> +                            char *buf)
> +{
> +     struct scsi_device *sdev;
> +     sdev = to_scsi_device(dev);
> +     return snprintf(buf, 20, "%lu\n", sdev->queue_ramp_up_period);
> +}
> +
> +static ssize_t
> +sdev_store_queue_ramp_up_period(struct device *dev,
> +                             struct device_attribute *attr,
> +                             const char *buf, size_t count)
> +{
> +     struct scsi_device *sdev = to_scsi_device(dev);
> +     unsigned long period;
> +
> +     if (strict_strtoul(buf, 10, &period))
> +             return -EINVAL;
> +
> +     sdev->queue_ramp_up_period = period;
> +     return period;
> +}

[...]
> +     unsigned long queue_ramp_up_period;     /* ramp up period in jiffies */
> +#define SCSI_DEFAULT_RAMP_UP_PERIOD  (120 * HZ)

Only a small inconvenience, I guess: the sysfs attribute shows the
ramp-up period in jiffies. On my system with HZ==100 the default reads
as 12000, and I was left wondering whether that was milliseconds or
something related. If HZ changes, the unit of the ramp-up period
changes with it.

I would prefer having seconds or milliseconds in sysfs and only using
jiffies internally.
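
Something like the following would keep jiffies internally but accept and
report milliseconds in sysfs (only a sketch against the quoted hunk,
untested; I also let the store function return count instead of the parsed
value, which seems more correct for a sysfs store):

static ssize_t
sdev_show_queue_ramp_up_period(struct device *dev,
                               struct device_attribute *attr,
                               char *buf)
{
        struct scsi_device *sdev = to_scsi_device(dev);

        /* queue_ramp_up_period is kept in jiffies, shown in milliseconds */
        return snprintf(buf, 20, "%u\n",
                        jiffies_to_msecs(sdev->queue_ramp_up_period));
}

static ssize_t
sdev_store_queue_ramp_up_period(struct device *dev,
                                struct device_attribute *attr,
                                const char *buf, size_t count)
{
        struct scsi_device *sdev = to_scsi_device(dev);
        unsigned long period;

        if (strict_strtoul(buf, 10, &period))
                return -EINVAL;

        /* convert the user-supplied milliseconds to jiffies */
        sdev->queue_ramp_up_period = msecs_to_jiffies(period);
        return count;
}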

Thanks for doing this,

Christof
