On Tue, Aug 8, 2017 at 3:38 AM, David Jeffery wrote:
> There is a race between changing the I/O elevator and request_queue removal
> which can trigger the warning in kobject_add_internal. A program can
> use sysfs to request a change of elevator at the same time another task
>
On Mon, Aug 07, 2017 at 06:06:11PM -0400, Laurence Oberman wrote:
> On Mon, Aug 7, 2017 at 8:48 AM, Laurence Oberman
> wrote:
>
> >
> >
> > On 08/05/2017 02:56 AM, Ming Lei wrote:
> >
> >> In Red Hat internal storage tests wrt. the blk-mq scheduler, we
> >> found that I/O
On Mon, Aug 07, 2017 at 01:29:41PM -0400, Laurence Oberman wrote:
>
>
> On 08/07/2017 11:27 AM, Bart Van Assche wrote:
> > On Mon, 2017-08-07 at 08:48 -0400, Laurence Oberman wrote:
> > > I tested this series using Ming's tests as well as my own set of tests
> > > typically run against changes
On Mon, 2017-08-07 at 18:06 -0400, Laurence Oberman wrote:
> With [mq-deadline] enabled on stock I don't see them at all and it behaves.
>
> Now with Ming's patches, if we enable [mq-deadline] we DO see the clone
> failures and the hard lockup, so we have the opposite behaviour with the
> scheduler
On 08/04/2017 05:19 PM, Ming Lei wrote:
> On Fri, Aug 04, 2017 at 07:55:41AM -0600, Jens Axboe wrote:
>> On 08/04/2017 05:17 AM, Ming Lei wrote:
>>> On Thu, Aug 03, 2017 at 02:01:55PM -0600, Jens Axboe wrote:
We don't have to inc/dec some counter, since we can just iterate the tags. That
On 08/07/2017 01:29 PM, Laurence Oberman wrote:
On 08/07/2017 11:27 AM, Bart Van Assche wrote:
On Mon, 2017-08-07 at 08:48 -0400, Laurence Oberman wrote:
I tested this series using Ming's tests as well as my own set of tests
typically run against changes to upstream code in my SRP
> On 7 Aug 2017, at 19:32, Paolo Valente
> wrote:
>
>>
>> On 5 Aug 2017, at 00:05, Paolo Valente
>> wrote:
>>
>>>
>>> On 4 Aug 2017, at 13:01, Mel Gorman
>>>
> On 5 Aug 2017, at 13:54, Mel Gorman
> wrote:
> ...
>
>> In addition, as for coverage, we made the empirical assumption that
>> start-up time measured with each of the above easy-to-benchmark
>> applications gives an idea of the time that it would
> On 5 Aug 2017, at 00:05, Paolo Valente
> wrote:
>
>>
>> On 4 Aug 2017, at 13:01, Mel Gorman
>> wrote:
>>
>> On Fri, Aug 04, 2017 at 09:26:20AM +0200, Paolo Valente wrote:
I took that into
On 08/07/2017 11:27 AM, Bart Van Assche wrote:
On Mon, 2017-08-07 at 08:48 -0400, Laurence Oberman wrote:
I tested this series using Ming's tests as well as my own set of tests
typically run against changes to upstream code in my SRP test-bed.
My tests also include very large sequential
Thanks for your feedback Hannes, agreed!
Cheers,
Guilherme
On Mon, Aug 07, 2017 at 10:29:05AM +0200, Hannes Reinecke wrote:
> On 08/05/2017 05:51 PM, Shaohua Li wrote:
> > From: Shaohua Li
> >
> > In testing software RAID, I usually found it's hard to cover specific cases.
> > RAID is supposed to work even when a disk is in a semi-good state, for
Mikulas,
> If you create the integrity tag at or above device mapper level, you
> will run into problems because the same device can be accessed using
> device mapper and using physical volume /dev/sd*. If you create
> integrity tags at device mapper level, they will contain device
> mapper's
On Mon, 2017-08-07 at 08:48 -0400, Laurence Oberman wrote:
> I tested this series using Ming's tests as well as my own set of tests
> typically run against changes to upstream code in my SRP test-bed.
> My tests also include very large sequential buffered and un-buffered I/O.
>
> This series
On 08/07/2017 03:42 PM, Guilherme G. Piccoli wrote:
> We observed that it's possible to perform partition operations in both
> nvmf target and initiator block devices, like creating and deleting
> partitions.
>
> But there is no sync mechanism between target and initiator regarding
> the
We observed that it's possible to perform partition operations in both
nvmf target and initiator block devices, like creating and deleting
partitions.
But there is no sync mechanism between target and initiator regarding
the partition operations. After creating a partition on the initiator, for
On Mon, Aug 07, 2017 at 04:09:12PM +0300, Anton Volkov wrote:
> This is more of a style-oriented suggestion. This kind of template is
> commonly used in other modules.
Yes, but there is no point in using gotos here (i.e. there is no cleanup to
be done); you can just return directly.
And yes, it is a minor nit.
This is more of a style-oriented suggestion. This kind of template is
commonly used in other modules.
Regards,
Anton
On 07.08.2017 15:54, Johannes Thumshirn wrote:
On Mon, Aug 07, 2017 at 03:37:50PM +0300, Anton Volkov wrote:
+err_out:
return err;
Any reason you can't just use
On Mon, Aug 07, 2017 at 03:37:50PM +0300, Anton Volkov wrote:
> +err_out:
> return err;
Any reason you can't just use return err; at the respective callsites?
Thanks,
Johannes
--
Johannes Thumshirn Storage
jthumsh...@suse.de
On 08/05/2017 02:56 AM, Ming Lei wrote:
In Red Hat internal storage tests wrt. the blk-mq scheduler, we
found that I/O performance is quite bad with mq-deadline, especially
for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
SRP...)
Turns out one big issue causes the performance
The early device registration opened a race that could lead to allocation
of disks with wrong minors.
This patch moves the device registration further down in the loop_init
function to make the race infeasible.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Anton
On 08/05/2017 05:51 PM, Shaohua Li wrote:
> From: Shaohua Li
>
> In testing software RAID, I usually found it's hard to cover specific cases.
> > RAID is supposed to work even when a disk is in a semi-good state, for example,
> > when some sectors are broken. Since we can't control the behavior of
On 08/05/2017 01:39 PM, Christoph Hellwig wrote:
> Can you use normal linux style for the code instead of copy and
> pasting the weird naming and capitalization from the DAC960 driver?
>
Yes; that is already planned for v2. But I first wanted to get some general
feedback (like: is anyone interested in that