Hi Bart,
Bart Van Assche writes:
>
...
> diff --git a/Documentation/features/locking/cmpxchg64/arch-support.txt b/Documentation/features/locking/cmpxchg64/arch-support.txt
> new file mode 100644
> index ..65b3290ce5d5
> --- /dev/null
> +++
On Mon, May 14, 2018 at 08:22:11PM +0800, Ming Lei wrote:
> Hi Jianchao,
>
> On Mon, May 14, 2018 at 06:05:50PM +0800, jianchao.wang wrote:
> > Hi ming
> >
> > On 05/14/2018 05:38 PM, Ming Lei wrote:
> > >> Here is the deadlock scenario.
> > >>
> > >> nvme_eh_work // EH0
> > >> ->
On Tue, May 15, 2018 at 07:47:07AM +0800, Ming Lei wrote:
> > > > [ 760.727485] nvme nvme1: EH 0: after recovery -19
> > > > [ 760.727488] nvme nvme1: EH: fail controller
> > >
> > > The above issue(hang in nvme_remove()) is still an old issue, which
> > > is because queues are kept as quiesce
On Mon, May 14, 2018 at 09:18:21AM -0600, Keith Busch wrote:
> Hi Ming,
>
> On Sat, May 12, 2018 at 08:21:22AM +0800, Ming Lei wrote:
> > > [ 760.679960] nvme nvme1: controller is down; will reset: CSTS=0x, PCI_STATUS=0x
> > > [ 760.701468] nvme nvme1: EH 0: after shutdown,
On Mon, May 14, 2018 at 02:01:36PM -0700, Omar Sandoval wrote:
> On Mon, May 14, 2018 at 02:42:41PM -0600, Keith Busch wrote:
> > This test will run a background IO process and inject an admin command
> > with a very short timeout that is all but guaranteed to expire without
> > a completion: the
Hi Bart,
On Mon, May 14, 2018 at 10:42 PM, Bart Van Assche wrote:
> On Mon, 2018-05-14 at 11:50 -0700, Max Filippov wrote:
>> On Mon, May 14, 2018 at 11:46 AM, Bart Van Assche wrote:
>> > The next patch in this series introduces a call to
On Mon, May 14, 2018 at 02:42:41PM -0600, Keith Busch wrote:
> This test will run a background IO process and inject an admin command
> with a very short timeout that is all but guaranteed to expire without
> a completion: the async event request.
>
> Signed-off-by: Keith Busch
On Mon, 2018-05-14 at 11:50 -0700, Max Filippov wrote:
> On Mon, May 14, 2018 at 11:46 AM, Bart Van Assche wrote:
> > The next patch in this series introduces a call to cmpxchg64()
> > in the block layer core for those architectures on which this
> > functionality is
This test will run a background IO process and inject an admin command
with a very short timeout that is all but guaranteed to expire without
a completion: the async event request.
Signed-off-by: Keith Busch
---
v1 -> v2:
Changed description since it's not a test for a
On Mon, 14 May 2018, Johannes Weiner wrote:
> Since I'm using the same model and infrastructure for memory and IO
> load as well, IMO it makes more sense to present them in a coherent
> interface instead of trying to retrofit and change the loadavg file,
> which might not even be possible.
Well
On Mon, May 14, 2018 at 02:02:37PM -0600, Keith Busch wrote:
> This test will run a background IO process and inject an admin command
> with a very short timeout that is all but guaranteed to expire without
> a completion: the async event request.
Thanks, a few comments below.
> Signed-off-by:
This test will run a background IO process and inject an admin command
with a very short timeout that is all but guaranteed to expire without
a completion: the async event request.
Signed-off-by: Keith Busch
---
tests/nvme/005 | 42
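Purely to illustrate the idea (the device paths, the fio job, and the use of nvme-cli's admin-passthru with a short --timeout are my own assumptions, not the contents of the actual tests/nvme/005):

#!/bin/bash
# Hypothetical sketch: background I/O plus an admin command that is
# all but guaranteed to time out. Not the real blktests test.
dev=/dev/nvme0n1        # assumed namespace block device
ctrl=/dev/nvme0         # assumed controller character device

# Background I/O load for the duration of the test.
fio --name=bg --filename="$dev" --rw=randread --ioengine=libaio \
    --iodepth=32 --time_based --runtime=30 &
fio_pid=$!

# Admin command with a 1 ms timeout; opcode 0x0c is Asynchronous Event
# Request, which normally never completes on its own.
nvme admin-passthru "$ctrl" --opcode=0x0c --timeout=1

wait "$fio_pid"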
Thanks!
On Mon, May 14, 2018 at 3:24 PM, Jens Axboe wrote:
> On 5/8/18 7:33 PM, Kent Overstreet wrote:
>> - Add separately allowed mempools, biosets: bcachefs uses both all over the place
>>
>> - Bit of utility code - bio_copy_data_iter(), zero_fill_bio_iter()
>>
>> -
On 5/8/18 7:33 PM, Kent Overstreet wrote:
> - Add separately allowed mempools, biosets: bcachefs uses both all over the place
>
> - Bit of utility code - bio_copy_data_iter(), zero_fill_bio_iter()
>
> - bio_list_copy_data(), the bi_next check - defensiveness because of a bug I had
On Fri, May 11, 2018 at 03:11:45PM -0600, Jens Axboe wrote:
> On 5/8/18 7:33 PM, Kent Overstreet wrote:
> > Allows mempools to be embedded in other structs, getting rid of a
> > pointer indirection from allocation fastpaths.
> >
> > mempool_exit() is safe to call on an uninitialized but zeroed
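For context, a minimal sketch of what mempool_init()/mempool_exit() enable (the struct and helper names below are made up for illustration):

#include <linux/mempool.h>

/* Hypothetical user: the pool lives inside the struct, so the allocation
 * fastpath does not chase an extra pointer. */
struct my_ctx {
	mempool_t buf_pool;
};

static int my_ctx_init(struct my_ctx *ctx)
{
	/* kmalloc-backed pool reserving four 256-byte elements */
	return mempool_init_kmalloc_pool(&ctx->buf_pool, 4, 256);
}

static void my_ctx_exit(struct my_ctx *ctx)
{
	/* per the patch, safe even if the zeroed pool was never initialized */
	mempool_exit(&ctx->buf_pool);
}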
On Mon, May 14, 2018 at 03:39:33PM +, Christopher Lameter wrote:
> On Mon, 7 May 2018, Johannes Weiner wrote:
>
> > What to make of this number? If CPU utilization is at 100% and CPU
> > pressure is 0, it means the system is perfectly utilized, with one
> > runnable thread per CPU and nobody
On Mon, May 14, 2018 at 11:46 AM, Bart Van Assche wrote:
> The next patch in this series introduces a call to cmpxchg64()
> in the block layer core for those architectures on which this
> functionality is available. Make it possible to test whether
> cmpxchg64() is
The next patch in this series introduces a call to cmpxchg64()
in the block layer core for those architectures on which this
functionality is available. Make it possible to test whether
cmpxchg64() is available by introducing CONFIG_ARCH_HAVE_CMPXCHG64.
Signed-off-by: Bart Van Assche
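As a sketch of how code could key off such a symbol (illustrative only; it assumes the option is selected by architectures that implement cmpxchg64() and is not taken from the series):

#include <linux/atomic.h>
#include <linux/types.h>

/* Update a 64-bit slot locklessly where cmpxchg64() exists, otherwise
 * fall back to a path the caller must serialize itself. */
static u64 update_slot(u64 *slot, u64 old, u64 new)
{
#ifdef CONFIG_ARCH_HAVE_CMPXCHG64
	return cmpxchg64(slot, old, new);
#else
	u64 prev = *slot;

	if (prev == old)
		*slot = new;
	return prev;
#endif
}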
Recently the blk-mq timeout handling code was reworked. See also Tejun
Heo, "[PATCHSET v4] blk-mq: reimplement timeout handling", 08 Jan 2018
(https://www.mail-archive.com/linux-block@vger.kernel.org/msg16985.html).
This patch reworks the blk-mq timeout handling code again. The timeout
handling
Hello Jens,
This is the ninth incarnation of the blk-mq timeout handling rework. All
previously posted comments have been addressed. Please consider this patch
series for inclusion in the upstream kernel.
Bart.
Changes compared to v8:
- Split into two patches.
- Moved the spin_lock_init() call
On Mon, 2018-05-14 at 13:15 +0800, jianchao.wang wrote:
> a 32bit deadline is indeed OK to judge whether a request is timeout or not.
> but how is the expires value determined for __blk_add_timer -> mod_timer ?
> as we know, when a request is started, we need to arm a timer for it.
> the expires
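For background on the question, the usual relationship between a request's timeout and the timer expiry looks roughly like this (a simplified, approximate sketch, not the code under discussion):

#include <linux/blkdev.h>
#include <linux/jiffies.h>
#include <linux/timer.h>

static void arm_request_timer(struct request_queue *q, struct request *rq)
{
	/* per-request deadline, in jiffies */
	unsigned long expires = jiffies + rq->timeout;

	/* the queue-wide timer only needs to fire at (or after) the
	 * earliest outstanding deadline */
	if (!timer_pending(&q->timeout) ||
	    time_before(expires, q->timeout.expires))
		mod_timer(&q->timeout, expires);
}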
> On 14 May 2018, at 19:31, Jens Axboe wrote:
>
> On 5/14/18 11:16 AM, Paolo Valente wrote:
>>
>>
>>> On 10 May 2018, at 18:14, Bart Van Assche wrote:
>>>
>>> On Fri, 2018-05-04 at 19:17 +0200, Paolo
On 05/14/18 08:39, Christopher Lameter wrote:
On Mon, 7 May 2018, Johannes Weiner wrote:
What to make of this number? If CPU utilization is at 100% and CPU
pressure is 0, it means the system is perfectly utilized, with one
runnable thread per CPU and nobody waiting. At two or more runnable
On 5/14/18 11:16 AM, Paolo Valente wrote:
>
>
>> On 10 May 2018, at 18:14, Bart Van Assche wrote:
>>
>> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>>> When invoked for an I/O request rq, [ ... ]
>>
>> Tested-by: Bart Van Assche
> On 10 May 2018, at 18:14, Bart Van Assche wrote:
>
> On Fri, 2018-05-04 at 19:17 +0200, Paolo Valente wrote:
>> When invoked for an I/O request rq, [ ... ]
>
> Tested-by: Bart Van Assche
>
>
>
Any decision for this
On 05/09/2018 02:48 AM, Christoph Hellwig wrote:
> After already supporting a simple implementation of buffered writes for
> the blocksize == PAGE_SIZE case in the last commit this adds full support
> even for smaller block sizes. There are three bits of per-block
> information in the
On Mon, 7 May 2018, Johannes Weiner wrote:
> What to make of this number? If CPU utilization is at 100% and CPU
> pressure is 0, it means the system is perfectly utilized, with one
> runnable thread per CPU and nobody waiting. At two or more runnable
> tasks per CPU, the system is 100%
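To make the utilization-vs-pressure distinction concrete (my own illustration, not from the thread):

\[
  \mathrm{utilization} = \frac{t_{\text{CPU busy}}}{t_{\text{wall}}},
  \qquad
  \mathrm{pressure} = \frac{t_{\text{runnable task waiting}}}{t_{\text{wall}}}
\]

With one runnable task per CPU the CPUs are always busy and nobody waits, so utilization is 100% while pressure stays 0; with two or more runnable tasks per CPU every CPU always has a waiter, so pressure also approaches 100%.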
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> Same numerical value (for now at least), but a much better documentation
> of intent.
There is a typo in the subject of this patch: __GFP_NORECLAIM should be
changed into __GFP_RECLAIM. Otherwise:
Reviewed-by: Bart Van Assche
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> We just can't do I/O when doing block layer requests allocations,
> so use GFP_NOIO instead of the even more limited __GFP_DIRECT_RECLAIM.
Reviewed-by: Bart Van Assche
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> blk_old_get_request already has it at hand, and in blk_queue_bio, which
> is the fast path, it is constant.
Reviewed-by: Bart Van Assche
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> Switch everyone to blk_get_request_flags, and then rename
> blk_get_request_flags to blk_get_request.
Reviewed-by: Bart Van Assche
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Hi Ming,
On Sat, May 12, 2018 at 08:21:22AM +0800, Ming Lei wrote:
> > [ 760.679960] nvme nvme1: controller is down; will reset: CSTS=0x, PCI_STATUS=0x
> > [ 760.701468] nvme nvme1: EH 0: after shutdown, top eh: 1
> > [ 760.727099] pci_raw_set_power_state: 62 callbacks
On Wed, 2018-05-09 at 09:54 +0200, Christoph Hellwig wrote:
> Always GFP_KERNEL, and keeping it would cause serious complications for
> the next change.
This patch description is very brief. Shouldn't the description of this patch
mention whether or not any functionality is changed (I think no
On 5/14/18 8:38 AM, Christoph Hellwig wrote:
> Jens, any comments?
Looks good to me.
--
Jens Axboe
Jens, any comments?
On Wed, May 09, 2018 at 09:54:02AM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series sorts out the mess around how we use gfp flags in the
> block layer get_request interface.
>
> Changes since RFC:
> - don't switch to GFP_NOIO for allocations in blk_get_request.
>
Hi Jianchao,
On Mon, May 14, 2018 at 06:05:50PM +0800, jianchao.wang wrote:
> Hi ming
>
> On 05/14/2018 05:38 PM, Ming Lei wrote:
> >> Here is the deadlock scenario.
> >>
> >> nvme_eh_work // EH0
> >> -> nvme_reset_dev //hold reset_lock
> >> -> nvme_setup_io_queues
> >> ->
The config file is bash and it gets sourced, so all bash magic is
doable in there as well. Document this so others don't have to
re-discover this gem.
Signed-off-by: Johannes Thumshirn
---
Documentation/running-tests.md | 12
1 file changed, 12
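As an illustration of what sourcing the config as bash allows (the variable names assume the usual blktests config knobs such as TEST_DEVS and QUICK_RUN; adjust to your setup):

# ./config is sourced by blktests, so ordinary shell logic works here.
TEST_DEVS=(/dev/nvme0n1)

# Only add the second device when it actually exists on this machine.
if [[ -b /dev/nvme1n1 ]]; then
	TEST_DEVS+=(/dev/nvme1n1)
fi

QUICK_RUN=1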
Hi ming
On 05/14/2018 05:38 PM, Ming Lei wrote:
>> Here is the deadlock scenario.
>>
>> nvme_eh_work // EH0
>> -> nvme_reset_dev //hold reset_lock
>> -> nvme_setup_io_queues
>> -> nvme_create_io_queues
>> -> nvme_create_queue
>> -> set nvmeq->cq_vector
>>
On Mon, May 14, 2018 at 04:21:04PM +0800, jianchao.wang wrote:
> Hi ming
>
> Please refer to my test log and analysis.
>
> [ 229.872622] nvme nvme0: I/O 164 QID 1 timeout, reset controller
> [ 229.872649] nvme nvme0: EH 0: before shutdown
> [ 229.872683] nvme nvme0: I/O 165 QID 1 timeout,
On Thu, May 10, 2018 at 09:41:32AM -0400, Johannes Weiner wrote:
> So there is a reason I'm tracking productivity states per-cpu and not
> globally. Consider the following example periods on two CPUs:
>
> CPU 0
> Task 1: | EXECUTING | memstalled |
> Task 2: | runqueued | EXECUTING |
>
>
Hi ming
Please refer to my test log and analysis.
[ 229.872622] nvme nvme0: I/O 164 QID 1 timeout, reset controller
[ 229.872649] nvme nvme0: EH 0: before shutdown
[ 229.872683] nvme nvme0: I/O 165 QID 1 timeout, reset controller
[ 229.872700] nvme nvme0: I/O 166 QID 1 timeout, reset
On Wed, May 9, 2018 at 4:02 PM, Theodore Y. Ts'o wrote:
> On Wed, May 09, 2018 at 10:49:54AM +0200, Dmitry Vyukov wrote:
>> Hi Ted,
>>
>> Did you follow all instructions (commit, config, compiler, etc)?
>> syzbot does not have any special magic, it just executes the described
>>