On Wed, Feb 06, 2019 at 09:21:40AM +0000, John Garry wrote:
> On 05/02/2019 18:23, Christoph Hellwig wrote:
> > On Tue, Feb 05, 2019 at 03:09:28PM +0000, John Garry wrote:
> > > For SCSI devices, unfortunately not all IO sent to the HW originates from
> > > blk-mq or any other single entity.
> >
>
On 05/02/2019 18:23, Christoph Hellwig wrote:
On Tue, Feb 05, 2019 at 03:09:28PM +0000, John Garry wrote:
For SCSI devices, unfortunately not all IO sent to the HW originates from
blk-mq or any other single entity.
Where else would SCSI I/O originate from?
Please note that I was referring to
On Tue, Feb 05, 2019 at 03:09:28PM +0000, John Garry wrote:
> For SCSI devices, unfortunately not all IO sent to the HW originates from
> blk-mq or any other single entity.
Where else would SCSI I/O originate from?
On 05/02/2019 15:15, Hannes Reinecke wrote:
On 2/5/19 4:09 PM, John Garry wrote:
On 05/02/2019 14:52, Keith Busch wrote:
On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
On 04/02/2019 07:12, Hannes Reinecke wrote:
Hi Hannes,
So, as the user then has to wait for the system to dec
On Tue, Feb 05, 2019 at 04:10:47PM +0100, Hannes Reinecke wrote:
> On 2/5/19 3:52 PM, Keith Busch wrote:
> > Whichever layer dispatched the IO to a CPU specific context should
> > be the one to wait for its completion. That should be blk-mq for most
> > block drivers.
> >
> Indeed.
> But we don't
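
A minimal sketch of the mechanism blk-mq already has for waiting on IO it
dispatched: freezing a queue blocks new requests and waits until everything
in flight has completed. blk_mq_freeze_queue()/blk_mq_unfreeze_queue() are
real blk-mq APIs; the wrapper around them is illustrative only.

#include <linux/blk-mq.h>

/*
 * Freezing waits for the queue's usage counter to drain, i.e. for every
 * request blk-mq has dispatched to complete; unfreezing lets IO resume.
 * wait_for_inflight_io() itself is a hypothetical helper.
 */
static void wait_for_inflight_io(struct request_queue *q)
{
        blk_mq_freeze_queue(q);    /* blocks new IO, drains in-flight IO */
        /* safe point: no requests in flight on this queue */
        blk_mq_unfreeze_queue(q);
}
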
On 2/5/19 4:09 PM, John Garry wrote:
On 05/02/2019 14:52, Keith Busch wrote:
On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
On 04/02/2019 07:12, Hannes Reinecke wrote:
Hi Hannes,
So, as the user then has to wait for the system to declare 'ready for
CPU remove', why can't we jus
On Tue, Feb 05, 2019 at 03:09:28PM +0000, John Garry wrote:
> On 05/02/2019 14:52, Keith Busch wrote:
> > On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
> > > On 04/02/2019 07:12, Hannes Reinecke wrote:
> > >
> > > Hi Hannes,
> > >
> > > >
> > > > So, as the user then has to wait fo
On 2/5/19 3:52 PM, Keith Busch wrote:
On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
On 04/02/2019 07:12, Hannes Reinecke wrote:
Hi Hannes,
So, as the user then has to wait for the system to declare 'ready for
CPU remove', why can't we just disable the SQ and wait for all I/O to
On 05/02/2019 14:52, Keith Busch wrote:
On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
On 04/02/2019 07:12, Hannes Reinecke wrote:
Hi Hannes,
So, as the user then has to wait for the system to declare 'ready for
CPU remove', why can't we just disable the SQ and wait for all I/O
On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
> On 04/02/2019 07:12, Hannes Reinecke wrote:
>
> Hi Hannes,
>
> >
> > So, as the user then has to wait for the system to declare 'ready for
> > CPU remove', why can't we just disable the SQ and wait for all I/O to
> > complete?
> > We c
On 04/02/2019 07:12, Hannes Reinecke wrote:
On 2/1/19 10:57 PM, Thomas Gleixner wrote:
On Fri, 1 Feb 2019, Hannes Reinecke wrote:
Thing is, if we have _managed_ CPU hotplug (ie if the hardware provides
some means of quiescing the CPU before hotplug) then the whole thing is
trivial; disable SQ a
On 2/1/19 10:57 PM, Thomas Gleixner wrote:
On Fri, 1 Feb 2019, Hannes Reinecke wrote:
Thing is, if we have _managed_ CPU hotplug (ie if the hardware provides some
means of quiescing the CPU before hotplug) then the whole thing is trivial;
disable SQ and wait for all outstanding commands to compl
On Fri, 1 Feb 2019, Hannes Reinecke wrote:
> Thing is, if we have _managed_ CPU hotplug (ie if the hardware provides some
> means of quiescing the CPU before hotplug) then the whole thing is trivial;
> disable SQ and wait for all outstanding commands to complete.
> Then trivially all requests are c
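
A minimal sketch of the "disable SQ and wait" idea expressed as a CPU-hotplug
teardown callback. cpuhp_setup_state() and CPUHP_AP_ONLINE_DYN are real
hotplug APIs; the mydrv_* queue type and stub functions are hypothetical
placeholders for a driver with one submission queue per CPU.

#include <linux/cpuhotplug.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU submission queue; real fields elided. */
struct mydrv_sq { int dummy; };
static DEFINE_PER_CPU(struct mydrv_sq, mydrv_sqs);

static void mydrv_disable_sq(struct mydrv_sq *sq) { /* stop new submissions */ }
static void mydrv_drain_sq(struct mydrv_sq *sq) { /* wait for outstanding commands */ }

/* Teardown runs while the outgoing CPU is still up, so the queue can be
 * quiesced before its interrupt goes away. */
static int mydrv_cpu_offline(unsigned int cpu)
{
        struct mydrv_sq *sq = &per_cpu(mydrv_sqs, cpu);

        mydrv_disable_sq(sq);   /* 1) no new commands on this queue */
        mydrv_drain_sq(sq);     /* 2) wait for all outstanding completions */
        return 0;
}

static int __init mydrv_hotplug_init(void)
{
        int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
                                    NULL, mydrv_cpu_offline);
        return ret < 0 ? ret : 0;  /* _DYN returns the allocated state number */
}
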
On 1/31/19 6:48 PM, John Garry wrote:
On 30/01/2019 12:43, Thomas Gleixner wrote:
On Wed, 30 Jan 2019, John Garry wrote:
On 29/01/2019 17:20, Keith Busch wrote:
On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
On 29/01/2019 15:44, Keith Busch wrote:
Hm, we used to freeze the queu
On 30/01/2019 12:43, Thomas Gleixner wrote:
On Wed, 30 Jan 2019, John Garry wrote:
On 29/01/2019 17:20, Keith Busch wrote:
On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
On 29/01/2019 15:44, Keith Busch wrote:
Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
On Wed, 30 Jan 2019, John Garry wrote:
> On 29/01/2019 17:20, Keith Busch wrote:
> > On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
> > > On 29/01/2019 15:44, Keith Busch wrote:
> > > >
> > > > Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
> > > > which would re
On 29/01/2019 17:20, Keith Busch wrote:
On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
On 29/01/2019 15:44, Keith Busch wrote:
Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
which would reap all outstanding commands before the CPU and IRQ are
taken offline. T
On 29/01/2019 16:27, Thomas Gleixner wrote:
On Tue, 29 Jan 2019, John Garry wrote:
On 29/01/2019 12:01, Thomas Gleixner wrote:
If the last CPU which is associated to a queue (and the corresponding
interrupt) goes offline, then the subsystem/driver code has to make sure
that:
1) No more reque
On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
> On 29/01/2019 15:44, Keith Busch wrote:
> >
> > Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
> > which would reap all outstanding commands before the CPU and IRQ are
> > taken offline. That was removed with commi
On 29/01/2019 15:44, Keith Busch wrote:
On Tue, Jan 29, 2019 at 03:25:48AM -0800, John Garry wrote:
Hi,
I have a question on $subject which I hope you can shed some light on.
According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
IRQs on CPU hotplug"), if we offline the last CP
On Tue, 29 Jan 2019, John Garry wrote:
> On 29/01/2019 12:01, Thomas Gleixner wrote:
> > If the last CPU which is associated to a queue (and the corresponding
> > interrupt) goes offline, then the subsystem/driver code has to make sure
> > that:
> >
> >1) No more requests can be queued on that
On Tue, Jan 29, 2019 at 03:25:48AM -0800, John Garry wrote:
> Hi,
>
> I have a question on $subject which I hope you can shed some light on.
>
> According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
> IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ
> affinity
Hi Hannes, Thomas,
On 29/01/2019 12:01, Thomas Gleixner wrote:
On Tue, 29 Jan 2019, Hannes Reinecke wrote:
That actually is a very good question, and I have been wondering about this
for quite some time.
I find it a bit hard to envision a scenario where the IRQ affinity is
automatically (and,
On Tue, 29 Jan 2019, Hannes Reinecke wrote:
> That actually is a very good question, and I have been wondering about this
> for quite some time.
>
> I find it a bit hard to envision a scenario where the IRQ affinity is
> automatically (and, more importantly, atomically!) re-routed to one of the
>
On 1/29/19 12:25 PM, John Garry wrote:
Hi,
I have a question on $subject which I hope you can shed some light on.
According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ
affinity mask, the IRQ is shutdown.
The
Hi,
I have a question on $subject which I hope you can shed some light on.
According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ
affinity mask, the IRQ is shutdown.
The reasoning is that this IRQ is thought t
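
For context, a minimal sketch of how a driver ends up with managed IRQs in
the first place. pci_alloc_irq_vectors_affinity() and the PCI_IRQ_* flags
are real APIs; the device and the vector counts are illustrative.

#include <linux/interrupt.h>
#include <linux/pci.h>

/*
 * PCI_IRQ_AFFINITY spreads the allocated vectors across the CPUs and marks
 * them "managed": the core owns their affinity, and when the last CPU in a
 * vector's mask goes offline, genirq shuts that vector down (the behaviour
 * from commit c5cb83bb337c25 discussed above).
 */
static int example_alloc_managed_irqs(struct pci_dev *pdev)
{
        struct irq_affinity affd = { .pre_vectors = 1 };  /* e.g. one admin
                                                             vector, excluded
                                                             from spreading */
        int nvecs;

        nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, 32,
                        PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
        return nvecs < 0 ? nvecs : 0;
}
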