We can rely on the dma-mapping code to handle any DMA limits that are
bigger than the ISA DMA mask for us (either using an iommu or swiotlb),
so remove setting the block layer bounce limit for anything but the
unchecked_isa_dma case, or the bouncing for highmem pages.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/ide/ide-probe.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index 8d8ed036ca0a..56d7bc228cb3 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@
at least for iommu equipped systems we can get rid of the block layer
bounce limit setting entirely.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/ide/ide-dma.c | 2 --
drivers/ide/ide-lib.c | 26 --
drivers/ide/ide-probe.c | 3 ---
include/linux
This was used by the ide, scsi and networking code in the past to
determine if they should bounce payloads. Now that the dma mapping code
always has to support dma to all physical memory (thanks to swiotlb
for non-iommu systems) there is no need for this crude hack any more.
Signed-off-by: Christoph Hellwig <h...@lst.de>
These days the dma mapping routines must be able to handle any address
supported by the device, be that using an iommu, or swiotlb if none is
supported. With that the PCI_DMA_BUS_IS_PHYS check in illegal_highdma
is not needed and can be removed.
Signed-off-by: Christoph Hellwig <h...@lst.de>
This way we have one central definition of it, and users can select it as
needed.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/alpha/Kconfig | 4 +---
arch/arm/Kconfig | 3 ---
arch/arm64/Kconfig | 4 +---
arch/hexagon/Kconfig
selectable.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/arm/Kconfig | 4 +---
arch/arm64/Kconfig | 5 ++---
arch/ia64/Kconfig | 9 +
arch/mips/Kconfig | 3 +++
arch/mips/cavium-octeon/Kconfig | 5 -
arc
swiotlb is only used as a library of helpers for xen-swiotlb if Xen support
is enabled on arm, so don't build it by default.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/arm/Kconfig | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/Kconfig b/ar
swiotlb now selects the DMA_DIRECT_OPS config symbol, so this will
always be true.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
lib/swiotlb.c | 4
1 file changed, 4 deletions(-)
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 47aeb04c1997..07f260319b82 100644
--- a/lib/swi
instead of only doing it when highmem is set.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/alpha/Kconfig | 3 ---
arch/arc/Kconfig | 3 ---
arch/arm/mach-axxia/Kconfig | 1 -
arch/arm/mach-bcm/Kconfig | 1 -
arch/arm/mach-exynos/Kconfig | 1 -
ar
Instead select the PHYS_ADDR_T_64BIT for 32-bit architectures that need a
64-bit phys_addr_t type directly.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/arc/Kconfig | 4 +---
arch/arm/kernel/setup.c | 2 +-
arch/arm/mm/K
This symbol is now always identical to CONFIG_ARCH_DMA_ADDR_T_64BIT, so
remove it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/pci/Kconfig | 4
drivers/pci/bus.c | 4 ++--
include/linux/pci.h | 2 +-
3 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/drive
This function is only used by built-in code.
Reviewed-by: Christoph Hellwig <h...@lst.de>
---
lib/iommu-helper.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/lib/iommu-helper.c b/lib/iommu-helper.c
index 23633c0fda4a..ded1703e7e64 100644
--- a/lib/iommu-helper.c
+++ b/lib/iommu-he
This way we have one central definition of it, and users can select it as
needed. Note that we now also always select it when CONFIG_DMA_API_DEBUG
is selected, which fixes some incorrect checks in a few network drivers.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/alpha/K
This way we have one central definition of it, and users can select it as
needed.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/powerpc/Kconfig | 4 +---
arch/s390/Kconfig | 5 ++---
arch/sparc/Kconfig | 5 +
arch/x86/Kconfig | 6 ++
lib/Kconfig | 3
This avoids selecting IOMMU_HELPER just for this function. And we only
use it once or twice in normal builds, so this is often even a size
reduction.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
arch/alpha/Kconfig | 3 ---
arch/arm/Kconfig | 3 ---
arch
This code is only used by sparc, and all new iommu drivers should use the
drivers/iommu/ framework. Also remove the unused exports.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
{include/linux => arch/sparc/include/asm}/iommu-common.h | 0
arch/sparc/include/asm/i
Hi all,
this series aims for a single definition of the Kconfig symbol. To get
there, various cleanups, mostly about config symbols, are included as well.
The patch looks fine, but in general I think descriptions of what
you fixed in the code are more important than starting out with
a backtrace.
E.g. please explain what was wrong and how you fixed it, and only after
that mention how it was caught (preferably without the whole trace).
Looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
In addition to all the arguments in the changelog, the diffstat is
a pretty clear indicator that a straightforward state machine is
exactly what we want.
On Wed, Apr 11, 2018 at 10:11:05AM +0800, Ming Lei wrote:
> On Tue, Apr 10, 2018 at 03:01:57PM -0600, Bart Van Assche wrote:
> > The blk-mq timeout handling code ignores completions that occur after
> > blk_mq_check_expired() has been called and before blk_mq_rq_timed_out()
> > has reset
On Tue, Apr 10, 2018 at 05:02:40PM -0600, Bart Van Assche wrote:
> Because blkcg_exit_queue() is now called from inside blk_cleanup_queue()
> it is no longer safe to access cgroup information during or after the
> blk_cleanup_queue() call. Hence protect the generic_make_request_checks()
> call
On Wed, Apr 11, 2018 at 07:58:52PM -0600, Bart Van Assche wrote:
> Several block drivers call alloc_disk() followed by put_disk() if
> something fails before device_add_disk() is called without calling
> blk_cleanup_queue(). Make sure that also for this scenario a request
> queue is dissociated
On Wed, Apr 11, 2018 at 04:19:18PM +0300, Sagi Grimberg wrote:
>
>> static void __blk_mq_requeue_request(struct request *rq)
>> {
>> struct request_queue *q = rq->q;
>> +enum mq_rq_state old_state = blk_mq_rq_state(rq);
>> blk_mq_put_driver_tag(rq);
>>
On Mon, Apr 09, 2018 at 06:34:55PM -0700, Bart Van Assche wrote:
> If a completion occurs after blk_mq_rq_timed_out() has reset
> rq->aborted_gstate and the request is again in flight when the timeout
> expires then a request will be completed twice: a first time by the
> timeout handler and a
On Mon, Apr 09, 2018 at 09:52:03AM -0700, Matthew Wilcox wrote:
> On Mon, Apr 09, 2018 at 05:39:16PM +0200, Christoph Hellwig wrote:
> > blk_get_request is used for pass-through style I/O and thus doesn't need
> > GFP_NOIO.
>
> Obviously GFP_KERNEL is a big improvement over
On Mon, Apr 09, 2018 at 09:03:54AM -0700, Matthew Wilcox wrote:
> > @@ -499,7 +499,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct
> > gendisk *disk, fmode_t mode,
> > break;
> > }
> >
> > - if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_RECLAIM)) {
> > + if
blk_get_request is used for pass-through style I/O and thus doesn't need
GFP_NOIO.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 432923751551..253a869558f9
We just can't do I/O when doing block layer requests allocations,
so use GFP_NOIO instead of the even more limited __GFP_DIRECT_RECLAIM.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/block/blk-co
blk_old_get_request already has it at hand, and in blk_queue_bio, which
is the fast path, it is constant.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 14 +++---
drivers/scsi/scsi_error.c | 4
2 files changed, 7 insertions(+), 11 deletions(-)
Switch everyone to blk_get_request_flags, and then rename
blk_get_request_flags to blk_get_request.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 14 +++---
block/bsg.c | 5 ++---
block/scsi_ioctl.c
Same numerical value (for now at least), but a much better documentation
of intent.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/scsi_ioctl.c | 2 +-
drivers/block/drbd/drbd_bitmap.c | 3 ++-
drivers/block/pktcdvd.c | 2 +-
drivers/ide/ide-
Always GFP_KERNEL, and keeping it would cause serious complications for
the next change.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/osd/osd_initiator.c | 24 +++-
fs/exofs/ore.c | 10 +-
fs/exofs/super.c
Hi all,
this series sorts out the mess around how we use gfp flags in the
block layer get_request interface.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index abcb8684ba67..abde22c755ab 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1517,7 +1517,7 @@ static
and now we map
> all possible CPUs to hw queues, so at least one CPU is mapped to each hctx.
>
> So queue mapping has become static and fixed just like a percpu variable, and
> we don't need to handle queue remapping any more.
Looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Apr 08, 2018 at 05:48:13PM +0800, Ming Lei wrote:
> Now the actual meaning of queue mapped is that if there is any online
> CPU mapped to this hctx, so implement blk_mq_hw_queue_mapped() in this
> way.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Apr 08, 2018 at 05:48:11PM +0800, Ming Lei wrote:
> No driver uses this interface any more, so remove it.
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Apr 08, 2018 at 05:48:10PM +0800, Ming Lei wrote:
> This patch introduces helper of blk_mq_hw_queue_first_cpu() for
> figuring out the hctx's first cpu, and code duplication can be
> avoided.
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sun, Apr 08, 2018 at 05:48:09PM +0800, Ming Lei wrote:
> This patch figures out the final selected CPU, then writes
> it to hctx->next_cpu once, so we can avoid an intermediate
> next cpu being observed from other dispatch paths.
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
x 0, then hctx 0 may become the bottleneck of IO dispatch and
> completion.
>
> This patch sets up the mapping from the beginning, and aligns to
> queue mapping for PCI device (blk_mq_pci_map_queues()).
Please kill the now pointless cpu_to_queue_index function.
Otherwise looks good:
R
this issue by making hctx->next_cpu point to the
> first CPU in hctx->cpumask if all CPUs in hctx->cpumask are offline.
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
This looks sensible, but I'm worried about taking a whole spinlock
for every request completion, including irq disabling. However it seems
like your new updated pattern would fit use of cmpxchg() very nicely.
I really can't make sense of that report. And I'm also curious why
you think 17cb960f29c2 should change anything for that code path.
On Fri, Apr 06, 2018 at 01:09:08PM -0400, Douglas Gilbert wrote:
> So you found a document that outlines NVMe's architecture! Could you
> share the url (no marketing BS, please)?
You can always take a look at the actual spec:
On Mon, Apr 09, 2018 at 08:53:49AM +0200, Hannes Reinecke wrote:
> Why don't you fold the 'flags' argument into the 'gfp_flags', and drop
> the 'flags' argument completely?
> Looks a bit pointless to me, having two arguments denoting basically
> the same ...
Wrong way around. gfp_flags doesn't
On Fri, Apr 06, 2018 at 08:24:18AM +0200, Hannes Reinecke wrote:
> Ah. Far better.
> What about delegating FORMAT UNIT to the control LUN, and not
> implementing it for the individual disk LUNs?
> That would make an even stronger case for having a control LUN;
> with that there wouldn't be any
On Sat, Mar 31, 2018 at 01:03:46PM +0200, Hannes Reinecke wrote:
> Actually I would propose to have a 'management' LUN at LUN0, who could
> handle all the device-wide commands (eg things like START STOP UNIT,
> firmware update, or even SMART commands), and ignoring them for the
> remaining LUNs.
I really don't want more lightnvm cruft in the core. We'll need
a proper abstraction.
On Fri, Mar 23, 2018 at 12:00:08PM +0100, Matias Bjørling wrote:
> On 02/05/2018 01:15 PM, Matias Bjørling wrote:
> > The nvme driver sets up the size of the nvme namespace in two steps.
> > First it
gs)
> #define blk_queue_preempt_only(q)\
> test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
> +#define blk_queue_fua(q) test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
Separate patch, please.
Otherwise looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
> +#define IOMAP_DIO_WRITE_SYNC (1 << 29)
Actually based on the next patch can we rename this to something like:
IOMAP_DIO_NEED_SYNC? That makes the usage a little more clear.
struct iomap_dio, aio.work);
iocb->ki_complete(dio->iocb, iomap_dio_complete(dio), 0);
}
Otherwise looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
> + /*
> + * capture amount written on completion as we can't reliably account
> + * for it on submission
Capitalize the first c, and add a '.' at the end of the sentence, please.
Otherwise looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
into:
- return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), 0);
dev->num_vecs > 1 ? 1 /* admin queue */ : 0);
no functional change, but much easier to understand.
Except for that the whole series looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
> +static inline unsigned int nvme_ioq_vector(struct nvme_dev *dev,
> + unsigned int qid)
No need for the inline here I think.
> +{
> + /*
> + * A queue's vector matches the queue identifier unless the controller
> + * has only one vector available.
> + */
> +
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Fri, Mar 23, 2018 at 04:19:21PM -0600, Keith Busch wrote:
> The PCI interrupt vectors intended to be associated with a queue may
> not start at 0. This patch adds an offset parameter so blk-mq may find
> the intended affinity mask. The default value is 0 so existing drivers
> that don't care
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
> + const char *page, size_t count)
> +{
> + struct nvmet_port *port = to_nvmet_port(item);
> + struct device *dev;
> + struct pci_dev *p2p_dev = NULL;
> + bool use_p2pmem;
> +
> + switch (page[0]) {
> + case 'y':
> + case 'Y':
> + case
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Mar 19, 2018 at 07:36:53PM +0100, Jonas Rabenstein wrote:
> Check whether the shadow mbr does fit in the provided space on the
> target. Also, while a proper firmware should handle this case and return an
> error, we may prevent problems or even damage with crappy firmwares.
>
> Signed-off-by:
On Mon, Mar 19, 2018 at 07:36:52PM +0100, Jonas Rabenstein wrote:
> Every opal-sed table is described in the OPAL_TABLE_TABLE. Provide a
> function to get desired metadata information out of that table.
Your new function doesn't seem to be used at all.
Looks fine:
Reviewed-by: Christoph Hellwig <h...@lst.de>
a pending error?
Except for that this looks fine:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Mar 19, 2018 at 07:36:47PM +0100, Jonas Rabenstein wrote:
> Add function address (and if available its symbol) to the message if a
> step function fails.
Looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
e
> other functions.
Should probably be one patch for each of the two separate changes.
Except for that this looks fine to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Mar 19, 2018 at 07:36:45PM +0100, Jonas Rabenstein wrote:
> Every step starts with resetting the cmd buffer as well as the comid and
> constructs the appropriate OPAL_CALL command. Consequently, those
> actions may be combined into one generic function. One should take care,
> that the
onas Rabenstein <jonas.rabenst...@studium.uni-erlangen.de>
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Mar 19, 2018 at 07:36:42PM +0100, Jonas Rabenstein wrote:
> Hi,
> I was advised to resend the patchset as a v2 where all the patches are
> in a flat hierarchy. So here is a complete set which hopefully pleases
> all requirements.
> As the previous fixes have by now all landed into
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Mar 19, 2018 at 07:36:50PM +0100, Jonas Rabenstein wrote:
> Allow modification of the shadow mbr. If the shadow mbr is not marked as
> done, this data will be presented read only as the device content. Only
> after marking the shadow mbr as done and unlocking a locking range the
> actual
On Fri, Mar 16, 2018 at 12:11:16AM +0800, Coly Li wrote:
> On 15/03/2018 11:08 PM, Bart Van Assche wrote:
> > Signed-off-by: Bart Van Assche
>
> Hi Bart,
>
> Could you please to add at least one line commit log ? Thanks in advance
> for this :-)
Hi subject line
except csum have
> CPU endianness.
> - struct cache_sb_le in which all integer members except csum are
> declared as little endian.
Can you call this cache_sb_disk to name it after the purpose instead
of the implementation?
Except for that this looks like the right fix:
Reviewed-by:
On Thu, Mar 15, 2018 at 08:08:12AM -0700, Bart Van Assche wrote:
> +#define csum_set(i) ({ \
> + const void *p = (void *)(i) + sizeof(uint64_t); \
> + const void *q = bset_bkey_last(i); \
> +
On Thu, Mar 15, 2018 at 11:42:25AM +0100, Arnd Bergmann wrote:
> Is anyone producing a chip that includes enough of the Privileged ISA spec
> to have things like system calls, but not the MMU parts?
Various SiFive SOCs seem to support M and U mode, but no S mode or
iommu. That should be enough
> +static void swap_cache_sb_from_cpu(struct cache_sb *sb,
> +struct cache_sb *out)
> +{
> + int i;
> +
> + out->offset = cpu_to_le64(sb->offset);
> + out->flags = cpu_to_le64(sb->flags);
> + out->seq=
On Thu, Mar 15, 2018 at 09:04:24AM +0100, Ondrej Zary wrote:
> On Thursday 15 March 2018, Christoph Hellwig wrote:
> > The paride drivers are some of the cruftiest, grottiest block drivers
> > (besides drivers/ide and floppy.c) and have seen one single targeted
> > commit
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
rtitions index")
Reported-by: Jiufei Xue <jiufei@linux.alibaba.com>
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 93
1 file changed, 40 insertions(+), 53 deletions(-)
diff --git a/block/blk-core.c
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
I still don't like the code duplication, but I guess I can fix this
up in one of the next merge windows myself.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Same as for hpsa..
Reviewed-by: Christoph Hellwig <h...@lst.de>
scsi_request anymore.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Reviewed-by: Johannes Thumshirn <jthumsh...@suse.de>
---
block/bsg-lib.c | 158 +++
block/bsg.c
Users of the bsg-lib interface should only use the bsg_job data structure
and not know about implementation details of it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Benjamin Block <bbl...@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Rev
The zfcp driver wants to know the timeout for a bsg job, so add a field
to struct bsg_job for it in preparation of not exposing the request
to the bsg-lib users.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Benjamin Block <bbl...@linux.vnet.ibm.com>
Reviewed-by: Hannes
Hi all,
this series cleans up various abuses of the bsg interfaces, and then
splits bsg for SCSI passthrough from bsg for arbitrary transport
passthrough. This removes the scsi_request abuse in bsg-lib that is
very confusing, and also makes sure we can sanity check the requests
we get. The
rtitions index")
Reported-by: Jiufei Xue <jiufei@linux.alibaba.com>
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c | 99
1 file changed, 43 insertions(+), 56 deletions(-)
diff --git a/block/blk-core.c
On Thu, Mar 08, 2018 at 02:09:23PM -0700, Jens Axboe wrote:
> On Thu, Mar 08 2018, Christoph Hellwig wrote:
> > bio_check_eod() should check partition size not the whole disk if
> > bio->bi_partno is non-zero.
> >
> > Based on an earlier patch from Jiufei Xue.
>
&
On Sat, Mar 10, 2018 at 11:01:43PM +0800, Ming Lei wrote:
> > I really dislike this being open coded in drivers. It really should
> > be a helper shared with the blk-mq map building that drivers just use.
> >
> > For now just have a low-level blk_pci_map_queues that
> > blk_mq_pci_map_queues, hpsa
This looks generally fine to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
As a follow on we should probably kill virtscsi_queuecommand_single and
thus virtscsi_host_template_single as well.
> Given storage IO is always C/S model, there isn't such issue with
> SCSI_MQ(blk-mq),
W
i_host_template so that drivers
> can provide blk-mq only support, so driver code can avoid the trouble
> for supporting both.
>
> Cc: Omar Sandoval <osan...@fb.com>,
> Cc: "Martin K. Petersen" <martin.peter...@oracle.com>,
> Cc: James Bottomley <james.bottom...
> +static void hpsa_setup_reply_map(struct ctlr_info *h)
> +{
> + const struct cpumask *mask;
> + unsigned int queue, cpu;
> +
> + for (queue = 0; queue < h->msix_vectors; queue++) {
> + mask = pci_irq_get_affinity(h->pdev, queue);
> + if (!mask)
> +
On Thu, Mar 08, 2018 at 04:17:19PM +0800, Jiufei Xue wrote:
> Hi Christoph,
>
> On 2018/3/8 3:46 PM, Christoph Hellwig wrote:
> > bio_check_eod() should check partition size not the whole disk if
> > bio->bi_partno is non-zero.
> >
> I think the check should be
> + /* 256 tags should be high enough to saturate device */
> + int max_queues = DIV_ROUND_UP(h->scsi_host->can_queue, 256);
> +
> + /* per NUMA node hw queue */
> + h->scsi_host->nr_hw_queues = min_t(int, nr_node_ids, max_queues);
I don't think this magic should be in a driver.
On Tue, Feb 27, 2018 at 06:07:46PM +0800, Ming Lei wrote:
> This patch can support to partition host-wide tags to multiple hw queues,
> so each hw queue related data structures(tags, hctx) can be accessed in
> NUMA locality way, for example, the hw queue can be per NUMA node.
>
> It is observed
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
> +static void hpsa_setup_reply_map(struct ctlr_info *h)
> +{
> + const struct cpumask *mask;
> + unsigned int queue, cpu;
> +
> + for (queue = 0; queue < h->msix_vectors; queue++) {
> + mask = pci_irq_get_affinity(h->pdev, queue);
> + if (!mask)
> +