Instead of using UTS_RELEASE, use init_utsname()->release, which means that
we don't need to rebuild the code just for the git head commit changing.
Signed-off-by: John Garry
---
I originally sent an RFC using new string uts_release, but that
string is not needed as we can use init
On 08/02/2024 10:08, John Garry wrote:
On 05/02/2024 23:10, Masahiro Yamada wrote:
I think what you can contribute are:
- Explore the UTS_RELEASE users, and check if you can get rid of it.
Unfortunately I expect resistance to this. I also expect that for places like the FW
loader it is necessary. And
On 05/02/2024 23:10, Masahiro Yamada wrote:
I think what you can contribute are:
- Explore the UTS_RELEASE users, and check if you can get rid of it.
Unfortunately I expect resistance to this. I also expect that for places like the FW
loader it is necessary. And when this is used in sysfs, people will s
On 02/02/2024 15:01, Masahiro Yamada wrote:
--
2.35.3
As you see, several drivers store UTS_RELEASE in their driver data,
and even print it in debug print.
I do not see why it is useful.
I would tend to agree, and mentioned that earlier.
As you discussed in 3/4, if UTS_RELEASE is unneeded
On 01/02/2024 16:09, Jakub Kicinski wrote:
On Thu, 1 Feb 2024 14:20:23 +0100 Jiri Pirko wrote:
BTW, I assume that changes like this are also ok:
8<-
net: team: Don't bother filling in ethtool driver version
Yup, just to be clear - you can send this independently from the s
On 31/01/2024 19:24, Jakub Kicinski wrote:
On Wed, 31 Jan 2024 10:48:50 +0000 John Garry wrote:
Instead of using UTS_RELEASE, use uts_release, which means that we don't
need to rebuild the code just for the git head commit changing.
Signed-off-by: John Garry
Yes, please!
Acked-by:
On 31/01/2024 16:22, Greg KH wrote:
before:
real	0m53.591s
user	1m1.842s
sys	0m9.161s
after:
real	0m37.481s
user	0m46.461s
sys	0m7.199s
Sending as an RFC as I need to test more of the conversions and I would
like to also convert more UTS_RELEASE users to prove this is proper
Add a char [] for UTS_RELEASE so that we don't need to rebuild code which
references UTS_RELEASE.
Signed-off-by: John Garry
---
include/linux/utsname.h | 1 +
init/version.c | 3 +++
2 files changed, 4 insertions(+)
diff --git a/include/linux/utsname.h b/include/linux/utsname.h
EASE users to prove this is proper
approach.
John Garry (4):
init: Add uts_release
tracing: Use uts_release
net: ethtool: Use uts_release
firmware_loader: Use uts_release
drivers/base/firmware_loader/main.c | 39 +++--
include/linux/utsname.h |
Instead of using UTS_RELEASE, use uts_release, which means that we don't
need to rebuild the code just for the git head commit changing.
Since UTS_RELEASE was used for fw_path and this points to const data,
append uts_release dynamically to an intermediate string.
Signed-off-by: John
Instead of using UTS_RELEASE, use uts_release, which means that we don't
need to rebuild the code just for the git head commit changing.
Signed-off-by: John Garry
---
net/ethtool/ioctl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ethtool/ioctl.c b/net/et
Instead of using UTS_RELEASE, use uts_release, which means that we don't
need to rebuild the code just for the git head commit changing.
Signed-off-by: John Garry
---
kernel/trace/trace.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/
means count when bigger than
threshold, and 1 means smaller.
Signed-off-by: Qi Liu
Some minor items and nits with coding style below, but generally looks ok:
Reviewed-by: John Garry
---
MAINTAINERS | 6 +
drivers/perf/Kconfig
On 20/04/2021 01:02, Ming Lei wrote:
On Tue, Apr 20, 2021 at 12:06:24AM +0800, John Garry wrote:
Function sdev_store_queue_depth() enforces that the sdev queue depth cannot
exceed shost.can_queue.
However, the LLDD may still set cmd_per_lun > can_queue, which leads to an
initial sdev queue.
Signed-off-by: John Garry
---
Topic originally discussed at:
https://lore.kernel.org/linux-scsi/85dec8eb-8eab-c7d6-b0fb-5622747c5...@interlog.com/T/#m5663d0cac657d843b93d0c9a2374f98fc04384b9
Last idea there was to error/warn in scsi_add_host() for cmd_per_lun >
can_queue. However, such a
On 26/03/2021 06:24, Zhen Lei wrote:
There are several spelling mistakes, as follows:
funcions ==> functions
distiguish ==> distinguish
detroyed ==> destroyed
Signed-off-by: Zhen Lei
I think that there should be a /s/appropriatley/appropriately/ in iommu.c
Thanks,
john
On 06/04/2021 17:54, John Garry wrote:
Hi Robin,
Sorry if the phrasing was unclear there - the allusion to default
domains is new, it just occurred to me that what we do there is in
fact fairly close to what I've suggested previously for this. In that
case, we have a global policy s
On 13/04/2021 10:12, liuqi (BA) wrote:
I do wonder why we even need maintain pcie_pmu->cpumask
Can't we just use cpu_online_mask as appropriate instead?
?
Sorry, missed it yesterday.
It seems that cpumask is always the same as cpu_online_mask, so do we need
to reserve the cpumask sysfs interface
On 08/04/2021 13:06, Jiri Olsa wrote:
perf stat --topdown is not supported, as this requires the CPU PMU to
expose (alias) events for the TopDown L1 metrics from sysfs, which arm
does not do. To get that to work, we probably need to make perf use the
pmu-events cpumap to learn about those alias e
On 12/04/2021 14:34, liuqi (BA) wrote:
Hi John,
Thanks for reviewing this.
On 2021/4/9 18:22, John Garry wrote:
On 09/04/2021 10:05, Qi Liu wrote:
PCIe PMU Root Complex Integrated End Point(RCiEP) device is supported
to sample bandwidth, latency, buffer occupation etc.
Each PMU RCiEP device
On 09/04/2021 10:05, Qi Liu wrote:
PCIe PMU Root Complex Integrated End Point(RCiEP) device is supported
to sample bandwidth, latency, buffer occupation etc.
Each PMU RCiEP device monitors multiple Root Ports, and each RCiEP is
registered as a PMU in /sys/bus/event_source/devices, so users can
s
On 08/04/2021 10:01, Jonathan Cameron wrote:
On Wed, 7 Apr 2021 21:40:05 +0100
Will Deacon wrote:
On Wed, Apr 07, 2021 at 05:49:02PM +0800, Qi Liu wrote:
PCIe PMU Root Complex Integrated End Point(RCiEP) device is supported
to sample bandwidth, latency, buffer occupation etc.
Each PMU RCiEP
Add L3 metrics.
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
.../arch/arm64/hisilicon/hip08/metrics.json | 161 ++
1 file changed, 161 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64
s arm64-specific function
- Fix metric reuse for pmu-events parse metric testcase
John Garry (6):
perf metricgroup: Make find_metric() public with name change
perf test: Handle metric reuse in pmu-events parsing test
perf pmu: Add pmu_events_map__find()
perf vendor events arm64: Add Hisi
Add L2 metrics.
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
.../arch/arm64/hisilicon/hip08/metrics.json | 42 +++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64
Add L1 metrics. Formula is as consistent as possible with the man pages
description for these metrics.
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
.../arch/arm64/hisilicon/hip08/metrics.json | 30 +++
1 file changed, 30 insertions(+)
create mode 100644
tools/perf/pmu
ed-by: Paul A. Clarke
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
tools/perf/arch/arm64/util/Build | 1 +
tools/perf/arch/arm64/util/pmu.c | 25 +
tools/perf/tests/pmu-events.c | 2 +-
tools/perf/util/metricgroup.c | 7 +++
tools/perf/util/
Function find_metric() is required for the metric processing in the
pmu-events testcase, so make it public. Also change the name to include
"metricgroup".
Tested-by: Paul A. Clarke
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
tools/perf/util/metricgroup.c | 5 +++--
tools
The pmu-events parsing test does not handle metric reuse at all.
Introduce some simple handling to resolve metrics who reference other
metrics.
Tested-by: Paul A. Clarke
Signed-off-by: John Garry
Reviewed-by: Kajol Jain
---
tools/perf/tests/pmu-events.c | 81
On 07/04/2021 09:04, Joerg Roedel wrote:
On Mon, Mar 01, 2021 at 08:12:18PM +0800, John Garry wrote:
The Intel IOMMU driver supports flushing the per-CPU rcaches when a CPU is
offlined.
Let's move it to core code, so everyone can take advantage.
Also correct a code comment.
Based on
So then we have the issue of how to dynamically increase this rcache
threshold. The problem is that we may have many devices associated with
the same domain. So, in theory, we can't assume that when we increase
the threshold that some other device will try to fast free an IOVA
which was allocate
On 06/04/2021 17:40, Rafael J. Wysocki wrote:
On Tue, Apr 6, 2021 at 5:51 PM John Garry wrote:
Hi guys,
On next-20210406, I enabled CONFIG_DEBUG_KMEMLEAK and
CONFIG_DEBUG_TEST_DRIVER_REMOVE for my arm64 system, and see this:
Hi Rafael,
Why exactly do you think that
Hi guys,
On next-20210406, I enabled CONFIG_DEBUG_KMEMLEAK and
CONFIG_DEBUG_TEST_DRIVER_REMOVE for my arm64 system, and see this:
root@debian:/home/john# more /sys/kernel/debug/kmemleak
unreferenced object 0x202803c11f00 (size 128):
comm "swapper/0", pid 1, jiffies 4294894325 (age 337.524s)
On 06/04/2021 14:34, Jiri Olsa wrote:
}
So once we evaluate a pmu_event in pctx->ids in @pe, @all is set false, and
we would loop again in the do-while loop, regardless of what
expr__find_other() does (apart from erroring), and so call
hashmap__for_each_entry_safe(&pctx->ids, ) again.
ah ok, s
On 06/04/2021 13:55, Jiri Olsa wrote:
So expr__find_other() may add a new item to pctx->ids, and we always iterate
again, and try to lookup any pmu_events, *, above. If none exist, then we
hm, I don't see that.. so, what you do is:
hashmap__for_each_entry_safe((&pctx->ids) ) {
On 06/04/2021 13:17, Jiri Olsa wrote:
+ ref = &metric->metric_ref;
+ ref->metric_name = pe->metric_name;
+ ref->metric_expr = pe->metric_expr;
+ list_add_tail(&metric->list, compound_list);
+
+
On 30/03/2021 07:41, kajoljain wrote:
On 3/30/21 2:37 AM, Paul A. Clarke wrote:
On Fri, Mar 26, 2021 at 10:57:40AM +0000, John Garry wrote:
On 25/03/2021 20:39, Paul A. Clarke wrote:
On Thu, Mar 25, 2021 at 06:33:12PM +0800, John Garry wrote:
Metric reuse support is added for pmu-events
On 01/04/2021 14:49, Jiri Olsa wrote:
On Thu, Mar 25, 2021 at 06:33:14PM +0800, John Garry wrote:
SNIP
+struct metric {
+ struct list_head list;
+ struct metric_ref metric_ref;
+};
+
+static int resolve_metric_simple(struct expr_parse_ctx *pctx
On 02/04/2021 00:16, Ian Rogers wrote:
On Thu, Mar 25, 2021 at 3:38 AM John Garry wrote:
Function find_metric() is required for the metric processing in the
pmu-events testcase, so make it public. Also change the name to include
"metricgroup".
Would it make mor
On 03/02/2021 17:23, Marc Zyngier wrote:
On 2021-02-02 15:46, John Garry wrote:
On 02/02/2021 14:48, Marc Zyngier wrote:
Not sure. I also now notice an error for the SAS PCI driver on D06
when nr_cpus < 16, which means number of MSI vectors allocated <
32, so looks the same problem.
On 25/03/2021 17:53, Will Deacon wrote:
On Thu, Mar 25, 2021 at 08:29:57PM +0800, John Garry wrote:
The Intel IOMMU driver supports flushing the per-CPU rcaches when a CPU is
offlined.
Let's move it to core code, so everyone can take advantage.
Also throw in a patch to stop expo
On 25/03/2021 20:39, Paul A. Clarke wrote:
On Thu, Mar 25, 2021 at 06:33:12PM +0800, John Garry wrote:
Metric reuse support is added for pmu-events parse metric testcase.
This had been broken on power9 recently:
https://lore.kernel.org/lkml/20210324015418.gc8...@li-24c3614c-2adc-11b2-a85c
Now that the core code handles flushing per-IOVA domain CPU rcaches,
remove the handling here.
Reviewed-by: Lu Baolu
Signed-off-by: John Garry
---
drivers/iommu/intel/iommu.c | 31 ---
include/linux/cpuhotplug.h | 1 -
2 files changed, 32 deletions(-)
diff --git
Function free_iova_fast() is only referenced by dma-iommu.c, which can
only be in-built, so stop exporting it.
This was missed in an earlier tidy-up patch.
Signed-off-by: John Garry
---
drivers/iommu/iova.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu
Function iommu_dma_free_cpu_cached_iovas() no longer has any caller, so
delete it.
With that, function free_cpu_cached_iovas() may be made static.
Signed-off-by: John Garry
---
drivers/iommu/dma-iommu.c | 9 -
drivers/iommu/iova.c | 3 ++-
include/linux/dma-iommu.h | 8
Like the Intel IOMMU driver already does, flush the per-IOVA domain
CPU rcache when a CPU goes offline - there's no point in keeping it.
Reviewed-by: Robin Murphy
Signed-off-by: John Garry
---
drivers/iommu/iova.c | 30 +-
include/linux/cpuhotplug.h
_fast()
- Drop patch to correct comment
- Add patch to delete iommu_dma_free_cpu_cached_iovas() and associated
changes
John Garry (4):
iova: Add CPU hotplug handler to flush rcaches
iommu/vt-d: Remove IOVA domain rcache flushing for CPU offlining
iommu: Delete iommu_dma_free_cpu_cached_
The pmu-events parsing test does not handle metric reuse at all.
Introduce some simple handling to resolve metrics who reference other
metrics.
Signed-off-by: John Garry
---
tools/perf/tests/pmu-events.c | 80 +++
1 file changed, 80 insertions(+)
diff --git a
l/20210324015418.gc8...@li-24c3614c-2adc-11b2-a85c-85f334518bdb.ibm.com/
Differences to v1:
- Add pmu_events_map__find() as arm64-specific function
- Fix metric reuse for pmu-events parse metric testcase
John Garry (6):
perf metricgroup: Make find_metric() public with name change
perf tes
Add L1 metrics. Formula is as consistent as possible with the man pages
description for these metrics.
Signed-off-by: John Garry
---
.../arch/arm64/hisilicon/hip08/metrics.json | 30 +++
1 file changed, 30 insertions(+)
create mode 100644
tools/perf/pmu-events/arch/arm64
d-off-by: John Garry
---
tools/perf/arch/arm64/util/Build | 1 +
tools/perf/arch/arm64/util/pmu.c | 25 +
tools/perf/tests/pmu-events.c | 2 +-
tools/perf/util/metricgroup.c | 7 +++
tools/perf/util/pmu.c | 5 +
tools/perf/util/
Add L2 metrics.
Signed-off-by: John Garry
---
.../arch/arm64/hisilicon/hip08/metrics.json | 42 +++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
Add L3 metrics.
Signed-off-by: John Garry
---
.../arch/arm64/hisilicon/hip08/metrics.json | 161 ++
1 file changed, 161 insertions(+)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
Function find_metric() is required for the metric processing in the
pmu-events testcase, so make it public. Also change the name to include
"metricgroup".
Signed-off-by: John Garry
---
tools/perf/util/metricgroup.c | 5 +++--
tools/perf/util/metricgroup.h | 3 ++-
2 files changed, 5
On 24/03/2021 01:54, Paul A. Clarke wrote:
--
Since commit 8989f5f07605 ("perf stat: Update POWER9 metrics to utilize
other metrics"), power9 has reused metrics.
And I am finding that subtest 10.3 caused problems when I tried to introduce
metric reuse on arm64, so I was just asking you to check
On 23/03/2021 15:06, Paul A. Clarke wrote:
On Mon, Mar 22, 2021 at 11:36:23AM +0000, John Garry wrote:
On 01/08/2020 12:40, Paul A. Clarke wrote:
v4 changes:
- removed acks from patch because it changed a bit
with the last fixes:
perf metric: Collect referenced metrics in
On 23/03/2021 13:05, Robin Murphy wrote:
On 2021-03-01 12:12, John Garry wrote:
Function free_cpu_cached_iovas() is not only called when a CPU is
hotplugged, so remove that part of the code comment.
FWIW I read it as clarifying why this is broken out into a separate
function vs. a monolithic
On 01/03/2021 12:12, John Garry wrote:
The Intel IOMMU driver supports flushing the per-CPU rcaches when a CPU is
offlined.
Let's move it to core code, so everyone can take advantage.
Also correct a code comment.
Based on v5.12-rc1. Tested on arm64 only.
Hi guys,
Friendly rem
There's apparently a bit in the PCI spec that reads:
The host bus bridge, in PC compatible systems, must return all
1's on a read transaction and discard data on a write transaction
when terminated with Master-Abort.
which obviously applies only to "PC compatible syste
On 19/03/2021 19:20, Robin Murphy wrote:
Hi Robin,
So then we have the issue of how to dynamically increase this rcache
threshold. The problem is that we may have many devices associated with
the same domain. So, in theory, we can't assume that when we increase
the threshold that some other dev
On 01/08/2020 12:40, Paul A. Clarke wrote:
v4 changes:
- removed acks from patch because it changed a bit
with the last fixes:
perf metric: Collect referenced metrics in struct metric_ref_node
- fixed runtime metrics [Kajol Jain]
- increased recursion depth [Paul A. Clarke]
um_scatter and total_xfer_len remain 0.
Fixes: 53de092f47ff ("scsi: libsas: Set data_dir as DMA_NONE if libata
marks qc as NODATA")
Signed-off-by: Jolly Shah
Reviewed-by: John Garry
@luojiaxing, can you please test this?
---
v2:
- reorganized code to avoid setting num_scatter t
On 16/03/2021 19:59, Bart Van Assche wrote:
On 3/16/21 10:43 AM, John Garry wrote:
On 16/03/2021 17:00, Bart Van Assche wrote:
I agree that Jens asked at the end of 2018 not to touch the fast path
to fix this use-after-free (maybe that request has been repeated more
recently). If Jens or
On 19/03/2021 17:00, Robin Murphy wrote:
On 2021-03-19 13:25, John Garry wrote:
Add a function to allow the max size which we want to optimise DMA
mappings
for.
It seems neat in theory - particularly for packet-based interfaces that
might have a known fixed size of data unit that they
On 19/03/2021 16:25, Robin Murphy wrote:
On 2021-03-19 13:25, John Garry wrote:
Some LLDs may request DMA mappings whose IOVA length exceeds that of the
current rcache upper limit.
This means that allocations for those IOVAs will never be cached, and
always must be allocated and freed from the
On 19/03/2021 16:13, Robin Murphy wrote:
On 2021-03-19 13:25, John Garry wrote:
Move the IOVA size power-of-2 rcache roundup into the IOVA allocator.
This is to eventually make it possible to be able to configure the upper
limit of the IOVA rcache range.
Signed-off-by: John Garry
On 19/03/2021 13:40, Christoph Hellwig wrote:
On Fri, Mar 19, 2021 at 09:25:42PM +0800, John Garry wrote:
For streaming DMA mappings involving an IOMMU and whose IOVA len regularly
exceeds the IOVA rcache upper limit (meaning that they are not cached),
performance can be reduced.
This is much
Move the IOVA size power-of-2 rcache roundup into the IOVA allocator.
This is to eventually make it possible to be able to configure the upper
limit of the IOVA rcache range.
Signed-off-by: John Garry
---
drivers/iommu/dma-iommu.c | 8 --
drivers/iommu/iova.c | 51
To help learn if the domain has regular IOVA nodes, add a count of
reserved nodes, calculated at init time.
Signed-off-by: John Garry
---
drivers/iommu/iova.c | 2 ++
include/linux/iova.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index
Add a function to allow the max size which we want to optimise DMA mappings
for.
Signed-off-by: John Garry
---
drivers/iommu/dma-iommu.c | 2 +-
include/linux/dma-map-ops.h | 1 +
include/linux/dma-mapping.h | 5 +
kernel/dma/mapping.c| 11 +++
4 files changed, 18
Add a function which allows the max optimised IOMMU DMA size to be set.
Signed-off-by: John Garry
---
drivers/iommu/dma-iommu.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 15b7270a5c2a..a5dfbd6c0496 100644
are a bit ropey - any better
ideas welcome...
[0]
https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leiz...@huawei.com/
[1]
https://lore.kernel.org/linux-iommu/1607538189-237944-1-git-send-email-john.ga...@huawei.com/
John Garry (6):
iommu: Move IOVA power-of-2 roundup into
.org/linux-iommu/20210129092120.1482-1-thunder.leiz...@huawei.com/
Signed-off-by: John Garry
---
drivers/iommu/iova.c | 37 +++--
include/linux/iova.h | 11 ++-
2 files changed, 45 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/io
For IOMMU strict mode, more than doubles throughput in some scenarios.
Signed-off-by: John Garry
---
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 4580e081e489
On 18/03/2021 00:24, Jolly Shah wrote:
Hi John,
Thanks for the review.
On Wed, Mar 17, 2021 at 4:44 AM John Garry wrote:
On 16/03/2021 19:39, Jolly Shah wrote:
When the cache_type for the scsi device is changed, the scsi layer
issues a MODE_SELECT command. The caching mode details are
Well yeah, in your particular case you're allocating from a heavily
over-contended address space, so much of the time it is genuinely full.
Plus you're primarily churning one or two sizes of IOVA, so there's a
high chance that you will either allocate immediately from the cached
node (after a
On 10/03/2021 17:47, John Garry wrote:
On 09/03/2021 15:55, John Garry wrote:
On 05/03/2021 16:35, Robin Murphy wrote:
Hi Robin,
When restarting after searching below the cached node fails, resetting
the start point to the anchor node is often overly pessimistic. If
allocations are made with
On 16/03/2021 19:39, Jolly Shah wrote:
When the cache_type for the scsi device is changed, the scsi layer
issues a MODE_SELECT command. The caching mode details are communicated
via a request buffer associated with the scsi command with data
direction set as DMA_TO_DEVICE (scsi_mode_select). When
On 16/03/2021 17:00, Bart Van Assche wrote:
On 3/16/21 9:15 AM, John Garry wrote:
I'll have a look at this ASAP - a bit busy.
But a quick scan and I notice this:
> @@ -226,6 +226,7 @@ static inline void __blk_mq_put_driver_tag(struct blk_mq_hw_ctx *hctx,
>
Hi Bart,
I'll have a look at this ASAP - a bit busy.
But a quick scan and I notice this:
> @@ -226,6 +226,7 @@ static inline void __blk_mq_put_driver_tag(struct blk_mq_hw_ctx *hctx,
>						struct request *rq)
> {
>	blk_mq_put_tag(hctx->tags, rq->mq_ctx,
John,
On 2021/1/15 18:10, John Garry wrote:
On 21/12/2020 13:04, Jiahui Cen wrote:
On 21/12/2020 03:24, Jiahui Cen wrote:
Hi John,
On 2020/12/18 18:40, John Garry wrote:
On 18/12/2020 06:23, Jiahui Cen wrote:
Since the [start, end) is a half-open interval, a range with the end equal
to the start
On 15/03/2021 10:01, Dmitry Vyukov wrote:
On Mon, Mar 15, 2021 at 10:45 AM John Garry wrote:
It does not happen too often on syzbot so far, so let's try to do the
right thing first.
I've filed:https://bugs.launchpad.net/qemu/+bug/1918917
with a link to this thread. To be fair, I d
On 12/03/2021 10:52, Arnd Bergmann wrote:
On Fri, Mar 12, 2021 at 11:38 AM Dmitry Vyukov wrote:
On Fri, Mar 12, 2021 at 11:11 AM Arnd Bergmann wrote:
It does not happen too often on syzbot so far, so let's try to do the
right thing first.
I've filed: https://bugs.launchpad.net/qemu/+bug/19189
u driver to the iommu ops")
Signed-off-by: Robin Murphy
If it's worth anything:
Reviewed-by: John Garry
---
Documentation/admin-guide/kernel-parameters.txt | 15 ---
drivers/iommu/dma-iommu.c | 13 -
drivers/iommu/intel/iommu.c
On 06/03/2021 19:34, Jiri Olsa wrote:
On Fri, Mar 05, 2021 at 11:06:58AM +, John Garry wrote:
Hi Jirka,
- struct pmu_events_map *map = perf_pmu__find_map(NULL);
+ struct pmu_events_map *map = find_cpumap();
so this is just for arm at the moment right?
Yes - but to be more
On 11/03/2021 00:58, Ming Lei wrote:
Indeed, blk_mq_queue_tag_busy_iter() already does take a reference to its
queue usage counter when called, and the queue cannot be frozen to switch
IO scheduler until all refs are dropped. This ensures no stale references
to IO scheduler requests will be seen
On 08/03/2021 16:22, John Garry wrote:
While max32_alloc_size indirectly tracks the largest*contiguous*
available space, one of the ideas from which it grew was to simply keep
count of the total number of free PFNs. If you're really spending
significant time determining that the tr
On 10/03/2021 16:00, Bart Van Assche wrote:
So I can incorporate any changes and suggestions so far and send a
non-RFC version - that may get more attention if none extra comes.
As mentioned on the cover letter, if patch 2+3/3 are accepted, then
patch 1/3 could be simplified. But I plan to lea
On 09/03/2021 19:21, Bart Van Assche wrote:
On 3/9/21 9:47 AM, John Garry wrote:
This does fall over if some tags are allocated without associated
request queue, which I do not know exists.
Hi Bart,
The only tag allocation mechanism I know of is blk_mq_get_tag(). The
only blk_mq_get_tag
On 08/03/2021 19:59, Bart Van Assche wrote:
This changes the behavior of blk_mq_tagset_busy_iter(). What will e.g.
happen if the mtip driver calls blk_mq_tagset_busy_iter(&dd->tags,
mtip_abort_cmd, dd) concurrently with another blk_mq_tagset_busy_iter()
call and if that causes all mtip_abort_cmd(
On 09/03/2021 15:57, Michael Kelley wrote:
From: John Garry Sent: Tuesday, March 9, 2021 2:10 AM
On 08/03/2021 17:56, Melanie Plageman wrote:
On Mon, Mar 08, 2021 at 02:37:40PM +, Michael Kelley wrote:
From: Melanie Plageman (Microsoft) Sent: Friday,
March 5, 2021 3:22 PM
The
On 05/03/2021 16:35, Robin Murphy wrote:
Hi Robin,
When restarting after searching below the cached node fails, resetting
the start point to the anchor node is often overly pessimistic. If
allocations are made with mixed limits - particularly in the case of the
opportunistic 32-bit allocation f
On 08/03/2021 17:56, Melanie Plageman wrote:
On Mon, Mar 08, 2021 at 02:37:40PM +, Michael Kelley wrote:
From: Melanie Plageman (Microsoft) Sent: Friday,
March 5, 2021 3:22 PM
The scsi_device->queue_depth is set to Scsi_Host->cmd_per_lun during
allocation.
Cap cmd_per_lun at can_queue t
On 06/03/2021 19:34, Jiri Olsa wrote:
On Fri, Mar 05, 2021 at 11:06:58AM +, John Garry wrote:
Hi Jirka,
- struct pmu_events_map *map = perf_pmu__find_map(NULL);
+ struct pmu_events_map *map = find_cpumap();
so this is just for arm at the moment right?
Yes - but to be more
On 08/03/2021 15:15, Robin Murphy wrote:
I figure that you're talking about 4e89dce72521 now. I would have
liked to know which real-life problem it solved in practice.
From what I remember, the problem reported was basically the one
illustrated in that commit and the one I alluded to above -
On 08/03/2021 13:08, Robin Murphy wrote:
On 2021-03-05 17:41, John Garry wrote:
On 05/03/2021 16:32, Robin Murphy wrote:
In converting intel-iommu over to the common IOMMU DMA ops, it quietly
lost the functionality of its "forcedac" option. Since this is a handy
thing both for testi
On 06/03/2021 02:52, Khazhy Kumykov wrote:
On Fri, Mar 5, 2021 at 7:20 AM John Garry wrote:
It has been reported many times that a use-after-free can be intermittently
found when iterating busy requests:
-
https://lore.kernel.org/linux-block/8376443a-ec1b-0cef-8244-ed584b96f...@huawei.com
On 06/03/2021 04:43, Bart Van Assche wrote:
On 3/5/21 7:14 AM, John Garry wrote:
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 7ff1b20d58e7..5950fee490e8 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -358,11 +358,16 @@ void blk_mq_tagset_busy_iter(struct
On 06/03/2021 04:32, Bart Van Assche wrote:
On 3/5/21 7:14 AM, John Garry wrote:
diff --git a/block/blk.h b/block/blk.h
index 3b53e44b967e..1a948bfd91e4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -201,10 +201,29 @@ void elv_unregister_queue(struct request_queue *q);
static inline void
On 06/03/2021 18:13, Bart Van Assche wrote:
On 3/5/21 7:14 AM, John Garry wrote:
@@ -2296,10 +2296,14 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct
blk_mq_tags *tags,
for (i = 0; i < tags->nr_tags; i++) {
struct request *rq = tags->sta