The pull request you sent on Mon, 14 Jul 2025 22:10:05 +0200 (CEST):
> git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git
> tags/for-6.16/dm-fixes-2
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/155a3c003e555a7300d156a5252c004c392ec6b0
Thank you!
On 7/15/25 01:17, Mikulas Patocka wrote:
On Thu, 10 Jul 2025, Sheng Yong wrote:
From: Sheng Yong
[..]
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index ec84ba5e93e5..caf6ae9a8b52 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -2742,7 +2742,9 @@ static un
On 7/14/25 12:27 PM, Mikulas Patocka wrote:
There's no need to call need_resched() because cond_resched() will do
nothing if need_resched() returns false.
Signed-off-by: Mikulas Patocka
Reviewed-by: Matthew Sakai
---
drivers/md/dm-vdo/funnel-workqueue.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Hi Linus
The following changes since commit db53805156f1e0aa6d059c0d3f9ac660d4ef3eb4:
dm-raid: fix variable in journal device check (2025-06-23 16:42:37 +0200)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git
tags/for-6.16/dm-fixes-2
On Thu, 10 Jul 2025, Sheng Yong wrote:
> From: Sheng Yong
>
> If "try_verify_in_tasklet" is set for dm-verity, DM_BUFIO_CLIENT_NO_SLEEP
> is enabled for dm-bufio. However, when bufio tries to evict buffers, there
> is a chance of triggering scheduling inside spin_lock_bh; the following warning
> is
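To make the hazard concrete: with DM_BUFIO_CLIENT_NO_SLEEP, dm-bufio's structures are protected by a spinlock taken with spin_lock_bh(), so nothing inside the critical section may sleep. A minimal sketch of the bug pattern, with a hypothetical function (not the actual dm-bufio eviction code):

    #include <linux/spinlock.h>
    #include <linux/sched.h>

    /* Illustrative only, not the actual dm-bufio eviction path. */
    static void evict_buffers_sketch(spinlock_t *lock)
    {
            spin_lock_bh(lock);
            /* ... pick victim buffers off the LRU ... */
            cond_resched();         /* BUG: may schedule while atomic */
            spin_unlock_bh(lock);
    }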
On Mon, 2025-07-14 at 12:59 -0400, Benjamin Marzinski wrote:
> On Fri, Jul 11, 2025 at 04:11:46PM +0200, Martin Wilck wrote:
> > On Fri, 2025-07-11 at 14:15 +0200, Martin Wilck wrote:
> >
> >
> > It's getting so awkward that we might as well just use memset.
>
> Yeah. Picking a member from one o
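The awkwardness under discussion: when a struct contains a union, a designated initializer has to name one particular union member, while memset() zeroes the whole object regardless. A sketch with a hypothetical struct (not the actual multipath-tools type):

    #include <stdint.h>
    #include <string.h>

    struct key_info {               /* hypothetical, for illustration */
            int type;
            union {
                    uint64_t rkey;
                    unsigned char raw[8];
            };
    };

    static void zero_examples(void)
    {
            /* designated initialization must pick one union member */
            struct key_info a = { .type = 0, .rkey = 0 };

            /* memset() zeroes everything, padding bytes included,
             * without caring which union member is "active" */
            struct key_info b;

            memset(&b, 0, sizeof(b));
            (void)a;
            (void)b;
    }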
On Fri, Jul 11, 2025 at 04:11:46PM +0200, Martin Wilck wrote:
> On Fri, 2025-07-11 at 14:15 +0200, Martin Wilck wrote:
> > On Thu, 2025-07-10 at 14:10 -0400, Benjamin Marzinski wrote:
> > > When you change the reservation key of a registered multipath
> > > device,
> > > some of the paths might be down
There's no need to call need_resched() because cond_resched() will do
nothing if need_resched() returns false.
Signed-off-by: Mikulas Patocka
---
drivers/md/dm-vdo/funnel-workqueue.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Index: linux-2.6/drivers/md/dm-vdo/funnel-workqueue.c
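The pattern the patch removes, reduced to its essence (a sketch, not the exact funnel-workqueue.c hunk):

    /* Before: redundant guard around cond_resched() */
    if (need_resched())
            cond_resched();

    /* After: cond_resched() itself checks need_resched() and is a
     * no-op when no reschedule is pending. */
    cond_resched();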
On Wed, 9 Jul 2025, Dongsheng Yang wrote:
>
> On 7/8/2025 4:16 AM, Mikulas Patocka wrote:
> >
> > On Mon, 7 Jul 2025, Dongsheng Yang wrote:
> >
> > > Hi Mikulas,
> > > This is V2 for dm-pcache, please take a look.
> > >
> > > Code:
> > > https://github.com/DataTravelGuide/linux tags/pcach
Am 12.07.25 um 22:14 schrieb Xose Vazquez Perez:
> Each blacklists only its own devices.
>
> Cc: Stefan Haberland
> Cc: Nigel Hislop
> Cc: Matthias Rudolph
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Alexander Gordeev
> Cc: Christian Borntraeger
> Cc: Sven Schnelle
> Cc: Hannes Reinecke
Am 13.07.25 um 00:11 schrieb Xose Vazquez Perez:
> On 7/12/25 10:14 PM, Xose Vazquez Perez wrote:
>
>> libmultipath/hwtable.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/libmultipath/hwtable.c b/libmultipath/hwtable.c
>> index 081d119c..4ca4245c 100644
>> --
On Sat, 2025-07-12 at 22:14 +0200, Xose Vazquez Perez wrote:
> Each blacklists only its own devices.
>
> Cc: Stefan Haberland
> Cc: Nigel Hislop
> Cc: Matthias Rudolph
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Alexander Gordeev
> Cc: Christian Borntraeger
> Cc: Sven Schnelle
> Cc: Hannes Reinecke
On Sun, 2025-07-13 at 00:11 +0200, Xose Vazquez Perez wrote:
> On 7/12/25 10:14 PM, Xose Vazquez Perez wrote:
>
> > libmultipath/hwtable.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/libmultipath/hwtable.c b/libmultipath/hwtable.c
> > index 081d119c..4ca4
On Mon, Jul 14, 2025 at 08:52:39AM +0100, John Garry wrote:
> On 14/07/2025 06:53, Christoph Hellwig wrote:
>> Now we should be able to implement the software atomic writes pretty
>> easily for zoned XFS, and funnily they might actually be slightly faster
>> than normal writes due to the transaction batching
Add cache.c and cache.h that introduce the top-level
“struct pcache_cache”. This object glues together the backing block
device, the persistent-memory cache device, segment array, RB-tree
indexes, and the background workers for write-back and garbage
collection.
* Persistent metadata
- pcache_ca
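As a rough mental model of what such a top-level object holds, a hypothetical sketch (field names are illustrative, not the actual dm-pcache layout):

    #include <linux/rbtree.h>
    #include <linux/workqueue.h>

    /* Hypothetical sketch only. */
    struct pcache_cache_sketch {
            struct pcache_backing_dev *backing_dev;  /* origin bdev */
            struct pcache_cache_dev *cache_dev;      /* pmem device */
            struct pcache_cache_segment **segments;  /* segment array */
            struct rb_root *key_trees;       /* striped RB-tree indexes */
            struct work_struct writeback_work;  /* background write-back */
            struct work_struct gc_work;         /* background GC */
    };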
Introduce *cache_segment.c*, the in-memory/on-disk glue that lets a
`struct pcache_cache` manage its array of data segments.
* Metadata handling
- Loads the most-recent replica of both the segment-info block
(`struct pcache_segment_info`) and per-segment generation counter
(`struct pcach
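Selecting the most-recent replica typically reduces to "newest valid sequence number wins". A self-contained sketch under that assumption (all names hypothetical):

    #include <linux/types.h>

    struct meta_hdr_sketch {        /* hypothetical header layout */
            u32 crc;
            u64 seq;
    };

    static struct meta_hdr_sketch *
    pick_latest_replica(struct meta_hdr_sketch **replicas, int n,
                        bool (*crc_ok)(struct meta_hdr_sketch *))
    {
            struct meta_hdr_sketch *best = NULL;
            int i;

            for (i = 0; i < n; i++) {
                    if (!crc_ok(replicas[i]))   /* skip corrupt copies */
                            continue;
                    if (!best || replicas[i]->seq > best->seq)
                            best = replicas[i];
            }
            return best;    /* NULL if every replica failed its CRC */
    }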
Introduce cache_req.c, the high-level engine that
drives I/O requests through dm-pcache. It decides whether data is served
from the cache or fetched from the backing device, allocates new cache
space on writes, and flushes dirty ksets when required.
* Read path
- Traverses the striped RB-trees t
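The hit/miss decision, reduced to a skeleton; every helper named here is hypothetical, standing in for the real lookup and I/O paths:

    /* Hypothetical skeleton of the read path described above. */
    static int pcache_read_sketch(struct pcache_cache_sketch *cache,
                                  u64 off, u32 len)
    {
            struct pcache_cache_key *key;

            /* look for a cached extent covering [off, off + len) */
            key = cache_index_lookup(cache, off, len);
            if (key)
                    return read_from_cache_dev(cache, key, off, len);

            /* miss: serve the read from the backing device */
            return read_from_backing_dev(cache, off, len);
    }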
Add the top-level integration pieces that make the new persistent-memory
cache target usable from device-mapper:
* Documentation
- `Documentation/admin-guide/device-mapper/dm-pcache.rst` explains the
design, table syntax, status fields and runtime messages.
* Core target implementation
-
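For orientation, a new device-mapper target plugs into the framework through the standard target_type registration; the callbacks below are placeholders for the dm-pcache implementations, but struct target_type and module_dm() are the stock dm API:

    #include <linux/device-mapper.h>
    #include <linux/module.h>

    /* Callback names are placeholders, not the actual dm-pcache code. */
    static struct target_type pcache_target = {
            .name    = "pcache",
            .version = {1, 0, 0},
            .module  = THIS_MODULE,
            .ctr     = pcache_ctr,      /* parse the table line */
            .dtr     = pcache_dtr,
            .map     = pcache_map,      /* route incoming bios */
            .status  = pcache_status,   /* report status fields */
            .message = pcache_message,  /* handle runtime messages */
    };
    module_dm(pcache);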
Add *cache_key.c* which becomes the heart of dm-pcache’s
in-memory index and on-media key-set (“kset”) format.
* Key objects (`struct pcache_cache_key`)
- Slab-backed allocator & ref-count helpers
- `cache_key_encode()/decode()` translate between in-memory keys and
their on-disk representation.
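Encoding usually just serializes the in-memory fields into a fixed-endian on-disk record. A sketch under that assumption (the real kset layout is defined by the patch and not reproduced here):

    #include <linux/types.h>

    /* Hypothetical on-disk key record. */
    struct pcache_key_ondisk_sketch {
            __le64 off;             /* logical offset on the origin */
            __le32 len;
            __le64 cache_pos;       /* where the data sits in the cache */
    } __packed;

    static void cache_key_encode_sketch(struct pcache_key_ondisk_sketch *dst,
                                        u64 off, u32 len, u64 cache_pos)
    {
            dst->off       = cpu_to_le64(off);
            dst->len       = cpu_to_le32(len);
            dst->cache_pos = cpu_to_le64(cache_pos);
    }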
Introduce cache_gc.c, a self-contained engine that reclaims cache
segments whose data have already been flushed to the backing device.
Running in the cache workqueue, the GC keeps segment usage below the
user-configurable *cache_gc_percent* threshold.
* need_gc() – decides when to trigger GC by ch
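The trigger condition is a percentage comparison; as arithmetic (a sketch, with the threshold semantics assumed from the description above):

    /* Reclaim once used segments exceed cache_gc_percent of total. */
    static bool need_gc_sketch(u32 used_segs, u32 total_segs, u8 gc_percent)
    {
            return (u64)used_segs * 100 > (u64)total_segs * gc_percent;
    }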
Introduce cache_writeback.c, which implements the asynchronous write-back
path for pcache. The new file is responsible for detecting dirty data,
organising it into an in-memory tree, issuing bios to the backing block
device, and advancing the cache’s *dirty tail* pointer once data has
been safely
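The overall shape of such a write-back engine is a loop over dirty keys; all helpers below are hypothetical, and real code would advance the tail only after the bios actually complete:

    /* Hypothetical write-back loop matching the description above. */
    static void writeback_sketch(struct pcache_cache_sketch *cache)
    {
            while (cache_has_dirty(cache)) {
                    struct pcache_cache_key *key = next_dirty_key(cache);

                    issue_backing_write(cache, key);  /* submits a bio */
                    advance_dirty_tail(cache, key);   /* after completion */
            }
    }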
Introduce segment.{c,h}, an internal abstraction that encapsulates
everything related to a single pcache *segment* (the fixed-size
allocation unit stored on the cache-device).
* On-disk metadata (`struct pcache_segment_info`)
- Embedded `struct pcache_meta_header` for CRC/sequence handling.
-
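The embedded-header pattern means every on-disk metadata struct starts with the common header, so CRC and sequence handling are shared. A hypothetical sketch (payload fields invented for illustration):

    #include <linux/types.h>

    struct pcache_meta_header_sketch {
            __le32 crc;     /* covers everything after this field */
            __le64 seq;     /* replica generation, newest wins */
    };

    struct pcache_segment_info_sketch {
            struct pcache_meta_header_sketch header;
            __le32 flags;
            __le64 data_off;        /* hypothetical payload field */
    };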
Add cache_dev.{c,h} to manage the persistent-memory device that stores
all pcache metadata and data segments. Splitting this logic out keeps
the main dm-pcache code focused on policy while cache_dev handles the
low-level interaction with the DAX block device.
* DAX mapping
- Opens the underlyin
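DAX access boils down to asking the dax_device for a kernel-virtual mapping of a page range; dax_direct_access() is the real kernel API, while the wrapper and its error handling are a sketch:

    #include <linux/dax.h>
    #include <linux/err.h>

    /* Sketch: map nr_pages of the pmem device at pgoff for load/store. */
    static void *map_cache_dev_sketch(struct dax_device *dax_dev,
                                      pgoff_t pgoff, long nr_pages)
    {
            void *kaddr;
            long mapped;

            mapped = dax_direct_access(dax_dev, pgoff, nr_pages,
                                       DAX_ACCESS, &kaddr, NULL);
            if (mapped < 0)
                    return ERR_PTR(mapped);

            return kaddr;   /* CPU-addressable view of the pmem range */
    }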
This patch introduces *backing_dev.{c,h}*, a self-contained layer that
handles all interaction with the *backing block device* where cache
write-back and cache-miss reads are serviced. Isolating this logic
keeps the core dm-pcache code free of low-level bio plumbing.
* Device setup / teardown
-
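The bio plumbing this layer hides is the stock block API: allocate a bio against the backing bdev, aim it at a sector, attach pages, submit. A minimal read sketch (bio_alloc(), __bio_add_page() and submit_bio() are the real APIs; everything around them is illustrative):

    #include <linux/bio.h>

    static void backing_dev_read_sketch(struct block_device *bdev,
                                        sector_t sector, struct page *page,
                                        unsigned int len, bio_end_io_t *done)
    {
            struct bio *bio = bio_alloc(bdev, 1, REQ_OP_READ, GFP_NOIO);

            bio->bi_iter.bi_sector = sector;
            bio->bi_end_io = done;          /* completion callback */
            __bio_add_page(bio, page, len, 0);
            submit_bio(bio);
    }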
Consolidate common PCACHE helpers into a new header so that subsequent
patches can include them without repeating boiler-plate.
- Logging macros with unified prefix and location info.
- Common constants (KB/MB helpers, metadata replica count, CRC seed).
- On-disk metadata header definition and CRC
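Validating such a header is a CRC over everything after the crc field itself, compared against the stored value. A sketch reusing the header layout sketched earlier (the seed constant is hypothetical; crc32() is the real kernel helper):

    #include <linux/crc32.h>

    #define PCACHE_CRC_SEED_SKETCH 0x02f8   /* hypothetical seed value */

    static bool meta_crc_ok_sketch(struct pcache_meta_header_sketch *h,
                                   size_t size)
    {
            u32 crc = crc32(PCACHE_CRC_SEED_SKETCH,
                            (void *)h + sizeof(h->crc),
                            size - sizeof(h->crc));

            return crc == le32_to_cpu(h->crc);
    }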
Hi Mikulas,
This is V3 for dm-pcache, please take a look.
Code:
https://github.com/DataTravelGuide/linux tags/pcache_v3
Changelogs
V3 from V2:
- rebased against linux-dm dm-6.17
- add missing include file bitfield.h (Mikulas)
- move kmem_cache from per-device
On 14/07/2025 06:53, Christoph Hellwig wrote:
Now we should be able to implement the software atomic writes pretty
easily for zoned XFS, and funnily they might actually be slightly faster
than normal writes due to the transaction batching. Now that we're
getting reasonable test coverage we shoul