Re: [git pull] device mapper fixes 2 for 6.16

2025-07-14 Thread pr-tracker-bot
The pull request you sent on Mon, 14 Jul 2025 22:10:05 +0200 (CEST): > git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git > tags/for-6.16/dm-fixes-2 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/155a3c003e555a7300d156a5252c004c392ec6b0 Thank yo

Re: [PATCH] dm-bufio: fix sched in atomic context

2025-07-14 Thread Sheng Yong
On 7/15/25 01:17, Mikulas Patocka wrote: On Thu, 10 Jul 2025, Sheng Yong wrote: From: Sheng Yong [..] diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index ec84ba5e93e5..caf6ae9a8b52 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -2742,7 +2742,9 @@ static un

Re: [PATCH] vdo: omit need_resched() before cond_resched()

2025-07-14 Thread Matthew Sakai
On 7/14/25 12:27 PM, Mikulas Patocka wrote: There's no need to call need_resched() because cond_resched() will do nothing if need_resched() returns false. Signed-off-by: Mikulas Patocka Reviewed-by: Matthew Sakai --- drivers/md/dm-vdo/funnel-workqueue.c |3 +-- 1 file changed, 1 i

[git pull] device mapper fixes 2 for 6.16

2025-07-14 Thread Mikulas Patocka
Hi Linus The following changes since commit db53805156f1e0aa6d059c0d3f9ac660d4ef3eb4: dm-raid: fix variable in journal device check (2025-06-23 16:42:37 +0200) are available in the Git repository at: git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git tags/for-6.16/dm-

Re: [PATCH] dm-bufio: fix sched in atomic context

2025-07-14 Thread Mikulas Patocka
On Thu, 10 Jul 2025, Sheng Yong wrote: > From: Sheng Yong > > If "try_verify_in_tasklet" is set for dm-verity, DM_BUFIO_CLIENT_NO_SLEEP > is enabled for dm-bufio. However, when bufio tries to evict buffers, there > is a chance to trigger scheduling in spin_lock_bh, the following warning > is
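The warning described in this thread comes from the kernel's atomic-context tracking: spin_lock_bh() disables bottom halves, and any operation that can schedule inside that critical section trips the might_sleep machinery. A userspace model of how that check catches the bug (the lock/counter stubs below are illustrative assumptions, not the real kernel API):

```c
#include <stdbool.h>

/* Userspace stand-ins for kernel primitives (assumptions, not the real API). */
static int atomic_depth;        /* >0 means sleeping is forbidden */
static int warnings;

static void spin_lock_bh_stub(void)   { atomic_depth++; }
static void spin_unlock_bh_stub(void) { atomic_depth--; }

/* Model of might_sleep(): warn if called while in atomic context. */
static void might_sleep_stub(void)
{
    if (atomic_depth > 0)
        warnings++;  /* the kernel would print a "scheduling while atomic" warning here */
}

/* Buggy eviction path: performs a possibly-sleeping operation under the lock. */
static void evict_buffers_buggy(void)
{
    spin_lock_bh_stub();
    might_sleep_stub();        /* e.g. an allocation or wait that can block */
    spin_unlock_bh_stub();
}

/* Fixed path: drop the lock before any operation that can sleep. */
static void evict_buffers_fixed(void)
{
    spin_lock_bh_stub();
    spin_unlock_bh_stub();
    might_sleep_stub();        /* now legal: no spinlock held */
}
```

With DM_BUFIO_CLIENT_NO_SLEEP, the bufio client promises not to sleep in these paths, which is why the eviction code must restructure rather than simply tolerate the warning.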

Re: [PATCH 11/15] libmpathpersist: Handle changing key corner case

2025-07-14 Thread Martin Wilck
On Mon, 2025-07-14 at 12:59 -0400, Benjamin Marzinski wrote: > On Fri, Jul 11, 2025 at 04:11:46PM +0200, Martin Wilck wrote: > > On Fri, 2025-07-11 at 14:15 +0200, Martin Wilck wrote: > > > > > > It's getting so awkward that we might as well just use memset. > > Yeah. picking a member from one o

Re: [PATCH 11/15] libmpathpersist: Handle changing key corner case

2025-07-14 Thread Benjamin Marzinski
On Fri, Jul 11, 2025 at 04:11:46PM +0200, Martin Wilck wrote: > On Fri, 2025-07-11 at 14:15 +0200, Martin Wilck wrote: > > On Thu, 2025-07-10 at 14:10 -0400, Benjamin Marzinski wrote: > > > When you change the reservation key of a registered multipath > > > device, > > > some of paths might be down

[PATCH] vdo: omit need_resched() before cond_resched()

2025-07-14 Thread Mikulas Patocka
There's no need to call need_resched() because cond_resched() will do nothing if need_resched() returns false. Signed-off-by: Mikulas Patocka --- drivers/md/dm-vdo/funnel-workqueue.c |3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) Index: linux-2.6/drivers/md/dm-vdo/funnel-workqueue.
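The simplification in this patch works because cond_resched() already returns immediately when need_resched() is false, so guarding it is pure overhead. A userspace sketch of the before/after pattern (the scheduler hooks below are stubs for illustration, not the kernel implementation):

```c
#include <stdbool.h>

/* Stubs standing in for the kernel's scheduler hooks (assumptions). */
static bool resched_pending;
static int resched_count;

static bool need_resched(void) { return resched_pending; }

static int cond_resched(void)
{
    if (!need_resched())
        return 0;           /* cheap early return: nothing to do */
    resched_count++;         /* stand-in for the actual reschedule */
    resched_pending = false;
    return 1;
}

/* Before: the explicit need_resched() guard duplicates the check
 * cond_resched() performs internally. */
static void poll_loop_before(void)
{
    if (need_resched())
        cond_resched();
}

/* After: calling cond_resched() unconditionally behaves identically. */
static void poll_loop_after(void)
{
    cond_resched();
}
```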

Re: [PATCH v2 00/11] dm-pcache – persistent-memory cache for block devices

2025-07-14 Thread Mikulas Patocka
On Wed, 9 Jul 2025, Dongsheng Yang wrote: > > On 7/8/2025 4:16 AM, Mikulas Patocka wrote: > > > > On Mon, 7 Jul 2025, Dongsheng Yang wrote: > > > > > Hi Mikulas, > > > This is V2 for dm-pcache, please take a look. > > > > > > Code: > > > https://github.com/DataTravelGuide/linux tags/pcach

Re: [PATCH] multipath-tools: fix default blacklist of s390 devices

2025-07-14 Thread Stefan Haberland
On 12.07.25 at 22:14, Xose Vazquez Perez wrote: > Each blacklist only their own devices. > > Cc: Stefan Haberland > Cc: Nigel Hislop > Cc: Matthias Rudolph > Cc: Heiko Carstens > Cc: Vasily Gorbik > Cc: Alexander Gordeev > Cc: Christian Borntraeger > Cc: Sven Schnelle > Cc: Hannes Reinec

Re: [PATCH] multipath-tools: fix default blacklist of s390 devices

2025-07-14 Thread Stefan Haberland
On 13.07.25 at 00:11, Xose Vazquez Perez wrote: > On 7/12/25 10:14 PM, Xose Vazquez Perez wrote: > >>   libmultipath/hwtable.c | 4 ++-- >>   1 file changed, 2 insertions(+), 2 deletions(-) >> >> diff --git a/libmultipath/hwtable.c b/libmultipath/hwtable.c >> index 081d119c..4ca4245c 100644 >> --

Re: [PATCH] multipath-tools: fix default blacklist of s390 devices

2025-07-14 Thread Martin Wilck
On Sat, 2025-07-12 at 22:14 +0200, Xose Vazquez Perez wrote: > Each blacklist only their own devices. > > Cc: Stefan Haberland > Cc: Nigel Hislop > Cc: Matthias Rudolph > Cc: Heiko Carstens > Cc: Vasily Gorbik > Cc: Alexander Gordeev > Cc: Christian Borntraeger > Cc: Sven Schnelle > Cc: Ha

Re: [PATCH] multipath-tools: fix default blacklist of s390 devices

2025-07-14 Thread Martin Wilck
On Sun, 2025-07-13 at 00:11 +0200, Xose Vazquez Perez wrote: > On 7/12/25 10:14 PM, Xose Vazquez Perez wrote: > > >   libmultipath/hwtable.c | 4 ++-- > >   1 file changed, 2 insertions(+), 2 deletions(-) > > > > diff --git a/libmultipath/hwtable.c b/libmultipath/hwtable.c > > index 081d119c..4ca4

Re: [PATCH v6 0/6] block/md/dm: set chunk_sectors from stacked dev stripe size

2025-07-14 Thread Christoph Hellwig
On Mon, Jul 14, 2025 at 08:52:39AM +0100, John Garry wrote: > On 14/07/2025 06:53, Christoph Hellwig wrote: >> Now we should be able to implement the software atomic writes pretty >> easily for zoned XFS, and funnily they might actually be slightly faster >> than normal writes due to the transactio

[PATCH v3 10/11] dm-pcache: add cache core

2025-07-14 Thread Dongsheng Yang
Add cache.c and cache.h that introduce the top-level “struct pcache_cache”. This object glues together the backing block device, the persistent-memory cache device, segment array, RB-tree indexes, and the background workers for write-back and garbage collection. * Persistent metadata - pcache_ca

[PATCH v3 05/11] dm-pcache: add cache_segment

2025-07-14 Thread Dongsheng Yang
Introduce *cache_segment.c*, the in-memory/on-disk glue that lets a `struct pcache_cache` manage its array of data segments. * Metadata handling - Loads the most-recent replica of both the segment-info block (`struct pcache_segment_info`) and per-segment generation counter (`struct pcach

[PATCH v3 09/11] dm-pcache: add cache_req

2025-07-14 Thread Dongsheng Yang
Introduce cache_req.c, the high-level engine that drives I/O requests through dm-pcache. It decides whether data is served from the cache or fetched from the backing device, allocates new cache space on writes, and flushes dirty ksets when required. * Read path - Traverses the striped RB-trees t

[PATCH v3 11/11] dm-pcache: initial dm-pcache target

2025-07-14 Thread Dongsheng Yang
Add the top-level integration pieces that make the new persistent-memory cache target usable from device-mapper: * Documentation - `Documentation/admin-guide/device-mapper/dm-pcache.rst` explains the design, table syntax, status fields and runtime messages. * Core target implementation -

[PATCH v3 08/11] dm-pcache: add cache_key

2025-07-14 Thread Dongsheng Yang
Add *cache_key.c* which becomes the heart of dm-pcache’s in-memory index and on-media key-set (“kset”) format. * Key objects (`struct pcache_cache_key`) - Slab-backed allocator & ref-count helpers - `cache_key_encode()/decode()` translate between in-memory keys and their on-disk representa

[PATCH v3 07/11] dm-pcache: add cache_gc

2025-07-14 Thread Dongsheng Yang
Introduce cache_gc.c, a self-contained engine that reclaims cache segments whose data have already been flushed to the backing device. Running in the cache workqueue, the GC keeps segment usage below the user-configurable *cache_gc_percent* threshold. * need_gc() – decides when to trigger GC by ch
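The need_gc() decision described in this preview is a usage-against-threshold comparison. A minimal userspace sketch of that check (the parameter names are assumed from the description, not the actual dm-pcache code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: trigger GC once used segments exceed
 * gc_percent of all segments. 64-bit math avoids overflow for
 * large segment counts. */
static bool need_gc(uint64_t used_segs, uint64_t total_segs,
                    unsigned gc_percent)
{
    return used_segs * 100 > total_segs * gc_percent;
}
```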

[PATCH v3 06/11] dm-pcache: add cache_writeback

2025-07-14 Thread Dongsheng Yang
Introduce cache_writeback.c, which implements the asynchronous write-back path for pcache. The new file is responsible for detecting dirty data, organising it into an in-memory tree, issuing bios to the backing block device, and advancing the cache’s *dirty tail* pointer once data has been safely

[PATCH v3 04/11] dm-pcache: add segment layer

2025-07-14 Thread Dongsheng Yang
Introduce segment.{c,h}, an internal abstraction that encapsulates everything related to a single pcache *segment* (the fixed-size allocation unit stored on the cache-device). * On-disk metadata (`struct pcache_segment_info`) - Embedded `struct pcache_meta_header` for CRC/sequence handling. -

[PATCH v3 03/11] dm-pcache: add cache device

2025-07-14 Thread Dongsheng Yang
Add cache_dev.{c,h} to manage the persistent-memory device that stores all pcache metadata and data segments. Splitting this logic out keeps the main dm-pcache code focused on policy while cache_dev handles the low-level interaction with the DAX block device. * DAX mapping - Opens the underlyin

[PATCH v3 02/11] dm-pcache: add backing device management

2025-07-14 Thread Dongsheng Yang
This patch introduces *backing_dev.{c,h}*, a self-contained layer that handles all interaction with the *backing block device* where cache write-back and cache-miss reads are serviced. Isolating this logic keeps the core dm-pcache code free of low-level bio plumbing. * Device setup / teardown -

[PATCH v3 01/11] dm-pcache: add pcache_internal.h

2025-07-14 Thread Dongsheng Yang
Consolidate common PCACHE helpers into a new header so that subsequent patches can include them without repeating boiler-plate. - Logging macros with unified prefix and location info. - Common constants (KB/MB helpers, metadata replica count, CRC seed). - On-disk metadata header definition and CRC
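The CRC-sealed metadata header plus replica selection described across this series can be sketched in userspace. The layout and names below are illustrative assumptions based on the patch descriptions, not the actual dm-pcache on-disk format:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk header: crc covers everything after itself,
 * seq is a monotonically increasing generation counter. */
struct meta_header {
    uint32_t crc;
    uint64_t seq;
    uint32_t payload;
};

/* Bitwise CRC32 (standard reflected polynomial), for illustration. */
static uint32_t crc32_sw(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xffffffffu;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1));
    }
    return ~crc;
}

static void meta_seal(struct meta_header *h)
{
    h->crc = crc32_sw((char *)h + sizeof(h->crc),
                      sizeof(*h) - sizeof(h->crc));
}

static bool meta_valid(const struct meta_header *h)
{
    return h->crc == crc32_sw((const char *)h + sizeof(h->crc),
                              sizeof(*h) - sizeof(h->crc));
}

/* Pick the most-recent valid replica, as the series describes for
 * segment-info blocks kept in multiple copies. */
static const struct meta_header *
meta_latest(const struct meta_header *r, int n)
{
    const struct meta_header *best = NULL;
    for (int i = 0; i < n; i++)
        if (meta_valid(&r[i]) && (!best || r[i].seq > best->seq))
            best = &r[i];
    return best;
}
```

Writing replicas with increasing sequence numbers and falling back to the newest copy whose CRC still verifies is what makes metadata updates crash-safe.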

[PATCH v3 00/11] dm-pcache – persistent-memory cache for block devices

2025-07-14 Thread Dongsheng Yang
Hi Mikulas, This is V3 for dm-pcache, please take a look. Code: https://github.com/DataTravelGuide/linux tags/pcache_v3 Changelogs V3 from V2: - rebased against linux-dm dm-6.17 - add missing include file bitfield.h (Mikulas) - move kmem_cache from per-device

Re: [PATCH v6 0/6] block/md/dm: set chunk_sectors from stacked dev stripe size

2025-07-14 Thread John Garry
On 14/07/2025 06:53, Christoph Hellwig wrote: Now we should be able to implement the software atomic writes pretty easily for zoned XFS, and funnily they might actually be slightly faster than normal writes due to the transaction batching. Now that we're getting reasonable test coverage we shoul