On Tue, Sep 26, 2023 at 11:53:33PM +0100, Matthew Wilcox wrote:
> I'm going to sleep now instead of running the last 10 steps of the
> bisect. If nobody's found this when I wake up, I'll finish it then.
Bisection found it. I confirmed by hand; checking out this commit
yields a faile
On Tue, Sep 26, 2023 at 11:08:07PM +0100, Matthew Wilcox wrote:
> Got this in linux-next 20230926, and I don't think it's due to my
> patches on top (it may be, will verify):
Confirmed not my patches;
git bisect start
# status: waiting for both good and bad commits
Got this in linux-next 20230926, and I don't think it's due to my
patches on top (it may be, will verify):
04178 generic/347 run fstests generic/347 at 2023-09-26 17:24:55
04178 XFS (sdb): Mounting V5 Filesystem c0c11e6a-170c-48e4-84c5-42b46d6d5197
04178 XFS (sdb): Ending clean mount
04179
On Tue, Jul 04, 2023 at 07:06:26AM -0700, Bart Van Assche wrote:
> On 7/4/23 05:21, Jan Kara wrote:
> > +struct bdev_handle {
> > + struct block_device *bdev;
> > + void *holder;
> > +};
>
> Please explain in the patch description why a holder pointer is introduced
> in struct bdev_handle and
On Tue, Jul 04, 2023 at 02:21:28PM +0200, Jan Kara wrote:
> +struct bdev_handle *blkdev_get_handle_by_dev(dev_t dev, blk_mode_t mode,
> + void *holder, const struct blk_holder_ops *hops)
> +{
> + struct bdev_handle *handle = kmalloc(sizeof(struct bdev_handle),
> +
On Tue, May 30, 2023 at 08:49:23AM -0700, Johannes Thumshirn wrote:
> Now that all callers of bio_add_folio() check the return value, mark it as
> __must_check.
>
> Signed-off-by: Johannes Thumshirn
Reviewed-by: Matthew Wilcox (Oracle)
--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel
a newly created bio can't fail, use the newly
> introduced __bio_add_folio() function.
>
> Signed-off-by: Johannes Thumshirn
Reviewed-by: Matthew Wilcox (Oracle)
On Tue, May 30, 2023 at 08:49:21AM -0700, Johannes Thumshirn wrote:
> Just like for bio_add_page() add a no-fail variant for bio_add_folio().
>
> Signed-off-by: Johannes Thumshirn
Reviewed-by: Matthew Wilcox (Oracle)
On Mon, May 29, 2023 at 04:59:40PM -0400, Mikulas Patocka wrote:
> Hi
>
> I improved the dm-flakey device mapper target, so that it can do random
> corruption of read and write bios - I uploaded it here:
> https://people.redhat.com/~mpatocka/testcases/bcachefs/dm-flakey.c
>
> I set up
On Mon, May 22, 2023 at 04:11:33PM +0530, Nitesh Shetty wrote:
> + token = alloc_page(gfp_mask);
Why is PAGE_SIZE the right size for 'token'? That seems quite unlikely.
I could understand it being SECTOR_SIZE or something that's dependent on
the device, but I cannot fathom it being
On Fri, Apr 21, 2023 at 12:58:03PM -0700, Luis Chamberlain wrote:
> - *pl_index = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
> + *pl_index = sector >> (PAGE_SECTORS_SHIFT);
You could/should remove the () around PAGE_SECTORS_SHIFT
(throughout)
On Fri, Apr 21, 2023 at 12:58:05PM -0700, Luis Chamberlain wrote:
> Just use the PAGE_SECTORS generic define. This produces no functional
> changes. While at it use left shift to simplify this even further.
How is FOO << 2 simpler than FOO * 4?
> - return bioset_init(&iomap_ioend_bioset, 4 *
On Wed, Apr 19, 2023 at 04:09:29PM +0200, Johannes Thumshirn wrote:
> Now that all users of bio_add_page check for the return value, mark
> bio_add_page as __must_check.
Should probably add __must_check to bio_add_folio too? If this is
really the way you want to go ... means we also need a
On Mon, Apr 17, 2023 at 03:11:57PM -0400, Mikulas Patocka wrote:
> If we use bio_for_each_folio_all on an empty bio, it will access the first
> bio vector unconditionally (it is uninitialized) and it may crash
> depending on the uninitialized data.
Wait, how do we have an empty bio in the first
On Wed, Mar 29, 2023 at 10:05:48AM -0700, Johannes Thumshirn wrote:
> +++ b/drivers/block/drbd/drbd_bitmap.c
> @@ -1043,9 +1043,11 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx
> *ctx, int page_nr) __must_ho
> bio = bio_alloc_bioset(device->ldev->md_bdev, 1, op, GFP_NOIO,
>
On Thu, Feb 16, 2023 at 12:47:08PM -0500, Mikulas Patocka wrote:
> + while (order > 0) {
> + page = alloc_pages(gfp_mask
> + | __GFP_NOMEMALLOC | __GFP_NORETRY |
> __GFP_NOWARN, order);
... | __GFP_COMP
> page =
On Wed, Jun 08, 2022 at 12:01:12PM -0700, Deven Bowers wrote:
> IPE is a Linux Security Module which takes a complimentary approach to
Hello, IPE. You're looking exceptionally attractive today. Have you
been working out?
(maybe you meant "complementary"? ;-)
Not quite sure whose bug this is. Current Linus head running xfstests
against ext4 (probably not ext4's fault?)
01818 generic/250 run fstests generic/250 at 2022-05-28 23:48:09
01818 EXT4-fs (dm-0): mounted filesystem with ordered data mode. Quota mode:
none.
01818 EXT4-fs (dm-0):
On Sun, Mar 13, 2022 at 07:03:39PM +0800, Qu Wenruo wrote:
> > Specifically for the page cache (which I hope is what you meant by
> > "page error status", because we definitely can't use that for DIO),
>
> Although what I exactly mean is PageError flag.
>
> For DIO the pages are not mapping to
On Sun, Mar 13, 2022 at 06:24:32PM +0800, Qu Wenruo wrote:
> Since if any of the split bio got an error, the whole bio will have
> bi_status set to some error number.
>
> This is completely fine for write bio, but I'm wondering can we get a
> better granularity by introducing per-bvec bi_status
On Thu, Nov 04, 2021 at 11:09:19PM -0400, Theodore Ts'o wrote:
> On Thu, Nov 04, 2021 at 12:04:43PM -0700, Darrick J. Wong wrote:
> > > Note that I've avoided implementing read/write fops for dax devices
> > > partly out of concern for not wanting to figure out shared-mmap vs
> > > write coherence
On Thu, Nov 04, 2021 at 10:43:23AM -0700, Christoph Hellwig wrote:
> Well, the answer for other interfaces (at least at the gold plated
> cost option) is so strong internal CRCs that user visible bits clobbered
> by cosmic rays don't realistically happen. But it is a problem with the
> cheaper
On Thu, Nov 04, 2021 at 01:30:48AM -0700, Christoph Hellwig wrote:
> Well, the whole problem is that we should not have to manage this at
> all, and this is where I blame Intel. There is no good reason to not
> slightly overprovision the nvdimms and just do internal bad page
> remapping like
On Sun, Oct 31, 2021 at 01:19:48PM +, Pavel Begunkov wrote:
> On 10/29/21 23:32, Dave Chinner wrote:
> > Yup, you just described RWF_HIPRI! Seriously, Pavel, did you read
> > past this? I'll quote what I said again, because I've already
> > addressed this argument to point out how silly it
On Fri, Oct 15, 2021 at 03:26:15PM +0200, Christoph Hellwig wrote:
> +static inline sector_t bdev_nr_bytes(struct block_device *bdev)
> +{
> + return i_size_read(bdev->bd_inode);
Uh. loff_t, surely?
On Tue, Jun 29, 2021 at 07:49:24AM +, ruansy.f...@fujitsu.com wrote:
> > But I think this is unnecessary; why not just pass the PFN into
> > mf_dax_kill_procs?
>
> Because the mf_dax_kill_procs() is called in filesystem recovery function,
> which is at the end of the RMAP routine. And the
On Mon, Jun 28, 2021 at 08:02:14AM +0800, Shiyang Ruan wrote:
> +/*
> + * dax_load_pfn - Load pfn of the DAX entry corresponding to a page
> + * @mapping: The file whose entry we want to load
> + * @index: offset where the DAX entry located in
> + *
> + * Return: pfn number of the DAX entry
>
Use kvcalloc or kvmalloc_array instead (depending whether zeroing is
useful).
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/md/dm-snap-persistent.c | 6 +++---
drivers/md/dm-snap.c| 5 +++--
drivers/md/dm-table.c | 30 ++
include/linux
On Sat, Feb 20, 2021 at 06:01:56PM +, David Laight wrote:
> From: SelvaKumar S
> > Sent: 19 February 2021 12:45
> >
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> >
On Fri, Feb 19, 2021 at 06:15:16PM +0530, SelvaKumar S wrote:
> + struct nvme_copy_range *range = NULL;
[...]
> + range = kmalloc_array(nr_range, sizeof(*range),
> + GFP_ATOMIC | __GFP_NOWARN);
[...]
> + req->special_vec.bv_page = virt_to_page(range);
> +
On Tue, Jan 26, 2021 at 03:52:34PM +0100, Christoph Hellwig wrote:
> bio_kmalloc shares almost no logic with the bio_set based fast path
> in bio_alloc_bioset. Split it into an entirely separate implementation.
>
> Signed-off-by: Christoph Hellwig
> ---
> block/bio.c | 167
FYI your email is completely unreadable to those not using html.
I can't tell what you wrote and what Damien wrote.
On Thu, Jan 28, 2021 at 08:33:10AM +, Chaitanya Kulkarni wrote:
> On 1/27/21 11:21 PM, Damien Le Moal wrote:
>
> On 2021/01/28 16:12, Chaitanya Kulkarni wrote:
>
>
>
I just got a Tiger Lake based laptop and installed Debian on it with
dm-crypt. The installer attempts to write zeroes to the encrypted partition
in order to prevent various metadata attacks (using blockdev-wipe [1])
After about eight hours with it not even halfway, I aborted this attempt.
The
On Tue, Nov 24, 2020 at 02:27:08PM +0100, Christoph Hellwig wrote:
> Use file->f_mapping in all remaining places that have a struct file
> available to properly handle the case where inode->i_mapping !=
> file_inode(file)->i_mapping.
>
> Signed-off-by: Christoph Hellwi
On Fri, Nov 20, 2020 at 04:32:53PM +0100, Christoph Hellwig wrote:
> On Fri, Nov 20, 2020 at 12:21:21PM +0100, Jan Kara wrote:
> > > > AFAICT bd_size_lock is pointless after these changes so we can just
> > > > remove
> > > > it?
> > >
> > > I don't think it is, as requiring bd_mutex for size
On Wed, Nov 18, 2020 at 09:47:57AM +0100, Christoph Hellwig wrote:
> @@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
> vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
> {
> struct page *page = vmf->page;
> - struct inode *inode = file_inode(vmf->vma->vm_file);
> +
On Wed, Nov 18, 2020 at 10:23:51AM +0100, Jan Beulich wrote:
> On 18.11.2020 10:09, Greg KH wrote:
> > On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
> >> On 18.11.2020 09:58, Christoph Hellwig wrote:
> >>> On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
> since
On Wed, Sep 23, 2020 at 08:39:02PM -0400, Mike Snitzer wrote:
> On Thu, Jun 25 2020 at 7:31am -0400,
> Matthew Wilcox (Oracle) wrote:
>
> > Similar to memalloc_noio() and memalloc_nofs(), memalloc_nowait()
> > guarantees we will not sleep to reclaim memory. Use it to si
On Mon, Jul 13, 2020 at 03:40:39AM +0800, Austin Chang wrote:
> + # When using dmsetup directly instead of volume manager like lvm2,
> + # the first 4k of the metadata device should be zeroed to indicate
> + # empty metadata.
> + dd if=/dev/zero of=/dev/mapper/metadata bs=4k conv=notrunc
...
On Wed, Jul 01, 2020 at 06:57:47PM +0100, Matthew Wilcox wrote:
> On Wed, Jul 01, 2020 at 12:41:03PM -0400, Mike Snitzer wrote:
> > On Wed, Jul 01 2020 at 5:06am -0400,
> > Christoph Hellwig wrote:
> >
> > > Hi Jens,
> > >
> > > we have a lot of
On Wed, Jul 01, 2020 at 12:41:03PM -0400, Mike Snitzer wrote:
> On Wed, Jul 01 2020 at 5:06am -0400,
> Christoph Hellwig wrote:
>
> > Hi Jens,
> >
> > we have a lot of bdi congestion related code that is left around without
> > any use. This series removes it in preparation of sorting out the
On Tue, Jun 30, 2020 at 08:34:36AM +0200, Michal Hocko wrote:
> On Mon 29-06-20 22:28:30, Matthew Wilcox wrote:
> [...]
> > The documentation is hard to add a new case to, so I rewrote it. What
> > do you think? (Obviously I'll split this out differently for submission;
>
On Mon, Jun 29, 2020 at 04:45:14PM +0300, Mike Rapoport wrote:
>
>
> On June 29, 2020 3:52:31 PM GMT+03:00, Michal Hocko wrote:
> >On Mon 29-06-20 13:18:16, Matthew Wilcox wrote:
> >> On Mon, Jun 29, 2020 at 08:08:51AM +0300, Mike Rapoport wrote:
> >> >
On Mon, Jun 29, 2020 at 08:08:51AM +0300, Mike Rapoport wrote:
> > @@ -886,8 +868,12 @@ static struct dm_buffer
> > *__alloc_buffer_wait_no_callback(struct dm_bufio_client
> > return NULL;
> >
> > if (dm_bufio_cache_size_latch != 1 && !tried_noio_alloc) {
> > +
Similar to memalloc_noio() and memalloc_nofs(), memalloc_nowait()
guarantees we will not sleep to reclaim memory. Use it to simplify
dm-bufio's allocations.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/md/dm-bufio.c| 30 --
include/linux/sched.h| 1
On Thu, Jun 25, 2020 at 10:36:11PM +0200, Michal Hocko wrote:
> On Thu 25-06-20 11:48:32, Darrick J. Wong wrote:
> > On Thu, Jun 25, 2020 at 12:31:16PM +0100, Matthew Wilcox (Oracle) wrote:
> > > I want a memalloc_nowait like we have memalloc_noio and memalloc_nofs
> >
On Thu, Jun 25, 2020 at 11:48:32AM -0700, Darrick J. Wong wrote:
> On Thu, Jun 25, 2020 at 12:31:16PM +0100, Matthew Wilcox (Oracle) wrote:
> > I want a memalloc_nowait like we have memalloc_noio and memalloc_nofs
> > for an upcoming patch series, and Jens also wants it f
everything around frees up some PF
flags and generally makes the world a better place.
Patch series also available from
http://git.infradead.org/users/willy/linux.git/shortlog/refs/heads/memalloc
Matthew Wilcox (Oracle) (6):
mm: Replace PF_MEMALLOC_NOIO with memalloc_noio
mm: Add become_kswapd
On Thu, Jun 25, 2020 at 02:40:17PM +0200, Michal Hocko wrote:
> On Thu 25-06-20 12:31:22, Matthew Wilcox wrote:
> > Similar to memalloc_noio() and memalloc_nofs(), memalloc_nowait()
> > guarantees we will not sleep to reclaim memory. Use it to simplify
> > d
On Thu, Jun 25, 2020 at 02:22:39PM +0200, Michal Hocko wrote:
> On Thu 25-06-20 12:31:17, Matthew Wilcox wrote:
> > We're short on PF_* flags, so make memalloc_noio its own bit where we
> > have plenty of space.
>
> I do not mind moving that outside of the PF_* space. Unles
Instead of using custom macros to set/restore PF_MEMALLOC_NOFS, use
memalloc_nofs_save() like the rest of the kernel.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/xfs/kmem.c | 2 +-
fs/xfs/xfs_aops.c | 4 ++--
fs/xfs/xfs_buf.c | 2 +-
fs/xfs/xfs_linux.h | 6 --
fs/xfs
We're short on PF_* flags, so make memalloc_nofs its own bit where we
have plenty of space.
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/iomap/buffered-io.c | 2 +-
include/linux/sched.h| 2 +-
include/linux/sched/mm.h | 13 ++---
3 files changed, 8 insertions(+), 9 deletions
We're short on PF_* flags, so make memalloc_nocma its own bit where we
have plenty of space.
Signed-off-by: Matthew Wilcox (Oracle)
---
include/linux/sched.h| 2 +-
include/linux/sched/mm.h | 15 +++
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/include/linux
We're short on PF_* flags, so make memalloc_noio its own bit where we
have plenty of space.
Signed-off-by: Matthew Wilcox (Oracle)
---
drivers/block/loop.c | 3 ++-
drivers/md/dm-zoned-metadata.c | 5 ++---
include/linux/sched.h | 2 +-
include/linux/sched/mm.h | 30
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/xfs/libxfs/xfs_btree.c | 14 --
include/linux/sched/mm.h | 26 ++
mm/vmscan.c | 16 +---
3 files changed, 35 insertions(+), 21 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btr
On Thu, May 07, 2020 at 03:50:56PM +0800, Zhen Lei wrote:
> @@ -266,7 +266,7 @@ int swap_writepage(struct page *page, struct
> writeback_control *wbc)
>
> static sector_t swap_page_sector(struct page *page)
> {
> - return (sector_t)__page_file_index(page) << (PAGE_SHIFT - 9);
> +
On Thu, May 07, 2020 at 03:50:57PM +0800, Zhen Lei wrote:
> +++ b/block/blk-settings.c
> @@ -150,7 +150,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q,
> unsigned int max_hw_secto
> unsigned int max_sectors;
>
> if ((max_hw_sectors << 9) < PAGE_SIZE) {
> -
On Thu, May 07, 2020 at 03:50:56PM +0800, Zhen Lei wrote:
> +++ b/mm/page_io.c
> @@ -38,7 +38,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
>
> bio->bi_iter.bi_sector = map_swap_page(page, &bdev);
> bio_set_dev(bio, bdev);
> - bio->bi_iter.bi_sector <<=
On Tue, May 05, 2020 at 07:55:41PM +0800, Zhen Lei wrote:
> +++ b/mm/swapfile.c
> @@ -177,8 +177,8 @@ static int discard_swap(struct swap_info_struct *si)
>
> /* Do not discard the swap header page! */
> se = first_se(si);
> - start_block = (se->start_block + 1) << (PAGE_SHIFT -
On Tue, May 05, 2020 at 06:32:36PM +0100, antlists wrote:
> On 05/05/2020 12:55, Zhen Lei wrote:
> > When I studied the code of mm/swap, I found "1 << (PAGE_SHIFT - 9)" appears
> > many times. So I try to clean up it.
> >
> > 1. Replace "1 << (PAGE_SHIFT - 9)" or similar with SECTORS_PER_PAGE
> >
On Wed, Feb 12, 2020 at 12:07:28PM -0500, Vivek Goyal wrote:
> +int dax_pgoff(sector_t dax_offset, sector_t sector, size_t size, pgoff_t
> *pgoff)
> +{
> + phys_addr_t phys_off = (dax_offset + sector) * 512;
> +
> + if (pgoff)
> + *pgoff = PHYS_PFN(phys_off);
> + if
On Sun, Aug 25, 2019 at 02:39:47PM +0300, Denis Efremov wrote:
> On 25.08.2019 09:11, Matthew Wilcox wrote:
> > On Sat, Aug 24, 2019 at 01:01:02PM +0300, Denis Efremov wrote:
> >> This patch open codes the bitmap_weight() call. The direct
> >> invocation of hwei
On Sat, Aug 24, 2019 at 01:01:02PM +0300, Denis Efremov wrote:
> This patch open codes the bitmap_weight() call. The direct
> invocation of hweight_long() allows to remove the BUG_ON and
> excessive "longs to bits, bits to longs" conversion.
Honestly, that's not the problem with this function.
On Thu, Aug 08, 2019 at 05:50:10AM -0400, Mikulas Patocka wrote:
> A deadlock with this stacktrace was observed.
>
> The obvious problem here is that in the call chain
> xfs_vm_direct_IO->__blockdev_direct_IO->do_blockdev_direct_IO->kmem_cache_alloc
>
> we do a GFP_KERNEL allocation while we
On Tue, Apr 24, 2018 at 08:29:14AM -0400, Mikulas Patocka wrote:
>
>
> On Mon, 23 Apr 2018, Matthew Wilcox wrote:
>
> > On Mon, Apr 23, 2018 at 08:06:16PM -0400, Mikulas Patocka wrote:
> > > Some bugs (such as buffer overflows) are better detected
> > >
On Mon, Apr 23, 2018 at 08:06:16PM -0400, Mikulas Patocka wrote:
> Some bugs (such as buffer overflows) are better detected
> with kmalloc code, so we must test the kmalloc path too.
Well now, this brings up another item for the collective TODO list --
implement redzone checks for vmalloc.
On Thu, Apr 19, 2018 at 12:12:38PM -0400, Mikulas Patocka wrote:
> Unfortunately, some kernel code has bugs - it uses kvmalloc and then
> uses DMA-API on the returned memory or frees it with kfree. Such bugs were
> found in the virtio-net driver, dm-integrity or RHEL7 powerpc-specific
> code.
On Fri, Apr 20, 2018 at 05:21:26PM -0400, Mikulas Patocka wrote:
> On Fri, 20 Apr 2018, Matthew Wilcox wrote:
> > On Fri, Apr 20, 2018 at 04:54:53PM -0400, Mikulas Patocka wrote:
> > > On Fri, 20 Apr 2018, Michal Hocko wrote:
> > > > No way. This is just wrong! First
On Fri, Apr 20, 2018 at 03:08:52PM +0200, Michal Hocko wrote:
> > In order to detect these bugs reliably I submit this patch that changes
> > kvmalloc to always use vmalloc if CONFIG_DEBUG_VM is turned on.
>
> No way. This is just wrong! First of all, you will explode most likely
> on many
On Fri, Apr 20, 2018 at 04:54:53PM -0400, Mikulas Patocka wrote:
> On Fri, 20 Apr 2018, Michal Hocko wrote:
> > No way. This is just wrong! First of all, you will explode most likely
> > on many allocations of small sizes. Second, CONFIG_DEBUG_VM tends to be
> > enabled quite often.
>
> You're an
On Wed, Mar 21, 2018 at 12:25:39PM -0400, Mikulas Patocka wrote:
> Now - we don't want higher-order allocations for power-of-two caches
> (because higher-order allocations just cause memory fragmentation without
> any benefit)
Higher-order allocations don't cause memory fragmentation. Indeed,
On Wed, Mar 21, 2018 at 01:40:31PM -0500, Christopher Lameter wrote:
> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > > > F.e. you could optimize the allocations > 2x PAGE_SIZE so that they do
> > > > not
> > > > allocate powers of two pages. It would be relatively easy to make
> > > >
On Wed, Mar 21, 2018 at 12:39:33PM -0500, Christopher Lameter wrote:
> One other thought: If you want to improve the behavior for large scale
> objects allocated through kmalloc/kmemcache then we would certainly be
> glad to entertain those ideas.
>
> F.e. you could optimize the allcations > 2x
On Tue, Mar 20, 2018 at 01:25:09PM -0400, Mikulas Patocka wrote:
> The reason why we need this is that we are going to merge code that does
> block device deduplication (it was developed separately and sold as a
> commercial product), and the code uses block sizes that are not a power of
> two
Hi Mike, Joe,
Please consider reverting commit 9f9ef0657d53d988dc07b096052b3dd07d6e3c46.
The reasoning is flawed -- just because something is an ioctl does not
mean that you should be using GFP_NOFS. The only reason to use GFP_NOFS
is if the filesystem is holding a lock which means that calling
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 03:43:09PM +0000, Matthew Wilcox wrote:
> > In the case of a network filesystem being used to communicate with
> > a different VM on the same physical machine, there is no backing
> > device, just a network
From: Dan Williams [mailto:dan.j.willi...@intel.com]
> A couple weeks back, in the course of reviewing the memcpy_nocache()
> proposal from Brian, Linus subtly suggested that the pmem specific
> memcpy_to_pmem() routine be moved to be implemented at the driver
> level [1]:
Of course, there may
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sat, Jan 21, 2017 at 04:28:52PM +0000, Matthew Wilcox wrote:
> > Of course, there may not be a backing device either!
>
> s/backing device/block device/ ? If so fully agreed. I like the dax_ops
> scheme, but we should go all th
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 06:39:28PM +0000, Matthew Wilcox wrote:
> > Two guests on the same physical machine (or a guest and a host) have access
> > to the same set of physical addresses. This might be an NV-DIMM, or it
> > m
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 06:19:24PM +0000, Matthew Wilcox wrote:
> > No, I mean a network filesystem like 9p or cifs or nfs. If the memcpy
> > is supposed to be performed by the backing device
>
> struct backing_dev has no rel