Allow device-mapper to route flush operations to the
per-target implementation. In order for the device stacking to work we
need a dax_dev and a pgoff relative to that device. This gives each
layer of the stack the information it needs to look up the operation
pointer for the next level.
Changes since v2 [1]:
1/ Address the concerns from "[NAK] copy_from_iter_ops()" [2]. The
copy_from_iter_ops approach is replaced with a new set of _flushcache
memcpy and user-copy helpers (Al)
2/ Use _flushcache as the suffix for the new cache managing copy helpers
rather than _writethrough
Filesystem-DAX flushes caches whenever it writes to the address returned
through dax_direct_access() and when writing back dirty radix entries.
That flushing is only required in the pmem case, so the dax_flush()
helper skips cache management work when the underlying driver does not
specify a flush operation.
With all calls to this routine re-directed through the pmem driver, we can kill
the pmem api indirection. arch_wb_cache_pmem() is now optionally supplied by
the arch specific asm/pmem.h. Same as before, pmem flushing is only defined
for x86_64, but it is straightforward to add other archs in the future.
Now that all possible providers of the dax_operations copy_from_iter
method are implemented, switch filesystem-dax to call the driver rather
than copy_to_iter_pmem.
Signed-off-by: Dan Williams
---
arch/x86/include/asm/pmem.h | 50
Kill this globally defined wrapper and move to libnvdimm so that we can
ultimately remove include/linux/pmem.h.
Cc:
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Ingo Molnar
Cc: Christoph Hellwig
Cc: "H. Peter Anvin"
Allow device-mapper to route copy_from_iter operations to the
per-target implementation. In order for the device stacking to work we
need a dax_dev and a pgoff relative to that device. This gives each
layer of the stack the information it needs to look up the operation
pointer for the next level.
Filesystem-DAX flushes caches whenever it writes to the address returned
through dax_direct_access() and when writing back dirty radix entries.
That flushing is only required in the pmem case, so add a dax operation
to allow pmem to take this extra action, but skip it for other dax
capable devices.
Some platforms arrange for cpu caches to be flushed on power-fail. On
those platforms there is no requirement that the kernel track and flush
potentially dirty cache lines. Given that we still insert entries into
the radix for locking purposes this patch only disables the cache flush
loop, not the dirty tracking.
Now that all callers of the pmem api have been converted to dax helpers that
call back to the pmem driver, we can remove include/linux/pmem.h.
Cc:
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Ingo Molnar
Cc: Christoph Hellwig
The pmem driver attaches to both persistent and volatile memory ranges
advertised by the ACPI NFIT. When the region is volatile it is redundant
to spend cycles flushing caches at fsync(). Check if the hosting region
is volatile and do not set QUEUE_FLAG_WC if it is.
Cc: Jan Kara
On Fri, Jun 9, 2017 at 1:25 PM, Dan Williams wrote:
> The pmem driver attaches to both persistent and volatile memory ranges
> advertised by the ACPI NFIT. When the region is volatile it is redundant
> to spend cycles flushing caches at fsync(). Check if the hosting
The value of this patch is in the following:
1. In the Storage-Backup environment of HyperCluster, there is
one storage array near the host and one remote storage array,
and the two storage arrays have the same hardware. The same LUN is
written to and read from by both storage arrays. However, usually, the
libmultipath/prioritizers: Prioritizer for device-mapper multipath,
where the corresponding priority values of specific paths are provided
by a latency algorithm. The latency algorithm depends on the
following arguments (latency_interval and io_num).
The principle of the algorithm is
Hi Hannes,
Thanks a lot.
Please find my replies as follows.
Regards,
-Yang
On 2017/6/8 23:37, Hannes Reinecke wrote:
> On 06/06/2017 04:43 AM, Yang Feng wrote:
>> This patch value is in the following:
>> 1. In the Storage-Backup environment of HyperCluster, includes
>> one storage array near to
Hi Martin,
Thanks a lot.
It's a good idea.
The updated patch will be sent later.
Regards,
-Yang
On 2017/6/9 16:05, Martin Wilck wrote:
> Hello Yang,
>
>>> Actually, you're not alone here; several other storage array setups
>>> suffer from the same problem.
>>>
>>> Eg if you have a
tree:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git
for-next
head: 02da2e15e81f3f0b7cd1665a84669c6bb56276bc
commit: 02da2e15e81f3f0b7cd1665a84669c6bb56276bc [12/12] dm zoned:
drive-managed zoned block device target
config: m68k-allmodconfig (attached as .config)
Hello Yang,
> > Actually, you're not alone here; several other storage array setups
> > suffer from the same problem.
> >
> > Eg if you have a site-failover setup with two storage arrays at
> > different locations the problem is more-or-less the same;
> > both arrays potentially will be
Mike,
On 6/9/17 21:03, Mike Snitzer wrote:
> I've switched from .presuspend to .postsuspend, and eliminated the
> .presuspend_undo
>
> I'm not seeing any reason for .presuspend_undo
>
> See dm-cache-target.c for a .postsuspend that is pretty comparable to
> what dm-zoned's is doing.
>
> As for
On Thu, Jun 08 2017 at 11:42am -0400,
Jens Axboe wrote:
> On 06/03/2017 01:37 AM, Christoph Hellwig wrote:
> > This series introduces a new blk_status_t error code type for the block
> > layer so that we can have tighter control and explicit semantics for
> > block layer errors.
On Fri, Jun 09 2017, Mike Snitzer wrote:
> On Thu, Jun 08 2017 at 11:42am -0400,
> Jens Axboe wrote:
>
> > On 06/03/2017 01:37 AM, Christoph Hellwig wrote:
> > > This series introduces a new blk_status_t error code type for the block
> > > layer so that we can have tighter
On Sat, Jun 03 2017, Christoph Hellwig wrote:
> This series introduces a new blk_status_t error code type for the block
> layer so that we can have tighter control and explicit semantics for
> block layer errors.
>
> All but the last three patches are cleanups that lead to the new type.
>
> The
On Mon, May 29, 2017 at 11:22:48AM +0300, Gilad Ben-Yossef wrote:
>
> +static inline int crypto_wait_req(int err, struct crypto_wait *wait)
> +{
> + switch (err) {
> + case -EINPROGRESS:
> + case -EBUSY:
> + wait_for_completion(&wait->completion);
> +
On Fri, Jun 09 2017 at 12:06am -0400,
Damien Le Moal wrote:
> Cast unsigned int to sector_t before shifting. Otherwise, the target
> length overflows and becomes incorrect.
>
> Also fix an incorrect setting of the target suspend
> operation.
>
> Signed-off-by: Damien Le
On 06/08/2017 07:07 PM, Shaohua Li wrote:
Neil recently fixed a bug on this side that is pretty similar to the one
reported; I'll push the patch upstream soon. Could you please try my
for-next tree, and check if you still have the issue?
Thanks,
Shaohua
Kernel with the patch applied runs