Re: [PATCH v3 02/13] dax: require 'struct page' for filesystem dax

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 8:20 PM, Matthew Wilcox  wrote:
> On Fri, Oct 20, 2017 at 03:29:57PM -0700, Dan Williams wrote:
>> Ok, I'd also like to kill DAX support in the brd driver. It's a source
>> of complexity and maintenance burden for zero benefit. It's the only
>> ->direct_access() implementation that sleeps and it's the only
>> implementation where there is a non-linear relationship between
>> sectors and pfns. Having a 1:1 sector to pfn relationship will help
>> with the dma-extent-busy management since we don't need to keep
>> calling into the driver to map pfns back to sectors once we know the
>> pfn[0] sector[0] relationship.
>
> But these are important things that other block devices may / will want.
>
> For example, I think it's entirely sensible to support ->direct_access
> for RAID-0.  Dell are looking at various different options for having
> one pmemX device per DIMM and using RAID to lash them together.
> ->direct_access makes no sense for RAID-5 or RAID-1, but RAID-0 makes
> sense to me.
>
> Last time we tried to take sleeping out, there were grumblings from people
> with network block devices who thought they'd want to bring pages in
> across the network.  I'm a bit less sympathetic to this because I don't
> know anyone actively working on it, but the RAID-0 case is something I
> think we should care about.

True, good point. In fact we already support device-mapper striping
with ->direct_access(). I'd still like to go ahead with the sleeping
removal. When those folks come back and add network direct_access they
can do the hard work of figuring out cases where we need to call
direct_access in atomic contexts.
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm


Re: [PATCH v3 02/13] dax: require 'struct page' for filesystem dax

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 9:29 AM, Christoph Hellwig  wrote:
> On Fri, Oct 20, 2017 at 08:23:02AM -0700, Dan Williams wrote:
>> Yes, however it seems these drivers / platforms have been living with
>> the lack of struct page for a long time. So they either don't use DAX,
>> or they have a constrained use case that never triggers
>> get_user_pages(). If it is the latter then they could introduce a new
>> configuration option that bypasses the pfn_t_devmap() check in
>> bdev_dax_supported() and fix up the get_user_pages() paths to fail.
>> So, I'd like to understand how these drivers have been using DAX
>> support without struct page to see if we need a workaround or we can
>> go ahead and delete this support. If the usage is limited to
>> execute-in-place perhaps we can do a constrained ->direct_access() for
>> just that case.
>
> For axonram I doubt anyone is using it any more - it was only ever for
> the IBM Cell blades, which were produced in rather limited numbers.
> And Cell basically seems to be dead as far as I can tell.
>
> For S/390, Martin might be able to help out with the status of xpram
> in general and its DAX support in particular.

Ok, I'd also like to kill DAX support in the brd driver. It's a source
of complexity and maintenance burden for zero benefit. It's the only
->direct_access() implementation that sleeps and it's the only
implementation where there is a non-linear relationship between
sectors and pfns. Having a 1:1 sector to pfn relationship will help
with the dma-extent-busy management since we don't need to keep
calling into the driver to map pfns back to sectors once we know the
pfn[0] sector[0] relationship.



Re: [PATCH] mmap.2: Add description of MAP_SHARED_VALIDATE and MAP_SYNC

2017-10-20 Thread Ross Zwisler
On Thu, Oct 19, 2017 at 02:58:17PM +0200, Jan Kara wrote:
> Signed-off-by: Jan Kara 
> ---
>  man2/mmap.2 | 30 ++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/man2/mmap.2 b/man2/mmap.2
> index 47c3148653be..598ff0c64f7f 100644
> --- a/man2/mmap.2
> +++ b/man2/mmap.2
> @@ -125,6 +125,21 @@ are carried through to the underlying file.
>  to the underlying file requires the use of
>  .BR msync (2).)
>  .TP
> +.B MAP_SHARED_VALIDATE
> +The same as
> +.B MAP_SHARED
> +except that
> +.B MAP_SHARED
> +mappings ignore unknown flags in
> +.IR flags .
> +In contrast when creating mapping of
> +.B MAP_SHARED_VALIDATE
> +mapping type, the kernel verifies all passed flags are known and fails the
> +mapping with
> +.BR EOPNOTSUPP
> +otherwise. This mapping type is also required to be able to use some mapping
> +flags.
> +.TP

Some small nits:

I think you should maybe include a "(since Linux 4.15)" type note after the
MAP_SHARED_VALIDATE header.  You also need to update the following line:

   Both of these flags are described in POSIX.1-2001 and POSIX.1-2008.

That line used to refer to just MAP_SHARED and MAP_PRIVATE.


Re: [PATCH v3 12/13] dax: handle truncate of dma-busy pages

2017-10-20 Thread Brian Foster
On Fri, Oct 20, 2017 at 10:27:22AM -0700, Dan Williams wrote:
> On Fri, Oct 20, 2017 at 9:32 AM, Christoph Hellwig  wrote:
> > On Fri, Oct 20, 2017 at 08:42:00AM -0700, Dan Williams wrote:
> >> I agree, but it needs quite a bit more thought and restructuring of
> >> the truncate path. I also wonder how we reclaim those stranded
> >> filesystem blocks, but a first approximation is wait for the
> >> administrator to delete them or auto-delete them at the next mount.
> >> XFS seems well prepared to reflink-swap these DMA blocks around, but
> >> I'm not sure about EXT4.
> >
> > reflink still is an optional and experimental feature in XFS.  That
> > being said we should not need to swap block pointers around on disk.
> > We just need to prevent the block allocator from reusing the blocks
> > for new allocations, and we have code for that, both for transactions
> > that haven't been committed to disk yet, and for deleted blocks
> > undergoing discard operations.
> >
> > But as mentioned in my second mail from this morning I'm not even
> > sure we need that.  For short-term elevated page counts like normal
> > get_user_pages users I think we can just wait for the page count
> > to reach zero, while for abuses of get_user_pages for long term
> > pinning memory (not sure if anyone but rdma is doing that) we'll need
> > something like FL_LAYOUT leases to release the mapping.
> 
> I'll take a look at hooking this up through a page-idle callback. Can
> I get some breadcrumbs to grep for from XFS folks on how to set/clear
> the busy state of extents?

See fs/xfs/xfs_extent_busy.c.

Brian


Re: [PATCH v3 12/13] dax: handle truncate of dma-busy pages

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 9:32 AM, Christoph Hellwig  wrote:
> On Fri, Oct 20, 2017 at 08:42:00AM -0700, Dan Williams wrote:
>> I agree, but it needs quite a bit more thought and restructuring of
>> the truncate path. I also wonder how we reclaim those stranded
>> filesystem blocks, but a first approximation is wait for the
>> administrator to delete them or auto-delete them at the next mount.
>> XFS seems well prepared to reflink-swap these DMA blocks around, but
>> I'm not sure about EXT4.
>
> reflink still is an optional and experimental feature in XFS.  That
> being said we should not need to swap block pointers around on disk.
> We just need to prevent the block allocator from reusing the blocks
> for new allocations, and we have code for that, both for transactions
> that haven't been committed to disk yet, and for deleted blocks
> undergoing discard operations.
>
> But as mentioned in my second mail from this morning I'm not even
> sure we need that.  For short-term elevated page counts like normal
> get_user_pages users I think we can just wait for the page count
> to reach zero, while for abuses of get_user_pages for long term
> pinning memory (not sure if anyone but rdma is doing that) we'll need
> something like FL_LAYOUT leases to release the mapping.

I'll take a look at hooking this up through a page-idle callback. Can
I get some breadcrumbs to grep for from XFS folks on how to set/clear
the busy state of extents?


Re: [PATCH v3 02/13] dax: require 'struct page' for filesystem dax

2017-10-20 Thread Christoph Hellwig
On Fri, Oct 20, 2017 at 08:23:02AM -0700, Dan Williams wrote:
> Yes, however it seems these drivers / platforms have been living with
> the lack of struct page for a long time. So they either don't use DAX,
> or they have a constrained use case that never triggers
> get_user_pages(). If it is the latter then they could introduce a new
> configuration option that bypasses the pfn_t_devmap() check in
> bdev_dax_supported() and fix up the get_user_pages() paths to fail.
> So, I'd like to understand how these drivers have been using DAX
> support without struct page to see if we need a workaround or we can
> go ahead and delete this support. If the usage is limited to
> execute-in-place perhaps we can do a constrained ->direct_access() for
> just that case.

For axonram I doubt anyone is using it any more - it was only ever for
the IBM Cell blades, which were produced in rather limited numbers.
And Cell basically seems to be dead as far as I can tell.

For S/390, Martin might be able to help out with the status of xpram
in general and its DAX support in particular.


Re: [Qemu-devel] [RFC 2/2] KVM: add virtio-pmem driver

2017-10-20 Thread Christoph Hellwig
On Fri, Oct 20, 2017 at 08:05:09AM -0700, Dan Williams wrote:
> Right, that's the same recommendation I gave.
> 
> https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg08404.html
> 
> ...so maybe I'm misunderstanding your concern? It sounds like we're on
> the same page.

Yes, the above is exactly what I think we should do.  And in many
ways virtio seems overkill - a hypercall or doorbell page would do,
as the queueing infrastructure in virtio shouldn't really be needed.


[ndctl patch] btt_check_bitmap: initialize rc

2017-10-20 Thread Jeff Moyer
It may be possible that rc is never set before returning from
the function.  nfree would have to be zero, and the bitmap
would have to be full.  Fix it.

Signed-off-by: Jeff Moyer 

diff --git a/ndctl/check.c b/ndctl/check.c
index 915bb9d..dafd6a8 100644
--- a/ndctl/check.c
+++ b/ndctl/check.c
@@ -508,7 +508,7 @@ static int btt_check_bitmap(struct arena_info *a)
 {
unsigned long *bm;
u32 i, btt_mapping;
-   int rc;
+   int rc = BTT_BITMAP_ERROR;
 
bm = bitmap_alloc(a->internal_nlba);
if (bm == NULL)
@@ -521,7 +521,6 @@ static int btt_check_bitmap(struct arena_info *a)
info(a->bttc,
"arena %d: internal block %#x is referenced by 
two map entries\n",
a->num, btt_mapping);
-   rc = BTT_BITMAP_ERROR;
goto out;
}
bitmap_set(bm, btt_mapping, 1);


Re: Enabling peer to peer device transactions for PCIe devices

2017-10-20 Thread Logan Gunthorpe

Hi Ludwig,

P2P transactions are still *very* experimental at the moment and take a 
lot of expertise to get working in a general setup. It will definitely 
require changes to the kernel, including the drivers of all the devices 
you are trying to make talk to each other. If you're up for it you can 
take a look at:


https://github.com/sbates130272/linux-p2pmem/

Which has our current rough work making NVMe fabrics use p2p transactions.

Logan

On 10/20/2017 6:36 AM, Ludwig Petrosyan wrote:

Dear Linux kernel group

my name is Ludwig Petrosyan; I am working at DESY (Germany).

we are responsible for the control systems of all accelerators at DESY.

For 7-8 years now we have used MTCA.4 systems, with PCIe as the
central bus.


I am mostly responsible for the Linux drivers of the AMC Cards (PCIe 
endpoints).


The idea is to start using peer-to-peer transactions between PCIe
endpoints (DMA and/or ordinary read/write).


Could you please advise me where to start? Is there some documentation
on how to do it?



with best regards


Ludwig


On 11/21/2016 09:36 PM, Deucher, Alexander wrote:
This is certainly not the first time this has been brought up, but I'd 
like to try and get some consensus on the best way to move this 
forward.  Allowing devices to talk directly improves performance and 
reduces latency by avoiding the use of staging buffers in system 
memory.  Also in cases where both devices are behind a switch, it 
avoids the CPU entirely.  Most current APIs (DirectGMA, PeerDirect, 
CUDA, HSA) that deal with this are pointer based.  Ideally we'd be 
able to take a CPU virtual address and be able to get to a physical 
address taking into account IOMMUs, etc.  Having struct pages for the 
memory would allow it to work more generally and wouldn't require as 
much explicit support in drivers that wanted to use it.

Some use cases:
1. Storage devices streaming directly to GPU device memory
2. GPU device memory to GPU device memory streaming
3. DVB/V4L/SDI devices streaming directly to GPU device memory
4. DVB/V4L/SDI devices streaming directly to storage devices
Here is a relatively simple example of how this could work for 
testing.  This is obviously not a complete solution.
- Device memory will be registered with the Linux memory sub-system by
creating corresponding struct page structures for device memory
- get_user_pages_fast() will return corresponding struct pages when a
CPU address points to the device memory

- put_page() will deal with struct pages for device memory
Previously proposed solutions and related proposals:
1. P2P DMA
DMA-API/PCI map_peer_resource support for peer-to-peer 
(http://www.spinics.net/lists/linux-pci/msg44560.html)

Pros: Low impact, already largely reviewed.
Cons: requires explicit support in all drivers that want to support 
it, doesn't handle S/G in device memory.

2. ZONE_DEVICE IO
Direct I/O and DMA for persistent memory 
(https://lwn.net/Articles/672457/)
Add support for ZONE_DEVICE IO memory with struct pages. 
(https://patchwork.kernel.org/patch/8583221/)

Pro: Doesn't waste system memory for ZONE metadata
Cons: CPU access to ZONE metadata slow, may be lost, corrupted on 
device reset.

3. DMA-BUF
RDMA subsystem DMA-BUF support 
(http://www.spinics.net/lists/linux-rdma/msg38748.html)

Pros: uses existing dma-buf interface
Cons: dma-buf is handle based, requires explicit dma-buf support in 
drivers.


4. iopmem
iopmem : A block device for PCIe memory 
(https://lwn.net/Articles/703895/)

5. HMM
Heterogeneous Memory Management 
(http://lkml.iu.edu/hypermail/linux/kernel/1611.2/02473.html)


6. Some new mmap-like interface that takes a userptr and a length and 
returns a dma-buf and offset?

Alex

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html




[ndctl patch] dax_io: fix unknown parameter handling

2017-10-20 Thread Jeff Moyer
The for loop will not loop more than once due to the return statement.
What's more, the following code, which prints out the usage, also won't
run.  Let's change this to look more like other commands.  Print out
invalid options and then print out the usage.  usage_with_options will
exit, so no need for a return there.

Signed-off-by: Jeff Moyer 

diff --git a/daxctl/io.c b/daxctl/io.c
index 27e7463..2f8cb4a 100644
--- a/daxctl/io.c
+++ b/daxctl/io.c
@@ -526,15 +526,11 @@ int cmd_io(int argc, const char **argv, void *daxctl_ctx)
struct ndctl_ctx *ndctl_ctx;
 
argc = parse_options(argc, argv, options, u, 0);
-   for (i = 0; i < argc; i++) {
+   for (i = 0; i < argc; i++)
fail("Unknown parameter \"%s\"\n", argv[i]);
-   return -EINVAL;
-   }
 
-   if (argc) {
+   if (argc)
usage_with_options(u, options);
-   return 0;
-   }
 
if (!io.dev[0].parm_path && !io.dev[1].parm_path) {
usage_with_options(u, options);


Re: [PATCH v3 12/13] dax: handle truncate of dma-busy pages

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 6:05 AM, Jeff Layton  wrote:
> On Thu, 2017-10-19 at 19:40 -0700, Dan Williams wrote:
>> get_user_pages() pins file backed memory pages for access by dma
>> devices. However, it only pins the memory pages not the page-to-file
>> offset association. If a file is truncated the pages are mapped out of
>> the file and dma may continue indefinitely into a page that is owned by
>> a device driver. This breaks coherency of the file vs dma, but the
>> assumption is that if userspace wants the file-space truncated it does
>> not matter what data is inbound from the device, it is not relevant
>> anymore.
>>
>> The assumptions of the truncate-page-cache model are broken by DAX where
>> the target DMA page *is* the filesystem block. Leaving the page pinned
>> for DMA, but truncating the file block out of the file, means that the
>> filesystem is free to reallocate a block under active DMA to another
>> file!
>>
>> Here are some possible options for fixing this situation ('truncate' and
>> 'fallocate(punch hole)' are synonymous below):
>>
>> 1/ Fail truncate while any file blocks might be under dma
>>
>> 2/ Block (sleep-wait) truncate while any file blocks might be under
>>dma
>>
>> 3/ Remap file blocks to a "lost+found"-like file-inode where
>>dma can continue and we might see what inbound data from DMA was
>>mapped out of the original file. Blocks in this file could be
>>freed back to the filesystem when dma eventually ends.
>>
>> 4/ Disable dax until option 3 or another long term solution has been
>>implemented. However, filesystem-dax is still marked experimental
>>for concerns like this.
>>
>> Option 1 will throw failures where userspace has never expected them
>> before, option 2 might hang the truncating process indefinitely, and
>> option 3 requires per filesystem enabling to remap blocks from one inode
>> to another.  Option 2 is implemented in this patch for the DAX path with
>> the expectation that non-transient users of get_user_pages() (RDMA) are
>> disallowed from setting up dax mappings and that the potential delay
>> introduced to the truncate path is acceptable compared to the response
>> time of the page cache case. This can only be seen as a stop-gap until
>> we can solve the problem of safely sequestering unallocated filesystem
>> blocks under active dma.
>>
>
> FWIW, I like #3 a lot more than #2 here. I get that it's quite a bit
> more work though, so no objection to this as a stop-gap fix.

I agree, but it needs quite a bit more thought and restructuring of
the truncate path. I also wonder how we reclaim those stranded
filesystem blocks, but a first approximation is wait for the
administrator to delete them or auto-delete them at the next mount.
XFS seems well prepared to reflink-swap these DMA blocks around, but
I'm not sure about EXT4.

>
>
>> The solution introduces a new FL_ALLOCATED lease to pin the allocated
>> blocks in a dax file while dma might be accessing them. It behaves
>> identically to an FL_LAYOUT lease save for the fact that it is
>> immediately scheduled to be reaped, and that the only path that waits for
>> its removal is the truncate path. We can not reuse FL_LAYOUT directly
>> since that would deadlock in the case where userspace did a direct-I/O
>> operation with a target buffer backed by an mmap range of the same file.
>>
>> Credit / inspiration for option 3 goes to Dave Hansen, who proposed
>> something similar as an alternative way to solve the problem that
>> MAP_DIRECT was trying to solve.
>>
>> Cc: Jan Kara 
>> Cc: Jeff Moyer 
>> Cc: Dave Chinner 
>> Cc: Matthew Wilcox 
>> Cc: Alexander Viro 
>> Cc: "Darrick J. Wong" 
>> Cc: Ross Zwisler 
>> Cc: Jeff Layton 
>> Cc: "J. Bruce Fields" 
>> Cc: Dave Hansen 
>> Reported-by: Christoph Hellwig 
>> Signed-off-by: Dan Williams 
>> ---
>>  fs/Kconfig  |1
>>  fs/dax.c|  188 
>> +++
>>  fs/locks.c  |   17 -
>>  include/linux/dax.h |   23 ++
>>  include/linux/fs.h  |   22 +-
>>  mm/gup.c|   27 ++-
>>  6 files changed, 268 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/Kconfig b/fs/Kconfig
>> index 7aee6d699fd6..a7b31a96a753 100644
>> --- a/fs/Kconfig
>> +++ b/fs/Kconfig
>> @@ -37,6 +37,7 @@ source "fs/f2fs/Kconfig"
>>  config FS_DAX
>>   bool "Direct Access (DAX) support"
>>   depends on MMU
>> + depends on FILE_LOCKING
>>   depends on !(ARM || MIPS || SPARC)
>>   select FS_IOMAP
>>   select DAX
>> diff --git a/fs/dax.c b/fs/dax.c
>> index b03f547b36e7..e0a3958fc5f2 100644
>> --- a/fs/dax.c
>> +++ b/fs/dax.c
>> @@ -22,6 

Re: [PATCH v3 02/13] dax: require 'struct page' for filesystem dax

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 12:57 AM, Christoph Hellwig  wrote:
>> --- a/arch/powerpc/sysdev/axonram.c
>> +++ b/arch/powerpc/sysdev/axonram.c
>> @@ -172,6 +172,7 @@ static size_t axon_ram_copy_from_iter(struct dax_device 
>> *dax_dev, pgoff_t pgoff,
>>
>>  static const struct dax_operations axon_ram_dax_ops = {
>>   .direct_access = axon_ram_dax_direct_access,
>> +
>>   .copy_from_iter = axon_ram_copy_from_iter,
>
> Unrelated whitespace change.  That being said - I don't think axonram has
> devmap support in any form, so this basically becomes dead code, doesn't
> it?
>
>> diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
>> index 7abb240847c0..e7e5db07e339 100644
>> --- a/drivers/s390/block/dcssblk.c
>> +++ b/drivers/s390/block/dcssblk.c
>> @@ -52,6 +52,7 @@ static size_t dcssblk_dax_copy_from_iter(struct dax_device 
>> *dax_dev,
>>
>>  static const struct dax_operations dcssblk_dax_ops = {
>>   .direct_access = dcssblk_dax_direct_access,
>> +
>>   .copy_from_iter = dcssblk_dax_copy_from_iter,
>
> Same comments apply here.

Yes, however it seems these drivers / platforms have been living with
the lack of struct page for a long time. So they either don't use DAX,
or they have a constrained use case that never triggers
get_user_pages(). If it is the latter then they could introduce a new
configuration option that bypasses the pfn_t_devmap() check in
bdev_dax_supported() and fix up the get_user_pages() paths to fail.
So, I'd like to understand how these drivers have been using DAX
support without struct page to see if we need a workaround or we can
go ahead and delete this support. If the usage is limited to
execute-in-place perhaps we can do a constrained ->direct_access() for
just that case.


Re: [Qemu-devel] [RFC 2/2] KVM: add virtio-pmem driver

2017-10-20 Thread Dan Williams
On Fri, Oct 20, 2017 at 1:00 AM, Christoph Hellwig  wrote:
> On Thu, Oct 19, 2017 at 11:21:26AM -0700, Dan Williams wrote:
>> The difference is that nvdimm_flush() is not mandatory, and that the
>> platform will automatically perform the same flush at power-fail.
>> Applications should be able to assume that if they are using MAP_SYNC
>> that no other coordination with the kernel or the hypervisor is
>> necessary.
>>
>> Advertising this as a generic Persistent Memory range to the guest
>> means that the guest could theoretically use it with device-dax where
>> there is no driver or filesystem sync interface. The hypervisor will
>> be waiting for flush notifications and the guest will just issue cache
>> flushes and sfence instructions. So, as far as I can see we need to
>> differentiate this virtio-model from standard "Persistent Memory" to
>> the guest and remove the possibility of guests/applications making the
>> wrong assumption.
>
> So add a flag that it is not.  We already have the nd_volatile type,
> that is special.  For now only in Linux, but I think adding this type
> to the spec eventually would be very useful for efficiently exposing
> directly mappable device to VM guests.

Right, that's the same recommendation I gave.

https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg08404.html

...so maybe I'm misunderstanding your concern? It sounds like we're on
the same page.


Re: [PATCH v3 12/13] dax: handle truncate of dma-busy pages

2017-10-20 Thread Jeff Layton
On Thu, 2017-10-19 at 19:40 -0700, Dan Williams wrote:
> get_user_pages() pins file backed memory pages for access by dma
> devices. However, it only pins the memory pages not the page-to-file
> offset association. If a file is truncated the pages are mapped out of
> the file and dma may continue indefinitely into a page that is owned by
> a device driver. This breaks coherency of the file vs dma, but the
> assumption is that if userspace wants the file-space truncated it does
> not matter what data is inbound from the device, it is not relevant
> anymore.
> 
> The assumptions of the truncate-page-cache model are broken by DAX where
> the target DMA page *is* the filesystem block. Leaving the page pinned
> for DMA, but truncating the file block out of the file, means that the
> filesystem is free to reallocate a block under active DMA to another
> file!
> 
> Here are some possible options for fixing this situation ('truncate' and
> 'fallocate(punch hole)' are synonymous below):
> 
> 1/ Fail truncate while any file blocks might be under dma
> 
> 2/ Block (sleep-wait) truncate while any file blocks might be under
>dma
> 
> 3/ Remap file blocks to a "lost+found"-like file-inode where
>dma can continue and we might see what inbound data from DMA was
>mapped out of the original file. Blocks in this file could be
>freed back to the filesystem when dma eventually ends.
> 
> 4/ Disable dax until option 3 or another long term solution has been
>implemented. However, filesystem-dax is still marked experimental
>for concerns like this.
> 
> Option 1 will throw failures where userspace has never expected them
> before, option 2 might hang the truncating process indefinitely, and
> option 3 requires per filesystem enabling to remap blocks from one inode
> to another.  Option 2 is implemented in this patch for the DAX path with
> the expectation that non-transient users of get_user_pages() (RDMA) are
> disallowed from setting up dax mappings and that the potential delay
> introduced to the truncate path is acceptable compared to the response
> time of the page cache case. This can only be seen as a stop-gap until
> we can solve the problem of safely sequestering unallocated filesystem
> blocks under active dma.
> 

FWIW, I like #3 a lot more than #2 here. I get that it's quite a bit
more work though, so no objection to this as a stop-gap fix.


> The solution introduces a new FL_ALLOCATED lease to pin the allocated
> blocks in a dax file while dma might be accessing them. It behaves
> identically to an FL_LAYOUT lease save for the fact that it is
> immediately scheduled to be reaped, and that the only path that waits for
> its removal is the truncate path. We can not reuse FL_LAYOUT directly
> since that would deadlock in the case where userspace did a direct-I/O
> operation with a target buffer backed by an mmap range of the same file.
> 
> Credit / inspiration for option 3 goes to Dave Hansen, who proposed
> something similar as an alternative way to solve the problem that
> MAP_DIRECT was trying to solve.
> 
> Cc: Jan Kara 
> Cc: Jeff Moyer 
> Cc: Dave Chinner 
> Cc: Matthew Wilcox 
> Cc: Alexander Viro 
> Cc: "Darrick J. Wong" 
> Cc: Ross Zwisler 
> Cc: Jeff Layton 
> Cc: "J. Bruce Fields" 
> Cc: Dave Hansen 
> Reported-by: Christoph Hellwig 
> Signed-off-by: Dan Williams 
> ---
>  fs/Kconfig  |1 
>  fs/dax.c|  188 
> +++
>  fs/locks.c  |   17 -
>  include/linux/dax.h |   23 ++
>  include/linux/fs.h  |   22 +-
>  mm/gup.c|   27 ++-
>  6 files changed, 268 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 7aee6d699fd6..a7b31a96a753 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -37,6 +37,7 @@ source "fs/f2fs/Kconfig"
>  config FS_DAX
>   bool "Direct Access (DAX) support"
>   depends on MMU
> + depends on FILE_LOCKING
>   depends on !(ARM || MIPS || SPARC)
>   select FS_IOMAP
>   select DAX
> diff --git a/fs/dax.c b/fs/dax.c
> index b03f547b36e7..e0a3958fc5f2 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -22,6 +22,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -1481,3 +1482,190 @@ int dax_iomap_fault(struct vm_fault *vmf, enum 
> page_entry_size pe_size,
>   }
>  }
>  EXPORT_SYMBOL_GPL(dax_iomap_fault);
> +
> +enum dax_lease_flags {
> + DAX_LEASE_PAGES,
> + DAX_LEASE_BREAK,
> +};
> +
> +struct dax_lease {
> + struct page **dl_pages;
> + unsigned long dl_nr_pages;
> + unsigned long dl_state;
> + struct file *dl_file;
> + atomic_t dl_count;
> 

Re: [PATCH v3 11/13] fs: use smp_load_acquire in break_{layout,lease}

2017-10-20 Thread Jeffrey Layton
On Thu, 2017-10-19 at 19:39 -0700, Dan Williams wrote:
> Commit 128a37852234 "fs: fix data races on inode->i_flctx" converted
> checks of inode->i_flctx to use smp_load_acquire(), but it did not
> convert break_layout(). smp_load_acquire() includes a READ_ONCE(). There
> should be no functional difference since __break_lease repeats the
> sequence, but this is a clean up to unify all ->i_flctx lookups on a
> common pattern.
> 
> Cc: Christoph Hellwig 
> Cc: Alexander Viro 
> Cc: Ross Zwisler 
> Cc: Jeff Layton 
> Cc: "J. Bruce Fields" 
> Signed-off-by: Dan Williams 
> ---
>  include/linux/fs.h |   10 ++
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 13dab191a23e..eace2c5396a7 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -2281,8 +2281,9 @@ static inline int break_lease(struct inode *inode, 
> unsigned int mode)
>* could end up racing with tasks trying to set a new lease on this
>* file.
>*/
> - smp_mb();
> - if (inode->i_flctx && !list_empty_careful(&inode->i_flctx->flc_lease))
> + struct file_lock_context *ctx = smp_load_acquire(&inode->i_flctx);
> +
> + if (ctx && !list_empty_careful(&ctx->flc_lease))
>   return __break_lease(inode, mode, FL_LEASE);
>   return 0;
>  }
> @@ -2325,8 +2326,9 @@ static inline int break_deleg_wait(struct inode 
> **delegated_inode)
>  
>  static inline int break_layout(struct inode *inode, bool wait)
>  {
> - smp_mb();
> - if (inode->i_flctx && !list_empty_careful(&inode->i_flctx->flc_lease))
> + struct file_lock_context *ctx = smp_load_acquire(&inode->i_flctx);
> +
> + if (ctx && !list_empty_careful(&ctx->flc_lease))
>   return __break_lease(inode,
>   wait ? O_WRONLY : O_WRONLY | O_NONBLOCK,
>   FL_LAYOUT);
> 

Nice catch. This can go in independently of the rest of the patches in
the series, I think. I'll assume Andrew is picking this up since he's in
the "To:", but let me know if you need me to get it.

Reviewed-by: Jeff Layton 


Re: Enabling peer to peer device transactions for PCIe devices

2017-10-20 Thread Ludwig Petrosyan

Dear Linux kernel group

my name is Ludwig Petrosyan; I am working at DESY (Germany).

we are responsible for the control systems of all accelerators at DESY.

For 7-8 years now we have used MTCA.4 systems, with PCIe as the
central bus.


I am mostly responsible for the Linux drivers of the AMC Cards (PCIe 
endpoints).


The idea is start to use peer to peer transaction for PCIe endpoint (DMA 
and/or usual Read/Write)


Could You please advise me where to start, is there some Documentation 
how to do it.



with best regards


Ludwig


On 11/21/2016 09:36 PM, Deucher, Alexander wrote:

This is certainly not the first time this has been brought up, but I'd like to 
try and get some consensus on the best way to move this forward.  Allowing 
devices to talk directly improves performance and reduces latency by avoiding 
the use of staging buffers in system memory.  Also in cases where both devices 
are behind a switch, it avoids the CPU entirely.  Most current APIs (DirectGMA, 
PeerDirect, CUDA, HSA) that deal with this are pointer based.  Ideally we'd be 
able to take a CPU virtual address and be able to get to a physical address 
taking into account IOMMUs, etc.  Having struct pages for the memory would 
allow it to work more generally and wouldn't require as much explicit support 
in drivers that wanted to use it.
  
Some use cases:

1. Storage devices streaming directly to GPU device memory
2. GPU device memory to GPU device memory streaming
3. DVB/V4L/SDI devices streaming directly to GPU device memory
4. DVB/V4L/SDI devices streaming directly to storage devices
  
Here is a relatively simple example of how this could work for testing.  This is obviously not a complete solution.

- Device memory will be registered with the Linux memory subsystem by creating 
corresponding struct page structures for device memory
- get_user_pages_fast() will return the corresponding struct pages when a CPU 
address points to the device memory
- put_page() will deal with struct pages for device memory
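The intent of those three bullets — device memory sitting behind ordinary struct pages so the existing gup/put_page flow needs no special casing — can be sketched with a toy userspace model (every name here is an illustrative stand-in, not kernel API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: give device memory struct pages so the normal
 * get_user_pages()/put_page() flow works on it unchanged. */
struct page {
	int refcount;
	bool is_device;		/* backed by device memory, not RAM */
};

#define NPAGES 4
static struct page device_pages[NPAGES];

/* "Register" device memory with the mm by creating its struct pages. */
void register_device_memory(void)
{
	for (size_t i = 0; i < NPAGES; i++)
		device_pages[i] = (struct page){ .refcount = 0, .is_device = true };
}

/* Stand-in for get_user_pages_fast(): pin the pages backing a range
 * and hand them out; device pages take the same path as RAM pages. */
size_t gup_fast_model(struct page **out, size_t n)
{
	if (n > NPAGES)
		n = NPAGES;
	for (size_t i = 0; i < n; i++) {
		device_pages[i].refcount++;	/* pin */
		out[i] = &device_pages[i];
	}
	return n;
}

/* Stand-in for put_page(): dropping the pin needs no device-specific
 * knowledge, which is the point of the struct-page approach. */
void put_page_model(struct page *p)
{
	p->refcount--;
}
```

The caller never has to know whether a pinned page is RAM or device memory, which is why drivers would not need explicit support.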
  
Previously proposed solutions and related proposals:

1. P2P DMA
DMA-API/PCI map_peer_resource support for peer-to-peer 
(http://www.spinics.net/lists/linux-pci/msg44560.html)
Pros: Low impact, already largely reviewed.
Cons: requires explicit support in all drivers that want to support it, doesn't 
handle S/G in device memory.
  
2. ZONE_DEVICE IO

Direct I/O and DMA for persistent memory (https://lwn.net/Articles/672457/)
Add support for ZONE_DEVICE IO memory with struct pages. 
(https://patchwork.kernel.org/patch/8583221/)
Pro: Doesn't waste system memory for ZONE metadata
Cons: CPU access to ZONE metadata slow, may be lost, corrupted on device reset.
  
3. DMA-BUF

RDMA subsystem DMA-BUF support 
(http://www.spinics.net/lists/linux-rdma/msg38748.html)
Pros: uses existing dma-buf interface
Cons: dma-buf is handle based, requires explicit dma-buf support in drivers.

4. iopmem
iopmem : A block device for PCIe memory (https://lwn.net/Articles/703895/)
  
5. HMM

Heterogeneous Memory Management 
(http://lkml.iu.edu/hypermail/linux/kernel/1611.2/02473.html)

6. Some new mmap-like interface that takes a userptr and a length and returns a 
dma-buf and offset?
  
Alex


--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html




Re: [PATCH v3 00/13] dax: fix dma vs truncate and remove 'page-less' support

2017-10-20 Thread Christoph Hellwig
On Fri, Oct 20, 2017 at 09:47:50AM +0200, Christoph Hellwig wrote:
> I'd like to brainstorm how we can do something better.
> 
> How about:
> 
> If we hit a page with an elevated refcount in truncate / hole punch
> etc for a DAX file system we do not free the blocks in the file system,
> but add it to the extent busy list.  We mark the page as delayed
> free (e.g. page flag?) so that when it finally hits refcount zero we
> call back into the file system to remove it from the busy list.

Brainstorming some more:

Given that on a DAX file there shouldn't be any long-term page
references after we unmap it from the page table and don't allow
get_user_pages calls why not wait for the references for all
DAX pages to go away first?  E.g. if we find a DAX page in
truncate_inode_pages_range that has an elevated refcount we set
a new flag to prevent new references from showing up, and then
simply wait for it to go away.  Instead of a busy wait we can
do this through a few hashed waitqueues in dev_pagemap.  And in
fact put_zone_device_page already gets called when putting the
last page so we can handle the wakeup from there.

In fact, if we can't find a page flag for the stop-new-callers
scheme we could probably come up with a way to do that through
dev_pagemap somehow, but I'm not sure how efficient that would
be.
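The scheme brainstormed here — refuse new references once truncate decides to wait, and fire the wakeup from the final put — can be modelled in a few lines of single-threaded C (the flag and the hashed-waitqueue wake are stand-ins for proposed mechanisms, not existing kernel fields):

```c
#include <stdbool.h>

/* Single-threaded model of the brainstormed scheme: once truncate
 * decides to wait, new page references are refused, and the final
 * put fires the wakeup.  All names are illustrative stand-ins. */
struct dax_page {
	int refcount;
	bool stop_new_refs;	/* the proposed new page flag */
	bool wakeup_fired;	/* stands in for the hashed waitqueue wake */
};

/* gup path: refuse new references while truncate is waiting */
bool page_try_get(struct dax_page *p)
{
	if (p->stop_new_refs)
		return false;
	p->refcount++;
	return true;
}

/* put_zone_device_page() analogue: the last put wakes the waiter */
void page_put(struct dax_page *p)
{
	if (--p->refcount == 0 && p->stop_new_refs)
		p->wakeup_fired = true;
}

/* truncate side: block new references, then wait for old ones */
void truncate_begin_wait(struct dax_page *p)
{
	p->stop_new_refs = true;
	if (p->refcount == 0)
		p->wakeup_fired = true;	/* nothing to wait for */
}
```

The real version would sleep on a hashed waitqueue between truncate_begin_wait() and the wakeup; the model only tracks whether the wake would have fired.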


Re: [Qemu-devel] [RFC 2/2] KVM: add virtio-pmem driver

2017-10-20 Thread Christoph Hellwig
On Thu, Oct 19, 2017 at 11:21:26AM -0700, Dan Williams wrote:
> The difference is that nvdimm_flush() is not mandatory, and that the
> platform will automatically perform the same flush at power-fail.
> Applications should be able to assume that if they are using MAP_SYNC
> that no other coordination with the kernel or the hypervisor is
> necessary.
> 
> Advertising this as a generic Persistent Memory range to the guest
> means that the guest could theoretically use it with device-dax where
> there is no driver or filesystem sync interface. The hypervisor will
> be waiting for flush notifications and the guest will just issue cache
> flushes and sfence instructions. So, as far as I can see we need to
> differentiate this virtio-model from standard "Persistent Memory" to
> the guest and remove the possibility of guests/applications making the
> wrong assumption.

So add a flag saying that it is not.  We already have the nd_volatile
type, which is special.  For now only in Linux, but I think adding this
type to the spec eventually would be very useful for efficiently exposing
directly mappable devices to VM guests.


Re: [PATCH v3 02/13] dax: require 'struct page' for filesystem dax

2017-10-20 Thread Christoph Hellwig
> --- a/arch/powerpc/sysdev/axonram.c
> +++ b/arch/powerpc/sysdev/axonram.c
> @@ -172,6 +172,7 @@ static size_t axon_ram_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
>  
>  static const struct dax_operations axon_ram_dax_ops = {
>   .direct_access = axon_ram_dax_direct_access,
> +
>   .copy_from_iter = axon_ram_copy_from_iter,

Unrelated whitespace change.  That being said - I don't think axonram has
devmap support in any form, so this basically becomes dead code, doesn't
it?

> diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
> index 7abb240847c0..e7e5db07e339 100644
> --- a/drivers/s390/block/dcssblk.c
> +++ b/drivers/s390/block/dcssblk.c
> @@ -52,6 +52,7 @@ static size_t dcssblk_dax_copy_from_iter(struct dax_device *dax_dev,
>  
>  static const struct dax_operations dcssblk_dax_ops = {
>   .direct_access = dcssblk_dax_direct_access,
> +
>   .copy_from_iter = dcssblk_dax_copy_from_iter,

Same comments apply here.


Re: [PATCH v3 00/13] dax: fix dma vs truncate and remove 'page-less' support

2017-10-20 Thread Christoph Hellwig
> The solution presented is not pretty. It creates a stream of leases, one
> for each get_user_pages() invocation, and polls page reference counts
> until DMA stops. We're missing a reliable way to not only trap the
> DMA-idle event, but also block new references being taken on pages while
> truncate is allowed to progress. "[PATCH v3 12/13] dax: handle truncate of
> dma-busy pages" presents other options considered, and notes that this
> solution can only be viewed as a stop-gap.

I'd like to brainstorm how we can do something better.

How about:

If we hit a page with an elevated refcount in truncate / hole punch
etc for a DAX file system we do not free the blocks in the file system,
but add it to the extent busy list.  We mark the page as delayed
free (e.g. page flag?) so that when it finally hits refcount zero we
call back into the file system to remove it from the busy list.
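That flow — truncate leaves the blocks on a busy list while the page is still referenced, and the final put calls back into the filesystem — can likewise be modelled in a few lines of C (all names are illustrative, not existing kernel or XFS API):

```c
#include <stdbool.h>

/* Toy model: blocks of a still-referenced DAX page stay on the
 * filesystem's extent busy list; the final put calls back into the
 * filesystem to release them.  Names are illustrative stand-ins. */
struct busy_extent {
	bool on_busy_list;
};

struct busy_page {
	int refcount;
	bool delayed_free;	/* the proposed "delayed free" page flag */
	struct busy_extent *ext;
};

/* filesystem callback invoked when the page finally hits zero */
void fs_remove_from_busy_list(struct busy_extent *e)
{
	e->on_busy_list = false;
}

/* truncate / hole-punch path */
void truncate_page(struct busy_page *p)
{
	if (p->refcount > 0) {
		p->ext->on_busy_list = true;	/* don't free the blocks yet */
		p->delayed_free = true;
	}
	/* refcount == 0: blocks could be freed immediately (not modelled) */
}

/* put path: the last reference triggers the deferred free */
void put_busy_page(struct busy_page *p)
{
	if (--p->refcount == 0 && p->delayed_free)
		fs_remove_from_busy_list(p->ext);
}
```

The busy list keeps the blocks from being reallocated while DMA may still target them, which is the property truncate needs.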


Re: [PATCH 01/17] mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags

2017-10-20 Thread Christoph Hellwig
>   if (file) {
>   struct inode *inode = file_inode(file);
> + unsigned long flags_mask = file->f_op->mmap_supported_flags;
> +
> + if (!flags_mask)
> + flags_mask = LEGACY_MAP_MASK;
>  
>   switch (flags & MAP_TYPE) {
>   case MAP_SHARED:
> + /*
> +  * Silently ignore unsupported flags - MAP_SHARED has
> +  * traditionally behaved like that and we don't want
> +  * to break compatibility.
> +  */
> + flags &= flags_mask;
> + /*
> +  * Force use of MAP_SHARED_VALIDATE with non-legacy
> +  * flags. E.g. MAP_SYNC is dangerous to use with
> +  * MAP_SHARED as you don't know which consistency model
> +  * you will get.
> +  */
> + flags &= LEGACY_MAP_MASK;
> + /* fall through */
> + case MAP_SHARED_VALIDATE:
> + if (flags & ~flags_mask)
> + return -EOPNOTSUPP;

Hmmm.  I'd expect this to work more like:

case MAP_SHARED:
/* Ignore all new flags that need validation: */
flags &= LEGACY_MAP_MASK;
/*FALLTHROUGH*/
case MAP_SHARED_VALIDATE:
if (flags & ~file->f_op->mmap_supported_flags)
return -EOPNOTSUPP;

with the legacy mask always implicitly supported, as indicated in my
comment on the XFS patch.

Although even the silent ignoring in MAP_SHARED seems dangerous, I
guess we need that to keep strict backwards compatibility.  In an
ideal world I'd rather do:

case MAP_SHARED:
if (flags & ~LEGACY_MAP_MASK)
return -EINVAL;
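The user-visible difference between the two cases can be probed from userspace: an undefined flag bit is silently dropped with MAP_SHARED but, with the series applied, rejected with EOPNOTSUPP under MAP_SHARED_VALIDATE. A small sketch — the 0x03 value and the bogus bit are assumptions taken from this series, not a stable ABI guarantee:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03	/* value proposed in this series */
#endif

/* A flag bit with no current meaning, used to probe validation. */
#define MAP_BOGUS 0x01000000

/* Map one page of a throwaway file with the given sharing flags;
 * returns 0 on success, otherwise errno. */
int try_map(int sharing_flags)
{
	FILE *f = tmpfile();
	void *p;
	int err = 0;

	if (!f)
		return errno;
	if (ftruncate(fileno(f), 4096) != 0) {
		err = errno;
		fclose(f);
		return err;
	}
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, sharing_flags,
		 fileno(f), 0);
	if (p == MAP_FAILED)
		err = errno;
	else
		munmap(p, 4096);
	fclose(f);
	return err;
}
```

try_map(MAP_SHARED | MAP_BOGUS) succeeds on any kernel because the unknown bit is ignored, while try_map(MAP_SHARED_VALIDATE | MAP_BOGUS) is expected to fail once the validation path above is in place.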




Re: [fstests PATCH] generic: add test for DAX MAP_SYNC support

2017-10-20 Thread Amir Goldstein
On Fri, Oct 20, 2017 at 8:29 AM, Ross Zwisler wrote:
> Add a test that exercises DAX's new MAP_SYNC flag.
>
> This test creates a file and writes to it via an mmap(), but never syncs
> via fsync/msync.  This process is tracked via dm-log-writes, then replayed.
>
> If MAP_SYNC is working the dm-log-writes replay will show the test file
> with the same size that we wrote via the mmap() because each allocating
> page fault included an implicit metadata sync.  If MAP_SYNC isn't working
> (which you can test by fiddling with the parameters to mmap()) the file
> will be smaller or missing entirely.
>
> Note that dm-log-writes doesn't track the data that we write via the
> mmap(), so we can't do any data integrity checking.  We can only verify
> that the metadata writes for the page faults happened.
>
> Signed-off-by: Ross Zwisler 

Looks good. Some nitpicking...

> ---
>
> For this test to run successfully you'll need both Jan's MAP_SYNC series:
>
> https://www.spinics.net/lists/linux-xfs/msg11852.html
>
> and my series adding DAX support to dm-log-writes:
>
> https://lists.01.org/pipermail/linux-nvdimm/2017-October/012972.html
>
> ---
>  .gitignore|  1 +
>  common/dmlogwrites|  1 -
>  src/Makefile  |  3 +-
>  src/t_map_sync.c  | 74 +
>  tests/generic/466 | 77 +++
>  tests/generic/466.out |  3 ++
>  tests/generic/group   |  1 +
>  7 files changed, 158 insertions(+), 2 deletions(-)
>  create mode 100644 src/t_map_sync.c
>  create mode 100755 tests/generic/466
>  create mode 100644 tests/generic/466.out
>
> diff --git a/.gitignore b/.gitignore
> index 2014c08..9fc0695 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -119,6 +119,7 @@
>  /src/t_getcwd
>  /src/t_holes
>  /src/t_immutable
> +/src/t_map_sync
>  /src/t_mmap_cow_race
>  /src/t_mmap_dio
>  /src/t_mmap_fallocate
> diff --git a/common/dmlogwrites b/common/dmlogwrites
> index 247c744..5b57df9 100644
> --- a/common/dmlogwrites
> +++ b/common/dmlogwrites
> @@ -23,7 +23,6 @@ _require_log_writes()
> [ -z "$LOGWRITES_DEV" -o ! -b "$LOGWRITES_DEV" ] && \
> _notrun "This test requires a valid \$LOGWRITES_DEV"
>
> -   _exclude_scratch_mount_option dax
> _require_dm_target log-writes
> _require_test_program "log-writes/replay-log"
>  }
> diff --git a/src/Makefile b/src/Makefile
> index 3eb25b1..af7e7e9 100644
> --- a/src/Makefile
> +++ b/src/Makefile
> @@ -13,7 +13,8 @@ TARGETS = dirstress fill fill2 getpagesize holes lstat64 \
> multi_open_unlink dmiperf unwritten_sync genhashnames t_holes \
> t_mmap_writev t_truncate_cmtime dirhash_collide t_rename_overwrite \
> holetest t_truncate_self t_mmap_dio af_unix t_mmap_stale_pmd \
> -   t_mmap_cow_race t_mmap_fallocate fsync-err t_mmap_write_ro
> +   t_mmap_cow_race t_mmap_fallocate fsync-err t_mmap_write_ro \
> +   t_map_sync
>
>  LINUX_TARGETS = xfsctl bstat t_mtab getdevicesize preallo_rw_pattern_reader \
> preallo_rw_pattern_writer ftrunc trunc fs_perms testx looptest \
> diff --git a/src/t_map_sync.c b/src/t_map_sync.c
> new file mode 100644
> index 000..8190f3c
> --- /dev/null
> +++ b/src/t_map_sync.c
> @@ -0,0 +1,74 @@
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <libgen.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <sys/stat.h>
> +#include <unistd.h>
> +
> +#define MiB(a) ((a)*1024*1024)
> +
> +/*
> + * These two defines were added to the kernel via commits entitled
> + * "mm: Define MAP_SYNC and VM_SYNC flags" and
> + * "mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap
> + * flags", respectively.

#ifndef?

> + */
> +#define MAP_SYNC 0x8
> +#define MAP_SHARED_VALIDATE 0x3
> +
> +void err_exit(char *op)
> +{
> +   fprintf(stderr, "%s: %s\n", op, strerror(errno));
> +   exit(1);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +   int page_size = getpagesize();
> +   int len = MiB(1);
> +   int i, fd, err;
> +   char *data;
> +
> +   if (argc < 2) {
> +   printf("Usage: %s <file>\n", basename(argv[0]));
> +   exit(0);
> +   }
> +
> +   fd = open(argv[1], O_RDWR|O_CREAT, S_IRUSR|S_IWUSR);
> +   if (fd < 0)
> +   err_exit("fd");
> +
> +   ftruncate(fd, 0);

O_TRUNC?

> +   ftruncate(fd, len);
> +
> +   data = mmap(NULL, len, PROT_READ|PROT_WRITE,
> +   MAP_SHARED_VALIDATE|MAP_SYNC, fd, 0);
> +   if (data == MAP_FAILED)
> +   err_exit("mmap");
> +
> +   /*
> +* We intentionally don't sync 'fd' manually.  If MAP_SYNC is working
> +* these allocating page faults will cause the filesystem to sync its
> +* metadata so that when we replay the dm-log-writes log the test file
> +* will be 1 MiB in size.
> +*
> +* dm-log-writes doesn't track the data that we write via the
> +* mmap(), so we can't do any data integrity checking.  We can
> +* only verify that the metadata writes for the page faults
> +* happened.
> +*/
