Hi Stefan,

On 04/11/16 04:49 AM, Stefan Hajnoczi wrote:
> QEMU already has NVDIMM support (https://pmem.io/).  It can be used both
> for passthrough and fake non-volatile memory:
> 
>   qemu-system-x86_64 \
>     -M pc,nvdimm=on \
>     -m 1024,maxmem=$((4096 * 1024 * 1024)),slots=2 \
>     -object memory-backend-file,id=mem0,mem-path=/tmp/foo,size=$((64 * 1024 * 1024)) \
>     -device nvdimm,memdev=mem0
> 
> Please explain where iopmem comes from, where the hardware spec is, etc?

Yes, we are aware of nvdimm and, yes, there are quite a few
commonalities. The difference between nvdimm and iopmem is that the
memory backing iopmem lives on a PCI device rather than being connected
through system memory. We are currently working with prototype hardware,
so there is no open spec that I'm aware of, but the concept is really
simple: a single PCI BAR directly maps volatile or non-volatile memory.
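For comparison with the nvdimm example above, an invocation of the device this patch adds could look roughly like the following sketch. The device name "iopmem" and its "memdev" property are assumptions drawn from this discussion, not an upstream QEMU interface:

```shell
# Hypothetical sketch: expose a file-backed memory region as the
# single BAR of an "iopmem" PCI device (names are assumptions).
qemu-system-x86_64 \
  -m 1024 \
  -object memory-backend-file,id=mem0,mem-path=/tmp/iopmem,size=$((64 * 1024 * 1024)) \
  -device iopmem,memdev=mem0
```

The guest kernel would then enumerate the device over PCI and map its BAR, which is exactly the path the driver under test needs to exercise.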

One of the primary motivations behind iopmem is to provide memory for
peer-to-peer transactions between PCI devices so that, for example, an
RDMA NIC could transfer data directly to storage and bypass the system
memory bus altogether.


> Perhaps you could use nvdimm instead of adding a new device?

I'm afraid not. The main purpose of this patch is to enable us to test
kernel drivers for this type of hardware. If we used nvdimm, there would
be no PCI device for our driver to enumerate, and the existing NVDIMM
drivers would be used instead.

Thanks for the consideration,

Logan
