Re: [RFC 0/8] pmem: Submission of the Persistent memory block device

2015-03-09 Thread Boaz Harrosh
On 03/06/2015 08:37 PM, Ross Zwisler wrote:
<>
>> I maintain these patches on top of the latest kernels here:
>>  git://git.open-osd.org/pmem.git branch pmem
>>
>> Thanks for reviewing
>> Boaz
> 
> Hey Boaz,
> 
> Regarding the PMEM series, my group has been working on an updated
> version of this driver for the past 6 months or so since I initially
> posted the beginnings of this series:
> 
> https://lkml.org/lkml/2014/8/27/674
> 
> That new version should be ready for public viewing sometime in April.
> 
> It's my preference that we wait to try and upstream any form of PMEM
> until we've released our updated version of the driver, and you've had a
> chance to review and add in any changes you need.  I'm cool with
> gathering additional feedback until then, of course.
> 

Dear Ross

We have a very grave problem with you guys. You do not develop an open
source driver in stealth mode, withholding the code until some undefined
point in the future. This is called forking, and forking means taking the
extra hit while mainline advances.

I have been changing this driver and its supporting code, and even when I do
not post every change on the mailing list, so as not to make noise, I push
every new version to the public tree. So anyone who monitors the tree can
plan for my changes and see where I am going with this.

I have been monitoring your tree, and there has not been a single change. If
you are making progress, you should make your changes available as soon as
they land, so we can see them and adjust. (And/or maybe avoid duplicating the
same effort?)

> Trying to upstream this older version and then merging it with the newer
> stuff in-kernel seems like it'll just end up being more work in the end.
> 

I don't think so. Have you seen the latest version of this driver? It is
so small. Any changes you make will probably be additions and enhancements.
I do not see how that would create so much extra work.

Show me the code. If I see that you are right, I will make the changes
accordingly, but for now I cannot see why it would cause any kind of
problem.

> Thanks,
> - Ross

Thanks
Boaz



Re: [RFC 0/8] pmem: Submission of the Persistent memory block device

2015-03-06 Thread Christoph Hellwig
On Fri, Mar 06, 2015 at 11:37:45AM -0700, Ross Zwisler wrote:
> Regarding the PMEM series, my group has been working on an updated
> version of this driver for the past 6 months or so since I initially
> posted the beginnings of this series:
> 
> https://lkml.org/lkml/2014/8/27/674
> 
> That new version should be ready for public viewing sometime in April.
> 
> It's my preference that we wait to try and upstream any form of PMEM
> until we've released our updated version of the driver, and you've had a
> chance to review and add in any changes you need.  I'm cool with
> gathering additional feedback until then, of course.
> 
> Trying to upstream this older version and then merging it with the newer
> stuff in-kernel seems like it'll just end up being more work in the end.

We've been waiting far too long to get any version of this merged.  I
don't think waiting for vapourware is a good idea.  So either please post
your new code ASAP given that you apparently have it, or you'll just
have to do more work later.  Given how simple the pmem driver is, I
really can't see any major merge problems anyway.


Re: [RFC 0/8] pmem: Submission of the Persistent memory block device

2015-03-06 Thread Ross Zwisler
On Thu, 2015-03-05 at 12:32 +0200, Boaz Harrosh wrote:
> There are already NvDIMMs and other persistent-memory devices on the market,
> and many more of them will be coming in the near future.
> 
> The current stack is coming along very nicely, and filesystem support for
> leveraging these technologies has been submitted to Linus in the DAX series
> by Matthew Wilcox.
> 
> The general stack does not change:
>   block-device
>   partition
>   file-system
>   application file
> 
> The only extra care needed, see Matthew's DAX patches, is the
> ->direct_access() API from block devices, which enables a direct mapping from
> persistent memory to user applications and/or the kernel for direct
> store/load of data.
> 
> The only missing piece is the actual block device that enables support
> for such NvDIMM chips. This is the driver we submit here.
> 
> The driver is very simple; in fact it is the 2nd smallest driver inside
> drivers/block. What the driver does is expose a physically contiguous iomem
> range as a single block device. The driver supports as many iomem ranges as
> needed, each as its own device.
> (See patch-1 for more details)
> 
> We have been using this driver for over a year now, in a lab with a
> combination of VMs and real hardware from a variety of vendors, and it is
> very stable. And why not: it is so simple it does almost nothing.
> 
> The driver is not only good for NvDIMMs; it is good for any flat
> memory-mapped device. We've used it with NvDIMMs, kernel-reserved DRAM
> (memmap= on the command line), PCIe battery-backed memory cards, VM shared
> memory, and so on.
> 
> Together with this driver we also submit support for page structs with
> persistent memory, so persistent memory can be used with RDMA, DMA, block
> devices and so on, just like regular memory, in a copy-less manner.
> With these two simple patches we were able to set up an RDMA target machine
> which exports NvDIMMs and enables direct remote storage. The only
> "complicated" thing was the remote flush of caches, because most RDMA NICs in
> the kernel will RDMA directly into the L3 cache, so we needed to establish a
> message that involves the remote CPU for this. But otherwise the mapping of a
> pmem pointer to an RDMA key was trivial, directly from user mode, with no
> extra kernel code.
> [The target is simple with no extra code; the RDMA client, on the other hand,
>  needs a special driver.]
> 
> I maintain these patches on top of the latest kernels here:
>   git://git.open-osd.org/pmem.git branch pmem
> 
> Thanks for reviewing
> Boaz

Hey Boaz,

Regarding the PMEM series, my group has been working on an updated
version of this driver for the past 6 months or so since I initially
posted the beginnings of this series:

https://lkml.org/lkml/2014/8/27/674

That new version should be ready for public viewing sometime in April.

It's my preference that we wait to try and upstream any form of PMEM
until we've released our updated version of the driver, and you've had a
chance to review and add in any changes you need.  I'm cool with
gathering additional feedback until then, of course.

Trying to upstream this older version and then merging it with the newer
stuff in-kernel seems like it'll just end up being more work in the end.

Thanks,
- Ross



[RFC 0/8] pmem: Submission of the Persistent memory block device

2015-03-05 Thread Boaz Harrosh

There are already NvDIMMs and other persistent-memory devices on the market, and
many more of them will be coming in the near future.

The current stack is coming along very nicely, and filesystem support for
leveraging these technologies has been submitted to Linus in the DAX series by
Matthew Wilcox.

The general stack does not change:
block-device
partition
file-system
application file

The only extra care needed, see Matthew's DAX patches, is the ->direct_access()
API from block devices, which enables a direct mapping from persistent memory to
user applications and/or the kernel for direct store/load of data.
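
As an illustration, a ->direct_access() method for a linearly mapped device
boils down to something like the sketch below. The struct and field names
(pmem_device, virt_addr, phys_addr, size) are only illustrative, and the exact
->direct_access() prototype has changed between kernel versions, so take this
as a sketch of the idea rather than the driver's actual code:

#include <linux/blkdev.h>

struct pmem_device {			/* illustrative only */
	phys_addr_t	phys_addr;	/* start of the iomem range */
	void		*virt_addr;	/* kernel mapping of the range */
	size_t		size;		/* length of the range in bytes */
};

static long pmem_direct_access(struct block_device *bdev, sector_t sector,
			       void **kaddr, unsigned long *pfn)
{
	struct pmem_device *pmem = bdev->bd_disk->private_data;
	size_t offset = sector << 9;	/* 512-byte sectors to byte offset */

	/* Hand back the kernel virtual address and the pfn backing this
	 * sector; DAX uses these for direct load/store and for mmap. */
	*kaddr = pmem->virt_addr + offset;
	*pfn = (pmem->phys_addr + offset) >> PAGE_SHIFT;

	return pmem->size - offset;	/* bytes usable from this offset */
}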

The only missing piece is the actual block device that enables support
for such NvDIMM chips. This is the driver we submit here.

The driver is very simple; in fact it is the 2nd smallest driver inside
drivers/block. What the driver does is expose a physically contiguous iomem range
as a single block device. The driver supports as many iomem ranges as needed,
each as its own device.
(See patch-1 for more details)
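
The read/write path is just as small: because the whole device is one flat
mapping, handling a bio segment is essentially a memcpy between the page handed
in by the block layer and the matching offset in the mapping. Roughly (again
only a sketch, with the same illustrative pmem_device as above, and error
handling and cache flushing left out):

static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
			 unsigned int len, unsigned int off,
			 int rw, sector_t sector)
{
	void *mem = kmap_atomic(page);
	void *pmem_addr = pmem->virt_addr + (sector << 9);

	if (rw == READ)
		memcpy(mem + off, pmem_addr, len);	/* device -> page */
	else
		memcpy(pmem_addr, mem + off, len);	/* page -> device */

	kunmap_atomic(mem);
}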

We have been using this driver for over a year now, in a lab with a combination
of VMs and real hardware from a variety of vendors, and it is very stable. And
why not: it is so simple it does almost nothing.

The driver is not only good for NvDIMMs; it is good for any flat memory-mapped
device. We've used it with NvDIMMs, kernel-reserved DRAM (memmap= on the command
line), PCIe battery-backed memory cards, VM shared memory, and so on.
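
As a concrete example of the kernel-reserved-DRAM case: a region can be set
aside with the e820 memmap= parameter on the kernel command line, e.g.

	memmap=4G$12G

which marks 4GiB of RAM starting at physical address 12GiB as reserved (the
addresses here are only an example). The reserved range is then handed to the
driver as its iomem range; how exactly the range is passed to the driver
(e.g. a module parameter) depends on the driver version.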

Together with this driver we also submit support for page structs with
persistent memory, so persistent memory can be used with RDMA, DMA, block devices
and so on, just like regular memory, in a copy-less manner.
With these two simple patches we were able to set up an RDMA target machine which
exports NvDIMMs and enables direct remote storage. The only "complicated" thing
was the remote flush of caches, because most RDMA NICs in the kernel will RDMA
directly into the L3 cache, so we needed to establish a message that involves the
remote CPU for this. But otherwise the mapping of a pmem pointer to an RDMA key
was trivial, directly from user mode, with no extra kernel code.
[The target is simple with no extra code; the RDMA client, on the other hand,
 needs a special driver.]
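
To give an idea of how simple the user-mode side is: the pmem block device is
mmap()ed and the mapping is registered with the verbs API, and the resulting
rkey is what the remote side uses. A sketch (the device node name, the length,
and the missing error handling are all for illustration only):

#include <fcntl.h>
#include <sys/mman.h>
#include <infiniband/verbs.h>

/* pd is a protection domain already allocated on the RDMA device */
static struct ibv_mr *register_pmem(struct ibv_pd *pd, size_t len)
{
	int fd = open("/dev/pmem0", O_RDWR);		/* example node */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);

	/* mr->rkey from the returned region is handed to the remote
	 * side for RDMA READ/WRITE directly into persistent memory. */
	return ibv_reg_mr(pd, addr, len,
			  IBV_ACCESS_LOCAL_WRITE |
			  IBV_ACCESS_REMOTE_READ |
			  IBV_ACCESS_REMOTE_WRITE);
}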

I maintain these patches on top of the latest kernels here:
git://git.open-osd.org/pmem.git branch pmem

Thanks for reviewing
Boaz
