Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-20 Thread Kani, Toshimitsu
On Fri, 2017-01-20 at 18:24 +0900, Yasunori Goto wrote:
 :
> > 
> > Like mentioned before, this discussion is more about presentation
> > of errors in a known consumable format, rather than recovering from
> > errors. While recovering from errors is interesting, we already
> > have layers like RAID for that, and they are as applicable to
> > NVDIMM backed storage as they have been for disk/SSD based storage.
> 
> I have one question here.
> 
> Certainly, a user can use LVM mirroring for the storage mode of NVDIMM.
> However, NVDIMM has a DAX mode.
> Can a user use LVM mirroring in NVDIMM DAX mode?
> I could not find any information that LVM supports DAX.

dm-linear and dm-stripe support DAX.  This is done by mapping block
allocations to the LVM physical devices.  Once blocks are allocated, all
DAX I/O is direct and does not go through the device-mapper layer.  We
may be able to change this for the read/write paths, but it remains true
for mmap.  So, I do not think DAX can be supported with LVM mirroring.
This does not preclude hardware mirroring, though.
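
To make that concrete, here is a minimal sketch of the idea (illustrative only;
the struct and helper below are simplified assumptions, not the actual dm-linear
code): a linear target just adds a fixed offset, so once the translation is
known, loads, stores, and mmap faults can hit the underlying pmem directly.

/*
 * Simplified sketch of a linear DAX translation.  Because the target only
 * adds a constant start offset, the fast path never has to re-enter
 * device-mapper once the address is known.
 */
#include <stddef.h>

struct linear_target {              /* illustrative, not kernel API */
        void *pmem_base;            /* kernel mapping of the underlying pmem */
        size_t start;               /* byte offset of this target in the pmem */
        size_t len;                 /* length of the target in bytes */
};

/* Translate an offset within the dm device into a directly accessible address. */
static void *linear_dax_addr(struct linear_target *t, size_t dev_off)
{
        if (dev_off >= t->len)
                return NULL;        /* out of range for this target */
        return (char *)t->pmem_base + t->start + dev_off;
}

A mirrored target, by contrast, would need every CPU store to reach both legs,
which is exactly what the direct mmap path bypasses.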

-Toshi


Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-20 Thread Yasunori Goto
Hello,
Vishal-san.

First of all, your discussion is quite interesting to me. Thanks.

> > 
> > If a DUE does happen and is flagged to the file system via MCE (somehow...),
> > and the fs finds that the error corrupts one of its allocated data pages, or
> > its metadata, then if the fs wants to recover its data, the intuition is that
> > there needs to be a stronger error correction mechanism to correct the
> > hardware-uncorrectable errors. So knowing the hardware ECC baseline is
> > helpful for the file system to understand how severe the faults behind
> > badblocks are, and to develop its recovery methods.
> 
> Like mentioned before, this discussion is more about presentation of
> errors in a known consumable format, rather than recovering from errors.
> While recovering from errors is interesting, we already have layers
> like RAID for that, and they are as applicable to NVDIMM backed storage
> as they have been for disk/SSD based storage.

I have one question here.

Certainly, a user can use LVM mirroring for the storage mode of NVDIMM.
However, NVDIMM has a DAX mode.
Can a user use LVM mirroring in NVDIMM DAX mode?
I could not find any information that LVM supports DAX.

In addition, the current NVDIMM specifications (*) only define the interleave
feature of NVDIMMs.
They do not mention a mirroring feature.
So, I don't understand how mirroring could be used with DAX.

(*) "NVDIMM Namespace Specification" , "NVDIMM Block Window Driver Writer’s 
Guide",
   and "ACPI 6.1"

Regards,
---
Yasunori Goto



Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-19 Thread Verma, Vishal L
On Tue, 2017-01-17 at 18:01 -0800, Andiry Xu wrote:
> On Tue, Jan 17, 2017 at 4:16 PM, Andreas Dilger 
> wrote:
> > On Jan 17, 2017, at 3:15 PM, Andiry Xu  wrote:
> > > On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma  > > l.com> wrote:
> > > > On 01/16, Darrick J. Wong wrote:
> > > > > On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> > > > > > On 01/14, Slava Dubeyko wrote:
> > > > > > > 
> > > > > > >  Original Message 
> > > > > > > Subject: [LSF/MM TOPIC] Badblocks checking/representation
> > > > > > > in filesystems
> > > > > > > Sent: Jan 13, 2017 1:40 PM
> > > > > > > From: "Verma, Vishal L" 
> > > > > > > To: lsf...@lists.linux-foundation.org
> > > > > > > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org
> > > > > > > , linux-fsde...@vger.kernel.org
> > > > > > > 
> > > > > > > > The current implementation of badblocks, where we
> > > > > > > > consult the
> > > > > > > > badblocks list for every IO in the block driver works,
> > > > > > > > and is a
> > > > > > > > last option failsafe, but from a user perspective, it
> > > > > > > > isn't the
> > > > > > > > easiest interface to work with.
> > > > > > > 
> > > > > > > As I remember, FAT and HFS+ specifications contain
> > > > > > > description of bad blocks
> > > > > > > (physical sectors) table. I believe that this table was
> > > > > > > used for the case of
> > > > > > > floppy media. But, finally, this table becomes to be the
> > > > > > > completely obsolete
> > > > > > > artefact because mostly storage devices are reliably
> > > > > > > enough. Why do you need
> > > > > 
> > > > > ext4 has a badblocks inode to own all the bad spots on disk,
> > > > > but ISTR it
> > > > > doesn't support(??) extents or 64-bit filesystems, and might
> > > > > just be a
> > > > > vestigial organ at this point.  XFS doesn't have anything to
> > > > > track bad
> > > > > blocks currently
> > > > > 
> > > > > > > in exposing the bad blocks on the file system level?  Do
> > > > > > > you expect that next
> > > > > > > generation of NVM memory will be so unreliable that file
> > > > > > > system needs to manage
> > > > > > > bad blocks? What's about erasure coding schemes? Do file
> > > > > > > system really need to suffer
> > > > > > > from the bad block issue?
> > > > > > > 
> > > > > > > Usually, we are using LBAs and it is the responsibility of
> > > > > > > storage device to map
> > > > > > > a bad physical block/page/sector into valid one. Do you
> > > > > > > mean that we have
> > > > > > > access to physical NVM memory address directly? But it
> > > > > > > looks like that we can
> > > > > > > have a "bad block" issue even we will access data into
> > > > > > > page cache's memory
> > > > > > > page (if we will use NVM memory for page cache, of
> > > > > > > course). So, what do you
> > > > > > > imply by "bad block" issue?
> > > > > > 
> > > > > > We don't have direct physical access to the device's address
> > > > > > space, in
> > > > > > the sense the device is still free to perform remapping of
> > > > > > chunks of NVM
> > > > > > underneath us. The problem is that when a block or address
> > > > > > range (as
> > > > > > small as a cache line) goes bad, the device maintains a
> > > > > > poison bit for
> > > > > > every affected cache line. Behind the scenes, it may have
> > > > > > already
> > > > > > remapped the range, but the cache line poison has to be kept
> > > > > > so that
> > > > > > there is a notification to the user/owner of the data that
> > > > > > something has
> > > > > > been lost. Since NVM is byte addressable memory sitting on
> > > > > > the memory
> > > > > > bus, such a poisoned cache line results in memory errors and
> > > > > > SIGBUSes.
> > > > > > Compared to traditional storage where an app will get nice
> > > > > > and friendly
> > > > > > (relatively speaking..) -EIOs. The whole badblocks
> > > > > > implementation was
> > > > > > done so that the driver can intercept IO (i.e. reads) to
> > > > > > _known_ bad
> > > > > > locations, and short-circuit them with an EIO. If the driver
> > > > > > doesn't
> > > > > > catch these, the reads will turn into a memory bus access,
> > > > > > and the
> > > > > > poison will cause a SIGBUS.
> > > > > 
> > > > > "driver" ... you mean XFS?  Or do you mean the thing that
> > > > > makes pmem
> > > > > look kind of like a traditional block device? :)
> > > > 
> > > > Yes, the thing that makes pmem look like a block device :) --
> > > > drivers/nvdimm/pmem.c
> > > > 
> > > > > 
> > > > > > This effort is to try and make this badblock checking
> > > > > > smarter - and try
> > > > > > and reduce the penalty on every IO to a smaller range, which
> > > > > > only the
> > > > > > filesystem can do.
> > > > > 
> > > > > Though... now that XFS merged the reverse mapping support,
> > > > > I've been
> > > > > wondering if there'll be a resubmission of the device errors
> > > 

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-19 Thread Verma, Vishal L
On Tue, 2017-01-17 at 17:58 -0800, Andiry Xu wrote:
> On Tue, Jan 17, 2017 at 3:51 PM, Vishal Verma  m> wrote:
> > On 01/17, Andiry Xu wrote:
> > 
> > 
> > 
> > > > > 
> > > > > The pmem_do_bvec() read logic is like this:
> > > > > 
> > > > > pmem_do_bvec()
> > > > > if (is_bad_pmem())
> > > > > return -EIO;
> > > > > else
> > > > > memcpy_from_pmem();
> > > > > 
> > > > > Note memcpy_from_pmem() is calling memcpy_mcsafe(). Does this
> > > > > imply
> > > > > that even if a block is not in the badblock list, it still can
> > > > > be bad
> > > > > and causes MCE? Does the badblock list get changed during file
> > > > > system
> > > > > running? If that is the case, should the file system get a
> > > > > notification when it gets changed? If a block is good when I
> > > > > first
> > > > > read it, can I still trust it to be good for the second
> > > > > access?
> > > > 
> > > > Yes, if a block is not in the badblocks list, it can still cause
> > > > an
> > > > MCE. This is the latent error case I described above. For a
> > > > simple read()
> > > > via the pmem driver, this will get handled by memcpy_mcsafe. For
> > > > mmap,
> > > > an MCE is inevitable.
> > > > 
> > > > Yes the badblocks list may change while a filesystem is running.
> > > > The RFC
> > > > patches[1] I linked to add a notification for the filesystem
> > > > when this
> > > > happens.
> > > > 
> > > 
> > > This is really bad and it makes file system implementation much
> > > more
> > > complicated. And badblock notification does not help very much,
> > > because any block can be bad potentially, no matter it is in
> > > badblock
> > > list or not. And file system has to perform checking for every
> > > read,
> > > using memcpy_mcsafe. This is disaster for file system like NOVA,
> > > which
> > > uses pointer de-reference to access data structures on pmem. Now
> > > if I
> > > want to read a field in an inode on pmem, I have to copy it to
> > > DRAM
> > > first and make sure memcpy_mcsafe() does not report anything
> > > wrong.
> > 
> > You have a good point, and I don't know if I have an answer for
> > this..
> > Assuming a system with MCE recovery, maybe NOVA can add a mce
> > handler
> > similar to nfit_handle_mce(), and handle errors as they happen, but
> > I'm
> > being very hand-wavey here and don't know how much/how well that
> > might
> > work..
> > 
> > > 
> > > > No, if the media, for some reason, 'develops' a bad cell, a
> > > > second
> > > > consecutive read does have a chance of being bad. Once a
> > > > location has
> > > > been marked as bad, it will stay bad till the ACPI clear error
> > > > 'DSM' has
> > > > been called to mark it as clean.
> > > > 
> > > 
> > > I wonder what happens to write in this case? If a block is bad but
> > > not
> > > reported in badblock list. Now I write to it without reading
> > > first. Do
> > > I clear the poison with the write? Or still require an ACPI DSM?
> > 
> > With writes, my understanding is there is still a possibility that
> > an
> > internal read-modify-write can happen, and cause an MCE (this is the
> > same
> > as writing to a bad DRAM cell, which can also cause an MCE). You
> > can't
> > really use the ACPI DSM preemptively because you don't know whether
> > the
> > location was bad. The error flow will be something like write causes
> > the
> > MCE, a badblock gets added (either through the mce handler or after
> > the
> > next reboot), and the recovery path is now the same as a regular
> > badblock.
> > 
> 
> This is different from my understanding. Right now write_pmem() in
> pmem_do_bvec() does not use memcpy_mcsafe(). If the block is bad, it
> clears the poison and writes to pmem again. It seems to me that writing to
> bad blocks does not cause an MCE. Do we need memcpy_mcsafe for pmem stores?

You are right, writes don't use memcpy_mcsafe, and will not directly
cause an MCE. However, a write can cause an asynchronous CMCI (corrected
machine check interrupt), but this is not critical, and it won't be an
uncorrected memory error, as the core didn't consume poison. memcpy_mcsafe
cannot protect against this because the write is 'posted' and the CMCI is
not synchronous. Note that this is only in the latent-error or mmap-store
case.
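
For reference, a condensed sketch of the read/write split being discussed,
loosely modelled on drivers/nvdimm/pmem.c of this era.  is_bad_pmem(),
memcpy_mcsafe() (2017 convention: returns 0 on success, -EFAULT on a machine
check) and pmem_clear_poison() are existing kernel facilities; the surrounding
structure and offsets are simplified assumptions, not the verbatim driver.

static int pmem_rw_sketch(struct pmem_device *pmem, void *buf,
                          sector_t sector, size_t len, bool is_write)
{
        phys_addr_t off = sector << 9;          /* 512-byte sectors */
        void *pmem_addr = pmem->virt_addr + off;

        if (!is_write) {
                /* Known-bad ranges are short-circuited with -EIO... */
                if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
                        return -EIO;
                /* ...and latent (not-yet-known) poison is caught here. */
                return memcpy_mcsafe(buf, pmem_addr, len) ? -EIO : 0;
        }

        /* Writes overwrite the data, so known poison is cleared first... */
        if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
                pmem_clear_poison(pmem, off, len);  /* ACPI clear-error DSM */
        /* ...but the store is posted: a latent error can still surface later
         * as an asynchronous CMCI rather than a synchronous fault. */
        memcpy_to_pmem(pmem_addr, buf, len);
        return 0;
}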

> 
> Thanks,
> Andiry
> 
> > > 
> > > > [1]: http://www.linux.sgi.com/archives/xfs/2016-06/msg00299.html
> > > > 
> > > 
> > > Thank you for the patchset. I will look into it.
> > > 


Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Lu Zhang
I'm curious about the fault model and corresponding hardware ECC mechanisms
for NVDIMMs. In my understanding, for a memory access to trigger an MCE, the
memory controller has to find a detectable but uncorrectable error
(DUE). So if there is no hardware ECC support, the media errors won't even
be noticed, not to mention badblocks or machine checks.

Current hardware ECC support for DRAM usually employs a (72, 64) single-bit
error correction code, and for advanced ECC there are techniques like
Chipkill or SDDC which can tolerate the failure of a single DRAM chip. What is
the expected ECC mode for NVDIMMs, assuming that PCM- or 3D XPoint-based
technology might have higher error rates?
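
As an aside, the (72, 64) figure is the standard Hamming SEC-DED construction;
a quick sanity check of the arithmetic (a textbook bound, not something stated
elsewhere in this thread):

        2^r >= m + r + 1,  with m = 64 data bits:  2^7 = 128 >= 64 + 7 + 1 = 72

so 7 check bits are enough for single-error correction over a 64-bit word, and
one extra overall parity bit adds double-error detection, giving 64 + 8 = 72
bits per ECC word.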

If a DUE does happen and is flagged to the file system via MCE (somehow...),
and the fs finds that the error corrupts one of its allocated data pages, or
its metadata, then if the fs wants to recover its data, the intuition is that
there needs to be a stronger error correction mechanism to correct the
hardware-uncorrectable errors. So knowing the hardware ECC baseline is
helpful for the file system to understand how severe the faults behind
badblocks are, and to develop its recovery methods.

Regards,
Lu

On Tue, Jan 17, 2017 at 6:01 PM, Andiry Xu  wrote:

> On Tue, Jan 17, 2017 at 4:16 PM, Andreas Dilger  wrote:
> > On Jan 17, 2017, at 3:15 PM, Andiry Xu  wrote:
> >> On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma 
> wrote:
> >>> On 01/16, Darrick J. Wong wrote:
>  On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> > On 01/14, Slava Dubeyko wrote:
> >>
> >>  Original Message 
> >> Subject: [LSF/MM TOPIC] Badblocks checking/representation in
> filesystems
> >> Sent: Jan 13, 2017 1:40 PM
> >> From: "Verma, Vishal L" 
> >> To: lsf...@lists.linux-foundation.org
> >> Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org,
> linux-fsde...@vger.kernel.org
> >>
> >>> The current implementation of badblocks, where we consult the
> >>> badblocks list for every IO in the block driver works, and is a
> >>> last option failsafe, but from a user perspective, it isn't the
> >>> easiest interface to work with.
> >>
> >> As I remember, FAT and HFS+ specifications contain description of
> bad blocks
> >> (physical sectors) table. I believe that this table was used for
> the case of
> >> floppy media. But, finally, this table becomes to be the completely
> obsolete
> >> artefact because mostly storage devices are reliably enough. Why do
> you need
> 
>  ext4 has a badblocks inode to own all the bad spots on disk, but ISTR
> it
>  doesn't support(??) extents or 64-bit filesystems, and might just be a
>  vestigial organ at this point.  XFS doesn't have anything to track bad
>  blocks currently
> 
> >> in exposing the bad blocks on the file system level?  Do you expect
> that next
> >> generation of NVM memory will be so unreliable that file system
> needs to manage
> >> bad blocks? What's about erasure coding schemes? Do file system
> really need to suffer
> >> from the bad block issue?
> >>
> >> Usually, we are using LBAs and it is the responsibility of storage
> device to map
> >> a bad physical block/page/sector into valid one. Do you mean that
> we have
> >> access to physical NVM memory address directly? But it looks like
> that we can
> >> have a "bad block" issue even we will access data into page cache's
> memory
> >> page (if we will use NVM memory for page cache, of course). So,
> what do you
> >> imply by "bad block" issue?
> >
> > We don't have direct physical access to the device's address space,
> in
> > the sense the device is still free to perform remapping of chunks of
> NVM
> > underneath us. The problem is that when a block or address range (as
> > small as a cache line) goes bad, the device maintains a poison bit
> for
> > every affected cache line. Behind the scenes, it may have already
> > remapped the range, but the cache line poison has to be kept so that
> > there is a notification to the user/owner of the data that something
> has
> > been lost. Since NVM is byte addressable memory sitting on the memory
> > bus, such a poisoned cache line results in memory errors and
> SIGBUSes.
> > Compared to traditional storage where an app will get nice and
> friendly
> > (relatively speaking..) -EIOs. The whole badblocks implementation was
> > done so that the driver can intercept IO (i.e. reads) to _known_ bad
> > locations, and short-circuit them with an EIO. If the driver doesn't
> > catch these, the reads will turn into a memory bus access, and the
> > poison will cause a SIGBUS.
> 
>  "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
>  look kind of like a traditional block device? :)
> >>>

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Andiry Xu
On Tue, Jan 17, 2017 at 4:16 PM, Andreas Dilger  wrote:
> On Jan 17, 2017, at 3:15 PM, Andiry Xu  wrote:
>> On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma  
>> wrote:
>>> On 01/16, Darrick J. Wong wrote:
 On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> On 01/14, Slava Dubeyko wrote:
>>
>>  Original Message 
>> Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
>> Sent: Jan 13, 2017 1:40 PM
>> From: "Verma, Vishal L" 
>> To: lsf...@lists.linux-foundation.org
>> Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
>> linux-fsde...@vger.kernel.org
>>
>>> The current implementation of badblocks, where we consult the
>>> badblocks list for every IO in the block driver works, and is a
>>> last option failsafe, but from a user perspective, it isn't the
>>> easiest interface to work with.
>>
>> As I remember, FAT and HFS+ specifications contain description of bad 
>> blocks
>> (physical sectors) table. I believe that this table was used for the 
>> case of
>> floppy media. But, finally, this table becomes to be the completely 
>> obsolete
>> artefact because mostly storage devices are reliably enough. Why do you 
>> need

 ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
 doesn't support(??) extents or 64-bit filesystems, and might just be a
 vestigial organ at this point.  XFS doesn't have anything to track bad
 blocks currently

>> in exposing the bad blocks on the file system level?  Do you expect that 
>> next
>> generation of NVM memory will be so unreliable that file system needs to 
>> manage
>> bad blocks? What's about erasure coding schemes? Do file system really 
>> need to suffer
>> from the bad block issue?
>>
>> Usually, we are using LBAs and it is the responsibility of storage 
>> device to map
>> a bad physical block/page/sector into valid one. Do you mean that we have
>> access to physical NVM memory address directly? But it looks like that 
>> we can
>> have a "bad block" issue even we will access data into page cache's 
>> memory
>> page (if we will use NVM memory for page cache, of course). So, what do 
>> you
>> imply by "bad block" issue?
>
> We don't have direct physical access to the device's address space, in
> the sense the device is still free to perform remapping of chunks of NVM
> underneath us. The problem is that when a block or address range (as
> small as a cache line) goes bad, the device maintains a poison bit for
> every affected cache line. Behind the scenes, it may have already
> remapped the range, but the cache line poison has to be kept so that
> there is a notification to the user/owner of the data that something has
> been lost. Since NVM is byte addressable memory sitting on the memory
> bus, such a poisoned cache line results in memory errors and SIGBUSes.
>> > Compared to traditional storage where an app will get nice and friendly
> (relatively speaking..) -EIOs. The whole badblocks implementation was
> done so that the driver can intercept IO (i.e. reads) to _known_ bad
> locations, and short-circuit them with an EIO. If the driver doesn't
> catch these, the reads will turn into a memory bus access, and the
> poison will cause a SIGBUS.

 "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
 look kind of like a traditional block device? :)
>>>
>>> Yes, the thing that makes pmem look like a block device :) --
>>> drivers/nvdimm/pmem.c
>>>

> This effort is to try and make this badblock checking smarter - and try
> and reduce the penalty on every IO to a smaller range, which only the
> filesystem can do.

 Though... now that XFS merged the reverse mapping support, I've been
 wondering if there'll be a resubmission of the device errors callback?
 It still would be useful to be able to inform the user that part of
 their fs has gone bad, or, better yet, if the buffer is still in memory
 someplace else, just write it back out.

 Or I suppose if we had some kind of raid1 set up between memories we
 could read one of the other copies and rewrite it into the failing
 region immediately.
>>>
>>> Yes, that is kind of what I was hoping to accomplish via this
>>> discussion. How much would filesystems want to be involved in this sort
>>> of badblocks handling, if at all. I can refresh my patches that provide
>>> the fs notification, but that's the easy bit, and a starting point.
>>>
>>
>> I have some questions. Why moving badblock handling to file system
>> level avoid the checking phase? In file system level for each I/O I
>> still have to check the badblock list, right? Do you 

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Andiry Xu
On Tue, Jan 17, 2017 at 3:51 PM, Vishal Verma  wrote:
> On 01/17, Andiry Xu wrote:
>
> 
>
>> >>
>> >> The pmem_do_bvec() read logic is like this:
>> >>
>> >> pmem_do_bvec()
>> >> if (is_bad_pmem())
>> >> return -EIO;
>> >> else
>> >> memcpy_from_pmem();
>> >>
>> >> Note memcpy_from_pmem() is calling memcpy_mcsafe(). Does this imply
>> >> that even if a block is not in the badblock list, it still can be bad
>> >> and causes MCE? Does the badblock list get changed during file system
>> >> running? If that is the case, should the file system get a
>> >> notification when it gets changed? If a block is good when I first
>> >> read it, can I still trust it to be good for the second access?
>> >
>> > Yes, if a block is not in the badblocks list, it can still cause an
>> > MCE. This is the latent error case I described above. For a simple read()
>> > via the pmem driver, this will get handled by memcpy_mcsafe. For mmap,
>> > an MCE is inevitable.
>> >
>> > Yes the badblocks list may change while a filesystem is running. The RFC
>> > patches[1] I linked to add a notification for the filesystem when this
>> > happens.
>> >
>>
>> This is really bad and it makes file system implementation much more
>> complicated. And badblock notification does not help very much,
>> because any block can be bad potentially, no matter it is in badblock
>> list or not. And file system has to perform checking for every read,
>> using memcpy_mcsafe. This is disaster for file system like NOVA, which
>> uses pointer de-reference to access data structures on pmem. Now if I
>> want to read a field in an inode on pmem, I have to copy it to DRAM
>> first and make sure memcpy_mcsafe() does not report anything wrong.
>
> You have a good point, and I don't know if I have an answer for this..
> Assuming a system with MCE recovery, maybe NOVA can add a mce handler
> similar to nfit_handle_mce(), and handle errors as they happen, but I'm
> being very hand-wavey here and don't know how much/how well that might
> work..
>
>>
>> > No, if the media, for some reason, 'develops' a bad cell, a second
>> > consecutive read does have a chance of being bad. Once a location has
>> > been marked as bad, it will stay bad till the ACPI clear error 'DSM' has
>> > been called to mark it as clean.
>> >
>>
>> I wonder what happens to write in this case? If a block is bad but not
>> reported in badblock list. Now I write to it without reading first. Do
>> I clear the poison with the write? Or still require an ACPI DSM?
>
> With writes, my understanding is there is still a possibility that an
> internal read-modify-write can happen, and cause an MCE (this is the same
> as writing to a bad DRAM cell, which can also cause an MCE). You can't
> really use the ACPI DSM preemptively because you don't know whether the
> location was bad. The error flow will be something like write causes the
> MCE, a badblock gets added (either through the mce handler or after the
> next reboot), and the recovery path is now the same as a regular badblock.
>

This is different from my understanding. Right now write_pmem() in
pmem_do_bvec() does not use memcpy_mcsafe(). If the block is bad, it
clears the poison and writes to pmem again. It seems to me that writing to
bad blocks does not cause an MCE. Do we need memcpy_mcsafe for pmem stores?

Thanks,
Andiry

>>
>> > [1]: http://www.linux.sgi.com/archives/xfs/2016-06/msg00299.html
>> >
>>
>> Thank you for the patchset. I will look into it.
>>


Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Vishal Verma
On 01/17, Andiry Xu wrote:



> >>
> >> The pmem_do_bvec() read logic is like this:
> >>
> >> pmem_do_bvec()
> >> if (is_bad_pmem())
> >> return -EIO;
> >> else
> >> memcpy_from_pmem();
> >>
> >> Note memcpy_from_pmem() is calling memcpy_mcsafe(). Does this imply
> >> that even if a block is not in the badblock list, it still can be bad
> >> and causes MCE? Does the badblock list get changed during file system
> >> running? If that is the case, should the file system get a
> >> notification when it gets changed? If a block is good when I first
> >> read it, can I still trust it to be good for the second access?
> >
> > Yes, if a block is not in the badblocks list, it can still cause an
> > MCE. This is the latent error case I described above. For a simple read()
> > via the pmem driver, this will get handled by memcpy_mcsafe. For mmap,
> > an MCE is inevitable.
> >
> > Yes the badblocks list may change while a filesystem is running. The RFC
> > patches[1] I linked to add a notification for the filesystem when this
> > happens.
> >
> 
> This is really bad and it makes the file system implementation much more
> complicated. And badblock notification does not help very much,
> because any block can potentially be bad, no matter whether it is in the
> badblock list or not. And the file system has to perform checking for every
> read, using memcpy_mcsafe. This is a disaster for a file system like NOVA,
> which uses pointer dereferences to access data structures on pmem. Now if I
> want to read a field of an inode on pmem, I have to copy it to DRAM
> first and make sure memcpy_mcsafe() does not report anything wrong.

You have a good point, and I don't know if I have an answer for this..
Assuming a system with MCE recovery, maybe NOVA can add an MCE handler
similar to nfit_handle_mce(), and handle errors as they happen, but I'm
being very hand-wavy here and don't know how much/how well that might
work..
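
For what it's worth, a rough sketch of what such a handler might look like,
patterned on the nfit MCE notifier.  mce_register_decode_chain() and struct mce
are existing kernel interfaces; the NOVA-side helpers below are purely
hypothetical.

#include <linux/notifier.h>
#include <asm/mce.h>

/* Hypothetical: called when a machine check is decoded anywhere in the system. */
static int nova_handle_mce(struct notifier_block *nb, unsigned long val,
                           void *data)
{
        struct mce *mce = data;

        /* mce->addr is the physical address the error was reported against. */
        if (!nova_phys_addr_in_pmem(mce->addr))     /* hypothetical helper */
                return NOTIFY_DONE;

        /* Record the bad range so later lookups avoid it, and mark the
         * affected inode/log entries for recovery. */
        nova_mark_range_bad(mce->addr, 64);         /* hypothetical; a real
                                                     * handler would derive the
                                                     * length from mce->misc */
        return NOTIFY_OK;
}

static struct notifier_block nova_mce_dec = {
        .notifier_call = nova_handle_mce,
};

/* At mount/init time:  mce_register_decode_chain(&nova_mce_dec); */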

> 
> > No, if the media, for some reason, 'develops' a bad cell, a second
> > consecutive read does have a chance of being bad. Once a location has
> > been marked as bad, it will stay bad till the ACPI clear error 'DSM' has
> > been called to mark it as clean.
> >
> 
> I wonder what happens to write in this case? If a block is bad but not
> reported in badblock list. Now I write to it without reading first. Do
> I clear the poison with the write? Or still require an ACPI DSM?

With writes, my understanding is that there is still a possibility that an
internal read-modify-write can happen and cause an MCE (this is the same
as writing to a bad DRAM cell, which can also cause an MCE). You can't
really use the ACPI DSM preemptively because you don't know whether the
location was bad. The error flow will be something like: the write causes the
MCE, a badblock gets added (either through the MCE handler or after the
next reboot), and the recovery path is now the same as for a regular badblock.

> 
> > [1]: http://www.linux.sgi.com/archives/xfs/2016-06/msg00299.html
> >
> 
> Thank you for the patchset. I will look into it.
> 


Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Andiry Xu
On Tue, Jan 17, 2017 at 2:37 PM, Vishal Verma  wrote:
> On 01/17, Andiry Xu wrote:
>> Hi,
>>
>> On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma  
>> wrote:
>> > On 01/16, Darrick J. Wong wrote:
>> >> On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
>> >> > On 01/14, Slava Dubeyko wrote:
>> >> > >
>> >> > >  Original Message 
>> >> > > Subject: [LSF/MM TOPIC] Badblocks checking/representation in 
>> >> > > filesystems
>> >> > > Sent: Jan 13, 2017 1:40 PM
>> >> > > From: "Verma, Vishal L" 
>> >> > > To: lsf...@lists.linux-foundation.org
>> >> > > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
>> >> > > linux-fsde...@vger.kernel.org
>> >> > >
>> >> > > > The current implementation of badblocks, where we consult the 
>> >> > > > badblocks
>> >> > > > list for every IO in the block driver works, and is a last option
>> >> > > > failsafe, but from a user perspective, it isn't the easiest 
>> >> > > > interface to
>> >> > > > work with.
>> >> > >
>> >> > > As I remember, FAT and HFS+ specifications contain description of bad 
>> >> > > blocks
>> >> > > (physical sectors) table. I believe that this table was used for the 
>> >> > > case of
>> >> > > floppy media. But, finally, this table becomes to be the completely 
>> >> > > obsolete
>> >> > > artefact because mostly storage devices are reliably enough. Why do 
>> >> > > you need
>> >>
>> >> ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
>> >> doesn't support(??) extents or 64-bit filesystems, and might just be a
>> >> vestigial organ at this point.  XFS doesn't have anything to track bad
>> >> blocks currently
>> >>
>> >> > > in exposing the bad blocks on the file system level?  Do you expect 
>> >> > > that next
>> >> > > generation of NVM memory will be so unreliable that file system needs 
>> >> > > to manage
>> >> > > bad blocks? What's about erasure coding schemes? Do file system 
>> >> > > really need to suffer
>> >> > > from the bad block issue?
>> >> > >
>> >> > > Usually, we are using LBAs and it is the responsibility of storage 
>> >> > > device to map
>> >> > > a bad physical block/page/sector into valid one. Do you mean that we 
>> >> > > have
>> >> > > access to physical NVM memory address directly? But it looks like 
>> >> > > that we can
>> >> > > have a "bad block" issue even we will access data into page cache's 
>> >> > > memory
>> >> > > page (if we will use NVM memory for page cache, of course). So, what 
>> >> > > do you
>> >> > > imply by "bad block" issue?
>> >> >
>> >> > We don't have direct physical access to the device's address space, in
>> >> > the sense the device is still free to perform remapping of chunks of NVM
>> >> > underneath us. The problem is that when a block or address range (as
>> >> > small as a cache line) goes bad, the device maintains a poison bit for
>> >> > every affected cache line. Behind the scenes, it may have already
>> >> > remapped the range, but the cache line poison has to be kept so that
>> >> > there is a notification to the user/owner of the data that something has
>> >> > been lost. Since NVM is byte addressable memory sitting on the memory
>> >> > bus, such a poisoned cache line results in memory errors and SIGBUSes.
>> >> > Compared to traditional storage where an app will get nice and friendly
>> >> > (relatively speaking..) -EIOs. The whole badblocks implementation was
>> >> > done so that the driver can intercept IO (i.e. reads) to _known_ bad
>> >> > locations, and short-circuit them with an EIO. If the driver doesn't
>> >> > catch these, the reads will turn into a memory bus access, and the
>> >> > poison will cause a SIGBUS.
>> >>
>> >> "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
>> >> look kind of like a traditional block device? :)
>> >
>> > Yes, the thing that makes pmem look like a block device :) --
>> > drivers/nvdimm/pmem.c
>> >
>> >>
>> >> > This effort is to try and make this badblock checking smarter - and try
>> >> > and reduce the penalty on every IO to a smaller range, which only the
>> >> > filesystem can do.
>> >>
>> >> Though... now that XFS merged the reverse mapping support, I've been
>> >> wondering if there'll be a resubmission of the device errors callback?
>> >> It still would be useful to be able to inform the user that part of
>> >> their fs has gone bad, or, better yet, if the buffer is still in memory
>> >> someplace else, just write it back out.
>> >>
>> >> Or I suppose if we had some kind of raid1 set up between memories we
>> >> could read one of the other copies and rewrite it into the failing
>> >> region immediately.
>> >
>> > Yes, that is kind of what I was hoping to accomplish via this
>> > discussion. How much would filesystems want to be involved in this sort
>> > of badblocks handling, if at all. I can refresh my patches that provide
>> > the fs notification, but that's the easy bit, and a 

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Vishal Verma
On 01/17, Andiry Xu wrote:
> Hi,
> 
> On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma  
> wrote:
> > On 01/16, Darrick J. Wong wrote:
> >> On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> >> > On 01/14, Slava Dubeyko wrote:
> >> > >
> >> > >  Original Message 
> >> > > Subject: [LSF/MM TOPIC] Badblocks checking/representation in 
> >> > > filesystems
> >> > > Sent: Jan 13, 2017 1:40 PM
> >> > > From: "Verma, Vishal L" 
> >> > > To: lsf...@lists.linux-foundation.org
> >> > > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
> >> > > linux-fsde...@vger.kernel.org
> >> > >
> >> > > > The current implementation of badblocks, where we consult the 
> >> > > > badblocks
> >> > > > list for every IO in the block driver works, and is a last option
> >> > > > failsafe, but from a user perspective, it isn't the easiest 
> >> > > > interface to
> >> > > > work with.
> >> > >
> >> > > As I remember, FAT and HFS+ specifications contain description of bad 
> >> > > blocks
> >> > > (physical sectors) table. I believe that this table was used for the 
> >> > > case of
> >> > > floppy media. But, finally, this table becomes to be the completely 
> >> > > obsolete
> >> > > artefact because mostly storage devices are reliably enough. Why do 
> >> > > you need
> >>
> >> ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
> >> doesn't support(??) extents or 64-bit filesystems, and might just be a
> >> vestigial organ at this point.  XFS doesn't have anything to track bad
> >> blocks currently
> >>
> >> > > in exposing the bad blocks on the file system level?  Do you expect 
> >> > > that next
> >> > > generation of NVM memory will be so unreliable that file system needs 
> >> > > to manage
> >> > > bad blocks? What's about erasure coding schemes? Do file system really 
> >> > > need to suffer
> >> > > from the bad block issue?
> >> > >
> >> > > Usually, we are using LBAs and it is the responsibility of storage 
> >> > > device to map
> >> > > a bad physical block/page/sector into valid one. Do you mean that we 
> >> > > have
> >> > > access to physical NVM memory address directly? But it looks like that 
> >> > > we can
> >> > > have a "bad block" issue even we will access data into page cache's 
> >> > > memory
> >> > > page (if we will use NVM memory for page cache, of course). So, what 
> >> > > do you
> >> > > imply by "bad block" issue?
> >> >
> >> > We don't have direct physical access to the device's address space, in
> >> > the sense the device is still free to perform remapping of chunks of NVM
> >> > underneath us. The problem is that when a block or address range (as
> >> > small as a cache line) goes bad, the device maintains a poison bit for
> >> > every affected cache line. Behind the scenes, it may have already
> >> > remapped the range, but the cache line poison has to be kept so that
> >> > there is a notification to the user/owner of the data that something has
> >> > been lost. Since NVM is byte addressable memory sitting on the memory
> >> > bus, such a poisoned cache line results in memory errors and SIGBUSes.
> >> > Compared to traditional storage where an app will get nice and friendly
> >> > (relatively speaking..) -EIOs. The whole badblocks implementation was
> >> > done so that the driver can intercept IO (i.e. reads) to _known_ bad
> >> > locations, and short-circuit them with an EIO. If the driver doesn't
> >> > catch these, the reads will turn into a memory bus access, and the
> >> > poison will cause a SIGBUS.
> >>
> >> "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
> >> look kind of like a traditional block device? :)
> >
> > Yes, the thing that makes pmem look like a block device :) --
> > drivers/nvdimm/pmem.c
> >
> >>
> >> > This effort is to try and make this badblock checking smarter - and try
> >> > and reduce the penalty on every IO to a smaller range, which only the
> >> > filesystem can do.
> >>
> >> Though... now that XFS merged the reverse mapping support, I've been
> >> wondering if there'll be a resubmission of the device errors callback?
> >> It still would be useful to be able to inform the user that part of
> >> their fs has gone bad, or, better yet, if the buffer is still in memory
> >> someplace else, just write it back out.
> >>
> >> Or I suppose if we had some kind of raid1 set up between memories we
> >> could read one of the other copies and rewrite it into the failing
> >> region immediately.
> >
> > Yes, that is kind of what I was hoping to accomplish via this
> > discussion. How much would filesystems want to be involved in this sort
> > of badblocks handling, if at all. I can refresh my patches that provide
> > the fs notification, but that's the easy bit, and a starting point.
> >
> 
> I have some questions. Why moving badblock handling to file system
> level avoid the checking phase? In file system level for each I/O I
> still have to check 

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Andiry Xu
Hi,

On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma  wrote:
> On 01/16, Darrick J. Wong wrote:
>> On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
>> > On 01/14, Slava Dubeyko wrote:
>> > >
>> > >  Original Message 
>> > > Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
>> > > Sent: Jan 13, 2017 1:40 PM
>> > > From: "Verma, Vishal L" 
>> > > To: lsf...@lists.linux-foundation.org
>> > > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
>> > > linux-fsde...@vger.kernel.org
>> > >
>> > > > The current implementation of badblocks, where we consult the badblocks
>> > > > list for every IO in the block driver works, and is a last option
>> > > > failsafe, but from a user perspective, it isn't the easiest interface 
>> > > > to
>> > > > work with.
>> > >
>> > > As I remember, FAT and HFS+ specifications contain description of bad 
>> > > blocks
>> > > (physical sectors) table. I believe that this table was used for the 
>> > > case of
>> > > floppy media. But, finally, this table becomes to be the completely 
>> > > obsolete
>> > > artefact because mostly storage devices are reliably enough. Why do you 
>> > > need
>>
>> ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
>> doesn't support(??) extents or 64-bit filesystems, and might just be a
>> vestigial organ at this point.  XFS doesn't have anything to track bad
>> blocks currently
>>
>> > > in exposing the bad blocks on the file system level?  Do you expect that 
>> > > next
>> > > generation of NVM memory will be so unreliable that file system needs to 
>> > > manage
>> > > bad blocks? What's about erasure coding schemes? Do file system really 
>> > > need to suffer
>> > > from the bad block issue?
>> > >
>> > > Usually, we are using LBAs and it is the responsibility of storage 
>> > > device to map
>> > > a bad physical block/page/sector into valid one. Do you mean that we have
>> > > access to physical NVM memory address directly? But it looks like that 
>> > > we can
>> > > have a "bad block" issue even we will access data into page cache's 
>> > > memory
>> > > page (if we will use NVM memory for page cache, of course). So, what do 
>> > > you
>> > > imply by "bad block" issue?
>> >
>> > We don't have direct physical access to the device's address space, in
>> > the sense the device is still free to perform remapping of chunks of NVM
>> > underneath us. The problem is that when a block or address range (as
>> > small as a cache line) goes bad, the device maintains a poison bit for
>> > every affected cache line. Behind the scenes, it may have already
>> > remapped the range, but the cache line poison has to be kept so that
>> > there is a notification to the user/owner of the data that something has
>> > been lost. Since NVM is byte addressable memory sitting on the memory
>> > bus, such a poisoned cache line results in memory errors and SIGBUSes.
>> > Compared to traditional storage where an app will get nice and friendly
>> > (relatively speaking..) -EIOs. The whole badblocks implementation was
>> > done so that the driver can intercept IO (i.e. reads) to _known_ bad
>> > locations, and short-circuit them with an EIO. If the driver doesn't
>> > catch these, the reads will turn into a memory bus access, and the
>> > poison will cause a SIGBUS.
>>
>> "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
>> look kind of like a traditional block device? :)
>
> Yes, the thing that makes pmem look like a block device :) --
> drivers/nvdimm/pmem.c
>
>>
>> > This effort is to try and make this badblock checking smarter - and try
>> > and reduce the penalty on every IO to a smaller range, which only the
>> > filesystem can do.
>>
>> Though... now that XFS merged the reverse mapping support, I've been
>> wondering if there'll be a resubmission of the device errors callback?
>> It still would be useful to be able to inform the user that part of
>> their fs has gone bad, or, better yet, if the buffer is still in memory
>> someplace else, just write it back out.
>>
>> Or I suppose if we had some kind of raid1 set up between memories we
>> could read one of the other copies and rewrite it into the failing
>> region immediately.
>
> Yes, that is kind of what I was hoping to accomplish via this
> discussion. How much would filesystems want to be involved in this sort
> of badblocks handling, if at all. I can refresh my patches that provide
> the fs notification, but that's the easy bit, and a starting point.
>

I have some questions. Why does moving badblock handling to the file system
level avoid the checking phase? At the file system level, for each I/O I
still have to check the badblock list, right? Do you mean that during mount
it can go through the pmem device, locate all the data structures
mangled by badblocks, and handle them accordingly, so that during
normal operation the badblocks will never be accessed? Or, if there is

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-17 Thread Vishal Verma
On 01/16, Darrick J. Wong wrote:
> On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> > On 01/14, Slava Dubeyko wrote:
> > > 
> > >  Original Message 
> > > Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
> > > Sent: Jan 13, 2017 1:40 PM
> > > From: "Verma, Vishal L" 
> > > To: lsf...@lists.linux-foundation.org
> > > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
> > > linux-fsde...@vger.kernel.org
> > > 
> > > > The current implementation of badblocks, where we consult the badblocks
> > > > list for every IO in the block driver works, and is a last option
> > > > failsafe, but from a user perspective, it isn't the easiest interface to
> > > > work with.
> > > 
> > > As I remember, FAT and HFS+ specifications contain description of bad 
> > > blocks
> > > (physical sectors) table. I believe that this table was used for the case 
> > > of
> > > floppy media. But, finally, this table becomes to be the completely 
> > > obsolete
> > > artefact because mostly storage devices are reliably enough. Why do you 
> > > need
> 
> ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
> doesn't support(??) extents or 64-bit filesystems, and might just be a
> vestigial organ at this point.  XFS doesn't have anything to track bad
> blocks currently
> 
> > > in exposing the bad blocks on the file system level?  Do you expect that 
> > > next
> > > generation of NVM memory will be so unreliable that file system needs to 
> > > manage
> > > bad blocks? What's about erasure coding schemes? Do file system really 
> > > need to suffer
> > > from the bad block issue? 
> > > 
> > > Usually, we are using LBAs and it is the responsibility of storage device 
> > > to map
> > > a bad physical block/page/sector into valid one. Do you mean that we have
> > > access to physical NVM memory address directly? But it looks like that we 
> > > can
> > > have a "bad block" issue even we will access data into page cache's memory
> > > page (if we will use NVM memory for page cache, of course). So, what do 
> > > you
> > > imply by "bad block" issue? 
> > 
> > We don't have direct physical access to the device's address space, in
> > the sense the device is still free to perform remapping of chunks of NVM
> > underneath us. The problem is that when a block or address range (as
> > small as a cache line) goes bad, the device maintains a poison bit for
> > every affected cache line. Behind the scenes, it may have already
> > remapped the range, but the cache line poison has to be kept so that
> > there is a notification to the user/owner of the data that something has
> > been lost. Since NVM is byte addressable memory sitting on the memory
> > bus, such a poisoned cache line results in memory errors and SIGBUSes.
> > Compared to traditional storage where an app will get nice and friendly
> > (relatively speaking..) -EIOs. The whole badblocks implementation was
> > done so that the driver can intercept IO (i.e. reads) to _known_ bad
> > locations, and short-circuit them with an EIO. If the driver doesn't
> > catch these, the reads will turn into a memory bus access, and the
> > poison will cause a SIGBUS.
> 
> "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
> look kind of like a traditional block device? :)

Yes, the thing that makes pmem look like a block device :) --
drivers/nvdimm/pmem.c

> 
> > This effort is to try and make this badblock checking smarter - and try
> > and reduce the penalty on every IO to a smaller range, which only the
> > filesystem can do.
> 
> Though... now that XFS merged the reverse mapping support, I've been
> wondering if there'll be a resubmission of the device errors callback?
> It still would be useful to be able to inform the user that part of
> their fs has gone bad, or, better yet, if the buffer is still in memory
> someplace else, just write it back out.
> 
> Or I suppose if we had some kind of raid1 set up between memories we
> could read one of the other copies and rewrite it into the failing
> region immediately.

Yes, that is kind of what I was hoping to accomplish via this
discussion. How much would filesystems want to be involved in this sort
of badblocks handling, if at all? I can refresh my patches that provide
the fs notification, but that's the easy bit, and a starting point.

> 
> > > > A while back, Dave Chinner had suggested a move towards smarter
> > > > handling, and I posted initial RFC patches [1], but since then the topic
> > > > hasn't really moved forward.
> > > > 
> > > > I'd like to propose and have a discussion about the following new
> > > > functionality:
> > > > 
> > > > 1. Filesystems develop a native representation of badblocks. For
> > > > example, in xfs, this would (presumably) be linked to the reverse
> > > > mapping btree. The filesystem representation has the potential to be 
> > > > more efficient than the block driver doing the check, as the fs 

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-16 Thread Darrick J. Wong
On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> On 01/14, Slava Dubeyko wrote:
> > 
> >  Original Message 
> > Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
> > Sent: Jan 13, 2017 1:40 PM
> > From: "Verma, Vishal L" 
> > To: lsf...@lists.linux-foundation.org
> > Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
> > linux-fsde...@vger.kernel.org
> > 
> > > The current implementation of badblocks, where we consult the badblocks
> > > list for every IO in the block driver works, and is a last option
> > > failsafe, but from a user perspective, it isn't the easiest interface to
> > > work with.
> > 
> > As I remember, the FAT and HFS+ specifications contain a description of a bad
> > blocks (physical sectors) table. I believe that this table was used for
> > floppy media. But, finally, this table became a completely obsolete
> > artefact because most storage devices are reliable enough. Why do you need

ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
doesn't support(??) extents or 64-bit filesystems, and might just be a
vestigial organ at this point.  XFS doesn't have anything to track bad
blocks currently

> > to expose the bad blocks at the file system level?  Do you expect that the
> > next generation of NVM memory will be so unreliable that the file system
> > needs to manage bad blocks? What about erasure coding schemes? Does the file
> > system really need to suffer from the bad block issue?
> > 
> > Usually, we are using LBAs and it is the responsibility of the storage device
> > to map a bad physical block/page/sector into a valid one. Do you mean that we
> > have access to physical NVM memory addresses directly? But it looks like we
> > can have a "bad block" issue even when we access data in a page cache memory
> > page (if we use NVM memory for the page cache, of course). So, what do you
> > imply by the "bad block" issue?
> 
> We don't have direct physical access to the device's address space, in
> the sense the device is still free to perform remapping of chunks of NVM
> underneath us. The problem is that when a block or address range (as
> small as a cache line) goes bad, the device maintains a poison bit for
> every affected cache line. Behind the scenes, it may have already
> remapped the range, but the cache line poison has to be kept so that
> there is a notification to the user/owner of the data that something has
> been lost. Since NVM is byte addressable memory sitting on the memory
> bus, such a poisoned cache line results in memory errors and SIGBUSes.
> Compared to traditional storage where an app will get nice and friendly
> (relatively speaking..) -EIOs. The whole badblocks implementation was
> done so that the driver can intercept IO (i.e. reads) to _known_ bad
> locations, and short-circuit them with an EIO. If the driver doesn't
> catch these, the reads will turn into a memory bus access, and the
> poison will cause a SIGBUS.

"driver" ... you mean XFS?  Or do you mean the thing that makes pmem
look kind of like a traditional block device? :)

> This effort is to try and make this badblock checking smarter - and try
> and reduce the penalty on every IO to a smaller range, which only the
> filesystem can do.

Though... now that XFS merged the reverse mapping support, I've been
wondering if there'll be a resubmission of the device errors callback?
It still would be useful to be able to inform the user that part of
their fs has gone bad, or, better yet, if the buffer is still in memory
someplace else, just write it back out.

Or I suppose if we had some kind of raid1 set up between memories we
could read one of the other copies and rewrite it into the failing
region immediately.
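
A back-of-the-envelope sketch of that idea, assuming two byte-addressable
copies and the 2017-era memcpy_mcsafe() that returns 0 on success (everything
else here is illustrative, not an existing API):

/* Read from the primary replica; on a media error fall back to the mirror
 * and rewrite the failing region in place. */
static int mirrored_pmem_read(void *dst, void *primary, void *mirror,
                              size_t len)
{
        if (!memcpy_mcsafe(dst, primary, len))
                return 0;                       /* primary copy was clean */

        if (memcpy_mcsafe(dst, mirror, len))
                return -EIO;                    /* both copies are bad */

        /* Repair: write the good data back over the poisoned range
         * (persistence flushing omitted; clearing the poison may also need
         * the ACPI clear-error DSM, as discussed elsewhere in the thread). */
        memcpy(primary, dst, len);
        return 0;
}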

> > > A while back, Dave Chinner had suggested a move towards smarter
> > > handling, and I posted initial RFC patches [1], but since then the topic
> > > hasn't really moved forward.
> > > 
> > > I'd like to propose and have a discussion about the following new
> > > functionality:
> > > 
> > > 1. Filesystems develop a native representation of badblocks. For
> > > example, in xfs, this would (presumably) be linked to the reverse
> > > mapping btree. The filesystem representation has the potential to be 
> > > more efficient than the block driver doing the check, as the fs can
> > > check the IO happening on a file against just that file's range. 

OTOH that means we'd have to check /every/ file IO request against the
rmapbt, which will make things really slow.  I suspect it might be
preferable just to let the underlying pmem driver throw an error at us.

(Or possibly just cache the bad extents in memory.)
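
To illustrate the "cache the bad extents in memory" option, a small sketch of
what the per-IO lookup could look like.  A plain sorted array stands in for
whatever structure (rmapbt, interval tree) a real implementation would use;
none of this is existing XFS code.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct bad_extent {
        uint64_t start;                 /* first bad block */
        uint64_t len;                   /* number of bad blocks */
};

/* In-memory cache of known-bad extents, sorted by start and non-overlapping. */
struct bad_extent_cache {
        struct bad_extent *ext;
        size_t nr;
};

/* Return true if the IO range [start, start + len) overlaps any bad extent. */
static bool io_range_is_bad(const struct bad_extent_cache *c,
                            uint64_t start, uint64_t len)
{
        size_t lo = 0, hi = c->nr;

        /* Binary search for the first extent that ends after 'start'. */
        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;

                if (c->ext[mid].start + c->ext[mid].len <= start)
                        lo = mid + 1;
                else
                        hi = mid;
        }
        return lo < c->nr && c->ext[lo].start < start + len;
}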

> > What do you mean by "file system can check the IO happening on a file"?
> > Do you mean read or write operation? What's about metadata?
> 
> For the purpose described above, i.e. returning early EIOs when
> possible, this will be limited to reads and 

RE: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-15 Thread Slava Dubeyko

-Original Message-
From: Vishal Verma [mailto:vishal.l.ve...@intel.com] 
Sent: Friday, January 13, 2017 4:49 PM
To: Slava Dubeyko <vyacheslav.dube...@wdc.com>
Cc: lsf...@lists.linux-foundation.org; linux-nvdimm@lists.01.org; 
linux-bl...@vger.kernel.org; Linux FS Devel <linux-fsde...@vger.kernel.org>; 
Viacheslav Dubeyko <sl...@dubeyko.com>
Subject: Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems



> We don't have direct physical access to the device's address space, in the 
> sense
> the device is still free to perform remapping of chunks of NVM underneath us.
> The problem is that when a block or address range (as small as a cache line) 
> goes bad,
> the device maintains a poison bit for every affected cache line. Behind the 
> scenes,
> it may have already remapped the range, but the cache line poison has to be 
> kept so that
> there is a notification to the user/owner of the data that something has been 
> lost.
> Since NVM is byte addressable memory sitting on the memory bus, such a 
> poisoned
> cache line results in memory errors and SIGBUSes.
> Compared to traditional storage where an app will get nice and friendly
> (relatively speaking..) -EIOs.
> The whole badblocks implementation was done so that the driver can intercept 
> IO (i.e. reads)
> to _known_ bad locations, and short-circuit them with an EIO. If the driver 
> doesn't catch these,
> the reads will turn into a memory bus access, and the poison will cause a 
> SIGBUS.
>
> This effort is to try and make this badblock checking smarter - and try and 
> reduce the penalty
> on every IO to a smaller range, which only the filesystem can do.

I am still slightly puzzled, and I cannot understand why the situation looks
like a dead end. As far as I can see, first of all, an NVM device is able to
use hardware-based LDPC, Reed-Solomon, or any other fancy error-correction
code. That could provide a baseline of error correction. It can also provide a
way to estimate the BER. So, if an NVM memory address range degrades gradually
(over weeks or months), then it is practically possible to remap and migrate
the affected address ranges in the background. Otherwise, if NVM memory is so
unreliable that an address range can degrade within seconds or minutes, then
who will use such NVM memory?

OK. Let's imagine that the NVM memory device has no internal hardware-based
error correction scheme. The next level of defense could be an erasure coding
scheme at the device driver level: any piece of data can be protected by
parities, and the device driver would be responsible for managing the erasure
coding scheme. This will increase read latency whenever an affected memory
page has to be recovered, but, finally, all recovery activity happens behind
the scenes and the file system remains unaware of it.
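As a toy illustration of the kind of driver-level scheme I mean (single-parity
XOR, as in RAID-5; purely illustrative, not a real driver):

#include <stddef.h>
#include <stdint.h>

#define STRIPE_BLOCKS 4         /* data blocks per parity block (example value) */
#define BLOCK_SIZE 4096

/* Parity is the XOR of all data blocks in the stripe. */
static void compute_parity(const uint8_t data[STRIPE_BLOCKS][BLOCK_SIZE],
                           uint8_t parity[BLOCK_SIZE])
{
        for (size_t off = 0; off < BLOCK_SIZE; off++) {
                uint8_t p = 0;

                for (size_t blk = 0; blk < STRIPE_BLOCKS; blk++)
                        p ^= data[blk][off];
                parity[off] = p;
        }
}

/*
 * If exactly one data block is unreadable (e.g. poisoned), it can be rebuilt
 * from the surviving blocks plus the parity block.
 */
static void rebuild_block(const uint8_t data[STRIPE_BLOCKS][BLOCK_SIZE],
                          const uint8_t parity[BLOCK_SIZE],
                          size_t lost, uint8_t out[BLOCK_SIZE])
{
        for (size_t off = 0; off < BLOCK_SIZE; off++) {
                uint8_t p = parity[off];

                for (size_t blk = 0; blk < STRIPE_BLOCKS; blk++)
                        if (blk != lost)
                                p ^= data[blk][off];
                out[off] = p;
        }
}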

If you are not going to provide any erasure coding or error correction scheme,
then it is a really bad case. The fsck tool is not a regular-case tool but a
last resort. If you are going to rely on the fsck tool, then simply forget
about using your hardware. Some file systems have no fsck tool at all, and
some people really believe that a file system has to work without the support
of an fsck tool. Even if a mature file system has a reliable fsck tool, the
probability of recovering the file system is very low in the case of serious
metadata corruption. So, it means that you are suggesting a technique under
which we will lose whole file system volumes on a regular basis, without any
hope of recovering the data. Even if the file system has snapshots then,
again, we have no hope, because we can suffer a read error for an operation
on a snapshot as well.

But if we have support for an erasure coding scheme and the NVM device
discovers a poisoned cache line in some memory page then, I suppose, such a
situation could look like a page fault, and the memory subsystem would need to
re-read the page, with background recovery of the memory page's content.

It sounds to me like we simply have some poorly designed hardware, and it is
impossible to push such an issue up to the file system level. I believe that
this issue can be managed by the block device or DAX subsystem in the presence
of an erasure coding scheme. Otherwise, no file system is able to survive in
such a wild environment, because I assume that any file system volume will end
up in an unrecoverable state in 50% (or significantly more) of bad block
discoveries: any damage to a metadata block can result in a severely
inconsistent state of the file system's metadata structures, and it is a very
non-trivial task to recover a consistent state of those structures when some
part of them is lost.

> > > 
> > > A while back, Dave Chinner had suggested a move towards smarter 
> > > handling, and I posted init

Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-13 Thread Vishal Verma
On 01/14, Slava Dubeyko wrote:
> 
>  Original Message 
> Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
> Sent: Jan 13, 2017 1:40 PM
> From: "Verma, Vishal L" 
> To: lsf...@lists.linux-foundation.org
> Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
> linux-fsde...@vger.kernel.org
> 
> > The current implementation of badblocks, where we consult the badblocks
> > list for every IO in the block driver works, and is a last option
> > failsafe, but from a user perspective, it isn't the easiest interface to
> > work with.
> 
> As I remember, the FAT and HFS+ specifications contain a description of a
> bad blocks (physical sectors) table. I believe that this table was used for
> floppy media, but it has since become a completely obsolete artefact because
> most storage devices are reliable enough. Why do you need to expose bad
> blocks at the file system level? Do you expect that the next generation of
> NVM memory will be so unreliable that the file system needs to manage bad
> blocks? What about erasure coding schemes? Does a file system really need to
> suffer from the bad block issue?
> 
> Usually, we use LBAs, and it is the responsibility of the storage device to
> map a bad physical block/page/sector to a valid one. Do you mean that we
> have direct access to the physical NVM memory address? But it looks like we
> can have a "bad block" issue even when we access data in a page cache memory
> page (if we use NVM memory for the page cache, of course). So, what do you
> imply by the "bad block" issue?

We don't have direct physical access to the device's address space, in
the sense the device is still free to perform remapping of chunks of NVM
underneath us. The problem is that when a block or address range (as
small as a cache line) goes bad, the device maintains a poison bit for
every affected cache line. Behind the scenes, it may have already
remapped the range, but the cache line poison has to be kept so that
there is a notification to the user/owner of the data that something has
been lost. Since NVM is byte addressable memory sitting on the memory
bus, such a poisoned cache line results in memory errors and SIGBUSes.
Compared to traditional storage where an app will get nice and friendly
(relatively speaking..) -EIOs. The whole badblocks implementation was
done so that the driver can intercept IO (i.e. reads) to _known_ bad
locations, and short-circuit them with an EIO. If the driver doesn't
catch these, the reads will turn into a memory bus access, and the
poison will cause a SIGBUS.
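To make the intercept concrete, here is a simplified, self-contained sketch
(not the actual pmem or badblocks code; all names below are invented):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the driver's badblocks list (sector units). */
struct bb_range {
        uint64_t sector;
        uint64_t nsectors;
};

static bool bb_overlaps(const struct bb_range *bb, unsigned int nbb,
                        uint64_t sector, uint64_t nsectors)
{
        for (unsigned int i = 0; i < nbb; i++)
                if (sector < bb[i].sector + bb[i].nsectors &&
                    bb[i].sector < sector + nsectors)
                        return true;
        return false;
}

/*
 * Read path sketch: short-circuit with -EIO for known-bad ranges so the
 * access never reaches the poisoned media and never turns into a SIGBUS.
 */
static int pmem_read_sketch(const void *media, const struct bb_range *bb,
                            unsigned int nbb, uint64_t sector,
                            uint64_t nsectors, void *buf)
{
        if (bb_overlaps(bb, nbb, sector, nsectors))
                return -EIO;

        memcpy(buf, (const uint8_t *)media + sector * 512, nsectors * 512);
        return 0;
}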

This effort is to try and make this badblock checking smarter - and try
and reduce the penalty on every IO to a smaller range, which only the
filesystem can do.

> 
> > 
> > A while back, Dave Chinner had suggested a move towards smarter
> > handling, and I posted initial RFC patches [1], but since then the topic
> > hasn't really moved forward.
> > 
> > I'd like to propose and have a discussion about the following new
> > functionality:
> > 
> > 1. Filesystems develop a native representation of badblocks. For
> > example, in xfs, this would (presumably) be linked to the reverse
> > mapping btree. The filesystem representation has the potential to be 
> > more efficient than the block driver doing the check, as the fs can
> > check the IO happening on a file against just that file's range. 
> 
> What do you mean by "file system can check the IO happening on a file"?
> Do you mean a read or write operation? What about metadata?

For the purpose described above, i.e. returning early EIOs when
possible, this will be limited to reads and metadata reads. If we're
about to do a metadata read, and realize the block(s) about to be read
are on the badblocks list, then we do the same thing as when we discover
other kinds of metadata corruption.

> 
> If we are talking about discovering a bad block on a read operation, then
> rarely is a modern file system able to survive it, whether for metadata or
> for user data. Let's imagine we have a really mature file system driver;
> what does it mean to encounter a bad block? The failure to read a logical
> block of some metadata (a bad block) means that we are unable to extract
> some part of a metadata structure. From the file system driver's point of
> view, it looks like the file system is corrupted; we need to stop file
> system operations and, finally, check and recover the file system volume by
> means of the fsck tool. If we find a bad block in some user file then,
> again, it looks like an issue. Some file systems simply return an
> "unrecovered read error". Others, theoretically, are able to survive
> because of snapshots, for example. But, anyway, it will look like a
> read-only mount state, and the user will need to resolve the trouble by
> hand.

As far as I can tell, all of these things remain the same. The goal here
isn't to survive more NVM badblocks 

RE: [LSF/MM TOPIC] Badblocks checking/representation in filesystems

2017-01-13 Thread Slava Dubeyko

 Original Message 
Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
Sent: Jan 13, 2017 1:40 PM
From: "Verma, Vishal L" 
To: lsf...@lists.linux-foundation.org
Cc: linux-nvdimm@lists.01.org, linux-bl...@vger.kernel.org, 
linux-fsde...@vger.kernel.org

> The current implementation of badblocks, where we consult the badblocks
> list for every IO in the block driver works, and is a last option
> failsafe, but from a user perspective, it isn't the easiest interface to
> work with.

As I remember, the FAT and HFS+ specifications contain a description of a
bad blocks (physical sectors) table. I believe that this table was used for
floppy media, but it has since become a completely obsolete artefact because
most storage devices are reliable enough. Why do you need to expose bad
blocks at the file system level? Do you expect that the next generation of
NVM memory will be so unreliable that the file system needs to manage bad
blocks? What about erasure coding schemes? Does a file system really need to
suffer from the bad block issue?

Usually, we use LBAs, and it is the responsibility of the storage device to
map a bad physical block/page/sector to a valid one. Do you mean that we
have direct access to the physical NVM memory address? But it looks like we
can have a "bad block" issue even when we access data in a page cache memory
page (if we use NVM memory for the page cache, of course). So, what do you
imply by the "bad block" issue?

> 
> A while back, Dave Chinner had suggested a move towards smarter
> handling, and I posted initial RFC patches [1], but since then the topic
> hasn't really moved forward.
> 
> I'd like to propose and have a discussion about the following new
> functionality:
> 
> 1. Filesystems develop a native representation of badblocks. For
> example, in xfs, this would (presumably) be linked to the reverse
> mapping btree. The filesystem representation has the potential to be 
> more efficient than the block driver doing the check, as the fs can
> check the IO happening on a file against just that file's range. 

What do you mean by "file system can check the IO happening on a file"?
Do you mean a read or write operation? What about metadata?

If we are talking about discovering a bad block on a read operation, then
rarely is a modern file system able to survive it, whether for metadata or
for user data. Let's imagine we have a really mature file system driver;
what does it mean to encounter a bad block? The failure to read a logical
block of some metadata (a bad block) means that we are unable to extract
some part of a metadata structure. From the file system driver's point of
view, it looks like the file system is corrupted; we need to stop file
system operations and, finally, check and recover the file system volume by
means of the fsck tool. If we find a bad block in some user file then,
again, it looks like an issue. Some file systems simply return an
"unrecovered read error". Others, theoretically, are able to survive
because of snapshots, for example. But, anyway, it will look like a
read-only mount state, and the user will need to resolve the trouble by
hand.

If we are talking about discovering a bad block during a write operation
then, again, we are in trouble. Usually, we use an asynchronous model of
write/flush operations: we first prepare a consistent state of all our
metadata structures in memory, and the flush operations for metadata and user
data can happen at different times. So what should be done if we discover a
bad block for any piece of metadata or user data? Simple tracking of bad
blocks is not enough at all. Let's consider user data first. If we cannot
write some file's block successfully then we have two options: (1) forget
about this piece of data, or (2) try to change the LBA associated with this
piece of data. Re-allocating the LBA for a discovered bad block (in the user
data case) sounds like a real pain, because you need to rebuild the metadata
that tracks the location of that part of the file, and that sounds like a
practically impossible operation for an LFS file system, for example. If we
have trouble flushing any part of the metadata, it sounds like a complete
disaster for any file system.
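Just to illustrate option (2) and why it hurts, here is a toy sketch (all
names and values are invented, nothing here is from a real file system):

#include <stdbool.h>
#include <stdint.h>

#define FILE_BLOCKS 16

/* Toy block map: file block index -> device LBA (a real fs uses extent trees). */
static uint64_t block_map[FILE_BLOCKS] = { 40, 41, 42, 43 };

/* Toy environment: LBA 42 is "bad", and fresh LBAs are handed out from 100. */
static uint64_t next_free_lba = 100;

static bool write_lba(uint64_t lba)      { return lba != 42; }
static uint64_t alloc_lba(void)          { return next_free_lba++; }
static void record_bad_lba(uint64_t lba) { (void)lba; /* add to badblocks */ }

/*
 * Option (2) above: if the write to the current LBA fails, point the file
 * block at a freshly allocated LBA and remember the old one as bad.  With an
 * extent-based mapping this also means splitting the extent that covered the
 * block, which is where the real pain (and the LFS problem) comes from.
 */
static int write_file_block(uint32_t fblock)
{
        uint64_t lba = block_map[fblock];

        if (write_lba(lba))
                return 0;               /* common case: write succeeded */

        record_bad_lba(lba);
        lba = alloc_lba();
        if (!write_lba(lba))
                return -1;              /* replacement block failed too */

        block_map[fblock] = lba;        /* metadata update, must be journaled */
        return 0;
}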

Are you really sure that the file system should handle the bad block issue?

> In contrast, today, the block driver checks against the whole block device
> range for every IO. On encountering badblocks, the filesystem can
> generate a better notification/error message that points the user to 
> (file, offset) as opposed to the block driver, which can only provide
> (block-device, sector).
>
> 2. The block layer adds a notifier to badblock addition/removal
> operations, which the filesystem subscribes to, and uses to maintain its
> badblocks accounting. (This part is implemented as a proof of concept in
> the RFC mentioned above [1]).
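As I understand this proposal, the subscription side could look roughly like
the following sketch (the names here are invented, not the RFC's actual
interface):

#include <stddef.h>
#include <stdint.h>

enum bb_event { BB_ADDED, BB_REMOVED };

typedef void (*bb_notify_fn)(enum bb_event ev, uint64_t sector,
                             uint64_t nsectors, void *fs_private);

struct bb_subscriber {
        bb_notify_fn            fn;
        void                    *fs_private;   /* e.g. the fs superblock */
        struct bb_subscriber    *next;
};

static struct bb_subscriber *subscribers;

/* Filesystem side: subscribe at mount time. */
static void bb_subscribe(struct bb_subscriber *sub)
{
        sub->next = subscribers;
        subscribers = sub;
}

/* Block layer side: called whenever a badblock range is added or removed. */
static void bb_notify_all(enum bb_event ev, uint64_t sector, uint64_t nsectors)
{
        for (struct bb_subscriber *s = subscribers; s; s = s->next)
                s->fn(ev, sector, nsectors, s->fs_private);
}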

I am not sure that any bad