Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Jesse
Thanks Мяу!
I will ensure I reply all :)

On 19 June 2017 at 23:38, Adam Borowski  wrote:
> On Mon, Jun 19, 2017 at 12:48:54PM +0300, Ivan Sizov wrote:
>> 2017-06-19 12:32 GMT+03:00 Jesse :
>> > So I guess that means when I initiate a post, I also need to send it
>> > to myself as well as the mail list.
>> You need to do it in the reply only, not in the initial post.
>>
>> > Does it make any difference where I put respective addresses, eg: TO: CC: 
>> > BCC:
>> You need to put a person to whom you reply in "TO" field and mailing
>> list in "CC" field.
>
> Any mail client I know (but I haven't looked at many...) can do all of this
> by "Reply All" (a button by that name in Thunderbird, 'g' in mutt, ...).
>
> It's worth noting that vger lists have rules different to those in most of
> Free Software communities: on vger, you're supposed to send copies to
> everyone -- pretty much everywhere else you are expected to send to the list
> only.  This is done by "Reply List" (in Thunderbird, 'L' in mutt, ...).
> Such lists do add a set of "List-*:" headers that help the client.
>
>
> Мяу!
> --
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ A dumb species has no way to open a tuna can.
> ⢿⡄⠘⠷⠚⠋⠀ A smart species invents a can opener.
> ⠈⠳⣄ A master species delegates.


Re: Reply Urgent

2017-06-19 Thread INFO

Hello,

How are you doing? I have been sent to inform you that, We have an  
inheritance of a deceased client with your surname. Contact Mr Andrew  
Bailey Reply Email To: myinf...@gmail.com with your "Full Names" for  
more info.  Thanks for your understanding.


Reply ASAP thank you.

Melissa.
--
Corporate email of Hospital Universitario del Valle E.S.E
***

"We are re-sizing ourselves to grow!"

**




Re: [PATCH v7 00/22] fs: enhanced writeback error reporting with errseq_t (pile #1)

2017-06-19 Thread Stephen Rothwell
Hi Jeff,

On Mon, 19 Jun 2017 12:23:46 -0400 Jeff Layton  wrote:
>
> If there are no major objections to this set, I'd like to have
> linux-next start picking it up to get some wider testing. What's the
> right vehicle for this, given that it touches stuff all over the tree?
> 
> I can see 3 potential options:
> 
> 1) I could just pull these into the branch that Stephen is already
> picking up for file-locks in my tree
> 
> 2) I could put them into a new branch, and have Stephen pull that one in
> addition to the file-locks branch
> 
> 3) It could go in via someone else's tree entirely (Andrew or Al's
> maybe?)
> 
> I'm fine with any of these. Anyone have thoughts?

Given that this is a one-off development, either 1 or 3 (in Al's tree)
would be fine.  2 is a possibility (but people forget to ask me to
remove one-shot trees :-()

-- 
Cheers,
Stephen Rothwell


RFC: Compression - calculate entropy for data set

2017-06-19 Thread Timofey Titovets
Hi, for the last several days I have been working on entropy calculation
that could be usable in the btrfs compression code (to detect poorly
compressible data).

I've implemented:
- average/mean of byte values (has accuracy problems)
- Shannon entropy
- Shannon entropy using only integer arithmetic (accuracy within ±0.5%
of the float version)

Everything is written in C with C++ inserts and can easily be ported to
kernel code if needed.
Repo here:
https://github.com/Nefelim4ag/Entropy_Calculation
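
For reference, a minimal floating-point sketch of byte-frequency Shannon
entropy looks like the code below (illustration only, not the code from the
repo above; an integer-only variant would replace log2() with a fixed-point
approximation):

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Bits of entropy per byte: ~0.0 = trivially compressible, ~8.0 = random-looking. */
static double shannon_entropy(const uint8_t *buf, size_t len)
{
        uint32_t freq[256] = { 0 };
        double entropy = 0.0;
        size_t i;

        if (!len)
                return 0.0;

        /* Count how often each byte value occurs. */
        for (i = 0; i < len; i++)
                freq[buf[i]]++;

        /* Sum -p * log2(p) over the observed byte values. */
        for (i = 0; i < 256; i++) {
                if (freq[i]) {
                        double p = (double)freq[i] / (double)len;
                        entropy -= p * log2(p);
                }
        }
        return entropy; /* compare against a cut-off to decide whether to compress */
}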

It would be great if someone is interested in profiling and running
performance tests on this.

My crude tests with ~$ time and 8 MB of test data show that lzo at
levels 1-6 is the fastest way to detect whether data is compressible,
and that integer Shannon entropy is about 5 times faster than any gzip
level.

Thanks!

P.S.
I got this idea from:
https://btrfs.wiki.kernel.org/index.php/Project_ideas
 - Compression enhancements
- heuristics -- try to learn in a simple way how well the file
data compress, or not

-- 
Have a nice day,
Timofey.


Re: [PATCH v7 15/22] dax: set errors in mapping when writeback fails

2017-06-19 Thread Ross Zwisler
On Sat, Jun 17, 2017 at 08:39:53AM -0400, Jeff Layton wrote:
> On Fri, 2017-06-16 at 15:34 -0400, Jeff Layton wrote:
> > Jan Kara's description for this patch is much better than mine, so I'm
> > quoting it verbatim here:
> > 
> > DAX currently doesn't set errors in the mapping when cache flushing
> > fails in dax_writeback_mapping_range(). Since this function can get
> > called only from fsync(2) or sync(2), this is actually as good as it can
> > currently get since we correctly propagate the error up from
> > dax_writeback_mapping_range() to filemap_fdatawrite()
> > 
> > However, in the future better writeback error handling will enable us to
> > properly report these errors on fsync(2) even if there are multiple file
> > descriptors open against the file or if sync(2) gets called before
> > fsync(2). So convert DAX to using standard error reporting through the
> > mapping.
> > 
> > Signed-off-by: Jeff Layton 
> > Reviewed-by: Jan Kara 
> > Reviewed-by: Christoph Hellwig 
> > Reviewed-and-Tested-by: Ross Zwisler 
> > ---
> >  fs/dax.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/dax.c b/fs/dax.c
> > index 9899f07acf72..c663e8cc2a76 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -856,8 +856,10 @@ int dax_writeback_mapping_range(struct address_space 
> > *mapping,
> >  
> > ret = dax_writeback_one(bdev, dax_dev, mapping,
> > indices[i], pvec.pages[i]);
> > -   if (ret < 0)
> > +   if (ret < 0) {
> > +   mapping_set_error(mapping, ret);
> > goto out;
> > +   }
> > }
> > }
> >  out:
> 
> I should point out here that Ross had an issue with this patch in an
> earlier set, that I addressed with a flag in the last set. The flag is
> icky though.
> 
> In this set, patch #6 should make it unnecessary:
> 
> mm: clear AS_EIO/AS_ENOSPC when writeback initiation fails
> 
> Ross, could you test that this set still works ok for you with dax? It
> should apply reasonably cleanly on top of linux-next.

Yep, v7 passes my DAX testing.


Re: [PATCH v2 0/2] Ensure size values are rounded down

2017-06-19 Thread David Sterba
On Fri, Jun 16, 2017 at 02:39:18PM +0300, Nikolay Borisov wrote:
> Hello, 
> 
> Here is a small series which fixes an issue that got reported internally to
> SUSE and which affects devices whose size is not a multiple of sectorsize. More
> information can be found in the changelog for patch 2.
> 
> The first patch in the series re-implements the btrfs_device_total_bytes
> getter/setters manually so that we can add a WARN_ON to catch future
> offenders.
> I haven't implemented all 4 functions that the macros generate since we only
> ever use the getter and setter.
> 
> The second patch ensures that all the values passed to the setter are rounded 
> down to a multiple of sectorsize. I also have an xfstest patch which passes
> after this series is applied. 
> 
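For illustration, rounding a byte count down to a multiple of sectorsize is a
simple mask when sectorsize is a power of two; a minimal sketch (not the
kernel's own round_down() helper):

#include <stdint.h>

static inline uint64_t round_down_to_sectorsize(uint64_t bytes, uint32_t sectorsize)
{
        /* sectorsize is a power of two (e.g. 4096), so masking works */
        return bytes & ~((uint64_t)sectorsize - 1);
}

/* e.g. a 10000005120-byte device with 4096-byte sectors -> 10000003072 bytes */
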
> Changes since v1: 
>  * Moved some of the text from previous cover letter, into patch 2 to make 
>  explanation of the bug more coherent. 
>  * The first patch now only implements getter/setter while the 2nd patch 
>  adds the WARN_ON 
> 
> Nikolay Borisov (2):
>   btrfs: Manually implement device_total_bytes getter/setter
>   btrfs: Round down values which are written for total_bytes_size

Added to the queue, with the fixups.


Re: [PATCH v7 00/22] fs: enhanced writeback error reporting with errseq_t (pile #1)

2017-06-19 Thread Jeff Layton
On Fri, 2017-06-16 at 15:34 -0400, Jeff Layton wrote:
> v7:
> ===
> This is the seventh posting of the patchset to revamp the way writeback
> errors are tracked and reported.
> 
> The main difference from the v6 posting is the removal of the
> FS_WB_ERRSEQ flag. That requires a few other incremental patches in the
> writeback code to ensure that both error tracking models are handled
> in a suitable way.
> 
> Also, a bit more cleanup of the metadata writeback codepaths, and some
> documentation updates.
> 
> Some of these patches have been posted separately, but I'm re-posting
> them here to make it clear that they're prerequisites of the later
> patches in the series.
> 
> This series is based on top of linux-next from a day or so ago. I'd like
> to have this picked up by linux-next in the near future so we can get
> some more extensive testing with it. Should I just plan to maintain a
> topic branch on top of -next and ask Stephen to pick it up?
> 
> Background:
> ===
> The basic problem is that we have (for a very long time) tracked and
> reported writeback errors based on two flags in the address_space:
> AS_EIO and AS_ENOSPC. Those flags are cleared when they are checked,
> so only the first caller to check them is able to consume them.
> 
> That model is quite unreliable, for several related reasons:
> 
> * only the first fsync caller on the inode will see the error. In a
>   world of containerized setups, that's no longer viable. Applications
>   need to know that their writes are safely stored, and they can
>   currently miss seeing errors that they should be aware of when
>   they're not.
> 
> * there are a lot of internal callers to filemap_fdatawait* and
>   filemap_write_and_wait* that clear these errors but then never report
>   them to userland in any fashion.
> 
> * Some internal callers report writeback errors, but can do so at
>   non-sensical times. For instance, we might want to truncate a file,
>   which triggers a pagecache flush. If that writeback fails, we might
>   report that error to the truncate caller, but a subsequent fsync
>   will likely not see it.
> 
> * Some internal callers try to reset the error flags after clearing
>   them, but that's racy. Another task could check the flags between
>   those two events.
> 
> Solution:
> =
> This patchset adds a new datatype called an errseq_t that represents a
> sequence of errors. It's a u32, with a field for a POSIX-flavor error
> and a counter, managed with atomics. We can sample that value at a
> particular point in time, and can later tell whether there have been any
> errors since that point.
> 
> That allows us to provide traditional check-and-clear fsync semantics
> on every open file description in a lightweight fashion. fsync callers
> no longer need to coordinate between one another in order to ensure
> that errors at fsync time are handled correctly.
> 
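As a rough illustration of the sample/check idea (a simplified model only,
not the kernel's actual errseq_t layout or API):

#include <stdatomic.h>
#include <stdint.h>

#define ERR_MASK 0xfffu         /* low bits hold the -errno of the last failure */
#define CTR_STEP 0x1000u        /* the counter lives above the error bits */

/* Record a new writeback error: bump the counter, remember the error. */
static void errseq_like_set(_Atomic uint32_t *eseq, int err)
{
        uint32_t old = atomic_load(eseq), new;

        do {
                new = ((old & ~ERR_MASK) + CTR_STEP) | ((uint32_t)-err & ERR_MASK);
        } while (!atomic_compare_exchange_weak(eseq, &old, new));
}

/* Sample the current value, e.g. when a file is opened. */
static uint32_t errseq_like_sample(_Atomic uint32_t *eseq)
{
        return atomic_load(eseq);
}

/* At fsync time: has anything gone wrong since our sample? */
static int errseq_like_check(_Atomic uint32_t *eseq, uint32_t since)
{
        uint32_t cur = atomic_load(eseq);

        return (cur == since) ? 0 : -(int)(cur & ERR_MASK);
}
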
> Strategy:
> =
> The aim with this pile is to do the minimum possible to support
> reliable reporting of errors on fsync, without substantially changing
> the internals of the filesystems themselves.
> 
> Most of the internal calls to filemap_fdatawait are left alone, so all
> of the internal error checkers are using the same error handling they
> always have. The only real difference here is that we're better
> reporting errors at fsync.
> 
> I think that we probably will want to eventually convert all of those
> internal callers to use errseq_t based reporting, but that can be done
> in an incremental fashion in follow-on patchsets.
> 
> Testing:
> 
> I've primarily been testing this with some new xfstests that I will post
> in a separate series. These tests use dm-error fault injection to make
> the underlying block device start throwing I/O errors, and then test
> how the filesystem layer reports errors after that.
> 
> Jeff Layton (22):
>   fs: remove call_fsync helper function
>   buffer: use mapping_set_error instead of setting the flag
>   fs: check for writeback errors after syncing out buffers in
> generic_file_fsync
>   buffer: set errors in mapping at the time that the error occurs
>   jbd2: don't clear and reset errors after waiting on writeback
>   mm: clear AS_EIO/AS_ENOSPC when writeback initiation fails
>   mm: don't TestClearPageError in __filemap_fdatawait_range
>   mm: clean up error handling in write_one_page
>   fs: always sync metadata in __generic_file_fsync
>   lib: add errseq_t type and infrastructure for handling it
>   fs: new infrastructure for writeback error handling and reporting
>   mm: tracepoints for writeback error events
>   mm: set both AS_EIO/AS_ENOSPC and errseq_t in mapping_set_error
>   Documentation: flesh out the section in vfs.txt on storing and
> reporting writeback errors
>   dax: set errors in mapping when writeback fails
>   block: convert to errseq_t based writeback error tracking
>   ext4: use errseq_t based error handling for reporting data writeback
> errors
>   fs: 

Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Adam Borowski
On Mon, Jun 19, 2017 at 12:48:54PM +0300, Ivan Sizov wrote:
> 2017-06-19 12:32 GMT+03:00 Jesse :
> > So I guess that means when I initiate a post, I also need to send it
> > to myself as well as the mail list.
> You need to do it in the reply only, not in the initial post.
> 
> > Does it make any difference where I put respective addresses, eg: TO: CC: 
> > BCC:
> You need to put a person to whom you reply in "TO" field and mailing
> list in "CC" field.

Any mail client I know (but I haven't looked at many...) can do all of this
by "Reply All" (a button by that name in Thunderbird, 'g' in mutt, ...).

It's worth noting that vger lists have rules different to those in most of
Free Software communities: on vger, you're supposed to send copies to
everyone -- pretty much everywhere else you are expected to send to the list
only.  This is done by "Reply List" (in Thunderbird, 'L' in mutt, ...).
Such lists do add a set of "List-*:" headers that help the client.
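
For example, a reply sent in the vger style ends up with headers roughly like
this (the personal address below is a placeholder):

  To: Jesse <jesse@example.org>            (the person you are replying to)
  Cc: linux-btrfs@vger.kernel.org          (the list)
  Subject: Re: Help on using linux-btrfs mailing list please

whereas "Reply List" would put only linux-btrfs@vger.kernel.org in "To:".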


Мяу!
-- 
⢀⣴⠾⠻⢶⣦⠀ 
⣾⠁⢠⠒⠀⣿⡁ A dumb species has no way to open a tuna can.
⢿⡄⠘⠷⠚⠋⠀ A smart species invents a can opener.
⠈⠳⣄ A master species delegates.


Re: csum failed root -9

2017-06-19 Thread Henk Slager
>> I think I leave it as is for the time being, unless there is some news
>> how to fix things with low risk (or maybe via a temp overlay snapshot
>> with DM). But the lowmem check took 2 days, that's not really fun.
>> The goal for the 8TB fs is to have an up to 7 year snapshot history at
>> sometime, now the oldest snapshot is from early 2014, so almost
>> halfway :)
>
> Btrfs is still much too unstable to trust 7 years worth of backup to
> it. You will probably lose it at some point, especially while many
> snapshots are still such a huge performance breaker in btrfs. I suggest
> trying out also other alternatives like borg backup for such a project.

Maybe I should clarify that I don't explicitly use snapshotting for
archiving. The latest snapshot still contains old but unused files from
many years back, like a disk image from a Windows XP laptop (already
recycled). User data that is in older snapshots but not in newer ones is
what I consider useless data today, so I had deleted it explicitly. But
who knows, maybe for some statistic or other btrfs experiment it might
be interesting to have a long series of snapshot increments.

Another reason is the SMR characteristics of the disk, which made me
decide to designate this fs as write-only. If I remove snapshots, the fs
gets free-space fragmentation and then writing to it will be much
slower. This disk was relatively cheap and I don't want to put up with
the slowness and the longer on-time.

I snapshot no more than 3 subvolumes monthly, so after 7 years the fs
will have 252 snapshots, which is considered no problem for btrfs.
I think borg backup is interesting, but from kernel 3.11 to 4.11 (even
using raid5 up to 4.1) I have managed to keep this running/cloning
multi-site with just a few relatively simple scripts plus the btrfs
kernel+tools themselves, also working on a low-power ARM platform. I
don't like learning yet another command set, or that borg uses its own
extra repo or small database for tracking diffs (I haven't used it, so I
am not sure). But what I need, differential/incremental backup plus
compression, is built into btrfs, which I use for local snapshotting
anyhow. I also recently put btrfs on the rootfs of some ARM boards; I am
not sure if/when I am going to use other backup tooling besides rsync
and the btrfs features.


Re: csum failed root -9

2017-06-19 Thread Henk Slager
On Thu, Jun 15, 2017 at 9:13 AM, Qu Wenruo  wrote:
>
>
> At 06/14/2017 09:39 PM, Henk Slager wrote:
>>
>> On Tue, Jun 13, 2017 at 12:47 PM, Henk Slager  wrote:
>>>
>>> On Tue, Jun 13, 2017 at 7:24 AM, Kai Krakow  wrote:

 Am Mon, 12 Jun 2017 11:00:31 +0200
 schrieb Henk Slager :

> Hi all,
>
> there is 1-block corruption a 8TB filesystem that showed up several
> months ago. The fs is almost exclusively a btrfs receive target and
> receives monthly sequential snapshots from two hosts but 1 received
> uuid. I do not know exactly when the corruption has happened but it
> must have been roughly 3 to 6 months ago. with monthly updated
> kernel+progs on that host.
>
> Some more history:
> - fs was created in november 2015 on top of luks
> - initially bcache between the 2048-sector aligned partition and luks.
> Some months ago I removed 'the bcache layer' by making sure that cache
> was clean and then zeroing 8K bytes at start of partition in an
> isolated situation. Then setting the partition offset to 2064 by
> delete-recreate in gdisk.
> - in december 2016 there were more scrub errors, but related to the
> monthly snapshot of december2016. I have removed that snapshot this
> year and now only this 1-block csum error is the only issue.
> - brand/type is seagate 8TB SMR. At least since kernel 4.4+ that
> includes some SMR related changes in the blocklayer this disk works
> fine with btrfs.
> - the smartctl values show no error so far but I will run an extended
> test this week after another btrfs check which did not show any error
> earlier with the csum fail being there
> - I have noticed that the board that has the disk attached has been
> rebooted due to power-failures many times (unreliable power switch and
> power dips from energy company) and the 150W powersupply is broken and
> replaced since then. Also due to this, I decided to remove bcache
> (which has been in write-through and write-around only).
>
> Some btrfs inspect-internal exercise shows that the problem is in a
> directory in the root that contains most of the data and snapshots.
> But an  rsync -c  with an identical other clone snapshot shows no
> difference (no writes to an rw snapshot of that clone). So the fs is
> still OK as file-level backup, but btrfs replace/balance will fatal
> error on just this 1 csum error. It looks like that this is not a
> media/disk error but some HW induced error or SW/kernel issue.
> Relevant btrfs commands + dmesg info, see below.
>
> Any comments on how to fix or handle this without incrementally
> sending all snapshots to a new fs (6+ TiB of data, assuming this won't
> fail)?
>
>
> # uname -r
> 4.11.3-1-default
> # btrfs --version
> btrfs-progs v4.10.2+20170406


 There's btrfs-progs v4.11 available...
>>>
>>>
>>> I started:
>>> # btrfs check -p --readonly /dev/mapper/smr
>>> but it stopped with printing 'Killed' while checking extents. The
>>> board has 8G RAM, no swap (yet), so I just started lowmem mode:
>>> # btrfs check -p --mode lowmem --readonly /dev/mapper/smr
>>>
>>> Now, after 1 day, 77 lines like this have been printed:
>>> ERROR: extent[5365470154752, 81920] referencer count mismatch (root:
>>> 6310, owner: 1771130, offset: 33243062272) wanted: 1, have: 2
>>>
>>> It is still running, hopefully it will finish within 2 days. But
>>> lateron I can compile/use latest progs from git. Same for kernel,
>>> maybe with some tweaks/patches, but I think I will also plug the disk
>>> into a faster machine then ( i7-4770 instead of the J1900 ).
>>>
> fs profile is dup for system+meta, single for data
>
> # btrfs scrub start /local/smr


 What looks strange to me is that the parameters of the error reports
 seem to be rotated by one... See below:

> [27609.626555] BTRFS error (device dm-0): parent transid verify failed
> on 6350718500864 wanted 23170 found 23076
> [27609.685416] BTRFS info (device dm-0): read error corrected: ino 1
> off 6350718500864 (dev /dev/mapper/smr sector 11681212672)
> [27609.685928] BTRFS info (device dm-0): read error corrected: ino 1
> off 6350718504960 (dev /dev/mapper/smr sector 11681212680)
> [27609.686160] BTRFS info (device dm-0): read error corrected: ino 1
> off 6350718509056 (dev /dev/mapper/smr sector 11681212688)
> [27609.687136] BTRFS info (device dm-0): read error corrected: ino 1
> off 6350718513152 (dev /dev/mapper/smr sector 11681212696)
> [37663.606455] BTRFS error (device dm-0): parent transid verify failed
> on 6350453751808 wanted 23170 found 23075
> [37663.685158] BTRFS info (device dm-0): read error corrected: ino 1
> off 6350453751808 (dev /dev/mapper/smr sector 11679647008)
> 

Re: [PATCH 01/35] fscache: Remove unused ->now_uncached callback

2017-06-19 Thread Jan Kara
On Thu 01-06-17 13:34:34, Jan Kara wrote:
> On Thu 01-06-17 11:26:08, David Howells wrote:
> > Jan Kara  wrote:
> > 
> > > The callback doesn't ever get called. Remove it.
> > 
> > Hmmm...  I should perhaps be calling this.  I'm not sure why I never did.
> > 
> > At the moment, it doesn't strictly matter as ops on pages marked with
> > PG_fscache get ignored if the cache has suffered an I/O error or has been
> > withdrawn - but it will incur a performance penalty (the PG_fscache flag is
> > checked in the netfs before calling into fscache).
> > 
> > The downside of calling this is that when a cache is removed, fscache would 
> > go
> > through all the cookies for that cache and iterate over all the pages
> > associated with those cookies - which could cause a performance dip in the
> > system.
> 
> So I know nothing about fscache. If you decide these functions should stay
> in as you are going to use them soon, then I can just convert them to the
> new API as everything else. What just caught my eye and why I had a more
> detailed look is that I didn't understand that 'PAGEVEC_SIZE -
> pagevec_count()' as a pagevec_lookup() argument since pagevec_count()
> should always return 0 at that point?

David, what is your final decision regarding this? Do you want to keep
these unused functions (and I will just update my patch to convert them to
the new calling convention) or will you apply the patch to remove them?

Honza
-- 
Jan Kara 
SUSE Labs, CR


Re: [PATCH 1/2] btrfs-progs: Fix false alert about EXTENT_DATA shouldn't be hole

2017-06-19 Thread Henk Slager
On 16-06-17 03:43, Qu Wenruo wrote:
> Since the incompat feature NO_HOLES still allows us to have explicit hole
> file extents, the current check is too strict and will cause false alerts
> like:
>
> root 5 EXTENT_DATA[257, 0] shouldn't be hole
>
> Fix it by removing the overly strict hole file extent check.
>
> Reported-by: Henk Slager 
> Signed-off-by: Qu Wenruo 
> ---
>  cmds-check.c | 6 +-
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/cmds-check.c b/cmds-check.c
> index c052f66e..7bd57677 100644
> --- a/cmds-check.c
> +++ b/cmds-check.c
> @@ -4841,11 +4841,7 @@ static int check_file_extent(struct btrfs_root *root, 
> struct btrfs_key *fkey,
>   }
>  
>   /* Check EXTENT_DATA hole */
> - if (no_holes && is_hole) {
> - err |= FILE_EXTENT_ERROR;
> - error("root %llu EXTENT_DATA[%llu %llu] shouldn't be hole",
> -   root->objectid, fkey->objectid, fkey->offset);
> - } else if (!no_holes && *end != fkey->offset) {
> + if (!no_holes && *end != fkey->offset) {
>   err |= FILE_EXTENT_ERROR;
>   error("root %llu EXTENT_DATA[%llu %llu] interrupt",
> root->objectid, fkey->objectid, fkey->offset);


Thanks for the patch, I applied it on v4.11 btrfs-progs and re-ran the check:
# btrfs check -p --readonly /dev/mapper/smr

on filesystem mentioned in:
https://www.spinics.net/lists/linux-btrfs/msg66374.html

and now the "shouldn't be hole" errors don't show up anymore.

Tested-by: Henk Slager 



[PATCH] btrfs: use new block error code

2017-06-19 Thread Dan Carpenter
This function is supposed to return blk_status_t error codes now but
there was a stray -ENOMEM left behind.

Fixes: 4e4cbee93d56 ("block: switch bios to blk_status_t")
Signed-off-by: Dan Carpenter 

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index e536e98fe351..2c0b7b57fcd5 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -584,7 +584,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode 
*inode, struct bio *bio,
  __GFP_HIGHMEM);
if (!cb->compressed_pages[pg_index]) {
faili = pg_index - 1;
-   ret = -ENOMEM;
+   ret = BLK_STS_RESOURCE;
goto fail2;
}
}


Re: Please help. Repair probably bitflip damage and suspected bug

2017-06-19 Thread Jesse
I just noticed a series of seemingly btrfs-related call traces that,
for the first time, did not lock up the system.

I have uploaded dmesg to https://paste.ee/p/An8Qy

Anyone able to help advise on these?

Thanks

Jesse


On 19 June 2017 at 17:19, Jesse  wrote:
> Further to the above message reporting problems, I have been able to
> capture a call trace under the main system rather than live media.
>
> Note this occurred in rsync from btrfs to a separate drive running xfs
> on a local filesystem (both sata drives). So I presume that btrfs is
> only reading the drive at the time of crash, unless rsync is also
> doing some sort of disc caching of the files to btrfs as it is the OS
> filesystem.
>
> The destination drive directories being copied to in this case were
> empty, so I was making a copy of the data off of the btrfs drive (due
> to the btrfs tree errors and problems reported in the post I am here
> replying to).
>
> I am suspecting that there is a direct correlation to using rsync
> while (or subsequent to) touching areas of the btrfs tree that have
> corruption which results in a complete system lockup/crash.
>
> I have also noted that when these crashes while running rsync occur,
> the prior x files (eg: 10 files) show in the rsync log as being
> synced, however, show on the destination drive with filesize of zero.
>
> The trace (/var/log/messages | grep btrfs) I have uploaded to
> https://paste.ee/p/nRcj0
>
> The important part of which is:
>
> Jun 18 23:43:24 Orion vmunix: [38084.183174] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.183195] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.183209] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.183222] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.217552] BTRFS info (device sda2):
> csum failed ino 12497 extent 1700305813504 csum 1405070872 wanted 0
> mirror 0
> Jun 18 23:43:24 Orion vmunix: [38084.217626] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.217643] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.217657] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix: [38084.217669] BTRFS info (device sda2):
> no csum found for inode 12497 start 0
> Jun 18 23:43:24 Orion vmunix:  auth_rpcgss nfs_acl nfs lockd grace
> sunrpc fscache zfs(POE) zunicode(POE) zcommon(POE) znvpair(POE)
> spl(OE) zavl(POE) btrfs xor raid6_pq dm_mirror dm_region_hash dm_log
> hid_generic usbhid hid uas usb_storage radeon i2c_algo_bit ttm
> drm_kms_helper drm r8169 ahci mii libahci wmi
> Jun 18 23:43:24 Orion vmunix: [38084.220604] Workqueue: btrfs-endio
> btrfs_endio_helper [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.220812] RIP:
> 0010:[]  []
> __btrfs_map_block+0x32a/0x1180 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.222459]  [] ?
> __btrfs_lookup_bio_sums.isra.8+0x3e0/0x540 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.222632]  []
> btrfs_map_bio+0x7d/0x2b0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.222781]  []
> btrfs_submit_compressed_read+0x484/0x4e0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.222948]  []
> btrfs_submit_bio_hook+0x1c1/0x1d0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.223198]  [] ?
> btrfs_create_repair_bio+0xf0/0x110 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.223360]  []
> bio_readpage_error+0x117/0x180 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.223514]  [] ?
> clean_io_failure+0x1b0/0x1b0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.223667]  []
> end_bio_extent_readpage+0x3be/0x3f0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.223996]  []
> end_workqueue_fn+0x48/0x60 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.224145]  []
> normal_work_helper+0x82/0x210 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.224297]  []
> btrfs_endio_helper+0x12/0x20 [btrfs]
> Jun 18 23:43:24 Orion vmunix:  auth_rpcgss nfs_acl nfs lockd grace
> sunrpc fscache zfs(POE) zunicode(POE) zcommon(POE) znvpair(POE)
> spl(OE) zavl(POE) btrfs xor raid6_pq dm_mirror dm_region_hash dm_log
> hid_generic usbhid hid uas usb_storage radeon i2c_algo_bit ttm
> drm_kms_helper drm r8169 ahci mii libahci wmi
> Jun 18 23:43:24 Orion vmunix: [38084.330053]  [] ?
> __btrfs_map_block+0x32a/0x1180 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.330106]  [] ?
> __btrfs_map_block+0x2cc/0x1180 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.330154]  [] ?
> __btrfs_lookup_bio_sums.isra.8+0x3e0/0x540 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.330205]  []
> btrfs_map_bio+0x7d/0x2b0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.330257]  []
> btrfs_submit_compressed_read+0x484/0x4e0 [btrfs]
> Jun 18 23:43:24 Orion vmunix: [38084.330304]  []
> 

Re: [PATCH V21 00/19] Allow I/O on blocks whose size is less than page size

2017-06-19 Thread Chandan Rajendra
On Sunday, October 2, 2016 6:54:09 PM IST Chandan Rajendra wrote:
> Btrfs assumes block size to be the same as the machine's page
> size. This would mean that a Btrfs instance created on a 4k page size
> machine (e.g. x86) will not be mountable on machines with larger page
> sizes (e.g. PPC64/AARCH64). This patchset aims to resolve this
> incompatibility.
> 
> This patchset continues with the work posted previously at
> http://marc.info/?l=linux-btrfs&m=146760691422240&w=2
> 
> This patchset is based on top of Josef's
> 1. Metadata throttling in writeback patches
> 2. Kill the btree inode patches

Hi Josef,

Did you get any chance to work on the above listed patchsets? 

Please let me know when you get a fairly working solution uploaded on your 
Linux git tree. I could use it to rebase my patchset and start testing the
code base.

I have put in a lot of time & effort to get the subpage-blocksize
patchset into its current form, and rebasing and retesting it across
various kernel releases also consumes time. It would be great to have it
merged into the mainline kernel soon. Once that is done, I will have to
get other features of Btrfs (scrub, compression, etc.) working in the
subpage-blocksize scenario.

> The major change in this version is the usage of kmalloc()-ed memory for
> holding metadata blocks whose size is less than the machine's page size. This
> vastly reduces the complexity of extent buffer management (Thanks to Josef's
> "Kill the btree inode patches").
> 
> When writing back dirty extent buffers, we currently track the corresponding
> extent buffers using the pointer at page->private. With kmalloc-ed() memory
> this isn't possible and hence we track the first extent buffer under writeback
> using bio->bi_private. Also, For kmalloc-ed() extent buffers this patchset
> currently limits the number of dirty extent buffers in a "write" bio to
> 1. This limit will be removed in a future patchset.
> 

-- 
chandan



Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Ivan Sizov
2017-06-19 13:15 GMT+03:00 Jesse :
> Thanks again. So am I to understand that you go into your 'sent'
> folder, find a mail to the mail list (that is not CC to yourself),
> then you reply to this and add the mail list when you need to update
> your own post that no-one has yet replied to?
Yes, exactly.

-- 
Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Ivan Sizov
2017-06-19 13:03 GMT+03:00 Jesse :
> Thanks Ivan.
> What about when initiating a post, do I do the same eg:
> TO: myself
> CC: mailing list
>
> or do I
> TO: mailing list
> CC: myself
If your mail client doesn't have a "Sent" folder, you can, of course,
follow one of these examples. But I have never run into such a
situation.

-- 
Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Ivan Sizov
2017-06-19 13:03 GMT+03:00 Jesse :
> Thanks Ivan.
> What about when initiating a post, do I do the same eg:
> TO: myself
> CC: mailing list
>
> or do I
> TO: mailing list
> CC: myself
When initiating a post you should specify "TO: mailing list" only,
without any other addresses. At least that is how I usually initiate
posts.



-- 
Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Jesse
Thanks Ivan.
What about when initiating a post, do I do the same eg:
TO: myself
CC: mailing list

or do I
TO: mailing list
CC: myself

TIA

On 19 June 2017 at 17:48, Ivan Sizov  wrote:
> 2017-06-19 12:32 GMT+03:00 Jesse :
>> So I guess that means when I initiate a post, I also need to send it
>> to myself as well as the mail list.
> You need to do it in the reply only, not in the initial post.
>
>> Does it make any difference where I put respective addresses, eg: TO: CC: 
>> BCC:
> You need to put a person to whom you reply in "TO" field and mailing
> list in "CC" field.
>
> --
> Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Ivan Sizov
2017-06-19 12:32 GMT+03:00 Jesse :
> So I guess that means when I initiate a post, I also need to send it
> to myself as well as the mail list.
You need to do it in the reply only, not in the initial post.

> Does it make any difference where I put respective addresses, eg: TO: CC: BCC:
You need to put a person to whom you reply in "TO" field and mailing
list in "CC" field.

-- 
Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Jesse
Ok thanks Ivan.

So I guess that means when I initiate a post, I also need to send it
to myself as well as the mail list.
Does it make any difference where I put respective addresses, eg: TO: CC: BCC:

Regards

Jesse

On 19 June 2017 at 17:20, Ivan Sizov  wrote:
> You should reply both to linux-btrfs@vger.kernel.org and the person
> whom you talk to.
>
> 2017-06-19 11:37 GMT+03:00 Jesse :
>> I have subscribed successfully and am able to post successfully and
>> eventually view the post on spinics.net when it becomes available:
>> eg: http://www.spinics.net/lists/linux-btrfs/msg66605.html
>>
>> However I do not know how to reply to messages, especially my own to
>> add more information, such as a call trace.
>> 1. I do not receive an email of my post for which I could reply
>> 2. The emails that I do receive from the list are from the respective
>> sender, and not the vger.kernel.org, as such I do not even know how to
>> reply to someone in a way that it ends up on the mailing list and not
>> directly to that person.
>>
>> Could someone please be so kind as to direct me to a good guide for
>> using this mailing list?
>>
>> Thanks
>>
>> Jesse
>
>
>
> --
> Ivan Sizov


Re: Help on using linux-btrfs mailing list please

2017-06-19 Thread Ivan Sizov
You should reply both to linux-btrfs@vger.kernel.org and the person
whom you talk to.

2017-06-19 11:37 GMT+03:00 Jesse :
> I have subscribed successfully and am able to post successfully and
> eventually view the post on spinics.net when it becomes available:
> eg: http://www.spinics.net/lists/linux-btrfs/msg66605.html
>
> However I do not know how to reply to messages, especially my own to
> add more information, such as a call trace.
> 1. I do not receive an email of my post for which I could reply
> 2. The emails that I do receive from the list are from the respective
> sender, and not the vger.kernel.org, as such I do not even know how to
> reply to someone in a way that it ends up on the mailing list and not
> directly to that person.
>
> Could someone please be so kind as to direct me to a good guide for
> using this mailing list?
>
> Thanks
>
> Jesse



-- 
Ivan Sizov


Re: Please help. Repair probably bitflip damage and suspected bug

2017-06-19 Thread Jesse
Further to the above message reporting problems, I have been able to
capture a call trace under the main system rather than live media.

Note this occurred in rsync from btrfs to a separate drive running xfs
on a local filesystem (both sata drives). So I presume that btrfs is
only reading the drive at the time of crash, unless rsync is also
doing some sort of disc caching of the files to btrfs as it is the OS
filesystem.

The destination drive directories being copied to in this case were
empty, so I was making a copy of the data off of the btrfs drive (due
to the btrfs tree errors and problems reported in the post I am here
replying to).

I am suspecting that there is a direct correlation to using rsync
while (or subsequent to) touching areas of the btrfs tree that have
corruption which results in a complete system lockup/crash.

I have also noted that when these crashes while running rsync occur,
the prior x files (eg: 10 files) show in the rsync log as being
synced, however, show on the destination drive with filesize of zero.

The trace (/var/log/messages | grep btrfs) I have uploaded to
https://paste.ee/p/nRcj0

The important part of which is:

Jun 18 23:43:24 Orion vmunix: [38084.183174] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.183195] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.183209] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.183222] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.217552] BTRFS info (device sda2):
csum failed ino 12497 extent 1700305813504 csum 1405070872 wanted 0
mirror 0
Jun 18 23:43:24 Orion vmunix: [38084.217626] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.217643] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.217657] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix: [38084.217669] BTRFS info (device sda2):
no csum found for inode 12497 start 0
Jun 18 23:43:24 Orion vmunix:  auth_rpcgss nfs_acl nfs lockd grace
sunrpc fscache zfs(POE) zunicode(POE) zcommon(POE) znvpair(POE)
spl(OE) zavl(POE) btrfs xor raid6_pq dm_mirror dm_region_hash dm_log
hid_generic usbhid hid uas usb_storage radeon i2c_algo_bit ttm
drm_kms_helper drm r8169 ahci mii libahci wmi
Jun 18 23:43:24 Orion vmunix: [38084.220604] Workqueue: btrfs-endio
btrfs_endio_helper [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.220812] RIP:
0010:[]  []
__btrfs_map_block+0x32a/0x1180 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.222459]  [] ?
__btrfs_lookup_bio_sums.isra.8+0x3e0/0x540 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.222632]  []
btrfs_map_bio+0x7d/0x2b0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.222781]  []
btrfs_submit_compressed_read+0x484/0x4e0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.222948]  []
btrfs_submit_bio_hook+0x1c1/0x1d0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.223198]  [] ?
btrfs_create_repair_bio+0xf0/0x110 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.223360]  []
bio_readpage_error+0x117/0x180 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.223514]  [] ?
clean_io_failure+0x1b0/0x1b0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.223667]  []
end_bio_extent_readpage+0x3be/0x3f0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.223996]  []
end_workqueue_fn+0x48/0x60 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.224145]  []
normal_work_helper+0x82/0x210 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.224297]  []
btrfs_endio_helper+0x12/0x20 [btrfs]
Jun 18 23:43:24 Orion vmunix:  auth_rpcgss nfs_acl nfs lockd grace
sunrpc fscache zfs(POE) zunicode(POE) zcommon(POE) znvpair(POE)
spl(OE) zavl(POE) btrfs xor raid6_pq dm_mirror dm_region_hash dm_log
hid_generic usbhid hid uas usb_storage radeon i2c_algo_bit ttm
drm_kms_helper drm r8169 ahci mii libahci wmi
Jun 18 23:43:24 Orion vmunix: [38084.330053]  [] ?
__btrfs_map_block+0x32a/0x1180 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330106]  [] ?
__btrfs_map_block+0x2cc/0x1180 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330154]  [] ?
__btrfs_lookup_bio_sums.isra.8+0x3e0/0x540 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330205]  []
btrfs_map_bio+0x7d/0x2b0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330257]  []
btrfs_submit_compressed_read+0x484/0x4e0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330304]  []
btrfs_submit_bio_hook+0x1c1/0x1d0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330361]  [] ?
btrfs_create_repair_bio+0xf0/0x110 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330412]  []
bio_readpage_error+0x117/0x180 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330462]  [] ?
clean_io_failure+0x1b0/0x1b0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330513]  []
end_bio_extent_readpage+0x3be/0x3f0 [btrfs]
Jun 18 23:43:24 Orion vmunix: [38084.330568]  []
end_workqueue_fn+0x48/0x60 [btrfs]
Jun 

Re: [PATCH 0/6] add sanity check for extent inline ref type

2017-06-19 Thread Ivan Sizov
2017-06-02 1:57 GMT+03:00 Liu Bo :
> On Thu, Jun 01, 2017 at 11:26:26PM +0300, Ivan Sizov wrote:
>> 2017-06-01 20:35 GMT+03:00 Liu Bo :
>> > After I went through the output of leaf's content, most parts of the
>> > leaf is sane except the two corrupted items, it's still not clear to
>> > me what caused the corruption, there could be some corner cases that
>> > I'm not aware of.
>> >
>> > If fsck doesn't work for you, then a recovery from backup may be the
>> > best option.
>>
>> I don't need to run any repair procedures because system is working
>> normally. Most likely that corrupted extents belong to files located
>> somewhere in /home.
>> Do you mean I should run fsck in order to determine which files are
>> corrupted? What are proper options to run fsck with?
>
> I see.  Scrub has found that there are some corrupted metadata, if you
> want to fix that corrupted thing, fsck may be helpful.  Due to my
> test, 'btrfs check /your_disk' fixed my corruption.
>
> With this patch set, at least you won't get a crash when accessing the
> corrupted extent inline ref.
>
> -liubo

After applying the patchset I unwisely ran rsync, and then the FS could
no longer be mounted read-write.
"btrfs check -p --readonly" gave me many errors of different types.
After "btrfs check -p --repair" I mounted the FS and ran scrub.
But not all errors were fixed, and a boot attempt caused a read-only
remount again.

Now check gives this:

[liveuser@localhost-live ~]$ sudo btrfs check -p --readonly /dev/sda1
Checking filesystem on /dev/sda1
UUID: 4b30bf4a-2331-40fb-a108-e1a34aa14221
ref mismatch on [2398147911680 4096] extent item 12, found 13
Backref 2398147911680 root 27665 owner 17074270 offset 4096 num_refs 0
not found in extent tree
Incorrect local backref count on 2398147911680 root 27665 owner
17074270 offset 4096 found 1 wanted 0 back 0x562ce582c260
backpointer mismatch on [2398147911680 4096]
ref mismatch on [2398176096256 4096] extent item 12, found 13
Backref 2398176096256 root 27665 owner 17074270 offset 8192 num_refs 0
not found in extent tree
Incorrect local backref count on 2398176096256 root 27665 owner
17074270 offset 8192 found 1 wanted 0 back 0x562cd886b7f0
backpointer mismatch on [2398176096256 4096]
ref mismatch on [2398285246464 4096] extent item 12, found 13
Backref 2398285246464 root 27665 owner 17074270 offset 16384 num_refs
0 not found in extent tree
Incorrect local backref count on 2398285246464 root 27665 owner
17074270 offset 16384 found 1 wanted 0 back 0x562cd7e16900
backpointer mismatch on [2398285246464 4096]
ref mismatch on [2404820451328 16384] extent item 0, found 1
Backref 2404820451328 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2404820451328 16384]
owner ref check failed [2404820451328 16384]
ref mismatch on [2405008687104 16384] extent item 0, found 1
Backref 2405008687104 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2405008687104 16384]
owner ref check failed [2405008687104 16384]
ref mismatch on [2405015797760 16384] extent item 0, found 1
Backref 2405015797760 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2405015797760 16384]
owner ref check failed [2405015797760 16384]
ref mismatch on [2405036081152 16384] extent item 0, found 1
Backref 2405036081152 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2405036081152 16384]
owner ref check failed [2405036081152 16384]
ref mismatch on [2405057970176 16384] extent item 0, found 1
Backref 2405057970176 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2405057970176 16384]
owner ref check failed [2405057970176 16384]
ref mismatch on [2405176311808 16384] extent item 0, found 1
Backref 2405176311808 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2405176311808 16384]
owner ref check failed [2405176311808 16384]
ref mismatch on [2477885390848 16384] extent item 0, found 1
Backref 2477885390848 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2477885390848 16384]
owner ref check failed [2477885390848 16384]
ref mismatch on [2478010597376 16384] extent item 0, found 1
Backref 2478010597376 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2478010597376 16384]
ref mismatch on [2478014939136 16384] extent item 0, found 1
Backref 2478014939136 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2478014939136 16384]
owner ref check failed [2478014939136 16384]
ref mismatch on [2478015250432 16384] extent item 0, found 1
Backref 2478015250432 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2478015250432 16384]
ref mismatch on [2478640545792 16384] extent item 0, found 1
Backref 2478640545792 parent 27641 root 27641 not found in extent tree
backpointer mismatch on [2478640545792 16384]
ref mismatch on [2478698627072 16384] extent item 0, found 1
Backref 2478698627072 parent 27641 root 27641 not 

Help on using linux-btrfs mailing list please

2017-06-19 Thread Jesse
I have subscribed successfully and am able to post successfully and
eventually view the post on spinics.net when it becomes available:
eg: http://www.spinics.net/lists/linux-btrfs/msg66605.html

However I do not know how to reply to messages, especially my own to
add more information, such as a call trace.
1. I do not receive an email of my post for which I could reply
2. The emails that I do receive from the list are from the respective
sender, and not the vger.kernel.org, as such I do not even know how to
reply to someone in a way that it ends up on the mailing list and not
directly to that person.

Could someone please be so kind as to direct me to a good guide for
using this mailing list?

Thanks

Jesse


Re: Feature request: Tuneable zlib compression level

2017-06-19 Thread Anand Jain


Thanks for the suggestion, Jason. More below.

On 06/19/2017 07:15 AM, Jason Detring wrote:
> Hello list,
>
> I'd like to request a new feature: make the zlib compression level a
> tuneable.
>
> The general use-case is a device where CPU cycles are "cheap enough",
> but backing storage is expensive or difficult to replace.  An example
> might be a powerful embedded system with eMMC that can't be swapped
> for a larger part.
>
> My particular case is re-purposing a Wyse Z-class thin client as a
> desktop.  It has an AMD 1.5 GHz G-series CPU (G-T52R) paired with a 2
> GB SATA Disk-on-Module.  1.7 GB is usable for the rootfs after
> partitioning.  Using Btrfs zlib compression and data+metadata mixed
> mode I've stuffed most of a nicely outfitted Slackware desktop on it
> (~3.5 GB), but have bumped into a filled disk a time or two.
>
> An experiment in replacing
>     zlib_deflateInit(&workspace->strm, 3)
> with
>     zlib_deflateInit(&workspace->strm, Z_BEST_COMPRESSION)
> and then recompressing the filesystem ("btrfs fi defrag -czlib -v -r
> /") has yielded an increase in free space from 125 MB to 215 MB.
> That's an increase of 90 MB!  Not to mention that new items stored in
> that newly freed space will also be compressed at maximum effort.
> For a tight disk, that is a huge win!  Desktop usability is not
> greatly impacted.

  Nice. This makes sense to me.

> Large writes such as software upgrades are slower,
> but small day-to-day desktop writes such as web browser cache
> insertions are swallowed up by Linux's disk buffers and never felt to
> any great degree.  I haven't perceived any change in time spent
> decompressing reads for anything--maybe a cold X11 startup is slower?
>
> I'm not familiar with Btrfs internals.  I don't know how to add this
> as a tuneable myself.  It's easy to stuff a number in a function call
> to test a theory, but I can foresee this needing a bit more care to be
> added into the code.
>
> Perhaps as a knob that can be adjusted on a live filesystem.
> Perhaps as a property somewhere so that other
> mounted btrfses (a USB flash drive, for example) are not subject to
> the same time-space trade-off conclusion.
> Perhaps all the way down
> to the node level so it can be inherited from a parent directory
> differently than other directories on the same file system.

  This should be a subvol property. I am attempting to write one.

  As of now we do not have the facility to make two properties depend on
  each other. We would need such a feature if we were creating a new
  property just for the compression level, and that kind of dependent
  property would also help encryption (not fs/crypto) in the long term.
  I will think about how to do that.

  But another, easier way to do this would be to modify the compression
  property so that it also accepts the compression level in its input.
  For example:
    btrfs prop set <path> compression zlib:8
    mount -o compress=zlib:8 /dev/sdb /btrfs

  (In mount it is "compress" and in the property it is "compression";
  IMO we should use common names.)


Thanks, Anand