Chunk root problem

2017-07-05 Thread Daniel Brady
Hello,

My system suddenly decided it did not want to mount my BTRFS setup. I
recently rebooted the computer; when it came back, the file system was
in read-only mode. I gave it another boot, but now it does not want to
mount at all. Is there anything I can do to recover? This is a Rockstor
setup that I have had running for about a year.

uname -a
Linux hobonas 4.10.6-1.el7.elrepo.x86_64 #1 SMP Sun Mar 26 12:19:32
EDT 2017 x86_64 x86_64 x86_64 GNU/Linux

btrfs --version
btrfs-progs v4.10.1

btrfs fi show
Label: 'rockstor_rockstor'  uuid: 33e2af57-c30a-468a-9ed5-22994780f6b4
    Total devices 1 FS bytes used 5.50GiB
    devid    1 size 215.39GiB used 80.02GiB path /dev/sda3

Label: 'Nexus'  uuid: 1c3595a9-3faa-4973-affc-ee8d14d922bf
    Total devices 5 FS bytes used 3.93TiB
    devid    1 size 2.73TiB used 1.12TiB path /dev/sdd
    devid    2 size 2.73TiB used 1.12TiB path /dev/sdb
    devid    3 size 2.73TiB used 1.12TiB path /dev/sdc
    devid    4 size 2.73TiB used 1.12TiB path /dev/sdf
    devid    5 size 2.73TiB used 1.12TiB path /dev/sde

dmesg
[   18.572846] BTRFS: device label Nexus devid 2 transid 595679 /dev/sdb
[   18.572933] BTRFS: device label Nexus devid 3 transid 595679 /dev/sdc
[   18.573027] BTRFS: device label Nexus devid 1 transid 595679 /dev/sdd
[   18.573119] BTRFS: device label Nexus devid 5 transid 595679 /dev/sde
[   18.573200] BTRFS: device label Nexus devid 4 transid 595679 /dev/sdf
[   20.846060] device-mapper: uevent: version 1.0.3
[   20.846114] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23) initialised: dm-de...@redhat.com
[   21.073884] BTRFS info (device sdf): use no compression
[   21.073886] BTRFS info (device sdf): disk space caching is enabled
[   21.073887] BTRFS info (device sdf): has skinny extents
[   21.084353] BTRFS error (device sdf): parent transid verify failed on 8419247390720 wanted 542466 found 485869
[   21.230919] BTRFS info (device sdf): bdev /dev/sde errs: wr 402545, rd 234683174, flush 194501, corrupt 0, gen 0
[   21.794749] BTRFS error (device sdf): parent transid verify failed on 893915128 wanted 594920 found 490791
[   21.841317] BTRFS error (device sdf): parent transid verify failed on 8939187814400 wanted 594923 found 490824
[   21.870392] BTRFS error (device sdf): parent transid verify failed on 8418984427520 wanted 594877 found 490575
[   21.951901] BTRFS error (device sdf): parent transid verify failed on 8939107860480 wanted 594915 found 465207
[   22.015789] BTRFS error (device sdf): parent transid verify failed on 8939284430848 wanted 594958 found 465274
[   22.034840] BTRFS error (device sdf): parent transid verify failed on 8418907701248 wanted 594869 found 351596
[   22.070516] BTRFS error (device sdf): parent transid verify failed on 8939032035328 wanted 594899 found 465175
[   22.091734] BTRFS error (device sdf): parent transid verify failed on 8939123818496 wanted 594917 found 490777
[   22.110531] BTRFS error (device sdf): parent transid verify failed on 8939121917952 wanted 594917 found 490775
[   23.393973] BTRFS error (device sdf): failed to read block groups: -5
[   23.419807] BTRFS error (device sdf): open_ctree failed

mount -t btrfs -o recovery,ro /dev/sdb /mnt2/Nexus
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.
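
The further steps I know of to try, none attempted yet (corrections
welcome -- the mounts are read-only, and I understand check --repair
is dangerous, so I will not run that without advice):

   # usebackuproot is the current spelling of the old "recovery"
   # option; nologreplay additionally skips log replay:
   mount -t btrfs -o usebackuproot,ro /dev/sdb /mnt2/Nexus
   mount -t btrfs -o ro,nologreplay /dev/sdb /mnt2/Nexus

   # Salvage files without mounting at all; -D = dry run, list only
   # (paths are placeholders):
   btrfs restore -D /dev/sdb /path/on/another/disk

   # Read-only consistency check:
   btrfs check --readonly /dev/sdb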

Thanks,
Dan


Btrfs Compression

2017-07-05 Thread Paul Jones
While reading the thread about adding zstd compression, it occurred to
me that there is potentially another thing affecting performance:
compressed extent size (correct my terminology if it's wrong).
I have two near-identical RAID1 filesystems (used for backups) on
near-identical discs (HGST 3T), one compressed and one not. The
filesystems have about 40 snapshots and are about 50% full. The
uncompressed filesystem runs at about 60 MB/s, the compressed
filesystem at about 5-10 MB/s. There is noticeably more "noise" from
the compressed filesystem, from all the head thrashing that happens
while rsync runs.

Which brings me to my point: in terms of performance for compression,
is there some low-hanging fruit in making compressed extents more like
uncompressed extents in size, so there is not so much seeking? (If I
understand correctly, compressed extents are capped at 128 KiB, while
uncompressed extents can grow to 128 MiB.) With spinning discs and
large data sets it seems pointless to make the numerical calculations
faster if the discs can't keep up. Obviously this assumes optimisation
for speed over compression ratio.

Thoughts?

Paul.



Re: [GIT PULL] Btrfs for 4.13

2017-07-05 Thread Christoph Hellwig
On Wed, Jul 05, 2017 at 04:45:49PM -0700, Linus Torvalds wrote:
> Ouch, yeah, that was annoying. And yes, I ended up with several
> whitespace differences but other than that it looks the same. Oh well.
> 
> Jens, Christoph, let's not do that stupid thing with status-vs-error
> ever again, ok?

It should be done now :)

Back to serious - this work had to be done, as we could not communicate
errors properly in the block layer, with people inventing their own
codes all the time.  That being said, in retrospect I should have
tried a gradual approach that first defined the new codes in terms of
the old errors and left the fields in place.  I opted for the hard
break to make sure the compiler would catch missed conversions, as very
few people run sparse for the __bitwise annotations, and some of the
drivers create so many sparse warnings that new ones would be lost
anyway.

But I've learned, and will try to make something like this gradual if
it comes up again - though I hope we won't have to do anything like it
at all.


Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Nick Terrell
On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn"  wrote:
> It's the slower compression speed that has me arguing for the 
> possibility of configurable levels on zlib.  11MB/s is painfully slow 
> considering that most decent HDDs these days can get almost 5-10x that 
> speed with no compression.  There are cases (WORM pattern archival 
> storage for example) where slow writes to that degree may be acceptable, 
> but for most users they won't be, and zlib at level 9 would probably be 
> a better choice.  I don't think it can beat zstd at level 15 for 
> compression ratio, but if they're even close, then zlib would still be a 
> better option at that high of a compression level most of the time.

I don't imagine the very high zstd levels would be useful to many
btrfs users outside of rare cases. However, lower levels of zstd should
outperform zlib level 9 in all aspects except memory usage. I would
expect zstd level 7 to compress as well as or better than zlib 9, with
faster compression and decompression speed. It's worth benchmarking to
ensure that this holds across many different workloads, but I wouldn't
expect zlib 9 to compress better than zstd 7 often. zstd up to level 12
should compress as fast as or faster than zlib level 9. zstd levels 12
and beyond allow stronger compression than zlib, at the cost of slower
compression and more memory usage.
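
(This is easy to spot-check with the userspace tools on representative
data before trusting any general claim; the kernel implementations will
differ somewhat, but the relative ordering should track:

   # zstd's built-in benchmark across levels 1-12:
   zstd -b1 -e12 some-representative-file

   # rough zlib comparison at its maximum level:
   time gzip -9 -c some-representative-file | wc -c
   time zstd -7 -c some-representative-file | wc -c
)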

Supporting multiple zlib compression levels could be interesting for
older kernels, lower memory usage, or backwards compatibility with
older btrfs versions. But for every zlib level, zstd has a level that
provides a better compression ratio, faster compression, and faster
decompression.




Re: [GIT PULL] Btrfs for 4.13

2017-07-05 Thread Linus Torvalds
On Tue, Jul 4, 2017 at 8:19 AM, David Sterba  wrote:
>
> There are conflicts with the recently merged block layer branch, the
> resolutions are a bit tedious but still straightforward. Stephen sent a mail
> about that [1]. You can find my merge at for-4.13-part1-merged, there might be
> some whitespace formatting differences but the result is the same.

Ouch, yeah, that was annoying. And yes, I ended up with several
whitespace differences but other than that it looks the same. Oh well.

Jens, Christoph, let's not do that stupid thing with status-vs-error
ever again, ok?

   Linus


Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Austin S. Hemmelgarn

On 2017-07-05 15:35, Nick Terrell wrote:
> On 7/5/17, 11:45 AM, "Austin S. Hemmelgarn"  wrote:
>> On 2017-07-05 14:18, Adam Borowski wrote:
>>> On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn wrote:
>>>> On 2017-06-30 19:01, Nick Terrell wrote:
>>>>>> There is also the fact of deciding what to use for the default
>>>>>> when specified without a level.  This is easy for lzo and zlib,
>>>>>> where we can just use the existing level, but for zstd we would
>>>>>> need to decide how to handle a user just specifying 'zstd'
>>>>>> without a level.  I agree with E V that level 3 appears to be
>>>>>> the turnover point, and would suggest using that for the
>>>>>> default.
>>>>>
>>>>> I chose level 1 because I thought if we had to choose one
>>>>> speed/compression trade off, faster would be better. However,
>>>>> with a configurable compression level, level 3 is a great
>>>>> default, and is the default of the zstd CLI.
>>>>
>>>> Actually, even if it's not configurable, I would prefer 3, as that
>>>> still performs better in both respects (speed and compression
>>>> ratio) than zlib while being sufficiently different from lzo
>>>> performance to make it easy to decide on one or the other.  As far
>>>> as configurable levels for regular usage on a filesystem, there are
>>>> only three levels you benchmarked that I would be interested in,
>>>> namely level 1 (highly active data on slow storage with a fast
>>>> CPU), 3 (stuff I would use zlib for today), and 15 (stuff I would
>>>> use out-of-band compression for today (for example, archival
>>>> storage)).
>>>
>>> If you guys are going to argue between 1 and 3, just go the cracovian
>>> deal and settle at 2. :þ
>
> I'll change the default to level 3 in the next update.
>
>>> But more seriously: zstd looks so much better than lzo and zlib that
>>> I'd suggest making it the default compression in cases where there's
>>> no way to choose, such as chattr +c.  But then, changing the default
>>> before the previous LTS kernel can mount it would be irresponsible --
>>> thus, if you can get it into 4.14, we're looking at 4.19 at soonest
>>> (or later if we consider distro kernels).
>>
>> To be entirely honest, we probably should have switched to LZO as the
>> default a while back to put things more in line with ZFS (which
>> traditionally favors performance for in-line compression) and Windows
>> (which uses a custom LZ77 derivative that's wicked fast on most modern
>> systems).  I would say that as soon as we get to the point that the
>> last two LTS releases support it, zstd should probably become the
>> default.
>>
>> Also, slightly OT, but I would love to have the ability to set per
>> volume (not subvolume but volume itself) what to use for compression
>> when no algorithm is specified.
>
>>> Which means the timing is quite tight: if, per DSterba's request,
>>> /lib/ parts are going via a non-btrfs tree, there'll be not enough
>>> adequate testing in -next.  Thus, would it be possible to have /lib/
>>> patches in btrfs-next but not in for-linus?  That would allow testing
>>> so you can catch the LTS train.
>>>
>>>>>>> So, I don't see any problem making the level configurable.
>>>>>>
>>>>>> I would actually love to see this, I regularly make use of
>>>>>> varying compression both on BTRFS (with separate filesystems)
>>>>>> and on the ZFS-based NAS systems we have at work (where it can
>>>>>> be set per-dataset) to allow better compression on less
>>>>>> frequently accessed data.
>>>>>
>>>>> I would love to see configurable compression level as well. Would
>>>>> you want me to add it to my patch set, or should I adapt my patch
>>>>> set to work on top of it when it is ready?
>>>
>>> Note that as zstd is new, there's no backwards compat to care of,
>>> thus you are free to use whatever -o/prop syntax you'd like.  If zstd
>>> ends up being configurable while zlib is not -- oh well, there's no
>>> reason to use zlib anymore other than for mounting with old kernels,
>>> in which case we can't use configurable props anyway.  Unlike zlib,
>>> lzo is not strictly worse than configurable zstd, but it has only one
>>> level so there's nothing to configure as well.
>>>
>>> Thus, I'd suggest skipping the issue and implement levels for zstd
>>> only.
>>
>> I would mostly agree, with one possible exception.  _If_ zlib at the
>> max level gets similar compression ratios to zstd on its higher levels
>> _and_ it also gets better performance on at least one aspect (I think
>> zlib at level 9 will probably have better compression performance than
>> zstd at level 15, but I'm not sure about the compression ratio), then
>> I would argue that it might be worth implementing levels for zlib.
>> Actually determining that will of course require proper testing, but
>> that's probably better left as a separate discussion.
>
> For every zlib level, there is one or more levels of zstd that perform
> better in all of compression speed, decompression speed, and
> compression ratio. zstd also has levels that compress better than
> zlib, at the cost of slower compression, while still decompressing
> faster.

It's the slower compression speed that has me arguing for the
possibility of configurable levels on zlib.  11MB/s is painfully slow
considering that most decent HDDs these days can get almost 5-10x that
speed with no compression.  There

Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Nick Terrell
On 7/5/17, 11:45 AM, "Austin S. Hemmelgarn"  wrote:
>On 2017-07-05 14:18, Adam Borowski wrote:
>> On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn
>> wrote:
>>> On 2017-06-30 19:01, Nick Terrell wrote:
>>>>> There is also the fact of deciding what to use for the default
>>>>> when specified without a level.  This is easy for lzo and zlib,
>>>>> where we can just use the existing level, but for zstd we would
>>>>> need to decide how to handle a user just specifying 'zstd'
>>>>> without a level.  I agree with E V that level 3 appears to be
>>>>> the turnover point, and would suggest using that for the
>>>>> default.
>>>>
>>>> I chose level 1 because I thought if we had to choose one
>>>> speed/compression trade off, faster would be better. However,
>>>> with a configurable compression level, level 3 is a great
>>>> default, and is the default of the zstd CLI.
>>> Actually, even if it's not configurable, I would prefer 3, as that
>>> still performs better in both respects (speed and compression
>>> ratio) than zlib while being sufficiently different from lzo
>>> performance to make it easy to decide on one or the other.  As far
>>> as configurable levels for regular usage on a filesystem, there are
>>> only three levels you benchmarked that I would be interested in,
>>> namely level 1 (highly active data on slow storage with a fast
>>> CPU), 3 (stuff I would use zlib for today), and 15 (stuff I would
>>> use out-of-band compression for today (for example, archival
>>> storage)).
>> 
>> If you guys are going to argue between 1 and 3, just go the cracovian
>> deal and settle at 2. :þ

I'll change the default to level 3 in the next update.

>> 
>> But more seriously: zstd looks so much better than lzo and zlib that
>> I'd suggest making it the default compression in cases where there's
>> no way to choose, such as chattr +c.  But then, changing the default
>> before the previous LTS kernel can mount it would be irresponsible --
>> thus, if you can get it into 4.14, we're looking at 4.19 at soonest
>> (or later if we consider distro kernels).
> To be entirely honest, we probably should have switched to LZO as the
> default a while back to put things more in-line with ZFS (which
> traditionally favors performance for in-line compression) and Windows
> (which uses a custom LZ77 derivative that's wicked fast on most modern
> systems).  I would say that as soon as we get to the point that the last 
> two LTS releases support it, zstd should probably become the default.
>
> Also, slightly OT, but I would love to have the ability to set per 
> volume (not subvolume but volume itself) what to use for compression 
> when no algorithm is specified.
>> 
>> Which means the timing is quite tight: if, per DSterba's request,
>> /lib/ parts are going via a non-btrfs tree, there'll be not enough
>> adequate testing in -next.  Thus, would it be possible to have /lib/
>> patches in btrfs-next but not in for-linus?  That would allow testing
>> so you can catch the LTS train.
>> 
>>>>>> So, I don't see any problem making the level configurable.
>>>>>
>>>>> I would actually love to see this, I regularly make use of
>>>>> varying compression both on BTRFS (with separate filesystems)
>>>>> and on the ZFS-based NAS systems we have at work (where it can
>>>>> be set per-dataset) to allow better compression on less
>>>>> frequently accessed data.
>>>>
>>>> I would love to see configurable compression level as well. Would
>>>> you want me to add it to my patch set, or should I adapt my patch
>>>> set to work on top of it when it is ready?
>> 
>> Note that as zstd is new, there's no backwards compat to care of,
>> thus you are free to use whatever -o/prop syntax you'd like.  If zstd
>> ends up being configurable while zlib is not -- oh well, there's no
>> reason to use zlib anymore other than for mounting with old kernels,
>> in which case we can't use configurable props anyway.  Unlike zlib,
>> lzo is not strictly worse than configurable zstd, but it has only one
>> level so there's nothing to configure as well.
>> 
>> Thus, I'd suggest skipping the issue and implement levels for zstd
>> only.
> I would mostly agree, with one possible exception.  _If_ zlib at the max
> level gets similar compression ratios to zstd on its higher levels
> _and_ it also gets better performance on at least one aspect (I think
> zlib at level 9 will probably have better compression performance than
> zstd at level 15, but I'm not sure about the compression ratio), then I
> would argue that it might be worth implementing levels for zlib.
> Actually determining that will of course require proper testing, but
> that's probably better left as a separate discussion.

For every zlib level, there is one or more levels of zstd that perform
better in all of compression speed, decompression speed, and
compression ratio. zstd also has levels that compress better than zlib,
at the cost of slower compression, while still decompressing faster.


Re: [PATCH 01/14] VFS: Don't use save/replace_mount_options if not using generic_show_options

2017-07-05 Thread Steven Rostedt
On Wed, 05 Jul 2017 16:33:29 +0100
David Howells  wrote:

> Steven Rostedt  wrote:
> 
> > As this is patch 1/14, is there any dependency on this working? That
> > is, it is safe to simply remove this caller from those locations? Or
> > are there dependencies on having any of the 2-14 patches added to make
> > it not cause issues?  
> 
> Patch 14 removes the then-unused helpers, but patches 1-13 otherwise are
> independent of each other and have no dependencies on earlier patches.
> 

So basically it's a useless function call wrt tracefs. In that case...

Acked-by: Steven Rostedt (VMware) 

-- Steve


Re: [PATCH 01/14] VFS: Don't use save/replace_mount_options if not using generic_show_options

2017-07-05 Thread Greg Kroah-Hartman
On Wed, Jul 05, 2017 at 04:24:09PM +0100, David Howells wrote:
> btrfs, debugfs, reiserfs and tracefs call save_mount_options() and reiserfs
> calls replace_mount_options(), but they then implement their own
> ->show_options() methods and don't touch s_options, rendering the saved
> options unnecessary.  I'm trying to eliminate s_options to make it easier
> to implement a context-based mount where the mount options can be passed
> individually over a file descriptor.
> 
> Remove the calls to save/replace_mount_options() call in these cases.
> 
> Signed-off-by: David Howells 
> cc: Chris Mason 
> cc: Greg Kroah-Hartman 
> cc: Steven Rostedt 
> cc: linux-btrfs@vger.kernel.org
> cc: reiserfs-de...@vger.kernel.org

For debugfs:
Acked-by: Greg Kroah-Hartman 


Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Austin S. Hemmelgarn

On 2017-07-05 14:18, Adam Borowski wrote:
> On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn wrote:
>> On 2017-06-30 19:01, Nick Terrell wrote:
>>>> There is also the fact of deciding what to use for the default
>>>> when specified without a level.  This is easy for lzo and zlib,
>>>> where we can just use the existing level, but for zstd we would
>>>> need to decide how to handle a user just specifying 'zstd'
>>>> without a level.  I agree with E V that level 3 appears to be
>>>> the turnover point, and would suggest using that for the
>>>> default.
>>>
>>> I chose level 1 because I thought if we had to choose one
>>> speed/compression trade off, faster would be better. However,
>>> with a configurable compression level, level 3 is a great
>>> default, and is the default of the zstd CLI.
>>
>> Actually, even if it's not configurable, I would prefer 3, as that
>> still performs better in both respects (speed and compression
>> ratio) than zlib while being sufficiently different from lzo
>> performance to make it easy to decide on one or the other.  As far
>> as configurable levels for regular usage on a filesystem, there are
>> only three levels you benchmarked that I would be interested in,
>> namely level 1 (highly active data on slow storage with a fast
>> CPU), 3 (stuff I would use zlib for today), and 15 (stuff I would
>> use out-of-band compression for today (for example, archival
>> storage)).
>
> If you guys are going to argue between 1 and 3, just go the cracovian
> deal and settle at 2. :þ
>
> But more seriously: zstd looks so much better than lzo and zlib that
> I'd suggest making it the default compression in cases where there's
> no way to choose, such as chattr +c.  But then, changing the default
> before the previous LTS kernel can mount it would be irresponsible --
> thus, if you can get it into 4.14, we're looking at 4.19 at soonest
> (or later if we consider distro kernels).

To be entirely honest, we probably should have switched to LZO as the
default a while back to put things more in line with ZFS (which
traditionally favors performance for in-line compression) and Windows
(which uses a custom LZ77 derivative that's wicked fast on most modern
systems).  I would say that as soon as we get to the point that the last
two LTS releases support it, zstd should probably become the default.

Also, slightly OT, but I would love to have the ability to set per
volume (not subvolume but volume itself) what to use for compression
when no algorithm is specified.

> Which means the timing is quite tight: if, per DSterba's request,
> /lib/ parts are going via a non-btrfs tree, there'll be not enough
> adequate testing in -next.  Thus, would it be possible to have /lib/
> patches in btrfs-next but not in for-linus?  That would allow testing
> so you can catch the LTS train.
>
>>>>> So, I don't see any problem making the level configurable.
>>>>
>>>> I would actually love to see this, I regularly make use of
>>>> varying compression both on BTRFS (with separate filesystems)
>>>> and on the ZFS-based NAS systems we have at work (where it can
>>>> be set per-dataset) to allow better compression on less
>>>> frequently accessed data.
>>>
>>> I would love to see configurable compression level as well. Would
>>> you want me to add it to my patch set, or should I adapt my patch
>>> set to work on top of it when it is ready?
>
> Note that as zstd is new, there's no backwards compat to care of,
> thus you are free to use whatever -o/prop syntax you'd like.  If zstd
> ends up being configurable while zlib is not -- oh well, there's no
> reason to use zlib anymore other than for mounting with old kernels,
> in which case we can't use configurable props anyway.  Unlike zlib,
> lzo is not strictly worse than configurable zstd, but it has only one
> level so there's nothing to configure as well.
>
> Thus, I'd suggest skipping the issue and implement levels for zstd
> only.

I would mostly agree, with one possible exception.  _If_ zlib at the max
level gets similar compression ratios to zstd on its higher levels
_and_ it also gets better performance on at least one aspect (I think
zlib at level 9 will probably have better compression performance than
zstd at level 15, but I'm not sure about the compression ratio), then I
would argue that it might be worth implementing levels for zlib.
Actually determining that will of course require proper testing, but
that's probably better left as a separate discussion.

> As for testing: I assume you guys are doing stress testing on amd64
> already, right?  I got two crappy arm64 machines but won't be able to
> test there before 4.13 (Icenowy has been lazy and didn't push any
> near-mainline patch set recently; every single Pine64/Pinebook kernel
> pushed by anyone but her didn't work for me so I don't even bother
> trying anymore).  On the other hand, the armhf box I'm running Debian
> rebuilds on is a gem among cheap SoCs.  After Nick fixed the
> flickering workspaces bug, there were no further hiccups; on monday I
> switched to 4.12.0 + btrfs-for-4.13 + zstd (thus luckily avoiding
> that nowait aio bug), also no explosions yet.

Which reminds me, I

Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Adam Borowski
On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn wrote:
> On 2017-06-30 19:01, Nick Terrell wrote:
> > > There is also the fact of deciding what to use for the default when
> > > specified without a level.  This is easy for lzo and zlib, where we can
> > > just use the existing level, but for zstd we would need to decide how to
> > > handle a user just specifying 'zstd' without a level.  I agree with E V
> > > that level 3 appears to be the turnover point, and would suggest using
> > > that for the default.
> > 
> > I chose level 1 because I thought if we had to choose one speed/compression
> > trade off, faster would be better. However, with a configurable compression
> > level, level 3 is a great default, and is the default of the zstd CLI.
> Actually, even if it's not configurable, I would prefer 3, as that still
> performs better in both respects (speed and compression ratio) than zlib
> while being sufficiently different from lzo performance to make it easy to
> decide on one or the other.  As far as configurable levels for regular usage
> on a filesystem, there are only three levels you benchmarked that I would be
> interested in, namely level 1 (highly active data on slow storage with a
> fast CPU), 3 (stuff I would use zlib for today), and 15 (stuff I would use
> out-of-band compression for today (for example, archival storage)).

If you guys are going to argue between 1 and 3, just go the cracovian deal
and settle at 2. :þ

But more seriously: zstd looks so much better than lzo and zlib that I'd
suggest making it the default compression in cases where there's no way to
choose, such as chattr +c.  But then, changing the default before the
previous LTS kernel can mount it would be irresponsible -- thus, if you can
get it into 4.14, we're looking at 4.19 at soonest (or later if we consider
distro kernels).

Which means the timing is quite tight: if, per DSterba's request, /lib/
parts are going via a non-btrfs tree, there'll be not enough adequate
testing in -next.  Thus, would it be possible to have /lib/ patches in
btrfs-next but not in for-linus?  That would allow testing so you can catch
the LTS train.

> > > > So, I don't see any problem making the level configurable.
> > > I would actually love to see this, I regularly make use of varying
> > > compression both on BTRFS (with separate filesystems) and on the
> > > ZFS-based NAS systems we have at work (where it can be set per-dataset)
> > > to allow better compression on less frequently accessed data.
> > 
> > I would love to see configurable compression level as well. Would you want
> > me to add it to my patch set, or should I adapt my patch set to work on top
> > of it when it is ready?

Note that as zstd is new, there's no backwards compat to care of, thus you
are free to use whatever -o/prop syntax you'd like.  If zstd ends up being
configurable while zlib is not -- oh well, there's no reason to use zlib
anymore other than for mounting with old kernels, in which case we can't use
configurable props anyway.  Unlike zlib, lzo is not strictly worse than
configurable zstd, but it has only one level so there's nothing to configure
as well.

Thus, I'd suggest skipping the issue and implement levels for zstd only.
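
Purely as an illustration of that freedom -- none of this syntax exists
yet, it's just one possible shape:

  # hypothetical mount-option form:
  mount -o compress=zstd:3 /dev/sdb /mnt
  # hypothetical property form, extending the existing compression property:
  btrfs property set /mnt/subvol compression zstd:3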


As for testing: I assume you guys are doing stress testing on amd64 already,
right?  I got two crappy arm64 machines but won't be able to test there
before 4.13 (Icenowy has been lazy and didn't push any near-mainline patch
set recently; every single Pine64/Pinebook kernel pushed by anyone but her
didn't work for me so I don't even bother trying anymore).  On the other
hand, the armhf box I'm running Debian rebuilds on is a gem among cheap
SoCs.  After Nick fixed the flickering workspaces bug, there were no further
hiccups; on monday I switched to 4.12.0 + btrfs-for-4.13 + zstd (thus
luckily avoiding that nowait aio bug), also no explosions yet.


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ 
⣾⠁⢠⠒⠀⣿⡁ A dumb species has no way to open a tuna can.
⢿⡄⠘⠷⠚⠋⠀ A smart species invents a can opener.
⠈⠳⣄ A master species delegates.


Re: [proposal] making filesystem tools more machine friendly

2017-07-05 Thread Christoph Hellwig
Hi Jan,

I really like the idea of JSON output for fs/block tools.

nvme-cli has been doing it for a while, and that has been very
useful.  It comes with its own little handrolled JSON output
helpers:

https://github.com/linux-nvme/nvme-cli/blob/master/json.c

so it doesn't introduce any new dependencies.  If you want to parse
input as well, a library might be a better approach, though.
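
For btrfs-progs that could look something like the following -- the
flag is hypothetical, shown only to illustrate the output style:

  btrfs filesystem show --json
  # {"filesystems":[{"label":"Nexus",
  #   "uuid":"1c3595a9-3faa-4973-affc-ee8d14d922bf","devices":5}]}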

I really don't like the shared-object EVMS model mentioned in the
thread, as that creates hard ELF dependencies.


Re: [PATCH 01/14] VFS: Don't use save/replace_mount_options if not using generic_show_options

2017-07-05 Thread David Howells
Steven Rostedt  wrote:

> As this is patch 1/14, is there any dependency on this working? That
> is, it is safe to simply remove this caller from those locations? Or
> are there dependencies on having any of the 2-14 patches added to make
> it not cause issues?

Patch 14 removes the then-unused helpers, but patches 1-13 otherwise are
independent of each other and have no dependencies on earlier patches.

David


Re: [PATCH 01/14] VFS: Don't use save/replace_mount_options if not using generic_show_options

2017-07-05 Thread Steven Rostedt
On Wed, 05 Jul 2017 16:24:09 +0100
David Howells  wrote:

> btrfs, debugfs, reiserfs and tracefs call save_mount_options() and reiserfs
> calls replace_mount_options(), but they then implement their own
> ->show_options() methods and don't touch s_options, rendering the saved  
> options unnecessary.  I'm trying to eliminate s_options to make it easier
> to implement a context-based mount where the mount options can be passed
> individually over a file descriptor.
> 
> Remove the calls to save/replace_mount_options() call in these cases.
> 

As this is patch 1/14, is there any dependency on this working? That
is, it is safe to simply remove this caller from those locations? Or
are there dependencies on having any of the 2-14 patches added to make
it not cause issues?

-- Steve


[PATCH 01/14] VFS: Don't use save/replace_mount_options if not using generic_show_options

2017-07-05 Thread David Howells
btrfs, debugfs, reiserfs and tracefs call save_mount_options() and reiserfs
calls replace_mount_options(), but they then implement their own
->show_options() methods and don't touch s_options, rendering the saved
options unnecessary.  I'm trying to eliminate s_options to make it easier
to implement a context-based mount where the mount options can be passed
individually over a file descriptor.

Remove the calls to save/replace_mount_options() call in these cases.

Signed-off-by: David Howells 
cc: Chris Mason 
cc: Greg Kroah-Hartman 
cc: Steven Rostedt 
cc: linux-btrfs@vger.kernel.org
cc: reiserfs-de...@vger.kernel.org
---

 fs/btrfs/super.c    |    1 -
 fs/debugfs/inode.c  |    2 --
 fs/reiserfs/super.c |    4 ----
 fs/tracefs/inode.c  |    2 --
 4 files changed, 9 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 4f1cdd5058f1..8e9758b3eb23 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -1164,7 +1164,6 @@ static int btrfs_fill_super(struct super_block *sb,
goto fail_close;
}
 
-   save_mount_options(sb, data);
cleancache_init_fs(sb);
sb->s_flags |= MS_ACTIVE;
return 0;
diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
index e892ae7d89f8..0dc9e9c0e0f8 100644
--- a/fs/debugfs/inode.c
+++ b/fs/debugfs/inode.c
@@ -203,8 +203,6 @@ static int debug_fill_super(struct super_block *sb, void *data, int silent)
struct debugfs_fs_info *fsi;
int err;
 
-   save_mount_options(sb, data);
-
fsi = kzalloc(sizeof(struct debugfs_fs_info), GFP_KERNEL);
sb->s_fs_info = fsi;
if (!fsi) {
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index 685f1e056998..306e4e9d172d 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -1599,8 +1599,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
}
 
 out_ok_unlocked:
-   if (new_opts)
-   replace_mount_options(s, new_opts);
return 0;
 
 out_err_unlock:
@@ -1916,8 +1914,6 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent)
char *qf_names[REISERFS_MAXQUOTAS] = {};
unsigned int qfmt = 0;
 
-   save_mount_options(s, data);
-
sbi = kzalloc(sizeof(struct reiserfs_sb_info), GFP_KERNEL);
if (!sbi)
return -ENOMEM;
diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
index 328e89c2cf83..bea8ad876bf9 100644
--- a/fs/tracefs/inode.c
+++ b/fs/tracefs/inode.c
@@ -270,8 +270,6 @@ static int trace_fill_super(struct super_block *sb, void *data, int silent)
struct tracefs_fs_info *fsi;
int err;
 
-   save_mount_options(sb, data);
-
fsi = kzalloc(sizeof(struct tracefs_fs_info), GFP_KERNEL);
sb->s_fs_info = fsi;
if (!fsi) {



Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-05 Thread Austin S. Hemmelgarn

On 2017-06-30 19:01, Nick Terrell wrote:
>>> If we're going to make that configurable, there are some things to
>>> consider:
>>>
>>> * the underlying compressed format -- does not change for different
>>> levels
>
> This is true for zlib and zstd. lzo in the kernel only supports one
> compression level.

I had actually forgotten that the kernel only implements one compression
level for LZO.

>>> * the configuration interface -- mount options, defrag ioctl
>>>
>>> * backward compatibility
>>
>> There is also the fact of deciding what to use for the default when
>> specified without a level.  This is easy for lzo and zlib, where we can
>> just use the existing level, but for zstd we would need to decide how to
>> handle a user just specifying 'zstd' without a level.  I agree with E V
>> that level 3 appears to be the turnover point, and would suggest using
>> that for the default.
>
> I chose level 1 because I thought if we had to choose one speed/compression
> trade off, faster would be better. However, with a configurable compression
> level, level 3 is a great default, and is the default of the zstd CLI.

Actually, even if it's not configurable, I would prefer 3, as that still
performs better in both respects (speed and compression ratio) than zlib
while being sufficiently different from lzo performance to make it easy
to decide on one or the other.  As far as configurable levels for
regular usage on a filesystem, there are only three levels you
benchmarked that I would be interested in, namely level 1 (highly active
data on slow storage with a fast CPU), 3 (stuff I would use zlib for
today), and 15 (stuff I would use out-of-band compression for today (for
example, archival storage)).

>>> So, I don't see any problem making the level configurable.
>>
>> I would actually love to see this, I regularly make use of varying
>> compression both on BTRFS (with separate filesystems) and on the
>> ZFS-based NAS systems we have at work (where it can be set per-dataset)
>> to allow better compression on less frequently accessed data.
>
> I would love to see configurable compression level as well. Would you want
> me to add it to my patch set, or should I adapt my patch set to work on top
> of it when it is ready?

David or one of the other developers would be a better person to ask, I
mostly review kernel patches from an admin perspective, not a development
perspective, so I don't really know which option would be better in this
case.
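
For reference, the two existing interfaces David lists above are driven
like this today, with zlib and lzo only; the level-aware form at the end
is hypothetical:

  # per-file / per-directory property (applies to newly written data):
  btrfs property set /mnt/data compression lzo

  # recompress existing data via the defrag ioctl:
  btrfs filesystem defragment -r -czlib /mnt/archive

  # hypothetical, if levels become configurable:
  # btrfs property set /mnt/data compression zlib:9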

