Austin S Hemmelgarn wrote (ao):
The data is probably still cached in the block layer, so after
unmounting, you could try 'echo 1 > /proc/sys/vm/drop_caches' before
mounting again, but make sure to run sync right before doing that,
otherwise you might lose data.
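The suggested sequence can be sketched as follows (requires root; note that drop_caches only discards *clean* pages, which is why the sync comes first):

```shell
# Write out all dirty pages so nothing unwritten can be discarded.
sync
# Ask the kernel to drop clean page-cache entries:
#   1 = page cache, 2 = dentries and inodes, 3 = both.
echo 1 > /proc/sys/vm/drop_caches
```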
Lose data? Where did you get this
On 3 January 2014 06:10, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Btrfs can be remounted with 'nobarrier', but there is no 'barrier' option,
so nobody can remount btrfs with barriers enabled again. Only unmounting
and mounting again can re-enable barriers. (Quite awkward.)
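The asymmetry can be demonstrated like this (mountpoint and device names are hypothetical; behaviour as of the kernels discussed in this thread):

```shell
mount -o remount,nobarrier /mnt/btrfs   # accepted: barriers switched off
mount -o remount,barrier /mnt/btrfs     # rejected: no such option yet
# The only way back to barriers is a full cycle:
umount /mnt/btrfs
mount /dev/sdb1 /mnt/btrfs
```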
Also the mount options in the document
Kai Krakow posted on Fri, 03 Jan 2014 02:24:01 +0100 as excerpted:
Duncan 1i5t5.dun...@cox.net schrieb:
But because a full balance rewrites everything anyway, it'll
effectively defrag too.
Is that really true? I thought it just rewrites each distinct extent and
shuffles chunks around...
According to the comments, we only intend to FUA the first superblock on
every device; fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
---
fs/btrfs/disk-io.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 9417b73..b016657
On 2014-01-03 03:39, Sander wrote:
Austin S Hemmelgarn wrote (ao):
The data is probably still cached in the block layer, so after
unmounting, you could try 'echo 1 > /proc/sys/vm/drop_caches'
before mounting again, but make sure to run sync right before
doing that, otherwise you might lose
Hi Josef,
FYI. We are doing 0day performance tests and happened to notice that
btrfs write throughput increased considerably during v3.10-11 time
frame:
     v3.10      v3.11      v3.12  v3.13-rc6
 ---------  ---------  ---------  ---------
On Fri, 2014-01-03 at 23:54 +0800, fengguang...@intel.com wrote:
Hi Josef,
FYI. We are doing 0day performance tests and happened to notice that
btrfs write throughput increased considerably during v3.10-11 time
frame:
v3.10 v3.11 v3.12
Back in Feb 2013 there was quite a bit of press about the preliminary
raid5/6 implementation in Btrfs. At the time it wasn't useful for
anything other than testing and it's my understanding that this is
still the case.
I've seen a few git commits and some chatter on this list but it would
appear
On Fri, Jan 03, 2014 at 06:22:57PM +0800, Wang Shilong wrote:
According to the comments, we only intend to FUA the first superblock on
every device; fix it.
Good catch, this could gain some speedup as there are up to two fewer
flushes.
There's one thing that's different from current behaviour:
Without
On Fri, Jan 03, 2014 at 02:10:26PM +0800, Qu Wenruo wrote:
Add nocheck_int mount option to disable integrity check with
remount option.
+ nocheck_int disables all the debug options above.
I think this option is not needed, the integrity checker is a
development functionality and used by
On Fri, 2014-01-03 at 18:03 +0100, David Sterba wrote:
On Fri, Jan 03, 2014 at 06:22:57PM +0800, Wang Shilong wrote:
According to the comments, we only intend to FUA the first superblock on
every device; fix it.
Good catch, this could gain some speedup as there are up to two fewer
flushes.
There's
On 1/3/14, 12:10 AM, Qu Wenruo wrote:
Some options should be paired to support triggering different functions
when remounting.
This patchset adds these missing pairing mount options.
I think this really would benefit from a regression test which
ensures that every remount transition works
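Such a regression test could be sketched as a loop over option pairs on a scratch device (the device path and the option list are illustrative, not exhaustive):

```shell
#!/bin/sh
# Sketch: check that each paired mount option can be toggled in both
# directions via remount. Requires root and a scratch btrfs device.
DEV=/dev/loop0
MNT=/mnt/scratch

mkfs.btrfs -f "$DEV"
mount "$DEV" "$MNT"
for pair in inode_cache:noinode_cache barrier:nobarrier; do
    on=${pair%%:*}
    off=${pair##*:}
    mount -o "remount,$off" "$MNT" || echo "FAIL: remount,$off"
    mount -o "remount,$on"  "$MNT" || echo "FAIL: remount,$on"
done
umount "$MNT"
```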
On Fri, Jan 03, 2014 at 02:10:30PM +0800, Qu Wenruo wrote:
Add noinode_cache mount option to disable inode map cache with
remount option.
This looks almost safe, there's a sync_filesystem called before the
filesystem's remount handler, the transaction gets committed and flushes
all the data
On Fri, Jan 03, 2014 at 02:10:23PM +0800, Qu Wenruo wrote:
Some options should be paired to support triggering different functions
when remounting.
This patchset adds these missing pairing mount options.
Thanks!
btrfs: Add nocheck_int mount option.
btrfs: Add noinode_cache mount option.
On Fri, Jan 03, 2014 at 12:29:51AM +, Holger Hoffstätte wrote:
Conversion from ext4 works really well and is an important step for
adoption. After recently converting a large-ish device I noticed
dodgy performance, even after defragment and rebalance; noticeably
different from the quite good
On Fri, Jan 03, 2014 at 05:27:51PM +0800, Miao Xie wrote:
On Thu, 2 Jan 2014 18:49:55 +0100, David Sterba wrote:
On Thu, Dec 26, 2013 at 01:07:05PM +0800, Miao Xie wrote:
+#define BTRFS_DELAYED_NODE_IN_LIST	0
+#define BTRFS_DELAYED_NODE_INODE_DIRTY	1
+
struct
Looks like Kent missed the btrfs endio in the original commit. How
about this patch:
-
In btrfs_end_bio, call bio_endio_nodec on the restored bio so the
bi_remaining is accounted for correctly.
Reported-by: fengguang...@intel.com
Cc: Kent Overstreet k...@daterainc.com
CC: Jens Axboe
First, a big thank you for taking the time to post this very informative
message.
On Wed, Jan 01, 2014 at 12:37:42PM +, Duncan wrote:
Apparently the way some distribution installation scripts work results in
even a brand new installation being highly fragmented. =:^( If in
addition they
On Thu, Jan 02, 2014 at 10:37:28AM -0700, Chris Murphy wrote:
On Jan 1, 2014, at 3:35 PM, Oliver Mangold o.mang...@gmail.com wrote:
On 01.01.2014 22:58, Chris Murphy wrote:
On Jan 1, 2014, at 2:27 PM, Oliver Mangold o.mang...@gmail.com wrote:
I fear I broke my FS by running btrfsck.
On Mon, Dec 30, 2013 at 09:57:40AM -0800, Marc MERLIN wrote:
On Mon, Dec 30, 2013 at 10:48:10AM -0700, Chris Murphy wrote:
On Dec 30, 2013, at 10:10 AM, Marc MERLIN m...@merlins.org wrote:
If one day, it could at least work on a subvolume level (only sync a
subvolume), then it
On Fri, 2014-01-03 at 12:15 -0800, Marc MERLIN wrote:
On Mon, Dec 30, 2013 at 09:57:40AM -0800, Marc MERLIN wrote:
On Mon, Dec 30, 2013 at 10:48:10AM -0700, Chris Murphy wrote:
On Dec 30, 2013, at 10:10 AM, Marc MERLIN m...@merlins.org wrote:
If one day, it could at least work
Marc MERLIN posted on Fri, 03 Jan 2014 09:25:06 -0800 as excerpted:
First, a big thank you for taking the time to post this very informative
message.
On Wed, Jan 01, 2014 at 12:37:42PM +, Duncan wrote:
Apparently the way some distribution installation scripts work results
in even a
I'm using Ubuntu 12.04.3 with an up-to-date 3.11 kernel, and the
btrfs-progs from Debian Sid (since the ones from Ubuntu are ancient).
I discovered to my horror during testing today that neither raid1 nor
raid10 arrays are fault tolerant of losing an actual disk.
mkfs.btrfs -d raid10 -m
On 03.01.2014 23:28, Jim Salter wrote:
I'm using Ubuntu 12.04.3 with an up-to-date 3.11 kernel, and the
btrfs-progs from Debian Sid (since the ones from Ubuntu are ancient).
I discovered to my horror during testing today that neither raid1 nor
raid10 arrays are fault tolerant of losing an
I actually read the wiki pretty obsessively before blasting the list -
could not successfully find anything answering the question, by scanning
the FAQ or by Googling.
You're right - mount -t btrfs -o degraded /dev/vdb /test worked fine.
HOWEVER - this won't allow a root filesystem to mount.
On 03.01.2014 23:56, Jim Salter wrote:
I actually read the wiki pretty obsessively before blasting the list -
could not successfully find anything answering the question, by scanning
the FAQ or by Googling.
You're right - mount -t btrfs -o degraded /dev/vdb /test worked fine.
don't forget
On Fri, Jan 03, 2014 at 05:56:42PM -0500, Jim Salter wrote:
I actually read the wiki pretty obsessively before blasting the list
- could not successfully find anything answering the question, by
scanning the FAQ or by Googling.
You're right - mount -t btrfs -o degraded /dev/vdb /test worked
Sorry - where do I put this in GRUB? /boot/grub/grub.cfg is still kinda
black magic to me, and I don't think I'm supposed to be editing it
directly at all anymore anyway, if I remember correctly...
HOWEVER - this won't allow a root filesystem to mount. How do you deal
with this if you'd set up
On Fri, Jan 03, 2014 at 06:13:25PM -0500, Jim Salter wrote:
Sorry - where do I put this in GRUB? /boot/grub/grub.cfg is still
kinda black magic to me, and I don't think I'm supposed to be
editing it directly at all anymore anyway, if I remember
correctly...
You don't need to edit grub.cfg
On Jan 3, 2014, at 3:56 PM, Jim Salter j...@jrs-s.net wrote:
I actually read the wiki pretty obsessively before blasting the list - could
not successfully find anything answering the question, by scanning the FAQ or
by Googling.
You're right - mount -t btrfs -o degraded /dev/vdb /test
Yep - had just figured that out and successfully booted with it, and was
in the process of typing up instructions for the list (and posterity).
One thing that concerns me is that edits made directly to grub.cfg will
get wiped out with every kernel upgrade when update-grub is run - any
idea
On Jan 3, 2014, at 4:13 PM, Jim Salter j...@jrs-s.net wrote:
Sorry - where do I put this in GRUB? /boot/grub/grub.cfg is still kinda black
magic to me, and I don't think I'm supposed to be editing it directly at all
anymore anyway, if I remember correctly…
Don't edit the grub.cfg directly.
On Jan 3, 2014, at 4:25 PM, Jim Salter j...@jrs-s.net wrote:
One thing that concerns me is that edits made directly to grub.cfg will get
wiped out with every kernel upgrade when update-grub is run - any idea where
I'd put this in /etc/grub.d to have a persistent change?
/etc/default/grub
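A sketch of that approach (the existing contents of the variable vary by distribution, so merge rather than overwrite):

```shell
# In /etc/default/grub -- unlike grub.cfg, edits here survive
# kernel upgrades and update-grub runs:
GRUB_CMDLINE_LINUX="rootflags=degraded"

# Then regenerate the real config:
sudo update-grub   # Debian/Ubuntu wrapper around grub-mkconfig
```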
On Jan 3, 2014, at 12:41 PM, Hendrik Friedel hend...@friedels.name wrote:
Hello,
I ran btrfsck on my volume with the repair option. When I re-run it, I get
the same errors as before.
Did you try mounting with -o recovery first?
https://btrfs.wiki.kernel.org/index.php/Problem_FAQ
What
For anybody else interested, if you want your system to automatically
boot a degraded btrfs array, here are my crib notes, verified working:
* boot degraded
1. edit /etc/grub.d/10_linux, add degraded to the rootflags
Minor correction: you need to close the double-quotes at the end of the
GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="rootflags=degraded,subvol=${rootsubvol} ${GRUB_CMDLINE_LINUX}"
On 01/03/2014 06:42 PM, Jim Salter wrote:
For anybody else interested, if you want your system to
On Jan 3, 2014, at 5:33 AM, Marc MERLIN m...@merlins.org wrote:
Would it be possible for whoever maintains btrfs-tools to change both
the man page and the help included in the tool to clearly state that
running the fsck tool is unlikely to be the right course of action
and talk about
On Jan 3, 2014, at 4:42 PM, Jim Salter j...@jrs-s.net wrote:
For anybody else interested, if you want your system to automatically boot a
degraded btrfs array, here are my crib notes, verified working:
* boot degraded
1. edit /etc/grub.d/10_linux, add
I personally consider proper RAID6 support with graceful, non-intrusive
handling of failing drives and a proper warning mechanism the most
important missing feature of btrfs, and I know this view is shared by
many others with software RAID based storage systems, currently
limited by the existing
On 01/03/2014 07:27 PM, Chris Murphy wrote:
This is the wrong way to solve this. /etc/grub.d/10_linux is subject
to being replaced on updates. It is not recommended it be edited, same
as for grub.cfg. The correct way is as I already stated, which is to
edit the GRUB_CMDLINE_LINUX= line in
On Fri, Jan 3, 2014 at 9:59 PM, Jim Salter j...@jrs-s.net wrote:
You're suggesting the wrong alternatives here (mdraid, LVM, etc) - they
don't provide the features that I need or are accustomed to (true snapshots,
copy on write, self-correcting redundant arrays, and on down the line). If
Chris Murphy posted on Fri, 03 Jan 2014 16:22:44 -0700 as excerpted:
I would not make this option persistent by putting it permanently in the
grub.cfg; although I don't know the consequences of always mounting with
degraded even when not necessary, it could have some negative effects (?)
Degraded