Kai Krakow posted on Mon, 15 May 2017 21:12:06 +0200 as excerpted:
> On Mon, 15 May 2017 14:09:20 +0100, Tomasz Kusmierz wrote:
>>
>> Not true. When an HDD uses 10% of its space (10% is just an easy
>> example) as spare, then the alignment on disk is (US - used sector, SS - spare
Oops, sorry, I introduced those two issues in recent patches and
missed (skipped?) them while testing. With the above patch, the 008/009
test cases work fine now. Thanks.
On 5/16/17, Tsutomu Itoh wrote:
> In btrfs-progs-v4.11-rc1, the following convert-tests failed.
>
>
In btrfs-progs-v4.11-rc1, the following convert-tests failed.
[TEST/conv] 008-readonly-image
[TEST/conv] readonly image test, btrfs defaults
failed: mke2fs -t ext4 -b 4096 -F /Build/btrfs-progs-v4.11-rc1/tests/test.img
test failed for case 008-readonly-image
Makefile:271: recipe for
On Mon, 15 May 2017 22:05:05 +0200, Tomasz Torcz wrote:
> On Mon, May 15, 2017 at 09:49:38PM +0200, Kai Krakow wrote:
> >
> > > It's worth noting also that on average, COW filesystems like BTRFS
> > > (or log-structured filesystems) will not benefit as much as
> > >
On Mon, May 15, 2017 at 09:49:38PM +0200, Kai Krakow wrote:
>
> > It's worth noting also that on average, COW filesystems like BTRFS
> > (or log-structured filesystems) will not benefit as much as
> > traditional filesystems from SSD caching unless the caching is built
> > into the filesystem
On Mon, 15 May 2017 08:03:48 -0400, "Austin S. Hemmelgarn" wrote:
> > That's why I don't trust any of my data to them. But I still want
> > the benefit of their speed. So I use SSDs mostly as frontend caches
> > to HDDs. This gives me big storage with fast access. Indeed,
On Mon, 15 May 2017 07:46:01 -0400, "Austin S. Hemmelgarn" wrote:
> On 2017-05-12 14:27, Kai Krakow wrote:
> > On Tue, 18 Apr 2017 15:02:42 +0200, Imran Geriskovan wrote:
> >
> >> On 4/17/17, Austin S. Hemmelgarn
On Mon, 15 May 2017 14:09:20 +0100, Tomasz Kusmierz wrote:
> > Traditional hard drives usually do this too these days (they've
> > been under-provisioned since before SSDs existed), which is part
> > of why older disks tend to be noisier and slower (the reserved
> >
On Mon, 2017-05-15 at 12:42 +0200, Jan Kara wrote:
> On Tue 09-05-17 11:49:18, Jeff Layton wrote:
> > Now that we have a better way to store and report errors that occur
> > during writeback, we need to convert the existing codebase to use it. We
> > could just adapt all of the filesystem code and
On Tue, May 09, 2017 at 12:12:44PM -0400, Jeff Layton wrote:
> The writeback error handling test requires that you put the journal on a
> separate device. This allows us to use dmerror to simulate data
> writeback failure, without affecting the journal.
>
> xfs already has infrastructure for this
Running "btrfsck --repair /dev/sdd2" crashed because, as can happen on
(corrupted) file systems, slot > nritems:
> (gdb) bt full
> #0 0x77020e71 in __memmove_sse2_unaligned_erms () from
> /lib/x86_64-linux-gnu/libc.so.6
> #1 0x00438764 in btrfs_del_ptr (trans=,
> root=0x6e4fe0,
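The crash above comes from a memmove() inside btrfs_del_ptr() being fed a slot beyond nritems from corrupted metadata. A minimal standalone sketch of that failure mode and the kind of bounds check that turns it into a reportable error; the struct and helper names here are hypothetical illustrations, not the actual btrfs-progs code:

```c
/* Hypothetical, simplified model: deleting a pointer from a node
 * memmoves the remaining entries left over the deleted slot. If a
 * corrupted node reports slot >= nritems, the move length underflows
 * and memmove scribbles far outside the buffer. Checking bounds first
 * turns the crash into an error we can report. */
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

#define NODE_CAPACITY 8

struct node {
    uint32_t nritems;
    uint64_t ptrs[NODE_CAPACITY];
};

/* Returns 0 on success, -EUCLEAN on corrupted metadata. */
static int del_ptr_checked(struct node *n, uint32_t slot)
{
    if (slot >= n->nritems)   /* corrupted node: refuse, don't memmove */
        return -EUCLEAN;
    memmove(&n->ptrs[slot], &n->ptrs[slot + 1],
            (n->nritems - slot - 1) * sizeof(n->ptrs[0]));
    n->nritems--;
    return 0;
}
```

With this guard, a repair run on a fs where slot > nritems would print an error instead of segfaulting in __memmove_sse2_unaligned_erms().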
On Tue, May 09, 2017 at 12:40:53PM -0700, Liu Bo wrote:
> On Fri, May 05, 2017 at 06:52:45PM +0200, David Sterba wrote:
> > On Thu, Apr 13, 2017 at 06:11:56PM -0700, Liu Bo wrote:
> > > With raid1 profile, dio read isn't tolerating IO errors if read length is
> > > less than the stripe length
On Mon, May 15, 2017 at 09:57:09AM +0200, Philipp Hahn wrote:
> Running "btrfsck --repair /dev/sdd2" crashed:
> > (gdb) bt full
> > #0 0x77020e71 in __memmove_sse2_unaligned_erms () from
> > /lib/x86_64-linux-gnu/libc.so.6
> > No symbol table info available.
> > #1 0x00438764 in
Hi,
a pre-release has been tagged. The 4.11 release is going to be a small one,
just a handful of updates. I was too busy with 4.12 kernel patches. Something
will have to change regarding btrfs-progs management, as the number of
unreviewed and unmerged patches is not decreasing. I'll write more
> Traditional hard drives usually do this too these days (they've been
> under-provisioned since before SSDs existed), which is part of why older
> disks tend to be noisier and slower (the reserved space is usually at the far
> inside or outside of the platter, so using sectors from there to
On 2017-05-15 04:14, Hugo Mills wrote:
> On Sun, May 14, 2017 at 04:16:52PM -0700, Marc MERLIN wrote:
> > On Sun, May 14, 2017 at 09:21:11PM +0000, Hugo Mills wrote:
> > > > 2) balance -musage=0
> > > > 3) balance -musage=20
> > >
> > > In most cases, this is going to make ENOSPC problems worse, not
> > > better. The reason for
On Thu 11-05-17 14:17:04, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> RWF_NOWAIT informs the kernel to bail out if an AIO request would block
> for reasons such as file allocation, triggered writeback,
> or blocking while allocating requests while performing
>
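For context, RWF_NOWAIT is the per-call flag accepted by pwritev2(2). A small userspace sketch of how a caller probes for "this write would block"; the helper name is made up, and the error handling is deliberately tolerant because older kernels either lack pwritev2 or reject the flag:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_NOWAIT
#define RWF_NOWAIT 0x00000008   /* value from include/uapi/linux/fs.h */
#endif

/* Attempt a positioned write that must not block. Returns bytes
 * written, or -errno; -EAGAIN means "this would have blocked". */
static ssize_t write_nowait(int fd, const void *buf, size_t len, off_t off)
{
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
    ssize_t n = pwritev2(fd, &iov, 1, off, RWF_NOWAIT);
    return n < 0 ? -errno : n;
}
```

An AIO submitter can use the -EAGAIN result to fall back to a worker thread instead of stalling the submission path.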
On 2017-05-12 14:36, Kai Krakow wrote:
On Fri, 12 May 2017 15:02:20 +0200, Imran Geriskovan wrote:
On 5/12/17, Duncan <1i5t5.dun...@cox.net> wrote:
FWIW, I'm in the market for SSDs ATM, and remembered this from a
couple weeks ago so went back to find it. Thanks.
On Tue 09-05-17 11:49:24, Jeff Layton wrote:
> Don't try to check PageError since that's potentially racy and not
> necessarily going to be set after writepage errors out.
>
> Instead, sample the mapping error early on, and use that value to tell
> us whether we got a writeback error since then.
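The "sample early, compare later" idea can be sketched with a toy model; this illustrates the concept only and is not the kernel's actual errseq_t implementation:

```c
/* Toy model of the approach: record every writeback error together
 * with a sequence counter. A caller samples the counter up front and
 * later asks "did a new error land since my sample?" instead of
 * checking the racy PageError flag. */
#include <assert.h>
#include <errno.h>

struct mapping_err {
    int err;        /* last writeback error (0 if none) */
    unsigned seq;   /* bumped every time a new error is recorded */
};

static unsigned errseq_sample(const struct mapping_err *m)
{
    return m->seq;
}

static void errseq_record(struct mapping_err *m, int err)
{
    m->err = err;
    m->seq++;
}

/* Returns the error only if it arrived after 'since' was sampled. */
static int errseq_check(const struct mapping_err *m, unsigned since)
{
    return m->seq != since ? m->err : 0;
}
```

A new sample taken after the error is reported sees a clean state, which is exactly the fd-open-after-error behavior the series is after.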
On Tue 09-05-17 11:49:25, Jeff Layton wrote:
> Now that we don't clear writeback errors after fetching them, there is
> no need to reset them. This is also potentially racy.
>
> Signed-off-by: Jeff Layton
Looks good. You can add:
Reviewed-by: Jan Kara
On Tue 09-05-17 11:49:22, Jeff Layton wrote:
> I noticed on xfs that I could still sometimes get back an error on fsync
> on a fd that was opened after the error condition had been cleared.
>
> The problem is that the buffer code sets the write_io_error flag and
> then later checks that flag to
On 2017-05-12 14:27, Kai Krakow wrote:
On Tue, 18 Apr 2017 15:02:42 +0200, Imran Geriskovan wrote:
On 4/17/17, Austin S. Hemmelgarn wrote:
Regarding BTRFS specifically:
* Given my recently newfound understanding of what the 'ssd' mount
On 15/05/2017 at 10:14, Hugo Mills wrote:
> [...]
>> As for limit= I'm not sure if it would be helpful since I run this
>> nightly. Anything that doesn't get done tonight due to limit, would be
>> done tomorrow?
> I'm suggesting limit= on its own. It's a fixed amount of work
> compared to
On 5/15/17, Tomasz Kusmierz wrote:
> Theoretically all sectors in the over-provisioned area are erased; practically
> they are either erased, waiting to be erased, or broken.
> The over-provisioned area has more uses than that. For example, if you have
> a 1TB drive where you
On Tue 09-05-17 11:49:18, Jeff Layton wrote:
> Now that we have a better way to store and report errors that occur
> during writeback, we need to convert the existing codebase to use it. We
> could just adapt all of the filesystem code and related infrastructure
> to the new API, but that's a lot
On Mon, May 15, 2017 at 09:40:29AM +0800, Qu Wenruo wrote:
> >bug: https://bugzilla.kernel.org/show_bug.cgi?id=194795
>
> Errr, it seems that you forgot to update ext2_open_fs() to change how we
> get cctx->block_counts.
>
> Without that update, we still get the wrong total size of the original fs, so
>
Before this patch, btrfs check lowmem mode manually checked each found chunk
item, even though we already have the generic chunk validation checker,
btrfs_check_chunk_valid().
This patch uses btrfs_check_chunk_valid() to replace the open-coded
chunk validation in check_chunk_item().
Signed-off-by: Qu
When checking a chunk or dev extent, lowmem mode uses the chunk length as the
dev extent length, and if they mismatch, it reports a missing chunk or dev
extent like:
--
ERROR: chunk[256 4324327424) stripe 0 did not find the related dev extent
ERROR: chunk[256 4324327424) stripe 1 did not find the related dev
Introduce a new function, btrfs_get_chunk_stripe_len(), to get the correct
stripe length.
This is very handy for lowmem mode, which checks the mapping between
device extent and chunk item.
Signed-off-by: Qu Wenruo
---
volumes.c | 44
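A standalone sketch of what such a stripe-length helper has to compute per RAID profile; the enum and helper name below are stand-ins rather than the actual volumes.c code, with formulas following the usual btrfs chunk layout:

```c
/* Sketch: the on-disk length a single device extent covers depends on
 * the chunk's RAID profile, not just on the chunk length. */
#include <assert.h>
#include <stdint.h>

enum raid_profile { SINGLE, DUP, RAID0, RAID1, RAID10, RAID5, RAID6 };

static uint64_t chunk_stripe_len(enum raid_profile p, uint64_t chunk_len,
                                 uint32_t num_stripes, uint32_t sub_stripes)
{
    switch (p) {
    case RAID0:  return chunk_len / num_stripes;
    case RAID10: return chunk_len * sub_stripes / num_stripes;
    case RAID5:  return chunk_len / (num_stripes - 1);  /* one parity stripe */
    case RAID6:  return chunk_len / (num_stripes - 2);  /* two parity stripes */
    case SINGLE: case DUP: case RAID1:
    default:     return chunk_len;  /* each copy mirrors the whole chunk */
    }
}
```

This is why comparing the raw chunk length against a dev extent length misfires for striped profiles: a 4 GiB RAID0 chunk over four devices puts only 1 GiB on each.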
In btrfs_check_chunk_valid() we calculate the chunk item size using open code;
use btrfs_chunk_item_size() to replace it.
Signed-off-by: Qu Wenruo
---
volumes.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/volumes.c b/volumes.c
index
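The idea behind btrfs_chunk_item_size() can be sketched as follows; the structs here are simplified stand-ins for the on-disk btrfs_chunk/btrfs_stripe layout, where one stripe is embedded in the chunk item and the remaining stripes follow it in-line:

```c
/* Sketch: a chunk item embeds stripes[0] and appends (num_stripes - 1)
 * more directly after it, so the item size is a simple formula rather
 * than open-coded arithmetic at each call site. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct stripe {
    uint64_t devid;
    uint64_t offset;
    uint8_t  dev_uuid[16];
};

struct chunk {
    uint64_t length;
    uint64_t owner;
    uint64_t stripe_len;
    uint64_t type;
    uint32_t num_stripes;
    uint32_t sub_stripes;
    struct stripe first_stripe;   /* stripes[0]; the rest follow in-line */
};

static size_t chunk_item_size(uint32_t num_stripes)
{
    return sizeof(struct chunk) + (num_stripes - 1) * sizeof(struct stripe);
}
```

Centralizing the formula means a validity checker and a reader can never disagree on how large a chunk item with N stripes must be.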
btrfs_check_chunk_valid() doesn't check whether:
1) the chunk flags contain conflicting bits
For example, chunk type DATA|METADATA|RAID1|RAID10 is completely
invalid, while the current check_chunk_valid() can't detect it.
2) num_stripes is valid for RAID10
Num_stripes 5 is not valid for RAID10.
This
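A standalone sketch of the two missing checks; the flag constants below are simplified stand-ins for the real BTRFS_BLOCK_GROUP_* bits, and the exact RAID10 stripe-count rule is modeled here as "even and at least four":

```c
/* Sketch: (1) a chunk may carry at most one RAID profile bit, and
 * (2) RAID10 (two-way mirroring over stripes) needs an even
 * num_stripes of at least four. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BG_RAID0  (1ULL << 3)
#define BG_RAID1  (1ULL << 4)
#define BG_RAID10 (1ULL << 6)
#define BG_PROFILE_MASK (BG_RAID0 | BG_RAID1 | BG_RAID10)

static bool chunk_profile_valid(uint64_t flags, uint32_t num_stripes)
{
    uint64_t profile = flags & BG_PROFILE_MASK;

    /* more than one profile bit set -> conflicting flags */
    if (profile & (profile - 1))
        return false;
    /* e.g. num_stripes == 5 is invalid for RAID10 */
    if (profile == BG_RAID10 && (num_stripes < 4 || num_stripes % 2))
        return false;
    return true;
}
```

The `profile & (profile - 1)` trick rejects any value with two or more bits set, which is exactly the DATA|METADATA|RAID1|RAID10-style conflict described above once applied to the profile bits.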