On 01.12.2016 at 00:02, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 2:03 PM, Stefan Priebe - Profihost AG
> wrote:
>> Hello,
>>
>> # btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
>> Dumping filters: flags 0x7, state 0x0, force is off
>> DATA (flags 0x2): balancing, usage=0
>> METADATA (flags 0x2): balancing, usage=1
Austin S. Hemmelgarn posted on Wed, 30 Nov 2016 11:48:57 -0500 as
excerpted:
> On 2016-11-30 10:49, Wilson Meier wrote:
>> Do you also have in mind all the home users who go on vacation (sometimes
>> >3 weeks) and don't have a 24/7 support team to replace monitored disks
>> which do report SMART er
[BUG]
Under the following case, we can underflow qgroup reserved space.
Task A                           | Task B
---------------------------------+-------
Quota disabled                   |
Buffered write                   |
|- btrfs_check_data_free_space() |
btrfs_qgroup_release/free_data() only returns 0 or a negative error
number (ENOMEM is the only possible error).
This is normally good enough, but sometimes we need the accurate number
of bytes it freed/released.
Change it to return the number of bytes actually released/freed instead
of 0 on success.
And slightl
Hi Filipe,
Thank you for your review.
I have seen your modified change logs below:
Btrfs: fix tree search logic when replaying directory entry deletes
Btrfs: fix deadlock caused by fsync when logging directory entries
Btrfs: fix enospc in hole punching
So what's the next step?
mod
At 12/01/2016 12:34 AM, David Sterba wrote:
On Wed, Nov 16, 2016 at 10:27:59AM +0800, Qu Wenruo wrote:
Yes please. Third namespace for existing error bits is not a good
option. Move the I_ERR bits to start from 32 and use them in the low-mem
code that's been merged to devel.
I didn't see suc
On Wed, Nov 30, 2016 at 4:57 PM, Eric Wheeler wrote:
> On Wed, 30 Nov 2016, Marc MERLIN wrote:
>> +btrfs mailing list, see below why
>>
>> On Tue, Nov 29, 2016 at 12:59:44PM -0800, Eric Wheeler wrote:
>> > On Mon, 27 Nov 2016, Coly Li wrote:
>> > >
>> > > Yes, too many work queues... I guess the l
On Wed, Nov 30, 2016 at 03:57:28PM -0800, Eric Wheeler wrote:
> > I'll start another separate thread with the btrfs folks on how much
> > pressure is put on the system, but on your side it would be good to help
> > ensure that bcache doesn't crash the system altogether if too many
> > requests are
On Wed, 30 Nov 2016, Marc MERLIN wrote:
> +btrfs mailing list, see below why
>
> On Tue, Nov 29, 2016 at 12:59:44PM -0800, Eric Wheeler wrote:
> > On Mon, 27 Nov 2016, Coly Li wrote:
> > >
> > > Yes, too many work queues... I guess the locking might be caused by some
> > > very obscure reference
On Wed, Nov 30, 2016 at 2:03 PM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> # btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
> Dumping filters: flags 0x7, state 0x0, force is off
> DATA (flags 0x2): balancing, usage=0
> METADATA (flags 0x2): balancing, usage=1
> SYSTEM (flags 0x2): balancing, usage=1
Hello,
# btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x2): balancing, usage=0
METADATA (flags 0x2): balancing, usage=1
SYSTEM (flags 0x2): balancing, usage=1
ERROR: error during balancing '/ssddisk/': No space left on d
On 30 November 2016 at 19:09, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 7:37 AM, Austin S. Hemmelgarn
> wrote:
>
>> The stability info could be improved, but _absolutely none_ of the things
>> mentioned as issues with raid1 are specific to raid1. And in general, in
>> the context of a featur
On Wednesday, 30 November 2016, 12:09:23 CET, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 7:37 AM, Austin S. Hemmelgarn
>
> wrote:
> > The stability info could be improved, but _absolutely none_ of the things
> > mentioned as issues with raid1 are specific to raid1. And in general, in
> > the
On Wed, Nov 30, 2016 at 7:37 AM, Austin S. Hemmelgarn
wrote:
> The stability info could be improved, but _absolutely none_ of the things
> mentioned as issues with raid1 are specific to raid1. And in general, in
> the context of a feature stability matrix, 'OK' generally means that there
> are n
On Wed, Nov 30, 2016 at 7:04 AM, Roman Mamedov wrote:
> On Wed, 30 Nov 2016 07:50:17 -0500
> Also I don't know what is particularly insane about copying a 4-8 GB file onto
> a storage array. I'd expect both disks to write at the same time (like they
> do in pretty much any other RAID1 system), no
+folks from linux-mm thread for your suggestion
On Wed, Nov 30, 2016 at 01:00:45PM -0500, Austin S. Hemmelgarn wrote:
> > swraid5 < bcache < dmcrypt < btrfs
> >
> > Copying with btrfs send/receive causes massive hangs on the system.
> > Please see this explanation from Linus on why the workaround
On 2016-11-30 12:18, Marc MERLIN wrote:
On Wed, Nov 30, 2016 at 08:46:46AM -0800, Marc MERLIN wrote:
+btrfs mailing list, see below why
Ok, Linus helped me find a workaround for this problem:
https://lkml.org/lkml/2016/11/29/667
namely:
echo 2 > /proc/sys/vm/dirty_ratio
echo 1 > /proc/sys
From: Filipe Manana
Hi Chris,
Here follows a small list of fixes and a couple of cleanups for the 4.10 merge
window. It contains all the patches from the previous pull request (which
went unanswered and whose changes apparently have not been pulled yet). The most
important change is still the fix for the extent tr
On Wed, Nov 30, 2016 at 08:46:46AM -0800, Marc MERLIN wrote:
> +btrfs mailing list, see below why
>
> On Tue, Nov 29, 2016 at 12:59:44PM -0800, Eric Wheeler wrote:
> > On Mon, 27 Nov 2016, Coly Li wrote:
> > >
> > > Yes, too many work queues... I guess the locking might be caused by some
> > > ve
On Wed, Nov 30, 2016 at 08:46:46AM -0800, Marc MERLIN wrote:
> +btrfs mailing list, see below why
>
> Ok, Linus helped me find a workaround for this problem:
> https://lkml.org/lkml/2016/11/29/667
> namely:
> echo 2 > /proc/sys/vm/dirty_ratio
> echo 1 > /proc/sys/vm/dirty_background_ratio
>
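To keep the quoted workaround across reboots, the same thresholds can go into a sysctl drop-in (a sketch; the file name is arbitrary, and the values are just the ones from the quoted message, not a general recommendation):

```ini
# /etc/sysctl.d/99-dirty-workaround.conf  (file name is arbitrary)
# Same thresholds as the echo commands quoted above.
vm.dirty_ratio = 2
vm.dirty_background_ratio = 1
```

The drop-in can be applied without rebooting via `sysctl --system`.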
On Fri, Oct 7, 2016 at 10:30 AM, robbieko wrote:
> From: Robbie Ko
>
> If the log tree looks like the following:
> leaf N:
> ...
> item 240 key (282 DIR_LOG_ITEM 0) itemoff 8189 itemsize 8
> dir log end 1275809046
> leaf N+1:
> item 0 key (282 DIR_LOG_ITEM 3936149215) itemof
On 2016-11-30 10:49, Wilson Meier wrote:
On 30/11/16 at 15:37, Austin S. Hemmelgarn wrote:
On 2016-11-30 08:12, Wilson Meier wrote:
On 30/11/16 at 11:41, Duncan wrote:
Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
On 30/11/16 at 09:06, Martin Steigerwald wrote:
A
+btrfs mailing list, see below why
On Tue, Nov 29, 2016 at 12:59:44PM -0800, Eric Wheeler wrote:
> On Mon, 27 Nov 2016, Coly Li wrote:
> >
> > Yes, too many work queues... I guess the locking might be caused by some
> > very obscure reference of closure code. I cannot have any clue if I
> > canno
On Wed, Nov 16, 2016 at 10:27:59AM +0800, Qu Wenruo wrote:
> > Yes please. Third namespace for existing error bits is not a good
> > option. Move the I_ERR bits to start from 32 and use them in the low-mem
> > code that's been merged to devel.
>
> I didn't see such a fix in the devel branch.
Well, that
On Wednesday, 30 November 2016, 16:49:59 CET, Wilson Meier wrote:
> On 30/11/16 at 15:37, Austin S. Hemmelgarn wrote:
> > On 2016-11-30 08:12, Wilson Meier wrote:
> >> On 30/11/16 at 11:41, Duncan wrote:
> >>> Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
> On 30/11/1
On Fri, Oct 28, 2016 at 3:48 AM, robbieko wrote:
> From: Robbie Ko
>
> We found an fsync deadlock in log_new_dir_dentries: btrfs_search_forward()
> takes a path lock, and the subsequent call to btrfs_iget() takes another
> extent_buffer lock, which may deadlock.
This still doesn't explain how the deadlock ha
On Tue, Nov 08, 2016 at 09:45:54AM +0800, Qu Wenruo wrote:
> > Yes please. Third namespace for existing error bits is not a good
> > option. Move the I_ERR bits to start from 32 and use them in the low-mem
> > code that's been merged to devel.
> >
> Should I submit a separate fix or replace the pat
On Fri, Oct 28, 2016 at 3:32 AM, robbieko wrote:
> From: Robbie Ko
>
> The hole punching can result in adding new leafs (and as a consequence
> new nodes) to the tree because when we find file extent items that span
> beyond the hole range we may end up not deleting them (just adjusting them)
> a
I completely agree, the whole wiki status is simply *FRUSTRATING*.
Niccolò Belli
On Wednesday, 30 November 2016 14:12:36 CET, Wilson Meier wrote:
On 30/11/16 at 11:41, Duncan wrote:
Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
...
Hi Duncan,
I understand your argumen
On 30/11/16 at 15:37, Austin S. Hemmelgarn wrote:
> On 2016-11-30 08:12, Wilson Meier wrote:
>> On 30/11/16 at 11:41, Duncan wrote:
>>> Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
>>>
On 30/11/16 at 09:06, Martin Steigerwald wrote:
> On Wednesday, 30 November
On 2016-11-30 09:04, Roman Mamedov wrote:
On Wed, 30 Nov 2016 07:50:17 -0500
"Austin S. Hemmelgarn" wrote:
*) Read performance is not optimized: all metadata is always read from the
first device unless it has failed, data reads are supposedly balanced between
devices per PID of the process rea
On Thu, Nov 10, 2016 at 09:01:47AM -0600, Goldwyn Rodrigues wrote:
> Simplify the fixup logic.
>
> Calling fixup_extent_ref() after encountering every error causes
> more error messages after the extent is fixed. In case of multiple errors,
> this is confusing because the error message is d
Hi,
this patch does not meet the basic formatting requirements; this has been
extensively documented, e.g. here
https://btrfs.wiki.kernel.org/index.php/Developer's_FAQ .
Besides the formalities, I'm missing the rationale for the change. It
deals with a strange case where the xattr name length is 0, which is
une
On Mon, Nov 28, 2016 at 07:51:53PM +0100, Adam Borowski wrote:
> People who don't frequent IRC nor the mailing list tend to believe RAID 5/6
> are stable; this leads to data loss. Thus, let's do warn them.
>
> At this point, I think fiery letters that won't be missed are warranted.
>
> Kernel 4.
On Tue, Nov 29, 2016 at 10:25:14AM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> Code reduction. Call warning_trace() from assert_trace() in order to
> reduce the printfs used. Also, the trace variable in warning_trace()
> is not required because it is already handled by BTRFS_DISABLE_BA
On Tue, Nov 29, 2016 at 10:24:52AM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> The values passed to BUG_ON/WARN_ON are negated(!) and printed, which
> results in printing the value zero for each bug/warning. For example:
> volumes.c:988: btrfs_alloc_chunk: Assertion `ret` failed
On 2016-11-30 08:12, Wilson Meier wrote:
On 30/11/16 at 11:41, Duncan wrote:
Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
On 30/11/16 at 09:06, Martin Steigerwald wrote:
On Wednesday, 30 November 2016, 10:38:08 CET, Roman Mamedov wrote:
[snip]
So the stability mat
On Tue, Nov 29, 2016 at 08:29:02PM +0530, Chandan Rajendra wrote:
> btrfs_super_block->sys_chunk_array_size is stored as le32 data on
> disk. However insert_temp_chunk_item() writes sys_chunk_array_size in
> host cpu order. This commit fixes this by using super block access
> helper functions to re
On Wed, 30 Nov 2016 07:50:17 -0500
"Austin S. Hemmelgarn" wrote:
> > *) Read performance is not optimized: all metadata is always read from the
> > first device unless it has failed, data reads are supposedly balanced
> > between
> > devices per PID of the process reading. Better implementations
On 30/11/16 at 11:41, Duncan wrote:
> Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
>
>> On 30/11/16 at 09:06, Martin Steigerwald wrote:
>>> On Wednesday, 30 November 2016, 10:38:08 CET, Roman Mamedov wrote:
[snip]
>>> So the stability matrix would need to be updated
h changes up to 515bdc479097ec9d5f389202842345af3162f71c:
Merge branch 'misc-4.10' into for-chris-4.10-20161130 (2016-11-30 14:02:20
+0100)
Adam Borowski (1):
btrfs: make block group flags in balance printks human-readable
Christoph
On Wed, Nov 30, 2016 at 08:24:32AM +0800, Qu Wenruo wrote:
>
>
> At 11/30/2016 12:10 AM, David Sterba wrote:
> > On Mon, Nov 28, 2016 at 09:40:07AM +0800, Qu Wenruo wrote:
> >> Goldwyn Rodrigues has exposed and fixed a bug which underflows btrfs
> >> qgroup reserved space, and leads to non-writab
On 2016-11-30 00:38, Roman Mamedov wrote:
On Wed, 30 Nov 2016 00:16:48 +0100
Wilson Meier wrote:
That said, btrfs shouldn't be used for other than raid1 as every other
raid level has serious problems or at least doesn't work as the expected
raid level (in terms of failure recovery).
RAID1 sh
Hi,
btrfs-progs version 4.8.5 has been released; it contains an urgent bugfix for
receive, which mistakenly reported an error on valid streams. The bug was
introduced in 4.8.4 by me, my apologies.
Changes:
* receive: fix detection of end of stream (error reported even for valid
streams)
* other:
*
Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
> On 30/11/16 at 09:06, Martin Steigerwald wrote:
>> On Wednesday, 30 November 2016, 10:38:08 CET, Roman Mamedov wrote:
>>> On Wed, 30 Nov 2016 00:16:48 +0100
>>>
>>> Wilson Meier wrote:
That said, btrfs shouldn't be used
On 30/11/16 at 09:06, Martin Steigerwald wrote:
> On Wednesday, 30 November 2016, 10:38:08 CET, Roman Mamedov wrote:
>> On Wed, 30 Nov 2016 00:16:48 +0100
>>
>> Wilson Meier wrote:
>>> That said, btrfs shouldn't be used for other than raid1 as every other
>>> raid level has serious problems or
On Wednesday, 30 November 2016, 10:38:08 CET, Roman Mamedov wrote:
> On Wed, 30 Nov 2016 00:16:48 +0100
>
> Wilson Meier wrote:
> > That said, btrfs shouldn't be used for other than raid1 as every other
> > raid level has serious problems or at least doesn't work as the expected
> > raid level (
47 matches