On Wed, Nov 8, 2017 at 5:13 AM, Austin S. Hemmelgarn
wrote:
>> It definitely does fix ups during normal operations. During reads, if
>> there's a UNC or there's corruption detected, Btrfs gets the good
>> copy, and does a (I think it's an overwrite, not COW) fixup. Fixups
>> don't just happen with scrub.
On Tue, Nov 7, 2017 at 6:02 AM, Austin S. Hemmelgarn
wrote:
> * Optional automatic correction of errors detected during normal usage.
> Right now, you have to run a scrub to correct errors. Such a design makes
> sense with MD and LVM, where you don't know which copy is correct, but BTRFS
> does know.
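Conceptually, the read-path self-healing being discussed can be sketched like this (a toy model with two mirrored copies and CRC32 checksums; the names and data structures are illustrative, not btrfs internals):

```python
import zlib

def read_with_fixup(mirrors, block, csums):
    """mirrors: two dicts mapping block number -> bytes (RAID1-style copies).
    csums: block number -> expected CRC32, standing in for the csum tree.
    Returns good data, overwriting a corrupt copy in place when one exists."""
    for dev in mirrors:
        data = dev[block]
        if zlib.crc32(data) == csums[block]:
            # Found a good copy: repair any sibling whose checksum fails.
            # This is the in-place overwrite fixup described in the thread.
            for other in mirrors:
                if zlib.crc32(other[block]) != csums[block]:
                    other[block] = data
            return data
    raise IOError("uncorrectable: no copy of block %d matches its csum" % block)

# Hypothetical usage: mirror 1 holds garbage for block 0.
good = b"some block payload"
m0, m1 = {0: good}, {0: b"xxxx block payload"}
csums = {0: zlib.crc32(good)}
assert read_with_fixup([m0, m1], 0, csums) == good
assert m1[0] == good  # the bad copy was rewritten from the good one
```

The key point from the thread is that, unlike MD/LVM, the checksum tells the filesystem which copy is the good one, so repair needs no scrub pass.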
On Mon, Nov 6, 2017 at 6:29 AM, Austin S. Hemmelgarn
wrote:
>
> With ATA devices (including SATA), except on newer SSD's, TRIM commands
> can't be queued,
SATA spec 3.1 includes queued trim. There are SATA spec 3.1 products
on the market claiming to do queued trim. Some of them fuck up, and
have been blacklisted in the kernel as a result.
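Either way, the practical difference between queued and non-queued TRIM is latency: a non-queued TRIM forces the drive's whole NCQ queue to drain before it runs, while a queued TRIM (SATA 3.1) rides along with other commands. A toy timing model (illustrative numbers, not real device behavior):

```python
import math

def toy_makespan(ops, queued_trim, io_ms=0.1, trim_ms=1.0, depth=32):
    """ops: a stream of "io" / "trim" commands. With NCQ, a run of n
    ordinary commands overlaps, costing roughly ceil(n / depth) * io_ms.
    A non-queued TRIM is a barrier: the queue drains, the TRIM runs
    alone, then the queue refills. A queued TRIM is modeled as just
    another queued command (a simplification)."""
    total, run = 0.0, 0
    for op in ops:
        if op == "io" or queued_trim:
            run += 1
        else:
            total += math.ceil(run / depth) * io_ms + trim_ms
            run = 0  # queue was drained around the TRIM
    return total + math.ceil(run / depth) * io_ms

workload = ["io"] * 100 + ["trim"] + ["io"] * 100
print(toy_makespan(workload, queued_trim=False))  # stalls around the TRIM
print(toy_makespan(workload, queued_trim=True))   # no stall
```

This is why mounting with continuous `discard` on a drive without (working) queued TRIM can hurt interactive latency badly.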
> How is this an issue? Discard is issued only once we're positive
> there's no reference to the freed blocks anywhere. At that point,
> they're also open for reuse, thus they can be arbitrarily scribbled
> upon.

Point was, how about keeping this reference for some time period?

> Unless your ha
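The suggestion of keeping the reference around for a while can be sketched as a deferred-discard queue keyed on transaction generation (illustrative only; these names are not btrfs internals):

```python
import collections

class DeferredDiscard:
    """Instead of discarding an extent the moment its last reference
    drops, park it for `grace` transaction commits, so older tree roots
    remain readable by recovery tools in the meantime."""
    def __init__(self, grace=3):
        self.grace = grace
        self.generation = 0
        self.pending = collections.deque()  # (generation_freed, extent)

    def free_extent(self, extent):
        self.pending.append((self.generation, extent))

    def commit_transaction(self, issue_discard):
        self.generation += 1
        # Only extents freed at least `grace` commits ago are discarded.
        while self.pending and self.generation - self.pending[0][0] >= self.grace:
            _, extent = self.pending.popleft()
            issue_discard(extent)  # now genuinely unreferenced and reusable

# Hypothetical usage:
discarded = []
dd = DeferredDiscard(grace=2)
dd.free_extent("extent A")
dd.commit_transaction(discarded.append)  # 1 commit elapsed: still held
assert discarded == []
dd.commit_transaction(discarded.append)  # 2 commits elapsed: discarded
assert discarded == ["extent A"]
```

The trade-off is the obvious one: the longer the grace period, the longer the SSD waits to reclaim the space.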
On Mon, Oct 30, 2017 at 5:37 PM, Chris Murphy wrote:
>
> That is not a general purpose file system. It's a file system for admins who
> understand where the bodies are buried.
I'm not sure I understand your comment...
Are you saying BTRFS is not a general purpose file system?
If btrfs isn't ab
On Tue, Oct 31, 2017 at 5:28 AM, Austin S. Hemmelgarn
wrote:
> If you're running on an SSD (or thinly provisioned storage, or something
> else which supports discards) and have the 'discard' mount option enabled,
> then there is no backup metadata tree (this issue was mentioned on the list
> a while back).
On 31/10/17 00:37, Chris Murphy wrote:
But off hand it sounds like hardware was sabotaging the expected write
ordering. How to test a given hardware setup for that, I think, is
really overdue. It affects literally every file system, and Linux
storage technology.
It kinda sounds like to me someth
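One way to frame such a hardware test: after a crash, the set of writes that survived must respect every flush barrier that was acknowledged. A sketch of the invariant check only (the harness that actually cuts power to the device is the hard part and is not shown):

```python
def ordering_violated(on_disk, barriers):
    """on_disk: set of sequence numbers of writes found intact after
    power loss (writes are issued as 1, 2, 3, ...). barriers: sequence
    numbers after which a flush/FUA was acknowledged. If anything issued
    after an acknowledged flush survived while something issued before
    that flush did not, the device reordered around the flush."""
    for b in barriers:
        survived_later = any(seq > b for seq in on_disk)
        survived_all_earlier = all(seq in on_disk for seq in range(1, b + 1))
        if survived_later and not survived_all_earlier:
            return True
    return False

# Hypothetical post-crash states: write 3 preceded the flush but is
# missing, while write 4 (issued after the flush) made it to media.
assert ordering_violated({1, 2, 4}, barriers=[3]) is True
assert ordering_violated({1, 2, 3, 4}, barriers=[3]) is False
```

Losing writes after the last completed barrier is always allowed; what the check catches is a *hole* before a barrier, which is exactly what breaks the assumptions every journaling and CoW filesystem makes.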
Dave posted on Sun, 29 Oct 2017 23:31:57 -0400 as excerpted:
> It's all part of the process of gaining critical experience with BTRFS.
> Whether or not BTRFS is ready for production use is (it seems to me)
> mostly a question of how knowledgeable and experienced are the people
> administering it.
This is a very helpful thread. I want to share an interesting related story.
We have a machine with 4 btrfs volumes and 4 Snapper configs. I
recently discovered that Snapper timeline cleanup had been turned off for
3 of those volumes. In the Snapper configs I found this setting:
TIMELINE_CLEANUP="no"
Yes I was running qgroups.
Yes the filesystem is highly fragmented.
Yes I have way too many snapshots.
I think it's clear that the problem is on my end. I simply placed too
many demands on the filesystem without fully understanding the
implications. Now I have to deal with the consequences.
It w
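For anyone else who hits this: turning timeline cleanup back on is a one-line change in each config under /etc/snapper/configs/ (the key names below come from stock Snapper configs; the limits are example values to tune to taste):

```
TIMELINE_CLEANUP="yes"
# How many timeline snapshots to keep per tier:
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_MONTHLY="3"
TIMELINE_LIMIT_YEARLY="0"
```

With a few thousand retained snapshots, trimming these limits first and letting cleanup catch up gradually is gentler than deleting them all at once.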
On 25/04/17 03:26, Qu Wenruo wrote:
IIRC qgroup for subvolume deletion will cause full subtree rescan
which can cause tons of memory.
Could it be this bad, 24GB of RAM for a 5.6TB volume? What does it even
use this absurd amount of memory for? Is it swappable?
Haven't read about RAM limitatio
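A back-of-the-envelope check on those numbers (pure arithmetic; the 128 KiB average extent size is a guess for a fragmented, snapshot-heavy volume, not a measured figure):

```python
volume_bytes = 5.6e12            # 5.6 TB volume, as reported in the thread
avg_extent = 128 * 1024          # assumed average extent size (a guess)
extents = volume_bytes / avg_extent
ram_bytes = 24 * 2**30           # the 24 GB that got exhausted
print("%.0f million extents -> %.0f bytes of RAM per extent"
      % (extents / 1e6, ram_bytes / extents))
```

On those assumptions the volume holds on the order of forty-odd million extents, so exhausting 24 GB implies some hundreds of bytes of in-memory state per extent, which is not an absurd per-record size if the rescan really does hold the whole subtree's accounting in RAM at once.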
I have a btrfs file system with a few thousand snapshots. When I
attempted to delete 20 or so of them the problems started.
The disks are being read but except for the first few minutes there
are no writes.
Memory usage keeps growing until all the memory (24 GB) is used in a
few hours. Eventuall
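Qu's point about the full subtree rescan can be illustrated with a toy model (this is not btrfs's actual back-reference logic, just the shape of the cost): without qgroups, dropping a snapshot can stop descending at any node still shared with another snapshot; with qgroups, every node under the dropped root must be visited to reclassify shared vs. exclusive ownership.

```python
def nodes_touched(root, refs, qgroup_accounting):
    """root: (block_id, children) nested tuples for one snapshot's tree.
    refs: block_id -> how many snapshot roots currently reference it.
    Returns how many nodes dropping this root must visit."""
    block, children = root
    count = 1
    still_shared = refs[block] > 1
    if still_shared and not qgroup_accounting:
        return count        # plain delete: drop our ref here and stop
    for child in children:  # qgroups: must classify the whole subtree
        count += nodes_touched(child, refs, qgroup_accounting)
    return count

# Hypothetical tree: the root is exclusive, everything below is shared
# with a few thousand sibling snapshots.
tree = ("R", [("A", [("A1", []), ("A2", [])]), ("B", [("B1", [])])])
refs = {"R": 1, "A": 2000, "A1": 2000, "A2": 2000, "B": 2000, "B1": 2000}
assert nodes_touched(tree, refs, False) == 3   # stops at shared nodes
assert nodes_touched(tree, refs, True) == 6    # walks everything
```

With a few thousand snapshots the shared subtrees are almost the whole filesystem, which is consistent with the symptoms above: hours of pure reads while the walk proceeds, and memory climbing as its state accumulates.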