$ uname -a
Linux nas 4.17.0-1-amd64 #1 SMP Debian 4.17.8-1 (2018-07-20) x86_64 GNU/Linux
$ dmesg | grep Btrfs
[8.168408] Btrfs loaded, crc32c=crc32c-intel
$ lsmod | grep crc32
crc32_pclmul 16384 0
libcrc32c 16384 1 btrfs
crc32c_generic 16384 0
crc32c_intel
On 9 March 2018 at 20:05, Alex Adriaanse wrote:
>
> Yes, we have PostgreSQL databases running on these VMs that put a heavy I/O load
> on these machines.
Dump the databases and recreate them with --data-checksums and the Btrfs
No_COW attribute.
You can add this to
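A sketch of what that could look like; the paths and the dump file are placeholders, not from the original thread, and No_COW (chattr +C) only affects files created after the attribute is set, so it must go on an empty directory:

```shell
# Stop PostgreSQL and dump everything first.
pg_dumpall -U postgres > /backup/all.sql

# Create a fresh data directory and mark it No_COW while it is still
# empty; existing files would not pick up the attribute.
mkdir /var/lib/postgresql/data-new
chattr +C /var/lib/postgresql/data-new

# Re-initialise the cluster with page-level checksums, then restore
# the dump into it.
initdb --data-checksums -D /var/lib/postgresql/data-new
```

Note that nodatacow also disables Btrfs's own data checksums on those files, which is why enabling PostgreSQL's --data-checksums alongside it makes sense.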
On 1 December 2017 at 08:18, Duncan <1i5t5.dun...@cox.net> wrote:
>
> When udev sees a device it triggers
> a btrfs device scan, which lets btrfs know which devices belong to which
> individual btrfs. But once it associates a device with a particular
> btrfs, there's nothing to unassociate it --
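At the time the only ways to unassociate a device were a reboot or reloading the btrfs module; later btrfs-progs (5.0+, with a matching kernel) gained a forget operation. A sketch, assuming a new enough kernel and progs (/dev/sdx is a placeholder):

```shell
# Drop all unmounted devices from the kernel's btrfs device cache ...
btrfs device scan --forget

# ... or forget just one device:
btrfs device scan --forget /dev/sdx
```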
On 14 November 2017 at 09:36, Klaus Agnoletti wrote:
>
> How do you guys think I should go about this?
I'd clone the disk with GNU ddrescue.
https://www.gnu.org/software/ddrescue/
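A minimal sketch of such a clone; the device names are placeholders, and ddrescue writes to its second argument, so double-check them:

```shell
# First pass: copy everything that reads cleanly and skip the bad
# areas (-n); -f is required when the destination is a block device.
ddrescue -f -n /dev/sdOLD /dev/sdNEW rescue.map

# Second pass: go back and retry the bad areas up to three times.
ddrescue -f -r3 /dev/sdOLD /dev/sdNEW rescue.map
```

The mapfile (rescue.map) records progress, so an interrupted run can be resumed.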
On 9 September 2017 at 12:05, Marat Khalili wrote:
> Forgot to add, I've got a spare empty bay if it can be useful here.
That makes it much easier since you don't have to mount it degraded,
with the risks involved.
Add and partition the disk.
# btrfs replace start /dev/sdb7
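The command above is truncated in the archive; the full syntax takes a source (device path or devid), a target device and the mount point. A sketch, with /dev/sdX and /mnt as placeholders for the failing device and the mount point:

```shell
# btrfs replace start <srcdev>|<devid> <targetdev> <mountpoint>
btrfs replace start /dev/sdX /dev/sdb7 /mnt

# The replace runs in the background; check on it with:
btrfs replace status /mnt
```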
On 9 September 2017 at 09:46, Marat Khalili wrote:
>
> Dear list,
>
> I'm going to replace one hard drive (partition actually) of a btrfs raid1.
> Can you please spell exactly what I need to do in order to get my filesystem
> working as RAID1 again after replacement, exactly as it
Print "Device slack: 0.00B"
instead of "Device slack: 16.00EiB".
Signed-off-by: Patrik Lundquist <patrik.lundqu...@gmail.com>
---
cmds-fi-usage.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-fi-usage.c b/cmds-fi-usage.c
index 101a0c4..6
On 10 August 2016 at 23:21, Chris Murphy wrote:
>
> I'm using LUKS, aes xts-plain64, on six devices. One is using mixed-bg
> single device. One is dsingle mdup. And then 2x2 mraid1 draid1. I've
> had zero problems. The two computers these run on do have aesni
> support.
On 21 July 2016 at 15:34, Chris Murphy wrote:
>
> Do programs have a way to communicate what portion of a data file is
> modified, so that only changed blocks are COW'd? When I change a
> single pixel in a 400MiB image and do a save (to overwrite the
> original file), it
On 7 May 2016 at 18:11, Niccolò Belli wrote:
> Which kind of hardware issue? I did a full memtest86 check, a full
> smartmontools extended check and even a badblocks -wsv.
> If this is really an hardware issue that we can identify I would be more than
> happy because
On 7 April 2016 at 17:33, Ivan P wrote:
>
> After running btrfsck --readonly again, the output is:
>
> ===
> Checking filesystem on /dev/sdb
> UUID: 013cda95-8aab-4cb2-acdd-2f0f78036e02
> checking extents
> checking free space cache
> block
Print e.g. "[devid:4].write_io_errs 6" instead of
"[(null)].write_io_errs 6" when device is missing.
Signed-off-by: Patrik Lundquist <patrik.lundqu...@gmail.com>
---
cmds-device.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/cmds-device.c b/cmds-devic
On 2 April 2016 at 20:31, Kai Krakow wrote:
> Am Sat, 2 Apr 2016 11:44:32 +0200
> schrieb Marc Haber :
>
>> On Sat, Apr 02, 2016 at 11:03:53AM +0200, Kai Krakow wrote:
>> > Am Fri, 1 Apr 2016 07:57:25 +0200
>> > schrieb Marc Haber
On 29 March 2016 at 22:46, Chris Murphy wrote:
> On Tue, Mar 29, 2016 at 2:21 PM, Warren, Daniel
> wrote:
>> Greetings all,
>>
>> I'm running 4.4.0 from deb sid
>>
>> btrfs fi sh http://pastebin.com/QLTqSU8L
>> kernel panic
On 28 March 2016 at 05:54, Anand Jain <anand.j...@oracle.com> wrote:
>
> On 03/26/2016 07:51 PM, Patrik Lundquist wrote:
>>
>> # btrfs device stats /mnt
>>
>> [/dev/sde].write_io_errs 11
>> [/dev/sde].read_io_errs 0
>> [/dev/sde].flush_io_errs
So with the lessons learned:
# mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt; dmesg | tail
# touch /mnt/test1; sync; btrfs device usage /mnt
Only raid10 profiles.
# echo 1 >/sys/block/sde/device/delete
We lost a disk.
# touch /mnt/test2; sync; dmesg
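From there the lost disk can be swapped out. A sketch, not from the original test run, assuming the replacement shows up as /dev/sdf and the missing device had devid 4 (both placeholders):

```shell
# Find the devid of the missing device.
btrfs filesystem show /mnt

# Replace it by devid, since its device node is gone.
btrfs replace start 4 /dev/sdf /mnt
btrfs replace status /mnt
```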
On 25 March 2016 at 18:20, Stephen Williams wrote:
>
> Your information below was very helpful and I was able to recreate the
> Raid array. However my initial question still stands - What if the
> drives dies completely? I work in a Data center and we see this quite a
> lot
On Debian Stretch with Linux 4.4.6, btrfs-progs 4.4 in VirtualBox
5.0.16 with 4*2GB VDIs:
# mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt
# touch /mnt/test
# umount /mnt
Everything fine so far.
# wipefs -a /dev/sde
*reboot*
# mount /dev/sdb /mnt
On 23 March 2016 at 20:33, Chris Murphy wrote:
>
> On Wed, Mar 23, 2016 at 1:10 PM, Brad Templeton wrote:
> >
> > I am surprised to hear it said that having the mixed sizes is an odd
> > case.
>
> Not odd as in wrong, just uncommon compared to other
On 25 March 2016 at 12:49, Stephen Williams wrote:
>
> So catch 22, you need all the drives otherwise it won't let you mount,
> But what happens if a drive dies and the OS doesn't detect it? BTRFS
> wont allow you to mount the raid volume to remove the bad disk!
Version of
On 23 February 2016 at 18:26, Marc MERLIN wrote:
>
> I'm currently doing a very slow defrag to see if it'll help (looks like
> it's going to take days).
> I'm doing this:
> for i in dir1 dir2 debian32 debian64 ubuntu dir4 ; do echo $i; time btrfs fi
> defragment -v -r $i; done
On 30 January 2016 at 15:50, Patrik Lundquist
<patrik.lundqu...@gmail.com> wrote:
> On 29 January 2016 at 13:14, Austin S. Hemmelgarn <ahferro...@gmail.com>
> wrote:
>>
>> Last I checked, Seagate's 'NAS' drives and whatever they've re-branded their
>> other
On 30 January 2016 at 12:59, Christian Pernegger wrote:
>
> This is on a 1-month-old Debian stable (jessie) install and yes, I
> know that means the kernel and btrfs-progs are ancient
apt-get install -t jessie-backports linux-image-4.3.0-0.bpo.1-amd64
Or something like that
On 1 January 2016 at 16:44, Jan Koester wrote:
>
> Hi,
>
> if I try to repair the filesystem I get an assert. I use Raid6.
>
> Linux dibsi 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt4-3~bpo70+1
> (2015-02-12) x86_64 GNU/Linux
Raid6 wasn't completed until Linux 3.19 and I
On 19 November 2015 at 06:58, Roman Mamedov wrote:
>
> On Wed, 18 Nov 2015 19:53:03 +0100
> linux-btrfs.tebu...@xoxy.net wrote:
>
> > $ uname -a
> > Linux neptun 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8
> > 10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[...]
>
>
On 14 November 2015 at 15:11, CHENG Yuk-Pong, Daniel wrote:
>
> Background info:
>
> I am running a heavy-write database server with 96GB RAM. In the worst
> case it causes multiple minutes of high CPU load. Systemd keeps killing
> and restarting services, and old jobs don't die
On 6 November 2015 at 10:03, Janos Toth F. wrote:
>
> Although I updated the firmware of the drives. (I found an IMPORTANT
> update when I went there to download SeaTools, although there was no
> change log to tell me why this was important). This might have changed the
> error
On 7 August 2015 at 00:17, Peter Foley pefol...@pefoley.com wrote:
Hi,
I have a btrfs volume that spans multiple disks (no raid, just
single), and earlier this morning I hit some hardware problems with
one of the disks.
I tried btrfs dev del /dev/sda1 /, but btrfs was unable to migrate the
On 25 July 2015 at 10:56, Mojtaba ker...@rp2.org wrote:
System is debian wheezy or Jessie.
This is Debian Jessie:
root@s2:/# uname -a
Linux s2 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux
That's way too old a kernel to be running Btrfs on. You should be
running on at least
to 1073741824.
Also added a missing newline.
Signed-off-by: Patrik Lundquist patrik.lundqu...@gmail.com
---
cmds-filesystem.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 800aa4d..00a3f78 100644
--- a/cmds-filesystem.c
+++ b/cmds
A leftover from when recursive defrag was added.
Signed-off-by: Patrik Lundquist patrik.lundqu...@gmail.com
---
cmds-filesystem.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 00a3f78..1b7b4c1 100644
--- a/cmds-filesystem.c
On 14 July 2015 at 21:15, Hugo Mills h...@carfax.org.uk wrote:
On Tue, Jul 14, 2015 at 09:09:00PM +0200, Patrik Lundquist wrote:
On 14 July 2015 at 20:41, Hugo Mills h...@carfax.org.uk wrote:
On Tue, Jul 14, 2015 at 01:57:07PM +0200, Patrik Lundquist wrote:
On 24 June 2015 at 12:46, Duncan
On 24 June 2015 at 12:46, Duncan 1i5t5.dun...@cox.net wrote:
Regardless of whether 1 or huge -t means maximum defrag, however, the
nominal data chunk size of 1 GiB means that 30 GiB file you mentioned
should be considered ideally defragged at 31 extents. This is a
departure from ext4, which
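A quick way to see this in practice is filefrag; for a file around 30 GiB, something on the order of 31 extents would already be an ideal result given the ~1 GiB data chunk size (the file name here is a placeholder):

```shell
# Report the number of extents the file occupies.
filefrag image.vdi

# A more detailed per-extent listing:
filefrag -v image.vdi
```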
On 14 July 2015 at 20:41, Hugo Mills h...@carfax.org.uk wrote:
On Tue, Jul 14, 2015 at 01:57:07PM +0200, Patrik Lundquist wrote:
On 24 June 2015 at 12:46, Duncan 1i5t5.dun...@cox.net wrote:
Regardless of whether 1 or huge -t means maximum defrag, however, the
nominal data chunk size of 1
On 10 July 2015 at 06:05, None None whocares0...@freemail.hu wrote:
According to dmesg sda returns bad data but the smart values for it seem fine.
# smartctl -a /dev/sda
...
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
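An empty self-test log means the drive has never been exercised by SMART self-tests; kicking one off is cheap (device name as in the quoted output):

```shell
# Start an extended (long) offline self-test in the background.
smartctl -t long /dev/sda

# Check the self-test log once it has had time to finish.
smartctl -l selftest /dev/sda
```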
Signed-off-by: Patrik Lundquist patrik.lundqu...@gmail.com
---
cmds-inspect.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-inspect.c b/cmds-inspect.c
index 053cf8e..aafe37d 100644
--- a/cmds-inspect.c
+++ b/cmds-inspect.c
@@ -293,7 +293,7 @@ static int
On 25 June 2015 at 06:01, Duncan 1i5t5.dun...@cox.net wrote:
Patrik Lundquist posted on Wed, 24 Jun 2015 14:05:57 +0200 as excerpted:
On 24 June 2015 at 12:46, Duncan 1i5t5.dun...@cox.net wrote:
If it's uint32 limited, either kill everything above that in both the
documentation and code
On 24 June 2015 at 05:20, Marc MERLIN m...@merlins.org wrote:
Hello again,
Just curious, is anyone seeing similar things with big VM images or other
DBs?
I forgot to mention that my vdi file is 88GB.
It's surprising that it took longer to count the fragments than to actually
defragment
On 24 June 2015 at 12:46, Duncan 1i5t5.dun...@cox.net wrote:
Patrik Lundquist posted on Wed, 24 Jun 2015 10:28:09 +0200 as excerpted:
AFAIK, it's set huge to defrag everything,
It's set to 256K by default.
Assuming that setting a huge -t to defrag to the maximum extent possible
is correct
btrfs fi defrag -t 1T overflows the u32 thresh variable, so the default
threshold is used instead of the maximum.
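The arithmetic behind the overflow, as a sketch: -t 1T parses to 2^40 bytes, and keeping only the low 32 bits of that (which is what storing it in a u32 does) leaves 0, so the code falls back to the 256KiB default instead of the maximum:

```shell
# 1T = 2^40 bytes; masking to the low 32 bits, as assignment to a u32
# effectively does, yields 0.
printf '%u\n' $(( (1 << 40) & 0xFFFFFFFF ))
# prints 0
```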
Signed-off-by: Patrik Lundquist patrik.lundqu...@gmail.com
---
cmds-filesystem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
On 12 January 2015 at 15:54, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Another thing to consider is that the kernel's default I/O scheduler and the
default parameters for that I/O scheduler are almost always suboptimal for
SSD's, and this tends to show far more with BTRFS than anything
Hi,
I've been looking at recommended cryptsetup options for Btrfs and I
have one question:
Marc uses cryptsetup luksFormat --align-payload=1024 directly on a
disk partition and not on e.g. a striped mdraid. Is there a Btrfs
reason for that alignment?
On 28 December 2014 at 13:03, Martin Steigerwald mar...@lichtvoll.de wrote:
BTW, I found that the Oracle blog didn't work at all for me. I completed
a cycle of defrag, sdelete -c and VBoxManage compact, [...] and it
apparently did *nothing* to reduce the size of the file.
They've changed the
On 12 December 2014 at 14:29, Robert White rwh...@pobox.com wrote:
You yourself even found the annotation in the wiki that said you should have
e4defragged the system before conversion.
There's no mention of e4defrag on the Btrfs wiki, it says to btrfs
defrag before balance to avoid ENOSPC, as
I'll reboot the thread with a recap and my latest findings.
* Half full 3TB disk converted from ext4 to Btrfs, after first
verifying it with fsck.
* Undo subvolume deleted after being happy with the conversion.
* Recursive defrag.
* Full balance, that ended with 98 enospc errors during balance.
On 11 December 2014 at 09:42, Robert White rwh...@pobox.com wrote:
On 12/10/2014 05:36 AM, Patrik Lundquist wrote:
On 10 December 2014 at 13:17, Robert White rwh...@pobox.com wrote:
On 12/09/2014 11:19 PM, Patrik Lundquist wrote:
BUT FIRST UNDERSTAND: you do _not_ need to balance a newly
On 11 December 2014 at 05:13, Duncan 1i5t5.dun...@cox.net wrote:
Patrik correct me if I have this wrong, but filling in the history as I
believe I have it...
You're right Duncan, except it began as a private question about an
error in a blog and went from there. Not that it matters, except the
On 11 December 2014 at 11:18, Robert White rwh...@pobox.com wrote:
So far I don't see a bug.
Fair enough, let's call it a huge problem with btrfs convert. I think
it warrants a note in the wiki.
On 12/11/2014 12:18 AM, Patrik Lundquist wrote:
Running defrag several more times and balance
On 11 December 2014 at 23:00, Robert White rwh...@pobox.com wrote:
On 12/11/2014 12:18 AM, Patrik Lundquist wrote:
* Full balance, that ended with 98 enospc errors during balance.
Assuming that quote is an actual quote from the output of the balance...
It is, from dmesg.
Bugs
On 10 December 2014 at 13:17, Robert White rwh...@pobox.com wrote:
On 12/09/2014 11:19 PM, Patrik Lundquist wrote:
BUT FIRST UNDERSTAND: you do _not_ need to balance a newly converted
filesystem. That is, the recommended balance (and recursive defrag) is _not_
a usability issue, its
On 10 December 2014 at 14:11, Duncan 1i5t5.dun...@cox.net wrote:
From there... I've never used it but I /think/ btrfs inspect-internal
logical-resolve should let you map the 182109... address to a filename.
From there, moving that file out of the filesystem and back in should
eliminate that
On 10 December 2014 at 13:47, Duncan 1i5t5.dun...@cox.net wrote:
The recursive btrfs defrag after deleting the saved ext* subvolume
_should_ have split up any such 1 GiB extents so balance could deal
with them, but either it failed for some reason on at least one such
file, or there's some
On 10 December 2014 at 23:28, Robert White rwh...@pobox.com wrote:
On 12/10/2014 10:56 AM, Patrik Lundquist wrote:
On 10 December 2014 at 14:11, Duncan 1i5t5.dun...@cox.net wrote:
Assuming no snapshots still contain the file, of course, and that the
ext* saved subvolume has already been
On 24 November 2014 at 13:35, Patrik Lundquist
patrik.lundqu...@gmail.com wrote:
On 24 November 2014 at 05:23, Duncan 1i5t5.dun...@cox.net wrote:
Patrik Lundquist posted on Sun, 23 Nov 2014 16:12:54 +0100 as excerpted:
The balance run now finishes without errors with usage=99 and I think
I'll
On 10 December 2014 at 00:13, Robert White rwh...@pobox.com wrote:
On 12/09/2014 02:29 PM, Patrik Lundquist wrote:
Label: none uuid: 770fe01d-6a45-42b9-912e-e8f8b413f6a4
Total devices 1 FS bytes used 1.35TiB
devid 1 size 2.73TiB used 1.36TiB path /dev/sdc1
Data, single: total
On 25 November 2014 at 22:34, Phillip Susi ps...@ubuntu.com wrote:
On 11/19/2014 7:05 PM, Chris Murphy wrote:
I'm not a hard drive engineer, so I can't argue either point. But
consumer drives clearly do behave this way. On Linux, the kernel's
default 30 second command timer eventually
On 25 November 2014 at 23:14, Phillip Susi ps...@ubuntu.com wrote:
On 11/19/2014 6:59 PM, Duncan wrote:
The paper specifically mentioned that it wasn't necessarily the
more expensive devices that were the best, either, but the ones
that faired best did tend to have longer device-ready times.
On 23 November 2014 at 08:52, Duncan 1i5t5.dun...@cox.net wrote:
[a whole lot]
Thanks for the long post, Duncan.
My venture into the finer details of balance began with converting an
ext4 fs to btrfs and, after an initial defrag, having a full balance fail
with about a third to go.
Consecutive
On 22 November 2014 at 23:26, Marc MERLIN m...@merlins.org wrote:
This one hurts my brain every time I think about it :)
I'm new to Btrfs so I may very well be wrong, since I haven't really
read up on it. :-)
So, the bigger the -dusage number, the more work btrfs has to do.
Agreed.