Hello,
I have noticed an unusual number of CRC errors in downloaded RARs,
beginning about a week ago. But let's start with the preliminaries. I
am using Debian Stretch.
Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
(2018-08-21) x86_64 GNU/Linux
BTRFS-Tools btrfs-progs 4.7.3-1
Hello everybody,
first, let me thank everybody for their advice. What I did was close
the terminal with the device delete-process running in it and fired it
up again. It took about 5 minutes of intensive IO usage and the data
was redistributed and /dev/sda removed from the list of
Hello everybody,
I think this might be useful:
root@mars:~# btrfs dev usage /mnt/btrfs-raid/
/dev/sda, ID: 1
   Device size:             3.64TiB
   Device slack:              0.00B
   Data,RAID1:              7.00GiB
   Metadata,RAID1:          1.00GiB
   Unallocated:             3.63TiB
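That `btrfs dev usage` output can also be checked from a script. A minimal sketch that parses the sample above with awk; the two-column `Unallocated:` layout is assumed to stay stable across btrfs-progs versions:

```shell
#!/bin/sh
# Extract per-device unallocated space from `btrfs dev usage` output.
# In practice you would pipe the live command:
#   btrfs dev usage /mnt/btrfs-raid/ | awk '...'
# Here the sample output from the post is fed in directly.
printf '%s\n' \
    '/dev/sda, ID: 1' \
    '   Device size:    3.64TiB' \
    '   Device slack:     0.00B' \
    '   Data,RAID1:     7.00GiB' \
    '   Metadata,RAID1: 1.00GiB' \
    '   Unallocated:    3.63TiB' \
| awk '
    /^\/dev\// { dev = $1; sub(/,$/, "", dev) }   # remember current device
    /Unallocated:/ { print dev, $2 }              # report its free space
'
```

This prints one line per device, e.g. "/dev/sda 3.63TiB", which is easy to feed into a monitoring check.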
Yours
Hello,
I originally had a RAID with six 4TB drives, which was more than 80
percent full. So now I bought a 10TB drive, added it to the array and
gave the command to remove the oldest drive in the array.
btrfs device delete /dev/sda /mnt/btrfs-raid
I kept a terminal with "watch btrfs fi show"
> How much RAM on the machine and how much swap available? This looks like a
> lot of dirty data has accumulated, and then also there's swapping happening.
> Both swap out and swap in.
The machine has 16GB RAM and 40GB swap on an SSD. It's not doing much
besides being my personal file archive, so
Hello,
I have encountered what I think is a problem with btrfs, which causes
my file server to become unresponsive. But let's start with the basic
information:
uname -a = Linux mars 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2
(2018-01-04) x86_64 GNU/Linux
btrfs --version = btrfs-progs v4.7.3
Hello
For reference please see this post.
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg58461.html
Please note that I downgraded to btrfs-progs 4.6.1 as advised.
After exchanging the malfunctioning drive I re-created the filesystem
and restored the backup from my NAS. (I didn't entirely
cks
"btrfs dev stats" for None-Zero Entries.
Thanks everybody for your advice.
Stefan
2016-10-17 18:44 GMT+02:00 Stefan Malte Schumacher
<stefan.m.schumac...@gmail.com>:
> Hello
>
> I would like to monitor my btrfs-filesystem for missing drives. On
> Debian mdadm use
Hello
One of the drives which I added to my array two days ago was most
likely already damaged when I bought it - 312 read errors while
scrubbing and lots of SMART errors. I want to take the drive out, go
to my hardware vendor and have it replaced. So I issued the command:
"btrfs dev del /dev/sdf
Hello
I would like to monitor my btrfs-filesystem for missing drives. On
Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and
sends an email if anything is wrong with the array. I would like to do
the same with btrfs. In my first attempt I grepped and cut the
information from
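A minimal cron-style sketch of that idea, modeled on mdadm's daily script. The mountpoint and mail recipient would be placeholders, and the `[/dev/sdX].counter value` line format of `btrfs dev stats` is assumed from btrfs-progs; the filter is demonstrated here on sample text instead of live output:

```shell
#!/bin/sh
# Daily btrfs health check, analogous to mdadm's /etc/cron.daily script.
# Reports any `btrfs dev stats` counter that is not zero.

check_stats() {
    # Stats lines look like "[/dev/sda].write_io_errs 0" (format assumed
    # from btrfs-progs); keep only lines whose counter is non-zero.
    awk '$2 != 0'
}

# The real cron job would be something like:
#   btrfs dev stats /mnt/btrfs-raid | check_stats \
#       | mail -s "btrfs device errors" root
# Demo on sample output (the 312 read errors are taken from this thread):
printf '%s\n' \
    '[/dev/sda].write_io_errs 0' \
    '[/dev/sdf].read_io_errs 312' \
| check_stats
```

An empty result means all counters are zero, so the mail step can simply be skipped when the filter produces no output.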
>>> On Thu, Sep 15, 2016 at 9:48 AM, Stefan Malte Schumacher
>>> <stefan.m.schumac...@gmail.com> wrote:
> ...
>>> I believe it may be a result of replacing my old installation of
>>> Debian Jessie with Debian Stretch
> ...
the single data to
> raid1. imho.
> then do the scrub again.
> btw are there any scrubbing errors in dmesg? disks are ok?! any
> compression involved? changed free space cache to v2?
>
>
> sash
>
>
>
> On 15.09.2016 at 17:48, Stefan Malte Schumacher wrote:
>> H
Hello
I have encountered a very strange phenomenon while using btrfs-scrub.
I believe it may be a result of replacing my old installation of
Debian Jessie with Debian Stretch, resulting in a Kernel Switch from
3.16+63 to 4.6.0-1. I scrub my filesystem once a month and let anacron
send me the
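The monthly scrub plus mail report mentioned above can be wired up in a few lines. A sketch, assuming the "with N errors" wording of `btrfs scrub status` in btrfs-progs 4.x; the mountpoint is a placeholder:

```shell
#!/bin/sh
# Monthly scrub report, e.g. dropped into /etc/cron.monthly so that
# anacron runs it even if the machine was off at the scheduled time.

scrub_errors() {
    # Pull the error count out of `btrfs scrub status` text on stdin;
    # the "with N errors" phrasing is assumed from btrfs-progs 4.x.
    sed -n 's/.*with \([0-9][0-9]*\) errors.*/\1/p'
}

# The real job would be something like:
#   btrfs scrub start -B /mnt/btrfs-raid
#   btrfs scrub status /mnt/btrfs-raid | mail -s "scrub report" root
# Demo on a sample status line:
echo 'total bytes scrubbed: 3.00TiB with 0 errors' | scrub_errors
```

The extracted count can then be compared against 0 to decide whether the mail subject should flag a problem.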
Hello
I have created multiple filesystems with btrfs, in all cases directly
on the devices themselves without creating partitions beforehand. Now,
if I open the disks containing the multi-device filesystem in parted
it outputs the partition table as loop and shows one partition with
btrfs which
So try this one:
btrfs balance start -musage=0 -v
I fear that didn't work either.
mars:/mnt # btrfs balance start -musage=0 -v btrfs/
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=0
SYSTEM (flags 0x2): balancing, usage=0
Done, had to relocate
Hello
Chris and Duncan: I tried both your suggestions but unfortunately
without success. Here is the output:
mars:~ # btrfs balance start -susage=0 -f -v /mnt/btrfs/
Dumping filters: flags 0xa, state 0x0, force is on
SYSTEM (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 2708
Hello
Yesterday I created a btrfs-filesystem on two disks, using raid1 for
data and metadata. I then mounted it and rsynced several TB of data
onto it.
mkfs.btrfs -m raid1 -d raid1 /dev/sdf /dev/sdg
The command btrfs fi df /mnt/btrfs results in the following output:
Data, RAID1: total=2.64TiB,
They're harmless -- it's a side-effect of the way that mkfs works.
They'll go away if you balance them:
btrfs balance start -dprofiles=single -mprofiles=single -sprofiles=single
/mountpoint
btrfs refused this command; I had to pass --force to execute it.
It exited with this: Done,