On Mon, Dec 3, 2018, at 4:31 AM, Stefan Malte Schumacher wrote:
> I have noticed an unusual number of CRC errors in downloaded rars,
> beginning about a week ago. But let's start with the preliminaries. I
> am using Debian Stretch.
> Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
On 2018-10-29 02:11 PM, Ulli Horlacher wrote:
> I want to know how much free space is left and have problems
> interpreting the output of:
>
> btrfs filesystem usage
> btrfs filesystem df
> btrfs filesystem show
>
>
In my not-so-humble opinion, the filesystem usage command has the easiest
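As an illustrative sketch of how those three views relate (the mount point
/mnt/data is hypothetical), usage combines the other two: it shows the overall
allocated/unallocated split plus a per-device breakdown:

  # Overall summary and per-device breakdown; -T prints it as a table
  btrfs filesystem usage -T /mnt/data

  # Space per block-group type (Data / Metadata / System)
  btrfs filesystem df /mnt/data

  # Member devices and how much is allocated on each
  btrfs filesystem show /mnt/data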
On 2018-10-27 04:19 PM, Marc MERLIN wrote:
> Thanks for confirming. Because I always have snapshots for btrfs
> send/receive, defrag will duplicate as you say, but once the older
> snapshots get freed up, the duplicate blocks should go away, correct?
>
> Back to usage, thanks for pointing out
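As a hedged sketch of that sequence (pool and snapshot paths are hypothetical):
defragment the live subvolume, delete the older snapshots that still reference
the pre-defrag extents, and the duplicated blocks should be freed once those
snapshots are cleaned up:

  # Rewrite (and thereby un-share) extents in the live subvolume
  btrfs filesystem defragment -r /mnt/pool/live

  # Drop an older snapshot that still pins the old copies of those extents
  btrfs subvolume delete /mnt/pool/snapshots/2018-10-20

  # The Data 'used' figure should drop once the deleted snapshot is cleaned up
  btrfs filesystem df /mnt/pool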
On 2018-10-27 01:42 PM, Marc MERLIN wrote:
>
> I've been using btrfs for a long time now but I've never had a
> filesystem where I had 15GB apparently unusable (7%) after a balance.
>
The space isn't unusable. It's just allocated. (It's used in the sense
that it's reserved for data chunks.)
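As a hedged sketch (mount point hypothetical), allocated-but-mostly-empty data
chunks can usually be repacked with a filtered balance, which returns the freed
chunks to the unallocated pool:

  # Rewrite only data chunks that are at most 50% full
  btrfs balance start -dusage=50 /mnt/pool

  # Compare 'Device allocated' vs 'Used' before and after
  btrfs filesystem usage /mnt/pool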
On 2018-10-06 07:23 PM, evan d wrote:
> I have two hard drives that were never partitioned, but set up as two
> independent BTRFS filesystems. Both drives were used in the same
> machine running Arch Linux and the drives contain(ed) largely static
> data.
>
> I decommissioned the machine they
On 2018-09-20 05:35 PM, Adrian Bastholm wrote:
> Thanks a lot for the detailed explanation.
> About "stable hardware/no lying hardware". I'm not running any raid
> hardware, was planning on just software raid. three drives glued
> together with "mkfs.btrfs -d raid5 /dev/sdb /dev/sdc /dev/sdd".
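As a small illustrative sketch (device names taken from the quoted command, the
mount point /mnt/array is hypothetical): -d only sets the data profile, so it
is worth checking afterwards which profile the metadata actually ended up with:

  mkfs.btrfs -d raid5 /dev/sdb /dev/sdc /dev/sdd
  mount /dev/sdb /mnt/array

  # Shows the Data and Metadata profiles actually in use
  btrfs filesystem df /mnt/array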
On 2018-09-19 04:43 AM, Tomasz Chmielewski wrote:
> I have a mysql slave which writes to a RAID-1 btrfs filesystem (with
> 4.17.14 kernel) on 3 x ~1.9 TB SSD disks; filesystem is around 40% full.
>
> The slave receives around 0.5-1 MB/s of data from the master over the
> network, which is then
On 2018-09-06 11:32 PM, Duncan wrote:
> Without the mentioned patches, the only way (other than reboot) is to
> remove and reinsert the btrfs kernel module (assuming it's a module, not
> built-in), thus forcing it to forget state.
>
> Of course if other critical mounted filesystems (such as
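For reference, a minimal sketch of that module round-trip (mount point
hypothetical); it only works once every btrfs filesystem is unmounted:

  # Unmount all btrfs filesystems first
  umount /mnt/pool

  # Unload and reload the module so the kernel forgets the stale device state
  modprobe -r btrfs
  modprobe btrfs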
I'm trying to use a BTRFS filesystem on a removable drive.
When the drive was first added to the system, it was /dev/sdb
Files were added and device unmounted without error.
But when I re-attach the drive, it becomes /dev/sdg (kernel is fussy
about re-using /dev/sdb).
btrfs fi show output:
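One illustrative workaround, since the kernel name keeps moving (the UUID and
mount point below are placeholders): address the filesystem by UUID instead of
/dev/sdX:

  # Find the filesystem UUID of the re-attached drive
  blkid /dev/sdg

  # Mount by UUID so the changing /dev/sdX name no longer matters
  mount UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/removable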
On 2018-08-29 08:00 AM, Jorge Bastos wrote:
>
> Look for example at snapshots from July 21st and 22nd, total used
> space went from 199 to 277 GiB; this is mostly from newly added files, as
> I confirmed from browsing those snapshots, there were no changes on
> the 23rd, and a lot of files were
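A hedged way to see how much of that growth is exclusive to each snapshot
(snapshot paths hypothetical) is the per-subvolume du summary:

  # 'Exclusive' is space referenced only by that snapshot
  btrfs filesystem du -s /mnt/pool/snapshots/*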
On 2018-08-02 03:07 AM, Qu Wenruo wrote:
> For data, since we have cow (along with csum), it should be no problem
> to recover.
>
> And since datacow is used, transaction on each device should be atomic,
> thus we should be able to handle a one-time device out-of-sync case.
> (For multiple
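For the one-time out-of-sync case, a hedged sketch of the usual resync step
(mount point hypothetical) is a scrub, which verifies checksums on every copy
and rewrites a bad copy from the good mirror:

  # -B waits for completion, -d prints statistics per device
  btrfs scrub start -Bd /mnt/pool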
On 2018-07-31 11:45 PM, MegaBrutal wrote:
> I know that with nodatacow, I take away most of the benefits of BTRFS
> (those are actually hurting database performance – the exact CoW
> nature that is elsewhere a blessing, with databases it's a drawback).
> But are there any advantages of still
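For completeness, the usual recipe for that setup, as a hedged sketch (the
directory path is just an example): the +C (nodatacow) attribute is inherited
from the directory, so it is set on an empty directory before the database
files are created:

  # New files created under this directory inherit nodatacow
  mkdir -p /var/lib/mysql
  chattr +C /var/lib/mysql

  # Verify the attribute
  lsattr -d /var/lib/mysql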
> Acceptable, but that does not really apply to software-based RAID1.
>
Which completely disregards the minor detail that all the software
RAIDs I know of can handle exactly this kind of situation without
losing or corrupting a single byte of data (errors on the remaining
hard drive notwithstanding).
On 2018-06-28 10:36 AM, Adam Borowski wrote:
>
> Uhm, that'd be a nasty regression for the regular (no-nodatacow) case.
> The vast majority of data is fine, and extents that have been written to
> while a device is missing will be either placed elsewhere (if the filesystem
> knew it was
On 2018-06-28 10:17 AM, Chris Murphy wrote:
> 2. The new data goes in a single chunk; even if the user does a manual
> balance (resync) their data isn't replicated. They must know to do a
> -dconvert balance to replicate the new data. Again this is a net worse
> behavior than mdadm out of the
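For reference, a hedged sketch of that convert balance (mount point
hypothetical); the 'soft' filter skips chunks that already have the raid1
profile, so only the chunks written while degraded get rewritten:

  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/pool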
On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote:
>
> Please get clear on what other RAID1 implementations are doing.
A drive failure, where the drive is still there when the computer reboots, is a
situation that *any* raid 1 (or for that matter, raid 5, raid 6, anything but
raid 0) will
On 2018-06-27 09:58 PM, Qu Wenruo wrote:
>
>
> On 2018-06-28 09:42, Remi Gauvin wrote:
>> There seems to be a major design flaw with BTRFS that needs to be better
>> documented, to avoid massive data loss.
>>
>> Tested with Raid 1 on Ubuntu Kernel 4.1
There seems to be a major design flaw with BTRFS that needs to be better
documented, to avoid massive data loss.
Tested with Raid 1 on Ubuntu Kernel 4.15
The use case being tested was a VirtualBox VDI file created with the
NODATACOW attribute (as is often suggested, due to the painful
performance