Thanks Lionel for your explanations!
I just noticed that a second device with the same setup (which was still
working only a few hours ago) failed as well. So two systems which were
running with a non-raid1 and non-btrfs setup for weeks or months before,
and which were updated to the btrfs
ring an operating system and application data but for
storing pictures and videos written on a VFAT...
>> Recently I
>> switched to a btrfs-raid1 configuration, hoping to make my system more
>> resistant against power failures and flash-memory specific problems.
Note that there's
On 25.02.2016 at 18:34, Hegner Robert wrote:
Hi all!
I'm working on an embedded system (ARM) running from an SD card. Recently I
switched to a btrfs-raid1 configuration, hoping to make my system more
resistant against power failures and flash-memory specific problems.
However today one of my
Hi all!
I'm working on an embedded system (ARM) running from an SD card. Recently I
switched to a btrfs-raid1 configuration, hoping to make my system more
resistant against power failures and flash-memory specific problems.
However today one of my devices wouldn't mount my root filesystem as rw
On Thu, Jan 28, 2016 at 01:47:36PM -0500, Sean Greenslade wrote:
> OK, I just misunderstood how that syntax worked. All seems good now.
> I'll try to play around with some dummy configurations this weekend to
> see if I can reproduce the post-replace mount bug.
So I finally got some time to play
On 7 February 2016 at 20:27, Lionel Bouton
<lionel-subscript...@bouton.name> wrote:
> Hi,
>
> On 07/02/2016 at 14:15, Andreas Hild wrote:
>> Dear All,
>>
>> The file system on a RAID1 Debian server seems corrupted in a major
>> way, with 99% of t
On 7 February 2016 at 09:27, Qu Wenruo wrote:
>
>
> On 02/07/2016 10:23 PM, Andreas Hild wrote:
>>
>> On 7 February 2016 at 20:56, Qu Wenruo wrote:
>>>
>>>
>>> You are wondering why data is still 168G, but that's the allocated data
>>> chunk size.
Hi,
On 07/02/2016 at 14:15, Andreas Hild wrote:
> Dear All,
>
> The file system on a RAID1 Debian server seems corrupted in a major
> way, with 99% of the files not found. This was the result of a
> precarious shutdown after a crash that was preceded by an accidental
> miscon
Dear All,
The file system on a RAID1 Debian server seems corrupted in a major
way, with 99% of the files not found. This was the result of a
precarious shutdown after a crash that was preceded by an accidental
misconfiguration in /etc/fstab; it pointed "/" and "/tmp" to o
On 7 February 2016 at 20:56, Qu Wenruo wrote:
>
> You are wondering why data is still 168G, but that's the allocated data
> chunk size.
>
> It means 168G space is allocated to store data, but only 42M is used.
> Matches with your vanilla df output.
>
> So it doesn't mean
On 02/07/2016 09:15 PM, Andreas Hild wrote:
Dear All,
The file system on a RAID1 Debian server seems corrupted in a major
way, with 99% of the files not found. This was the result of a
precarious shutdown after a crash that was preceded by an accidental
misconfiguration in /etc/fstab
On 02/07/2016 10:23 PM, Andreas Hild wrote:
On 7 February 2016 at 20:56, Qu Wenruo wrote:
You are wondering why data is still 168G, but that's the allocated data
chunk size.
It means 168G space is allocated to store data, but only 42M is used.
Matches with your
On 30 January 2016 at 15:50, Patrik Lundquist
wrote:
> On 29 January 2016 at 13:14, Austin S. Hemmelgarn
> wrote:
>>
>> Last I checked, Seagate's 'NAS' drives and whatever they've re-branded their
>> other enterprise line as, as well as WD's
On Tue, Feb 02, 2016 at 03:16:40PM +0100, Marco Lorenzo Crociani wrote:
> Hi,
> df on a btrfs RAID1 fs on 3 disks (they are hardware RAID0, two-disk
> arrays, but that's not important, right?) shows wrong avail space when
> empty
>
> # uname -a
> Linux kvm4.prisma 4.4.0-2.el7.elr
Hi,
df on a btrfs RAID1 fs on 3 disks (they are hardware RAID0, two-disk
arrays, but that's not important, right?) shows wrong avail space when empty
# uname -a
Linux kvm4.prisma 4.4.0-2.el7.elrepo.x86_64 #1 SMP Tue Jan 26 13:06:01
EST 2016 x86_64 x86_64 x86_64 GNU/Linux
# btrfs --version
btrfs
On 2016-01-29 17:06, Henk Slager wrote:
On Fri, Jan 29, 2016 at 9:40 PM, Austin S. Hemmelgarn
wrote:
On 2016-01-29 15:27, Henk Slager wrote:
On Fri, Jan 29, 2016 at 1:14 PM, Austin S. Hemmelgarn
wrote:
On 2016-01-28 18:01, Chris Murphy wrote:
On Sat, Jan 30, 2016 at 7:50 AM, Patrik Lundquist
wrote:
> On 29 January 2016 at 13:14, Austin S. Hemmelgarn
> wrote:
>>
>> Last I checked, Seagate's 'NAS' drives and whatever they've re-branded their
>> other enterprise line as, as well as
>>> -Psalle.
>>>
>>>
>>> On 04/01/16 18:00, Alphazo wrote:
>>>>
>>>> Hello,
>>>>
>>>> My picture library today lies on an external hard drive that I sync on
>>>> a regular basis with a couple
Thanks Chris for the warning. I agree that mounting both drives
separately in degraded r/w will lead to very funky results when trying
to scrub them when put together.
On Mon, Jan 4, 2016 at 6:41 PM, Chris Murphy wrote:
> On Mon, Jan 4, 2016 at 10:00 AM, Alphazo
On Tue, Jan 05, 2016 at 05:24:31PM +0100, Psalle wrote:
> Hello all and excuse me if this is a silly question. I looked around in the
> wiki and list archives but couldn't find any in-depth discussion about this:
>
> I just realized that, since raid1 in btrfs is special (meaning only
I've got a two-disk RAID1 btrfs volume, which crashed for no apparent
reason and was corrupt on next boot. Relevant command runs and
outputs:
[root@archiso ~]# mount -osubvol=.,ro,recovery /dev/mapper/rootvol_1 mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/rootvol_1
on the following unusual use case that I have
tested:
- Create a btrfs with the two drives with RAID1
- When at home I can work with the two drives connected, so I can enjoy
the self-healing feature if a bit goes mad and only back up perfect
copies to my backup servers.
- When not at home I only bring one
Hello all and excuse me if this is a silly question. I looked around in
the wiki and list archives but couldn't find any in-depth discussion
about this:
I just realized that, since raid1 in btrfs is special (meaning only two
copies in different devices), the effect in terms of resilience
tested:
- Create a btrfs with the two drives with RAID1
- When at home I can work with the two drives connected, so I can enjoy
the self-healing feature if a bit goes mad and only back up perfect
copies to my backup servers.
- When not at home I only bring one external drive and manually mount
it in degraded
On Mon, Jan 4, 2016 at 10:00 AM, Alphazo wrote:
> I have tested the above use case with a couple of USB flash drives and
> even used btrfs over dm-crypt partitions and it seemed to work fine
> but I wanted to get some advice from the community on whether this is really
> a bad
Latest update on this thread. btrfs check (4.3.1) reports no problems.
Volume mounts with kernel 4.2.8 with no errors. And I just did a scrub
and there were no errors, not even any fix up messages. And dev stats
are all zero.
So... it appears it was a minor enough problem, and still consistent
happens then? If yes, it would mean not to
blindly trust the RAID without doing the homework.
The one case where btrfs could get things wrong that I know of is as I
discovered in my initial pre-btrfs-raid1-deployment testing...
I've had exactly one case where I got _really_ unlucky and had a bunch
If yes, it would mean not to
> blindly trust the RAID without doing the homework.
The one case where btrfs could get things wrong that I know of is as I
discovered in my initial pre-btrfs-raid1-deployment testing...
1) Create a two-device btrfs raid1 (data and metadata) and ensure some
dat
per output?
> >
> > Nope. I will attach to this email below for both devices.
> >
> >> If that's the case, superblock on device 2 may be older than
> >> superblock on device 1.
> >
> > Yes, looks like devid 1 transid 4924, and devid 2 transid 4923. And
for a btrfs-show-super output?
> >>>
> >>> Nope. I will attach to this email below for both devices.
> >>>
> >>>> If that's the case, superblock on device 2 may be older than
> >>>> superblock on device 1.
> >>>
> >>
Now the whole problem is explained.
You should be good to mount it rw, as RAID1 will handle all the
problems.
How should RAID1 handle this if both copies have valid checksums
(as I would assume here unless shown otherwise)? This is an even
bigger problem with block-based RAID1, which does not hav
Latest update.
4.4.0-0.rc6.git0.1.fc24.x86_64
btrfs-progs v4.3.1
Mounted the volume normally with both devices available, no mount
options, so it is a rw mount. And it mounts with only the normal
kernel messages:
[ 9458.290778] BTRFS info (device sdc): disk space caching is enabled
[
devid 1 transid 4924, and devid 2 transid 4923. And
it's devid 2 that had device reset and write errors when it vanished
and reappeared as a different block device.
Now the whole problem is explained.
You should be good to mount it rw, as RAID1 will handle all the
problems.
How should RAID1 handle
On Mon, Dec 14, 2015 at 10:59 AM, Chris Murphy wrote:
> On Mon, Dec 14, 2015 at 1:04 AM, Qu Wenruo wrote:
>>
>>
>> Chris Murphy wrote on 2015/12/14 00:24 -0700:
>>> What is a full disk dump? I can try to see if it's possible.
>>
>>
>> Just a dd
for a btrfs-show-super output?
>>
>>
>> Nope. I will attach to this email below for both devices.
>>
>>> If that's the case, superblock on device 2 may be older than superblock on
>>> device 1.
>>
>>
>> Yes, looks like devid 1 transid 4924, a
reappeared as a different block device.
Now the whole problem is explained.
You should be good to mount it rw, as RAID1 will handle all the problems.
Then you can either use scrub on dev2 to fix all the generation mismatches,
or (my preference) wipe dev2, mount dev1 as degraded, and replace
the missin
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 715141414912)
		chunk length 33554432 owner 2 stripe_len 65536
		type SYSTEM|RAID1 num_stripes 2
			stripe 0 devid 2 offset 35
mount just OK).
>
> I'm brave enough. I'll give it a try tomorrow unless there's another
> request for more info before then.
Given the off-by-one generations and my own btrfs raid1 experience, I'm
guessing the likely result is a good mount and either no problems or a
good initial mount but
On Mon, Dec 14, 2015 at 1:04 AM, Qu Wenruo wrote:
>
>
> Chris Murphy wrote on 2015/12/14 00:24 -0700:
>> What is a full disk dump? I can try to see if it's possible.
>
>
> Just a dd dump.
OK, yeah. That's 750GB per drive.
> It won't be an easy
> thing to find a place to
Chris Murphy wrote on 2015/12/14 00:24 -0700:
Thanks for the reply.
On Sun, Dec 13, 2015 at 10:48 PM, Qu Wenruo wrote:
Chris Murphy wrote on 2015/12/13 21:16 -0700:
btrfs check with devid 1 and 2 present produces thousands of scary
messages, e.g.
checksum verify
Chris Murphy wrote on 2015/12/13 21:16 -0700:
Part 1= What to do about it? This post.
Part 2 = How I got here? I'm still working on the write up, so it's
not yet posted.
Summary:
2 dev (spinning rust) raid1 for data and metadata.
kernel 4.2.6, btrfs-progs 4.2.2
btrfs check with devid 1
Part 1= What to do about it? This post.
Part 2 = How I got here? I'm still working on the write up, so it's
not yet posted.
Summary:
2 dev (spinning rust) raid1 for data and metadata.
kernel 4.2.6, btrfs-progs 4.2.2
btrfs check with devid 1 and 2 present produces thousands of scary
messages
(sdb are a raid0, btrfs receive destination, they don't come up in
this dmesg at all)
sdd are a raid1, btrfs send source
All four devices are in USB enclosures on a new Intel NUC that's had
~32 hours of burn-in with memtest. Both file systems had received many
scrubs before with zero errors all time
Thanks for the reply.
On Sun, Dec 13, 2015 at 10:48 PM, Qu Wenruo wrote:
>
>
> Chris Murphy wrote on 2015/12/13 21:16 -0700:
>> btrfs check with devid 1 and 2 present produces thousands of scary
>> messages, e.g.
>> checksum verify failed on 714189357056 found E4E3BDB6
On Wed, Nov 25, 2015 at 12:36:32PM +0100, Mario wrote:
>
> Hi,
>
> I pushed a subvolume using send/receive to an 8 TB disk, added
> two 4 TB disks and started a balance with conversion to RAID1.
>
> Afterwards, I got the following:
>
> Total devices 3 FS bytes
Hi,
I pushed a subvolume using send/receive to an 8 TB disk, added
two 4 TB disks and started a balance with conversion to RAID1.
Afterwards, I got the following:
Total devices 3 FS bytes used 5.40TiB
devid 1 size 7.28TiB used 4.54TiB path /dev/mapper/yellow4
devid 2 size 3.64TiB
(fragment of the mkfs.btrfs profiles table; the visible rows read
"2 / 1 device ... 1/1 (see note)" and "RAID0 ... 1 to N ... 2/any")
On Fri, Nov 20, 2015 at 11:05:44AM +, Duncan wrote:
> As in the title...
>
> The btrfs-progs v4.3.1 mkfs.btrfs manpage has a quite nice profiles table
> listing the various profiles, single/dup/raid0/raid10/raid5/raid6.
>
> It's missing raid1. =:^(
Oh, that's not int
Hi,
I'm sorry if this email is sent twice. Gmail says it failed to deliver
because it contains HTML, so I will try with plain text:
I have a raid1 array that contains 3 devices like this:
Label: 'pool' uuid: 7e66ba23-14c7-47b5-90fc-481ecc769138
Total devices 3 FS bytes used 888.04GiB
devid
As in the title...
The btrfs-progs v4.3.1 mkfs.btrfs manpage has a quite nice profiles table
listing the various profiles, single/dup/raid0/raid10/raid5/raid6.
It's missing raid1. =:^(
Also, it calls raid5/6 "copies" rather than "parity". Perhaps add
another colum
On Fri, 2015-11-20 at 11:05 +, Duncan wrote:
> It's missing raid1. =:^(
speaking of which...
Wouldn't the developers consider renaming raid1 to something more
correct? E.g. replicas2 or dup or whatever.
RAID1 has always had the meaning of mirrored devices, and the closest to
this in bt
Christoph Anton Mitterer posted on Fri, 20 Nov 2015 20:29:34 +0100 as
excerpted:
> On Fri, 2015-11-20 at 11:05 +, Duncan wrote:
>> It's missing raid1. =:^(
> speaking of which...
>
> Wouldn't the developers consider renaming raid1 to something more
> correct?
Hi,
I'm running Kernel 4.3 and Btrfs-tools 4.3 on Debian Jessie. I compiled
the tools and kernel myself.
Recently I added a new disk to my btrfs volume and wanted to proceed to
convert from single to raid1.
Unfortunately the new disk seems to be faulty and started throwing a lot
of errors
> I'm running Kernel 4.3 and Btrfs-tools 4.3 on Debian Jessie. I compiled
> the tools and kernel myself.
>
> Recently I added a new disk to my btrfs volume and wanted to proceed to
> convert from single to raid1.
> Unfortunately the new disk seems to be faulty and
Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
> If there is one subvolume that contains all other (read only) snapshots
> and there is insufficient storage to copy them all separately:
> Is there an elegant way to preserve those when moving the data across
> disks?
AFAIK, no
Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
> Is e.g. "balance" also influenced by the userspace tools, or does
> the kernel do the actual work?
btrfs balance is done "online", that is, on the (writable-)mounted
filesystem, and the kernel does the real work. It's the tools
On Fri, Oct 30, 2015 at 10:58:47AM +, Duncan wrote:
> Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
>
> > If there is one subvolume that contains all other (read only) snapshots
> > and there is insufficient storage to copy them all separately:
> > Is there an elegant way
On 2015-10-30 06:58, Duncan wrote:
Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
If there is one subvolume that contains all other (read only) snapshots
and there is insufficient storage to copy them all separately:
Is there an elegant way to preserve those when moving the
TL;DR: thanks but recovery still preferred over recreation.
Hello Duncan and thanks for your reply!
On 10/26/2015 09:31 PM, Duncan wrote:
FWIW... Older btrfs userspace such as your v3.17 is "OK" for normal
runtime use, assuming you don't need any newer features, as in normal
runtime, it's the
Lukas Pirl posted on Mon, 26 Oct 2015 19:19:50 +1300 as excerpted:
> TL;DR: RAID1 does not recover, I guess the interesting part in the stack
> trace is: [elided, I'm not a dev so it's little help to me]
>
> I'd appreciate some help for repairing a corrupted RAID1.
>
> Setup:
TL;DR: RAID1 does not recover, I guess the interesting part in the stack
trace is:
Call Trace:
[] __del_reloc_root+0x30/0x100 [btrfs]
[] free_reloc_roots+0x25/0x40 [btrfs]
[] merge_reloc_roots+0x18e/0x240 [btrfs]
[] btrfs_recover_relocation+0x374/0x420 [btrfs]
[] open_ctree+0x1b7d
On Wed, Oct 21, 2015 at 2:07 PM, Austin S Hemmelgarn
wrote:
> And I realize of course right after sending this that my other reply didn't
> get through because GMail refuses to send mail in plain text, no matter how
> hard I beat it over the head...
In the web browser
On Wed, Oct 21, 2015 at 11:54 AM, Dmitry Katsubo
wrote:
> On 2015-10-21 00:40, Henk Slager wrote:
>> I had a similar issue some time ago, around the time kernel 4.1.6 was
>> just there.
>> In case you don't want to wait for new disk or decide to just run the
>>
On 2015-10-21 12:01, Chris Murphy wrote:
On Wed, Oct 21, 2015 at 2:07 PM, Austin S Hemmelgarn
wrote:
And I realize of course right after sending this that my other reply didn't
get through because GMail refuses to send mail in plain text, no matter how
hard I beat it over
before I saw the need for a kernel patch and decided to wait.
>>>
>>> For anyone following this later, I needed to use the following to get
>>> the missing device ID:
>>>
>>> btrfs device usage
>>>
>>> On Tue, Oct 20, 201
On 2015-10-21 00:40, Henk Slager wrote:
> I had a similar issue some time ago, around the time kernel 4.1.6 was
> just there.
> In case you don't want to wait for new disk or decide to just run the
> filesystem with 1 disk less or maybe later on replace 1 of the still
> healthy disks with a
On 2015-10-20 15:59, Austin S Hemmelgarn wrote:
On 2015-10-20 15:20, Duncan wrote:
Yes, there's some small but not infinitesimal chance the checksum may be
wrong, but if there's two copies of the data and the checksum on one is
wrong while the checksum on the other verifies... yes, there's
On 2015-10-21 07:51, Austin S Hemmelgarn wrote:
On 2015-10-20 15:59, Austin S Hemmelgarn wrote:
On 2015-10-20 15:20, Duncan wrote:
Yes, there's some small but not infinitesimal chance the checksum may be
wrong, but if there's two copies of the data and the checksum on one is
wrong while the
On 2015-10-20 09:15, Russell Coker wrote:
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote:
https://www.gnu.org/software/ddrescue/
At this stage I would use ddrescue or something similar to copy data from
the failing disk to a fresh disk, then do a BTRFS scrub to regenerate
the
Hi all,
I have a collection of three (was 4) 1-2TB devices with data and
metadata in a RAID1 mirror. Last night I was struck by the Click of
Death on an old Samsung drive.
I removed the device from the system, rebooted and mounted the volume
with `-o degraded` and the file system seems fine
really tells you is that at some point, the data got corrupted, it could
> be that the copy on the disk is bad, but it could also be caused by bad
> RAM, a bad storage controller, a loose cable, or even a bad power
> supply.
There's a significant difference between btrfs in dup/raid1/raid10 mo
suddenly started posting just fine.
> But, with the hard drives back in it, I'm getting the same hard lockup
> errors.
>
> An Arch ISO DVD runs stress testing perfectly.
>
> Btrfs-specific -
>
> The current problem I'm having must be a bad hard drive or corrupted
>
Hi Kyle,
On 10/20/2015 07:24 PM, Kyle Manna wrote:
I removed the device from the system, rebooted and mounted the volume
with `-o degraded`, and the file system seems fine and usable. I'm
waiting on a replacement drive, but want to remove the old drive and
re-balance in the meantime.
This
Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted:
> Hi all,
>
> I have a collection of three (was 4) 1-2TB devices with data and
> metadata in a RAID1 mirror. Last night I was struck by the Click of
> Death on an old Samsung drive.
>
> I removed the d
On 2015-10-20 19:24, Kyle Manna wrote:
> Hi all,
[...]
> How do I remove the missing device? I tried the `btrfs device delete
> missing /mnt` but was greeted with "ERROR: missing is not a block
> device". A quick look at that btrfs-progs git repo shows that
> `stat("missing")` is called, which
1. Create a basic minimalistic Linux system in a VM (in my case, I just
used a stage3 tarball for Gentoo, with a paravirtualized Xen domain)
using BTRFS as the root filesystem with a raid1 setup. Make sure and
verify that it actually boots.
2. Shutdown the VM, use btrfs-progs on the ho
be that the copy on the disk is bad, but it could also be caused by bad
RAM, a bad storage controller, a loose cable, or even a bad power
supply.
There's a significant difference between btrfs in dup/raid1/raid10 modes
anyway and some of the others you mentioned, however. Btrfs in these
modes actually
On 10/20/2015 15:59 -0400, Austin S Hemmelgarn wrote:
>> .
>> With a 32-bit checksum and a 4k block (the math is easier with
>> smaller numbers), that's 4128 bits, which means that a random
>> single bit error will have an approximately 0.24% chance of
>> occurring
system in a VM (in my case, I just
> used a stage3 tarball for Gentoo, with a paravirtualized Xen domain)
> using BTRFS as the root filesystem with a raid1 setup. Make sure and
> verify that it actually boots.
> 2. Shutdown the VM, use btrfs-progs on the host to find the physical
> lo
the missing device ID:
btrfs device usage
On Tue, Oct 20, 2015 at 1:58 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted:
>
>> Hi all,
>>
>> I have a collection of three (was 4) 1-2TB devices with data and
>&g
>>
>> On Tue, Oct 20, 2015 at 1:58 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>>> Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted:
>>>
>>>> Hi all,
>>>>
>>>> I have a collection of three (was 4) 1-2TB devi
the missing device ID:
>
> btrfs device usage
>
> On Tue, Oct 20, 2015 at 1:58 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>> Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted:
>>
>>> Hi all,
>>>
>>> I have a collection
On 2015-10-20 00:45, Russell Coker wrote:
On Tue, 20 Oct 2015 03:16:15 PM james harvey wrote:
sda appears to be going bad, with my low threshold of "going bad", and
will be replaced ASAP. It just developed 16 reallocated sectors, and
has 40 current pending sectors.
I'm currently running a
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote:
> > https://www.gnu.org/software/ddrescue/
> >
> > At this stage I would use ddrescue or something similar to copy data from
> > the failing disk to a fresh disk, then do a BTRFS scrub to regenerate
> > the missing data.
> >
> > I
testing perfectly.
Btrfs-specific -
The current problem I'm having must be a bad hard drive or corrupted data.
3-drive btrfs RAID1 (data and metadata). sda has 1GB of the 3GB of
data, and 1GB of the 1GB of metadata.
sda appears to be going bad, with my low threshold of "going bad", and
w
The create RAID1 example illustrates my question, at:
https://btrfs.wiki.kernel.org/index.php/UseCases#How_do_I_create_a_RAID1_mirror_in_Btrfs.3F
It shows:
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/sdb1
will result in:
btrfs fi df /mount
Data, RAID1: total=1.00GB, used=128.00KB
Data: total
james harvey wrote on 2015/10/19 23:49 -0400:
The create RAID1 example illustrates my question, at:
https://btrfs.wiki.kernel.org/index.php/UseCases#How_do_I_create_a_RAID1_mirror_in_Btrfs.3F
It shows:
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/sdb1
will result in:
btrfs fi df /mount
On Tue, 20 Oct 2015 03:16:15 PM james harvey wrote:
> sda appears to be going bad, with my low threshold of "going bad", and
> will be replaced ASAP. It just developed 16 reallocated sectors, and
> has 40 current pending sectors.
>
> I'm currently running a "btrfs scrub start -B -d -r /terra",
"btrfs replace" because there has not been any reply to my
inplace correction question. But I expect that clarification if
possible/how to resync RAID1 after one drive temporal disappear is
really important to many of BTRFS users.
As of right now, there is no way that I know of to safe
Hello everybody,
On Monday 05 of October 2015 22:26:46 Pavel Pisa wrote:
> Hello everybody,
...
> BTRFS has recognized the appearance of its partition (even though it changed
> from sdb5 to sde5 when the disk was "hotplugged" again).
> But it seems that the RAID1 components are not in sync
On 2015-10-08 04:28, Pavel Pisa wrote:
Hello everybody,
On Monday 05 of October 2015 22:26:46 Pavel Pisa wrote:
Hello everybody,
...
BTRFS has recognized the appearance of its partition (even though it changed
from sdb5 to sde5 when the disk was "hotplugged" again).
But it seems that RAID1
system as soon as possible.
I did a backup to an external drive before attempting to reconnect the
failed drive.
I did a btrfs replace of the temporarily failed HDD onto a newly bought HDD.
I plan to replace the old drive as well (the one which did not experience
problems but reports some reallocated sectors). So I ha
> > in-place correction question. But I expect that clarification of whether/how
> > > to resync RAID1 after one drive temporarily disappears is
> > > really important to many BTRFS users.
> >
> > As of right now, there is no way that I know of to safely re-s
I go to use "btrfs replace" because there has not been any reply to my
> > > > in-place correction question. But I expect that clarification of whether/how
> > > > to resync RAID1 after one drive temporarily disappears is
> > > > really important to many of
On Thu, Oct 08, 2015 at 07:47:33AM -0400, Austin S Hemmelgarn wrote:
> On 2015-10-08 04:28, Pavel Pisa wrote:
> >I go to use "btrfs replace" because there has not been any reply to my
> >in-place correction
> >question. But I expect that clarification if possible/ho
On Mon, Oct 05, 2015 at 08:30:17AM -0400, Austin S Hemmelgarn wrote:
> I've been having issues recently with a relatively simple setup
> using a two-device BTRFS raid1 on top of two two-device md RAID0s,
> and every time I've rebooted since starting trying to use this
> particula
I've been having issues recently with a relatively simple setup using a
two-device BTRFS raid1 on top of two two-device md RAID0s, and every
time I've rebooted since I started trying to use this particular
filesystem, I've found it unable to mount and had to recreate it from
scratch
On 2015-09-22 14:35, Chris Murphy wrote:
On Tue, Sep 22, 2015 at 7:21 AM, Austin S Hemmelgarn
wrote:
It's not a bad idea, except that it changes established usage, and there are
probably some people out there who depend on the current behavior. If we do
go that way,
On 2015-09-21 16:35, Erkki Seppala wrote:
Gareth Pye writes:
People tend to be looking at BTRFS for a guarantee that data doesn't
die when hardware does. Defaults that defeat that shouldn't be used.
However, data is no more in danger at startup than it is at the
On 2015-09-22 19:32, Austin S Hemmelgarn wrote:
On 2015-09-21 16:35, Erkki Seppala wrote:
Gareth Pye writes:
People tend to be looking at BTRFS for a guarantee that data doesn't
die when hardware does. Defaults that defeat that shouldn't be used.
However, data is no