k, 16k, 32K and 64K as node size. 4K node size will conflict with 64K
>> sector size.
>>
>> [FIX]
>> - Specify sector size 4K manually
>> So at least no conflict at mkfs time.
>>
>> - Skip the test case if kernel can't mount with 4k sector size
>>
>
> [FIX]
> - Specify sector size 4K manually
> So at least no conflict at mkfs time.
>
> - Skip the test case if kernel can't mount with 4k sector size
> So once we add such support, the test can be automatically re-enabled.
>
> Signed-off-by
] Error 1
[CAUSE]
Mkfs.btrfs defaults to page size as sector size. However, this test uses
4k, 16k, 32K and 64K as node size. 4K node size will conflict with 64K
sector size.
[FIX]
- Specify sector size 4K manually
So at least no conflict at mkfs time.
- Skip the test case if kernel can't
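The sector-size fix described above can be sketched as a dry-run loop. This is only an illustration, not the original patch: `mkfs_cmd` is a hypothetical helper and the device path is a placeholder, and the script prints the mkfs.btrfs invocations instead of running them. Pinning `-s` to 4k keeps the 64K-nodesize case from conflicting with a 64K page-size default:

```shell
#!/bin/sh
# Dry-run sketch of the [FIX]: pin the sector size to 4k so every node
# size the test uses (4k..64k) is valid regardless of the page size.
# mkfs_cmd is a hypothetical helper; it prints instead of formatting.
mkfs_cmd() {
    # $1 = node size, $2 = device; print the command, do not run it
    echo "mkfs.btrfs -f -s 4k -n $1 $2"
}

for nodesize in 4k 16k 32k 64k; do
    mkfs_cmd "$nodesize" /dev/placeholder
done
```

On a 64K-page kernel the mount of a 4K-sector filesystem may still fail, which is why the fix also skips the test when the kernel lacks that support.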
[root@TYOMIX tyomix]# btrfs restore -D /dev/sda3 /dev/null
checksum verify failed on 1048576 found E4E3BDB6 wanted
checksum verify failed on 1048576 found E4E3BDB6 wanted
bad tree block 1048576, bytenr mismatch, want=1048576, have=0
ERROR: cannot read chunk root
Could not open r
Hello. I have a btrfs partition which I'd been using for several
months with no problems until last week when it suddenly stopped
mounting.
It would be a pleasant surprise if there was a relatively painless way
to restore the drive to normal functionality, or at least recover any
files which might
Ok, so I ended with btrfs restore, seems that all (or most important)
files were restored.
Now looking for another reliable filesystem which will not unrecoverably
die on a power outage.
msk
On 22 Jan 2018 at 10:14, Zatkovský Dušan wrote:
Hi.
Badblocks finished on both disks with no errors. The only messages from
kernel
during night are 6x perf: interrupt took too long (2511 > 2500),
lowering kernel.perf_event_max_sample_rate to 79500
root@nas:~# smartctl -l scterc /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-a
On Sun, Jan 21, 2018 at 4:13 PM, Chris Murphy wrote:
> On Sun, Jan 21, 2018 at 3:31 PM, msk conf wrote:
>> Hello,
>>
>> thank you for the reply.
>>
>>> What do you get for btrfs fi df /array
>>
>>
>> Can't do that because the filesystem is not mountable. I will get stats for '/'
>> filesystem instead
On Sun, Jan 21, 2018 at 8:53 AM, msk conf wrote:
> Hello there,
>
> I would like to ask you for help with (corrupted) btrfs on my nas.
>
> After a power outage I can't mount it at all:
> UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /array btrfs
> noat
Hello there,
I would like to ask you for help with (corrupted) btrfs on my nas.
After a power outage I can't mount it at all:
UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /array btrfs
noatime,compress=lzo 0 2
UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110"
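The fstab entry quoted above arrives line-wrapped; rejoined, it would be a single record like the following (UUID, mount point, and options exactly as given in the report). A minor aside: the fsck pass field of 2 is harmless for btrfs, since fsck.btrfs is a no-op, and many guides use 0 instead.

```
# /etc/fstab — the reported entry, rejoined on one line
UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110"  /array  btrfs  noatime,compress=lzo  0  2
```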
>> But it could simply be that you have forgotten to refresh the
>> 'initramfs' with 'mkinitrd' after modifying the '/etc/fstab'.
> I finally managed it. I'm pretty sure I had changed
> /boot/grub/menu.lst, but somehow the changes got lost/weren't
> saved?
So the next thing to check would indeed ha
> -Original Message-
> From: Andrei Borzenkov [mailto:arvidj...@gmail.com]
> Sent: Thursday, October 26, 2017 6:51 PM
> To: Lentes, Bernd ; Btrfs ML
>
> Subject: Re: SLES 11 SP4: can't mount btrfs
>
> root device information is stored in initrd, you n
On 26.10.2017 15:18, Lentes, Bernd wrote:
>
>> -Original Message-
>> From: linux-btrfs-ow...@vger.kernel.org
>> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Lentes, Bernd
>> Sent: Tuesday, October 24, 2017 6:44 PM
>> To: Btrfs ML
>> Subj
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org
[mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Peter Grandi
> Sent: Thursday, October 26, 2017 2:55 PM
> To: Linux fs Btrfs
> Subject: RE: SLES 11 SP4: can't mount btrfs
>
> > I formatt
> I formatted the / partition with Btrfs again and could restore
> the files from a backup. Everything seems to be there, I can
> mount the Btrfs manually. [ ... ] But SLES finds from where I
> don't know a UUID (see screenshot). This UUID is commented out
> in fstab and replaced by /dev/vg1/lv_ro
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org
> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Lentes, Bernd
> Sent: Tuesday, October 24, 2017 6:44 PM
> To: Btrfs ML
> Subject: RE: SLES 11 SP4: can't mount btrfs
>
>
> >
>
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org
> [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Austin S.
> Hemmelgarn
> Sent: Tuesday, October 24, 2017 4:05 PM
> To: Lentes, Bernd ; Btrfs ML
>
> Subject: Re: SLES 11 SP4: can't
On 2017-10-24 10:12, Andrei Borzenkov wrote:
On Tue, Oct 24, 2017 at 2:53 PM, Austin S. Hemmelgarn
wrote:
SLES (and OpenSUSE in general) does do something special though, they use
subvolumes and qgroups to replicate multiple independent partitions (which
is a serious pain in the arse), and the
On Tue, Oct 24, 2017 at 2:53 PM, Austin S. Hemmelgarn
wrote:
>
> SLES (and OpenSUSE in general) does do something special though, they use
> subvolumes and qgroups to replicate multiple independent partitions (which
> is a serious pain in the arse), and they have snapshotting with snapper by
> def
On 2017-10-24 09:28, Lentes, Bernd wrote:
-Original Message-
From: Austin S. Hemmelgarn [mailto:ahferro...@gmail.com]
Sent: Tuesday, October 24, 2017 1:53 PM
To: Adam Borowski ; Lentes, Bernd
Cc: Btrfs ML
Subject: Re: SLES 11 SP4: can't mount btrfs
I think partimage _might_
> -Original Message-
> From: Austin S. Hemmelgarn [mailto:ahferro...@gmail.com]
> Sent: Tuesday, October 24, 2017 1:53 PM
> To: Adam Borowski ; Lentes, Bernd
>
> Cc: Btrfs ML
> Subject: Re: SLES 11 SP4: can't mount btrfs
>
> I think partimage _might_ have
On 2017-10-21 14:07, Adam Borowski wrote:
On Sat, Oct 21, 2017 at 01:46:06PM +0200, Lentes, Bernd wrote:
- On 21 Oct 2017 at 4:31, Duncan 1i5t5.dun...@cox.net wrote:
Lentes, Bernd posted on Fri, 20 Oct 2017 20:40:15 +0200 as excerpted:
Is it generally possible to restore a btrfs partiti
- On 21 Oct 2017 at 20:07, Adam Borowski kilob...@angband.pl wrote:
>> > Yes it's possible to restore a btrfs partition from tape backup, /if/ you
>> > backed up the partition itself, not just the files on top of it.
>
> Which is usually a quite bad idea: unless you shut down (or remount ro
On Sat, Oct 21, 2017 at 01:46:06PM +0200, Lentes, Bernd wrote:
> - On 21 Oct 2017 at 4:31, Duncan 1i5t5.dun...@cox.net wrote:
> > Lentes, Bernd posted on Fri, 20 Oct 2017 20:40:15 +0200 as excerpted:
> >
> >> Is it generally possible to restore a btrfs partition from a tape backup
> >> ?
> >
- On 21 Oct 2017 at 4:31, Duncan 1i5t5.dun...@cox.net wrote:
> Lentes, Bernd posted on Fri, 20 Oct 2017 20:40:15 +0200 as excerpted:
>
>> Is it generally possible to restore a btrfs partition from a tape backup
>> ?
>> I'm just starting, and I'm asking myself: what about the subvolumes
Lentes, Bernd posted on Fri, 20 Oct 2017 20:40:15 +0200 as excerpted:
> Is it generally possible to restore a btrfs partition from a tape backup
> ?
> I'm just starting, and I'm asking myself: what about the subvolumes?
> This information isn't stored in files, but in the fs? This is not on a
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Lentes, Bernd
> Sent: Friday, October 20, 2017 7:26 PM
> To: Btrfs ML
> Subject: RE: SLES 11 SP4: can't mount btrfs
>
>
> > -Ori
> -Original Message-
> From: Andrei Borzenkov [mailto:arvidj...@gmail.com]
> Sent: Friday, October 20, 2017 6:09 AM
> To: Chris Murphy ; Lentes, Bernd
>
> Cc: Btrfs ML
> Subject: Re: SLES 11 SP4: can't mount btrfs
>
> 19.10.2017 23:04, Chris Murphy
4.12 on current, and 4.9 and 4.4 on LTS, but even 4.4 is
getting pretty long in the tooth for this list, so it's fortunate that
the coming 4.14 is going to be an LTS as well.
While your knoppix had kernel 4.12.x which is list-reasonable for runtime,
where the kernel code is primary, once some
On 19.10.2017 23:04, Chris Murphy wrote:
> Btrfs
> is not just supported by SUSE, it's the default file system.
>
It is the default choice for root starting with SLES12, not in SLES11. But
yes, it should still be supported.
I am not holding my breath though. From what I can tell, transid errors are
usually f
On Thu, Oct 19, 2017 at 6:43 PM, Lentes, Bernd
wrote:
> Hi,
>
> this is the continuation of a thread I started on a SLES forum
> (https://forums.suse.com/showthread.php?10109-lv-with-btrfs-corrupt-some-tips-please),
> but I think this is the more appropriate place.
Maybe, but as this is SLES, y
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Lentes, Bernd
> Sent: Thursday, October 19, 2017 7:44 PM
> To: Btrfs ML
> Subject: SLES 11 SP4: can't mount btrfs
>
> Hi,
>
> thi
Hi,
this is the continuation of a thread I started on a SLES forum
(https://forums.suse.com/showthread.php?10109-lv-with-btrfs-corrupt-some-tips-please),
but I think this is the more appropriate place.
I have a SLES 11 SP4 with a btrfs on top of a logical volume I can't mount
anymore. The
On 2017-10-05 07:13, Asif Youssuff wrote:
On 10/04/2017 01:18 AM, Qu Wenruo wrote:
On 2017-10-04 12:00, Asif Youssuff wrote:
Thanks for the advice.
On 10/03/2017 09:38 PM, Qu Wenruo wrote:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.281915] BTRFS
On 10/04/2017 01:18 AM, Qu Wenruo wrote:
On 2017-10-04 12:00, Asif Youssuff wrote:
Thanks for the advice.
On 10/03/2017 09:38 PM, Qu Wenruo wrote:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.281915] BTRFS info (device sdb): has skinny extents
[210017.
On 2017-10-04 12:00, Asif Youssuff wrote:
Thanks for the advice.
On 10/03/2017 09:38 PM, Qu Wenruo wrote:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.281915] BTRFS info (device sdb): has skinny extents
[210017.402084] BTRFS error (device sdb): super_tota
Thanks for the advice.
On 10/03/2017 09:38 PM, Qu Wenruo wrote:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.281915] BTRFS info (device sdb): has skinny extents
[210017.402084] BTRFS error (device sdb): super_total_bytes
92017859088384
mismatch with fs_devi
On 2017-10-04 07:32, Asif Youssuff wrote:
Hi,
My power went out at my home, and I'm now having trouble mounting my array.
I'm mounting with the 'recovery' option in fstab.
When mounting, dmesg output shows:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.2819
Hi,
My power went out at my home, and I'm now having trouble mounting my array.
I'm mounting with the 'recovery' option in fstab.
When mounting, dmesg output shows:
[210017.281912] BTRFS info (device sdb): disk space caching is enabled
[210017.281915] BTRFS info (device sdb): has skinny extent
dear all,
two days ago I got an old 1TB external USB HDD drive. I scanned it
using smartctl, long scan, no errors. I created one partition
/dev/sde1, formatted it using BTRFS, and created two subvolumes. One
subvolume was intended for backups. This morning I started copying some
200GB data on t
And the btrfs-debug-tree
https://drive.google.com/open?id=0B4abov8sCq9OcHQ3eGN4WmtxZ2M
2016-09-23 22:11 GMT+02:00 Mirak M :
> Hi,
>
> This the 2T img of the partition.
>
> https://drive.google.com/file/d/0B4abov8sCq9OUWdvRkpUYUt4TU0/view?usp=drivesdk
>
>>
>> On 23 Sept 2016 at 00:05, "Chris Murphy"
Hi,
This the 2T img of the partition.
https://drive.google.com/file/d/0B4abov8sCq9OUWdvRkpUYUt4TU0/view?usp=drivesdk
>
> On 23 Sept 2016 at 00:05, "Chris Murphy" wrote:
>>
>> On Thu, Sep 22, 2016 at 3:55 PM, Mirak M wrote:
>> > Hi,
>> >
>> > Same error when mounting with this fedora iso, with
Hi,
Same error when mounting with this fedora iso, with mount, mount -o
recovery and mount -o ro,recovery
#
[ 682.954511] BTRFS info (device sdc2): disk space caching is enabled
[ 682.954518] BTRFS info (device sdc2): ha
2016-09-21 3:00 GMT+02:00 Chris Murphy :
> On Tue, Sep 20, 2016 at 5:16 PM, Mirak M wrote:
>> Hello,
>>
>> I have a failure when mounting btrfs.
>>
>>> mount -oro,recovery /dev/sda2 sda2_btrfs
>>> mount: /dev/sda2: can't read superblock
>
> What do you get for 'btrfs super-recover -v ' and 'btrfs
On Tue, Sep 20, 2016 at 5:16 PM, Mirak M wrote:
> Hello,
>
> I have a failure when mounting btrfs.
>
>> mount -oro,recovery /dev/sda2 sda2_btrfs
>> mount: /dev/sda2: can't read superblock
What do you get for 'btrfs super-recover -v ' and 'btrfs check '
For this purpose any 4.4+ version is probab
Hello,
I have a failure when mounting btrfs.
> mount -oro,recovery /dev/sda2 sda2_btrfs
> mount: /dev/sda2: can't read superblock
The kernel log is here http://pastebin.com/tHihHT92 and at the bottom
of the email
I must admit I made the mistake of running btrfs check --repair at some
point, not kn
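The replies earlier in this thread ask for 'btrfs super-recover' and 'btrfs check' output before anything destructive is attempted. That least-invasive-first order can be sketched as an echo-only script; this is a hedged illustration, not advice from the thread itself: the device path is a placeholder, the function only prints the commands, the modern btrfs-progs spelling is `btrfs rescue super-recover`, and the `recovery` mount option was later renamed `usebackuproot` in newer kernels.

```shell
#!/bin/sh
# Print (not run) a least-invasive-first recovery sequence for a btrfs
# volume whose superblock can't be read. /dev/sdX is a placeholder.
recovery_steps() {
    dev=$1
    echo "mount -o ro,recovery $dev /mnt        # read-only, try backup tree roots"
    echo "btrfs rescue super-recover -v $dev    # restore superblock from a good copy"
    echo "btrfs restore -D $dev /dev/null       # dry run: see what restore can reach"
    echo "btrfs check $dev                      # diagnose only; keep --repair as a last resort"
}

recovery_steps /dev/sdX
```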
On Tue, Oct 13, 2015 at 06:25:54PM -0500, EJ Parker wrote:
> I rebooted my server last night and discovered that my btrfs
> filesystem (3 disk raid1) would not mount anymore. After doing some
> research and getting nowhere I went to IRC and user darkling asked me
> a few questions and asked for out
I rebooted my server last night and discovered that my btrfs
filesystem (3 disk raid1) would not mount anymore. After doing some
research and getting nowhere I went to IRC and user darkling asked me
a few questions and asked for output of btrfs-debug-tree and
ultimately sent me here saying I should
I applied that patch to my 4.1.4, it mounted degraded, and now it's
balancing to the new drive.
Thanks for all the help!
On Fri, Aug 14, 2015 at 8:28 PM, Anand Jain wrote:
>
>
>> Just to be clear, I removed the drive (the original failed drive) when
>> the power was off, then powered up, and the
I thought for a second that maybe the problem is due to the "phantom"
single chunk(s) created at mkfs time. I redid the test, and did a
balance to get rid of the single chunk. I did this right after
populating volume with some data. But the problem still happens.
---
Chris Murphy
--
To unsubscribe
Just to be clear, I removed the drive (the original failed drive) when
the power was off, then powered up, and then mounted degraded. That's
not dangerous that I know of.
patch has details. pls refer.
Where is this patch, and what kernel versions can this be applied to?
https://patchwor
e failed, there were OOPSes,
>> etc.
>> - Now, although all of my data is there, I can't mount degraded,
>> because btrfs is complaining that too many devices are missing (3 are
>> there, but it sees 2 missing).
>
>
>
> This is addressed in the patch
>
>
e were OOPSes, etc.
- Now, although all of my data is there, I can't mount degraded,
because btrfs is complaining that too many devices are missing (3 are
there, but it sees 2 missing).
This is addressed in the patch
[PATCH 23/23] Btrfs: allow -o rw,degraded for single group profile
Thanks, Ana
On Fri, Aug 14, 2015 at 1:03 PM, Timothy Normand Miller
wrote:
> I'm not sure my situation is quite like the one you linked, so here's
> my bug report:
>
> https://bugzilla.kernel.org/show_bug.cgi?id=102881
I can easily reproduce with just 2 device RAID. I updated the bug.
It's best these are sep
ed degraded.
>> - I hooked up a replacement drive, did an "add" on that one, and did a
>> "delete missing".
>> - During the rebalance, the replacement drive failed, there were OOPSes, etc.
>> - Now, although all of my data is there, I can't mount
tion:
>
> - I had a drive fail, so I removed it and mounted degraded.
> - I hooked up a replacement drive, did an "add" on that one, and did a
> "delete missing".
> - During the rebalance, the replacement drive failed, there were OOPSes, etc.
> - Now, a
up a replacement drive, did an "add" on that one, and did a
"delete missing".
- During the rebalance, the replacement drive failed, there were OOPSes, etc.
- Now, although all of my data is there, I can't mount degraded,
because btrfs is complaining that too many devices are missi
My
--
Timothy Normand Miller, PhD
Assistant Professor of Computer Science, Binghamton University
http://www.cs.binghamton.edu/~millerti/
Open Graphics Project
On 2015-07-22 10:13, Gregory Farnum wrote:
On Wed, Jul 22, 2015 at 12:16 PM, Austin S Hemmelgarn
wrote:
On 2015-07-21 22:01, Qu Wenruo wrote:
Steve Dainard wrote on 2015/07/21 14:07 -0700:
I don't know if this has any bearing on the failure case, but the
filesystem that I sent an image of w
On Wed, Jul 22, 2015 at 12:16 PM, Austin S Hemmelgarn
wrote:
> On 2015-07-21 22:01, Qu Wenruo wrote:
>>
>> Steve Dainard wrote on 2015/07/21 14:07 -0700:
>>>
>>> I don't know if this has any bearing on the failure case, but the
>>> filesystem that I sent an image of was only ever created, subvol
>
On 2015-07-21 22:01, Qu Wenruo wrote:
Steve Dainard wrote on 2015/07/21 14:07 -0700:
I don't know if this has any bearing on the failure case, but the
filesystem that I sent an image of was only ever created, subvol
created, and mounted/unmounted several times. There was never any data
written t
Steve Dainard wrote on 2015/07/21 14:07 -0700:
On Tue, Jul 21, 2015 at 4:15 AM, Austin S Hemmelgarn
wrote:
On 2015-07-21 04:38, Qu Wenruo wrote:
Hi Steve,
I checked your binary dump.
Previously I was too focused on the assert error, but ignored some even
larger bug...
As for the btrfs-de
On Tue, Jul 21, 2015 at 4:15 AM, Austin S Hemmelgarn
wrote:
> On 2015-07-21 04:38, Qu Wenruo wrote:
>>
>> Hi Steve,
>>
>> I checked your binary dump.
>>
>> Previously I was too focused on the assert error, but ignored some even
>> larger bug...
>>
>> As for the btrfs-debug-tree output, subvol 257
On 2015-07-21 04:38, Qu Wenruo wrote:
Hi Steve,
I checked your binary dump.
Previously I was too focused on the assert error, but ignored some even
larger bug...
As for the btrfs-debug-tree output, subvol 257 and 5 are completely
corrupted.
Subvol 257 seems to contain a new tree root, and 5 s
ngE
I'm not sure how to interpret the output, but the exit status is 0 so
it looks like btrfs doesn't think there's an issue with the file
system.
I get the same mount error with options ro,recovery.
On Fri, Jun 12, 2015 at 12:23 AM, Qu Wenruo
wrote:
Original Messa
ot sure how to interpret the output, but the exit status is 0 so
it looks like btrfs doesn't think there's an issue with the file
system.
I get the same mount error with options ro,recovery.
On Fri, Jun 12, 2015 at 12:23 AM, Qu Wenruo
wrote:
Original Message
Subj
all the commands are recommended to be executed on the device which
>>> you
>>> get the debug info from.
>>> As it's a small and almost empty device, the commands should execute
>>> quite quickly on it.
>>>
>>> Thanks,
>>> Q
/k3R3bngE
I'm not sure how to interpret the output, but the exit status is 0 so
it looks like btrfs doesn't think there's an issue with the file
system.
I get the same mount error with options ro,recovery.
On Fri, Jun 12, 2015 at 12:23 AM, Qu Wenruo
wrote:
---- Original
rogs 4.0.1 is here: http://pastebin.com/k3R3bngE
>>
>> I'm not sure how to interpret the output, but the exit status is 0 so
>> it looks like btrfs doesn't think there's an issue with the file
>> system.
>>
>> I get the same mount error with opti
the same mount error with options ro,recovery.
On Fri, Jun 12, 2015 at 12:23 AM, Qu Wenruo wrote:
Original Message
Subject: Can't mount btrfs volume on rbd
From: Steve Dainard
To:
Date: 2015-06-11 23:26
Hello,
I'm getting an error when attempting to mount a
I get the same mount error with options ro,recovery.
On Fri, Jun 12, 2015 at 12:23 AM, Qu Wenruo wrote:
>
>
> Original Message ----
> Subject: Can't mount btrfs volume on rbd
> From: Steve Dainard
> To:
> Date: 2015-06-11 23:26
>
>> Hello,
>
Original Message
Subject: Can't mount btrfs volume on rbd
From: Steve Dainard
To:
Date: 2015-06-11 23:26
Hello,
I'm getting an error when attempting to mount a volume on a host that
was forcibly powered off:
# mount /dev/rbd4 climate-downscale-CMIP5/
mount:
Hello,
I'm getting an error when attempting to mount a volume on a host that
was forcibly powered off:
# mount /dev/rbd4 climate-downscale-CMIP5/
mount: mount /dev/rbd4 on /mnt/climate-downscale-CMIP5 failed: Stale file handle
/var/log/messages:
Jun 10 15:31:07 node1 kernel: rbd4: unknown parti
On Nov 3, 2014, at 12:48 PM, Florian Lindner wrote:
>
> Ok, problem is that I need to organise another hard disk for that. ;-)
>
> I tried restore for a test run, it gave a lot of messages about wrong
> compression length. I found some discussion about that, but I don't know if
> its indicate
Chris Murphy wrote:
>
> On Nov 2, 2014, at 8:18 AM, Florian Lindner wrote:
>
>> Hello,
>>
>> all of a sudden I can't mount my btrfs home partition anymore. System is
>> Arch with kernel 3.17.2, but I use snapper which does snapshots
>> regularly a
On Nov 2, 2014, at 8:18 AM, Florian Lindner wrote:
> Hello,
>
> all of a sudden I can't mount my btrfs home partition anymore. System is
> Arch with kernel 3.17.2, but I use snapper which does snapshots regularly
> and I had 3.17.1 before, which afaik had some problems
Robert White wrote:
> On 11/02/2014 07:18 AM, Florian Lindner wrote:
>> # btrfsck /dev/sdb1
>> # btrfsck --init-extent-tree /dev/sdb1
>> # btrfsck --init-csum-tree /dev/sdb1
>
> Notably missing from all these commands is "--repair"...
>
> I don't know that's your problem for sure, but it's where
Robert White posted on Sun, 02 Nov 2014 14:31:36 -0800 as excerpted:
> On 11/02/2014 07:18 AM, Florian Lindner wrote:
>> # btrfsck /dev/sdb1
>> # btrfsck --init-extent-tree /dev/sdb1
>> # btrfsck --init-csum-tree /dev/sdb1
>
> Notably missing from all these commands is "--repair"...
>
> I don't
On 11/02/2014 07:18 AM, Florian Lindner wrote:
# btrfsck /dev/sdb1
# btrfsck --init-extent-tree /dev/sdb1
# btrfsck --init-csum-tree /dev/sdb1
Notably missing from all these commands is "--repair"...
I don't know that's your problem for sure, but it's where I would start...
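Alongside the exchange above, a quick-reference sketch of which check invocations only read and which modify the filesystem. Assumptions, not from the thread: the modern btrfs-progs spelling `btrfs check` is used, the device path comes from the report, and the `--init-*` options shown in the original commands are themselves destructive rebuilds, so they belong on the "writes" side; nothing here is executed against a device.

```shell
#!/bin/sh
# Sketch: which btrfs check invocations only read, and which write.
# /dev/sdb1 is the device from the report; commands are printed, not run.
print_modes() {
    echo "read-only: btrfs check /dev/sdb1"
    echo "writes:    btrfs check --repair /dev/sdb1"
    echo "writes:    btrfs check --init-extent-tree /dev/sdb1"
    echo "writes:    btrfs check --init-csum-tree /dev/sdb1"
}
print_modes
```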
Hello,
all of a sudden I can't mount my btrfs home partition anymore. System is
Arch with kernel 3.17.2, but I use snapper which does snapshots regularly
and I had 3.17.1 before, which afaik had some problems with snapshots.
Trying to mount without any options gives this in the syslog:
Nov
On Thu, Aug 14, 2014 at 08:17:02PM -0400, Chris Mason wrote:
> Yes, btrfs-zero-log doesn't need that root to be read. I'll fix it up,
Cool, thanks for fixing that, this one was easy considering :)
> really glad it worked for you.
You and me both :)
Your timely reply today was very much appreci
On 08/14/2014 06:28 PM, Marc MERLIN wrote:
> On Thu, Aug 14, 2014 at 06:03:09PM -0400, Chris Mason wrote:
>> At least I'll get to buy you a beer this time.
>
> Haha, no worries :)
>
>> Let's just see if the log root is the only problem. This will get you
>> through btrfs-zero-log
>
> It sure
On Thu, Aug 14, 2014 at 06:03:09PM -0400, Chris Mason wrote:
> At least I'll get to buy you a beer this time.
Haha, no worries :)
> Let's just see if the log root is the only problem. This will get you
> through btrfs-zero-log
It sure did, thanks much for the patch.
It output absolutely nothing
On 08/14/2014 01:27 PM, Marc MERLIN wrote:
> On Thu, Aug 14, 2014 at 12:52:35PM -0400, Austin S Hemmelgarn wrote:
>> I don't think it is likely that the Samsung SSD is to blame, in my
>> experience Samsung's SSD's are better than almost every other brand
>> except Intel, and I know that they hono
On Thu, Aug 14, 2014 at 01:10:05PM -0600, Chris Murphy wrote:
>
> On Aug 14, 2014, at 11:27 AM, Marc MERLIN wrote:
>
> > On Thu, Aug 14, 2014 at 12:52:35PM -0400, Austin S Hemmelgarn wrote:
> >> I don't think it is likely that the Samsung SSD is to blame, in my
> >> experience Samsung's SSD's ar
On Aug 14, 2014, at 11:27 AM, Marc MERLIN wrote:
> On Thu, Aug 14, 2014 at 12:52:35PM -0400, Austin S Hemmelgarn wrote:
>> I don't think it is likely that the Samsung SSD is to blame, in my
>> experience Samsung's SSD's are better than almost every other brand
>> except Intel, and I know that th
On Thu, Aug 14, 2014 at 12:52:35PM -0400, Austin S Hemmelgarn wrote:
> I don't think it is likely that the Samsung SSD is to blame, in my
> experience Samsung's SSD's are better than almost every other brand
> except Intel, and I know that they honor write-barriers correctly.
> The likely issue is
s compressed).
>
> Do you have any suggestions on how to repair this partition so that it's
> mountable again?
>
> This is a good SSD (samsung evo 840), I'm not sure it's to blame for
> corrupting data or writing it out of order and lying to the OS about it.
> If tha
er and lying to the OS about it.
If that is not the case, does it mean btrfs is still getting into states
where it mangles the filesystem in a way that it can't mount it anymore?
Thanks,
Marc
--
"A mouse is a device used to point at the xterm you want to t
On Jun 29, 2014, at 8:57 PM, Qu Wenruo wrote:
>>
> Finally find the stable method to reproduce the problem on 3.16-rc2,
> the point is that if we mount subvol,ro first, then you can't mount the whole
> device:
>
> # mkfs.btrfs -f /dev/sda6
> # mount /dev/sda6 /mnt/btr
Original Message
Subject: Re: Can't mount subvolume with ro option
From: Qu Wenruo
To: Sébastien ROHAUT , Chris Murphy
Date: 2014-06-30 10:19
Original Message
Subject: Re: Can't mount subvolume with ro option
From: Sébastien ROHAUT
To: Ch
Original Message
Subject: Re: Can't mount subvolume with ro option
From: Sébastien ROHAUT
To: Chris Murphy
Date: 2014-06-28 19:02
On 28/06/2014 00:12, Chris Murphy wrote:
On Jun 27, 2014, at 4:08 PM, Chris Murphy
wrote:
On Jun 27, 2014, at 2:07 PM, Sébastien R
On 28/06/2014 00:12, Chris Murphy wrote:
On Jun 27, 2014, at 4:08 PM, Chris Murphy wrote:
On Jun 27, 2014, at 2:07 PM, Sébastien ROHAUT wrote:
Hi,
In the wiki, it's said we can mount subvolumes with different mount options.
nosuid, nodev, rw and ro are listed, as valid generic mount op
On Jun 27, 2014, at 4:08 PM, Chris Murphy wrote:
>
> On Jun 27, 2014, at 2:07 PM, Sébastien ROHAUT
> wrote:
>
>> Hi,
>>
>> In the wiki, it's said we can mount subvolumes with different mount options.
>> nosuid, nodev, rw and ro are listed, as valid generic mount options.
>
> This might re
On Jun 27, 2014, at 2:07 PM, Sébastien ROHAUT wrote:
> Hi,
>
> In the wiki, it's said we can mount subvolumes with different mount options.
> nosuid, nodev, rw and ro are listed, as valid generic mount options.
This might require 3.15. I don't recall it working with early 3.14 kernels, but
b
Hi,
In the wiki, it's said we can mount subvolumes with different mount
options. nosuid, nodev, rw and ro are listed as valid generic mount
options.
https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_mount_subvolumes_with_different_mount_options.3F
But, when I try to mount my subvolume in re
Hey guys,
I have a 6-drive RAID10 btrfs volume. 2 drives are internal, then I have 2
external 2-bay enclosures. One of the enclosures disconnected and went offline
and I quickly unmounted the volume and then rebooted. With all drives
connected, and visible in "btrfs fi show" I tried to mount th
On Oct 15, 2013, at 3:48 PM, David Sterba wrote:
> On Tue, Oct 15, 2013 at 02:58:22PM -0600, Chris Murphy wrote:
>> Looks like this changed with the 3.2 kernel " Subvolumes mountable by
>> full path", I thought that was in addition to relative rather than a
>> total change in behavior. Good to k
On Tue, Oct 15, 2013 at 02:58:22PM -0600, Chris Murphy wrote:
> Looks like this changed with the 3.2 kernel " Subvolumes mountable by
> full path", I thought that was in addition to relative rather than a
> total change in behavior. Good to know.
Yes, the pre-3.2 behaviour was limited to subvolume
On Oct 15, 2013, at 2:50 PM, David Sterba wrote:
> On Tue, Oct 15, 2013 at 01:23:44PM -0600, Chris Murphy wrote:
>> After changing the default subvolume, I can't mount a nested subvolume
>> with a correct relative pathname to the default subvolume; it can only
>> be m
1 - 100 of 146 matches