Thanks for the answer.
On 16/07/18 10:32, Qu Wenruo wrote:
>
>
> On 2018-07-16 16:15, Udo Waechter wrote:
>>> The weird thing is that I can't really find information about the
>>> "failed to recover balance: -5" error. There was no rebalance
>>> running during the crash.
>
> [96926.940705] BTRFS error (device dm-2): failed to recover balance: -5
This means your fs failed to recover the balance.
And it should mostly be caused by the transid error just one line above.
Normally this means your fs is more or less corrupted, which could be
caused by power loss or something else.
>> [96926.985801] BTRFS error (device dm-2): open_ctree failed
>>
> [96926.938619] BTRFS error (device dm-2): parent transid verify failed
> on 321269628928 wanted 3276017 found 3275985
> [96926.940705] BTRFS error (device dm-2): failed to recover balance: -5
> [96926.985801] BTRFS error (device dm-2): open_ctree failed
>
> The weird thing is that I can't really find information about the
> "failed to recover balance: -5" error.
> BTRFS info (device dm-2): disk space caching is enabled
> [96926.927978] BTRFS error (device dm-2): parent transid verify failed
> on 321269628928 wanted 3276017 found 3275985
> [96926.938619] BTRFS error (device dm-2): parent transid verify failed
> on 321269628928 wanted 3276017 found 3275985
> [96926.940705] BTRFS error (device dm-2): failed to recover balance: -5
BTRFS error (device dm-2): parent transid verify failed
on 321269628928 wanted 3276017 found 3275985
[96926.940705] BTRFS error (device dm-2): failed to recover balance: -5
[96926.985801] BTRFS error (device dm-2): open_ctree failed
The weird thing is that I can't really find information about the
"failed to recover balance: -5" error.
OK, so I ended up using btrfs restore; it seems that all (or at least the
most important) files were restored.
Now I'm looking for another reliable filesystem which will not unrecoverably
die on a power outage.
msk
On 22. 1. 2018 at 10:14, Zatkovský Dušan wrote:
Hi.
Badblocks finished on both disks with no errors. The only messages from
the kernel during the night are six of "perf: interrupt took too long
(2511 > 2500), lowering kernel.perf_event_max_sample_rate to 79500".
root@nas:~# smartctl -l scterc /dev/sda
smartctl 6.6 2016-05-31 r4324
On Sun, Jan 21, 2018 at 4:13 PM, Chris Murphy wrote:
> On Sun, Jan 21, 2018 at 3:31 PM, msk conf wrote:
>> Hello,
>>
>> thank you for the reply.
>>
>>> What do you get for btrfs fi df /array
>>
>>
>> Can't do that because filesystem is not mountable.
...
>
> Failing with:
> [ 2765.719548] BTRFS critical (device sda4): corrupt leaf, slot offset bad:
> block=3997250650112, root=1, slot=48
> [ 2765.731772] BTRFS error (device sda4): failed to read block groups: -5
> [ 2765.781993] BTRFS error (device sda4): open_ctree failed
Some information to start with:
uname -a
Linux nas 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
btrfs --version
btrfs-progs v4.7.3
btrfs fi show
Label: none uuid: 4a20b115-4742-42f1-9e1a-225323faa31a
Total devices 1 FS bytes used 8
> I rebooted with HWE K4.11
>
> and took a pic of the error message (see attachment).
>
> It seems btrfs still sees the removed NVME.
> There is a mismatch from super_num_devices (3) to num_devices (2)
> which indicates something strange is going on here, imho.
>
> Then I returned and booted
> -----Original Message-----
> From: Dmitrii Tcvetkov
> Sent: Tue 22.08.2017 12:28
> To: g6094...@freenet.de
> Cc: linux-btrfs@vger.kernel.org
> Subject: Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable
> to find block group for 0
>
On Tue, 22 Aug 2017 11:31:23 +0200
g6094...@freenet.de wrote:
> So 1st should be investigating why did the disk not get removed
> correctly? Btrfs dev del should remove the device correctly, right? Is
> there a bug?
It should and probably did. To check that we need to see output of
btrfs
add) and removed the missing/dead device (btrfs dev del).
Everything worked well. BUT as I rebooted I ran into the "BTRFS RAID 1
not mountable: open_ctree failed, unable to find block group for 0"
because of a MISSING disk?! I checked the btrfs list and found that
there was a patch th
On 12-06-17 at 01:00, Chris Murphy wrote:
> On Sun, Jun 11, 2017 at 4:13 AM, Koen Kooi wrote:
>>
>>> On Jun 11, 2017, at 12:05, Koen Kooi wrote
>>> the following:
>>>
On Jun 11, 2017, at 06:20, Chris Murphy
On 12-06-17 at 00:58, Chris Murphy wrote:
[..]
> Also worth trying btrfs check --mode=lowmem. This doesn't repair but
> is a whole new implementation so it might find the source of the
> problem better than the current fsck.
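The suggested read-only check can be run against the unmounted device roughly like this (device name is a placeholder, not from this thread):

```shell
# Lowmem mode is a separate checker implementation; it only reports
# problems and cannot repair, so running it here is safe.
btrfs check --mode=lowmem /dev/md0
```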
I ran it under 'catchsegv' to give more data where it segfaults,
>>>>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): parent transid
>>>>> verify failed on 5840011722752 wanted 170755 found 170832
>>>>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): parent transid
>>>>> verify failed on 5840011722752 wanted 170755 found 170832
>>>>> [Fri Jun 9 20:46:0
On Sun, Jun 11, 2017 at 4:13 AM, Koen Kooi wrote:
>
>> On Jun 11, 2017, at 12:05, Koen Kooi wrote
>> the following:
>>
>>>
>>> On Jun 11, 2017, at 06:20, Chris Murphy wrote
>>> the following:
>>>> failed on 5840011722752 wanted 170755 found 170832
>>>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): failed to read block
>>>> groups: -5
>>>> [Fri Jun 9 20:46:08 2017] BTRFS error (device md0): open_ctree failed
Superblock shows gen 170816
> On Jun 11, 2017, at 12:05, Koen Kooi wrote
> the following:
>
>>
>> On Jun 11, 2017, at 06:20, Chris Murphy wrote
>> the following:
>>
>> On Fri, Jun 9, 2017 at 1:57 PM, Hugo Mills wrote:
[..]
>>> failed on 5840011722752 wanted 170755 found 170832
>>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): parent transid verify
>>> failed on 5840011722752 wanted 170755 found 170832
>>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): failed to read
>> [Fri Jun 9 20:46:07 2017] BTRFS error (device md0): failed to read block
>> groups: -5
>> [Fri Jun 9 20:46:08 2017] BTRFS error (device md0): open_ctree failed
With a transid failure on mount, about the only thing that's likely
to work is mounting with -o usebackuproot. If that doesn't work, then
a rebuild of the FS is almost certainly needed.
Hugo.
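Hugo's advice above, spelled out as a command sketch (device and mountpoint are placeholders, not from this thread):

```shell
# Attempt the mount using a backup tree root, read-only first so
# nothing further is written to the damaged filesystem. On kernels
# older than 4.6 the option was spelled "recovery" instead.
mount -o ro,usebackuproot /dev/md0 /mnt
```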
[Fri Jun 9 20:46:07 2017] BTRFS error (device md0): parent transid verify
failed on 5840011722752 wanted 170755 found 170832
[Fri Jun 9 20:46:07 2017] BTRFS error (device md0): failed to read block
groups: -5
[Fri Jun 9 20:46:08 2017] BTRFS error (device md0): open_ctree failed
I tried repair
On 25. nov. 2016 21:19, Kai Stian Olstad wrote:
I have problem mounting my 3 disk raid1.
This happened after upgrading from Kubuntu 14.04 to 16.04.
I finally found the problem.
Since I needed to reboot after the upgrade I decided to add some disks,
and in order to do that I needed to move
failed to read tree root on dm-11
Nov 25 20:25:35 cb kernel: [13486.160009] BTRFS: open_ctree failed
$ sudo btrfs filesystem show
Label: none uuid: 6554e692-6c1c-4a69-8ff8-cb5615fb9200
Total devices 3 FS bytes used 6.38TiB
devid 1 size 5.46TiB used 4.38TiB
On Thursday, 17 November 2016 21:20:56 CET, Austin S. Hemmelgarn wrote:
On 2016-11-17 15:05, Chris Murphy wrote:
I think the wiki should be updated to reflect that raid1 and raid10
are mostly OK. I think it's grossly misleading to consider either as
green/OK when a single degraded read write
On Thu, Nov 17, 2016 at 12:20 PM, Austin S. Hemmelgarn
wrote:
> On 2016-11-17 15:05, Chris Murphy wrote:
>>
>> I think the wiki should be updated to reflect that raid1 and raid10
>> are mostly OK. I think it's grossly misleading to consider either as
>> green/OK when a
On Thursday, 17 November 2016, 12:05:31 CET, Chris Murphy wrote:
> I think the wiki should be updated to reflect that raid1 and raid10
> are mostly OK. I think it's grossly misleading to consider either as
> green/OK when a single degraded read write mount creates single chunks
> that will
On 2016-11-17 15:05, Chris Murphy wrote:
I think the wiki should be updated to reflect that raid1 and raid10
are mostly OK. I think it's grossly misleading to consider either as
green/OK when a single degraded read write mount creates single chunks
that will then prevent a subsequent degraded
I think the wiki should be updated to reflect that raid1 and raid10
are mostly OK. I think it's grossly misleading to consider either as
green/OK when a single degraded read write mount creates single chunks
that will then prevent a subsequent degraded read write mount. And
also the lack of
On Wednesday, 16 November 2016, 07:57:08 CET, Austin S. Hemmelgarn wrote:
> On 2016-11-16 06:04, Martin Steigerwald wrote:
> > On Wednesday, 16 November 2016, 16:00:31 CET, Roman Mamedov wrote:
> >> On Wed, 16 Nov 2016 11:55:32 +0100
> >>
> >> Martin Steigerwald
though. I took the "BTRFS: open_ctree failed" message as indicative of some
structural issue with the filesystem.
For the reason as to why the writable mount didn't work, check "btrfs fi df"
for the filesystem to see if you have any "single" profile chunks on it
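A way to run the suggested check, with a common follow-up that is not from this thread (mountpoint is a placeholder):

```shell
# Any "single" profile lines on a raid1 filesystem point at chunks
# created during a degraded read-write mount.
btrfs filesystem df /mnt
# Converting those chunks back to raid1 with a filtered balance is
# the usual fix; "soft" skips chunks already in the target profile.
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```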
[ 3080.120706] BTRFS info (device dm-13): has skinny extents
[ 3080.150957] BTRFS warning (device dm-13): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[ 3080.195941] BTRFS: open_ctree failed
I have to wonder, did you read the above message? What you need at this point
is simply "-o degraded,ro".
On Wednesday, 16 November 2016, 11:55:32 CET, you wrote:
> So mounting work although for some reason scrubbing is aborted (I had this
> issue a long time ago on my laptop as well). After removing /var/lib/btrfs
> scrub status file for the filesystem:
>
> merkaba:~> btrfs scrub start
On Wednesday, 16 November 2016, 16:00:31 CET, Roman Mamedov wrote:
> On Wed, 16 Nov 2016 11:55:32 +0100
>
> Martin Steigerwald <martin.steigerw...@teamix.de> wrote:
> > I do think that above kernel messages invite such a kind of interpretation
> > though. I took th
On Wed, 16 Nov 2016 11:55:32 +0100
Martin Steigerwald <martin.steigerw...@teamix.de> wrote:
> I do think that above kernel messages invite such a kind of interpretation
> though. I took the "BTRFS: open_ctree failed" message as indicative of some
> structural
> > BTRFS info (device dm-13): disk space caching is
> > enabled
> > [ 3080.120706] BTRFS info (device dm-13): has skinny extents
> > [ 3080.150957] BTRFS warning (device dm-13): missing devices (1)
> > exceeds the limit (0), writeable mount is not allowed
> >
> [ 3080.150957] BTRFS warning (device dm-13): missing devices (1) exceeds
> the limit (0), writeable mount is not allowed
> [ 3080.195941] BTRFS: open_ctree failed
I have to wonder, did you read the above message? What you need at this point
is simply "-o degraded,ro". But I don't see that tried anywhere.
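As a full command, the suggestion above looks roughly like this (device and mountpoint are placeholders):

```shell
# One device is missing and the tolerated number of missing devices
# is 0, so a writeable mount is refused; a degraded read-only mount
# may still succeed and lets the data be copied off.
mount -o degraded,ro /dev/mapper/vg-lv /mnt
```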
BTRFS info (device dm-12): has skinny extents
[ 5739.103153] BTRFS error (device dm-12): failed to read chunk tree: -5
[ 5739.130304] BTRFS: open_ctree failed
merkaba:~> LANG=C mount -o degraded,usebackuproot /dev/satafp1/daten
/mnt/zeit
mount: wrong fs type, bad option, bad superblock
BTRFS error (device dm-0): cleaner transaction
attach returned -30
[ 2301.035405] BTRFS: open_ctree failed
It is exactly the same error I saw when trying to boot normally as
mentioned above.
Based on these two links:
> https://btrfs.wiki.kernel.org/index.php/Problem_FAQ
>
"bad key" and end in "BTRFS:
open_ctree failed":
# dmesg
...
[ 13.156643] Btrfs loaded
[ 13.228545] BTRFS: device label btrfs-raid devid 17 transid
15891267 /dev/sdl1
[ 13.228606] BTRFS: device label btrfs-raid devid 16 transid
15891267 /dev/sdk1
[ 13.228666] BTRFS: device la
down to an emergency shell through rd.break=pre-mount, when trying
to mount sysroot, I get the error open_ctree failed and BTRFS: failed to
read the system array. This is generally a problem when probing for btrfs
devices hasn't been done yet.
So I looked into the dracut sources to find
In some cases useful info is found in syslog - try dmesg | tail or so
[ 835.718898] BTRFS info (device dm-5): disk space caching is enabled
[ 835.778580] BTRFS: failed to read the system array on dm-5
[ 835.819223] BTRFS: open_ctree failed
gargamel:/var/log# mount /dev/mapper/raid0d1 /mnt/btrfs_space
[ 847.047607] BTRFS: device label btrfs_space devid 1
On Sun, Apr 26, 2015 at 03:32:26PM +, Hugo Mills wrote:
The usual reason for this is that btrfs dev scan isn't being run
properly. It's usually handled by udev -- most distributions will put
the appropriate hooks in their udev configuration if you have the
distribution's btrfs-progs
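When those udev hooks are missing, the scan can be done by hand before mounting (device paths and label are hypothetical):

```shell
# Register all btrfs member devices with the kernel, then mount.
btrfs device scan
mount LABEL=btrfs-raid /mnt
# Alternatively, name every member explicitly on the mount itself:
mount -o device=/dev/sdk1,device=/dev/sdl1 /dev/sdk1 /mnt
```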
BTRFS info (device dm-3): disk space caching is enabled
[Nov11 15:35] BTRFS info (device dm-3): disk space caching is enabled
[Nov11 15:36] BTRFS info (device dm-3): disk space caching is enabled
[Nov11 15:37] BTRFS info (device dm-3): disk space caching is enabled
[ +16.054127] BTRFS: open_ctree failed
The below is a hard disk going bad or other systematic problem at the
hardware level (controller card, interrupt conflict, etc).
In fact, given ata6.00: irq_stat 0x0800, interface fatal error, it's
pretty much a smoking gun pointing at your controller.
Since you just upgraded your kernel I'd
[Nov11 15:37] BTRFS info (device dm-3): disk space caching is enabled
[ +16.054127] BTRFS: open_ctree failed
[Nov11 15:38] BTRFS info (device dm-3): disk space caching is enabled
[Nov11 16:02] BTRFS info (device dm-2): disk space caching is enabled
How could I mount these volumes again?
Hi,
First of all: I noticed I was able to mount my partitions when doing it
with a different path, which made me investigate my /etc/fstab.
It contained this:
LABEL=data1 /mnt/data btrfs
Robert White posted on Sun, 02 Nov 2014 14:31:36 -0800 as excerpted:
On 11/02/2014 07:18 AM, Florian Lindner wrote:
# btrfsck /dev/sdb1
# btrfsck --init-extent-tree /dev/sdb1
# btrfsck --init-csum-tree /dev/sdb1
Notably missing from all these commands is --repair...
I don't know that's
Robert White wrote:
On 11/02/2014 07:18 AM, Florian Lindner wrote:
# btrfsck /dev/sdb1
# btrfsck --init-extent-tree /dev/sdb1
# btrfsck --init-csum-tree /dev/sdb1
Notably missing from all these commands is --repair...
I don't know that's your problem for sure, but it's where I would
On Nov 2, 2014, at 8:18 AM, Florian Lindner mailingli...@xgm.de wrote:
Hello,
all of a sudden I can't mount my btrfs home partition anymore. The system is
Arch with kernel 3.17.2, but I use snapper which does snapshots regularly
and I had 3.17.1 before, which afaik had some problems with
Chris Murphy wrote:
On Nov 2, 2014, at 8:18 AM, Florian Lindner mailingli...@xgm.de wrote:
Hello,
all of a sudden I can't mount my btrfs home partition anymore. The system is
Arch with kernel 3.17.2, but I use snapper which does snapshots
regularly and I had 3.17.1 before, which afaik
On Nov 3, 2014, at 12:48 PM, Florian Lindner mailingli...@xgm.de wrote:
Ok, problem is that I need to organise another hard disk for that. ;-)
I tried restore for a test run, it gave a lot of messages about wrong
compression length. I found some discussion about that, but I don't know if
kernel: parent transid verify failed on 2121534193664
wanted 1073784832 found 45541
Nov 02 16:04:16 horus kernel: BTRFS: open_ctree failed
btrfsck quits after about half a minute with
# btrfsck /dev/sdb1
root item for root 3377, current bytenr 2122078191616, current gen
8589934592, current level
On 11/02/2014 07:18 AM, Florian Lindner wrote:
# btrfsck /dev/sdb1
# btrfsck --init-extent-tree /dev/sdb1
# btrfsck --init-csum-tree /dev/sdb1
Notably missing from all these commands is --repair...
I don't know that's your problem for sure, but it's where I would start...
Hello.
A circuit breaker failed a few times and now I can't mount my btrfs
volume - it fails with open_ctree failed:
[ 337.004372] [817865a5] dump_stack+0x46/0x58
[ 337.004375] [810720ac] warn_slowpath_common+0x8c/0xc0
[ 337.004378] [810720fa] warn_slowpath_null
Frankie Fisher posted on Fri, 01 Aug 2014 10:58:39 +0100 as excerpted:
A circuit breaker failed a few times and now I can't mount my btrfs
volume - it fails with open_ctree failed:
[snip stacktrace, which as a btrfs user not dev doesn't give me much
anyway]
mount -o recovery doesn't
On 01/08/14 12:29, Duncan wrote:
Frankie Fisher posted on Fri, 01 Aug 2014 10:58:39 +0100 as excerpted:
A circuit breaker failed a few times and now I can't mount my btrfs
volume - it fails with open_ctree failed:
What are the next steps I should try? Should I try btrfs-zero-log? Or
should
On 06/24/2014 06:37 PM, Chris Murphy wrote:
On Jun 24, 2014, at 1:52 AM, Tamas Papp tom...@martos.bme.hu wrote:
On 06/22/2014 07:10 PM, Tamas Papp wrote:
On 06/20/2014 02:04 AM, George Mitchell wrote:
Hello Tamas,
I think it would help to provide more information than what you have posted.
On 06/25/2014 03:09 PM, Tamas Papp wrote:
On 06/24/2014 06:37 PM, Chris Murphy wrote:
On Jun 24, 2014, at 1:52 AM, Tamas Papp tom...@martos.bme.hu wrote:
On 06/22/2014 07:10 PM, Tamas Papp wrote:
On 06/20/2014 02:04 AM, George Mitchell wrote:
Hello Tamas,
I think it would help to provide
On Jun 24, 2014, at 11:53 PM, Mike Hartman m...@hartmanipulation.com wrote:
Does this version's btrfs-image allow you to make an image of the file
system?
Nope, same errors and no output.
https://btrfs.wiki.kernel.org/index.php/Restore
Your superblocks are good according to btrfs rescue super-recover.
I don't know all states of this file system, and copies you have. Right now
the earliest copy is obviously broken, and the latest copy is probably more
broken because at the least its csum tree has been blown away, meaning there are
no checksums to confirm whether any data extracted/copied
On Jun 25, 2014, at 1:32 PM, Mike Hartman m...@hartmanipulation.com wrote:
I don't know all states of this file system, and copies you have. Right now
the earliest copy is obviously broken, and the latest copy is probably more
broken because at the least its csum tree has been blown away
Chris Murphy posted on Mon, 23 Jun 2014 23:19:37 -0600 as excerpted:
I zeroed out the drive and ran every smartctl test on it I could find
and it never threw any more errors.
Zeroing SSDs isn't a good way to do it. Use ATA Secure Erase instead.
The drive is overprovisioned, so there are
On Jun 23, 2014, at 11:39 PM, Mike Hartman m...@hartmanipulation.com wrote:
https://github.com/kdave/btrfs-progs.git integration-20140619
Thanks. I pulled that version and retried everything in my original
transcript, including the btrfs check --init-csum-tree
--init-extent-tree. Results
On Jun 23, 2014, at 11:49 PM, Mike Hartman m...@hartmanipulation.com wrote:
I zeroed out the drive and ran every smartctl test on it I could find
and it never threw any more errors.
Zeroing SSDs isn't a good way to do it. Use ATA Secure Erase instead. The
drive is overprovisioned, so
On Jun 24, 2014, at 1:52 AM, Tamas Papp tom...@martos.bme.hu wrote:
On 06/22/2014 07:10 PM, Tamas Papp wrote:
On 06/20/2014 02:04 AM, George Mitchell wrote:
Hello Tamas,
I think it would help to provide more information than what you have
posted. open_ctree can cover a lot of territory.
Does this version's btrfs-image allow you to make an image of the file system?
Nope, same errors and no output.
https://btrfs.wiki.kernel.org/index.php/Restore
Your superblocks are good according to btrfs rescue super-recover. And
various tree roots are found by btrfs-find-root including a
7222353813307377896 760696832
[92039.141820] btrfs: open_ctree failed
I dd'ed an image of the partition, grabbed the latest btrfs-progs from
git, and then tried every troubleshooting/recovery method I could find
online with no luck. The complete transcript is available here:
http://pastebin.com/PSHL7UxZ
Summary
On Jun 23, 2014, at 4:18 PM, Mike Hartman m...@hartmanipulation.com wrote:
Can anyone offer any suggestions? Is this data really unrecoverable? I
have no idea what could have gone so severely wrong.
btrfs check --repair /media/mint/usb_data/sda6_check.img
btrfs check --repair --init-csum-tree
On 06/24/2014 09:17 AM, Chris Murphy wrote:
On Jun 23, 2014, at 4:18 PM, Mike Hartman m...@hartmanipulation.com wrote:
Can anyone offer any suggestions? Is this data really unrecoverable? I
have no idea what could have gone so severely wrong.
btrfs check --repair
I have a dd image, but not a btrfs-image. I ran the btrfs-image
command, but it threw the same errors as everything else and generated
a 0 byte file.
I agree that it SOUNDS like some kind of media failure, but if so it
seems odd to me that I was able to dd the entire partition with no
read errors.
I have no particular desire to use it. I just tried everything else
first and thought it was worth a shot.
If you think that version would help, can you point me to the git
repo? The one I grabbed was
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git.
On Mon, Jun 23, 2014 at
Hi Mike,
On 06/24/2014 11:04 AM, Mike Hartman wrote:
I have no particular desire to use it. I just tried everything else
first and thought it was worth a shot.
If you think that version would help, can you point me to the git
repo? The one I grabbed was
On Jun 23, 2014, at 8:58 PM, Mike Hartman m...@hartmanipulation.com wrote:
I have a dd image, but not a btrfs-image. I ran the btrfs-image
command, but it threw the same errors as everything else and generated
a 0 byte file.
I agree that it SOUNDS like some kind of media failure, but if so
Of course it could just be a bug so it's worth trying David's integration
branch.
I'll try that shortly.
* Firmware Version: 0006
Firmware 0007 is current for this SSD.
I assume that's probably not something I should mess with right now
though, right?
On Jun 23, 2014, at 10:34 PM, Mike Hartman m...@hartmanipulation.com wrote:
Firmware 0007 is current for this SSD.
I assume that's probably not something I should mess with right now
though, right?
I would deal with that later. A firmware change now might make things worse if
you care
My two cents :-)
If you really want to use btrfs check --init-csum-tree
--init-extent-tree,
I'd suggest you use
David's latest btrfs-progs branch, which includes some recent bug fixes.
I have no particular desire to use it. I just tried everything else
first and thought it was worth a shot.
I zeroed out the drive and ran every smartctl test on it I could find
and it never threw any more errors.
Zeroing SSDs isn't a good way to do it. Use ATA Secure Erase instead. The
drive is overprovisioned, so there are pages without LBAs assigned, which
means they can't be written to by
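The ATA Secure Erase mentioned above is usually driven with hdparm, roughly as follows (device name is hypothetical; this irreversibly wipes the drive, so double-check the target):

```shell
# Check that the drive supports Secure Erase and is not "frozen":
hdparm -I /dev/sdX | grep -A8 Security
# The ATA spec requires a user password to be set first:
hdparm --user-master u --security-set-pass p /dev/sdX
# Then issue the erase, which clears the password again on success:
hdparm --user-master u --security-erase p /dev/sdX
```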
On 06/20/2014 02:04 AM, George Mitchell wrote:
Hello Tamas,
I think it would help to provide more information than what you have
posted. open_ctree can cover a lot of territory.
1) I may be missing something, but I see no attachment. I am not
sure the mailing list can handle
On 06/16/2014 04:02 PM, Tamas Papp wrote:
On 06/16/2014 03:26 PM, Tamas Papp wrote:
hi All,
There is a Dell XPS 13 laptop with an SSD.
System:
Ubuntu 14.04 amd64
Kernel is from the daily ppa, like 3.15rcX.
Now, it's running live system:
Linux ubuntu 3.13.0-24-generic #46-Ubuntu SMP Thu
On 06/19/2014 02:50 PM, Tamas Papp wrote:
On 06/16/2014 04:02 PM, Tamas Papp wrote:
On 06/16/2014 03:26 PM, Tamas Papp wrote:
hi All,
There is a Dell XPS 13 laptop with an SSD.
System:
Ubuntu 14.04 amd64
Kernel is from the daily ppa, like 3.15rcX.
Now, it's running live system:
Linux
On 06/16/2014 03:26 PM, Tamas Papp wrote:
hi All,
There is a Dell XPS 13 laptop with an SSD.
System:
Ubuntu 14.04 amd64
Kernel is from the daily ppa, like 3.15rcX.
Now, it's running live system:
Linux ubuntu 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC
2014 x86_64 x86_64
parent transid verify failed on 59394068480 wanted
84587 found 84589
[179649.291776] Failed to read block groups: -5
[179649.312476] btrfs: open_ctree failed
On Ubuntu 13.04, btrfsck would assert with the following error:
btrfsck: disk-io.c:441: find_and_setup_root: Assertion `!(ret)' failed.
I have backed up the disk partition
disk space caching is enabled
[ 304.363253] btrfs bad tree block start 12878227550790025961 610545664
[ 304.363725] btrfs bad tree block start 3755836431281323167 610545664
[ 304.363757] btrfs: failed to read log tree
[ 304.377198] btrfs: open_ctree failed
[ 536.047135] device fsid 9c37c7d8-0fcb-4154-be3b
[ 56.209412] parent transid verify failed on 3321036099584 wanted 97967
found 97966
[ 56.225990] parent transid verify failed on 3321036099584 wanted 97967
found 97966
[ 56.226128] btrfs: failed to read log tree
[ 56.344483] btrfs: open_ctree failed
I've tried with 3.11-rc7, but it gives the same result.
Any hints how to recover from that?
I have backups, but it would be nice if the filesystem just mounted.
Try mounting with both -orecovery and -oro,recovery
On Mon, 26 Aug 2013 20:01:46 +0100
Hugo Mills h...@carfax.org.uk wrote:
Anything else I can try?
Oh, hang on... that's the log tree.
btrfs-zero-log may help. If you can take a btrfs-image of the
filesystem before running that, josef would like to see it.
btrfs-zero-log did the
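For reference, later btrfs-progs folded the standalone tool into a subcommand; either form clears only the log tree, which is why it is a comparatively contained step when the failure is specifically in reading the log tree (device name is a placeholder):

```shell
# Old standalone tool, as used in this thread:
btrfs-zero-log /dev/sdX
# Equivalent subcommand in newer btrfs-progs:
btrfs rescue zero-log /dev/sdX
```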
On Sun, Sep 16, 2012 at 03:24:53PM +0200, Sébastien Kalt wrote:
I'm running Debian Sid, 3.2.0-3-amd64 kernel and Btrfs v0.19
(0.19+20120328-8 according to dpkg), using XFCE4 and dolphin as a file
manager. The usb drive is auto-mounting, and I'm accessing it with
dolphin or console. I always
One entire subvolume was restored. But there were 4 subvolumes on that
partition. Is there a way to specify/force the restore of a different
subvolume ?
find-root seems to only find a single root.
thanks
On Mon, Mar 26, 2012 at 3:47 PM, Hugo Mills h...@carfax.org.uk wrote:
On Mon, Mar 26, 2012
On Tue, Mar 27, 2012 at 05:58:17AM -0700, Not Zippy wrote:
One entire subvolume was restored. But there were 4 subvolumes on that
partition. Is there a way to specify/force the restore of a different
subvolume ?
find-root seems to only find a single root.
There is only a single root
I had found that note on the restore but my restore.c does not allow
that flag (it is also missing the m flag). I used the branch
dangerousdonteveruse on
https://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git I
switched to the master branch to see if there was a difference but it
Thought I would let you know I did get things figured out. I used
btrfs-progs from github
https://github.com/josefbacik/btrfs-progs
I also used the findroot function from there which generated more
possibilities for the root objectid.
By plugging in the guesses from findroot into -r objectid for
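The procedure described above (feeding find-root candidates into restore) looks roughly like this; device, objectid, and output paths are placeholders:

```shell
# List candidate tree roots found on the device:
btrfs-find-root /dev/sdX
# Dry-run a restore against one candidate root objectid
# (-D lists what would be restored without writing anything):
btrfs restore -D -r 257 /dev/sdX /tmp/ignored
# Once a good root is found, restore for real, including snapshots:
btrfs restore -s -r 257 /dev/sdX /mnt/recovery
```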
parent transid verify failed on 498530500608 wanted
26696 found 27535
[ 1518.660243] btrfs: open_ctree failed
Tried btrfsck after a lot of messages got:
...
leaf parent key incorrect 563096514560
Unable to find block group for 0
btrfsck: extent-tree.c:284: find_search_start: Assertion `!(1)' failed.
My kernel 3.0.0-16
Hugo
I did try the dangerdonteveruse branch and that's the error btrfsck
--repair gave me. Looks like the btrfs-restore command may work
(thanks!). And yes I do have backups for the important data - I had
some other data on there which would need to be d/l'd again.
I don't dabble that much with the
On Mon, Mar 26, 2012 at 03:36:13PM -0700, Not Zippy wrote:
Hugo
I did try the dangerdonteveruse branch and that's the error btrfsck
--repair gave me.
Oooh, a brave one, I see. ;)
Looks like the btrfs-restore command may work (thanks!). And yes I
do have backups for the important data - I