On Fri, 4 Nov 2016 01:01:13 -0700, Marc MERLIN wrote:
> Basically I have this:
> sde             8:64   0  3.7T  0
> └─sde1          8:65   0  3.7T  0
>   └─md5         9:5    0 14.6T  0
>     └─bcache0 252:0    0
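
With a stack this deep (sde1 -> md5 -> bcache0 -> dm-crypt, going by the
dd commands later in the thread), one quick sanity check is whether every
layer reports the full size, since a layer that trims the tail could make
high offsets unreadable even though the filesystem still mounts. A minimal
sketch using util-linux's blockdev; the device paths are taken from the
lsblk output above, and the dm-crypt name is assumed from the dd commands
below:

  for dev in /dev/sde1 /dev/md5 /dev/bcache0 /dev/mapper/crypt_bcache0; do
      # print each layer's size in bytes; a mismatch narrows down which
      # layer swallows reads near the end of the device
      printf '%-30s %s bytes\n' "$dev" "$(blockdev --getsize64 "$dev")"
  done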
On Tue, Nov 01, 2016 at 12:13:38PM +0800, Qu Wenruo wrote:
> Would you try to locate the range where we start to fail to read?
>
> I still think the root problem is we failed to read the device in user
> space.
Understood.
I'll run this then:
myth:~# dd if=/dev/mapper/crypt_bcache0 of=/tmp/dump1 bs=512 count=32 skip=26367830208
At 10/31/2016 11:04 PM, Marc MERLIN wrote:
So, I'm willing to wait 2 more days before I wipe this filesystem and
start over if I can't get check --repair to work on it.
If you need longer, please let me know if you have an upcoming patch for me
to try and I'll wait.
Thanks,
Marc
On Mon, Oct 31, 2016 at 08:44:12AM +0000, Hugo Mills wrote:
> > Any idea on special dm setup which can make us fail to read out some
> > data range?
>
> I've seen both btrfs check and btrfs dump-super give wrong answers
> (particularly, some addresses end up larger than the device, for some
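
Hugo's point about impossible addresses suggests a cheap cross-check:
compare what dump-super reports against the device size. A hedged sketch
(the subcommand spelling and the field layout differ across btrfs-progs
releases; older ones shipped this dump as btrfs-show-super, so treat the
awk filter as an approximation):

  # flag any superblock address that points past the end of the device
  size=$(blockdev --getsize64 /dev/mapper/crypt_bcache0)
  btrfs inspect-internal dump-super /dev/mapper/crypt_bcache0 |
    awk -v size="$size" '$1 ~ /root|bytenr/ && $2 ~ /^[0-9]+$/ && $2+0 > size'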
At 10/31/2016 02:25 PM, Marc MERLIN wrote:
Sorry for asking, am I doing this wrong?
myth:~# dd if=/dev/mapper/crypt_bcache0 of=/tmp/dump1 bs=512 count=32 skip=26367830208
dd: reading `/dev/mapper/crypt_bcache0': Invalid argument
0+0 records in
0+0 records out
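
dd's skip= is counted in bs-sized blocks, so an "Invalid argument" here
may come from argument handling rather than from the device itself. A GNU
coreutils dd (8.16 or newer) can take both values in bytes directly, which
removes the block-size conversion as a suspect; a sketch using the offsets
Qu posted (quoted just below):

  # read 16 KiB at byte offset 13500329066496 (first chunk-root copy)
  dd if=/dev/mapper/crypt_bcache0 of=/tmp/dump1 \
     iflag=skip_bytes,count_bytes skip=13500329066496 count=16384

The same command with skip=13500330213376 would cover the second copy.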
On Mon, Oct 31, 2016 at 01:27:56PM +0800, Qu Wenruo wrote:
> Would you please dump the following bytes?
> That's the chunk root tree block on your disk.
>
> offset: 13500329066496 length: 16384
> offset: 13500330213376 length: 16384
Sorry for asking, am I doing this wrong?
myth:~# dd if=/dev/mapper/crypt_bcache0 of=/tmp/dump1 bs=512 count=32 skip=26367830208
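
For reference, the numbers in that dd line do match Qu's first offset,
since skip= counts 512-byte blocks here:

  13500329066496 / 512 = 26367830208   (skip)
  16384 / 512          = 32            (count)
  13500330213376 / 512 = 26367832448   (skip for the second copy)

So the "Invalid argument" failure shown earlier is not a unit mistake in
the command, which leaves the read path through the dm stack as the
likelier suspect.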
On Mon, Oct 31, 2016 at 09:02:50AM +0800, Qu Wenruo wrote:
> Your chunk root is corrupted, and since the chunk tree provides the
> underlying disk layout (even for a single device), if we fail to read
> it, the filesystem will never be able to be mounted.
That's the thing though, I can mount the filesystem.
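
The two offsets Qu asked for are presumably the two mirror copies of the
chunk-root block (system chunks default to DUP on a single device), and
the address itself comes from the superblock, so it can be read back
directly. A sketch, assuming a btrfs-progs recent enough to have
inspect-internal (older releases shipped the same dump as
btrfs-show-super):

  # print the chunk_root address/generation/level from the superblock
  btrfs inspect-internal dump-super -f /dev/mapper/crypt_bcache0 \
      | grep -i chunk

If the chunk_root value printed here were larger than the device, that
would match the wrong-answer behaviour Hugo describes above.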
I stopped using Btrfs RAID-5 after encountering this problem two times
(once due to a failing SATA cable, once due to a random kernel problem
which caused the SATA or the block device driver to reset/crash).
As far as I can tell, the main problem is that after a detach and a
subsequent re-attach (on
I have a filesystem on top of md raid5 that got a few problems due to the
underlying block layer (bad data cable).
The filesystem mounts fine, but had a few issues.
Scrub runs (I didn't let it finish; it takes a _long_ time).
But check --repair won't even run at all:
myth:~# btrfs --version
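
One more avenue before wiping, with the caveat that version availability
is a guess here: btrfs-progs later grew a second check implementation
(--mode=lowmem, around 4.7) that walks the trees through a different code
path, so a check that won't run at all in the default mode sometimes at
least produces a diagnosis there:

  # read-only diagnosis via the alternate checker; early lowmem versions
  # refuse --repair, so this only reports, it does not fix
  btrfs check --mode=lowmem /dev/mapper/crypt_bcache0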