On Mar 22, 2018 at 01:13, Liu Bo wrote:
> On Tue, Mar 20, 2018 at 7:01 PM, Qu Wenruo wrote:
>> On Mar 21, 2018 at 01:44, Mike Stevens wrote:
>>> 30 devices is really not that much; heck, you get 90-disk top-load JBOD
>>> storage chassis these days, and BTRFS does sound like an attractive choice
>>> for things like that.
On 03/19/2018 07:06 PM, Liu Bo wrote:
[...]
> So Mike's case is that both metadata and data are configured as
> raid6, and the operations he tried, balance and scrub, need to
> set the existing block group as readonly (in order to avoid any
> further changes being applied during
On Sun, Mar 18, 2018 at 3:52 PM, waxhead wrote:
> Liu Bo wrote:
>> On Sat, Mar 17, 2018 at 5:26 PM, Liu Bo wrote:
>>> On Fri, Mar 16, 2018 at 2:46 PM, Mike Stevens wrote:
>
> Could you please paste the whole dmesg, it looks like it hit
> btrfs_abort_transaction(),
> which should give us more information
On 03/18/2018 08:57 AM, Goffredo Baroncelli wrote:
> BTRFS_SYSTEM_CHUNK_ARRAY_SIZE = 2048
> sizeof(struct btrfs_chunk) = 48
> sizeof(struct btrfs_stripe) = 32
>
> So
>
> (2048/2 - 48)/32 + 1 = 31
>
> If my math is correct

My math was wrong:
sizeof(struct
On 03/18/2018 07:41 AM, Liu Bo wrote:
> ((BTRFS_SYSTEM_CHUNK_ARRAY_SIZE / 2) - sizeof(struct btrfs_chunk)) /
> sizeof(struct btrfs_stripe) + 1

BTRFS_SYSTEM_CHUNK_ARRAY_SIZE = 2048
sizeof(struct btrfs_chunk) = 48
sizeof(struct btrfs_stripe) = 32
So
On Mar 16, 2018 at 23:03, Mike Stevens wrote:
>> Can you post a more complete dmesg rather than snipping it? Is there
>> anything device- or Btrfs-related in the five minutes before this trace
>> happens? And is it still going read-only?
>
> It's still going read-only after the 4.15.10 update. Here's
On Fri, Mar 16, 2018 at 2:46 PM, Mike Stevens wrote:
>> Could you please paste the whole dmesg, it looks like it hit
>> btrfs_abort_transaction(),
>> which should give us more information about where it goes wrong.
>
> The whole thing is here: https://pastebin.com/4ENq2saQ
On Thu, Mar 15, 2018 at 2:07 PM, Mike Stevens wrote:
>> That's a hell of a filesystem. RAID5 and RAID6 are unstable and should
>> not be used for anything but throwaway data. You will be happy that you
>> value your data enough to have backups, because all sensible

>> Mar 15 14:03:06 auswscs9903 kernel: BTRFS warning (device sdag): failed
>> setting block group ro: -30
>
> These are only found in scrub.c

Interesting. I'm running an offline btrfs check right now; so far, extents and
free space cache seem to have passed.
If that finishes successfully, I'll
On Fri, Mar 16, 2018 at 10:17 AM, Mike Stevens wrote:
>> Also, in the meantime, maybe the problem can be prevented by
>> preventing the balance from resuming when mounting. First umount, then
>> mount with -o skip_balance.
>
> Thanks for the suggestion, Chris. I already had mounted it with skip_balance
> and then cancelled the balance. It will mount, but

Also, in the meantime, maybe the problem can be prevented by
preventing the balance from resuming when mounting. First umount, then
mount with -o skip_balance.

Chris Murphy
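Chris's suggestion, written out as a command sequence. This is a sketch only: the mount point is a placeholder, and /dev/sdag is taken from the warning earlier in the thread:

```shell
# Unmount first so the interrupted balance cannot resume on its own.
umount /mnt/pool

# Remount with skip_balance so the paused balance is not restarted.
mount -o skip_balance /dev/sdag /mnt/pool

# With the balance held in the paused state, cancel it for good.
btrfs balance cancel /mnt/pool
```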
On Thu, Mar 15, 2018 at 3:07 PM, Mike Stevens wrote:
> Mar 15 14:03:06 auswscs9903 kernel: WARNING: CPU: 6 PID: 2720 at
> fs/btrfs/extent-tree.c:10192 btrfs_create_pending_block_groups+0x1f3/0x260
> [btrfs]
> Mar 15 14:03:06 auswscs9903 kernel: Modules linked in:
On Thu, Mar 15, 2018 at 12:58 PM, Mike Stevens wrote:
> First, the required information
>
> ~ $ uname -a
> Linux auswscs9903 3.10.0-693.21.1.el7.x86_64

For a kernel this old you kinda need to get support from the distro.
This list is upstream and pretty much always

> That's a hell of a filesystem. RAID5 and RAID6 are unstable and should
> not be used for anything but throwaway data. You will be happy that you
> value your data enough to have backups, because all sensible sysadmins
> do have backups, correct?! (Do read just about any of Duncan's replies -
Mike Stevens wrote:

First, the required information:

~ $ uname -a
Linux auswscs9903 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018
x86_64 x86_64 x86_64 GNU/Linux

~ $ btrfs --version
btrfs-progs v4.9.1

~ $ sudo btrfs fi show
Label: none  uuid: