On Sun, 24 Nov 2013, Stan Hoeppner wrote:
> I have always surmised that the culprit is rotational latency, because
> we're not able to get a real sector-by-sector streaming read from each
> drive. If even only one disk in the array has to wait for the platter
> to come round again, the entire str
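For reference, the rotational latency Stan is pointing at is easy to estimate. A minimal back-of-envelope sketch (assuming a 7200 RPM drive for illustration, not any particular array):

```python
def avg_rotational_latency_ms(rpm):
    """Average wait for a sector to come round: half a revolution."""
    return 0.5 * 60_000.0 / rpm

# A 7200 RPM disk takes 60000/7200 ~= 8.33 ms per revolution, so the
# average extra wait when a drive misses the sector is ~4.17 ms --
# and with N drives in the stripe, the chance that at least one of
# them misses grows with N.
print(f"{avg_rotational_latency_ms(7200):.2f} ms")
```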
On Sat, Nov 23, 2013 at 8:03 PM, Stan Hoeppner wrote:
> Parity array rebuilds are read-modify-write operations. The main
> difference from normal operation RMWs is that the write is always to the
> same disk. As long as the stripe reads and chunk reconstruction outrun
> the write throughput the
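The reconstruction step described above (read the surviving chunks of each stripe, rebuild the missing one, write it to the single replacement disk) can be sketched for the XOR/RAID-5 case. This is a toy illustration of the parity math only, not md's actual code path:

```python
from functools import reduce

def rebuild_chunk(surviving_chunks):
    """XOR the chunks from all surviving disks to reconstruct the
    missing chunk (data or parity -- the math is identical)."""
    return bytes(reduce(lambda a, b: a ^ b, col)
                 for col in zip(*surviving_chunks))

# Toy stripe: three data chunks plus parity.
d = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
p = rebuild_chunk(d)                       # parity = d0 ^ d1 ^ d2
recovered = rebuild_chunk([d[0], d[2], p]) # rebuild lost d1
assert recovered == d[1]
```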
I'm getting these with 3.13-rc1:
[53358.655620] ------------[ cut here ]------------
[53358.655686] WARNING: CPU: 7 PID: 1239 at fs/btrfs/inode.c:4721
inode_tree_add+0xc2/0x13f [btrfs]()
[53358.655779] Modules linked in: veth ipt_MASQUERADE iptable_nat
nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv
On Sat, 23 Nov 2013 20:03:30 -0800 Kent Overstreet wrote:
> It was being open coded in a few places.
>
> Signed-off-by: Kent Overstreet
> Cc: Jens Axboe
> Cc: Joern Engel
> Cc: Prasad Joshi
> Cc: Neil Brown
> Cc: Chris Mason
Acked-by: NeilBrown
for the drivers/md/md.c bits, however...
On 11/23/2013 1:12 AM, NeilBrown wrote:
> On Fri, 22 Nov 2013 21:34:41 -0800 John Williams
>> Even a single 8x PCIe 3.0 card has potentially over 7GB/s of bandwidth.
>>
>> Bottom line is that IO bandwidth is not a problem for a system with
>> prudently chosen hardware.
Quite right.
>> More like
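John's 7 GB/s figure checks out: PCIe 3.0 runs 8 GT/s per lane with 128b/130b line coding, so an x8 slot gives roughly (a quick sanity-check calculation, per direction, ignoring protocol overhead):

```python
lanes = 8
gt_per_s = 8.0           # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130     # 128b/130b line coding
gbytes = lanes * gt_per_s * encoding / 8   # bits -> bytes
print(f"{gbytes:.2f} GB/s per direction")  # ~7.88 GB/s
```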
It was being open coded in a few places.
Signed-off-by: Kent Overstreet
Cc: Jens Axboe
Cc: Joern Engel
Cc: Prasad Joshi
Cc: Neil Brown
Cc: Chris Mason
---
block/blk-flush.c          | 19 +--
drivers/md/md.c            | 12 +---
fs/btrfs/check-integrity.c | 32 +
Hi Andrea,
On Sat, Nov 23, 2013 at 08:55:08AM +0100, Andrea Mazzoleni wrote:
> Hi Piergiorgio,
>
> > How about par2? How does this work?
> I checked the matrix they use, and sometimes it contains some singular
> square submatrix.
> It seems that in GF(2^16) these cases are just less common. Maybe
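A toy illustration of the kind of singularity Andrea describes (this uses GF(2^8) with the common 0x11d polynomial purely as an example; the actual par2 field and matrix differ): a 2x2 submatrix built from consecutive powers of 2 has determinant 1*8 ^ 2*4 = 8 ^ 8 = 0, i.e. it is singular, so those two parity rows cannot recover two lost data disks.

```python
def gf256_mul(a, b, poly=0x11d):
    """Multiply in GF(2^8): carry-less multiply reduced mod `poly`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def is_singular_2x2(m):
    """Over GF(2^8), a 2x2 matrix is singular iff a*d ^ b*c == 0."""
    (a, b), (c, d) = m
    return gf256_mul(a, d) ^ gf256_mul(b, c) == 0

assert is_singular_2x2([[1, 2], [4, 8]])        # powers of 2: singular
assert not is_singular_2x2([[1, 2], [3, 4]])    # det = 4 ^ 12 != 0
```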
On 22/11/13 23:59, NeilBrown wrote:
On Fri, 22 Nov 2013 10:07:09 -0600 Stan Hoeppner wrote:
In the event of a double drive failure in one mirror, the RAID 1 code
will need to be modified in such a way as to allow the RAID 5 code to
rebuild the first replacement disk, because the RAID 1 devic
Daniel Pocock posted on Sat, 23 Nov 2013 12:44:25 +0100 as excerpted:
>> [btrfs manpage quote]
>> btrfs device stats [-z] {<path>|<device>}
>>
>> Read and print the device IO stats for all devices of the filesystem
>> identified by <path> or for a single <device>.
>> -z Reset stats to zero after reading them.
>> Here's
On 23/11/13 11:35, Duncan wrote:
> Daniel Pocock posted on Sat, 23 Nov 2013 09:37:50 +0100 as excerpted:
>
>> What about when btrfs detects a bad block checksum and recovers data
>> from the equivalent block on another disk? The wiki says there will be
>> a syslog event. Does btrfs keep any st
Hi David,
On 2013-11-23 01:52, David Sterba wrote:
> On Tue, Nov 12, 2013 at 01:41:41PM +, Filipe David Borba Manana wrote:
>> This is a revised version of the original proposal/work from Alexander Block
>> to introduce a generic framework to set properties on btrfs filesystem
>> objects
>> (
Daniel Pocock posted on Sat, 23 Nov 2013 09:37:50 +0100 as excerpted:
> What about when btrfs detects a bad block checksum and recovers data
> from the equivalent block on another disk? The wiki says there will be
> a syslog event. Does btrfs keep any stats on the number of blocks that
> it cons
On 23/11/13 09:37, Daniel Pocock wrote:
>
>
> On 23/11/13 04:59, Anand Jain wrote:
>>
>>
>>> For example, would the command
>>>
>>> btrfs filesystem show --all-devices
>>>
>>> give a non-zero error status or some other clue if any of the devices
>>> are at risk?
>>
>> No there isn't any g
On 23/11/13 04:59, Anand Jain wrote:
>
>
>> For example, would the command
>>
>> btrfs filesystem show --all-devices
>>
>> give a non-zero error status or some other clue if any of the devices
>> are at risk?
>
> No, there isn't any good way as of now. That's something to fix.
Does it re