On 09/05/2014 10:40 AM, Jeff Moyer wrote:
> Christoph Hellwig writes:
>
>> On Wed, Sep 03, 2014 at 10:01:58AM +1000, NeilBrown wrote:
>>> Do we still need maximums at all?
>>
>> I don't think we do. At least on any system I work with I have to
>> increase them to get good performance without any adverse effect on
>> throttling.
Christoph Hellwig writes:

On Wed, Sep 03, 2014 at 10:01:58AM +1000, NeilBrown wrote:
> Do we still need maximums at all?

I don't think we do. At least on any system I work with I have to
increase them to get good performance without any adverse effect on
throttling.

> So can we just remove the limit on max_sectors and th…
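The maximums under discussion are the per-queue block-layer tunables exposed under /sys/block/<dev>/queue (e.g. max_sectors_kb, capped by the hardware limit max_hw_sectors_kb). As a rough sketch of how one would inspect them — the device name "sdd" is just a hypothetical example, and the demo reads from a throwaway directory laid out like sysfs rather than a real device:

```python
import os
import tempfile

def read_queue_tunable(dev, name, sysfs="/sys/block"):
    """Return the value of <sysfs>/<dev>/queue/<name>, or None if absent."""
    path = os.path.join(sysfs, dev, "queue", name)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

# Demonstrate against a temporary directory shaped like sysfs, since
# /dev/sdd is only a placeholder device name here.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sdd", "queue"))
with open(os.path.join(root, "sdd", "queue", "max_sectors_kb"), "w") as f:
    f.write("512\n")

print(read_queue_tunable("sdd", "max_sectors_kb", sysfs=root))  # -> 512
print(read_queue_tunable("sdd", "no_such_tunable", sysfs=root)) # -> None
```

On a real system, raising the limit is a plain write to the same file (as root), e.g. writing a larger value into /sys/block/sdd/queue/max_sectors_kb.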
I wholeheartedly agree. Of course, getting something other than CFQ as
the default I/O scheduler is going to be a difficult task. Enough
people upstream are convinced that we all NEED I/O priorities, when most
of what I see people doing with them is bandwidth provisioning, which
can be done much…
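For reference, the scheduler in effect for a device is reported by /sys/block/<dev>/queue/scheduler as a space-separated list with the active one in brackets, e.g. "noop deadline [cfq]". A small parsing helper (the device name in the commented-out usage is only an example):

```python
def current_scheduler(listing):
    """Parse the kernel's scheduler listing, e.g. 'noop deadline [cfq]'.
    The bracketed entry is the active scheduler; return it, or None if
    no entry is bracketed."""
    for token in listing.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return None

print(current_scheduler("noop deadline [cfq]"))  # -> cfq

# On a real system (device name purely illustrative):
# with open("/sys/block/sdd/queue/scheduler") as f:
#     print(current_scheduler(f.read()))
```

Switching schedulers is a root-only write of the desired name into that same file.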
While I'm sure some of those settings were selected with good reason,
maybe there could be a few options (two or three) with some basic
intelligence at creation time to pick a saner default: some checks to
see whether one option or another might be better suited to the
filesystem, like the RAID5 stripe size. Leave the…
On Tue 02-09-14 07:31:04, Ted Tso wrote:

> - the very small max readahead size

For things like the readahead size, that's probably something that we
should autotune, based on the time it takes to read N sectors. i.e.,
start N relatively small, such as 128k, and then bump it up based on
how long it takes to do a sequential read of N sectors…
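The autotuning idea above can be sketched as a simple feedback loop: grow the readahead window while measured sequential-read throughput keeps improving, and stop when gains stall or a cap is reached. Everything here is an illustration, not a kernel implementation — the 10% improvement threshold, the 4 MiB cap, and the synthetic device model are all assumptions:

```python
def autotune_readahead(measure_throughput, start_kb=128, cap_kb=4096,
                       min_gain=1.10):
    """Double the readahead window while sequential-read throughput
    improves by at least `min_gain` (10% by default); stop at the cap
    or when gains stall. `measure_throughput(window_kb)` is supplied
    by the caller and returns MB/s for a read with that window."""
    window = start_kb
    best = measure_throughput(window)
    while window * 2 <= cap_kb:
        t = measure_throughput(window * 2)
        if t < best * min_gain:
            break  # the bigger window no longer pays off
        window, best = window * 2, t
    return window

# Synthetic device model (pure assumption, for illustration only):
# throughput scales with window size but saturates around 1 MiB.
def fake_device(window_kb):
    return min(window_kb, 1024) * 0.4

print(autotune_readahead(fake_device))  # -> 1024
```

With this model the loop settles at 1024 KiB: doubling to 2048 KiB yields no further throughput gain, so the search stops.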
On Tue, Sep 02, 2014 at 10:08:22AM +1000, Dave Chinner wrote:
> Pretty obvious difference: avgrq-sz. btrfs is doing 512k IOs, ext4
> and XFS are doing 128k IOs because that's the default block
> device readahead size. 'blockdev --setra 1024 /dev/sdd' before
> mounting the filesystem will…
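Both numbers above are in 512-byte sectors: iostat's avgrq-sz field counts sectors, and `blockdev --setra` takes its argument in sectors as well. A quick converter makes the thread's figures line up (the 512 KiB vs 128 KiB interpretation follows directly from the sector unit):

```python
SECTOR = 512  # bytes; iostat's avgrq-sz and blockdev --setra both
              # count 512-byte sectors

def sectors_to_kib(sectors):
    """Convert a sector count to KiB."""
    return sectors * SECTOR // 1024

# 'blockdev --setra 1024' therefore requests a 512 KiB readahead window:
print(sectors_to_kib(1024))  # -> 512

# And the IO sizes from the thread: btrfs's 512k IOs are 1024 sectors,
# the ext4/XFS 128k IOs are 256 sectors.
print(sectors_to_kib(1024), sectors_to_kib(256))  # -> 512 128
```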
Duncan <1i5t5.duncan cox.net> writes:

Nikolai Grigoriev posted on Tue, 26 Aug 2014 19:39:08 -0400 as excerpted:

> Kernel: 3.8.13-35.3.5.el6uek.x86_64 #2 SMP Fri Aug 8 21:58:11 PDT 2014
> x86_64 x86_64 x86_64 GNU/Linux
> Btrfs v0.20-rc1

I've no answer for your question, but you know how old both your kernel
and btrfs-progs versions…
On Tue, Aug 26, 2014 at 07:39:08PM -0400, Nikolai Grigoriev wrote:

Hi,

This is not exactly a problem - I am trying to understand why BTRFS
demonstrates significantly higher throughput in my environment.

I am observing something that I cannot explain. I am trying to come up
with a good filesystem configuration using HP P420i controller and
SSDs (Intel S3500). Out…