On Mon, Jan 20, 2014 at 05:58:55AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 16, 2014 at 09:07:21AM +1100, Dave Chinner wrote:
> > Yes, I think it can be done relatively simply. We'd have to change
> > the code in xfs_file_aio_write_checks() to check whether EOF zeroing
> > was required rather
On Thu, Jan 16, 2014 at 09:07:21AM +1100, Dave Chinner wrote:
> Yes, I think it can be done relatively simply. We'd have to change
> the code in xfs_file_aio_write_checks() to check whether EOF zeroing
> was required rather than always taking an exclusive lock (for block
> aligned IO at EOF sub-blo
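As a rough sketch of what Dave is describing (simplified, not the actual XFS code; the enum and helper name are made up for illustration), the write path would pick the iolock mode from whether EOF zeroing can actually be needed:

enum dio_iolock { DIO_IOLOCK_SHARED, DIO_IOLOCK_EXCL };

/*
 * Simplified sketch only: decide the iolock mode for a direct write from
 * whether zeroing between the old EOF and the write range is required.
 */
static enum dio_iolock dio_write_iolock_mode(loff_t pos, size_t count,
                                             loff_t i_size, unsigned int bsize)
{
        /* Sub-block or misaligned IO can expose stale data: stay exclusive. */
        if ((pos | count) & (bsize - 1))
                return DIO_IOLOCK_EXCL;

        /* A write starting beyond the current EOF leaves a gap to zero. */
        if (pos > i_size)
                return DIO_IOLOCK_EXCL;

        /* Block-aligned IO at or below EOF needs no zeroing: shared is fine. */
        return DIO_IOLOCK_SHARED;
}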
On Tue, Jan 14, 2014 at 03:30:11PM +0200, Sergey Meirovich wrote:
> Hi Christoph,
>
> On 8 January 2014 16:03, Christoph Hellwig wrote:
> > On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
> >> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about
> >> ext4.
> >
Hi Christoph,
On 8 January 2014 16:03, Christoph Hellwig wrote:
> On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
>> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4.
>> However, I have tried XFS as well. It was a bit slower than ext4 on all
>> occasions.
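For reference, the ratio of those two figures implies roughly 4 KiB per request, assuming the reported bandwidth is in MiB/s:

    14.67 MiB/s ≈ 15,382,610 bytes/s
    15,382,610 bytes/s / 3755.41 requests/s ≈ 4096 bytes per request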
On 10 January 2014 16:32, Sergey Meirovich wrote:
> Hi Jan,
>
> On 10 January 2014 12:48, Jan Kara wrote:
>> On Fri 10-01-14 12:36:22, Sergey Meirovich wrote:
>>> Hi Jan,
>>>
>>> On 10 January 2014 11:36, Jan Kara wrote:
>>> > On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
>>> ...
>>> >> I've
Hi Jan,
On 10 January 2014 12:48, Jan Kara wrote:
> On Fri 10-01-14 12:36:22, Sergey Meirovich wrote:
>> Hi Jan,
>>
>> On 10 January 2014 11:36, Jan Kara wrote:
>> > On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
>> ...
>> >> I've done preallocation on fnic/XtremIO as Christoph suggested.
>>
On Fri 10-01-14 12:36:22, Sergey Meirovich wrote:
> Hi Jan,
>
> On 10 January 2014 11:36, Jan Kara wrote:
> > On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
> ...
> >> I've done preallocation on fnic/XtremIO as Christoph suggested.
> >>
> >> [root@dca-poc-gtsxdb3 mnt]# sysbench --max-requests=
Hi Jan,
On 10 January 2014 11:36, Jan Kara wrote:
> On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
...
>> I've done preallocation on fnic/XtremIO as Christoph suggested.
>>
>> [root@dca-poc-gtsxdb3 mnt]# sysbench --max-requests=0
>> --file-extra-flags=direct --test=fileio --num-threads=4
>> -
On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
> Hi Jan,
> On 8 January 2014 22:55, Jan Kara wrote:
> >
> >> So far I've seen such massive degradation only in a SAN environment. I
> >> started my investigation with the RHEL6.5 kernel, so the table below is
> >> from it, but the trend is the same as for mainl
Hi,
On 9 January 2014 23:26, Sergey Meirovich wrote:
> Hi Douglas,
>
> On 9 January 2014 21:54, Douglas Gilbert wrote:
>> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
> ...
>>>
>>> The strangest thing to me is that this is a problem with sequential
>>> write. For example, the fnic machine is z
Hi Douglas,
On 9 January 2014 21:54, Douglas Gilbert wrote:
> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
...
>>
>> The strangest thing to me is that this is a problem with sequential
>> write. For example, the fnic machine is zoned to EMC XtremIO and
>> had these results: 14.43Mb/sec 3693.65 Requests/sec
On 14-01-08 08:57 AM, Sergey Meirovich wrote:
> Hi James,
> On 7 January 2014 22:57, James Smart wrote:
>> Sergey,
>> The Thor chipset is a bit old - a 4Gig adapter. Most of our performance
>> improvements, including parallelization, have gone into the 8G and 16G
>> adapters. But you still should have seen
Hi Jan,
On 8 January 2014 22:55, Jan Kara wrote:
>
>> So far I've seen such massive degradation only in a SAN environment. I
>> started my investigation with the RHEL6.5 kernel, so the table below is
>> from it, but the trend seems to be the same as for mainline.
>>
>> Chunk size    Bandwidth (MiB/s)
>>
On Wed 08-01-14 19:30:38, Sergey Meirovich wrote:
> On 8 January 2014 17:26, Christoph Hellwig wrote:
> >
> > On my laptop SSD I get the following results (sometimes up to 200MB/s,
> > sometimes down to 100MB/s, always in the 40k to 50k IOps range):
> >
> > time elapsed (sec.):5
> > bandwidth
On 8 January 2014 17:26, Christoph Hellwig wrote:
>
> On my laptop SSD I get the following results (sometimes up to 200MB/s,
> sometimes down to 100MB/s, always in the 40k to 50k IOps range):
>
> time elapsed (sec.):5
> bandwidth (MiB/s): 160.00
> IOps: 40960.00
Any dir
On Wed, Jan 08, 2014 at 04:43:07PM +0200, Sergey Meirovich wrote:
> Results are almost the same:
> 14.68Mb/sec 3758.02 Requests/sec
>
On my laptop SSD I get the following results (sometimes up to 200MB/s,
sometimes down to 100MB/s, always in the 40k to 50k IOps range):
time elapsed (sec.):
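At that same ~4 KiB request size (see the earlier bandwidth/requests ratio), the laptop SSD figures quoted above are self-consistent:

    160 MiB/s = 167,772,160 bytes/s
    167,772,160 bytes/s / 4096 bytes per request = 40960 IOps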
Hi Christoph,
On 8 January 2014 16:03, Christoph Hellwig wrote:
> On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
>> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4.
>> However, I have tried XFS as well. It was a bit slower than ext4 on all
>> occasions.
On Wed, Jan 08, 2014 at 02:17:13AM +0100, Jan Kara wrote:
> Well, I was specifically worried about i_mutex locking. In particular:
> Before we report appending IO completion we need to update i_size.
> To update i_size we need to grab i_mutex.
>
> Now this is unpleasant because inode_dio_wait()
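A rough way to picture the problem Jan seems to be pointing at (a conceptual outline only, assuming the usual rule at the time that inode_dio_wait() is called with i_mutex held; this is not actual kernel code):

    task holding i_mutex (e.g. truncate)     async completion of an
                                             i_size-extending direct IO
    ------------------------------------     ------------------------------
    mutex_lock(&inode->i_mutex);
    inode_dio_wait(inode);   <- waits for    mutex_lock(&inode->i_mutex);
                                the DIO         <- blocked behind truncate
                                to drain     i_size_write(inode, new_size);
                                             mutex_unlock(&inode->i_mutex);

Each side would end up waiting on the other, which is presumably why the size-extending case is completed synchronously in the submitter's context instead.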
On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4.
> However, I have tried XFS as well. It was a bit slower than ext4 on all
> occasions.
I wasn't trying to say XFS fixes your problem, but that we could
i
Hi James,
On 7 January 2014 22:57, James Smart wrote:
> Sergey,
>
> The Thor chipset is a bit old - a 4Gig adapter. Most of our performance
> improvements, including parallelization, have gone into the 8G and 16G
> adapters. But you still should have seen significantly beyond what you
> reported
On Tue 07-01-14 07:58:30, Christoph Hellwig wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
> > This is likely a problem of Linux direct IO implementation. The thing is
> > that in Linux when you are doing appending direct IO (i.e., direct IO which
> > changes file size), the I
Hi Christoph,
On 7 January 2014 17:58, Christoph Hellwig wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
>> This is likely a problem of Linux direct IO implementation. The thing is
>> that in Linux when you are doing appending direct IO (i.e., direct IO which
>> changes file
On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
> This is likely a problem of Linux direct IO implementation. The thing is
> that in Linux when you are doing appending direct IO (i.e., direct IO which
> changes file size), the IO is performed synchronously so that we have our
> life sim
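Concretely, "direct IO which changes file size" is exactly the pattern the benchmark generates, and writing into preallocated space sidesteps it. A hedged illustration (fd is an O_DIRECT descriptor, buf a 4096-byte aligned buffer, and the offsets and sizes are hypothetical):

/* Appending / size-extending direct write: the offset equals the current
 * EOF, so every request moves i_size and takes the synchronous path. */
pwrite(fd, buf, 4096, current_file_size);

/* Same data written into space reserved up front: i_size is set once by
 * fallocate(), so the later writes are no longer appending. */
fallocate(fd, 0, 0, final_size);        /* done once, before the run */
pwrite(fd, buf, 4096, offset_in_file);  /* any offset < final_size */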