On Mon, Jan 20, 2014 at 05:58:55AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 16, 2014 at 09:07:21AM +1100, Dave Chinner wrote:
> > Yes, I think it can be done relatively simply. We'd have to change
> > the code in xfs_file_aio_write_checks() to check whether EOF zeroing
> > was required rather than always taking an exclusive lock (for block
> > aligned IO at EOF sub-block
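An illustrative sketch (not part of the original thread, and not the XFS implementation) of the kind of check Dave describes: only fall back to the exclusive lock when EOF zeroing is actually needed, instead of for every extending write. The function name and parameters below are invented for illustration.

/*
 * Editorial sketch, not XFS code: decide whether an extending direct
 * write actually needs the EOF-zeroing path (and with it the exclusive
 * iolock).  Per Dave's remark, a block-aligned write that starts
 * exactly at EOF has nothing to zero and could stay on the shared lock.
 */
#include <stdbool.h>
#include <stdint.h>

static bool append_write_needs_eof_zeroing(uint64_t pos, uint64_t isize,
                                           uint64_t blocksize)
{
        if (pos <= isize)
                return false;           /* write does not start beyond EOF */
        /*
         * Writing beyond EOF: the bytes between the old EOF and 'pos'
         * must read back as zeroes.  If the old EOF is block aligned
         * there is no partially written block whose tail needs zeroing;
         * otherwise the sub-block tail at the old EOF has to be zeroed
         * first, which is the case that wants the exclusive lock.
         */
        return (isize % blocksize) != 0;
}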
On Tue, Jan 14, 2014 at 03:30:11PM +0200, Sergey Meirovich wrote:
> Hi Christoph,
>
> On 8 January 2014 16:03, Christoph Hellwig h...@infradead.org wrote:
> > On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
> >> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4.
> >> However I have tried XFS as well. It was a bit slower than ext4 on all
> >> occasions.
On 10 January 2014 16:32, Sergey Meirovich rathamah...@gmail.com wrote:
> Hi Jan,
>
> On 10 January 2014 12:48, Jan Kara j...@suse.cz wrote:
>> On Fri 10-01-14 12:36:22, Sergey Meirovich wrote:
>>> Hi Jan,
>>>
>>> On 10 January 2014 11:36, Jan Kara j...@suse.cz wrote:
>>> > On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
>>> ...
>>> >> I've done preallocation on fnic/XtremIO as Christoph suggested.
On Fri 10-01-14 12:36:22, Sergey Meirovich wrote:
> Hi Jan,
>
> On 10 January 2014 11:36, Jan Kara wrote:
> > On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
> ...
> >> I've done preallocation on fnic/XtremIO as Christoph suggested.
> >>
> >> [root@dca-poc-gtsxdb3 mnt]# sysbench
Hi Jan,
On 10 January 2014 11:36, Jan Kara wrote:
> On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
...
>> I've done preallocation on fnic/XtremIO as Christoph suggested.
>>
>> [root@dca-poc-gtsxdb3 mnt]# sysbench --max-requests=0
>> --file-extra-flags=direct --test=fileio --num-threads=4
>>
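For readers unfamiliar with what is being tested here: "preallocation" means reserving the file's blocks and setting its size up front, so the subsequent 4k O_DIRECT writes no longer extend i_size. Below is a minimal, self-contained illustration (not the actual benchmark setup from the thread); the path and sizes are invented.

#define _GNU_SOURCE             /* O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const size_t bs = 4096;                  /* 4k chunks, as in the tests */
        const off_t fsize = 1024L * 1024 * 1024; /* preallocate 1 GiB (made up) */
        void *buf;
        int fd;

        fd = open("/mnt/testfile", O_CREAT | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0)
                return 1;
        /* reserve the blocks and extend i_size to fsize in one go */
        if (posix_fallocate(fd, 0, fsize))
                return 1;

        if (posix_memalign(&buf, bs, bs))        /* O_DIRECT needs aligned buffers */
                return 1;
        memset(buf, 0xab, bs);

        /* sequential 4k direct writes; none of them changes the file size */
        for (off_t off = 0; off < fsize; off += bs)
                if (pwrite(fd, buf, bs, off) != (ssize_t)bs)
                        return 1;

        free(buf);
        return close(fd);
}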
On Thu 09-01-14 12:11:16, Sergey Meirovich wrote:
> Hi Jan,
> On 8 January 2014 22:55, Jan Kara j...@suse.cz wrote:
> >
> >> So far I've seen such massive degradation only in a SAN environment. I
> >> started my investigation with the RHEL 6.5 kernel, so the table below is
> >> from it, but the trend is the same as for mainline, it seems.
> >>
> >> Chunk size   Bandwidth MiB/s
Hi,
On 9 January 2014 23:26, Sergey Meirovich wrote:
> Hi Douglas,
>
> On 9 January 2014 21:54, Douglas Gilbert wrote:
>> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
> ...
>>>
>>> The strangest thing to me is that this is a problem with sequential
>>> writes. For example the fnic machine is
Hi Douglas,
On 9 January 2014 21:54, Douglas Gilbert wrote:
> On 14-01-08 08:57 AM, Sergey Meirovich wrote:
...
>>
>> The strangest thing to me is that this is a problem with sequential
>> writes. For example the fnic machine is zoned to EMC XtremIO and
>> had results: 14.43Mb/sec 3693.65
On 14-01-08 08:57 AM, Sergey Meirovich wrote:
Hi James,
On 7 January 2014 22:57, James Smart wrote:
Sergey,
The Thor chipset is a bit old - a 4Gig adapter. Most of our performance
improvements, including parallelization, have gone into the 8G and 16G
adapters. But you still should have seen
On Wed 08-01-14 19:30:38, Sergey Meirovich wrote:
> On 8 January 2014 17:26, Christoph Hellwig wrote:
> >
> > On my laptop SSD I get the following results (sometimes up to 200MB/s,
> > sometimes down to 100MB/s, always in the 40k to 50k IOps range):
> >
> > time elapsed (sec.):5
> > bandwidth
On 8 January 2014 17:26, Christoph Hellwig wrote:
>
> On my laptop SSD I get the following results (sometimes up to 200MB/s,
> sometimes down to 100MB/s, always in the 40k to 50k IOps range):
>
> time elapsed (sec.):5
> bandwidth (MiB/s): 160.00
> IOps: 40960.00
Any
On Wed, Jan 08, 2014 at 04:43:07PM +0200, Sergey Meirovich wrote:
> Results are almost the same:
> 14.68Mb/sec 3758.02 Requests/sec
>
On my laptop SSD I get the following results (sometimes up to 200MB/s,
sometimes down to 100MB/s, always in the 40k to 50k IOps range):
time elapsed (sec.):
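As an editorial check (not part of the thread), the request rates and bandwidths quoted throughout are consistent with the 4 KiB request size under test:

    40960.00 IOps  x 4 KiB = 40960 x 4096 bytes/s ~= 160.00 MiB/s   (laptop SSD above)
     3758.02 req/s x 4 KiB ~=  14.68 MiB/s                          (fnic/XtremIO above)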
Hi Christoph,
On 8 January 2014 16:03, Christoph Hellwig wrote:
> On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
>> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4.
>> However I have tried XFS as well. It was a bit slower than ext4 on all
>> occasions.
On Wed, Jan 08, 2014 at 02:17:13AM +0100, Jan Kara wrote:
> Well, I was specifically worried about i_mutex locking. In particular:
> Before we report appending IO completion we need to update i_size.
> To update i_size we need to grab i_mutex.
>
> Now this is unpleasant because inode_dio_wait()
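A self-contained sketch (not kernel code, and not from the thread) of the ordering constraint Jan spells out above: an appending direct IO cannot be reported complete until the new i_size has been published, and publishing it requires the inode lock. A plain pthread mutex stands in for i_mutex; all names are invented.

#include <pthread.h>
#include <stdint.h>

struct fake_inode {
        pthread_mutex_t i_mutex;        /* stand-in for the real i_mutex */
        uint64_t        i_size;
};

/* Called once the device has finished transferring the data of an
 * appending direct IO covering [pos, pos + len). */
void appending_dio_complete(struct fake_inode *inode,
                            uint64_t pos, uint64_t len)
{
        pthread_mutex_lock(&inode->i_mutex);    /* i_size update needs i_mutex */
        if (pos + len > inode->i_size)
                inode->i_size = pos + len;      /* publish the new end of file */
        pthread_mutex_unlock(&inode->i_mutex);

        /* Only after the size is visible may the IO be reported complete
         * to the submitter. */
}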
On Tue, Jan 07, 2014 at 08:37:23PM +0200, Sergey Meirovich wrote:
> Actually my initial report (14.67Mb/sec 3755.41 Requests/sec) was about ext4
> However I have tried XFS as well. It was a bit slower than ext4 on all
> occasions.
I wasn't trying to say XFS fixes your problem, but that we could
Hi James,
On 7 January 2014 22:57, James Smart wrote:
> Sergey,
>
> The Thor chipset is a bit old - a 4Gig adapter. Most of our performance
> improvements, including parallelization, have gone into the 8G and 16G
> adapters. But you still should have seen significantly beyond what you
>
On Tue 07-01-14 07:58:30, Christoph Hellwig wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
> > This is likely a problem of Linux direct IO implementation. The thing is
> > that in Linux when you are doing appending direct IO (i.e., direct IO which
> > changes file size), the
Hi Christoph,
On 7 January 2014 17:58, Christoph Hellwig wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
>> This is likely a problem of Linux direct IO implementation. The thing is
>> that in Linux when you are doing appending direct IO (i.e., direct IO which
>> changes file
On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
This is likely a problem of the Linux direct IO implementation. The thing is
that in Linux, when you are doing appending direct IO (i.e., direct IO which
changes the file size), the IO is performed synchronously so that we have our
life
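To make "appending direct IO" concrete, and in contrast to the preallocated case sketched earlier: every write starts exactly at the current end of file, so each completion has to move i_size. A minimal, self-contained illustration (not from the thread); the path and sizes are invented and error handling is kept minimal.

#define _GNU_SOURCE             /* O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        const size_t bs = 4096;         /* 4k chunks, as in the benchmarks */
        void *buf;
        int fd;

        fd = open("/mnt/appendfile", O_CREAT | O_TRUNC | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0 || posix_memalign(&buf, bs, bs))
                return 1;
        memset(buf, 0, bs);

        for (int i = 0; i < 1024; i++) {
                struct stat st;

                if (fstat(fd, &st))
                        return 1;
                /* the write begins at the current EOF, so it extends the
                 * file: this is the size-changing case the thread is about */
                if (pwrite(fd, buf, bs, st.st_size) != (ssize_t)bs)
                        return 1;
        }
        free(buf);
        return close(fd);
}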