[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-04-04 Thread Prakash Surya
Closed #548 via 3f3cc3c3b4711584dc83f4566add510782e30e51.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-28 Thread Alexander Motin
OK.  The fluctuations are indeed strange, but they do not look dramatic.  I hope 
the test is representative of the issue.

One more unverified thought: we currently have a customer with a pretty 
fragmented SSD pool that spends a significant amount of time opening new 
metaslabs, which dramatically affects performance.  We are still trying to 
analyze it further and address it with tunables, but it makes me think that 
having several metaslabs open at the same time may require proportionally more 
time to open/close them.
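
For what it's worth, a minimal DTrace sketch of how that metaslab-open time 
could be measured on illumos, assuming the cost shows up in the kernel function 
`metaslab_load()` (the probe target is an assumption, not something established 
in this thread):

```sh
# Histogram of metaslab_load() latency in nanoseconds; run as root.
# fbt probes fire on kernel function entry/return.
dtrace -n '
fbt::metaslab_load:entry  { self->ts = timestamp; }
fbt::metaslab_load:return /self->ts/
{
        @["metaslab_load (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}'
```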



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-28 Thread Paul Dagnelie
Write performance, in the same table layout as the read results below (columns 
are file sizes, rows are file counts, times in seconds):

For `recordsize=512`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.03|0.07|0.45|3.27|**10**|0.03|0.07|0.43|3.35|
|**40**|0.11|0.29|1.68|34.28|**40**|0.11|0.29|1.66|27.49|
|**160**|0.46|1.17|8.18|478.53|**160**|0.45|1.18|8.23|357.20|

For `recordsize=8k`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.00|0.00|0.16|1.09|**10**|0.00|0.00|0.16|1.08|
|**40**|0.11|0.17|0.64|4.67|**40**|0.11|0.00|0.64|4.49|
|**160**|0.46|0.70|2.72|19.08|**160**|0.46|0.70|2.69|22.66|

For `recordsize=128k`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.00|0.04|0.00|1.02|**10**|0.00|0.04|0.15|1.02|
|**40**|0.12|0.17|0.61|4.18|**40**|0.11|0.17|0.61|4.22|
|**160**|0.46|0.67|2.48|16.78|**160**|0.46|0.68|2.43|16.81|

For `recordsize=1m`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.00|0.04|0.15|1.05|**10**|0.03|0.00|0.00|1.04|
|**40**|0.11|0.18|0.62|4.27|**40**|0.11|0.17|0.62|4.24|
|**160**|0.46|0.71|2.53|17.14|**160**|0.44|0.68|2.51|17.10|

Similar to the read performance, or slightly better: almost everything is the 
same or better, and only a couple of results are worse. Oddly, the 160-file 64m 
runs are worse for recordsize=8k and 128k, but not for 512 and 1m.
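
For context, a rough sketch of how the write phase behind these numbers could 
have been timed, based on the creation step described later in the thread 
(files written with dd, 10 at a time in parallel); the dataset path and dd 
sizing are hypothetical:

```sh
# Time the creation of N files of ~64m each, keeping 10 dd's in flight.
# /testpool/fs is a made-up path; vary N and the file size per run.
N=160
time ( for i in $(seq 1 "$N"); do
           dd if=/dev/urandom of=/testpool/fs/f"$i" bs=1024k count=64 \
              2>/dev/null &
           [ $((i % 10)) -eq 0 ] && wait     # finish each batch of 10
       done; wait )
```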



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-27 Thread Alexander Motin
Thanks for the reminder.  The read numbers indeed look fine: slightly better 
here, slightly worse there, but nothing that looks biased at a quick glance.  I 
would not expect a dramatic change on reads anyway, since multiple files read in 
parallel are read non-sequentially in any case, requiring some head seeks.  I 
was more curious about the write time: when four different metaslabs have to be 
written during each transaction group, that may turn a more or less sequential 
process into a more random one, I guess.  Have you measured the write time 
during your tests?



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-27 Thread Prakash Surya
@amotin let us know when you get a chance to review @pcd1193182's performance 
results, and/or if we can count you as a reviewer.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Igor K
It should be rebased.
I have been testing it for two weeks without problems.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Igor K
ikozhukhov approved this pull request.


Re: [developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Paul Dagnelie
I've updated the comment with a 1m table.



Re: [developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Paul Dagnelie
Hey Igor, I'm running it now. Should have results for you in a few hours.



Re: [developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Igor Kozhukhov
Paul,

Could you please try your tests with recordsize=1m?

Best regards,
-Igor




[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-03-22 Thread Paul Dagnelie
Sorry for the delay in getting back to this. I created a pool on 4 1TB HDDs, 
and then created a filesystem with compression off (since I'm using random 
data, compression wouldn't help) and edon-r as the checksum.  I then create a 
number of files of a given size using dd, creating 10 at a time in parallel. 
Once the files are created, I export and import the pool to clear the cache, 
and read in all the files, writing them to /dev/null. I ran that part 15 times, 
timing each run to get a decent performance measurement.  Then I destroy the 
filesystem and go back to the filesystem-creation step with new parameters.

I ran a decent spectrum of tests, so hopefully some of this data will help 
reassure you.  The numbers across the top row are the sizes of the files 
created, and the numbers down the left are the numbers of files being read. 
The times are averages of the 15 runs, in seconds.

For `recordsize=512`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.09|0.36|2.88|21.70|**10**|0.07|0.33|2.89|20.21|
|**40**|0.29|1.55|11.93|86.50|**40**|0.24|1.46|10.94|83.53|
|**160**|1.14|6.08|50.10|361.76|**160**|0.93|6.23|44.20|342.40|

For `recordsize=8k`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.05|0.16|0.64|3.89|**10**|0.07|0.16|0.72|3.61|
|**40**|0.25|0.59|3.02|17.72|**40**|0.20|0.68|3.46|15.96|
|**160**|0.69|2.79|12.43|68.86|**160**|0.76|2.59|14.47|59.43|

For `recordsize=128k`:

| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:--:|:--:|:--:|:--:|---|:--:|:--:|:--:|:--:|
|**10**|0.04|0.10|0.53|3.58|**10**|0.05|0.11|0.64|4.32|
|**40**|0.14|0.37|2.31|17.94|**40**|0.14|0.45|2.62|16.95|
|**160**|0.59|1.67|9.81|60.39|**160**|0.56|1.77|10.42|61.90|

It looks like performance is usually similar with the new bits, some slightly 
better and some slightly worse. Those results may be within the margin of error 
of each other, or there may be some pattern that explains why some runs are 
slightly faster and some are slightly slower; I'm not sure.
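
For reference, a minimal sketch of the test loop described above. The device 
names and dataset path are hypothetical, and clearing the cache before every 
run is my reading of the procedure:

```sh
# Hypothetical reconstruction: 4-HDD pool, no compression (the data is
# random), edon-r checksums, recordsize varied per pass.
zpool create testpool c1t0d0 c1t1d0 c1t2d0 c1t3d0
zfs create -o compression=off -o checksum=edonr -o recordsize=512 testpool/fs
# ... create the files with dd, 10 at a time in parallel ...
for run in $(seq 1 15); do
    zpool export testpool && zpool import testpool   # drop cached data
    time ( for f in /testpool/fs/f*; do
               cat "$f" > /dev/null &                # read all files in parallel
           done; wait )
done
```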



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-21 Thread Alexander Motin
@pcd1193182 I think a single-file sequential write may trigger it less 
aggressively, according to the logic I saw, though metadata should, I think, go 
to different metaslabs.  Writing multiple small files may trigger it more 
strongly.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-21 Thread Paul Dagnelie
@amotin I'm setting up a system with regular HDDs, and I'll do some sequential 
write and read tests and report results back.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-21 Thread Alexander Motin
@ahrens If you mean that the result should be measured on regular HDDs, then 
yes, that makes sense to me.  I am not opposing this patch, just proposing to 
be more careful.  Right now, with our QA and marketing engineers, we are trying 
to quantify the negative side effects we got from compressed ARC, ABD, bigger 
indirect blocks, and other changes on different benchmarks, and sometimes I 
have had to advocate for the new features more than I'd like to.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-21 Thread Matthew Ahrens
@amotin Did my response make sense?  Also, have you reviewed the code and can 
we count you as a code reviewer?



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-21 Thread Matthew Ahrens
@andy-js No, these changes are in addition to the allocation improvements that 
@grwilson presented at OpenZFS Europe a few years back:

https://www.youtube.com/watch?v=KBI6rRGUv4E&list=PLaUVvul17xScvtic0SPoks2MlQleyejks&index=4



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-15 Thread Matthew Ahrens
I think the change as proposed does have "safe defaults and tunables as a last 
resort".  We're speculating here about the impact on locality.  I think we 
should measure that before going with a lower default value (which would 
decrease the benefit to high-performance systems and require tuning on them to 
get good results), or doing something self-tuning.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-15 Thread Alexander Motin
Safe defaults and tunables are fine as a last resort.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-15 Thread Andrew Stormont
Am I right in thinking that these are the changes that were presented at 
OpenZFS Europe a couple of years back?



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-15 Thread Paul Dagnelie
I would be open to suggestions on how to implement self-tuning behavior for 
this; we spent some time thinking about it, but didn't come up with any great 
solutions. In the absence of a good self-tuning mechanism, picking a safe 
default and giving users the option to tweak it for better performance is 
probably the way to go, no?



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-15 Thread Alexander Motin
I don't believe in tunables for production systems.  Users always tend to 
misuse them or simply don't know about them.



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-14 Thread Paul Dagnelie
It's probably mixed. For low-throughput workloads, it probably reduces 
locality. For high-throughput systems, my understanding is that in the steady 
state you're switching between metaslabs during a txg anyway, so the effect is 
reduced significantly.  One thing we could do is lower the default of 
spa_allocators to 1 or 2, to reduce the odds of harm; people could then tune it 
up if they think it would give them more performance.
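
For illustration, lowering the allocator count on illumos might look like the 
following; the `/etc/system` binding and the live-tuning effect are 
assumptions, and only the variable name `spa_allocators` comes from this 
thread:

```sh
# Persistent, in /etc/system (takes effect at boot; binding assumed):
#   set zfs:spa_allocators = 2
# Or live via mdb; presumably only affects pools imported afterwards:
echo 'spa_allocators/W 2' | mdb -kw
```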



[developer] Re: [openzfs/openzfs] 9112 Improve allocation performance on high-end systems (#548)

2018-02-14 Thread Alexander Motin
It looks interesting.  I just haven't decided yet whether distributing the 
writes of different objects between several allocators, and thus metaslabs, is 
good or bad for data locality.  It should be mostly irrelevant for SSDs, but I 
worry about HDDs, which may have to seek across all the media, possibly many 
times per TXG, due to allocation queue limitations.
