Closed #548 via 3f3cc3c3b4711584dc83f4566add510782e30e51.
--
openzfs:
OK. The fluctuations are indeed strange, but do not look dramatic. I hope the
test is representative of the issue.
One more unverified thought: right now we have a customer with a pretty
fragmented SSD pool, which spends a significant amount of time opening new
metaslabs, which dramatically
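(Aside, not from the original thread: one way to quantify how much time a pool
spends loading metaslabs is an fbt-based DTrace probe on `metaslab_load()`; a
rough sketch, assuming an illumos system with DTrace available.)

```sh
# Rough sketch: distribution of wall-clock time spent in metaslab_load(),
# to see whether metaslab loading dominates on a fragmented pool.
dtrace -n '
  fbt::metaslab_load:entry  { self->ts = timestamp; }
  fbt::metaslab_load:return /self->ts/ {
      @["metaslab_load (ns)"] = quantize(timestamp - self->ts);
      self->ts = 0;
  }'
```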
Write performance:
For `recordsize=512`:
| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:-:|:-:|:-:|:-:|---|:-:|:-:|:-:|:-:|
|**10**|0.03|0.07|0.45|3.27|**10**|0.03|0.07|0.43|3.35|
|**40**|0.11|0.29|1.68|34.28|**40**|0.11|0.29|1.66|27.49|
Thanks for the reminder. The read numbers indeed look fine: slightly different
here and there, but they don't look biased on a quick look. But I would not expect
a dramatic change on reads, since multiple files read in parallel are read
non-sequentially anyway, requiring some head seeking. I was more curious about
@amotin let us know when you get a chance to review @pcd1193182's performance
results, and/or if we can count you as a reviewer.
--
It should be rebased.
I have been testing it for 2 weeks without problems.
--
ikozhukhov approved this pull request.
--
openzfs:
I've updated the comment with a 1m table.
On Thu, Mar 22, 2018 at 10:22 AM, Paul Dagnelie wrote:
> Hey Igor, I'm running it now. Should have results for you in a few hours.
>
> On Thu, Mar 22, 2018 at 9:46 AM, Igor Kozhukhov wrote:
>
>> Paul,
>>
>> could you
Hey Igor, I'm running it now. Should have results for you in a few hours.
On Thu, Mar 22, 2018 at 9:46 AM, Igor Kozhukhov wrote:
> Paul,
>
> could you please try to do your tests with recordsize=1m ?
>
> Best regards,
> -Igor
>
>
> On Mar 22, 2018, at 7:43 PM, Paul Dagnelie
Paul,
could you please try to do your tests with recordsize=1m ?
Best regards,
-Igor
> On Mar 22, 2018, at 7:43 PM, Paul Dagnelie wrote:
>
> Sorry for the delay in getting back to this. I created a pool on 4 1TB HDDs,
> and then created a filesystem with
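(Illustrative only, not part of the original thread: applying the requested
setting would look roughly like the following; `tank/fs` is a hypothetical
dataset name, and `recordsize=1m` requires the `large_blocks` pool feature.)

```sh
# Hypothetical dataset name; recordsize=1m needs the large_blocks pool feature.
zfs set recordsize=1m tank/fs
zfs get recordsize tank/fs   # verify the new value
```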
Sorry for the delay in getting back to this. I created a pool on 4 1TB HDDs,
and then created a filesystem with compression off (since I'm using random
data, it wouldn't help) and edon-r for the checksum. I then created a number
of files of a given size using dd, creating 10 at a time in
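(A sketch of the setup described above, not taken verbatim from the thread;
pool name, disk names, and file size are placeholders.)

```sh
# Placeholder pool/disk names. Mirrors the description above: a stripe of
# 4 HDDs, compression off (random data), edon-r checksum, files written
# with dd, 10 at a time.
zpool create testpool c1t0d0 c1t1d0 c1t2d0 c1t3d0
zfs create -o compression=off -o checksum=edonr testpool/fs

size_mb=1024   # hypothetical per-file size
for i in 1 2 3 4 5 6 7 8 9 10; do
    dd if=/dev/urandom of=/testpool/fs/file$i bs=1024k count=$size_mb &
done
wait   # let all 10 writers finish before starting the next batch
```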
@pcd1193182 I think a single-file sequential write may trigger it less
aggressively, according to the logic I saw, though I think metadata should still
go to different metaslabs. Writing multiple small files may trigger it more strongly.
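(Illustrative contrast of the two workload shapes mentioned above; paths and
sizes are placeholders.)

```sh
# One large sequential write: a single allocation stream, so it is less likely
# to be spread across allocators and metaslabs.
dd if=/dev/urandom of=/testpool/fs/big bs=1024k count=65536

# Many small files written in parallel: separate objects whose writes may be
# distributed across allocators, and therefore across metaslabs.
for i in 1 2 3 4 5 6 7 8; do
    dd if=/dev/urandom of=/testpool/fs/small$i bs=1024k count=128 &
done
wait
```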
--
@amotin I'm setting up a system with regular HDDs, and I'll do some sequential
write and read tests and report results back.
--
@ahrens If you mean that the result should be measured on regular HDDs, then yes,
that makes sense to me. I am not opposing this patch, just proposing to be more
careful. Right now, with our QA and marketing engineers, we are trying to
quantify the negative side effects we got from compressed ARC, ABD,
@amotin Did my response make sense? Also, have you reviewed the code and can
we count you as a code reviewer?
--
@andy-js No, these changes are in addition to the allocation improvements that
@grwilson presented at OpenZFS Europe a few years back:
https://www.youtube.com/watch?v=KBI6rRGUv4E&list=PLaUVvul17xScvtic0SPoks2MlQleyejks&index=4
--
I think the change as proposed does have "safe defaults and tunable as a last
resort". We're speculating here about the impact on locality. I think we
should measure that before going with a lower default value (which would
decrease the benefit to high-performance systems and require tuning
Safe defaults and tunables are fine as last resort.
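(Not from the thread: if memory serves, the knob added by this change is the
`spa_allocators` variable; treat the name as unverified against the actual patch.
On illumos a last-resort tuning could look like this.)

```sh
# Variable name recalled from the patch, unverified. Note the allocator count is
# consulted when a pool is loaded, so a live change likely only affects pools
# imported afterwards.
#
# Persistent, applied at boot (/etc/system):
#   set zfs:spa_allocators = 2
#
# Live change on a running system (decimal write via mdb):
echo 'spa_allocators/W 0t2' | mdb -kw
```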
--
Am I right in thinking these are the changes that were presented at OpenZFS
Europe a couple of years back?
--
I would be open to suggestions on how to implement self-tuning behavior for
this; we spent some time thinking about it, but didn't have any great
solutions. In the absence of a good self-tuning mechanism, picking a safe default
and giving users the option to tweak for better performance is probably
I don't believe in tunables for production systems. Users always tend to
misuse them or simply don't know about them.
--
It's probably mixed. For low-throughput workloads, it probably reduces
locality. For high-throughput systems, my understanding is that in the steady
state, you're switching between metaslabs during a txg anyway, so the effect is
reduced significantly. One thing we could do is change the
It looks interesting. I just haven't decided yet whether distributing writes of
different objects across several allocators, and thus metaslabs, is good or bad
for data locality. It should be mostly irrelevant for SSDs, but I worry about
HDDs, which may have to seek over all the media possibly many