+		err = dmu_object_set_blocksize(rwa->os, drro->drr_object,
+		    drro->drr_blksz, 0, tx);
	}
	if (err != 0) {
		dmu_tx_commit(tx);
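For reference, the DMU interface used in the hunk above has the following
prototype (as declared in sys/dmu.h); the summary comment is mine:

/*
 * Change the block size of an object in objset "os" inside transaction
 * "tx".  Here the receive code passes drro->drr_blksz as the new size
 * and 0 for "ibs", which leaves the indirect block shift unchanged.
 */
int dmu_object_set_blocksize(objset_t *os, uint64_t object, uint64_t size,
    int ibs, dmu_tx_t *tx);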
--
Alexander Motin
amotin commented on this pull request.
> @@ -1039,6 +1039,13 @@ metaslab_group_allocatable(metaslab_group_t *mg, metaslab_group_t *rotor,
> 	if (mg->mg_no_free_space)
> 		return (B_FALSE);
> +	/*
> +	 * Relax allocation throttling
The patch is committed to FreeBSD head.
--
Relax allocation throttling for ditto blocks. Due to random imbalances
in allocation it tends to push block copies to one vdev that looks
slightly better at the moment. A slightly less strict policy improves both
data security and, surprisingly, write performance, since we don't
need to
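As a toy illustration of that idea (not the committed hunk; the scaling
factor is an assumption), relaxing the throttle could mean allowing a
deeper per-vdev allocation queue for ditto copies, so the second and third
copies are not all pushed to whichever vdev looks marginally less busy:

#include <stdio.h>
#include <stdint.h>

/*
 * Toy model: "d" is the copy index of the DVA being allocated
 * (0 = first copy, 1 = second, 2 = third).  The higher the copy
 * index, the more slack we give the per-vdev queue-depth limit,
 * which is the "relaxed throttling" described above.
 */
static uint64_t
relaxed_queue_max(uint64_t qmax, int d)
{
	return (qmax * (4 + d) / 4);	/* assumed scaling */
}

int
main(void)
{
	uint64_t qmax = 1000;

	for (int d = 0; d < 3; d++)
		printf("copy %d: allowed queue depth %llu\n", d,
		    (unsigned long long)relaxed_queue_max(qmax, d));
	return (0);
}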
Use the METASLAB_WEIGHT_CLAIM weight to allocate tertiary blocks.
The previous use of METASLAB_WEIGHT_SECONDARY for that caused errors
later in the metaslab_activate_allocator() call, leading to massive
loading of unneeded metaslabs and write freezes.
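A minimal sketch of the weight choice described above (a toy model, not
the metaslab.c code; only the METASLAB_WEIGHT_* names come from the text):
the first DVA activates a metaslab with the primary weight, the second
with the secondary weight, and any further (tertiary) copy uses the claim
weight, which avoids activating yet another metaslab per allocator:

#include <stdio.h>

enum weight { WEIGHT_PRIMARY, WEIGHT_SECONDARY, WEIGHT_CLAIM };

/* Map the DVA copy index "d" to an activation weight. */
static enum weight
activation_weight(int d)
{
	if (d == 0)
		return (WEIGHT_PRIMARY);
	if (d == 1)
		return (WEIGHT_SECONDARY);
	return (WEIGHT_CLAIM);		/* tertiary and beyond */
}

int
main(void)
{
	const char *names[] = { "PRIMARY", "SECONDARY", "CLAIM" };

	for (int d = 0; d < 3; d++)
		printf("DVA %d -> METASLAB_WEIGHT_%s\n", d,
		    names[activation_weight(d)]);
	return (0);
}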
--
On 24.05.2018 17:25, Igor Kozhukhov wrote:
>> On May 25, 2018, at 12:22 AM, Alexander Motin <mav...@gmail.com> wrote:
>>
>> ZFS in FreeBSD HEAD now approximately matches illumos from about
>> mid-March. I haven't looked what came into illumos after that, but at
... review over there (over at https://reviews.freebsd.org/D15562).
>>
>> I don't have an Illumos system setup, so I don't feel comfortable
>> trying to make a pull request for code I can't even do a build-test
>> on. Alexander Motin suggested we ask about it. So here I am, asking.
Device removal code does not set spa_indirect_vdevs_loaded for pools
that never experienced device removal. At least one visible consequence
of this is a completely blocked speculative prefetcher. This patch sets
the variable in such situations.
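A minimal sketch of the fix, assuming it runs while the pool's removal
state is loaded at import; the condition variable is hypothetical, and only
spa_indirect_vdevs_loaded comes from the description above:

	/*
	 * If the imported pool carries no device-removal state, there are
	 * no indirect vdev mappings to load, so mark them loaded right
	 * away.  Otherwise code that waits on spa_indirect_vdevs_loaded,
	 * such as the speculative prefetcher mentioned above, stays
	 * blocked forever on pools that never had a device removed.
	 */
	if (!pool_has_removal_state)		/* hypothetical flag */
		spa->spa_indirect_vdevs_loaded = B_TRUE;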
--
https://github.com/zfsonlinux/zfs/pull/6989 needs to be merged from ZoL. I
originally disregarded it myself due to the very small size of the dbuf cache,
but since its size was increased recently, it should become more important for
proper buffer promotion from MRU to MFU. I've tried that patch on FreeBSD
OK. The fluctuations are indeed strange, but do not look dramatic. I hope the
test is representative of the issue.
One more unchecked thought: right now we have a customer with a pretty
fragmented SSD pool, which spends a significant amount of time opening new
metaslabs, which dramatically
Thanks for the reminder. The read numbers indeed look fine: slightly different
here and there, but they don't look biased on a quick look. But I would not
expect a dramatic change on reads, since multiple files read in parallel are
read non-sequentially anyway, requiring some head seeks. I was more curious about
@pcd1193182 I think a single-file sequential write may trigger it less
aggressively, according to the logic I saw, though I think metadata should go
to different metaslabs. Writing multiple small files may trigger it more strongly.
--
@ahrens If you mean that the result should be measured for regular HDDs, then
yes, it makes sense to me. I am not opposing this patch, just proposing to be
more careful. Right now, with our QA and marketing engineers, we are trying to
quantify the negative side effects we got from compressed ARC, ABD,
Safe defaults and tunables are fine as a last resort.
--
I don't believe in tunables for production systems. Users always tend to
misuse them or simply don't know about them.
--
It looks interesting. I just haven't decided yet whether distributing writes of
different objects between several allocators, and so metaslabs, is good or bad
for data locality. It should be mostly irrelevant for SSDs, but I worry about
HDDs, which may have to seek over all the media possibly many
I may be wrong, but "Cannot contact i-063ec021b6568c4f3:
java.lang.InterruptedException" looks more like a test suite failure.
--
I've added the comment.
--
Another case where this patch helps is short sequential sub-block reads, which
previously could be handled as misses for the same reason. Prefetch in that
case may not be so important for performance, but it filled the stream cache
with useless garbage, which could affect prefetch of other valid
It depends. We hit it in the case of Veeam backup over iSCSI, where we see
almost all reads as misaligned and performance is terrible (several times lower
than expected). Even if the OS/application aligns partitions, it may not help
if the file system or database block size is smaller than the ZFS block size.
In the case of misaligned I/O, sequential requests are not detected as such
due to overlaps in the logical block sequence:
dmu_zfetch(f80198dd0ae0, 27347, 9, 1)
dmu_zfetch(f80198dd0ae0, 27355, 9, 1)
dmu_zfetch(f80198dd0ae0, 27363, 9, 1)
dmu_zfetch(f80198dd0ae0, 27371, 9, 1)
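Each request above covers 9 blocks but starts only 8 blocks after the previous
one, so the last block of one request overlaps the first block of the next and
a strict "next expected block" comparison never matches. Below is a
self-contained sketch of the kind of adjustment that fixes this (a toy model
seeded at the first block id, not the dmu_zfetch() code itself): accept a
request that starts one block early and trim the already-read block before
comparing.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Toy prefetch stream: remembers the next block id it expects. */
typedef struct {
	uint64_t next_blkid;
} stream_t;

static bool
stream_hit(stream_t *zs, uint64_t blkid, uint64_t nblks)
{
	if (blkid + 1 == zs->next_blkid) {
		/* Overlapping request: skip the block we already read. */
		blkid++;
		nblks--;
	}
	if (blkid != zs->next_blkid)
		return (false);			/* not sequential */
	zs->next_blkid = blkid + nblks;
	return (true);
}

int
main(void)
{
	/* Block ids and lengths taken from the trace above. */
	uint64_t reqs[][2] = {
		{ 27347, 9 }, { 27355, 9 }, { 27363, 9 }, { 27371, 9 }
	};
	stream_t zs = { .next_blkid = 27347 };

	for (int i = 0; i < 4; i++)
		printf("dmu_zfetch(.., %llu, %llu, ..): %s\n",
		    (unsigned long long)reqs[i][0],
		    (unsigned long long)reqs[i][1],
		    stream_hit(&zs, reqs[i][0], reqs[i][1]) ?
		    "sequential" : "miss");
	return (0);
}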
As I said on the Linux pull request, I don't know the DMU layer well enough,
but it makes sense to me.
--
So, if there are no objections any more, can this finally be committed to close
this chapter? Life goes on, and I see another potentially conflicting ZIL
change being proposed for discussion.
--
properties are used on FreeBSD, so I am not sure
whether an empty value is the correct default there.
Also, is there a reason to declare properties applicable only to
ZFS_TYPE_VOLUME as PROP_INHERIT rather than PROP_DEFAULT? From where can
they inherit their values if volumes cannot be nested?
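For illustration, assuming the usual zprop_register_index() interface from
zfs_prop.c (the property name and its use of boolean_table below are
hypothetical), the distinction being asked about looks like this:

/*
 * PROP_DEFAULT: a dataset without an explicit value just gets "def";
 * nothing is looked up on the parent.
 */
zprop_register_index(ZFS_PROP_EXAMPLE, "example", 0, PROP_DEFAULT,
    ZFS_TYPE_VOLUME, "on | off", "EXAMPLE", boolean_table);

/*
 * PROP_INHERIT: a dataset without an explicit value takes the parent's.
 * For that to be useful for a volume, the property would also have to be
 * declared for ZFS_TYPE_FILESYSTEM, since a volume's parent is always a
 * filesystem and volumes cannot be nested.
 */
zprop_register_index(ZFS_PROP_EXAMPLE, "example", 0, PROP_INHERIT,
    ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME, "on | off", "EXAMPLE",
    boolean_table);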
--
Alexander Motin