Got a dump with kmem_flags set, but I am not sure if this is the same issue, as
I applied the patch to a more recent build.
panic[cpu0]/thread=ff000f5e7c40:
BAD TRAP: type=e (#pf Page fault) rp=ff000f5e7790 addr=0 occurred in module
"zfs" due to a NULL pointer dereference
zpool-zones:
@sdimitro pushed 1 commit.
667119b final feedback
--
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/544/files/44ba0bf8d06902b16951721e34cb1e98f8ff2ac6..667119b38ecaf1f1a540975a8b63a7189ee9eaeb
--
rmustacc commented on this pull request.
> @@ -198,6 +204,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr,
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
}
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a number
sdimitro commented on this pull request.
> @@ -198,6 +204,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr,
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
}
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a number
@sdimitro pushed 1 commit.
a23de99 get rid of redundant check
--
https://github.com/openzfs/openzfs/pull/544/files/667119b38ecaf1f1a540975a8b63a7189ee9eaeb..a23de999d1a50e209c43bcdaadb8a29029c50e53
--
Reviewed by: Matthew Ahrens mahr...@delphix.com
Reviewed by: George Wilson george.wil...@delphix.com
Reviewed by: Serapheim Dimitropoulos serapheim.dimi...@delphix.com
## Overview
We parallelize the allocation process by creating the concept of "allocators".
There are a certain number of allocators
It looks interesting. I just haven't decided yet whether distributing writes of different objects between several allocators, and therefore metaslabs, is good or bad for data locality. It should be mostly irrelevant for SSDs, but I worry about HDDs, which may have to seek over all the media possibly many times.
It's probably mixed. For low-throughput workloads, it probably reduces locality. For high-throughput systems, my understanding is that in the steady state you're switching between metaslabs during a txg anyway, so the effect is reduced significantly. One thing we could do is change the default