Dump for the assert in sa.c
https://www.magentacloud.de/lnk/4xLBV469
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/489#issuecomment-376474714
@prakashsurya sorry to be a pest. When should I expect this to be RTI'd? The
README.md in the root of the repo doesn't describe the contribution process in
enough detail for me to know whether this is being held up for a reason or just
falling through the cracks.
--
I added 2 more commits, which address feedback from @sdimitro and @behlendorf.
--
https://github.com/openzfs/openzfs/pull/591#issuecomment-376707957
behlendorf approved this pull request.
I like it, thanks.
--
https://github.com/openzfs/openzfs/pull/591#pullrequestreview-107505918
--
I have been doing my own testing with parallel mounts and have observed that
enabling SMB shares is a significant bottleneck. From looking at the code it's
not clear to me that this will help.
--
I'll do that now; sorry for the delay, I meant to do that last week. Thanks for
the reminder.
--
https://github.com/openzfs/openzfs/pull/592#issuecomment-376586429
@GernotS Thanks. I'm also very interested in getting this merged :-)
kmem_flags is great, but in addition it would be good to try a debug build,
i.e. using the `nightly` rather than `nightly-nd` packages. That might hit an
assertion sooner, closer to where the error occurred.
@amotin let us know when you get a chance to review @pcd1193182's performance
results, and/or if we can count you as a reviewer.
--
Thanks for the reminder. The read numbers do indeed look fine: slightly better
here, slightly worse there, but not biased on a quick look. I wouldn't expect a
dramatic change on reads, though, since multiple files read in parallel are in
any case read non-sequentially, requiring some head seeking. I was more curious about
I have now moved around a few TB using raw send, compressed send, etc. with
L2ARC devices, but I cannot reproduce the issue anymore. Is it possible that
kmem_flags hides this?
Will try to get a debug build done.
--
behlendorf requested changes on this pull request.
> + * vdevs. In either case, we try every combination. This ensures that if
+ * a mirror has small silent errors on all of its children, we can still
+ * reconstruct the correct data, as long as those errors are at
+ * sufficiently-separated
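The quoted comment describes the reconstruction idea: when every child of a
mirror has small silent errors, the correct block can still be assembled by
trying every combination of per-segment copies from the children and checking
each candidate against the block checksum. A minimal illustrative sketch of
that idea, not the actual OpenZFS code; the function name, segment layout, and
use of SHA-256 as the stand-in checksum are all assumptions:

```python
import hashlib
from itertools import product

def reconstruct(children, expected_digest, nsegments):
    """Hypothetical sketch: try every combination of per-segment copies
    from mirror children until a candidate block matches the checksum.

    children: list of child copies, each a list of `nsegments` byte segments.
    Returns the reconstructed block, or None if no combination verifies.
    """
    # Each combo picks, for every segment index, which child to take it from.
    for combo in product(range(len(children)), repeat=nsegments):
        candidate = b"".join(children[c][i] for i, c in enumerate(combo))
        if hashlib.sha256(candidate).hexdigest() == expected_digest:
            return candidate
    return None
```

This is exponential in the number of segments, which is why the comment hedges
that it works as long as the errors on each child are sufficiently separated.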