@ahrens @behlendorf On illumos, cv_destroy() simply asserts that no one is
still waiting. As a result, you cannot safely do `cv_broadcast(); cv_destroy();
free(cv);`, because the waiters may not have actually started executing and
exited the cv code yet.
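For illustration, a minimal sketch of one common drain pattern that avoids this
race, written with illumos-style primitives; the `obj_t` structure, its field
names, and the waiter count are hypothetical, not code from this PR:

```c
typedef struct obj {
	kmutex_t	o_lock;
	kcondvar_t	o_cv;
	boolean_t	o_done;
	uint_t		o_waiters;	/* hypothetical count of threads in obj_wait() */
} obj_t;

/* Waiter: count ourselves in and out so teardown can wait for us. */
void
obj_wait(obj_t *o)
{
	mutex_enter(&o->o_lock);
	o->o_waiters++;
	while (!o->o_done)
		cv_wait(&o->o_cv, &o->o_lock);
	if (--o->o_waiters == 0)
		cv_broadcast(&o->o_cv);		/* wake the teardown thread */
	mutex_exit(&o->o_lock);
}

/* Teardown: drain all waiters before cv_destroy() and the free. */
void
obj_teardown(obj_t *o)
{
	mutex_enter(&o->o_lock);
	o->o_done = B_TRUE;
	cv_broadcast(&o->o_cv);
	while (o->o_waiters != 0)
		cv_wait(&o->o_cv, &o->o_lock);
	mutex_exit(&o->o_lock);
	cv_destroy(&o->o_cv);	/* safe: every waiter has left the cv code */
	mutex_destroy(&o->o_lock);
	kmem_free(o, sizeof (obj_t));
}
```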
pcd1193182 commented on this pull request.
> (void) fprintf(stderr, "TIMESENT SNAPSHOT\n");
/*
* Print the progress from ZFS_IOC_SEND_PROGRESS every second.
*/
for (;;) {
- (void) sleep(1);
+
+ /*
+
pcd1193182 requested changes on this pull request.
I like the idea overall, just a few questions about details of the
implementation.
>*/
- if (sdd->progress) {
We could still have this conditional, just `if (sdd->progress || sdd->siginfo)`.
Might be best to keep it.
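For concreteness, a rough sketch of how the loop might look with the
conditional kept; the `send_ctx` struct here is a hypothetical stand-in for the
real send context, not the PR's actual code:

```c
#include <unistd.h>

/* Hypothetical stand-in for the send stream's progress context. */
struct send_ctx {
	int progress;	/* periodic progress printing was requested */
	int siginfo;	/* a SIGINFO-triggered one-shot report is pending */
};

static void
progress_loop(struct send_ctx *sdd)
{
	/* Print progress from ZFS_IOC_SEND_PROGRESS every second. */
	for (;;) {
		(void) sleep(1);
		/* Wake for periodic progress or an on-demand SIGINFO report. */
		if (sdd->progress || sdd->siginfo) {
			/* query the kernel for bytes sent and print them */
			sdd->siginfo = 0;	/* one-shot: clear after reporting */
		}
	}
}
```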
pcd1193182 requested changes on this pull request.
Once the comment is updated to reflect ZFS as a whole, rather than ZoL
specifically, this looks good to me.
pcd1193182 approved this pull request.
I went over this issue with Alexander privately, and this is the solution we
came up with. This particular implementation looks good to me, but I'd
appreciate feedback on the approach from another set of eyes.
pcd1193182 approved this pull request.
Looks like there was an issue with the automated test run, but the code looks
good to me.
pcd1193182 approved this pull request.
The issue is that the error handling in zfs diff was not updated to reflect
that files on the delete queue no longer trigger a non-zero return value. As a
result, delete-queue files in the from snapshot are no longer ignored by zfs
diff, which results in misleading and incorrect behavior. The
@pcd1193182 pushed 1 commit.
f0f88ac Address mahrens feedback
pcd1193182 commented on this pull request.
> @@ -2206,6 +2215,9 @@ receive_write(struct receive_writer_arg *rwa, struct drr_write *drrw,
rwa->last_object = drrw->drr_object;
rwa->last_offset = drrw->drr_offset;
+ if (rwa->last_object > rwa->max_object)
+     rwa->max_object = rwa->last_object;
I also haven't had time for an in-depth look into the problem, but I'm inclined
to agree with Matt. Retrying with delay is usually a sign that there is a
concurrency problem that is being avoided; for the first issue, I'd be curious
which ioctls specifically do this. My guess is that they
Write performance:
For `recordsize=512`:
| Before |128k| 1m| 8m| 64m| After |128k| 1m| 8m| 64m|
|---|:-:|:-:|:-:|:-:|---|:-:|:-:|:-:|:-:|
|**10**|0.03|0.07|0.45|3.27|**10**|0.03|0.07|0.43|3.35|
|**40**|0.11|0.29|1.68|34.28|**40**|0.11|0.29|1.66|27.49|
I've updated the comment with a 1m table.
Hey Igor, I'm running it now. Should have results for you in a few hours.
On Thu, Mar 22, 2018 at 9:46 AM, Igor Kozhukhov <i...@dilos.org> wrote:
> Paul,
>
> could you please try to do your tests with recordsize=1m?
>
> Best regards,
> -Igor
Sorry for the delay in getting back to this. I created a pool on four 1TB HDDs,
and then created a filesystem with compression off (since I'm using random
data, it wouldn't help) and edon-r for the checksum. I then created a number
of files of a given size using dd, creating 10 at a time in
@amotin I'm setting up a system with regular HDDs, and I'll do some sequential
write and read tests and report results back.
Reviewed by: Pavel Zakharov
Reviewed by: Matthew Ahrens
Reopening [PR 423](https://github.com/openzfs/openzfs/pull/423) rebased onto
trunk, will RTI promptly.
I would be open to suggestions on how to implement self-tuning behavior for
this; we spent some time thinking about it, but didn't have any great
solutions. In the absence of a good self-tuning mechanism, picking a safe
default and giving users the option to tweak for better performance is
probably the right approach.
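As a sketch of that approach (the tunable name and value below are purely
illustrative, not from this change):

```c
/*
 * Hypothetical tunable: ship a conservative default that is safe
 * everywhere, and let operators who understand their workload raise
 * it for better performance.
 */
uint64_t zfs_example_batch_limit = 64;	/* illustrative safe default */
```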
It's probably mixed. For low-throughput workloads, it probably reduces
locality. For high-throughput systems, my understanding is that in the steady
state you're switching between metaslabs during a txg anyway, so the effect is
reduced significantly. One thing we could do is change the
Reviewed by: Matthew Ahrens mahr...@delphix.com
Reviewed by: George Wilson george.wil...@delphix.com
Reviewed by: Serapheim Dimitropoulos serapheim.dimi...@delphix.com
## Overview
We parallelize the allocation process by creating the concept of "allocators".
There are a certain number of
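As a toy sketch of the concept (illumos-style locking; the structure, the
allocator count, and the selection policy are all illustrative, not the actual
OpenZFS implementation):

```c
#define	NUM_ALLOCATORS	4	/* illustrative; the real count is tunable */

typedef struct allocator {
	kmutex_t	a_lock;		/* protects only this allocator's state */
	uint64_t	a_cursor;	/* hypothetical per-allocator position */
} allocator_t;

static allocator_t allocators[NUM_ALLOCATORS];

/*
 * Spread concurrent allocations across the allocators so parallel
 * writers contend on different locks rather than one global lock.
 */
uint64_t
toy_alloc(uint64_t hint, uint64_t size)
{
	allocator_t *a = &allocators[hint % NUM_ALLOCATORS];

	mutex_enter(&a->a_lock);
	uint64_t off = a->a_cursor;
	a->a_cursor += size;
	mutex_exit(&a->a_lock);
	return (off);
}
```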
pcd1193182 approved this pull request.
@pcd1193182 pushed 1 commit.
1decb22 comment, errors
This update rebases on top of openzfs's master branch, as well as the
encryption PR. The goal is to integrate this change after encryption has
landed, so I went ahead and did the rebasing early to get it out of the way.
pcd1193182 commented on this pull request.
> @@ -2082,6 +2086,38 @@ zfs_resume_fs(zfsvfs_t *zfsvfs, dsl_dataset_t *ds)
return (err);
}
+/*
+ * Release VOPs and unmount a suspended filesystem.
+ */
+int
+zfs_end_fs(zfsvfs_t *zfsvfs, dsl_dataset_t *ds)
If you're referring to the
@pcd1193182 pushed 1 commit.
cc8f2f9 ramzec feedback
pcd1193182 commented on this pull request.
> + dbn->dbn_dirty = B_FALSE;
+ }
+ }
+#ifdef ZFS_DEBUG
+ for (dsl_bookmark_node_t *dbn = avl_first(&ds->ds_bookmarks);
+ dbn != NULL; dbn = AVL_NEXT(&ds->ds_bookmarks, dbn)) {
+
pcd1193182 commented on this pull request.
- return (0);
+ switch (new_type) {
+ case HOLE:
+ pending->sru.hole.datablksz = datablksz;
+ break;
+ case DATA:
+ pending->sru.data.datablksz = datablksz;
+
pcd1193182 commented on this pull request.
> + */
+static int
+find_redact_book(libzfs_handle_t *hdl, const char *path,
+    const uint64_t *redact_snap_guids, int num_redact_snaps,
+    char **bookname)
+{
+ char errbuf[1024];
+ int error = 0;
+ nvlist_t *props =
pcd1193182 commented on this pull request.
> @@ -1054,6 +1054,8 @@ static const char *spa_feature_names[] = {
"com.delphix:embedded_data",
"org.open-zfs:large_blocks",
"org.illumos:sha512",
+ "com.delphix:redaction_bookmarks",
It's primarily for tracking purposes.
This patch implements Redacted send/recv, a feature for zfs send and receive
described at the 2015 ZFS developer summit. It includes extensive testing, as
well as significant refactoring of the ZFS send and receive code. Also
included are new features for send size estimation, new ioctls for
That can only be a good thing, no? Fewer hash collisions will generally result
in higher performance. Even if you do lots of lookups of similar-argument dbufs
at about the same time, we store them using chaining, so it doesn't get you any
cache-effect wins.
Hi @idodeclare, including the lower 8 bits of the spa and many more bits of the
os (because we're dropping the lower 8 bits and then masking with 0xFF, so we
only get bits 8-15) does not decrease the quality of the hashing. It will not
cause any new collisions which did not already occur, and we perform a
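To make the collision point concrete, a standalone toy (not the actual dbuf
hash): two entries that differ only in those objset-pointer bits collide under
a hash that ignores them, and stop colliding once the bits are mixed in:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy "before" hash: ignores the objset pointer entirely. */
static uint64_t
hash_before(uint64_t os, uint64_t obj, uint64_t blkid)
{
	(void) os;
	return (obj ^ (blkid << 16));
}

/* Toy "after" hash: also mixes in bits 8-15 of the objset pointer. */
static uint64_t
hash_after(uint64_t os, uint64_t obj, uint64_t blkid)
{
	return (hash_before(os, obj, blkid) ^ ((os >> 8) & 0xFF));
}

int
main(void)
{
	uint64_t os_a = 0x1100, os_b = 0x2200;	/* differ only in bits 8-15 */

	/* Collide when the objset bits are ignored... */
	printf("before: %llx vs %llx\n",
	    (unsigned long long)hash_before(os_a, 5, 7),
	    (unsigned long long)hash_before(os_b, 5, 7));
	/* ...and no longer collide once they are mixed in. */
	printf("after:  %llx vs %llx\n",
	    (unsigned long long)hash_after(os_a, 5, 7),
	    (unsigned long long)hash_after(os_b, 5, 7));
	return (0);
}
```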
pcd1193182 approved this pull request.
Send receive code looks fine to me, just one small question.
> int err;
- if (dmu_object_info(rwa->os, obj, NULL) != 0)
+ err = dmu_object_info(rwa->os, obj, );
Why change this to add the argument?
pcd1193182 requested changes on this pull request.
I just looked through the zfs send related code, but that part of it seemed
pretty good to me.
> @@ -1332,9 +1533,17 @@ recv_begin_check_existing_impl(dmu_recv_begin_arg_t *drba, dsl_dataset_t *ds,
/* if full, then must be
pcd1193182 commented on this pull request.
> @@ -1332,9 +1533,17 @@ recv_begin_check_existing_impl(dmu_recv_begin_arg_t *drba, dsl_dataset_t *ds,
/* if full, then must be forced */
if (!drba->drba_cookie->drc_force)
return
>>>> It would be possible
>>>> by simply ignoring the hole_birth metadata with something like a global
>>>> tunable, but that seems too heavy-handed to me - either you're disabling
>>>> the feature everywhere because you don't know when you can start trusting
>>>> the birth