New field and copyright notice added.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/425#issuecomment-315147670
--
openzfs-developer
Archives:
The zpool checkpoint feature in DxOS added a new field in the uberblock.
The Multi-Modifier Protection Pull Request from ZoL adds two new fields
in the uberblock (Reference: https://github.com/zfsonlinux/zfs/pull/6279).
As these two changes come from two different sources and once upstreamed
and
@ofaaland could you take a look too please.
sdimitro approved this pull request.
sdimitro commented on this pull request.
- if (zfs_ioctl(hdl, ZFS_IOC_POOL_SCAN, &zc) == 0 ||
- (errno == ENOENT && func != POOL_SCAN_NONE))
+ /* ECANCELED on a scrub means we resumed a paused scrub */
Yeah I figured that was the case. I just wanted to ask first. Thank you
sdimitro commented on this pull request.
LGTM overall. I just have one question and a small nit.
> .\"
-.Dd Oct 2, 2016
+.Dd June 21, 2016
[nit] 2017
- if (zfs_ioctl(hdl, ZFS_IOC_POOL_SCAN, &zc) == 0 ||
- (errno == ENOENT && func != POOL_SCAN_NONE))
+ /* ECANCELED on a scrub means we resumed a paused scrub */
Reviewed by: Prakash Surya
Reviewed by: George Wilson
The problem is that zfs_get_data() supplies a stale zgd_bp to
dmu_sync(), which we then nopwrite against.
zfs_get_data() doesn't hold any DMU-related locks, so after it
copies db_blkptr
Reviewed by: Paul Dagnelie
Reviewed by: Pavel Zakharov
Reviewed by: George Wilson
The problem is that when dsl_bookmark_destroy_check() is
executed from open context (the pre-check), it fills in
dbda_success based on the
In nvs_embedded(), when we return EINVAL due to reaching
the recursion limit, we should free the nvpriv_t that was
allocated earlier in the function.
Reviewed by: Pavel Zakharov
Reviewed by: George Wilson
Reviewed by: Prashanth Sreenivasa
Reviewed by: George Wilson
Reviewed by: Matthew Ahrens
Reviewed by: Prashanth Sreenivasa
We need to initialize the dn_origin_obj_refd field of a dnode when it
is allocated from the kmem cache (or, alternatively, when it is
Closed #401.
Yep, just realized it. Sorry about that
When writing pre-compressed buffers, arc_write() requires that
the compression algorithm used to compress the buffer matches
the compression algorithm requested by the zio_prop_t, which is
set by dmu_write_policy(). This makes dmu_write_policy() and its
callers a bit more complicated.
We simplify
Reviewed by: Dan Kimmel
Reviewed by: Paul Dagnelie
dbuf_evict_notify() holds the dbuf_evict_lock while checking if it should do
the eviction itself
(because the evict thread is not able to keep up). This can result in massive
lock contention.
It isn't
@zettabot go
Reviewed by: Dan Kimmel
Reviewed by: George Wilson
When writing pre-compressed buffers, arc_write() requires that
the compression algorithm used to compress the buffer matches
the compression algorithm requested by the zio_prop_t, which is
set
Reviewed by: Matt Ahrens
Reviewed by: Chris Williamson
Reviewed by: Pavel Zakharov
We want to be able to run channel programs outside of synching context.
This would greatly improve performance for channel programs
sdimitro approved this pull request.
Reviewed by: Matt Ahrens <mahr...@delphix.com>
Reviewed by: Serapheim Dimitropoulos <seraph...@delphix.com>
There are cases where sa_get_one_zfs_share() of libshare doesn't
fill in all the paths[] due to a specific handle being NULL and
returns SA_SYSTEM_ERR.
Currently regardless
sdimitro commented on this pull request.
> for (int i = 0; i < sharearg->zhandle_len; ++i) {
zfs_handle_t *fs_handle =
((zfs_handle_t **)(sharearg->zhandle_arr))[i];
if (fs_handle == NULL) {
+ /* Free non-null
Reviewed by: Pavel Zakharov
Reviewed by: Steve Gonczi
Reviewed by: George Wilson
In dsl_destroy_snapshots_nvl(), "snaps_normalized" is not
freed after it is added to "arg".
You can view, comment on, or merge this
Reviewed by: Chris Williamson
Reviewed by: Matthew Ahrens
ZFS channel programs should be able to perform a rollback.
Upstream bugs: DLPX-43642
You can view, comment on, or merge this pull request online at:
sdimitro approved this pull request.
Reviewed by: Chris Williamson
Reviewed by: Matthew Ahrens
ZFS channel programs should be able to perform a rollback.
Upstream Bugs: DLPX-43642
You can view, comment on, or merge this pull request online at:
Reviewed by: Paul Dagnelie
Reviewed by: Dan Kimmel
Reviewed by: Matt Ahrens
zfs.exists() in channel programs doesn't return any result, and should
have a man page entry. This patch corrects zfs.exists so that it
returns a value
@stevenh It is zfs.exist() (the method used by channel programs). Thanks for
catching that.
Reviewed by: Matthew Ahrens
Reviewed by: John Kennedy
Reviewed by: Brad Lewis
ZFS channel programs should be able to create snapshots.
In addition to the base snapshot functionality, this entails extra logic to
handle edge
Closed #457.
Reviewed by: Paul Dagnelie
Reviewed by: Dan Kimmel
Reviewed by: Matt Ahrens
zfs.exists() in channel programs doesn't return any result, and should
have a man page entry. This patch corrects zfs.exists so that it
returns a value
sdimitro approved this pull request.
Thank you for correcting the additional cases that we missed :-)
Reviewed by: Matt Ahrens
Reviewed by: George Wilson
Every time we want to unmount a snapshot (happens during snapshot
deletion or renaming) we unnecessarily iterate through all the
mountpoints in the VFS layer (see zfs_get_vfs).
The current patch
Closed #450.
Reviewed by: Matt Ahrens <mahr...@delphix.com>
Reviewed by: Serapheim Dimitropoulos <seraph...@delphix.com>
Upstream Bugs: DLPX-50618, DLPX-50362, DLPX-52112, DLPX-52115
You can view, comment on, or merge this pull request online at:
https://github.com/openzfs/openzfs/pull/499
-- Commit Summary --
* 8862 ZFS Channel Pro
Closed #499.
sdimitro commented on this pull request.
> + * which otherwise would be skipped for entries in the dbuf cache.
+ */
+void
+arc_buf_access(arc_buf_t *buf)
+{
+ mutex_enter(&buf->b_evict_lock);
+ arc_buf_hdr_t *hdr = buf->b_hdr;
+
+ /*
+ * Avoid taking the hash_lock when
Hey @grimreaper!
I opened the following illumos bug for this issue:
https://www.illumos.org/issues/9521
For future reference whenever we want to open a pull request we always open a
bug
in https://www.illumos.org/issues if one doesn't exist already. Then we
reference the
illumos bug in the PR
Problem:
According to spa_config_update() we expect new vdevs to have
vdev_ms_array equal to 0 and then we go ahead and set their metaslab
size. The problem is that indirect vdevs also have vdev_ms_array == 0
because their metaslabs are destroyed once their removal is done.
As a result, if a vdev
Note: This patch builds upon work zfs redacted send/receive whose status is
pending, waiting for the encryption changes.
= Motivation
While dealing with another performance issue
(see
https://github.com/openzfs/openzfs/commit/126118fbe4a957da76c8f44769ee63730080cafc)
we noticed that we spend a lot of time in various places in the kernel when
constructing
long nvlists. The problem is that when an nvlist is created
sdimitro commented on this pull request.
> @@ -249,6 +249,31 @@ lzc_remap(const char *fsname)
return (error);
}
+int
+lzc_rename(const char *source, const char *target)
+{
+ zfs_cmd_t zc = { 0 };
+ int error;
+
Since this call (unfortunately) uses the old-style ioctl()
sdimitro approved this pull request.
Thank you for catching this
Reviewed by: Matt Ahrens
Reviewed by: Pavel Zakharov
The timeline of the race condition is the following:
[1] Thread A is about to finish condensing the first vdev in
spa_condense_indirect_thread(),
so it calls the
Reviewed by: Matthew Ahrens
Reviewed by: George Wilson
Description
A scenario came up where a callback executed by vdev_indirect_remap() on a
vdev, calls
vdev_indirect_remap() on the same vdev and tries to reacquire
vdev_indirect_rwlock that
@sdimitro pushed 1 commit.
a96b5da apply feedback
Review updated. @rmustacc if there is no further feedback, do we have your
approval for this review?
@sdimitro pushed 1 commit.
a23de99 get rid of redundant check
sdimitro commented on this pull request.
> @@ -198,6 +204,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr,
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
}
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a
Reviewed by: Matthew Ahrens
Reviewed by: Paul Dagnelie
Reviewed by: Prashanth Sreenivasa
Currently, if you want to write to a kernel tunable using
mdb, you need to use the /Z or /W flags to write 8 or 4
bytes respectively. If you use the
sdimitro commented on this pull request.
> @@ -198,6 +198,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr,
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
}
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a
Reviewed by: Matthew Ahrens
Reviewed by: John Kennedy
Reviewed by: Dan Kimmel
Details about the motivation of this feature and its usage can
be found in this blogpost:
https://sdimitro.github.io/post/zpool-checkpoint/
A
Let's keep a hold on this until the ZoL PR makes it in:
https://github.com/zfsonlinux/zfs/pull/7705
sdimitro commented on this pull request.
> @@ -653,43 +654,28 @@ zfs_read(vnode_t *vp, uio_t *uio, int ioflag, cred_t
> *cr, caller_context_t *ct)
static int
zfs_write(vnode_t *vp, uio_t *uio, int ioflag, cred_t *cr, caller_context_t
*ct)
{
- znode_t *zp = VTOZ(vp);
Since I
sdimitro commented on this pull request.
> @@ -1217,18 +1352,10 @@ zvol_dumpio(zvol_state_t *zv, void *addr, uint64_t
> offset, uint64_t size,
int
zvol_strategy(buf_t *bp)
{
- zfs_soft_state_t *zs = NULL;
see my other comment (e.g. cleanup).
Reviewed by: Sara Hartse
Reviewed by: Sebastien Roy
Reviewed by: Matthew Ahrens
Reviewed by: George Wilson
There are use-cases when we want to look into dataset performance
(reads/writes) in ZFS and create high-level tooling for it.
In illumos we already have some of these kstats already in
sdimitro commented on this pull request.
> + * There should not be anything wrong with having kstats for
+ * snapshots. Since we are not sure how useful they would be
+ * though nor how much their memory overhead would matter in
+ * a filesystem with many snapshots, we
Reviewed by: Matt Ahrens
Reviewed by: George Wilson
Motivation
The current space map encoding has the following disadvantages:
[1] Assuming 512 sector size each entry can represent at most 16MB for a
segment.
This makes the encoding very
sdimitro commented on this pull request.
> +# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy is
sdimitro approved this pull request.
Can you add an entry in `usr/src/pkg/manifests/system-test-zfstest.mf` for the
new test that you added?
Things look good to me overall since I've had a sneak peek of the ZoL review.
I'd have @jwk404 take a look at the new test case though just in case.
@avg-I here is the txg.d that I used:
https://gist.github.com/sdimitro/23ed816690faa412ddf0b00ae9cd49e8
and here is another ad hoc dtrace script that I made for dynamically tracing
txg_kick():
https://gist.github.com/sdimitro/4eac9894d4a109ad5774d0e3b07f20a0
sdimitro approved this pull request.
This is not ready to go yet.
I still haven't updated the code with the restructuring from ZoL. There is a
kstat memory leak that I need to narrow down in the new code before updating
this. Sorry for the delay.
sdimitro approved this pull request.
I just found one minor nit. Code LGTM.
Are there any plans of doing any extra testing for this in illumos?
> - * majority of segment copies are good. This allows all the segment copies to
- * participate fairly in the reconstruction and prevents the repeated