[developer] Re: [openzfs/openzfs] Native data and metadata encryption for zfs (#124)

2017-08-23 Thread Jorgen Lundman
Now contains the send/recv fixes from ZOL. Please test.


View it on GitHub:
https://github.com/openzfs/openzfs/pull/124#issuecomment-324526413


[developer] Re: [openzfs/openzfs] Native data and metadata encryption for zfs (#124)

2017-08-23 Thread Jorgen Lundman
@lundman pushed 1 commit.

57291b0  de-linting


View it on GitHub:
https://github.com/openzfs/openzfs/pull/124/files/8e2c3b855a7eb155744586ec610d642c2c46a5c9..57291b074f93cd588106df1a5c0147778c600c6e


[developer] Re: [openzfs/openzfs] 7531 Assign correct flags to prefetch buffers (#222)

2017-08-23 Thread Matthew Ahrens
ahrens approved this pull request.

It would be nice to clean up DBUF_IS_L2CACHEABLE so that the logic isn't 
duplicated with DBUF_IS_L2CACHEABLE_IMPL, but I won't make that a requirement.
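
For illustration, one way to avoid the duplication could look like this (a
sketch only, assuming a hypothetical _IMPL form that takes the cache policy
and a metadata predicate as arguments; not the actual code in the PR):

#define	DBUF_IS_L2CACHEABLE_IMPL(_cache, _md)				\
	((_cache) == ZFS_CACHE_ALL ||					\
	((_md) && (_cache) == ZFS_CACHE_METADATA))

#define	DBUF_IS_L2CACHEABLE(_db)					\
	DBUF_IS_L2CACHEABLE_IMPL(					\
	    (_db)->db_objset->os_secondary_cache, dbuf_is_metadata(_db))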



View it on GitHub:
https://github.com/openzfs/openzfs/pull/222#pullrequestreview-58233975


[developer] Re: [openzfs/openzfs] 7531 Assign correct flags to prefetch buffers (#222)

2017-08-23 Thread Matthew Ahrens
> l2arc_noprefetch=0 is useless because prefetched buffers are not
> ARC_FLAG_L2CACHE flagged.
> If a buffer is prefetched but not hit, it will not be L2ARC eligible,
> because it will not be flagged accordingly, so l2arc_noprefetch=0 has no
> effect on this buffer here.

I agree; I'll accept this PR based on that.
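
In other words (a simplified sketch of the behavior being described, not a
quote of the actual code): a prefetched buffer only becomes L2ARC-eligible
once a later demand read hits it and marks it, roughly:

	/* on a demand-read hit in arc_read() */
	if (*arc_flags & ARC_FLAG_L2CACHE)
		hdr->b_flags |= ARC_FLAG_L2CACHE;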

View it on GitHub:
https://github.com/openzfs/openzfs/pull/222#issuecomment-324480521


[developer] Re: [openzfs/openzfs] 7823 implement autoreplace matching based on FRU slot number (#416)

2017-08-23 Thread Yuri Pankov
Given the lack of interest in reviewing this, I'm going to close the PR and
try to RTI this as-is.

View it on GitHub:
https://github.com/openzfs/openzfs/pull/416#issuecomment-324478316


[developer] Re: [openzfs/openzfs] 7823 implement autoreplace matching based on FRU slot number (#416)

2017-08-23 Thread Yuri Pankov
Closed #416.

View it on GitHub:
https://github.com/openzfs/openzfs/pull/416#event-1218744542


[developer] Re: [openzfs/openzfs] 7531 Assign correct flags to prefetch buffers (#222)

2017-08-23 Thread Matthew Ahrens
ahrens commented on this pull request.



> @@ -347,6 +347,12 @@ boolean_t dbuf_is_metadata(dmu_buf_impl_t *db);
> (dbuf_is_metadata(_db) &&   \
> ((_db)->db_objset->os_secondary_cache == ZFS_CACHE_METADATA)))

I guess that doesn't really work (it doesn't evaluate to `result`).  You'd have 
to convert this to an inline function.
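
A minimal sketch of that conversion (names and layout inferred from the
quoted hunk; not the actual patch):

static inline boolean_t
dbuf_is_l2cacheable(dmu_buf_impl_t *db)
{
	return (db->db_objset->os_secondary_cache == ZFS_CACHE_ALL ||
	    (dbuf_is_metadata(db) &&
	    db->db_objset->os_secondary_cache == ZFS_CACHE_METADATA));
}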

View it on GitHub:
https://github.com/openzfs/openzfs/pull/222#discussion_r134884758


[developer] Re: [openzfs/openzfs] 7361 Improve L2ARC filling (#189)

2017-08-23 Thread Matthew Ahrens
Closed #189.

View it on GitHub:
https://github.com/openzfs/openzfs/pull/189#event-1218736634


[developer] Re: [openzfs/openzfs] 7584 Improve 'zpool labelclear' command (#424)

2017-08-23 Thread Matthew Ahrens
I'm not convinced that people are "used to the fact that this command blindly 
zeroes 512KiB at the beginning and the end of the disk".  I think that this 
command is seldom-used, and when it's used it's for the documented purpose: 
"Removes ZFS label information."  That said, I'm open to evidence to the 
contrary.

I agree that, with my suggestion, "there will be no easy way of 'full
cleaning' labels", but I don't know of a use case for that. I don't consider
"abusing ... to wipe other kinds of metadata" a legitimate use case.
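
For reference, the documented use amounts to nothing more than the following
(an illustrative invocation; the device path is hypothetical):

	# zpool labelclear -f /dev/dsk/c0t0d0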

View it on GitHub:
https://github.com/openzfs/openzfs/pull/424#issuecomment-324471369


[developer] Re: [openzfs/openzfs] 8423 Implement large_dnode pool feature (#409)

2017-08-23 Thread Matthew Ahrens
ahrens commented on this pull request.



/*
 * All forms of zfs create (create, mkdir, mkxattrdir, symlink)
 * eventually end up in zfs_mknode(), which assigns the object's
-* creation time and generation number.  The generic VOP_CREATE()
-* doesn't have either concept, so we smuggle the values inside
-* the vattr's otherwise unused va_ctime and va_nblocks fields.
+* creation time, generation number, and dnode slot count. The
+* generic VOP_CREATE() has no concept of these attributes, so
+* we smuggle the values inside * the vattr's otherwise unused
+* va_ctime, va_nblocks, and va_nlink fields.

Yes, it should say `va_fsid` rather than `va_nlink`, same as the comment in 
zfs_replay_create_acl().  Not sure how that got messed up.
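
For clarity, the corrected comment would then end as follows (a sketch of the
fix, not the merged text):

 * we smuggle the values inside the vattr's otherwise unused
 * va_ctime, va_nblocks, and va_fsid fields.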

View it on GitHub:
https://github.com/openzfs/openzfs/pull/409#discussion_r134860543


[developer] [openzfs/openzfs] 8585 improve batching done in zil_commit() (#447)

2017-08-23 Thread Prakash Surya
Problem

The current implementation of zil_commit() can introduce significant
latency, beyond what is inherent due to the latency of the underlying
storage. The additional latency comes from two main problems:

 1. When there are outstanding ZIL blocks being written (i.e. there's
already a "writer thread" in progress), any new calls to
zil_commit() will block, waiting for the currently outstanding ZIL
blocks to complete. The blocks written for each "writer thread" are
called a "batch", and only a single "batch" can ever be
written at a time. When a batch is being written, any new ZIL
transactions will have to wait for the next batch to be written,
which won't occur until the current batch finishes.

As a result, the underlying storage may not be used as efficiently
as possible. While "new" threads enter zil_commit() and are blocked
waiting for the next batch, it's possible that the underlying
storage isn't fully utilized by the current batch of ZIL blocks. In
that case, it'd be better to allow these new threads to generate
(and issue) a new ZIL block, such that it could be serviced by the
underlying storage concurrently with the other ZIL blocks that are
being serviced.

 2. Any call to zil_commit() must wait for all ZIL blocks in its "batch"
to complete, prior to zil_commit() returning. The size of any given
batch is proportional to the number of ZIL transactions in the queue
at the time the batch starts processing the queue, which doesn't
occur until the previous batch completes. Thus, if there are a
lot of transactions in the queue, the batch could be composed of
many ZIL blocks, and each call to zil_commit() will have to wait for
all of these writes to complete (even if the thread calling
zil_commit() only cared about one of the transactions in the batch).

To further complicate the situation, these two issues result in the
following side effect:

 3. If a given batch takes longer to complete than normal, this results
in larger batch sizes, which then take longer to complete and
further drive up the latency of zil_commit(). This can occur for a
number of reasons, including (but not limited to): transient changes
in the workload, and storage latency irregularities.

Solution

The solution attempted by this change has the following goals:

 1. no on-disk changes; maintain current on-disk format.
 2. modify the "batch size" to be equal to the "ZIL block size".
 3. allow new batches to be generated and issued to disk, while there
are already batches being serviced by the disk.
 4. allow zil_commit() to wait for as few ZIL blocks as possible.
 5. use as few ZIL blocks as possible, for the same number of ZIL
transactions, without introducing significant latency to any
individual ZIL transaction; i.e. use fewer, but larger, ZIL blocks.

In theory, with these goals met, the new algorithm will allow the
following improvements:

 1. new ZIL blocks can be generated and issued, while there are already
outstanding ZIL blocks being serviced by the storage.
 2. the latency of zil_commit() should be proportional to the underlying
storage latency, rather than the incoming synchronous workload.
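
To make goals 3 and 4 concrete, here is a highly simplified sketch of the
per-caller "commit waiter" idea (hypothetical names and structures, not the
actual patch): each zil_commit() caller waits only for the ZIL block holding
its transaction, and is signalled from that block's flush-done callback.

typedef struct zcw {
	kmutex_t	zcw_lock;
	kcondvar_t	zcw_cv;
	boolean_t	zcw_done;	/* our lwb reached stable storage */
} zcw_t;

static void
zcw_wait(zcw_t *zcw)
{
	mutex_enter(&zcw->zcw_lock);
	while (!zcw->zcw_done)
		cv_wait(&zcw->zcw_cv, &zcw->zcw_lock);
	mutex_exit(&zcw->zcw_lock);
}

/* Called from the lwb's flush-done callback. */
static void
zcw_signal(zcw_t *zcw)
{
	mutex_enter(&zcw->zcw_lock);
	zcw->zcw_done = B_TRUE;
	cv_broadcast(&zcw->zcw_cv);
	mutex_exit(&zcw->zcw_lock);
}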

Where to Start w.r.t. Reviews

For those attempting to review this change, I suggest the following order:

 1. to familiarize yourself with the high level design this change is
shooting for, read the comment directly above zil_commit().
 2. if anything in that comment is unclear, untrue, and/or missing
critical information, please open a review comment so that it can
be addressed.
 3. If you're already familiar with the data structures of the ZIL, now
would be a good time to review the modifications I made, as a
precursor to the algorithm changes. If you're not already familiar,
it's probably best to do this after scanning the algorithm, at which
point you'll hopefully have more context to fall back on.
 4. briefly read through zil_commit(), zil_commit_writer(), and
zil_commit_waiter() to get familiar with the "flow" of the code
w.r.t. the thread(s) calling zil_commit().
 5. briefly read through zil_lwb_write_done(), zil_lwb_flush_vdevs(),
and zil_lwb_flush_vdevs_done() to get familiar with how the ZIL
blocks complete, and how the "commit waiters" are signalled.

At this point, I would hope that you'll have a decent grasp of the
changes at a high level, such that you'll be able to navigate the code
on your own, and rely on the code comments for further inspection and
understanding. If that's not the case, I should probably address it
with more code comments, so please open review issues so I can do that.

Performance Testing

I used two fio workloads to drive a synchronous write workload. In both
cases, the performance of this change was pitted against the baseline
"master" branch.

The first workload had each fio thread trying to perform 

[developer] Re: [openzfs/openzfs] 0000 improve batching done in zil_commit() (#421)

2017-08-23 Thread Prakash Surya
Closed #421.

View it on GitHub:
https://github.com/openzfs/openzfs/pull/421#event-1218292532


[developer] Re: [openzfs/openzfs] 7361 Improve L2ARC filling (#189)

2017-08-23 Thread Ben RUBSON
Good news (at least for this change!), thank you @prakashsurya

View it on GitHub:
https://github.com/openzfs/openzfs/pull/189#issuecomment-324371112


[developer] Re: [openzfs/openzfs] 7361 Improve L2ARC filling (#189)

2017-08-23 Thread Prakash Surya
@benrubson I don't think that failure is due to this change; rather, I think
that failure is due to a prior commit:
https://github.com/openzfs/openzfs/commit/d28671a3b094af696bea87f52272d4c4d89321c7

View it on GitHub:
https://github.com/openzfs/openzfs/pull/189#issuecomment-324354962


[developer] Re: [openzfs/openzfs] 8423 Implement large_dnode pool feature (#409)

2017-08-23 Thread George Wilson
grwilson commented on this pull request.

There are lots of good explanations in the pull request discussion that aren't
reflected in the actual code. It would be good to incorporate some of them as
code comments.

/*
 * All forms of zfs create (create, mkdir, mkxattrdir, symlink)
 * eventually end up in zfs_mknode(), which assigns the object's
-* creation time and generation number.  The generic VOP_CREATE()
-* doesn't have either concept, so we smuggle the values inside
-* the vattr's otherwise unused va_ctime and va_nblocks fields.
+* creation time, generation number, and dnode slot count. The
+* generic VOP_CREATE() has no concept of these attributes, so
+* we smuggle the values inside * the vattr's otherwise unused

Extra "*" -- `inside * the vattr's`

/*
 * All forms of zfs create (create, mkdir, mkxattrdir, symlink)
 * eventually end up in zfs_mknode(), which assigns the object's
-* creation time and generation number.  The generic VOP_CREATE()
-* doesn't have either concept, so we smuggle the values inside
-* the vattr's otherwise unused va_ctime and va_nblocks fields.
+* creation time, generation number, and dnode slot count. The
+* generic VOP_CREATE() has no concept of these attributes, so
+* we smuggle the values inside * the vattr's otherwise unused
+* va_ctime, va_nblocks, and va_nlink fields.

shouldn't this comment include `va_fsid` or should the code below be using 
`va_nlink` instead of `va_fsid`?

> + } else if (ds && ds->ds_feature_inuse[SPA_FEATURE_LARGE_DNODE]) {
+   /*
+* For large_dnode datasets, scan from the beginning of the
+* dnode block to find the starting offset. This is needed
+* because objectp could be part of a large dnode so we can't
+* assume it's a hole even if dmu_object_info() returns ENOENT.
+*/
+   int epb = DNODE_BLOCK_SIZE >> DNODE_SHIFT;
+   int skip;
+   uint64_t i;
+
+   for (i = *objectp & ~(epb - 1); i <= *objectp; i += skip) {
+   dmu_object_info_t doi;
+
+   error = dmu_object_info(os, i, &doi);
+   if (error)

NIT: error != 0

View it on GitHub:
https://github.com/openzfs/openzfs/pull/409#pullrequestreview-57301052


[developer] Re: [openzfs/openzfs] 7584 Improve 'zpool labelclear' command (#424)

2017-08-23 Thread Ganael Laplanche
@ahrens Hi Matthew, what do you think about my previous comments? Can we leave
the default options as they are?

View it on GitHub:
https://github.com/openzfs/openzfs/pull/424#issuecomment-324284867


[developer] Re: [openzfs/openzfs] 7531 Assign correct flags to prefetch buffers (#222)

2017-08-23 Thread Ben RUBSON
Test passed, thank you for launching it @prakashsurya

I'm pretty convinced prefetched buffers should be `ARC_FLAG_L2CACHE` flagged
(which is what this PR currently does), as the `l2arc_noprefetch` user tunable
will then control whether or not these buffers can be L2-backed.
This gives a nice performance improvement, as shown in the [bug
report](https://www.illumos.org/issues/7531).
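
To illustrate the interplay (a hypothetical simplification, not the actual
illumos code paths): the PR sets the flag on prefetch buffers, and the
tunable then decides whether a still-unread prefetch buffer may be written
to the L2ARC, conceptually:

static boolean_t
l2arc_prefetch_eligible(arc_buf_hdr_t *hdr)
{
	/* Buffers never flagged L2ARC-eligible stay out of the L2ARC. */
	if (!HDR_L2CACHE(hdr))
		return (B_FALSE);

	/* l2arc_noprefetch excludes buffers that are still only prefetched. */
	if (l2arc_noprefetch && HDR_PREFETCH(hdr))
		return (B_FALSE);

	return (B_TRUE);
}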

FreeBSD has _dtrace_, but I've never used it, so I have no idea how to easily
track the path of these buffers which, according to you @ahrens, should have
been flagged by `arc_read`. And what about prefetched buffers which are
evicted before being read?

Thank you very much for your help  

View it on GitHub:
https://github.com/openzfs/openzfs/pull/222#issuecomment-324242895