Am 10.08.2016 um 18:53 schrieb Ultima:
>> I didn't see any reply on the list, so I thought I might let you know
> Sorry, never received this reply (till now) xD
>>what I assume is happening:
>> ZFS never updates data in place, which affects inode updates, e.g. if
>> a file has been read and access times must be updated. (For that reason,
>> many ZFS file systems are configured to ignore access time updates).
>> Even if there were only R/O accesses to files in the pool, there will
>> have been updates to the inodes, which were missed by the offlined
>> drives (unless you ignore atime updates).
>> But even if there are no access time updates, ZFS might have written
>> new uberblocks and other meta information. Check the POOL history and
>> see if there were any TXGs created during the scrub.
>> If you scrub the pool while it is off-line, it should stay stable
>> (but if any information about the scrub, the offlining of drives etc.
>> is recorded in the pool's history log, differences are to be expected).
>> Just my $.02 ...
>> Regards, Stefan
> Thanks for the reply, I'm not completely sure what would be considered a
> TXG. I maintained normal operations during most of this noise, and this
> pool has quite a bit of activity during normal operations. My zpool
> history looks like it goes on forever, and the last scrub shows it
> repaired 9.48G. Was that all from these access time updates? I guess
> that would be a little less than 2.5G worth per disk.
> The zpool history looks like it goes on forever (733373 lines). This
> pool sees much of this activity from poudriere. All the entries I see
> are clone, destroy, rollback, and snapshot operations. I can't say
> exactly how many, but there were at least 500 (probably many more)
> entries between the last two scrubs. Atime is off on all datasets.
> So to be clear, this is expected behavior with atime=off + TXGs during
> offline time? I had thought that the resilver after onlining the disk
> would bring that disk up-to-date with the pool. I guess my understanding
> was a bit off.
Sorry, you'll have to ask somebody more familiar with ZFS internals.
I just wanted to point out that a scrub might change the state of the
drives, even though no file data is modified.
Some 10 GB "repaired" on a 35000 GB pool is not much; it is about what
I'd expect to be required for meta-data.
BTW: The pool history is chronologically sorted; you need only check
the last few lines (those written after the start time of the scrub,
or rather after the offlining of some of the disk drives).