all that the
retire agent is doing under the hood.
Hope that helps,
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
On Apr 5, 2010, at 11:43 AM, Garrett D'Amore wrote:
>
> I see ereport.fs.zfs.io_failure, and ereport.fs.zfs.probe_failure. Also,
> ereport.io.service.lost and ereport.io.device.inval_state. There is indeed a
> fault.fs.zfs.device in the list as well.
The ereports are not interesting, only
' show? Does doing a 'zpool
replace c2t3d1 c2t3d2' by hand succeed?
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
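For reference, the manual check-and-replace being suggested would look roughly like this (the pool name 'tank' is assumed here; only the device names appear in the thread):
   # fmadm faulty                       # what, if anything, FMA has diagnosed
   # zpool status -x                    # pools with known problems
   # zpool replace tank c2t3d1 c2t3d2   # replace the old disk with the new one by hand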
othing pathological (i.e. 30 seconds, not 30 hours). Expect
to see fixes for these remaining issues in the near future.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
distinguish between REMOVED and FAULTED devices. Mis-diagnosing a removed
drive as faulted is very bad (fault = broken hardware = service call = $$$).
- Eric
P.S. the bug in the ZFS scheme module is legit, we just haven't fixed it yet
--
Eric Schrock, Fishworks    http://bl
aving your pool running minus one disk for hours/days/weeks is clearly broken.
If you have a solution that correctly detects devices as REMOVED for a new
class of HBAs/drivers, that'd be more than welcome. If you choose to represent
missing devices as faulted in your own third party sy
> will it report them as CMD_DEV_GONE, or will it report an error
> causing a fault to be flagged?
This is detected as device removal. There is a timeout associated with I/O
errors in zfs-diagnosis that gives some grace period to detect removal before
declaring a disk faulted.
- Eric
--
Eric Schro
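To see the raw telemetry that zfs-diagnosis works from during that grace period, the FMA logs can be inspected directly (a sketch; the exact event classes will vary with the failure):
   # fmdump -e     # one-line summary of ereports (ereport.fs.zfs.*, ereport.io.*)
   # fmdump -eV    # full ereport detail
   # fmadm faulty  # any faults that were actually diagnosed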
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
> On 18/06/2010 00:18, Garrett D'Amore wrote:
>> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
>>
>>> On the SS7000 series, you get an alert that the enclosure has been detached
>>>
age you to work on the RFE yourself - any implementation would
certainly be appreciated. This possibility was originally why the 'snapdir'
property was named as it was, so we could someday support 'snapdir=every' to
export .zfs in every directory.
- Eric
--
Eric Schrock, F
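Today the property only controls whether the top-level .zfs directory is visible; a minimal illustration (filesystem name assumed):
   # zfs get snapdir tank/home
   # zfs set snapdir=visible tank/home
   # ls /tank/home/.zfs/snapshot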
" is overly brief and
could be expanded to include this use case.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
c0t5d0 ONLINE 0 0 0
c0t7d0 ONLINE 0 0 0 48.5K resilvered
errors: No known data errors
On 10/14/09 15:23, Eric Schrock wrote:
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi Jason,
I think you are asking how do you tell ZFS that you want to replace t
0
vdev = 0x179e471c0732582
(end resource)
(end fault-list[0])
So, I'm still stumped.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-di
db.
This is fixed in build 127 via:
6889827 ZFS retire agent needs to do a better job of staying in sync
We've followed Eric's work on ZFS device enumeration for the Fishworks project
with great interest - hopefully this will eventually get extended to the fmdump
ou
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
s working on it.
- Eric
--
Regards,
Cyril
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
On 11/09/09 12:58, Brent Jones wrote:
Are these recent developments due to help/support from Oracle?
No.
Or is it business as usual for ZFS developments?
Yes.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ng at the ideal
state. By definition a hot spare is always DEGRADED. As long as the spare
itself is ONLINE it's fine.
Hope that helps,
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
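When a spare is in use, 'zpool status' typically reads something like the hand-drawn sketch below (device names invented for illustration, not output from the poster's system): the pool and the covered vdev show DEGRADED, while the spare disk itself shows ONLINE.
       NAME           STATE
       tank           DEGRADED
         mirror-0     DEGRADED
           spare-0    DEGRADED
             c0t2d0   UNAVAIL
             c0t7d0   ONLINE    <- the hot spare, healthy
           c0t5d0     ONLINE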
On 01/11/10 17:42, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
No, it's fine. DEGRADED just means the pool is not operating at the
ideal state. By definition a hot spare is always DEGRADED. As long as
the spare itself is ONLINE it's fine.
One more question o
On Jan 11, 2010, at 6:35 PM, Paul B. Henson wrote:
> On Mon, 11 Jan 2010, Eric Schrock wrote:
>
>> No, there is no way to tell if a pool has DTL (dirty time log) entries.
>
> Hmm, I hadn't heard that term before, but based on a quick search I take it
> that's th
e-attach the device if it is indeed just missing.
#2 is being worked on, but also does not affect the standard reboot case.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
hanks again for your reply.
--
Eric Schrock, Fishworks    http://blogs.sun.com/es
fact that free operations used to be in-memory
only but with dedup enabled can result in synchronous I/O to disks in
syncing context.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
.
> >Fred
with the 'compression'
property on a per-filesystem level, and is fundamentally per-block. Dedup
is also controlled per-filesystem, though the DDT is global to the pool.
If you think there are compelling features lurking here, then by all means
grab the code and run with it :-)
- Eric
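Both are ordinary dataset properties, so the controls described above look like this (dataset names assumed):
   # zfs set compression=on tank/fs1   # per-filesystem setting, applied block by block at write time
   # zfs set dedup=on tank/fs2         # per-filesystem setting; the dedup table (DDT) is pool-wide
   # zpool get dedupratio tank         # pool-wide view of the dedup effect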
ing, guess you learn something new every day :-)
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/mkdir.c
Thanks,
- Eric
--
Eric Schrock
Delphix
http://blog.delphix.com/eschrock
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
e if it does the right thing for checksum
errors. That is a very small subset of possible device failure modes.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
additional writes for every block. If it's even possible to implement this
"paranoid ZIL" tunable, are you willing to take a 2-5x performance hit to be
able to detect this failure mode?
- Eric
--
Eric Schrock, Fishworks
come and the (now free)
> blocks are reused for new data.
ZFS will not reuse blocks for 3 transaction groups. This is why uberblock
rollback will normally only attempt a rollback of up to two previous txgs.
- Eric
--
Eric Schrock, Fishworks
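That rollback is what recovery-mode import relies on; a sketch, assuming a build that has the recovery options and a pool named 'tank':
   # zpool import -Fn tank   # dry run: report what would be discarded by rewinding
   # zpool import -F tank    # rewind to a slightly older txg, discarding the last few transactions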
...relevant synonyms in Norwegian.
--
Eric Schrock
Delphix
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
but more
>>> intuitive, in my opinion)?
>>>
>> I'd favor "refcompressratio"
>> otherwise LGTM
>> -- richard
>>
>
> Looks useful. I favor longer, more descriptive names. "refcompressratio"
Webrev has been updated:
http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
- Eric
--
Eric Schrock
Delphix
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
Good catch. For consistency, I updated the property description to match
"compressratio" exactly.
- Eric
On Mon, Jun 6, 2011 at 9:39 PM, Mark Musante wrote:
>
> minor quibble: compressratio uses a lowercase x for the description text
> whereas the new prop uses an upperc
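Once the change is in, the new property reads alongside the existing one; a small usage sketch (dataset name assumed):
   # zfs get compressratio,refcompressratio tank/data
compressratio covers all space accounted to the dataset including snapshots, while refcompressratio covers only the space the dataset itself currently references.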
on if you are using whole disks and a
driver with static device paths (such as sata).
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ttach the mirror to achieve a full re-sync.
>
> this is snv_93
>
> Ref:
>
> http://docs.sun.com/app/docs/doc/817-2271/gcfhe?l=en&a=view&q=%22only+applicable+to+mirror+and+replacing+vdevs%22
On Fri, Aug 15, 2008 at 02:14:02PM -0700, Eric Schrock wrote:
> The fact that it's DEGRADED and not FAULTED indicates that it thinks the
> DTL (dirty time logs) for the two sides of the mirrors overlap in some
> way, so detaching it would result in loss of data. In the process of
&g
gt; I believe ZFS should apply the same tough standards to pool availability as
> it does to data integrity. A bad checksum makes ZFS read the data from
> elsewhere, why shouldn't a timeout do the same thing?
>
> Ross
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
uation really poorly.
I don't think you understand how this works. Imagine two I/Os, just
with different sd timeouts and retry logic - that's B_FAILFAST. It's
quite simple, and independent of any hardware implementation.
- Eric
--
Eric Schrock, Fishworks
ny such "best effort RAS" is a little dicey because
you have very little visibility into the state of the pool in this
scenario - "is my data protected?" becomes a very difficult question to
answer.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
und
> no way to make the link.
>
> Can someone help ?
>
> Thank you.
>
> Alain Chéreau.
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
probably happened to you.
FYI, this is bug 6667208 fixed in build 100 of nevada.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
nfrastructure
fails.
- Eric
[1]
http://mbruning.blogspot.com/2008/08/recovering-removed-file-on-zfs-disk.html
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
.
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
#!/sbin/dtrace -s
#pragma D option
> markm
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
http://blogs.sun.com/fishworks
There will be much more information throughout the day and in the coming
weeks. If you want to give it a spin, be sure to check out the freely
available VM images.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com
configured in an implementation-defined way for
the software to function correctly.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ed config)
so that we can mirror/RAID across them. Even without NSPF, we have
redundant cables, HBAs, power supplies, and controllers, so this is only
required if you are worried about disk backplane failure (a very rare
failure mode).
Can you point to the literature that suggests this is
has gotten the
> best of me, so I apologize. Feel free to correct as you see fit.
I can update the blog entry if it's misleading. I assumed that it was
implicit that the absence of the above (missing or broken disks) meant
supported, but I admit that I did not state that explicitl
. The
> > more appropriate solution is that this feature should be in FMA.
> >
> > Bob
> > ==
> > Bob Friesenhahn
> > [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> > GraphicsMagi
bring up the issue on storage-discuss where you will find
more experts in this area.
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
tive blocks repaired.
>
> So the read test seemed to work fine.
>
> Any suggestions on how to proceed? Thoughts on why the disks are showing
> weirdly in format? Any way to recover/rebuild the zpool metadata?
>
> Any help would be appreciated
>
> Regards Rep
--
Eric Schrock, Fishworksht
wrong, and everyone is laughing already.
>
> Discuss?
>
> Lund
>
> --
> Jorgen Lundman |
> Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
> Shibuya-ku, Tokyo| +81 (0)90-5578-8500 (cell)
> Japan| +81 (0)3 -3
tinue without
committed data.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
Ds - it doesn't matter if reads are fast for slogs.
With the txg being a working set of the active commit, so might be a
set of NFS iops?
If the NFS ops are synchronous, then yes. Async operations do not use
the ZIL and therefore don't have anything to do with slogs.
- Eric
-
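For completeness, adding a separate log device is a single command (pool and device names assumed):
   # zpool add tank log c4t0d0                 # dedicated slog
   # zpool add tank log mirror c4t0d0 c4t1d0   # mirrored slog, which avoids the failure mode described below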
open. A failed slog device can
prevent such a pool from being imported.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ol using an older version.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
rsrc
drwx-- 2 root sys 512 13 dic 2007 xprt
the content of the file is not printable.
Maurilio.
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
with ZFS. If you build proftp with TCP_CORK off you
won't have this problem.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
some deep fight or flight reaction in my animal brain,
after having watched others debug the original problem.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
his case though, proftpd
isn't actually at fault (and presumably does the right thing). It was
ultimately a kernel bug.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
he work in open context which won't
stop the txg train and admin commands should continue to work.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
t;use
log devices" or not.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
sk format.
The pool can
still be used, but some features are unavailable.
The system pool will never be upgraded - there is no point.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
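For pools that should move to the newer on-disk version, the upgrade itself is (pool name assumed):
   # zpool upgrade -v      # list the versions this software supports
   # zpool upgrade tank    # upgrade one pool to the newest supported version (a one-way operation)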
log data. I had some correspondence with Eric Schrock who
indicated it looked like a combination of buggy Intel firmware and a
bug in
the Solaris SATL driver, but haven't heard back from him as to
whether they
might fix it.
It's clearly bad firmware - there's no bug in the sata d
invalid record, while we (via SATL) keep
going. But any way you slice it, the drive is returning invalid data.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
e ATA
data and the translated SCSI data. My guess is that it just gives up
at the first invalid version record, something we should probably be
doing.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
A. So the translation via SATL is done in
hardware, not software.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
code could definitely be cleaned up to bail when
processing an invalid record. I can file a CR for you if you haven't
already done so. Also, I'd encourage any developers out there with
one of these drives to take a shot at fixing the issue via the
OpenSolaris sponsor process.
w.csupomona.edu/
~henson/
Operating Systems and Network Analyst | hen...@csupomona.edu
California State Polytechnic University | Pomona CA 91768
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
0 RPM disks, this can end
up taking a very long time.
The ZFS team is actively working on improvements in this area.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
any device in case you are
using disks with a Solaris VTOC.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
er
pessimistic. Now that we actually have the structured ereport data, we'll
be able to do some more complex analysis once we have a body of failure
data to work with.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
these statistics or do anything with them.
These two things combined should avoid the need for an explicit fitness
test.
Hope that helps,
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
em, not ZFS-specific. All that
is needed in ZFS is an agent actively responding to external diagnoses.
I am laying the groundwork for this as part of ongoing ZFS/FMA work
mentioned in other threads. For more information on ongoing FMA work, I
recommend visi
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote:
> On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote:
> > The slow part of zpool import is actually discovering the pool
> > configuration. This involves examining every device on the system (or
> >
awn a
thread to do synchronous I/O, or use the driver entry points if
provided).
> Also, I see this happens in user-land. Is there any benefit of trying
> this in kernel-land?
No. It's simpler and less brittle to keep it userland.
- Eric
--
Eric Schrock, Solaris Kernel Develo
corrupted data
> c5t60060160718518009A7926FF5831DB11d0 FAULTED corrupted data
> [EMAIL PROTECTED] ~]$
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
tely above
vdev_set_state() in the above stacks? I think the function should be
vdev_validate(), but I don't remember if it's the same in the ZFS
version you have.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
vdev_load::dis | mdb -k'. This will give the
disassembly for vdev_load() in your current kernel (which will help us
pinpoint what vdev_load+0x69 is really doing).
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
ebug from
here, so I'll let one of the other ZFS experts chime in...
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
ent
before submitting it.
The prototype is functional except for the offline device insertion and
hot spares functionality. I hope to have this integrated within the
next month, along with the next phase of FMA integration. Please
respond with any comments, concerns, or suggestions.
Thanks,
- Eri
ow. It's only when the spare is complete, either
through explicit zpool(1M) actions or replacing the underlying drive,
that the 'spare' vdev disappears and the new vdev reflects the larger
size.
Thanks for the input,
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
a hot spare to automatically kick in.
>
Kicking in a hot spare is a harmless activity (the end result will be
the same), why would you want to avoid this? Do you have an idea of how
you would want to control this behavior?
- Eric
--
Eric Schrock, Solaris Kernel Development http://blog
.
> I hope that there's a way to disable the periodic probing of hot
> spares. Spinning these drives up often might be highly annoying in
> some environments (though useful in others, since it could also verify
> that the disk is responding normally).
Why is this "high
;s part required
>
This is part of ongoing work with Solaris platform integration (see my
last blog post) and future ZFS/FMA work. We will eventually be
leveraging IPMI and SES to manage physical indicators (i.e. LEDs) in
response to Solaris events. It will take some time to reach this point,
how
ion I was planning to ask as well.
>
> Does zfs allow a hot spare to be allocated to multiple pools or as a system
> hot spare. Or would this be done manually with a cron script.
>
> Nicholas
Spares can belong to multiple pools (they can only be actively spared in
one pool, obvi
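Sharing a spare between pools is just a matter of adding the same device to each (pool and device names assumed):
   # zpool add tank  spare c5t0d0
   # zpool add dozer spare c5t0d0   # same disk; it can only be actively in use in one pool at a time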
road.
Thanks for sharing the details on FreeBSD, it's quite interesting.
Since the majority of this work is Solaris-specific, I'll be interested
to see how other platforms deal with this type of reconfiguration.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.s
Am I Evil? Yes, I Am!
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
Eric
On Sun, Apr 01, 2007 at 11:14:44PM +0200, Pawel Jakub Dawidek wrote:
> On Sun, Apr 01, 2007 at 12:03:36PM -0700, Eric Schrock wrote:
> > You should be able to get rid of it with 'zfs inherit'. It's not
> > exactly intuitive, but it matches the native property b
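The reset being described looks like this (the property in question isn't named in the snippet, so 'compression' is used purely as an example):
   # zfs inherit compression tank/fs   # drop the local setting so the inherited or default value applies again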
when compared with the initial disk access.
>
Also, what is your definition of "broken"? Does this mean the device
appears as FAULTED in the pool status, or that the drive is present and
not responding? If it's the latter, this will be fixed by my upcoming
FMA work.
- E
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
Writing such a tool is effectively impossible. For the one known
corruption bug we've encountered (and since fixed), we provided the
'zfs_recover' /etc/system switch, but it only works for this particular
bug. Without understanding the underlying pathology it's impossible
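For reference, that switch is a kernel tunable set in /etc/system and takes effect at boot (use only when advised, and only for the specific corruption it targets):
   set zfs:zfs_recover = 1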