to subscribe to the forum should send mail to
'[EMAIL PROTECTED]'.
Thanks,
Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
snapshot creation/deletion will cause scrubs/resilvers to restart. You
can stop the scrub with 'zpool scrub -s'.
- Eric
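A minimal sketch of stopping a scrub in progress (the pool name 'tank' is hypothetical):

```shell
# Stop an in-progress scrub on the pool (pool name is hypothetical)
zpool scrub -s tank

# Check that the scrub is no longer running
zpool status tank
```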
, there's an open RFE to have
mount(1M) identify a ZFS pool/dataset path without '-F', rather than
trying to interpret it as a NFS mount:
6365048 legacy mount should recognise ZFS filesystems without needing -F flag
- Eric
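Until that RFE is implemented, a legacy mount of a ZFS dataset needs the filesystem type spelled out. A sketch, using a hypothetical dataset 'tank/home':

```shell
# Today the type must be passed explicitly; without -F zfs,
# mount(1M) may try to treat the path as an NFS resource.
mount -F zfs tank/home /mnt
```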
Yes, this is a known issue recently discovered ourselves (CR#6425111).
I'm looking into it now. Hopefully it'll be fixed in build 41, but
definitely by build 42.
- Eric
.
is this a bug?
grant.
?
Not currently.
- Eric
On Tue, May 16, 2006 at 03:13:48PM -0700, Eric Schrock wrote:
Yes, this will work. If you install build snv_39 and run 'zpool version
Errr, that should be 'zpool upgrade -v'.
- Eric
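For reference, a sketch of the corrected command:

```shell
# List the on-disk versions this build supports, with a short
# description of each
zpool upgrade -v

# With no arguments, show which pools are at older versions
zpool upgrade
```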
projects, groups, or any other
abstraction, as well as on entire portions of the hierarchy. This allows
them to be combined in ways that traditional per-user quotas cannot.
Hope that helps,
- Eric
, but there's not much to say at this
point.
- Eric
.
- Eric
On Thu, Jun 08, 2006 at 10:53:06PM -0500, Spencer Shepler wrote:
On Thu, Eric Schrock wrote:
The problem is that statvfs() only returns two values (total blocks and
free blocks) from which we have to calculate three values: size, free,
?
From statvfs(2) the following are returned
only for the leaf device.
- Eric
,
- Eric
. Moving forward, we want to integrate this into fmd/libtopo so
that any fmd module can react to these events without having to manually
plumb up the underlying mechanism.
- Eric
time, but it's not 100 years.
Yes, I goofed on the math. It's still (256*10*5) seconds, but I
tried it again and came up with 1,481 days.
- Eric
the mountpoint or not.
- Eric
reserve enough
blocks in a pool in order to always be able to rm and destroy stuff.
Best regards,
Constantin
P.S.: Most US Sun employees are on vacation this week, so don't be alarmed
if the really good answers take some time :).
of the clone.
# uname -av
SunOS enterprise 5.11 snv_39 sun4u sparc SUNW,Ultra-2
#
James Dickens
uadmin.blogspot.com
. Eventually, we will need to grab a 128k block of contiguous
VA, but can't find a contiguous region, despite having plenty of memory
(physical or virtual).
This is only a problem on 32-bit kernels, because on a 64-bit kernel VA
is effectively limitless.
- Eric
(consistent snapshot naming,
automatically scheduled snapshots, retirement of old snapshots).
- Eric
similar to ps -o format, viz:
zfs list -o \
name=NAME---,used=USED,available,referenced,mountpoint
Users would probably specify spaces instead of hyphens, but it is difficult
to show that effectively in word-wrapped e-mail readers.
).
Examples include using vdevs of different redundancy (raidz + mirror),
as well as using different size devices. If you have other definitions
of silly, let us know what we should be looking for.
- Eric
disks.
This gives a nice bias towards one of the following configurations:
- 5x(7+2), 1 hot spare, 21.0TB
- 4x(9+2), 2 hot spares, 18.0TB
- 6x(5+2), 4 hot spares, 15.0TB
The performance characteristics of these configurations would be equally
interesting.
- Eric
On Tue, Jul 18, 2006 at 10:59:59AM -0700, Eric Schrock wrote:
One thing I would pay attention to is the future world of native ZFS
root. On a thumper, you only have two drives which are bootable from
the BIOS. For any application in which reliability is important, you
would have these two
want to
incorporate in future diagnosis engines.
The current thumper-specific diagnosis engine does this, but we're
working on generalizing the framework and more tightly integrating with
ZFS.
- Eric
, Jul 20, 2006 at 02:28:38PM -0500, Eric Lowe wrote:
Eric Schrock wrote:
What does 'zpool status -v' show? This sounds like you have corruption
# zpool status -v
pool: junk
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications
sparcv9 sparc
also try:
# mdb -k
::pgrep zpool | ::walk thread | ::findstack
to get an idea of where the threads are stuck. But most likely they are
waiting on some ZIO or lock, and it would be much faster to let a ZFS
specialist take a look.
- Eric
?
Absolutely.
- Eric
model.
- Eric
:00 PM Mountain
driver is exporting a device in /dev/dsk, but not exporting
basic information (such as the size or number of blocks) that ZFS (and
potentially the rest of Solaris) needs to interact with the device.
- Eric
___
zones-discuss mailing list
zones-discuss@opensolaris.org
?
No, not currently. We are working on more tightly integrating iSCSI
with ZFS. So, for example, you could do:
# zfs set iscsi=on pool/vol
And have it just work. There are a lot of details to work out, but we
have a good idea of what needs to be done.
- Eric
On Fri, Jul 28, 2006 at 09:29:42AM -0700, Eric Schrock wrote:
On Fri, Jul 28, 2006 at 12:43:37AM -0700, Frank Cusack wrote:
zfs automatically mounts locally attached disks (export/import aside). Does
it do this for iscsi? I guess my question is, does the solaris iscsi
initiator provide
or will it probably take longer to implement and test?
I can say for certain that it won't be in Update 3. After that, it's
hard to say. Once it integrates into Nevada we'll have a better idea
which Solaris 10 update it will appear in.
- Eric
] ufs
directory underneath it. Yuck!
As with the existing code, this would only apply when ZFS was
responsible for creating the mountpoint itself. The administrator could
change the underlying permissions, or pre-create the directory and ZFS
would not modify it.
- Eric
On Fri, Aug 04, 2006 at 09:48:37AM -0700, Eric Schrock wrote:
This indicates that share(1M) didn't produce any output, but returned
a non-zero exit status. I'm not sure why this would happen - can you
run the following by hand?
# share /export
# echo $?
Incidentally, the explicit 'zfs
latency
IO Summary: 537542 ops, 8899.9 ops/s, (1369/1369 r/w), 43.5mb/s, 307us
cpu/op, 5.4ms latency
, strange - I did try with value of 1, 60 and 256. And basically I
get the same results from varmail tests.
Well that's good data, too. It means that this isn't an impediment for
this particular test. It was just a shot in the dark...
- Eric
is: if you want /var, /var/adm, /var/run, or /tmp on
ZFS, use legacy mountpoints and put in /etc/vfstab. All other
filesystems can use standard ZFS mountpoints.
- Eric
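A sketch of that setup, with a hypothetical dataset name 'rootpool/var':

```shell
# Give the dataset a legacy mountpoint so ZFS does not mount it itself
zfs set mountpoint=legacy rootpool/var

# Then mount it through /etc/vfstab with an entry like:
#   rootpool/var  -  /var  zfs  -  yes  -
```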
to another drive/server if you request it to do
so. Looks like they just made snapshots accessible to desktop users.
Pretty impressive how they did the GUI work too.
overview, Apple
is not ignoring the roots of DTrace, and will not be hiding this fact
from their developers (not that they could).
- Eric
at 01:32:40PM -0400, Jeff Victor wrote:
Hi Eric,
Eric Schrock wrote:
...
Second, it forced the CLI to distinguish between a container and a
filesystem. At first this was accomplished with a trailing slash on the
name, and later introducing the 'ctr' type. Both were confusing to
users
a
reasonable option.
- Eric
On Thu, Aug 10, 2006 at 10:40:49AM -0700, Matthew Ahrens wrote:
On Thu, Aug 10, 2006 at 10:23:20AM -0700, Eric Schrock wrote:
A new option will be added, 'canmount', which specifies whether the
given filesystem can be mounted with 'zfs mount'. This is a boolean
in the interim, this is not a huge performance overhead.
Hope that helps,
- Eric
. Is there still a plan to fix this?
Phi
of engineering.
Yep, this is all possible. Of course once you do this, you have to be
more careful that you don't select overlapping names, since it's only
enforceable at mount time ;-)
- Eric
Following up on earlier mail, here's a proposal for create-time
properties. As usual, any feedback or suggestions are welcome.
For those curious about the implementation, this finds its way all the
way down to the create callback, so that we can pick out true
create-time properties (e.g.
that this is only a
requirement for the initial implementation of ZFS crypto if that would
be more appropriate.
- Eric
wrote:
Ok,
but what about the toplevel FS in each pool? Then we need a -o option for
zpool create also:
zpool create pool
zfs set compression=on pool
Or will it be impossible to set encryption on directly on pool as opposed
to pool/fs?
Daniel
Following up on a string of related proposals, here is another draft
proposal for user-defined properties. As usual, all feedback and
comments are welcome.
The prototype is finished, and I would expect the code to be integrated
sometime within the next month.
- Eric
INTRODUCTION
ZFS currently
names are case-insensitive (and internally are
always converted to lower-case), but the property values can be
anything.
- Eric
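A sketch of how such a property might be used, assuming the proposal as described (the 'com.example:backup' name is made up for illustration):

```shell
# Set a user-defined property; the colon-delimited name
# distinguishes it from native properties
zfs set com.example:backup=weekly tank/home

# Read it back
zfs get com.example:backup tank/home
```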
from
trying to access it.
That being said, the server should never hang - only proceed arbitrarily
slowly. When you say 'hang', what does that mean?
- Eric
drivers properly handle this case.
- Eric
dataset whose snapshots could serve as a clone
source for other delegated datasets. Since this is extremely rare, it
would probably suffice to have a special string like (shared) to
indicate that it is being shared between multiple zones.
Feel free to file an RFE.
- Eric
this on
a test machine and see what's going on.
- Eric
it working that way at one point. Apparently I'm imagining things,
or it got broken somewhere along the way. Please file a bug.
- Eric
On Wed, Sep 06, 2006 at 08:34:26AM -0700, Eric Schrock wrote:
Feel free to file an RFE.
Oops, found one already:
6313352 'zpool list' 'zfs list' should add '-z' '-Z' to identifier a zone
- Eric
to enforce sane behavior between the global and
local zone.
I'm reading that as 'the only real use case for 1 dataset in multiple zones'
(sorry if I'm misunderstanding you)?
Yep.
- Eric
complicated.
- Eric
use alternate root pools
(with '/' they become effectively temporary) and allow the host to come
all the way up before having the failback conversation with the other
host before explicitly importing the pool.
- Eric
that's actively shared, or by
force-importing a pool that's actively in use somewhere else.
- Eric
many important things to get done first.
If you don't care about unstable interfaces, you're welcome to use them
as-is. If you want a stable interface, you are correct that the only
way is through invoking 'zfs get' and 'zfs set'.
- Eric
will irrevocably
corrupt their data.
- Eric
stop thinking it's mounted somewhere else) - anybody have any ideas?
Thanks,
- Rich
with more
information than was available before.
- Eric
pool is
identified by a unique GUID. You cannot have two pools active on the
system with the same GUID. If this is really a valid use case, we could
invent a way to assign a new GUID on import.
- Eric
as different
paths and/or devids, then ZFS will behave exactly as I described and you
will be perfectly safe - you just won't be able to import the pool from
the mirrored devices.
- Eric
to change GUIDs on import).
I'm not sure what you mean by an unimplemented RFE being not even
tested.
- Eric
.
- Eric
of behavior, without having to rely on
the (documented) side-effect of importing with an alternate root.
Since the behavior is identical to '-R /', you can see why it hasn't
been implemented yet...
- Eric
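A sketch of the documented workaround (pool name 'tank' is hypothetical):

```shell
# Import with an alternate root of /; mountpoints resolve as usual,
# but the import is treated as temporary
zpool import -R / tank
```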
.
Taking a snapshot of a single LUN pool will not lead to inconsistency.
- Eric
this
mention that a hot spare action will happen?
Yep. I'll take care of this when I do the next phase of ZFS/FMA
integration.
- Eric
, both before and after you do the
import?
- Eric
driver
(PSARC 2006/501) was approved relatively recently. I would expect the
driver to be available in Nevada soon, but I have no further
information. You should follow up with the storage-discuss alias if you
want more details.
- Eric
on opensolaris.org for information on what we
expect. Chances are they have already heard of this bug, as I seem to
remember it coming up before.
- Eric
anything
different than if you had placed something into /etc/dfs/dfstab. I'd
like to understand how exactly this happened, though. This may also
overlap with a (soon to be completed) rewrite of how shares are managed
and tracked in Solaris.
- Eric