Victor, thanks for posting that. It really is interesting to see exactly what
happened, and to read about how zfs pools can be recovered.
Your work on these forums has done much to reassure me that ZFS is stable
enough for us to use on a live server, and I look forward to seeing
Alex Peng [EMAIL PROTECTED] writes:
Wouldn't it be nice to have autocompletion in the zpool or zfs commands?
For instance -
zfs cr 'Tab key' will become zfs create
zfs clone 'Tab key' will show me the available snapshots
zfs set 'Tab key' will show me the available properties, then zfs set
Hm -
This caused me to ask the question: Who keeps the capabilities in sync?
Is there a programmatic way we can have bash (or other shells)
interrogate zpool and zfs to find out what their capabilities are?
I'm thinking something like having bash spawn a zfs command to see what
options are
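A minimal sketch of how that could look in bash, purely as an illustration: the
subcommand list here is hard-coded, which is exactly the keeping-in-sync problem
raised above, and a real hook would want to derive subcommands, snapshots, and
properties from the zfs usage output instead. This is not an existing completion
package; names are invented for the example.

  # Hypothetical bash completion hook for the zfs command.
  _zfs_complete() {
      local cur=${COMP_WORDS[COMP_CWORD]}
      # Hard-coded for illustration only; keeping this list current is the
      # synchronisation problem discussed above.
      local subcmds="create destroy snapshot rollback clone promote rename
                     list set get inherit mount unmount share unshare
                     send receive"
      if [ "$COMP_CWORD" -eq 1 ]; then
          # Complete the subcommand itself, e.g. "zfs cr<Tab>" -> "zfs create"
          COMPREPLY=( $(compgen -W "$subcmds" -- "$cur") )
      elif [ "${COMP_WORDS[1]}" = "clone" ]; then
          # Offer existing snapshots as arguments to "zfs clone"
          COMPREPLY=( $(compgen -W "$(zfs list -H -t snapshot -o name 2>/dev/null)" -- "$cur") )
      fi
  }
  complete -F _zfs_complete zfs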
Hi,
on a Solaris 10u5 box (X4500) with latest patches (Oct 8) one disk was
marked as failed. We replaced it yesterday; I configured the new disk via cfgadm
and told ZFS to replace the failed one with it:
cfgadm -c configure sata1/4
zpool replace atlashome c1t4d0
Initially it looked good, resilvering
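(For what it's worth, the usual way to keep an eye on a resilver started like
this is simply zpool status; the pool name below is the one from the commands
above.)

  # Show resilver progress and any errors on the pool
  zpool status -v atlashome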
On Thu, 2008-10-09 at 22:40 -0700, Alex Peng wrote:
Wouldn't it be nice to have autocompletion in the zpool or zfs commands?
For instance -
zfs cr 'Tab key' will become zfs create
zfs clone 'Tab key' will show me the available snapshots
zfs set 'Tab key' will show me the available
2008/10/9 Bob Friesenhahn [EMAIL PROTECTED]:
On Thu, 9 Oct 2008, Miles Nordin wrote:
catastrophically. If this is really the situation, then ZFS needs to
give the sysadmin a way to isolate and fix the problems
deterministically before filling the pool with data, not just blame
the sysadmin
Tim Foster wrote:
On Thu, 2008-10-09 at 22:40 -0700, Alex Peng wrote:
Wouldn't it be nice to have autocompletion in the zpool or zfs commands?
For instance -
zfs cr 'Tab key' will become zfs create
zfs clone 'Tab key' will show me the available snapshots
zfs set 'Tab key' will show me the
Hello all,
I think the problem here is ZFS's capacity for recovery from a failure.
Forgive me, but in trying to create code without failures, maybe the
hackers forgot that other people can make mistakes (even if they can't).
- ZFS does not need fsck.
Ok, that's a great statement,
Hello Everyone,
I have recently jumped onto the OpenSolaris bandwagon coming from FreeBSD,
mainly because FreeBSD's ZFS stability is pretty bad. So a few weeks ago I
rebuilt my BSD NAS to OpenSolaris using ZFS and CIFS. Everything has been
working fine and I'm loving OpenSolaris. I haven't
The circumstances where I have lost data have been when ZFS has not
handled a layer of redundancy. However, I am not terribly optimistic
about the prospects of ZFS on any device that hasn't committed writes
that ZFS thinks are committed.
FYI, I'm working on a workaround for broken devices. As
On 10/10/2008, at 5:12 PM, Nathan Kroenert wrote:
On 10/10/08 05:06 PM, Boyd Adamson wrote:
Alex Peng [EMAIL PROTECTED] writes:
Wouldn't it be nice to have autocompletion in the zpool or zfs commands?
For instance -
zfs cr 'Tab key' will become zfs create
zfs clone 'Tab key' will show me the
Hi Jeff,
On Fri, 2008-10-10 at 01:26 -0700, Jeff Bonwick wrote:
The circumstances where I have lost data have been when ZFS has not
handled a layer of redundancy. However, I am not terribly optimistic
about the prospects of ZFS on any device that hasn't committed writes
that ZFS thinks are
That sounds like a great idea for a tool, Jeff. Would it be possible to build
that in as a zpool recover command?
Being able to run a tool like that and see just how bad the corruption is, but
know it's possible to recover an older version would be great. Is there any
chance of outputting
Mark Wymer wrote:
Something for consideration, perhaps: as well as being able to specify the
quota and reservation sizes as an absolute number, it would be nice to be able
to specify a relative percentage too,
e.g. zfs create -o quota=10% tank/testfs
This would enable the quota to grow
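As far as I know a relative quota like that isn't accepted today, but the
effect can be approximated by computing the absolute value first. A rough
sketch only, with tank/testfs as an invented example and no handling of
rounding or later pool growth:

  # Derive ~10% of the pool's total space (used + available on the root
  # dataset) and apply it as an absolute quota, since quota=10% is rejected.
  avail=$(zfs get -Hp -o value available tank)
  used=$(zfs get -Hp -o value used tank)
  quota=$(( (avail + used) / 10 ))
  zfs create -o quota=$quota tank/testfs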
I'm wondering if this bug is fixed and if not, what is the bug number:
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B.
However, I just tried
jb == Jeff Bonwick [EMAIL PROTECTED] writes:
rmc == Ricardo M Correia [EMAIL PROTECTED] writes:
jb We need a little more Code of Hammurabi in the storage
jb industry.
It seems like most of the work people have to do now is cleaning up
after the sloppiness of others. At least it takes
On Fri, Oct 10, 2008 at 06:15:16AM -0700, Marcelo Leal wrote:
- ZFS does not need fsck.
Ok, that's a great statement, but I think ZFS needs one. Really does.
And in my opinion an enhanced zdb would be the solution. Flexibility.
Options.
About 99% of the problems reported as "I need ZFS fsck"
On Tue, October 7, 2008 09:19, Johan Hartzenberg wrote:
Wouldn't it be great if programmers could just focus on writing code
rather than having to worry about getting sued over whether or not someone
else is able to make a derivative program from their code?
If that's what you want, it's
Eric Schrock wrote:
On Fri, Oct 10, 2008 at 06:15:16AM -0700, Marcelo Leal wrote:
- ZFS does not need fsck.
Ok, that's a great statement, but I think ZFS needs one. Really does.
And in my opinion an enhanced zdb would be the solution. Flexibility.
Options.
About 99% of the problems
2008/10/10 Richard Elling [EMAIL PROTECTED]:
Timh Bergström wrote:
2008/10/9 Bob Friesenhahn [EMAIL PROTECTED]:
On Thu, 9 Oct 2008, Miles Nordin wrote:
catastrophically. If this is really the situation, then ZFS needs to
give the sysadmin a way to isolate and fix the problems
Ok, in addition to my "why do I have to use -F" post above, now I've tried it
with -F, but after the first in the series of snapshots gets sent, it gives me a
"cannot mount '/backup/shares': failed to create mountpoint" error.
On Fri, Oct 10, 2008 at 06:15:16AM -0700, Marcelo Leal wrote:
- ZFS does not need fsck.
Ok, that's a great statement, but I think ZFS needs one. Really does.
And in my opinion an enhanced zdb would be the solution. Flexibility.
Options.
About 99% of the problems reported as I
On Fri, 2008-10-10 at 11:23 -0700, Eric Schrock wrote:
But I haven't actually heard a reasonable proposal for what a
fsck-like tool (i.e. one that could repair things automatically) would
actually *do*, let alone how it would work in the variety of situations
it needs to (compressed RAID-Z?)
On Thu, Oct 9, 2008 at 6:56 PM, BJ Quinn [EMAIL PROTECTED] wrote:
So, here's what I tried - first of all, I set the backup FS to readonly.
That resulted in the same error message. Strange, how could something have
changed since the last snapshot if I CONSCIOUSLY didn't change anything or
CD
You've seen -F be necessary on some systems and not on others?
Also, was the mountpoint=legacy suggestion for my problem with not wanting to use -F
or for my "cannot create mountpoint" problem? Or both?
If you use legacy mountpoints, does that mean that mounting the parent
filesystem doesn't
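For what it's worth, here is roughly what the legacy-mountpoint suggestion
looks like in practice; a sketch only, with the filesystem name and stream file
chosen as examples rather than a tested recipe:

  # Stop ZFS from managing the mountpoint, so "zfs receive" no longer
  # tries (and fails) to create and mount it automatically.
  zfs set mountpoint=legacy backup/shares
  zfs receive -F backup/shares < /tmp/shares.stream
  # Mount by hand (or via /etc/vfstab) only when you need to browse it:
  mount -F zfs backup/shares /backup/shares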
David Dyer-Bennet [EMAIL PROTECTED] writes:
On Tue, October 7, 2008 09:19, Johan Hartzenberg wrote:
Wouldn't it be great if programmers could just focus on writing code
rather than having to worry about getting sued over whether or not someone
else is able to make a derivative program from
Timh Bergström wrote:
2008/10/10 Richard Elling [EMAIL PROTECTED]:
Timh Bergström wrote:
2008/10/9 Bob Friesenhahn [EMAIL PROTECTED]:
On Thu, 9 Oct 2008, Miles Nordin wrote:
catastrophically. If this is really the situation, then ZFS needs to
give the sysadmin
Sorry if this is the wrong list, but I would like to know if this is a
known problem, or if it's just me.
I'm in the middle of a live upgrade on an x86 box.
I type this command:
# luupgrade -u -n snv_99 -s /mnt
~~~output~~~
System has findroot enabled GRUB
No entry for BE snv_99 in GRUB menu
On Fri, Oct 10, 2008 at 04:09:08PM -0700, Joe S wrote:
Sorry if this is the wrong list, but I would like to know if this is a
known problem, or if it's just me.
I'm in the middle of a live upgrade on an x86 box.
I type this command:
# luupgrade -u -n snv_99 -s /mnt
[..]
Then the box
Paul,
I have a question about ZFS and how it protects data integrity in the
context of a replication scenario.
First, ZFS is designed such that all data on disk is in a consistent
state. Likewise, all data in a ZFS snapshot on disk is in a consistent
state. Further, ZFS, by virtue of
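To make the replication scenario concrete, the usual pattern is snapshot-based
send/receive; a minimal sketch, with dataset, snapshot, and host names all
invented for illustration:

  # Initial full send of a snapshot, then incremental deltas afterwards.
  zfs snapshot tank/data@rep1
  zfs send tank/data@rep1 | ssh remotehost zfs receive backup/data
  zfs snapshot tank/data@rep2
  zfs send -i tank/data@rep1 tank/data@rep2 | ssh remotehost zfs receive backup/data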
Hi
I've used format's volname command to give labels to my drives
according to their physical location. I did quite a lot of work
labeling all my drives (I couldn't figure out which controller got
which numbers so I had to disconnect drives one by one, and they're not
hotpluggable = lots of
On Oct 10, 2008, at 15:48, Victor Latushkin wrote:
I've mostly seen (2), because despite all the best practices out there,
single vdev pools are quite common. In all such cases that I had my
hands on it was possible to recover the pool by going back by one or two
txgs.
For better or worse
Or is there a way to mitigate a checksum error on a non-redundant zpool?
It's just like the difference between non-parity, parity, and ECC memory.
Most filesystems don't have checksums (non-parity), so they don't even
know when they're returning corrupt data. ZFS without any replication
can
I've been playing with Solaris Express Community Edition on some X4500
servers (SunOS kenny 5.11 snv_97 i86pc i386 i86pc), trying to get a head
start on a configuration for U6 when it comes out (soon, I hope).
Sometimes zfs commands take an extremely long time to execute; other times
they are very fast.
On Fri, Oct 10, 2008 at 9:14 PM, Jeff Bonwick [EMAIL PROTECTED] wrote:
Note: even in a single-device pool, ZFS metadata is replicated via
ditto blocks at two or three different places on the device, so that
a localized media failure can be both detected and corrected.
If you have two or more
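On the earlier question about mitigating checksum errors on a pool with no
redundant vdevs: one knob that already exists is the copies property, which
extends the ditto-block idea to user data. The dataset name below is invented;
this helps against localized media errors, not a whole-disk failure, and only
affects blocks written after it is set.

  # Store two copies of each user data block in this dataset
  zfs set copies=2 tank/important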
On Sat, Oct 11, 2008 at 03:19:49AM +0300, Marcus Sundman wrote:
I've used format's volname command to give labels to my drives
according to their physical location. I did quite a lot of work
labeling all my drives (I couldn't figure out which controller got
which numbers so I had to disconnect