On 26/06/2007, at 12:08 PM, [EMAIL PROTECTED] wrote:
I've been saving up a few wishlist items for zfs. Time to share.
1. A verbose (-v) option to the zfs command line.
In particular, zfs sometimes takes a while to return from zfs
snapshot -r tank/[EMAIL PROTECTED] in the case where there are a
large number of descendant filesystems.
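(Purely as illustration, since no such flag exists today: output along
these lines is what the proposed -v might print, one line per snapshot
created:)

    $ zfs snapshot -v -r tank/fs@backup    # hypothetical flag
    creating snapshot 'tank/fs@backup'
    creating snapshot 'tank/fs/home@backup'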
Victor Engle wrote:
Roshan,
As far as I know, there is no problem at all with using SAN storage
with ZFS, and it does look like you were having an underlying problem
with either PowerPath or the array.
Correct. A write failed.
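(For reference, a sketch of how one might confirm such an underlying
write failure on Solaris; the pool name 'tank' is an assumption:)

    zpool status -x tank   # reports only pools that are degraded/faulted
    fmdump -eV | tail      # FMA error telemetry, including disk I/O errors
    iostat -En             # per-device soft/hard/transport error counters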
The best practices guide on opensolaris does
Same problem here (snv_60).
Robert, did you find any solutions?
gino
Shouldn't S10u3 just see the newer on-disk format and
report that fact, rather than complain it is corrupt?
Yep, I just tried it, and it refuses to zpool import the newer pool,
telling me about the incompatible version. So I guess the pool
format isn't the correct explanation for Dick Davies' (number9) problem.
It would be really handy if whoever was responsible for the message at:
http://www.sun.com/msg/ZFS-8000-A5
could add data about which zpool versions are supported at specific OS/patch
releases.
The current message doesn't help the user figure out how to accomplish their
implied task, which is importing the pool.
Gino wrote:
Same problem here (snv_60).
Robert, did you find any solutions?
A couple of weeks ago I put together an implementation of space maps which
completely eliminates loops and recursion from the space map alloc
operation, and makes it possible to implement different allocation
strategies quite easily
On Wed, 27 Jun 2007, Jürgen Keil wrote:
Yep, I just tried it, and it refuses to zpool import the newer pool,
telling me about the incompatible version. So I guess the pool format
isn't the correct explanation for Dick Davies' (number9) problem.
Have you tried creating the pool on the lower-version system?
On Tue, Jun 19, 2007 at 05:19:05PM +0200, Constantin Gonzalez wrote:
Hi,
http://blogs.sun.com/constantin/entry/new_zfs_white_paper_in
Excellent!!!
I think it is a pretty good idea to put the links for the
paper and slides on the ZFS Documentation page.
Richard Elling wrote:
Rob Logan wrote:
an array of 30 drives in a RaidZ2 configuration with two hot spares
I don't want to mirror 15 drives to 15 drives
ok, so space over speed... and are willing to toss somewhere between 4
and 15 drives for protection.
raidz splits the (up to 128k) block across the data disks in the set.
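(Rough arithmetic for the 30-drive raidz2 above, assuming a full 128 KB
block: 131072 bytes / 28 data disks ~= 4.6 KB of data written to each
disk, plus the two parity chunks.)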
Thanks to everyone for the sanity check - I think
it's a platform issue, but not an endian one.
The stick was originally DOS-formatted, and the zpool was built on the first
fdisk partition. So Sparcs aren't seeing it, but the x86/x64 boxes are.
I had a similar situation between x86 and SPARC with the pool version
number. When I created the pool on the LOWER rev machine, it was seen
by the HIGHER rev machine. This was a USB HDD, not a stick. I can now
move the drive between boxes.
HTH,
Mike
Dick Davies wrote:
Thanks to everyone for the sanity
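(A sketch of the lower-rev approach Mike describes; pool and device
names are assumptions:)

    # on the host running the older ZFS (lower pool version):
    zpool create usbpool c5t0d0
    zpool export usbpool
    # on the newer host; newer ZFS reads older pool versions:
    zpool import usbpool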
Jens,
Someone already added it to the ZFS links page, here:
http://opensolaris.org/os/community/zfs/links/
I just added a link to the links page from the zfs docs page
so it is easier to find.
Thanks,
Cindy
Jens Elkner wrote:
On Tue, Jun 19, 2007 at 05:19:05PM +0200, Constantin Gonzalez
I have 8 SATA ports on the motherboard, 4 PCI cards with 4 SATA ports
each, one PCIe 4x SATA card with two, and one PCIe 1x card with two:
28 SATA ports in all. The operating system itself will be on a hard
drive attached to one ATA-100 connector.
Kind of like a poor man's data centre, except not that cheap... It
still is estimated
On June 26, 2007 2:13:54 PM -0700 Joel Miller [EMAIL PROTECTED] wrote:
The 2500 series engineering team is talking with the ZFS folks to
understand the various aspects of delivering a complete solution. (There
is a lot more to it than it seems...)
Great news, you made my day! Any
William D. Hathaway wrote:
It would be really handy if whoever was responsible for the message at:
http://www.sun.com/msg/ZFS-8000-A5
could add data about which zpool versions are supported at specific OS/patch
releases.
Did you look at http://www.opensolaris.org/os/community/zfs/version/N
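(The supported versions can also be listed locally:)

    zpool upgrade -v   # lists every on-disk version this build supports
    zpool upgrade      # shows pools formatted with an older version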
Jef Pearlman wrote:
Absent that, I was considering using zfs and just having a single pool.
My main question is this: what is the failure mode of zfs if one of
those drives either fails completely or has errors? Do I permanently
lose access to the entire pool? Can I attempt to read
Perhaps I'm not asking my question clearly. I've already experimented
a fair amount with zfs, including creating and destroying a number of
pools with and without redundancy, replacing vdevs, etc. Maybe asking
by example will clarify what I'm looking for or where I've missed the
boat. The key
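(In case a concrete contrast helps, a sketch; device names are
assumptions:)

    zpool create tank c0d0 c1d0          # plain stripe: losing either disk can fault the whole pool
    zpool create tank mirror c0d0 c1d0   # mirror: survives a single-disk failure
    zfs set copies=2 tank                # ditto copies of user data (recent builds); not a substitute for redundant vdevs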
Hi,
In brief, what I am trying to do is to use libzpool to access a zpool -
like ztest does.
Matthew Ahrens wrote:
Manoj Joseph wrote:
Hi,
Replying to myself again. :)
I see this problem only if I attempt to use a zpool that already
exists. If I create one (using files instead of
Darren Dunham wrote:
The problem I've come across with using mirror or raidz for this setup
is that (as far as I know) you can't add disks to mirror/raidz groups,
and if you just add the disk to the pool, you end up in the same
situation as above (with more space but no redundancy).
You
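(Concretely, what does and does not exist today; device names are
assumptions:)

    zpool attach tank c0d0 c2d0          # attach: turns single disk c0d0 into a two-way mirror
    zpool add tank raidz c3d0 c4d0 c5d0  # add: creates a NEW raidz vdev; an existing raidz vdev cannot be widened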
The only thing I haven't found in zfs yet is metadata info, etc.
The previous 'next best thing' in FS was of course ReiserFS (4). Reiser3
was quite a nice thing, fast, journaled and all that, but Reiser4
promised to bring all those things that we see emerging now, like cross
FS search, any
Darren Dunham wrote:
The problem I've come across with using mirror or raidz for this setup
is that (as far as I know) you can't add disks to mirror/raidz groups,
and if you just add the disk to the pool, you end up in the same
situation as above (with more space but no redundancy).
On Wed, 2007-06-27 at 14:50 -0700, Darren Dunham wrote:
Darren Dunham wrote:
The problem I've come across with using mirror or raidz for this setup
is that (as far as I know) you can't add disks to mirror/raidz groups,
and if you just add the disk to the pool, you end up in the same
Jef Pearlman wrote:
Perhaps I'm not asking my question clearly. I've already experimented a fair amount
with zfs, including creating and destroying a number of pools with and without
redundancy, replacing vdevs, etc. Maybe asking by example will clarify what I'm
looking for or where I've
On Wed, 2007-06-27 at 12:03 -0700, Jef Pearlman wrote:
Jef Pearlman wrote:
Absent that, I was considering using zfs and just having a single pool.
My main question is this: what is the failure mode of zfs if one of
those drives either fails completely or has errors? Do I
On 28/06/2007, at 12:29 AM, Victor Latushkin wrote:
It is not so easy to predict. ZFS will coalesce writes. A single
transaction group may have many different writes in it. Also, raidz[12]
is dynamic, and will use what it needs, unlike separate volume managers,
which do not have any
Erik Trimble wrote:
If you had known about the drive sizes beforehand, then you could have
done something like this:
Partition the drives as follows:
A: one 20GB partition
B: one 20GB and one 10GB partition
C: one 40GB partition
D: one 40GB partition and two 10GB partitions
then you do:
zpool create tank
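(A hypothetical completion, assuming the intent is to mirror equal-sized
partitions; the partition names are illustrative only:)

    zpool create tank mirror A_20g B_20g \
                     mirror C_40g D_40g \
                     mirror B_10g D_10g
    # D's second 10GB partition is left over and could serve as a hot spare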