Hi,
I need to be a little bit more precise in how I formulate comments:
1. Yes, zpool remove is a desirable feature, no doubt about that.
2. Most of the cases where customers ask for zpool remove can be solved
with zfs send/receive or with zpool replace. Think Pareto's 80-20 rule.
2a.
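For what it's worth, a minimal sketch of the two workarounds from point 2
(the dataset and device names here are made up for illustration): move the
data to a new pool with
# zfs snapshot tank/home@migrate
# zfs send tank/home@migrate | zfs receive newtank/home
# zpool destroy tank
or swap a device out from under the pool with
# zpool replace tank c1t2d0 c1t5d0
which resilvers the data onto the new device and then detaches the old one.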
On Jan 30, 2007, at 09:52, Luke Scharf wrote:
Hey, I can take a double-drive failure now! And I don't even need
to rebuild! Just like having a hot spare with raid5, but without
the rebuild time!
Theoretically you want to rebuild as soon as possible, because
running in degraded mode
I understand all the math involved with RAID 5/6 and failure rates,
but it's wise to remember that even if the probabilities are small
they aren't zero. :)
And after 3-5 years of continuous operation, you had better decommission the
whole thing or you will have many disk failures.
Casper
As a followup, the system I'm trying to use this on is a dual PII 400 with
512MB. Real low budget.
Two 500 GB drives with two 120 GB drives in a RAIDZ. The idea is that I can get
two more 500 GB drives later to get full capacity. I tested going from a 20 GB
to a 120 GB drive and that worked well.
I'm finding
Hi there,
Richard's blog post
(http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance)
got me thinking. I posted a comment but it got mangled, and I'm wondering if I
got it right, so I'm reposting here:
Just to make sure I have things right:
Given (by the ZFS layer) a
David Magda wrote:
On Jan 30, 2007, at 09:52, Luke Scharf wrote:
Hey, I can take a double-drive failure now! And I don't even need
to rebuild! Just like having a hot spare with raid5, but without the
rebuild time!
Theoretically you want to rebuild as soon as possible, because running
in
On 01/30/07 17:59, Neal Pollack wrote:
I am assuming that one single command;
# zfs set sharenfs=ro bigpool
would share /export as a read-only NFS share?
It will share /export as read-only. The property will also
be inherited by all filesystems below export, so they
too will be shared read-only.
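As a quick sketch (the child dataset name below is invented), you can confirm
the inheritance with:
# zfs set sharenfs=ro bigpool
# zfs get -r sharenfs bigpool
which should show SOURCE as local on bigpool and inherited from bigpool on
everything beneath it. A child can still be overridden explicitly, e.g.
# zfs set sharenfs=rw bigpool/export/scratch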
On Jan 31, 2007, at 4:26 AM, Selim Daoud wrote:
you can still do some LUN masking at the HBA level (Solaris 10);
this feature is called blacklist
Oh, I'd do that, but Solaris isn't the only OS that uses arrays on my
SAN, and the other hosts are even cross-departmental. Thus masking from the
array is
Just to make sure I have things right:
Given (by the ZFS layer) a block D of data to store, RAID-Z will first split
the block into several smaller blocks D_1..D_n as needed and calculate the
parity block P from those. (n is the stripe width for this write.)
Then D_1..D_n and P are written to
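For the single-parity case this works out to plain XOR, i.e.
P = D_1 XOR D_2 XOR ... XOR D_n
so any one missing piece can be rebuilt from the survivors:
D_i = P XOR D_1 XOR ... XOR D_(i-1) XOR D_(i+1) XOR ... XOR D_n
(double parity adds a second, independently computed parity block).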
Wout Mertens wrote:
Hi there,
Richard's blog post
(http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance)
got me thinking. I posted a comment but it got mangled, and I'm wondering if I
got it right, so I'm reposting here:
Wout, I'm glad you started the thread here,
= What if ZFS had parity blocks?
Try this scenario:
Given data to store, that data is stored in regular
ZFS blocks, and a parity block is calculated. The
data and parity blocks are laid out across the
available disks in the pool.
When you need data from one of those blocks,
That's good to know.
It's a new Addonics 4 port card. Specifically:
ADS3GX4R5-ERAID5/JBOD 4-port ext. SATA II PCI-X
prtconf -v output:
pci1095,7124, instance #0
Driver properties:
name='sata' type=int items=1 dev=none
.
name='compatible'
On 1/31/07, Wout Mertens [EMAIL PROTECTED] wrote:
= What if ZFS had parity blocks?
Try this scenario:
Given data to store, that data is stored in regular
ZFS blocks, and a parity block is calculated. The
data and parity blocks are laid out across the
available disks in the pool.
Hello Jeremy,
Wednesday, January 31, 2007, 10:21:59 AM, you wrote:
JT On 1/30/07, Jeremy Teo [EMAIL PROTECTED] wrote:
JT On a related note: I've been itching to make what you want possible
JT also (i.e. detaching a vdev from a mirror and getting a zpool from it via
JT zpool import). I'll see what I
Which structure in ZFS stores file property info such as permissions, owner,
etc.? What is its relationship with the uberblock, block pointer, or metadnode? I
thought it would be the dnode. However, I don't know which structure in the dnode is
used to store such info. Thanks for your help.
dnode:
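One way to poke at this on a live pool (the dataset name, path, and object
number below are made up): the file attributes live in the znode, which is
kept in the dnode's bonus buffer, and zdb will dump it:
# ls -i /tank/home/foo
# zdb -dddd tank/home <object-number-from-ls>
The dump includes the owner, group, mode, size and timestamps for that object.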
On Wed, Jan 31, 2007 at 01:11:52PM -0800, Brian Gao wrote:
Which structure in ZFS stores file property info such as permissions, owner,
etc.? What is its relationship with the uberblock, block pointer, or metadnode?
I thought it would be the dnode. However, I don't know which structure in the dnode
is
Or look at pages 46-50 of the ZFS on-disk format document:
http://opensolaris.org/os/community/zfs/docs/ondiskformatfinal.pdf
There's a final version? That link appears to be broken (and the
latest version linked from the ZFS docs area
http://opensolaris.org/os/community/zfs/docs/ is dated
Hello.
2. Most of the cases where customers ask for zpool remove can be solved
with zfs send/receive or with zpool replace. Think Pareto's 80-20 rule.
This depends on how you define "most". In the cases I am looking at, I would
have to disagree.
2a. The cost of doing 2., including extra
http://napobo3.blogspot.com/2007/01/printing-problemz.html
On Wed, Jan 31, 2007 at 09:31:34PM +, James Blackburn wrote:
Or look at pages 46-50 of the ZFS on-disk format document:
http://opensolaris.org/os/community/zfs/docs/ondiskformatfinal.pdf
There's a final version? That link appears to be broken (and the
latest version linked from the
Thanks, Nico. I'll read the doc.
No, it's not the final version or even the latest!
The current on-disk format version is 3. However, it hasn't
diverged much, and the znode/ACL stuff hasn't changed.
Neil.
James Blackburn wrote On 01/31/07 14:31,:
Or look at pages 46-50 of the ZFS on-disk format document:
Final for the first draft. :-)
Use the .../community/zfs/docs link to get to this doc link at the
bottom of the page. The current version is indeed 0822.
More updates are needed, but the dnode description is still applicable.
Someone will correct me if I'm wrong.
cs
James Blackburn wrote:
Or
Hello Tom,
Wednesday, January 31, 2007, 2:01:19 PM, you wrote:
TB As a followup, the system I'm trying to use this on is a dual PII
TB 400 with 512MB. Real low budget.
TB Two 500 GB drives with two 120 GB drives in a RAIDZ. The idea is that I can
TB get two more 500 GB drives later to get full capacity.
Urk!
Where is this documented? And is it something you can do nothing
about, or are we ultimately trying to address it somewhere, somehow?
Thanks!!
Nathan.
Bill Moore wrote:
On Wed, Jan 31, 2007 at 05:01:19AM -0800, Tom Buskey wrote:
As a followup, the system I'm trying to use this on
Peter Buckingham wrote:
Hi Eric,
eric kustarz wrote:
The first thing I would do is see if any I/O is happening ('zpool
iostat 1'). If there's none, then perhaps the machine is hung (in which
case you would want to grab a couple of '::threadlist -v 10's from mdb
to figure out if there are hung
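Concretely (adjust the pool name as needed), that would be something like:
# zpool iostat 1
to watch for any I/O at all, and
# mdb -k
> ::threadlist -v 10
> $q
to capture the thread stacks a couple of times for comparison.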
I have Solaris 10 U2 with all the latest patches that started to crash recently
on a regular basis... so I started to dig and see what is causing it, and here is
what I found out:
panic[cpu1]/thread=2a1009a7cc0: really out of space
02a1009a6d70 zfs:zio_write_allocate_gang_members+33c
Krzys wrote:
I have Solaris 10 U2 with all the latest patches that started to crash
recently on a regular basis... so I started to dig and see what is causing
it, and here is what I found out:
panic[cpu1]/thread=2a1009a7cc0: really out of space
02a1009a6d70
I guess I need to upgrade this system then... thanks for the info...
Chris
On Thu, 1 Feb 2007, James C. McPherson wrote:
Krzys wrote:
I have Solaris 10 U2 with all the latest patches that started to crash
recently on a regular basis... so I started to dig and see what is causing it
and here
I wrote:
Just thinking out loud here. Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
8+1 RAID-Z on 9 244.2GB LUNs from a 6+1 HW RAID5 volume
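For reference, those two layouts would be built roughly like this (pool and
device names are placeholders):
# zpool create strpool c2t0d0
for the single-LUN stripe, and
# zpool create rzpool raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0
for the 8+1 RAID-Z across the nine smaller LUNs.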
[EMAIL PROTECTED] said:
On 2/1/07, Marion Hakanson [EMAIL PROTECTED] wrote:
There's also the potential of too much seeking going on for the raidz pool,
since there are 9 LUNs on top of 7 physical disk drives (though how Hitachi
divides/stripes those LUNs is not clear to me).
Marion,
That is the part of your setup
fishy smell way below...
Marion Hakanson wrote:
I wrote:
Just thinking out loud here. Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
8+1 RAID-Z on 9 244.2GB LUNs from a