Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV If ZFS is providing better data integrity than the current storage
DV arrays, that sounds to me like an opportunity for the next generation
DV of intelligent arrays to become better.
Actually they can't.
If you want end-to-end
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to
Hello przemolicc,
Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
ppf On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
Case in point, there was a gentleman who posted on the Yahoo Groups solx86
list and described how faulty firmware on a Hitachi HDS system damaged a
bunch of data.
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
Hello przemolicc,
Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
ppf On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
Case in point, there was a gentleman who posted on the Yahoo Groups solx86
list and
Hello,
What I wanted to point out is Al's example: he wrote about damaged data.
Data were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help.
ZFS can detect (and repair) errors on the disk surface, bad cables, etc., but it cannot
detect and repair errors in its own (ZFS) code.
I
Robert Milkowski wrote:
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV If ZFS is providing better data integrity than the current storage
DV arrays, that sounds to me like an opportunity for the next generation
DV of intelligent arrays to become better.
Actually they
[EMAIL PROTECTED] wrote:
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
What I wanted to point out is Al's example: he wrote about damaged data.
Data were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help.
ZFS can detect (and repair) errors on disk
Depends on your definition of firmware. In higher-end arrays the data is
checksummed when it comes in and a hash is written when it gets to disk.
Of course this is nowhere near end-to-end, but it is better than nothing.
The checksum is often stored with the data (so if the data is not
Depends on your definition of firmware. In higher-end arrays the data
is checksummed when it comes in and a hash is written when it gets to
disk. Of course this is nowhere near end-to-end, but it is better than
nothing.
... and code is code. Easier to debug is a context-sensitive term.
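To make the "where the checksum is computed" point concrete, here is a small Python sketch (the names and structure are mine, not any real array's firmware): a hash taken when data reaches the array catches later corruption on the media or cabling, but anything mangled before that point, including by the write-path code itself, is checksummed as-is and reads back as clean.

import hashlib

def array_write(block):
    # Array-internal integrity: hash computed when the data reaches the array.
    return {"data": block, "sum": hashlib.sha256(block).hexdigest()}

def array_read(stored):
    # Catches bit rot, bad cables, etc. below the checksum point.
    assert hashlib.sha256(stored["data"]).hexdigest() == stored["sum"], "media error"
    return stored["data"]

stored = array_write(b"application data")

# Corruption below the checksum point is detected on read:
bad_media = dict(stored, data=b"bit-rotted data")
# array_read(bad_media)  ->  AssertionError: media error

# Corruption above the checksum point (e.g. firmware mangles the block
# before it is hashed) is validated and returned without complaint:
mangled = array_write(b"firmware-mangled data")
assert array_read(mangled) == b"firmware-mangled data"

ZFS keeps its checksums up at the filesystem layer, which is what "end to end" buys you; as noted above, it still does nothing for a bug in the code that computes the checksum in the first place.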
Hello Noel,
Wednesday, June 28, 2006, 5:59:18 AM, you wrote:
ND a zpool remove/shrink type function is on our list of features we want
ND to add.
ND We have RFE 4852783 "reduce pool capacity" open to track this.
Is there someone actually working on this right now?
--
Best regards,
Robert Milkowski wrote:
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me
On Wed, Jun 21, 2006 at 04:34:59PM -0600, Mark Shellenbaum wrote:
Can you give us an example of a 'file' the ssh-agent wishes to open and
what the permissions are on the file and also what privileges the
ssh-agent has, and what the expected results are.
ssh-agent(1) should need to open no
On Jun 28, 2006, at 12:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror / RAID5: capacity = (N / 2) - 1
speed = (N / 2) - 1
minimum # disks to lose before loss of data:
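For what it's worth, a quick sketch of that arithmetic (Python, function names are mine, assuming N identical disks split evenly into two hardware RAID-5 LUNs that ZFS then mirrors, versus single-parity raidz over N/2 hardware mirror pairs):

def usable_disks_zfs_mirror_of_hw_raid5(n):
    per_lun = n // 2        # disks in each hardware RAID-5 LUN
    return per_lun - 1      # RAID-5 gives up one disk per LUN to parity;
                            # mirroring the two LUNs keeps one copy's worth

def usable_disks_raidz_of_hw_mirrors(n):
    mirror_luns = n // 2    # each hardware mirror pair shows up as one LUN
    return mirror_luns - 1  # single-parity raidz across those LUNs

for n in (8, 12, 16):
    print(n, usable_disks_zfs_mirror_of_hw_raid5(n),
             usable_disks_raidz_of_hw_mirrors(n))
# With 12 disks, both layouts expose 5 disks of usable capacity,
# matching the (N / 2) - 1 figure above.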
Mark Shellenbaum wrote:
Can you give us an example of a 'file' the ssh-agent wishes to open and
what the permissions are on the file and also what privileges the
ssh-agent has, and what the expected results are.
The whole point is that ssh-agent should NEVER be opening any files that
the user
Doug,
Very nice setup! As you mention, more notes would be very helpful, but
very neat stuff!
Thanks,
Tabriz
Doug Scott wrote:
I have posted a blog http://solaristhings.blogspot.com/ on how I have
configured a zfs root partition on my laptop. It is a slightly modified version
of Tabriz's
Dennis,
You are absolutely correct that the doc needs a step to verify
that the backup occurred.
I'll work on getting this step added to the admin guide ASAP.
Thanks for the feedback...
Cindy
Dennis Clarke wrote:
Am I missing something here? [1]
Dennis
[1] I am fully prepared for RTFM
For an embedded application, I'm looking at creating a minimal Solaris
10 U2 image which would include ZFS functionality. In quickly taking a
look at the opensolaris.org site under pkgdefs, I see three packages
that appear to be related to ZFS: SUNWzfskr, SUNWzfsr, and SUNWzfsu. Is
it naive
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
The main reason I don't see ZFS mirror / HW RAID5 as useful is this:
ZFS mirror / RAID5: capacity = (N / 2) - 1
speed = (N / 2) - 1
minimum # disks to lose before loss
Robert,
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to go is to use zfs to mirror
PT HW raid5, preferably with the HW raid5 LUNs being completely
PT
Hey Robert,
Well, not yet. Right now our top two priorities are improving
performance in multiple areas of zfs (soon there will be a performance
page tracking progress on the zfs community page), and also getting zfs
boot done. Hence, we're not currently working on heaps of brand new
Which is better -
zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least* three disk failures.
(4) More IOPs than raidz of hardware mirrors
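A rough sketch of why the "at least three disk failures" claim holds for a mirror of two RAID-5 LUNs (Python, hypothetical names; it simply brute-forces failure patterns under the assumption that each RAID-5 LUN survives any single disk failure and the pool survives while at least one LUN is intact):

from itertools import combinations

def pool_survives(failed_in_lun0, failed_in_lun1):
    # The mirror is fine as long as at least one RAID-5 side has lost <= 1 disk.
    return failed_in_lun0 <= 1 or failed_in_lun1 <= 1

def worst_case_tolerance(disks_per_lun):
    # Largest k such that *every* pattern of k failed disks is survivable.
    disks = [(lun, d) for lun in (0, 1) for d in range(disks_per_lun)]
    k = 0
    while all(pool_survives(sum(lun == 0 for lun, _ in failed),
                            sum(lun == 1 for lun, _ in failed))
              for failed in combinations(disks, k + 1)):
        k += 1
    return k

print(worst_case_tolerance(6))   # -> 3: any three failures are survivable;
                                 # losing data takes at least two disks on each side

Particular failure patterns do even better, e.g. one whole array plus one more disk on the surviving side, which is the "tolerance of whole-array failure" item above.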
Hello Peter,
Wednesday, June 28, 2006, 11:24:32 PM, you wrote:
PT Robert,
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to go is to use zfs to mirror
PT HW
Hello Erik,
Wednesday, June 28, 2006, 6:32:38 PM, you wrote:
ET Robert -
ET I would definitely like to see the difference between read on HW RAID5
ET vs read on RAIDZ. Naturally, one of the big concerns I would have is
ET how much RAM is needed to avoid any cache starvation on the ZFS
ET
Robert Milkowski wrote On 06/28/06 15:52,:
Hello Neil,
Wednesday, June 21, 2006, 8:15:54 PM, you wrote:
NP Robert Milkowski wrote On 06/21/06 11:09,:
Hello Neil,
Why is this option available then? (Yes, that's a loaded question.)
NP I wouldn't call it an option, but an internal
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote:
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
Given a reasonable number of hot-spares, I simply can't see the (very)
marginal increase in safety given by using HW RAID5 as outweighing the
considerable speed hit using RAID5 takes.
On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote:
Which is better -
zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5?
The latter. With a mirror of RAID-5 arrays, you get:
(1) Self-healing data.
(2) Tolerance of whole-array failure.
(3) Tolerance of *at least*