On Thu, Nov 08, 2007 at 07:28:47PM -0800, can you guess? wrote:
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Nope.
A decent RAID-5 hardware implementation has no 'write hole' to worry about,
and one can make a software implementation similarly robust with
Hi all,
we have just bought a sun X2200M2 (4GB / 2 opteron 2214 / 2 disks 250GB
SATA2, solaris 10 update 4)
and a sun STK 2540 FC array (8 disks SAS 146 GB, 1 raid controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this
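For what it's worth, a minimal sketch of that mirror setup, assuming two LUNs exported by the 2540 (the c4t...d0 device names below are placeholders; check `format` for the real ones):

```shell
# List the disks Solaris sees over the FC link (non-interactively)
format < /dev/null

# Create a mirrored pool from the two array LUNs
# (device names are hypothetical -- substitute your own)
zpool create tank mirror c4t600A0B800012345Ad0 c4t600A0B800012345Bd0

# Verify layout and health
zpool status tank
```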
Hi Dan,
Dan Pritts wrote:
On Tue, Nov 13, 2007 at 12:25:24PM +0100, Paul Boven wrote:
We're building a storage system that should have about 2TB of storage
and good sequential write speed. The server side is a Sun X4200 running
Solaris 10u4 (plus yesterday's recommended patch cluster), the
On Fri, Nov 16, 2007 at 11:31:00AM +0100, Paul Boven wrote:
Thanks for your reply. The SCSI-card in the X4200 is a Sun Single
Channel U320 card that came with the system, but the PCB artwork does
sport a nice 'LSI LOGIC' imprint.
That is probably the same card I'm using; it's actually a Sun
We are having the same problem.
First with 125025-05 and then also with 125205-07
Solaris 10 update 4 - now with all patches
We opened a Case and got
T-PATCH 127871-02
We installed the Marvell driver binary 3 days ago.
T127871-02/SUNWckr/reloc/kernel/misc/sata
...
I personally believe that since most people will have hardware LUNs
(with underlying RAID) and cache, it will be difficult to notice
anything. Given that those hardware LUNs might be busy with their own
wizardry ;) you will also have to minimize the effect of the database
cache ...
On Thu, 15 Nov 2007, Brian Lionberger wrote:
The question is, should I create one zpool or two to hold /export/home
and /export/backup?
Currently I have one pool for /export/home and one pool for /export/backup.
Should it be one pool for both? Would this be better, and why?
One thing to
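For comparison, the single-pool variant of the question could look like this (pool and disk names are made up for illustration):

```shell
# One pool built from both mirrored pairs...
# (disk names are hypothetical)
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

# ...with a separate ZFS filesystem per mount point
zfs create tank/home
zfs create tank/backup
zfs set mountpoint=/export/home tank/home
zfs set mountpoint=/export/backup tank/backup
```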
I'll be setting up a small server and need two SATA-II ports for an x86
box. The cheaper the better.
Thanks!!
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is
How can I destroy the following pool?
pool: mstor0
id: 5853485601755236913
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
mstor0 UNAVAIL
Brain damage seems a bit of an alarmist label. While you're certainly right
that for a given block we do need to access all disks in the given stripe,
it seems like a rather quaint argument: aren't most environments that
matter trying to avoid waiting for the disk at all? Intelligent prefetch
I have a zpool issue that I need to discuss.
My application is going to run on a 3120 with 4 disks. Two(mirrored)
disks will represent /export/home and the other two(mirrored) will be
/export/backup.
The question is, should I create one zpool or two to hold /export/home
and /export/backup?
Splitting this thread and changing the subject to reflect that...
On 11/14/07, can you guess? [EMAIL PROTECTED] wrote:
Another prominent debate in this thread revolves around the question of
just how significant ZFS's unusual strengths are for *consumer* use.
WAFL clearly plays no part in
can you guess? billtodd at metrocast.net writes:
You really ought to read a post before responding to it: the CERN study
did encounter bad RAM (and my post mentioned that) - but ZFS usually
can't do a damn thing about bad RAM, because errors tend to arise either
before ZFS ever
msl wrote:
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are
using read/write block sizes of 8192. That gave the best performance I could
get, and it's working pretty well (NFSv3). I want to use all of ZFS's
advantages, and I know I can have a
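The ZFS knob that corresponds to that 8K block size is the `recordsize` property; a hedged sketch, with an invented pool/filesystem name:

```shell
# Match the filesystem record size to the 8K NFS transfer size
# (affects only files written after the change)
zfs set recordsize=8k tank/nfsdata
zfs get recordsize tank/nfsdata

# Export it over NFS
zfs set sharenfs=on tank/nfsdata
```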
Hey folks,
I have no knowledge at all about how streams work in Solaris, so this might
have a simple answer, or be completely impossible. Unfortunately I'm a Windows
admin, so I haven't a clue which :)
We're looking at rolling out a couple of ZFS servers on our network, and
instead of tapes
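The tape-replacement idea usually comes down to snapshots plus `zfs send`; a sketch under assumed pool, snapshot, and host names:

```shell
# Snapshot the filesystem to back up
zfs snapshot tank/data@backup-20071116

# Stream the snapshot to a file (the tape stand-in)...
zfs send tank/data@backup-20071116 > /backup/data-20071116.zfs

# ...or replicate it straight to a second ZFS server over ssh
zfs send tank/data@backup-20071116 | ssh backuphost zfs receive backuppool/data
```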
On Nov 15, 2007 9:42 AM, Nabeel Saad [EMAIL PROTECTED] wrote:
I am sure I will not use ZFS to its fullest potential at all. Right now I'm
trying to recover the dead disk, so if it works to mount a single disk/boot
disk, that's all I need; I don't need it to be very functional. As I
I have the following layout:
A 490 with 8 1.8GHz CPUs and 16G mem. 6 6140s with 2 FC controllers, using
the A1 and B1 controller ports at 4Gbps.
Each controller has 2G NVRAM.
On the 6140s I set up one raid0 LUN per SAS disk with a 16K segment size.
On the 490 I created a zpool with 8 4+1 raidz1s.
I am getting zpool
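A sketch of how the "8 4+1 raidz1s" pool described above would be built (the c2t...d0 names are placeholders for the 6140 LUNs):

```shell
# Each raidz1 vdev takes 5 LUNs: 4 data + 1 parity
zpool create tank \
  raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  raidz1 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
# ...six more raidz1 groups follow in the same pattern, 8 in total
zpool status tank
```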
Manoj,
# zpool destroy -f mstor0
Regards,
Marco Lopes.
Manoj Nayak wrote:
How can I destroy the following pool?
pool: mstor0
id: 5853485601755236913
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or
I have historically noticed in ZFS that whenever there is a heavy
writer to a pool via NFS, reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of
files, including some large ones 100+MB in size, can cause other
clients over NFS to pause
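One way to watch the read starvation happen is to sample per-vdev I/O while the heavy NFS write runs; a small sketch (pool name assumed):

```shell
# Per-vdev read/write ops and bandwidth, sampled every 5 seconds
zpool iostat -v tank 5

# Server-side NFS operation counters for the same window
nfsstat -s
```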
I've been observing two threads on zfs-discuss with the following
Subject lines:
Yager on ZFS
ZFS + DB + fragments
and have reached the rather obvious conclusion that the author can
you guess? is a professional spinmeister, who gave up a promising
career in political speech writing, to
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now all synchronous traffic
is directed at one device (and everything is synchronous in NFS).
Mind you 15MB/s seems a bit on the slow
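For context, the slog being discussed is attached with `zpool add`; a sketch with a hypothetical log device:

```shell
# Dedicate a separate intent-log (slog) device to the pool
# (c3t0d0 is a placeholder for the actual log device)
zpool add tank log c3t0d0

# All synchronous writes -- which is everything over NFS -- now go
# through this one device; confirm the layout
zpool status tank
```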
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now all synchronous traffic
is directed at one device (and everything
On Nov 16, 2007 9:17 PM, Joe Little [EMAIL PROTECTED] wrote:
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now