Nicolas Williams wrote:
On Fri, Feb 02, 2007 at 03:17:17PM -0500, Torrey McMahon wrote:
Nicolas Williams wrote:
But a continuous zfs send/recv would be cool too. In fact, I think ZFS
tightly integrated with SNDR wouldn't be that much different from a
continuous zfs send/recv
Jonathan Edwards wrote:
On Feb 2, 2007, at 15:35, Nicolas Williams wrote:
Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication
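A rough sketch of a poor man's version with today's tools - pool name, remote host, and interval are all hypothetical, and error handling is omitted:

  # seed once with a full stream:
  #   zfs snapshot tank/data@0
  #   zfs send tank/data@0 | ssh backuphost zfs recv tank/data
  PREV=0
  while :; do
      CUR=`date +%s`
      zfs snapshot tank/data@$CUR
      # incremental stream from the previous snapshot to the new one;
      # -F rolls the target back if it was touched in between
      zfs send -i tank/data@$PREV tank/data@$CUR | \
          ssh backuphost zfs recv -F tank/data
      zfs destroy tank/data@$PREV
      PREV=$CUR
      sleep 60
  done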
Gary Mills wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's
centrally-managed and shared between many servers?
It will work, but if the storage system corrupts
Dana H. Myers wrote:
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management
Dana H. Myers wrote:
Torrey McMahon wrote:
Dana H. Myers wrote:
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good
Richard Elling wrote:
Personally, I've never been in the situation where users ask for less
storage,
but maybe I'm just the odd guy out? ;-)
You just realized that JoeSysadmin allocated ten LUNs to the zpool when
he really only should have allocated one.
Al Hopper wrote:
On Fri, 26 Jan 2007, Torrey McMahon wrote:
Al Hopper wrote:
Now your accounting folks are going to be asking you to justify the
purchase of that hi-end SAN box and why you're not using ZFS
everywhere. :)
Oh - and the accounting folks love it when you tell them
Toby Thain wrote:
On 26-Jan-07, at 7:29 PM, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch
Albert Chin wrote:
On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote:
On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill [EMAIL PROTECTED]
wrote:
On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:
Well, he did say fairly cheap. The ST 3511 is about
Neal Pollack wrote:
Jason J. W. Williams wrote:
So I was curious if anyone had any insights into the history/origins
of the Thumper...or just wanted to throw more rumors on the fire. ;-)
Thumper was created to hold the entire electronic transcript of the
Bill Clinton impeachment
Torrey McMahon [EMAIL PROTECTED] wrote:
What does that view show?
Gael wrote:
All,
And on that one big mea culpa, the wanboot.conf install file
used the
solaris 9 miniroot to load that solaris 10 U3 machine...
explaining why the MD21
Robert Milkowski wrote:
2. I believe it's definitely possible to just correct your config under
Mac OS without any need to use other fs or volume manager, however
going to zfs could be a good idea anyway
That implies that MacOS has some sort of native SCSI multipathing like
Solaris Mpxio.
On 1/15/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
2. I believe it's definitely possible to just correct your config under
Mac OS without any need to use other fs or volume manager, however
going to zfs could be a good idea anyway
That implies that MacOS has some sort
Richard Elling wrote:
Gael wrote:
jumps8002:/etc/apache2 #cat /etc/release
Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
[EMAIL PROTECTED] wrote:
Does this seem feasible? Are there any blocking points that I am
missing
or unaware of? I am just posting this for discussion, it seems very
interesting to me.
Note that you'd actually have to verify that the blocks were the same;
you
You want to give ZFS multiple LUNs so it can have redundancy within the
pool. (Mirror or RAIDZ) Otherwise, you will not be able to recover from
certain types of errors. A zpool with a single LUN would only let you
detect the errors.
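For example (device names are illustrative), either of these layouts gives ZFS redundancy it can repair with:

  # two LUNs as a mirror
  zpool create tank mirror c2t0d0 c2t1d0
  # or three LUNs as a single-parity raidz
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0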
Karen Chau wrote:
on the 6130.
If I use 6 disks to create 3 LUNs (2 disks per LUN) and create a
raidz pool, I will have stripe w/parity at *BOTH* the LUN level and the ZFS
level. Would this cause a performance issue? How about recovery???
Torrey McMahon wrote On 01/03/07 09:56,:
You want to give ZFS multiple LUNs so
A LUN going away should not cause a panic. (The obvious exception
being the boot LUN) If mpxio saw the LUN move and everything moved ...
then it's a bug. The panic backtrace will point to the guilty party in
any case.
Jason J. W. Williams wrote:
Hi Robert,
MPxIO had correctly moved the
Derek E. Lewis wrote:
Greetings,
I'm trying to move some of my mirrored pooldevs to another controller.
I have a StorEdge A5200 (Photon) with two physical paths to it, and
originally, when I created the storage pool, I threw all of the drives
on c1. Several days after my realization of this,
Bill Sommerfeld wrote:
On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:
What happens if a fatal failure occurs after the txg which frees blocks
has been written, but before the txg doing the bleaching has
started/completed?
clearly you'd need to store the unbleached list
Roch - PAE wrote:
The fact that most FS do not manage the disk write caches
does mean you're at risk of data loss for those FS.
Does ZFS? I thought it just turned it on in the places where we had
previously turned it off.
Jonathan Edwards wrote:
On Dec 20, 2006, at 04:41, Darren J Moffat wrote:
Bill Sommerfeld wrote:
There also may be a reason to do this when confidentiality isn't
required: as a sparse provisioning hack...
If you were to build a zfs pool out of compressed zvols backed by
another pool, then it
Darren Reed wrote:
Darren,
A point I don't believe has yet been addressed in this
discussion is: what is the threat model?
Are we targetting NIST requirements for some customers
or just general use by everyday folks?
Even higher level: What problem are you/we trying to solve?
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't
Anton B. Rang wrote:
INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your data.
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead of
EIO, represents a bug?
Christine Tran wrote:
And the PowerPath question is important, customer is using PP right now.
I haven't heard of any PowerPath issues. Can you track down what it was
GeorgeW mentioned?
James W. Abendschan wrote:
Once the mirror was synced, I disconnected one of the iSCSI boxes
(pulled the ethernet plug from one of the VTraks), did some I/O
on the volume, and Solaris panicked. After it rebooted, I did a
'zpool scrub' and the T1000 again went into la-la land while the
scrubbing
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the information I have been gathering
it doesn't
Richard Elling wrote:
Jason J. W. Williams wrote:
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
contributor. :-)
The behavior is a
Robert Milkowski wrote:
Hello Torrey,
Tuesday, December 12, 2006, 11:40:42 PM, you wrote:
TM Robert Milkowski wrote:
Hello Matthew,
MCA Also, I am considering what type of zpools to create. I have a
MCA SAN with T3Bs and SE3511s. Since neither of these can work as a
MCA JBOD (at least
Robert Milkowski wrote:
Hello Matthew,
MCA Also, I am considering what type of zpools to create. I have a
MCA SAN with T3Bs and SE3511s. Since neither of these can work as a
MCA JBOD (at least that is what I remember) I guess I am going to
MCA have to add in the LUNS in a mirrored zpool of
Still ... I don't think a core file is appropriate. Sounds like a bug is
in order if one doesn't already exist. (zpool dumps core when missing
devices are used perhaps?)
Wee Yeh Tan wrote:
Ian,
The first error is correct in that zpool-create will not, unless
forced, create a file system if
Krzys wrote:
Thanks, ah another weird thing is that when I run format on that
drive I get a coredump :(
Run pstack /path/to/core and send the output.
Douglas Denny wrote:
In reading the list archives, am I right to conclude that disks larger
than 1 TB need to support EFI? In one of my projects the SAN does not
support EFI labels under Solaris. Does this mean I would have to
create a pool with disks < 1 TB?
Out of curiosity ... what array is
Not automagically. You'll need to do a dump/restore or copy from one to the
other.
- Original Message
From: Dan Christensen [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Thursday, November 16, 2006 5:52:51 PM
Subject: [zfs-discuss] SVM - UFS Upgrade
Is it possible to
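A sketch of the dump/restore route, assuming a UFS filesystem on an SVM metadevice going to a ZFS dataset (both names are hypothetical); ufsrestore just writes ordinary files, so it can restore into a ZFS directory:

  zfs create tank/data
  cd /tank/data
  ufsdump 0f - /dev/md/rdsk/d10 | ufsrestore rf -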
Richard Elling - PAE wrote:
Torrey McMahon wrote:
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
[SNIP]
Tunable in a form of pool property, with default 100%.
On the other hand maybe the simple algorithm Veritas has used is good
enough - a simple delay
Howdy Robert.
Robert Milkowski wrote:
You've got the same behavior with any LVM when you replace a disk.
So it's not something unexpected for admins. Also most of the time
they expect LVM to resilver ASAP. With the default setting not being 100%
you'll definitely see people complaining ZFS is
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
TM Robert Milkowski wrote:
Also scrub can consume all CPU power on smaller and older machines and
that's not always what I would like.
REP The big question, though, is 10% of what? User CPU? iops?
Robert Milkowski wrote:
Also scrub can consume all CPU power on smaller and older machines and
that's not always what I would like.
REP The big question, though, is 10% of what? User CPU? iops?
AH Probably N% of I/O Ops/Second would work well.
Or if 100% means full speed, then 10%
Richard Elling - PAE wrote:
ZFS fans,
Recalling our conversation about hot-plug and hot-swap terminology and
use,
I'm afraid to say that CR 6483250 has been closed as will-not-fix. No
explanation
was given.
A bug that is closed will-not-fix should, at the very least, have some
rationale as
Richard Elling - PAE wrote:
The better approach is for the file system to do what it needs
to do as efficiently as possible, which is the current state of ZFS.
This implies that the filesystem has exclusive use of the channel - SAN
or otherwise - as well as the storage array front end
Richard Elling - PAE wrote:
Incidentally, since ZFS schedules the resync iops itself, then it can
really move along on a mostly idle system. You should be able to resync
at near the media speed for an idle system. By contrast, a hardware
RAID array has no knowledge of the context of the data
Richard Elling - PAE wrote:
Robert Milkowski wrote:
I almost completely agree with your points 1-5, except that I think
that having at least one hot spare by default would be better than
having none at all - especially with SATA drives.
Yes, I pushed for it, but didn't win.
In a perfect
Jay Grogan wrote:
The V120 has 4GB of RAM. On the HDS side we are in a RAID 5 on the LUN and not
sharing any ports on the McDATA, but with so much cache we aren't close to
taxing the disk.
Are you sure? At some point data has to get flushed from the cache to
the drives themselves. In most
Spencer Shepler wrote:
On Wed, Adam Leventhal wrote:
On Wed, Nov 01, 2006 at 01:17:02PM -0500, Torrey McMahon wrote:
Is there going to be a method to override that on the import? I can see
a situation where you want to import the pool for some kind of
maintenance procedure but you
the
performance of a degraded RAIDZ volume?
This might alleviate the fears of some...even though we are out to get
you. Wait...did I say we? :)
--
Torrey McMahon
Sun Microsystems Inc.
Reads? Maybe. Writes are another matter. Namely the overhead associated
with turning a large write into a lot of small writes. (Checksums for
example.)
Jeremy Teo wrote:
Hello all,
Isn't a large block size a simple case of prefetching? In other words,
if we possessed an intelligent prefetch
Anthony Miller wrote:
Hi,
I've search the forums and not found any answer to the following.
I have 2 JBOD arrays, each with 4 disks.
I want to create a raidz on one array and have it mirrored to the other
array.
Do you think this will get you more availability compared to a simple
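ZFS can't layer a mirror on top of a raidz vdev, so the closest layout is mirrored pairs split across the two arrays - a sketch with hypothetical device names (c1* on one JBOD, c2* on the other):

  zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0 \
      mirror c1t3d0 c2t3d0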
Victor Latushkin wrote:
Darren J Moffat wrote:
Asif Iqbal wrote:
Hi
I have a X2100 with two 74G disks. I build the OS on the first disk
with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7
62G zfs. What is the fastest way to clone it to the second disk. I
have to build 10 of
Try it and let us know. :)
Seriously - Ya got me. It, along with the rest of the stack, would see
multiple drives. However, I'm not sure how the ZFS pool id, detection,
and the like would come into play.
Hong Wei Liam wrote:
Hi,
I understand that ZFS leaves multipathing to MPXIO or the
Do you have any multipathing enabled?
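If it's MPxIO on Solaris 10, one quick check (an aside, with the usual caveats):

  # lists the non-STMS to STMS device name mappings for
  # MPxIO-controlled LUNs; empty output suggests MPxIO is off
  stmsboot -L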
Frank Cusack wrote:
I don't have any problems with a dual attached 3511 using qle2462 cards
and mpxio. Not sure if that was the question or not. :-)
On October 19, 2006 3:46:55 PM -0400 Torrey McMahon
[EMAIL PROTECTED] wrote:
Try it and let us know
Robert Milkowski wrote:
This of course does work. I guess the real question was what will
happen if you now export your pool, then disable mpxio so you will see
the same disk at least twice and now you decide to import that pool.
Would it confuse ZFS?
That's what I was getting at. The
Matthew Ahrens wrote:
Or, as has been suggested, add an API for apps to tell us the
recordsize before they populate the file.
I'll drop a RFE in and point people at the number.
Torrey McMahon wrote:
Matthew Ahrens wrote:
Or, as has been suggested, add an API for apps to tell us the
recordsize before they populate the file.
I'll drop a RFE in and point people at the number.
For those playing at home the RFE is 6483154
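Until then, recordsize is only settable per dataset, and it only affects files written after the change - e.g., for a hypothetical database dataset:

  zfs set recordsize=8k tank/db
  zfs get recordsize tank/db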
Ciaran Johnston (AT/LMI) wrote:
[SNIP]
With our current filesystem, we create two 5-disk RAID5 arrays and
export these as two logical devices, with two spare disks. In a ZFS
scenario, is it worth us letting the 3510 do RAID5 in the way we
currently do, or should we let ZFS manage all the RAID
Richard Elling - PAE wrote:
Anantha N. Srirama wrote:
I'm glad you asked this question. We are currently expecting 3511
storage sub-systems for our servers. We were wondering about their
configuration as well. This ZFS thing throws a wrench in the old-line
thinking ;-) Seriously, we now have to
Matthew Ahrens wrote:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
Maybe I've been
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you
boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or
Bart Smaalders wrote:
Sergey wrote:
+ a little addition to the original quesion:
Imagine that you have a RAID attached to Solaris server. There's ZFS
on RAID. And someday you lost your server completely (fired
motherboard, physical crash, ...). Is there any way to connect the
RAID to some
Pierre Klovsjo wrote:
Is ZFS/MPxIO strong enough to be an alternative to VxVM/DMP today? Has anyone
made this change and if so, what was your experience?
I can only speak for the multipathing side but I know that we've had
plenty of sites change from DMP to MPxIO. Most of the rationale the
Richard Elling - PAE wrote:
More generally, I could suggest that we use an odd number of vdevs
for raidz and an even number for mirrors and raidz2.
Thoughts?
Sounds good to me. I'd make sure it's in the same section of the BP
guide as Align the block size with your app... type notes.
Jakob Praher wrote:
What about iSCSI on top of ZFS? Is that an option?
It will be shortly. Check out the iSCSI target project on the
opensolaris site.
but, in the past, I have
seen NFS grouped under the SAN umbrella. Most people hear SAN and think
FC block but recall that the S stands for Storage. Not very common but
it happens.
--
Torrey McMahon
Sun Microsystems Inc.
Darren Dunham wrote:
In my experience, we would not normally try to mount two different
copies of the same data at the same time on a single host. To avoid
confusion, we would especially not want to do this if the data represents
two different points in time. I would encourage you to stick
Richard Elling - PAE wrote:
This question was asked many times in this thread. IMHO, it is the
single biggest reason we should implement ditto blocks for data.
We did a study of disk failures in an enterprise RAID array a few
years ago. One failure mode stands heads and shoulders above the
Richard Elling - PAE wrote:
Non-recoverable reads may not represent permanent failures. In the case
of a RAID array, the data should be reconstructed and a rewrite + verify
attempted with the possibility of sparing the sector. ZFS can
reconstruct the data and relocate the block.
True but
Eric Schrock wrote:
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote:
It looks as this has not been implemented yet nor even tested.
What hasn't been implemented? As far as I can tell, this is a request
for the previously mentioned RFE (ability to change GUIDs on
Torrey McMahon wrote:
A day later I turn the host off. I go to the array and offer all six
LUNs, the pool that was in use as well as the snapshot that I took a
day previously, and offer all three LUNs to the host.
Errr...that should be
A day later I turn the host off. I go
James Dickens wrote:
eric was already talking about printing the last time a disk was
accessed when a disk was about to be imported; my idea would be to run
that check twice, once initially and if it looks like it could be
still in use, like the pool wasn't exported and last write occurred in
the
eric kustarz wrote:
I want per pool, per dataset, and per file - where all are done by the
filesystem (ZFS), not the application. I was talking about a further
enhancement to copies than what Matt is currently proposing - per
file copies, but its more work (one thing being we don't have
Matthew Ahrens wrote:
Nicolas Dorfsman wrote:
Hi,
There's something really bizarre in the ZFS snapshot specs: Uses no
separate backing store.
Hum...if I want to share one physical volume somewhere in my SAN
as THE snapshot backing store...it becomes impossible to do!
Really bad.
Is there
Bart Smaalders wrote:
Torrey McMahon wrote:
eric kustarz wrote:
I want per pool, per dataset, and per file - where all are done by
the filesystem (ZFS), not the application. I was talking about a
further enhancement to copies than what Matt is currently
proposing - per file copies
Matthew Ahrens wrote:
Nicolas Dorfsman wrote:
We need to think of ZFS as ZFS, and not as a new filesystem! I mean,
the whole concept is different.
Agreed.
So. What could be the best architecture ?
What is the problem?
With UFS, I used to have separate metadevices/LUNs for each
Erik Trimble wrote:
OK, this may seem like a stupid question (and we all know that there are
such things...)
I'm considering sharing a disk array (something like a 3510FC) between
two different systems, a SPARC and an Opteron.
Will ZFS transparently work to import/export pools between the two
UNIX admin wrote:
This is simply not true. ZFS would protect against
the same type of
errors seen on an individual drive as it would on a
pool made of HW raid
LUN(s). It might be overkill to layer ZFS on top of a
LUN that is
already protected in some way by the devices internal
RAID code but
Celso wrote:
Hopefully we can agree that you lose nothing by adding this feature, even if
you personally don't see a need for it.
If I read correctly, user tools will show more space in use when adding
copies, quotas are impacted, etc. One could argue the added confusion
outweighs the
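A sketch of where that confusion would show up, with a hypothetical dataset:

  zfs set copies=2 tank/home
  # blocks written from here on are stored twice, and the extra
  # space is charged to the files themselves, so reported usage
  # and quota consumption grow roughly twice as fast
  zfs get copies tank/home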
David Dyer-Bennet wrote:
While I'm not a big fan of this feature, if the work is that well
understood and that small, I have no objection to it. (Boy that
sounds snotty; apologies, not what I intend here. Those of you
reading this know how much you care about my opinion, that's up to
you.)
Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are
+5
I've been saving my +1s for a few weeks now. ;)
Richard Elling - PAE wrote:
There is another option. I'll call it grow into your storage.
Pre-ZFS, for most systems you would need to allocate the storage well
in advance of its use. For the 7xFLX380 case using SVM and UFS, you
would
Roch - PAE wrote:
Thinking some more about this. If your requirements do
mandate some form of mirroring, then it truly seems that ZFS
should take charge of that, if only because of its
self-healing characteristics. So I feel the storage array's
job is to export low-latency LUNs to ZFS.
Depends on the workload. (Did I miss that email?)
Peter Sundstrom wrote:
Hmm. Appears to be differing opinions.
Another way of putting my question is: can anyone guarantee that ZFS will not
perform worse than UFS on the array?
High speed performance is not really an issue, hence the reason
.
--
Torrey McMahon
Sun Microsystems Inc.
Tony Galway wrote:
A question (well, let's make it 3 really) – Is vdbench a useful tool when testing file system performance of a ZFS file system?
Not really. VDBench simply reads and writes from the allocated file.
Filesystem tests do things like create files, read files, delete files,
move
Tabriz Leman wrote:
Torrey McMahon wrote:
Lori Alt wrote:
No, zfs boot will be supported on both x86 and sparc. Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures
Lori Alt wrote:
No, zfs boot will be supported on both x86 and sparc. Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.
Hi Lori.
Can you expand a bit on the
I'm with ya on that one. I'd even go so far as to change single parity
RAID to single parity block. The talk of RAID throws people off
pretty easily, especially when you start layering ZFS on top of things
other than a JBOD.
Eric Schrock wrote:
I don't see why you would distinguish between
Path failover is not handled by ZFS. You would use mpxio, or other
software, to take care of path failover.
Pierre Klovsjo wrote:
Greetings all,
I have been given the task of playing around with ZFS and a StorEdge 9970 (HDS 9970) disk array. This setup will be duplicated into a production
Luke Lonergan wrote:
Torrey,
On 8/1/06 10:30 AM, Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre channel
attachments.
That means
Richard Elling wrote:
Jonathan Edwards wrote:
Now with thumper - you are SPoF'd on the motherboard and operating
system - so you're not really getting the availability aspect from
dual controllers .. but given the value - you could easily buy 2 and
still come out ahead .. you'd have to work
prasad wrote:
I have a StorEdge 3510 FC array which is currently configured in the following
way:
* logical-drives
LD   LD-ID      Size    Assigned  Type  Disks  Spare  Failed  Status
ld0  255ECBD0   2.45TB
Luke Lonergan wrote:
Torrey,
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, July 31, 2006 8:32 PM
You might want to check the specs of the 3510. In some
configs you
only get 2 ports. However, in others you can get 8.
Really? 8
Frank Cusack wrote:
On July 31, 2006 11:32:15 PM -0400 Torrey McMahon
[EMAIL PROTECTED] wrote:
You're comparing apples to a crate of apples. A more useful
comparison would be something along
the lines of a single R0 LUN on a 3510 with controller to a single
3510-JBOD with ZFS across all
(I hate when I hit the Send button when trying to change windows)
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
The correct comparison is done when all the factors are taken into
account. Making blanket statements like, ZFS JBODs are always ideal
Luke Lonergan wrote:
Torrey,
On 7/28/06 10:11 AM, Torrey McMahon [EMAIL PROTECTED] wrote:
That said a 3510 with a raid controller is going to blow the door, drive
brackets, and skin off a JBOD in raw performance.
I'm pretty certain this is not the case.
If you need sequential
Frank Cusack wrote:
On July 28, 2006 3:31:51 AM -0700 Louwtjie Burger
[EMAIL PROTECTED] wrote:
Hi there
Is it fair to compare the 2 solutions using Solaris 10 U2 and a
commercial database (SAP SD
scenario).
The cache on the HW raid helps, and the CPU load is less... but the
solution costs
Does format show these drives to be available and containing a non-zero
size?
Eric Schrock wrote:
On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote:
Eric,
Here is the output:
# ./dtrace2.dtr
dtrace: script './dtrace2.dtr' matched 4 probes
CPU ID
Given the amount of I/O wouldn't it make sense to get more drives
involved or something that has cache on the front end or both? If you're
really pushing the amount of I/O you're alluding to - hard to tell
without all the details - then you're probably going to hit a limitation
on the drive
Or if you have the right patches ...
http://blogs.sun.com/roller/page/torrey?entry=really_big_luns
Cindy Swearingen wrote:
Hi Julian,
Can you send me the documentation pointer that says 2 TB isn't supported
on the Solaris 10 6/06 release?
The 2 TB limit was lifted in the Solaris 10 1/06
Dick Davies wrote:
On 15/07/06, Torrey McMahon [EMAIL PROTECTED] wrote:
eric kustarz wrote:
martin wrote:
To monitor activity, use 'zpool iostat 1' to monitor just zfs
datasets, or iostat(1M) to include non-zfs devices.
Perhaps Martin was asking for something a little more robust
eric kustarz wrote:
martin wrote:
How could I monitor ZFS?
Or the zpool activity?
I want to know if anything wrong is going on.
If I could receive those warnings by email, it would be great :)
For pool health:
# zpool status -x
all pools are healthy
#
To monitor activity, use 'zpool
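A minimal cron-able sketch along those lines (the recipient is a placeholder):

  #!/bin/sh
  # mail a warning whenever zpool status -x reports trouble
  STATUS=`zpool status -x`
  if [ "$STATUS" != "all pools are healthy" ]; then
      echo "$STATUS" | mailx -s "zpool warning on `hostname`" root
  fi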