On Thu, 2006-05-11 at 10:27 -0700, Richard Elling wrote:
On Thu, 2006-05-11 at 10:31 -0600, Gregory Shaw wrote:
A couple of points/additions with regard to Oracle in particular:
When talking about large database installations, copy-on-write may
or may not apply. The files
On Fri, 2006-05-12 at 10:42 -0500, Anton Rang wrote:
Now latency-wise, the cost of a copy is small compared to the
I/O, right? So it now turns into an issue of saving some
CPU cycles.
CPU cycles and memory bandwidth (which both can be in short
supply on a database server).
We can
On Thu, 2006-05-11 at 17:01 -0700, Jeff Bonwick wrote:
plan A. To mirror on iSCSI devices:
keep one server with a set of zfs file systems
with 2 (sub)mirrors each; one of the mirrors uses
devices physically at a remote site, accessed as
iSCSI LUNs.
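For illustration, such a pool could be created like this (device names are
hypothetical; the second disk in each mirror would be the LUN discovered
over iSCSI from the remote site):
  # zpool create tank mirror c1t0d0 c4t0d0 mirror c1t1d0 c4t1d0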
On Tue, 2006-05-16 at 10:32 -0700, Eric Schrock wrote:
On Wed, May 17, 2006 at 03:22:34AM +1000, grant beattie wrote:
what I find interesting is that the SCSI errors were continuous for 10
minutes before I detached it; ZFS wasn't backing off at all. It was
flooding the VGA console
I need some real sample data on ZFS utilization. If you have
a moment, please send me the output of 'zpool list' for as
many systems as you've got running ZFS. Feel free to obfuscate
the NAME column or reply anonymously. What I'm interested in is
the SIZE and USED columns.
The reason I'm
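For anyone unfamiliar with it, 'zpool list' output looks like this (pool
name and numbers are invented for illustration):
  # zpool list
  NAME      SIZE    USED    AVAIL    CAP  HEALTH   ALTROOT
  tank      680G    396G    284G     58%  ONLINE   -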
On Fri, 2006-05-19 at 10:18 +1000, Nathan Kroenert wrote:
Just piqued my interest on this one -
How would we enforce quotas of sorts in large filesystems that are
shared? I can see times when I might want lots of users to use the same
directory (and thus, same filesystem) but still want to
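With today's bits, quotas are enforced per filesystem only, so the usual
workaround is one filesystem per user (names are examples):
  # zfs create tank/home/alice
  # zfs set quota=10g tank/home/alice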
On Tue, 2006-05-30 at 14:59 -0500, Anton Rang wrote:
On May 30, 2006, at 2:16 PM, Richard Elling wrote:
[assuming we're talking about disks and not hardware RAID arrays...]
It'd be interesting to know how many customers plan to use raw disks,
and how their performance relates to hardware
Jeff Bonwick wrote:
btw: I'm really surprised at how unreliable SATA disks are. I recently put a dozen
TBs of data on ZFS, and after just a few days I got a few hundred
checksum errors (raid-z was used there). And these disks are 500GB drives in a
3511 array. Well, that would explain some fsck's, etc. we saw
billtodd wrote:
I do want to comment on the observation that enough concurrent 128K I/O can
saturate a disk - the apparent implication being that one could therefore do
no better with larger accesses, an incorrect conclusion. Current disks can
stream out 128 KB in 1.5 - 3 ms., while taking
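To put rough numbers on that (my estimates for typical current drives):
with ~12 ms of seek plus rotational delay, a random 128 KB access costs
about 12 + 2 = 14 ms, or roughly 9 MB/s per disk, while a 1 MB access
costs about 12 + 16 = 28 ms, or roughly 36 MB/s -- four times the
throughput from the same spindle.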
Erik Trimble wrote:
That is, start out with adding the ability to differentiate between
access policies in a vdev. Generally, we're talking only about mirror
vdevs right now. Later on, we can consider the ability to migrate data
based on performance, but a lot of this has to take into
Eric Schrock wrote:
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote:
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
Flash is (can be) a bit more sophisticated. The problem is that they
have a limited write endurance -- typically spec'ed at 100k writes to
any
Dana H. Myers wrote:
What I do not know yet is exactly how the flash portion of these hybrid
drives is administered. I rather expect that a non-hybrid-aware OS may
not actually exercise the flash storage on these drives by default; or
should I say, the flash storage will only be available to a
Joe Little wrote:
On 6/22/06, Darren J Moffat [EMAIL PROTECTED] wrote:
Rich Teer wrote:
On Thu, 22 Jun 2006, Joe Little wrote:
Please don't top post.
What if your 32bit system is just a NAS -- ZFS and NFS, nothing else?
I think it would still be ideal to allow tweaking of things at
Dick Davies wrote:
I was wondering if anyone could recommend hardware
for a ZFS-based NAS for home use.
The 'zfs on 32-bit' thread has scared me off a mini-itx fanless
setup, so I'm looking at sparc or opteron. Ideally it would:
I think the issue with ZFS on 32-bit revolves around the
Joe Little wrote:
On 6/23/06, Roch [EMAIL PROTECTED] wrote:
Joe, you know this but for the benefit of others, I have to
highlight that running any NFS server this way may cause
silent data corruption from the client's point of view.
Whenever a server keeps data in RAM this way and does not
Olaf Manczak wrote:
Eric Schrock wrote:
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of
Siegfried Nikolaivich wrote:
But for ZFS, it has been said often that it currently performs
much better with a 64bit address space, such as that with
Opterons and other AMD64 CPUs. I think this would play a
bigger part in a ZFS server performing well than just MHZ
and cache size.
I will no
Dale Ghent wrote:
ZFS, we all know, is more than just a dumb fs like UFS. As mentioned,
it has metadata in the form of volume options and whatnot. So, sure, I
can still use my Legato/NetBackup/Amanda and friends to back that data
up... but if the worst were to happen and I find myself having
David Dyer-Bennet wrote:
Knowing what SATA controllers will work helps a great deal. Does the OS
interact with the hot-swap rack at all, or does it just notice the device
on the end of the SATA cable is gone? Is that yet another thing I have to
worry about compatibility on?
SATA is
Dale Ghent wrote:
See, you're talking with a person who saves prtvtoc output of all his
disks so that if a disk dies, all I need to do to recreate the dead
disk's exact slice layout on the replacement drive is to run that saved
output through fmthard. One second on the command line rather than
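For the archives, that command pair is (disk names hypothetical):
  # prtvtoc /dev/rdsk/c1t2d0s2 > /var/tmp/c1t2d0.vtoc
  # fmthard -s /var/tmp/c1t2d0.vtoc /dev/rdsk/c3t4d0s2
The first saves the layout; the second replays it onto the replacement.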
Michael Schuster - Sun Microsystems wrote:
Sean Meighan wrote:
I am not sure if this is a ZFS, Niagara, or something-else issue. Does
someone know why commands have the latency shown below?
1) do an ls of a directory. 6.9 seconds total, truss only shows .07
seconds.
[...]
this may be an
There are two questions here.
1. Can you add a redundant set of vdevs to a pool? Answer: yes.
2. What is the best way for Scott to grow his archive into his disks?
The answer to this is what I discuss below.
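As for question 1, a concrete example (pool and device names invented):
  # zpool add tank mirror c2t0d0 c2t1d0
This adds a new mirrored vdev, and ZFS then stripes across all vdevs.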
David Dyer-Bennet wrote:
Scott Roberts [EMAIL PROTECTED] writes:
I've been reading
David Abrahams wrote:
David Dyer-Bennet [EMAIL PROTECTED] writes:
Adam Leventhal [EMAIL PROTECTED] writes:
I'm not sure I even agree with the notion that this is a real
problem (and if it is, I don't think is easily solved). Stripe
widths are a function of the expected failure rate and
Kieran wrote:
I have seen the recommendation for the Marvell storage controller.
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
This uses the Marvell 88SX6081, which is supported by the marvell88sx(7D)
driver in Solaris. This is the same SATA controller chip used in
I too have seen this recently, due to a partially failed drive.
When I physically removed the drive, ZFS figured everything out and
I was back up and running. Alas, I have been unable to recreate it.
There is a bug lurking here; if someone has a more clever way to
test, we might be able to nail it
[stirring the pot a little...]
Jim Mauro wrote:
I agree with Greg - For ZFS, I'd recommend a larger number of raidz
luns, with a smaller number
of disks per LUN, up to 6 disks per raidz lun.
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2. For 3-5 disks,
Jeff Bonwick wrote:
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2.
Maybe I'm missing something, but it ought to be the other way around.
With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas
for 3x2-way mirroring, of the (6 choose 2) = 6*5/2 = 15
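Completing the arithmetic as I read it: of those 15 possible two-disk
failures, only the 3 that take out both halves of the same mirror lose
data, so 3x2-way mirroring survives 12/15 = 80% of double failures,
while RAID-Z2 survives all 15.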
Thanks Rob, one comment below.
Rob Logan wrote:
perhaps these are good picks:
5 x (7+2), 1 hot spare, 35 data disks - best safety
5 x (8+1), 1 hot spare, 40 data disks - best space
9 x (4+1), 1 hot spare, 36 data disks - best speed
1 x (45+1), 0 hot spares, 45 data disks - max space
This
Rich Teer wrote:
On Sat, 22 Jul 2006, Richard Elling wrote:
This one stretches the models a bit. In one model, the MTTDL is
For us storage newbies, what is MTTDL? I would guess Mean Time
To Data Loss, which presumably is some multiple of the drives'
MTBF (Mean Time Between Failures
Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these
systems, if customers raise calls/escalations against such systems then
our remote support/solution centre staff would find such an output
useful in identifying and verifying the config.
I don't have
Timing is everything :-)
http://docs.sun.com/app/docs/doc/819-6612
-- richard
Richard Elling wrote:
Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these
systems, if customers raise calls/escalations against such systems
then our remote support/solution
From a RAS perspective, ZFS's end-to-end data integrity feature is critical.
If the competing file system doesn't have this capability, then they can't play
in this sandbox.
-- richard
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the
other host.
Multiple hosts writing to the same pool won't work, but you could indeed
have two pools, one for each host, in a dual active-passive arrangement.
That
Brian Hechinger wrote:
On Fri, Jul 28, 2006 at 02:14:50PM +0200, Patrick Bachmann wrote:
systems config? There are a lot of things you know better off-hand
about your system; otherwise you need to do some benchmarking, which
ZFS would have to do too, if it were to give you the best performing
Brian Hechinger wrote:
On Fri, Jul 28, 2006 at 02:02:13PM -0700, Richard Elling wrote:
Joseph Mocker wrote:
Richard Elling wrote:
The problem is that there are at least 3 knobs to turn (space, RAS, and
performance) and they all interact with each other.
Good point. Then how about something
Darren J Moffat wrote:
So with that in mind this is my plan so far.
On the target (the V880):
Put all twelve 36G disks into a single zpool (call it iscsitpool).
Use iscsitadm to create 2 targets of 202G each.
On the initiator (the v40z):
Use iscsiadm to discover (import) the 2 202G targets.
Darren J Moffat wrote:
performance, availability, space, retention.
OK, something to work with. I would recommend taking advantage of ZFS'
dynamic stripe over 2-disk mirrors. This should give good performance,
with good data availability. If you monitor the status of the disks
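Concretely, such a layout is just (device names invented):
  # zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 \
      mirror c0t2d0 c1t2d0
ZFS then dynamically stripes writes across the three mirrors.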
Jonathan Edwards wrote:
Now with thumper - you are SPoF'd on the motherboard and operating
system - so you're not really getting the availability aspect from dual
controllers .. but given the value - you could easily buy 2 and still
come out ahead .. you'd have to work out some sort of timely
Jim Connors wrote:
Working to get ZFS to run on a minimal Solaris 10 U2 configuration.
What does minimal mean? Most likely, you are missing something.
-- richard
Hi Robert, thanks for the data.
Please clarify one thing for me.
In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
-- richard
Robert Milkowski wrote:
Hi.
3510 with two HW controllers, configured as one LUN in RAID-10 using 12 disks in the
head unit (FC-AL 73GB 15K disks).
Jesus Cea wrote:
Anton B. Rang wrote:
I have a two-vdev pool, just plain disk slices
If the vdevs are from the same disk, you are doomed.
ZFS tries to spread the load among the vdevs, so if the vdevs are from
the same disk, you will have a seek hell.
It is not clear to me that this is a
Dale Ghent wrote:
James C. McPherson wrote:
As I understand things, SunCluster 3.2 is expected to have support
for HA-ZFS, and until that version is released you will not be
running in a supported configuration, and so any errors you
encounter are *your fault alone*.
Still, after reading
Anantha N. Srirama wrote:
I did a non-scientific benchmark against ASM and ZFS. Just look for my posts
and you'll see it. To summarize, it was a statistical tie for simple loads of
around 2GB of data, and we've chosen to stick with ASM for a variety of reasons,
not the least of which is its
Peter Eriksson wrote:
There is nothing in the ZFS FAQ about this. I also fail to see how FMA could make any
difference since it seems that ZFS is deadlocking somewhere in the kernel when this happens...
Some people don't see a difference between "hung" and "patiently waiting".
There are failure
Podlipnik wrote:
When creating a raidz pool out of n disks, where n >= 2, the pool will get a size
equal to the smallest disk multiplied by n:
# zpool create -f newpool raidz c1t12d0 c1t10d0 c1t13d0
# zpool list
NAME      SIZE    USED    AVAIL    CAP  HEALTH   ALTROOT
newpool
David Dyer-Bennet wrote:
On 11/26/06, Al Hopper [EMAIL PROTECTED] wrote:
[4] I proposed this solution to a user on the [EMAIL PROTECTED]
list - and it resolved his problem. His problem - the system would reset
after getting about 1/2 way through a Solaris install. The installer was
simply
Matthew Ahrens wrote:
Elizabeth Schwartz wrote:
How would I use more redundancy?
By creating a zpool with some redundancy, eg. 'zpool create poolname
mirror disk1 disk2'.
after the fact, you can add a mirror using 'zpool attach'
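For example (names invented), to turn a single-disk vdev into a 2-way
mirror:
  # zpool attach poolname disk1 disk2
where disk2 becomes a mirror of the existing disk1.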
-- richard
Jason J. W. Williams wrote:
Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?
Yes. With constraints, however. What exactly are you trying to do?
-- richard
your data. The simplest way to implement
this with redundancy is to mirror the log zpool. You might try that
first, before you relayout the data.
-- richard
Best Regards,
Jason
On 11/28/06, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Is it possible to non-destructively
David Elefante wrote:
I had this happen on three different motherboards. So it seems that there
should be a procedure in the documentation that states: if your BIOS doesn't
support EFI labels, then you need to write ZFS to a partition (slice), not the
overlay, since using the overlay causes the BIOS to hang on
Hi Jason,
It seems to me that there is some detailed information which would
be needed for a full analysis. So, to keep the ball rolling, I'll
respond generally.
Jason J. W. Williams wrote:
Hi Richard,
Been watching the stats on the array and the cache hits are 3% on
these volumes. We're
Douglas Denny wrote:
On 12/4/06, James C. McPherson [EMAIL PROTECTED] wrote:
Is this normal behavior for ZFS?
Yes. You have no redundancy (from ZFS' point of view at least),
so ZFS has no option except panicking in order to maintain the
integrity of your data.
This is interesting from a
Anton B. Rang wrote:
And to panic? How can that in any sane way be a good
way to protect the application?
*BANG* - no chance at all for the application to
handle the problem...
I agree -- a disk error should never be fatal to the system; at worst, the file system
should appear to have been
Dale Ghent wrote:
Matthew Ahrens wrote:
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to
This looks more like a cabling or connector problem. When that happens
you should see parity errors and transfer rate negotiations.
-- richard
Krzys wrote:
Ok, so here is an update
I did restart my system; I powered it off and powered it on. Here is a screen
capture of my boot. I certainly do
BTW, there is a way to check what the SCSI negotiations resolved to.
I wrote about it once in a BluePrint
http://www.sun.com/blueprints/0500/sysperfnc.pdf
See page 11
-- richard
Richard Elling wrote:
This looks more like a cabling or connector problem. When that happens
you should see
Luke Schwab wrote:
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have dual FC connections to storage using two ports on an Emulex HBA.
In the Solaris ZFS admin guide, it says that a ZFS file system monitors disks by their path and their device ID. If a disk
Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically stripe across
the RAID-Zs? Your capacity and performance will go up with each
RAID-Z vdev you add.
Thanks, that's an interesting suggestion.
This has the benefit of allowing you to grow into your
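For example, growing a pool by one more RAID-Z vdev (names invented):
  # zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
New writes are then striped across the old and new RAID-Z vdevs.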
Jignesh K. Shah wrote:
I am already using symlinks.
But the problem is the ZFS framework won't know about them.
Can you explain how this knowledge would benefit the combination
of ZFS and databases? There may be something we could leverage here.
I would expect something like this from ZVOL
Jochen M. Kaiser wrote:
Dear all,
we're currently looking to restructure our hardware environment for
our datawarehousing product/suite/solution/whatever.
cool.
We're currently running the database side on various SF V440's attached via
dual FC to our SAN backend (EMC DMX3) with
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just not do that :-
Don't do that. The same should happen if you umount
BR Yes, absolutely. Set var in /etc/system, reboot, system comes up. That
BR happened almost 2 months ago, long before this lock insanity problem
BR popped up.
For the archives, a high level of lock activity can always be a problem.
The worst cases I've experienced were with record locking over
Robert Milkowski wrote:
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE Dale Ghent wrote:
Similar to UFS's onerror mount option, I take it?
RE Actually, it would be interesting to see how many customers change the
RE onerror setting. We have some data, just need more
Matthew C Aycock wrote:
We are currently working on a plan to upgrade our HA-NFS cluster that
uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is
there a known procedure or best practice for this? I have enough free disk
space to recreate all the filesystems and copy the
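One common way to do the copy step, assuming enough free space (paths
are examples):
  # zfs create tank/export
  # ufsdump 0f - /dev/rdsk/c0t0d0s7 | (cd /tank/export && ufsrestore rf -)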
Kory Wheatley wrote:
This question is concerning ZFS. We have a Sun Fire V890 attached to a EMC disk array.
Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNS. Now how would ZFS be used for the
best performance?
What I'm trying to ask is if you have 3 LUNS
Kory Wheatley wrote:
The LUNs will be on separate SPA controllers, not all on
the same controller, so that's why I thought if we split
our data across different disks and ZFS storage pools we would
get better I/O performance. Correct?
The way to think about it is that, in general, for best
Anton B. Rang wrote:
Also note that the UB is written to every vdev (4 per disk) so the
chances of all UBs being corrupted is rather low.
The chances that they're corrupted by the storage system, yes.
However, they are all sourced from the same in-memory buffer, so
an undetected in-memory
blocks, but it didn't work.
So, Richard Elling will likely have data but I have anecdotes:
I have some data, but without knowing more about the disk, it is
difficult to say what to do. In some cases a low-level format
will clear up some errors for a little while for some drives.
I've seen two
Jeremy Teo wrote:
On 12/16/06, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart
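For the record, the only knob today is global, not per-zpool, and it
carries exactly the silent-corruption risk for NFS clients that Roch
warned about earlier:
  set zfs:zil_disable = 1     (in /etc/system, then reboot)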
Additional comments below...
Christine Tran wrote:
Hi,
I guess we are all acquainted with the ZFS Wikipedia page?
http://en.wikipedia.org/wiki/ZFS
Customers refer to it, I wonder where the Wiki gets its numbers. For
example there's a Sun marketing slide that says unlimited snapshots
contradicted
comment far below...
Jonathan Edwards wrote:
On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment?
sidetracking below...
Matt Ingenthron wrote:
Mike Seda wrote:
Basically, is this a supported zfs configuration?
Can't see why not, but support or not is something only Sun support can
speak for, not this mailing list.
You say you lost access to the array though -- a full disk failure
Torrey McMahon wrote:
The first bug we'll get when adding a "ZFS is not going to be able to
fix data inconsistency problems" error message to every pool creation or
similar operation is going to be "Need a flag to turn off the warning
message..."
Richard pines for ditto blocks for data...
--
Dennis Clarke wrote:
Anton B. Rang wrote:
INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your
data.
Is this the official, long-term stance? I don't think it is. I think this
is an interpretation of
I think ZFS might be too smart here. The feature we like is that ZFS
will find the devices no matter what their path is. This is very much
a highly desired feature. If there are multiple paths to the same LUN,
then it does expect an intermediary to handle that: MPxIO, PowerPath, etc.
Jason Austin wrote:
A bit off the subject, but what would be the advantage in virtualization of using a
pool of files versus just creating another zfs on an existing pool? My purpose
for using the file pools was to experiment and learn about any quirks before I
go production. It let me do things
Tomas Ă–gren wrote:
df (GNU df) says there are ~850k inodes used, I'd like to keep those in
memory.. There is currently 1.8TB used on the filesystem.. The
probability of a cache hit in the user data cache is about 0% and the
probability that an rsync happens again shortly is about 100%..
Also,
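A note for the archives: later builds grew a per-dataset knob for
exactly this, caching only metadata in the ARC:
  # zfs set primarycache=metadata tank/data
(on builds where the primarycache property exists).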
Jason J. W. Williams wrote:
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what others had seen. Thank you in advance.
I've been using a simple model for small, random reads. In
Al Hopper wrote:
On Fri, 5 Jan 2007, Anton B. Rang wrote:
If [SSD or Flash] devices become more prevalent, and/or cheaper, I'm curious what
ways ZFS could be made to best take advantage of them?
The intent log is a possibility, but this would work better with SSD than
Flash; Flash
Matthew Ahrens wrote:
Robert Milkowski wrote:
Hello zfs-discuss,
zfs recv -v at the end reported:
received 928Mb stream in 6346 seconds (150Kb/sec)
I'm not sure but shouldn't it be 928MB and 150KB ?
Or perhaps we're counting bits?
That's correct, it is in bytes and should use capital B.
Darren Dunham wrote:
That would be useless, and not provide anything extra.
I think it's useless if a (disk) block of data holding RAIDZ parity
never has silent corruption, or if scrubbing was a lightweight operation
that could be run often.
The problem is that you will still need to
Peter Schuller wrote:
I've been using a simple model for small, random reads. In that model,
the performance of a raidz[12] set will be approximately equal to a single
disk. For example, if you have 6 disks, then the performance for the
6-disk raidz2 set will be normalized to 1, and the
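Extending that model (my reading of it): the same 6 disks as two 3-disk
raidz vdevs would normalize to 2, and as 3x2-way mirrors to roughly 6,
since each raidz vdev contributes about one disk's worth of random-read
IOPS while mirror halves can serve reads independently.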
Peter Schuller wrote:
Is this expected behavior? Assuming concurrent reads (not synchronous and
sequential) I would naively expect an ndisk raidz2 pool to have a
normalized performance of n for small reads.
q.v. http://www.opensolaris.org/jive/thread.jspa?threadID=20942&tstart=0
where
So, does anyone know if I can run ZFS on my iPhone? ;-)
-- richard
Erik Trimble wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to the way we currently use
extra RAM as a cache for FS read (and write, to a certain
I've got a few articles in my blog backlog which you should find useful as
you think about configuring ZFS. I just posted one on space vs MTTDL which
should appear shortly.
http://blogs.sun.com/relling
Enjoy.
-- richard
Patrick P Korsnick wrote:
hi,
i just set up snv_54 on an old p4 celeron system and even tho the processor is
crap, it's got 3 7200RPM HDs: 1 80GB and 2 40GBs. so i'm wondering if there is
an optimal way to lay out the ZFS pool(s) to make this old girl as fast as
possible
as it stands
[attempt to clean up the text, sorry if I miss something]
James Dickens wrote:
On 1/12/07, Kyle McDonald [EMAIL PROTECTED] wrote:
Patrick P Korsnick wrote:
i just set up snv_54 on an old p4 celeron system and even tho the
processor is crap, it's got 3 7200RPM HDs: 1 80GB and 2 40GBs. so i'm
FYI, Pawel Wojcik has been blogging about the design of
the SATA framework. This may answer some questions which
occasionally pop up on this forum.
http://blogs.sun.com/pawelblog
-- richard
Gael wrote:
Hello,
I'm currently trying to convert a system from Solaris 10 U1 with Veritas
VM to Solaris 10 U3 with ZFS... the san portion of the server is managed
by Hitachi HDLM 5.8.
I'm seeing two distinct errors... let me know if they are classic ones or
if I should open a ticket (bug
Gael wrote:
jumps8002:/etc/apache2 #cat /etc/release
Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 14 November 2006
Kyle McDonald wrote:
Richard Elling wrote:
roland wrote:
i have come across an interesting article at:
http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5
Can anyone comment on the claims or conclusions of the article itself?
It seems to me that they are not always clear about what
Rainer Heilke wrote:
I'll know for sure later today or tomorrow, but it sounds like they are
seriously considering the ASM route. Since we will be going to RAC later
this year, this move makes the most sense. We'll just have to hope that
the DBA group gets a better understanding of LUN's and
Rainer Heilke wrote:
What do you mean by UFS wasn't an option due to
number of files?
Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
Financials environment well exceeds this limitation.
Really?!? I thought Oracle would use a database for storage...
Also do you
I explore ZFS on X4500 (thumper) MTTDL models in yet another blog.
http://blogs.sun.com/relling/entry/a_story_of_two_mttdl
I hope you find it interesting.
-- richard
Rainer Heilke wrote:
If you plan on RAC, then ASM makes good sense. It is
unclear (to me anyway)
if ASM over a zvol is better than ASM over a raw LUN.
Hmm. I thought ASM was really the _only_ effective way to do RAC,
but then, I'm not a DBA (and don't want to be ;-) We'll be just
using raw
Frank Cusack wrote:
On January 20, 2007 1:07:27 PM -0800 David J. Orman
[EMAIL PROTECTED] wrote:
On that note, I've recently read it might be the case that the 1u sun
servers do not have hot-swappable disk drives... is this really true?
Yes.
Only for the x2100 (and x2100m2). It's not that
Frank Cusack wrote:
On January 19, 2007 5:59:13 PM -0800 David J. Orman
[EMAIL PROTECTED] wrote:
card that supports SAS would be *ideal*,
Except that SAS support on Solaris is not very good.
One major problem is they treat it like scsi when instead they should
treat it like FC (or native
Peter Schuller wrote:
Hello,
There have been comparisons posted here (and in general out there on the net)
for various RAID levels and the chances of e.g. double failures. One problem
that is rarely addressed though, is the various edge cases that significantly
impact the probability of loss
Jason J. W. Williams wrote:
Hi All,
This is a bit off-topic... but since the Thumper is the poster child
for ZFS I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server