if you get rid of the HBA and log device, and run with the ZIL
disabled (if your workload is compatible with a disabled ZIL).
By "get rid of the HBA" I assume you mean put in a battery-backed RAID
card instead?
-J
WD's drives have gotten better the last few years but their quality is still
not very good. I doubt they test their drives extensively for heavy duty server
configs, particularly since you don't see them inside any of the major server
manufacturers' boxes.
Hitachi in particular does well in
This might be related to your issue:
http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html
On Saturday, August 6, 2011, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
In my experience, SATA drives behind SAS expanders just don't work.
They fail in the manner you
Just for history as to why Fishworks was running on this box...we were
in the beta program and have upgraded along the way. This box is an
X4240 with 16x 146GB disks running the Feb 2010 release of FW with
de-dupe.
We were getting ready to re-purpose the box and getting our data off.
We then
Hi,
I just replaced a drive (c12t5d0 in the listing below). For the first 6
hours of the resilver I saw no issues. However, sometime during the last
hour of the resilver, the new drive and two others in the same RAID-Z2 stripe
threw a couple of checksum errors. Also, two of the other drives in the
Thanks Tuomas. I'll run the scrub. It's an aging X4500.
-J
On Thu, Sep 30, 2010 at 3:25 AM, Tuomas Leikola tuomas.leik...@gmail.com wrote:
On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
Should I be worried about these checksum errors?
Maybe. Your
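If it helps anyone searching the archives later, the usual sequence here is a scrub followed by a status check; the pool name 'tank' below is just a placeholder:

  # zpool scrub tank
  # zpool status -v tank   (run after the scrub finishes; lists per-device read/write/checksum error counts)
  # zpool clear tank       (resets the counters once you're satisfied the devices are healthy)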
134 it is. This is an OpenSolaris rig that's going to be replaced within the
next 60 days, so I just need to get it to something that won't throw false
checksum errors like the 120-123 builds do and has decent rebuild times.
Future boxes will be NexentaStor.
Thank you guys. :)
-J
On Sun, Sep
Err...I meant Nexenta Core.
-J
On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
134 it is. This is an OpenSolaris rig that's going to be replaced within
the next 60 days, so I just need to get it to something that won't throw
false checksum errors like
If one was sticking with OpenSolaris for the short term, is something older
than 134 more stable/less buggy? Not using de-dupe.
-J
On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling richard.ell...@gmail.com wrote:
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to
I just witnessed a resilver that took 4h for 27GB of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150io/s each
Upgrading is definitely an option. What is the current snv favorite for ZFS
stability? I apologize, with all the Oracle/Sun changes I haven't been paying
as close attention to bug reports on zfs-discuss as I used to.
-J
Sent via iPhone
On Sep 26, 2010, at 10:22, Roy
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum sosu...@yahoo.com wrote:
Hi all,
If I want to make a
X4500 problems seconded. Still having issues with port resets due to
the Marvell driver. Though they seem considerably more transient and
less likely to lock up the entire system in the most recent (b72)
OpenSolaris builds.
-J
On Feb 12, 2008 9:35 AM, Carson Gaspar [EMAIL PROTECTED] wrote:
Hey Thiago,
SVM is a direct replacement for LVM. Also, you'll notice about a 30%
performance boost if you move from LVM to SVM. At least we did when we
moved a couple of years ago.
-J
On Jan 21, 2008 8:09 AM, Thiago Sobral [EMAIL PROTECTED] wrote:
Hi folks,
I need to manage volumes like LVM
It'd be a really nice feature. Combined with baked-in replication it
would be a nice alternative to our DD appliances.
-J
On Jan 21, 2008 2:03 PM, John Martinez [EMAIL PROTECTED] wrote:
Great question. I've been wondering this myself over the past few
weeks, as de-dup is becoming more popular
Hey Y'all,
I've posted the program (SnapBack) my company developed internally for
backing up production MySQL servers using ZFS snapshots:
http://blogs.digitar.com/jjww/?itemid=56
Hopefully, it'll save other folks some time. We use it a lot for
standing up new MySQL slaves as well.
Best
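For anyone who just wants the gist of the approach: quiesce MySQL briefly, snapshot the dataset (near-instantaneous), release the lock, then send the snapshot wherever it's needed. This is only a rough sketch, not the actual SnapBack code; tank/mysql and the host name are hypothetical:

  # with MySQL quiesced (e.g. FLUSH TABLES WITH READ LOCK held on an open connection):
  zfs snapshot tank/mysql@backup-20070101
  # release the lock, then replicate the snapshot to stand up a new slave:
  zfs send tank/mysql@backup-20070101 | ssh newslave zfs recv -F tank/mysql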
Hello,
There seems to be a persistent issue we have with ZFS where one of the
SATA disks in a zpool on a Thumper starts throwing sense errors, ZFS
does not offline the disk and instead hangs all zpools across the
system. If it is not caught soon enough, application data ends up in
an inconsistent
Hi Albert,
Thank you for the link. ZFS isn't offlining the disk in b77.
-J
On Jan 3, 2008 3:07 PM, Albert Chin
[EMAIL PROTECTED] wrote:
On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote:
There seems to be a persistent issue we have with ZFS where one of the
SATA disk
Seconded. Redundant controllers means you get one controller that
locks them both up, as much as it means you've got backup.
Best Regards,
Jason
On Mar 21, 2007 4:03 PM, Richard Elling [EMAIL PROTECTED] wrote:
JS wrote:
I'd definitely prefer owning a sort of SAN solution that would basically
Hey Guys,
Have any of y'all seen a condition where the ILOM considers a disk
faulted (status is 3 instead of 1), but ZFS keeps writing to the disk
and doesn't report any errors? I'm going to do a scrub tomorrow and
see what comes back. I'm curious what caused the ILOM to fault the
disk. Any
iostat -E doesn't like the X4500 for some reason.
Thank you again for your help.
Best Regards,
Jason
On Dec 4, 2007 2:54 AM, Ralf Ramge [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Have any of y'all seen a condition where the ILOM considers a disk
faulted (status is 3 instead of 1
A quick Google of ext3 fsck did not yield obvious examples of why people
needed to run fsck on ext3, though it did remind me that by default ext3 runs
fsck just for the hell of it every N (20?) mounts - could that have been part
of what you were seeing?
I'm not sure if that's what Robert
Hi Guys,
Someone asked me how to count the number of inodes/objects in a ZFS
filesystem and I wasn't exactly sure. zdb -dv filesystem seems
like a likely candidate but I wanted to find out for sure. As to why
you'd want to know this, I don't know their reasoning but I assume it
has to do with the
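zdb is indeed the usual answer; a hedged example, with tank/home as a hypothetical dataset (the exact output format varies between builds):

  # zdb -d tank/home
  Dataset tank/home [ZPL], ID 42, cr_txg 19, 1.21G, 35672 objects   (illustrative output)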
Is that anything at all like what you're after?
--Bill
On Wed, Oct 17, 2007 at 10:00:03PM -0600, Jason J. W. Williams wrote:
Hey Guys,
It's not possible yet to fracture a snapshot or clone into a
self-standing filesystem, is it? Basically, I'd like to fracture a
snapshot/clone into its own
4.) Promote clone_B.
5.) If clone_B's data doesn't work out, promote clone_F to roll forward.
Thank you in advance.
Best Regards,
Jason
On 10/18/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Bill,
You've got it 99%. I want to roll E back to say B, and keep G intact.
I really don't care about
Hey Guys,
It's not possible yet to fracture a snapshot or clone into a
self-standing filesystem, is it? Basically, I'd like to fracture a
snapshot/clone into its own FS so I can roll back past that snapshot in
the original filesystem and still keep that data.
Thank you in advance.
Best Regards,
Hi Dale,
We're testing out the enhanced arc_max enforcement (track DNLC
entries) using Build 72 right now. Hopefully, it will fix the memory
creep, which is the only real downside to ZFS for DB work it seems to
me. Frankly, our DB loads have improved performance with ZFS. I
suspect it's because
Hello All,
Awhile back (Feb '07) when we noticed ZFS was hogging all the memory
on the system, y'all were kind enough to help us use the arc_max
tunable to attempt to limit that usage to a hard value. Unfortunately,
at the time a sticky problem was that the hard limit did not include
DNLC entries
Hey All,
Is it possible (or even technically feasible) for zfs to have a
destroy-older-than feature? Basically, destroy any snapshot older than a
certain date?
Best Regards,
Jason
Hi Mark,
Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it just would be nice to have a built-in function. :-) Thank
you again.
Best Regards,
Jason
On 5/11/07, Mark J Musante [EMAIL PROTECTED] wrote:
On Fri, 11 May 2007, Jason J. W. Williams wrote
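Since it has to be scripted for now, here is a rough sketch of what such a cleanup script might look like; tank/data and the 30-day cutoff are arbitrary placeholders, and it assumes a date(1) that understands %s:

  cutoff=$(( $(date +%s) - 30*86400 ))
  for snap in $(zfs list -H -r -t snapshot -o name tank/data); do
      created=$(zfs get -H -p -o value creation "$snap")
      [ "$created" -lt "$cutoff" ] && zfs destroy "$snap"
  done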
Hi Guys,
Rather than starting a new thread I thought I'd continue this thread.
I've been running Build 54 on a Thumper since Mid January and wanted
to ask a question about the zfs_arc_max setting. We set it to
0x1 #4GB, however it's creeping over that until our kernel
memory usage is
Hi Rainer,
While I would recommend upgrading to Build 54 or newer to use the
system tunable, it's not that big a deal to set the ARC at boot.
We've done it on a T2000 for a while, until we could take it down for
an extended period of time to upgrade it.
Definitely WOULD NOT run a database
Hi Brian,
To my understanding the X2100 M2 and X2200 M2 are basically the same
board OEM'd from Quanta...except the 2200 M2 has two sockets.
As to ZFS and their weirdness, it would seem to me that fixing it
would be more an issue of the SATA/SCSI driver. I may be wrong here.
-J
On 3/12/07,
/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
just because that's the certified
Hi Gino,
Was there more than one LUN in the RAID-Z using the port you disabled?
-J
On 2/26/07, Gino Ruopolo [EMAIL PROTECTED] wrote:
Hi Jason,
Saturday we ran some tests and found that disabling an FC port under heavy load
(MPxIO enabled) often leads to a panic (using a RAID-Z!).
No
Hi Eric,
Everything Mark said.
We as a customer ran into this running MySQL on a Thumper (and T2000).
We solved it on the Thumper by limiting the ARC to 4GB:
/etc/system: set zfs:zfs_arc_max = 0x1 #4GB
This has worked marvelously over the past 50 days. The ARC stays
around 5-6GB now.
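(The hex value tends to get mangled by the list archive; 4GB is 0x100000000 bytes, so on builds that support the tunable the /etc/system line looks roughly like:

  set zfs:zfs_arc_max = 0x100000000

followed by a reboot for it to take effect.)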
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
just because that's the certified way that end up biting them in the
rear. Trusting your SAN arrays is one of them. That all being said,
the need to do
Hi Nicholas,
Actually Virtual Iron; they have a nice system at the moment with live
migration of Windows guests.
Ah. We looked at them for some Windows DR. They do have a nice product.
3. Which leads to: coming from Debian, how easy is system updates? I
remember with OpenBSD system updates
Hi Nicholas,
ZFS itself is very stable and very effective as a fast FS in our
experience. If you browse the archives of the list you'll see that NFS
performance is pretty acceptable, with some performance/RAM quirks
around small files:
http://www.opensolaris.org/jive/message.jspa?threadID=19858
Hi Jeff,
Maybe I misread this thread, but I don't think anyone was saying that
using ZFS on-top of an intelligent array risks more corruption. Given
my experience, I wouldn't run ZFS without some level of redundancy,
since it will panic your kernel in a RAID-0 scenario where it detects
a LUN is
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Best Regards,
Jason
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Could the replication engine eventually be integrated more
Hi Guys,
I seem to remember the Massive Array of Idle Disks guys ran into
a problem I think they called static friction, where idle drives would
fail on spin up after being idle for a long time:
http://www.eweek.com/article2/0,1895,1941205,00.asp
Would that apply here?
Best Regards,
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you can use both to their full potential?
Best Regards,
Jason
On
dramatically cut down on the power. What do y'all think?
Best Regards,
Jason
On 1/29/07, Toby Thain [EMAIL PROTECTED] wrote:
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
Hi Guys,
I seem to remember the Massive Array of Idle Disks guys ran into
a problem I think they called
Hi Jeff,
We're running a FLX210 which I believe is an Engenio 2884. In our case
it also is attached to a T2000. ZFS has run VERY stably for us with
data integrity issues at all.
We did have a significant latency problem caused by ZFS flushing the
write cache on the array after every write, but
Correction: ZFS has run VERY stably for us with data integrity
issues at all. should read ZFS has run VERY stably for us with NO
data integrity issues at all.
On 1/26/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Jeff,
We're running a FLX210 which I believe is an Engenio 2884. In our
To be fair, you can replace vdevs with same-sized or larger vdevs online.
The issue is that you cannot replace with smaller vdevs nor can you
eliminate vdevs. In other words, I can migrate data around without
downtime, I just can't shrink or eliminate vdevs without send/recv.
This is where the
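For reference, the online replacement mentioned above is a one-liner; the device names here are hypothetical:

  zpool replace tank c1t0d0 c2t0d0   (swap the old disk/LUN for an equal-sized or larger one, live)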
You could use SAN zoning of the affected LUNs to keep multiple hosts
from seeing the zpool. When failover time comes, you change the zoning
to make the LUNs visible to the new host, then import. When the old
host reboots, it won't find any zpool. Better safe than sorry
Or change the
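A rough sketch of that failover sequence, with a hypothetical pool name; the -f is what you need when the old host never got a chance to export cleanly:

  # on the old host, if it is still reachable:
  zpool export tank
  # rezone the LUNs to the new host, then on the new host:
  zpool import tank        (or: zpool import -f tank)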
Could the replication engine eventually be integrated more tightly
with ZFS? That would be slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Project Overview:
I propose the creation of a project on opensolaris.org, to bring to the community
Hi Wee,
Having snapshots in the filesystem that work so well is really nice.
How are y'all quiescing the DB?
Best Regards,
J
On 1/24/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 1/25/07, Bryan Cantrill [EMAIL PROTECTED] wrote:
...
after all, what was ZFS going to do with that expensive but
Hi Neal,
We've been getting pretty good performance out of RAID-Z2 with 3x
6-disk RAID-Z2 stripes. More stripes mean better performance all
around...particularly on random reads. But as a file-server that's
probably not a concern. With RAID-Z2 it seems to me 2 hot-spares is
very sufficient, but
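For concreteness, a pool along those lines (three 6-disk RAID-Z2 groups striped together, plus spares) is built roughly like this; the device names are hypothetical:

  zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
    spare c0t3d0 c1t3d0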
I believe the SmartArray is an LSI like the Dell PERC isn't it?
Best Regards,
Jason
On 1/23/07, Robert Suh [EMAIL PROTECTED] wrote:
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes consisting of Y disks?
Best Regards,
Jason
On 1/23/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 1/23/07, Neal Pollack [EMAIL PROTECTED] wrote:
Hi: (Warning, new zfs user question)
Hi Peter,
Ah! That clears it up for me. Thank you.
Best Regards,
Jason
On 1/23/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 1/23/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes
Hi All,
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolsheim designed at Kealia as a
Hi Prashanth,
My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to
ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits
of 50%+. SVM only reduced performance by about 15%. ZFS was similar,
though a tad higher.
Also, my understanding is you can't write to a ZFS
Wow. That's an incredibly cool story. Thank you for sharing it! Does
the Thumper today pretty much resemble what you saw then?
Best Regards,
Jason
On 1/23/07, Bryan Cantrill [EMAIL PROTECTED] wrote:
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope it's not
Hi Prashanth,
This was about a year ago. I believe I ran bonnie++ and IOzone tests.
Tried also to simulate an OLTP load. The 15-20% overhead for ZFS was
vs. UFS on a raw disk...UFS on SVM was almost exactly 15% lower
performance than raw UFS. UFS and XFS on raw disk were pretty similar
in terms
Hi Frank,
I'm sure Richard will check it out. He's a very good guy and not
trying to jerk you around. I'm sure the hostility isn't warranted. :-)
Best Regards,
Jason
On 1/22/07, Frank Cusack [EMAIL PROTECTED] wrote:
On January 22, 2007 10:03:14 AM -0800 Richard Elling
[EMAIL PROTECTED] wrote:
Hi David,
Depending on the I/O you're doing the X4100/X4200 are much better
suited because of the dual HyperTransport buses. As a storage box with
GigE outputs you've got a lot more I/O capacity with two HT buses than
one. That plus the X4100 is just a more solid box. The X2100 M2 while
a vast
Hi Guys,
The original X2100 was a pile of doggie doo-doo. All of our problems
with it go back to the atrocious quality of the nForce 4 Pro chipset.
The NICs in particular are just crap. The M2s are better, but the
MCP55 chipset has not resolved all of its flakiness issues. That being
said Sun
Hi David,
Glad to help! I don't want to bad-mouth the X2100 M2s that much,
because they have been solid. I believe the M2s are made/designed just
for Sun by Quanta Computer (http://www.quanta.com.tw/e_default.htm)
whereas the mobo in the original X2100 was a Tyan Tiger with some
slight
Hello all,
I have a question. Below are two ::memstat outputs about 5 days apart.
The interesting thing is the anonymous memory shows 2GB, though the
two major hogs of that memory (two MySQL instances) claim to be
consuming about 6.2GB (checked via pmap).
Also, it seems like the ARC keeps
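For anyone comparing the two views at home, the commands in question are roughly as follows (the PID is a placeholder):

  echo ::memstat | mdb -k   (kernel-wide page accounting: kernel, anon, exec/libs, page cache, free)
  pmap -x 1234               (per-process mappings for one of the MySQL instances)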
Hi Shannon,
The markup is still pretty high on a per-drive basis. That being said,
$1-2/GB is darn low for the capacity in a server. Plus, you're also
paying for having enough HyperTransport I/O to feed the PCI-E I/O.
Does anyone know what problems they had with the 250GB version of the
Thumper
Hi David,
I don't know if your company qualifies as a startup under Sun's regs
but you can get an X4500/Thumper for $24,000 under this program:
http://www.sun.com/emrkt/startupessentials/
Best Regards,
Jason
On 1/19/07, David J. Orman [EMAIL PROTECTED] wrote:
Hi,
I'm looking at Sun's 1U x64
Hi Frank,
Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.
-J
On 1/18/07, Frank Cusack [EMAIL PROTECTED] wrote:
THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well
Hi Anantha,
I was curious why segregating at the FS level would provide adequate
I/O isolation? Since all FS are on the same pool, I assumed flogging a
FS would flog the pool and negatively affect all the other FS on that
pool?
Best Regards,
Jason
On 1/17/07, Anantha N. Srirama [EMAIL
Hi Robert,
I see. So it really doesn't get around the idea of putting DB files
and logs on separate spindles?
Best Regards,
Jason
On 1/17/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 17, 2007, 11:24:50 PM, you wrote:
JJWW Hi Anantha,
JJWW I was curious why
Hi Philip,
I'm not an expert, so I'm afraid I don't know what to tell you. I'd
call Apple Support and see what they say. As horrid as they are at
Enterprise support they may be the best ones to clarify if
multipathing is available without Xsan.
Best Regards,
Jason
On 1/16/07, Philip Mötteli
Hi Torrey,
I think it does if you buy Xsan. It's still a separate product, isn't
it? Though it's more like QFS + MPxIO.
Best Regards,
Jason
On 1/15/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
2. I belive it's definitely possible to just correct your config under
Mac
Hi Roch,
You mentioned improved ZFS performance in the latest Nevada build (60
right now?)...I was curious if one would notice much of a performance
improvement between 54 and 60? Also, does anyone think the zfs_arc_max
tunable support will be made available as a patch to S10U3, or would
that
multipathing - Like Mpxio, DLM, VxDMP, etc. -
is usually separated from the filesystem layers.
Jason J. W. Williams wrote:
Hi Torrey,
I think it does if you buy Xsan. It's still a separate product, isn't
it? Though it's more like QFS + MPxIO.
Best Regards,
Jason
On 1/15/07, Torrey McMahon
Hi Robert,
Will build 54 offline the drive?
Best Regards,
Jason
On 1/13/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Saturday, January 13, 2007, 12:06:57 AM, you wrote:
JJWW Hi Robert,
JJWW We've experienced luck with flaky SATA drives in our STK array by
JJWW unseating and
Hi Robert,
We've experienced luck with flaky SATA drives in our STK array by
unseating and reseating the drive to cause a reset of the firmware. It
may be a bad drive, or the firmware may just have hit a bug. Hope it's
the latter! :-D
I'd be interested why the hot-spare didn't kick in. I thought
Hi Mark,
That does help tremendously. How does ZFS decide which zio cache to
use? I apologize if this has already been addressed somewhere.
Best Regards,
Jason
On 1/11/07, Mark Maybee [EMAIL PROTECTED] wrote:
Al Hopper wrote:
On Wed, 10 Jan 2007, Mark Maybee wrote:
Jason J. W. Williams
Hello all,
Just my two cents on the issue. The Thumper is proving to be a
terrific database server in all aspects except latency. While the
latency is acceptable, being able to add some degree of battery-backed
write cache that ZFS could use would be phenomenal.
Best Regards,
Jason
On 1/11/07,
in the discussion thread on this seemed to be
related to a lot of DNLC entries due to the workload of a file server.
How would this affect a database server with operations in only a
couple very large files? Thank you in advance.
Best Regards,
Jason
On 1/10/07, Jason J. W. Williams [EMAIL PROTECTED] wrote
Hi Kyle,
I think there was a lot of talk about this behavior on the RAIDZ2 vs.
RAID-10 thread. My understanding from that discussion was that every
write stripes the block across all disks in a RAID-Z/Z2 group, thereby
making writes to the group no faster than writes to a single disk.
However
Hi Robert,
I read the following section from
http://blogs.sun.com/roch/entry/when_to_and_not_to as indicating
random writes to a RAID-Z had the performance of a single disk
regardless of the group size:
Effectively, as a first approximation, an N-disk RAID-Z group will
behave as a single
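Back-of-the-envelope, using the ~150 IOPS-per-disk figure mentioned elsewhere in these threads (illustrative numbers only): a pool of three 6-disk RAID-Z2 groups delivers roughly 3 x 150 = ~450 random-read IOPS, i.e. about one disk's worth per group, which is why more, smaller groups help random reads.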
Hi Robert,
We've got the default ncsize. I didn't see any advantage to increasing
it outside of NFS serving...which this server is not. For speed the
X4500 is shaping up to be a killer MySQL platform. Between the blazing
fast procs and the sheer number of spindles, its performance is
tremendous. If
is loaded (arc_init()).
You should be able to change the value of arc.c_max through mdb and set
it to the value
you want. Exercise caution while setting it. Make sure you don't have
active zpools during this operation.
Thanks and regards,
Sanjeev.
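For the archives, the mdb sequence Sanjeev alludes to looked roughly like this on builds without the zfs_arc_max tunable; the address is illustrative output, not something to type, and 0x100000000 is 4GB:

  # mdb -kw
  > arc::print -a c_max
  ffffffffc00b2570 c_max = 0x3c0000000      (illustrative)
  > ffffffffc00b2570/Z 0x100000000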
Jason J. W. Williams wrote:
Hello,
Is there a way
We're not using the Enterprise release, but we are working with them.
It looks like MySQL is crashing due to lack of memory.
-J
On 1/8/07, Toby Thain [EMAIL PROTECTED] wrote:
On 8-Jan-07, at 11:54 AM, Jason J. W. Williams wrote:
...We're trying to recompile MySQL to give a
stacktrace
Hello,
Is there a way to set a max memory utilization for ZFS? We're trying
to debug an issue where ZFS is sucking all the RAM out of the box,
and it's crashing MySQL as a result, we think. Will ZFS reduce its cache
size if it feels memory pressure? Any help is greatly appreciated.
Best
Could this ability (separate ZIL device) coupled with an SSD give
something like a Thumper the write latency benefit of battery-backed
write cache?
Best Regards,
Jason
On 1/5/07, Neil Perrin [EMAIL PROTECTED] wrote:
Robert Milkowski wrote On 01/05/07 11:45,:
Hello Neil,
Friday, January 5,
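(That is essentially what separate log devices ended up enabling; on later builds, attaching an SSD or other fast device as a dedicated log is a one-liner, device name hypothetical:

  zpool add tank log c3t0d0

which moves synchronous write latency onto the log device.)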
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what others had seen. Thank you in advance.
Best Regards,
Jason
provide a nice safety net.
Best Regards,
Jason
On 1/3/07, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what
from the RAID-10 to the RAID-Z2 took 258 seconds. Would have
expected the RAID-10 to write data more quickly.
It's interesting to me that the RAID-10 pool registered the 38.4GB of
data as 38.4GB, whereas the RAID-Z2 registered it as 56.4GB.
Best Regards,
Jason
On 1/3/07, Jason J. W. Williams [EMAIL
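The likely explanation, consistent with the later reply that "zfs always showed the correct size": zpool-level numbers count raw space including RAID-Z parity, while zfs-level numbers count usable space. For 6-disk RAID-Z2 groups (4 data + 2 parity), 38.4GB usable x 6/4 = ~57.6GB raw, which is in the same ballpark as the 56.4GB reported.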
Hi Robert,
Our X4500 configuration is multiple 6-way (across controllers) RAID-Z2
groups striped together. Currently, 3 RZ2 groups. I'm about to test
write performance against ZFS RAID-10. I'm curious why RAID-Z2
performance should be good? I assumed it was an analog to RAID-6. In
our recent
Hi Robert,
That makes sense. Thank you. :-) Also, it was zpool I was looking at.
zfs always showed the correct size.
-J
On 1/3/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 3, 2007, 11:40:38 PM, you wrote:
JJWW Just got an interesting benchmark. I made two
Hi Anton,
Thank you for the information. That is exactly our scenario. We're 70%
write heavy, and given the nature of the workload, our typical writes
are 10-20K. Again the information is much appreciated.
Best Regards,
Jason
On 1/3/07, Anton B. Rang [EMAIL PROTECTED] wrote:
In our recent
Hi Brad,
I believe benr experienced the same/similar issue here:
http://www.opensolaris.org/jive/message.jspa?messageID=77347
If it is the same, I believe it's a known ZFS/NFS interaction bug, and
has to do with small file creation.
Best Regards,
Jason
On 1/2/07, Brad Plecs [EMAIL PROTECTED]
Hi Robert,
MPxIO had correctly moved the paths. More than one path to controller
A was OK, and one path to controller A for each LUN was active when
controller B was rebooted. I have a hunch that the array was at
fault, because it also rebooted a Windows server with LUNs only on
Controller A.
Just for what it's worth, when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), despite using
MPxIO, ZFS kernel panicked. Verified that all the LUNs were on the
correct controller when this occurred. It's not clear why ZFS thought
it lost a LUN but it did.
for slow Fridays before x-mas, I have a bit of time to play in the
lab today.
--Tim
-Original Message-
From: Jason J. W. Williams [mailto:[EMAIL PROTECTED]
Sent: Friday, December 22, 2006 10:56 AM
To: Tim Cook
Cc: Shawn Joy; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re
Hi Naveen,
I believe the newer LSI cards work pretty well with Solaris.
Best Regards,
Jason
On 12/20/06, Naveen Nalam [EMAIL PROTECTED] wrote:
Hi,
This may not be the right place to post, but hoping someone here is running a
reliably working system with 12 drives using ZFS that can tell me
popping and reinserting triggered a rebuild of
the drive.
Best Regards,
Jason
On 12/19/06, Toby Thain [EMAIL PROTECTED] wrote:
On 19-Dec-06, at 2:42 PM, Jason J. W. Williams wrote:
I do see this note in the 3511 documentation: Note - Do not use a
Sun StorEdge 3511 SATA array to store single
Not sure. I don't see an advantage to moving off UFS for boot pools. :-)
-J
On 12/20/06, James C. McPherson [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic
I do see this note in the 3511 documentation: Note - Do not use a Sun StorEdge 3511
SATA array to store single instances of data. It is more suitable for use in
configurations where the array has a backup or archival role.
My understanding of this particular scare-tactic wording (it's also in
Shouldn't there be a big warning when configuring a pool
with no redundancy, and/or should that not require a -f flag?
Why? What if the redundancy is below the pool... should we
warn that ZFS isn't directly involved in redundancy decisions?
Because if the host controller port goes flaky and