- Adam...
and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
Thanks,
Adam
On May 4, 2011, at 12:28 PM, Michael Schuster wrote:
On Wed, May 4, 2011 at 21:21, Adam Serediuk asered...@gmail.com wrote:
We have an X4540 running Solaris 11 Express snv_151a that has developed an
issue where its write performance is absolutely abysmal. Even touching a
file takes over five
and the DDT no
longer fits in RAM? That would create a huge performance cliff.
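For reference, one way to gauge whether the DDT could be outgrowing RAM is to dump its statistics with zdb (the pool name below is just an example); multiplying the total entry count by a few hundred bytes gives a rough estimate of the in-core footprint:

# zdb -DD tank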
-Original Message-
From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D. Mudama
Sent: Wed 5/4/2011 12:55 PM
To: Adam Serediuk
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Extremely Slow
cpu but not having much luck finding anything meaningful. Occasionally
the cpu usage for that thread will drop, and when it does performance of the
filesystem increases.
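One way to see where a busy kernel thread is spending its time (just a sketch; run as root and adjust the interval to taste) is to sample kernel stacks with DTrace:

# dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { trunc(@, 20); exit(0); }'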
On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote:
Dedup is disabled (confirmed to be.) Doing some digging it looks
the financial
commitment.
- Adam
In my case, it gives an error that I need at least 11 disks (which I don't)
but the point is that raidz parity does not seem to be limited to 3. Is this
not true?
RAID-Z is limited to 3 parity disks. The error message is giving you false hope
and that's a bug. If you had plugged in 11 disks
be twice as good. (I've just done some tests on the MacZFS port; see my blog for more info.)
Here's a good blog comparing some ZFS compression modes in the context of the
Sun Storage 7000:
http://blogs.sun.com/dap/entry/zfs_compression
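If you want to experiment on your own data, compression is a per-dataset property and the achieved ratio is easy to check afterwards (the dataset name is illustrative); note that only data written after the property is set gets compressed:

# zfs set compression=gzip-2 tank/data
# zfs get compression,compressratio tank/data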
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Robert,
I've filed a bug to track this issue. We'll try to reproduce the problem and
evaluate the cause. Thanks for bringing this to our attention.
Adam
On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for dataset used
Hey Robert,
How big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
Hi,
zpool create test raidz c0t0d0 c1t0d0
parity.
What is the total width of your raidz1 stripe?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of resilience and
performance, no doubt. Which makes me think the pretty interface becomes an
annoyance sometimes. Let's wait for 2010.Q1 :)
As always, we welcome feedback (although zfs-discuss is not the appropriate
forum), and are eager to improve the product.
Adam
--
Adam Leventhal
on fully
random workloads. Your mileage may vary but for now I am very happy
with the systems finally (and rightfully so given their performance
potential!)
--
Adam Serediuk
, etc. all make a large difference when dealing with very
large data sets.
On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote:
I manage several systems with near a billion objects (largest is
currently 800M) on each and also discovered slowness over time. This
is on X4540 systems with average file
Hi, any idea why zfs does not dedup files with this format?
file /opt/XXX/XXX/data
VAX COFF executable - version 7926
With dedup enabled, ZFS will identify and remove duplicates regardless of the
data format.
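A quick way to convince yourself (names are only examples): enable dedup on a dataset, write the same file a couple of times, and look at the pool-wide dedup ratio. Only blocks written after the property is set are deduplicated:

# zfs set dedup=on tank/data
# cp bigfile /tank/data/copy1
# cp bigfile /tank/data/copy2
# zpool get dedupratio tank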
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Bob,
My own conclusions (supported by Adam Leventhal's excellent paper) are that
- maximum device size should be constrained based on its time to
resilver.
- devices are growing too large and it is about time to transition to
the next smaller physical size.
I don't disagree
. As
with any block-based caching, this device has no notion of the semantic
meaning of a given block so there's only so much intelligence it can bring
to bear on the problem.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
! This is great news for ZFS. I'll be very interested to
see the results members of the community can get with your device as part
of their pool. COMSTAR iSCSI performance should be dramatically improved
in particular.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
1K per device with an additional 1K for parity.
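Spelling out the arithmetic for that example (assuming 512-byte sectors and single-parity raidz):

8K block = 16 data sectors
16 sectors / 8 data devices = 2 sectors (1K) per device
parity = 2 sectors (1K) on the remaining device
total on disk = 8K data + 1K parity = 9K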
Adam
On Jan 4, 2010, at 3:17 PM, Brad wrote:
If an 8K file system block is written on a 9 disk raidz vdev, how is the data
distributed (written) between all devices in the vdev since a zfs write is
one continuous IO operation
with identical parity?
You're right that a mirror is a degenerate form of raidz1, for example, but
mirrors allow for specific optimizations. While the redundancy would be the
same, the performance would not.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that examines recent trends in hard
drives and makes the case for triple-parity RAID. It's at least peripherally
relevant to this conversation:
http://blogs.sun.com/ahl/entry/acm_triple_parity_raid
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hi Giridhar,
The size reported by ls can include things like holes in the file. What space
usage does the zfs(1M) command report for the filesystem?
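For comparison, something along these lines shows the logical size versus the space actually allocated (file and dataset names are made up): ls -l reports the logical length, holes included, du reports blocks actually allocated, and zfs list shows dataset-level usage.

# ls -l /tank/data/file
# du -h /tank/data/file
# zfs list -o name,used,referenced tank/data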
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
Hi,
Reposting as I have not gotten any response.
Here is the issue. I created
Thanks for the response Adam.
Are you talking about ZFS list?
It displays 19.6 as allocated space.
What does ZFS treat as hole and how does it identify?
ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6 is pretty close to your computed size. Does your
the new bits.
Adam
On Dec 9, 2009, at 3:40 AM, Kjetil Torgrim Homme wrote:
I'm planning to try out deduplication in the near future, but started
wondering if I can prepare for it on my servers. one thing which struck
me was that I should change the checksum algorithm to sha256 as soon
, because
clone contents are (in this scenario) just some new data?
The dedup property applies to all writes so the settings for the pool of origin
don't matter, just those on the destination pool.
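In other words, something like this (names invented) will dedup on the receiving side regardless of how the sending pool was configured:

# zfs set dedup=on backup
# zfs send tank/fs@snap | zfs recv backup/fs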
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
What's the earliest build someone has seen this
problem? i.e. if we binary chop, has anyone seen it
in
b118?
We have used every stable build from b118 up, as b118 was the first reliable
one that could be used in a CIFS-heavy environment. The problem occurs on all
of them.
- Adam
of them when the bus resets. We
have 15 of these systems running, all with the same config using 2 foot
external cables...changing cables doesn't help. We have not tried using a
different JBOD.
- Adam
Hi Adam,
thanks for this info. I've talked with my colleagues
in Beijing (since
I'm in Beijing this week) and we'd like you to try
disabling MSI/MSI-X
for your mpt instances. In /etc/system, add
set mpt:mpt_enable_msi = 0
then regen your boot archive and reboot.
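Spelled out, "regen your boot archive and reboot" is roughly (exact steps may vary by release):

# bootadm update-archive
# init 6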
I had already done
occurring?
The process is currently:
zfs_send - mbuffer - LAN - mbuffer - zfs_recv
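Concretely, that pipeline looks something like this (host, port, snapshot name and buffer sizes are all just examples). On the receiving host:

# mbuffer -s 128k -m 1G -I 9090 | zfs recv -d backup

and on the sending host:

# zfs send tank/data@snap | mbuffer -s 128k -m 1G -O backuphost:9090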
--
Adam
with it enabled but I wasn't
about to find out.
Thanks
On 20-Nov-09, at 11:48 AM, Richard Elling wrote:
On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote:
I have several X4540 Thor systems with one large zpool that
replicate data to a backup host via zfs send/recv. The process
works
see from the consistent work of Eric Schrock.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
a substantial hit in throughput moving from one to the
other.
Tim,
That all really depends on your specific system and workload. As with any
performance-related matter, experimentation is vital for making your final
decision.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
So, while we are working on resolving this issue with Sun, let me approach this
from the another perspective: what kind of controller/drive ratio would be the
minimum recommended to support a functional OpenSolaris-based archival
solution? Given the following:
- the vast majority of IO to the
The iostat I posted previously was from a system we had already tuned the
zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10 in
actv per disk).
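(For reference, the tunable can be set in /etc/system, which takes effect after a reboot, or changed on a live system with mdb; the value 7 below is only an example:

set zfs:zfs_vdev_max_pending = 7

# echo zfs_vdev_max_pending/W0t7 | mdb -kw )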
I reset this value in /etc/system to 7, rebooted, and started a scrub. iostat
output showed busier disks (%b is higher,
The controller connects to two disk shelves (expanders), one per port on the
card. If you look back in the thread, you'll see our zpool config has one vdev
per shelf. All of the disks are Western Digital (model WD1002FBYS-18A6B0) 1TB
7.2K, firmware rev. 03.00C06. Without actually matching up
Our config is:
OpenSolaris snv_118 x64
1 x LSISAS3801E controller
2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise
we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has
one ZFS
Just submitted the bug yesterday, under advice of James, so I don't have a
number you can refer to yet...the change request number is 6894775 if that
helps or is directly related to the future bugid.
From what I've seen/read this problem has been around for a while but only rears
its ugly head
I don't think there was any intention on Sun's part to ignore the
problem...obviously their target market wants a performance-oriented box and
the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels
= 1 channel per drive = no contention for channels. The x4540 is a
LSI's sales literature on that card specs 128 devices which I take with a few
hearty grains of salt. I agree that with all 46 drives pumping out streamed
data, the controller would be overworked BUT the drives will only deliver data
as fast as the OS tells them to. Just because the speedometer
with a zfs
send/receive on the receiving side?
As with all property changes, new writes get the new properties. Old data
is not rewritten.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
And therein lies the issue. The excessive load that causes the IO issues is
almost always generated locally from a scrub or a local recursive ls used to
warm up the SSD-based zpool cache with metadata. The regular network IO to the
box is minimal and is very read-centric; once we load the box
Here is example of the pool config we use:
# zpool status
pool: pool002
state: ONLINE
scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:
NAME STATE READ WRITE CKSUM
pool002 ONLINE 0 0 0
raidz2 ONLINE
Cindy: How can I view the bug report you referenced? Standard methods show me
the bug number is valid (6694909) but no content or notes. We are having
similar messages appear with snv_118 with a busy LSI controller, especially
during scrubbing, and I'd be interested to see what they mentioned
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the
recently released Phase 17 but it didn't help. All firmware NVRAM settings are
default. Basically, when we put the disks behind this controller under load
(e.g. scrubbing, recursive ls on large ZFS filesystem) we get
I've filed the bug, but was unable to include the prtconf -v output as the
comments field only accepted 15000 chars total. Let me know if there is
anything else I can provide/do to help figure this problem out as it is
essentially preventing us from doing any kind of heavy IO to these pools,
I too have seen this problem.
I had done a zfs send from my main pool terra (6 disk raidz on seagate 1TB
drives) to a mirror pair of WD Green 1TB drives.
ZFS send was successful, however I noticed the pool was degraded after a while
(~1 week) with one of the mirror disks constantly re-silvering
parity
RAID-6: block-interleaved double distributed parity
raidz1 is most like RAID-5; raidz2 is most like RAID-6. There's no standard RAID
level that covers more than two parity disks; raidz3 is most like RAID-6,
but with triple distributed parity.
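To make the mapping concrete, the parity level is chosen at pool creation time, e.g. one of the following (device names are placeholders; raidz is an alias for raidz1):

# zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0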
Adam
--
Adam Leventhal, Fishworks
description of the two issues. This is for interest
only and does not contain additional discussion of symptoms or
prescriptive
action.
Adam
---8---
1. In situations where a block read from a RAID-Z vdev fails to checksum
but there were no errors from any of the child vdevs (e.g. hard
drives) we
must
should take a look at?
Absolutely not. That is an unrelated issue. This problem is isolated to
RAID-Z.
And good luck with the fix for build 124. Are we talking days or weeks for the
fix to be available, do you think? :) --
Days or hours.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to the last either later today or tomorrow.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hi Trevor,
We intentionally install the system pool with an old ZFS version and don't
provide the ability to upgrade. We don't need or use (or even expose) any of
the features of the newer versions so using a newer version would only create
problems rolling back to earlier releases.
Adam
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying RAID-Z
until this issue is resolved. I anticipate having it fixed in build 124.
Apologies for the inconvenience.
Adam
On Aug 28, 2009, at 8:20 PM, James Lever wrote:
On 28/08/2009, at 3:23 AM, Adam Leventhal
be satisfying to add another request for it, Matt is already cranking on it as
fast as he can and more requests for it are likely to have the opposite of the
intended effect.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And
how is status on BP rewrite? Far away? Not started yet? Planning?
BP rewrite is an important component technology, but there's a bunch beyond
that. It's not a high priority right now for us at Sun.
Adam
--
Adam Leventhal
Hey Gary,
There appears to be a bug in the RAID-Z code that can generate
spurious checksum errors. I'm looking into it now and hope to have it
fixed in build 123 or 124. Apologies for the inconvenience.
Adam
On Aug 25, 2009, at 5:29 AM, Gary Gendel wrote:
I have a 5-500GB disk Raid-Z
But the real question is whether the enterprise drives would have
avoided your problem.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-26, at 11:38, Troels Nørgaard Nielsen t...@t86.dk wrote:
Hi Tim Cook.
If I was building my own system again, I would prefer not to go with
consumer
do better with copies=3 ;-)
Maybe this is noted somewhere, but I did not realize that copies
invoked logic that distributed the copies among vdevs? Can you please
provide some pointers about this?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
money.
That's our assessment, but it's highly dependent on the specific
characteristics of the MLC NAND itself, the SSD controller, and, of
course, the workload.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
I believe you will get .5 TB in this example, no?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-12, at 16:44, Erik Trimble erik.trim...@sun.com wrote:
Eric D. Mudama wrote:
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB
be an issue. I'd
like to have the CF cards as read-only as possible though.
By sharable, what do you mean exactly?
Thanks a lot for the advice,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
for everyone's input!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
don't think you
can move the bulk - /usr.
See:
http://docs.sun.com/source/820-4893-13/compact_flash.html#50589713_78631
Good link.
So I suppose I can move /var out and that would deal with most (all?)
of the writes.
Good plan!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772
that
takes only 1 CF card?
I just ordered a pair of the Syba units, cheap enough to test out
anyway.
Now to find some reasonably priced 8GB CompactFlash cards…
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
Excellent advice, thanks Ian.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-06, at 15:16, Ian Collins i...@ianshome.com wrote:
Adam Sherman wrote:
On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root
there.
You are suggesting booting from a mirrored pair of CF cards? I'll have
to wait until I see the system to know if I have room, but that's a
good idea.
I've got lots of unused SATA ports.
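The "backup the root there" part could be as simple as a recursive snapshot sent into the data pool (a sketch only; pool, dataset and snapshot names are made up):

# zfs create tank/backup
# zfs snapshot -r rpool@today
# zfs send -R rpool@today | zfs recv -Fdu tank/backup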
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
used them on a few machines, opensolaris and freebsd. I'm a big
fan of compact flash.
What about USB sticks? Is there a difference in practice?
Thanks for the advice,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
. Of course, my system only has a single x16
PCI-E slot in it. :)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
in order to have root on the raidz2 tank
5.5. Figure out how to have the kernel and bootloader on the CF card
in order to have 4 pairs of mirrored drives in a tank, supposing #2
doesn't work
Comments, suggestions, questions, criticism?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel
CDN for the 500GB model, would have put this
system way over budget.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
My test setup of 8 x 2G virtual disks under Virtual Box on top of Mac
OS X is running nicely! I haven't lost a *single* byte of data.
;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
, Joyent Inc.
I believe I have about a TB of data on at least one of Jason's pools
and it seems to still be around. ;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
they all behave similarly dangerously, but actual
data would be useful.
Also, I think it may have already been posted, but I haven't found the
option to disable VirtualBox' disk cache. Anyone have the incantation
handy?
Thanks,
A
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772
/flush command. Caching is still
enabled
(it wasn't the problem).
Thanks!
As Russell points out in the last post to that thread, it doesn't seem
possible to do this with virtual SATA disks? Odd.
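For what it's worth, the VirtualBox manual documents per-disk extradata keys for controlling whether guest flush requests are ignored, for both IDE- and SATA-attached disks; something like the following (VM name and LUN number are placeholders, and 0 means flushes get passed through):

VBoxManage setextradata "MyVM" "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
VBoxManage setextradata "MyVM" "VBoxInternal/Devices/ahci/0/LUN#0/Config/IgnoreFlush" 0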
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-07-23, at 14:09, Greg Mason gma...@msu.edu wrote:
I think it is a great idea, assuming the SSD has good write
performance.
This one claims up
maybe it is
different now.
Absolutely. I was talking more or less about optimal timing. I realize that
due to the priorities within ZFS and real world loads it can take far
longer.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that the parts already developed are truly enterprise-grade.
While I don't disagree that the focus for ZFS should be ensuring
enterprise-class reliability and performance, let me assure you that
requirements are driven by the market and not by marketing.
Adam
--
Adam Leventhal, Fishworks
writes into larger chunks.
I hope that's clear; if it's not, stay tuned for the aforementioned
blog post.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872
no way around it. Fortunately with proper
scrubbing
encountering data corruption in one stripe on three different drives is
highly unlikely.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
a mess of them into a SAS JBOD with
an expander?
Thanks for everyone's great feedback, this thread has been highly
educating.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
than:
http://www.provantage.com/lsi-logic-lsi00124~7LSIG03W.htm
Just newer?
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
for pointing to relevant documentation.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options. See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.
I'll read though that, thanks for the detailed pointers.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel
Another thought in the same vein, I notice many of these systems
support SES-2 for management. Does this do anything useful under
Solaris?
Sorry for these questions, I seem to be having a tough time locating
relevant information on the web.
Thanks,
A.
--
Adam Sherman
CTO, Versature
management
uses of SES?
I'm really just exploring. Where can I read about how FMA is going to
help with failures in my setup?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
have a look at to get
=12 SATA disks externally attached to my systems?
Thanks!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
X4100s:
http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA
$280 or so, looks like. Might be overkill for me though.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
of disks
to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300GB x 14 raw capacity.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Lawrence,
Make sure you're running the latest software update. Note that this forum
is not the appropriate place to discuss support issues. Please contact your
official Sun support channel.
Adam
On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote:
We have a 7110 on try and buy
write performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of SSDs with ZFS as a ZIL device, an L2ARC device,
and eventually as primary storage. We'll first focus on the specific
SSDs we certify for use in our general purpose servers and the Sun
Storage 7000 series, and help influence the industry to move to
standards that we can then use.
Adam
that would hardly be measurable given
the architecture of the Hybrid Storage Pool. Recall that unlike other
products in the same space, we get our IOPS from flash rather than from
a bazillion spindles spinning at 15,000 RPM.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
their
drives with stuff they've bought at Fry's. Is this still covered by their
service plan or would this only be in an unsupported config?
Thanks.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
than ZFS?
Are you telling me zfs is deficient to the point it can't handle basic
right-sizing like a $15 sata raid adapter?
How do these $15 sata raid adapters solve the problem? The more details you
could provide the better obviously.
Adam
--
Adam Leventhal, Fishworks
complexity and little to no value? They seem
very different to me, so I suppose the answer to your question is: no I cannot
feel the irony oozing out between my lips, and yes I'm oblivious to the same.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to use the full extent of the drives they've paid for, say,
if they're using a vendor that already provides that guarantee?
You know, sort of like you not letting people choose their raid layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam
it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
not have
that assurance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
The Intel part does about a fourth as many synchronous write IOPS at
best.
Adam
On Jan 16, 2009, at 5:34 PM, Erik Trimble wrote:
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and
they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked
On Tue, Nov 18, 2008 at 09:09:07AM -0800, Andre Lue wrote:
Is the web interface on the appliance available for download or will it make
it to opensolaris sometime in the near future?
It's not, and it's unlikely to make it to OpenSolaris.
Adam
--
Adam Leventhal, Fishworks