the perfect use case for an L2ARC on SSD.
--
Eric D. Mudama
edmud...@bounceswoosh.org
read/write that fast when
pulling snapshot contents off the disks, since they're essentially
random access on a server that's been creating/deleting snapshots for
a long time.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
- i.e. when caching ZFS metadata.
Would an ashift of 12 conceivably address that issue?
--
Eric D. Mudama
edmud...@bounceswoosh.org
don't want to ask about that subject just
yet. Thanks for the help.
Most of the Supermicro stuff works great for me.
--
Eric D. Mudama
edmud...@bounceswoosh.org
the H200/H700 adapters, since we don't really need 6Gbit/s and are
still ordering our systems with SAS 6/iR.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
with the HP boxes was the unsupported RAID cards.
We ended up getting Dell T610 boxes with SAS 6/iR cards, which are
properly supported in Solaris/OI.
Supposedly the H200/H700 cards are just Dell's name for the 6 Gbit/s LSI
SAS cards, but I haven't tested them personally.
--eric
--
Eric D. Mudama
edmud
was done on a virgin
pool, or whether it was allocated out of an existing pool. If the
latter, your comment is the likely explanation. If the former, your
comment wouldn't explain the slow performance.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
for 'recv'.
http://wikitech-static.wikimedia.org/articles/z/f/s/Zfs_replication.html
Their data doesn't match mine.
--
Eric D. Mudama
edmud...@bounceswoosh.org
amplification of 1.0) In practice, wAmp is often much higher,
depending on the workload.
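As a rough illustration (the numbers here are invented, not from this
thread), write amplification is simply flash bytes written divided by host
bytes written:

  # wAmp = bytes written to NAND / bytes written by the host
  echo 'scale=2; 3.6 / 1.2' | bc    # -> 3.00, i.e. 3 GB hit the flash per 1 GB of host writes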
How long do you plan on having this device last? How much retention
do you need in your application? What is your workload?
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
. Assuming the
original drive was a 100GB/100GiB design, you now have (100*0.07)+20
GB of spare area, which depending on the design, may significantly
lower write amplification and thus increase performance on a device
that is full.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
for that work area (hourly for a
day, daily for a week, weekly for a month, monthly for a year, etc.)
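A minimal sketch of the hourly tier (dataset name is hypothetical; the
daily, weekly and monthly tiers would follow the same pattern):

  snap="tank/work@hourly-$(date +%H)"   # e.g. tank/work@hourly-14
  zfs destroy "$snap" 2>/dev/null       # drop yesterday's snapshot for this hour, if any
  zfs snapshot "$snap"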
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
/raid_recommendations_space_vs_mttdl
http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html
I think the second picture is the one you were thinking of. The 3rd
link adds raidz3 data to the charts.
--
Eric D. Mudama
edmud...@bounceswoosh.org
-200 for cache
enabled writes. The above are all full-stroke, so the average seek is
1/3 stroke (unqueued). On a smaller data set where the drive dwarfs
the data set, average seek distance is much shorter and the resulting
IOPS can be quite a bit higher.
--eric
--
Eric D. Mudama
edmud
eliminate #2.
--
Eric D. Mudama
edmud...@bounceswoosh.org
printed on the label of the disk. It's likely not visible, however,
if you had a maintenance window you could pull the disks to write them
down and just keep the paper handy.
That, or use the trusty 'dd' to read from it and find the solid light.
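Something along these lines (device name invented) keeps one disk's
activity LED lit so it can be spotted in the chassis:

  dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=1024k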
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
years ago and Newegg
may have changed their packaging since then.
Newegg packaging is exactly what you describe, unchanged in the last
few years. My most recent Newegg drive purchase was last week.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
bubble
wrap.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
On Wed, May 4 at 12:21, Adam Serediuk wrote:
Both iostat and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
is 'iostat -en' error free?
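For reference, two ways to pull the error counters (the device name below
is hypothetical):

  iostat -en          # soft/hard/transport error totals per device
  iostat -En c8t3d0   # detailed error and device info for one suspect disk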
--
Eric D. Mudama
edmud...@bounceswoosh.org
.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
.
edmudama$ zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  29.8G  22.0G  7.79G  73%  1.00x  ONLINE  -
tank   1.81T   879G   977G  47%  1.00x  ONLINE  -
Is something broken? Any idea why I am seeing the wrong sizes in ls?
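For what it's worth, a quick way to compare apparent file size against
the space actually allocated (filename from the example above):

  ls -l file.dat    # logical (apparent) size
  du -h file.dat    # blocks actually allocated in the pool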
--eric
--
Eric D. Mudama
edmud
On Mon, May 2 at 14:01, Bob Friesenhahn wrote:
On Mon, 2 May 2011, Eric D. Mudama wrote:
Hi. While doing a scan of disk usage, I noticed the following oddity.
I have a directory of files (named file.dat for this example) that all
appear as ~1.5GB when using 'ls -l', but that (correctly
On Mon, May 2 at 20:50, Darren J Moffat wrote:
On 05/ 2/11 08:41 PM, Eric D. Mudama wrote:
On Mon, May 2 at 14:01, Bob Friesenhahn wrote:
On Mon, 2 May 2011, Eric D. Mudama wrote:
Hi. While doing a scan of disk usage, I noticed the following oddity.
I have a directory of files (named
On Mon, May 2 at 15:30, Brandon High wrote:
On Mon, May 2, 2011 at 1:56 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
that the application would have done the seek+write combination, since
on NTFS (which doesn't support sparse) these would have been real
1.5GB files, and there would
6Gbit/s SAS lanes are connected for that many devices though. Maybe
that plus a support contract from Sun would be a worthy replacement,
though you definitely won't have a single vendor to contact for
service issues.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
, which also works well.
I can't comment on HP's support; I have no experience with it. We now
self-support our software (OpenIndiana b148).
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
have had.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
in improved performance in some workloads.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
looked like, plus how fragmented the free
space is. Into a device with plenty of free space, small writes
should be significantly faster than write-in-place.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
that there was a pci and a pci-x version of
the 3124, so watch out.)
Most 3124 cards I've seen are natively PCI-X, but they work fine in PCI
slots, albeit with less bandwidth available.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
, resulting in numerous
reports of compatibility and performance problems with 3112/3114
hardware.
I +1 the suggestion to find something more modern if at all possible.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
On Wed, Feb 23 at 13:16, Mauricio Tavares wrote:
On Wed, Feb 23, 2011 at 12:53 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Wed, Feb 23 at 13:29, Andrew Gabriel wrote:
Mauricio Tavares wrote:
Perhaps a bit off-topic (I asked on the rescue list --
http://web.archiveorange.com
been converted to use the new pool when you recv
it.
I could be wrong though; we update our pools in lockstep and err on
the side of backward compatibility with our multi-system backup.
--
Eric D. Mudama
edmud...@bounceswoosh.org
to be maintained by
the garbage collection engine. Depending on the design of the SSD,
this can significantly reduce the write amplification of the SSD.
--
Eric D. Mudama
edmud...@bounceswoosh.org
to rsync or a
half dozen other techniques.
At work we always use -i, and our send|recv is anywhere from 5-20
minutes, depending on what data was added or modified.
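A sketch of what that looks like; the pool, dataset, snapshot and host
names are made up:

  zfs snapshot tank/data@2011-05-10
  zfs send -i tank/data@2011-05-09 tank/data@2011-05-10 | \
      ssh backuphost zfs recv -F backup/data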
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
understood it.
http://www.tomshardware.com/news/seagate-hdd-harddrive,8279.html
--
Eric D. Mudama
edmud...@bounceswoosh.org
, though not everyone realizes how much overhead there can be in
small operations, even sequential ones.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
test it.
--
Eric D. Mudama
edmud...@bounceswoosh.org
.
If they could just get their channel working at 3GHz instead of 2GHz
or whatever, they'd use that capability to pack even more bits into
the consumer drives to lower costs.
--
Eric D. Mudama
edmud...@bounceswoosh.org
command across all filesystems in the tree, even if it takes
10 seconds to actually complete the command. However, I have no such
system where I can prove this guess as correct or not.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
this discussion.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
only need 2 adapters per stack of JBODs.
With adapters A0 and A1, and JBODs J0 through J3, you get:
A0 - J0 - J1 - J2 - J3
A1 - J3 - J2 - J1 - J0
Yes, all the above are daisy-chained, starting at a different side of
the stack with each adapter.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
the above is in the context of having to restore from
backup, which is rare, however in live usage I don't think the math
changes a whole lot.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
performance counters
we should look for to identify the bottlenecks. Don't want to
replace a component just to find that there was no improvement in
iometer reading.
fsstat zfs 1
zpool iostat 1
Any suggestions beyond that will require a lot more detail on your
setup and target workload.
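One more view worth watching (an addition here, not part of the original
suggestion):

  iostat -xnz 1    # per-device service times and queue depths; a single slow disk stands out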
--
Eric D
performance boost when
used as a ZIL in their system.
For a huge ZFS box providing tens of ZFS filesystems in a pool all
with huge user loads, sure, a RAM based device makes sense, but it's
overkill for some large percentage of ZFS users, I imagine.
--
Eric D. Mudama
edmud
using NAND technology.
Non-NAND SSDs may or may not have similar or related limitations.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
. At that point, you're probably better-off not having a
dedicated ZIL, instead of burning 10 slots and 150W.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
and the Vertex 2 Pro.
Okay, I understand where you're coming from.
Yes, buyers must be aware of the test methodologies for published
benchmark results, especially those used to sell drives by the vendors
themselves. "Up to" figures are generally a poor thing to base a buying
decision on.
--eric
--
Eric D. Mudama
On Tue, Dec 21 at 8:24, Edward Ned Harvey wrote:
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
If there is no correlation between on-disk order of blocks for different
disks within
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
configuration with lots of drives.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
server in the future.
Out of curiosity, did you run into this:
http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-out-on-solaris-10/
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
for improvement, to an asymptotic limit driven by servo
settle speed.
Obviously this performance improvement comes with the standard WB
risks, and YMMV, IANAL, etc.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
tank'
Am I missing something?
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
? Our boot drives (32GB X25-E) will resilver in about 1 minute.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
have benchmark data indicating unaligned or
aligned+offset access on the X25-E is significantly worse than aligned
access?
I'd thought the tier1 SSDs didn't have problems with these workloads.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
for a 4k rotating drive, which has a much different
latency profile than an SSD. I was wondering if anyone had a
benchmarking showing this alignment mattered on the latest SSDs. My
guess is no, but I have no data.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
to the limited data set size.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
believe he was arguing that
by moving code into the kernel and marking as experimental, it's more
likely to be tested and have the bugs worked out, than if it forever
lives as patchsets.
Given the test environment, can't say I can argue against that point
of view.
--
Eric D. Mudama
edmud
?
Maybe allowing SANs built upon btrfs to be natively used within
Solaris/Oracle at some point in the future? Adding btrfs-zfs
conversion utilities that do things like maintain snapshots, data set
properties, etc?
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
to me, a non-compete cannot legally prevent you from earning
a living. If your one skill is in writing filesystems, you cannot be
prevented from doing so by a noncompete.
However, please get your own legal advice, as it varies significantly
state-to-state.
--
Eric D. Mudama
edmud
, the gateway drug where people
can experiment inexpensively to try out new technologies (ZFS, dtrace,
crossbow, comstar, etc.) and eventually step up to Oracle's big iron
as their business grows.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
A + x + C = P
A + x + C - A - C = P - A - C
x = P - A - C
and voila, you now have your original B contents, since B = x.
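The same reconstruction as a toy shell example using XOR parity, where
"addition" and "subtraction" are the same operation (all values invented):

  A=0x5a; C=0x33; P=0x6c                  # surviving blocks and parity, P = A ^ B ^ C
  B=$(( A ^ C ^ P ))                      # x = P - A - C
  printf 'recovered B = 0x%02x\n' "$B"    # -> 0x05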
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
devices.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
even need the same pool layout on the backup machine.
Primary can be a stripe of mirrors, while your backup can be a wide
raidz2 setup.
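For illustration (device names invented), the two ends of the send|recv
pipe can be built completely differently:

  zpool create primary mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  zpool create backup raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0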
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
associated with them)
Any company that believes it can add more value in their IT supply
chain than the vendor they'd be buying from would be foolish not to
put energy into that space (if they can afford to.) Google is but a
single example, though I am sure there are others.
--
Eric D. Mudama
a reinstall of the OS, and the
amount of custom configuration is minimal in our rpool.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
?DriveID=610 [Green]
* 6x WD1002FBYS -
http://www.wdc.com/en/products/Products.asp?DriveID=503 [RE3]
We use the WD1002FBYS (1.0TB WD RE3) and haven't had an issue yet in
our Dell T610 chassis.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
.
As a replacement recommendation, we've been beating on the WD 1TB RE3
drives for 18 months or so, and we're happy with both performance and
the price for what we get. $160/ea with a 5 year warranty.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
people
complain that their scrub takes too long. There may be knobs for
individuals to use, but I don't think overall there's a magic answer.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Tue, Apr 20 at 11:41, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have registered
that gmail account and supplied that answer. It would be a lot more
believable from Mr Kay's Oracle or Sun account.
+1
Glad I wasn't the only one who noticed.
--
Eric D. Mudama
, and does JBOD no problem.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
that can have its snapshots
managed with a specific policy for addressing the usage model.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
virtually guarantee
every storage, SSD and OS vendor is generating that data internally
however.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Fri, Apr 16 at 14:42, Miles Nordin wrote:
edm == Eric D Mudama edmud...@bounceswoosh.org writes:
edm How would you stripe or manage a dataset across a mix of
edm devices with different geometries?
the ``geometry'' discussed is 1-dimensional: sector size.
The way that you do
On Tue, Apr 13 at 9:52, Bob Friesenhahn wrote:
On Mon, 12 Apr 2010, Eric D. Mudama wrote:
The advantage of TRIM, even in high end SSDs, is that it allows you to
effectively have additional considerable extra space available to
the device for garbage collection and wear management when not all
that they're fixed in what I'm
likely to be upgrading to next.
Yes, hopefully.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
D. Mudama
edmud...@mail.bounceswoosh.org
that is no longer active.
Based on the above, I think TRIM has the potential to help every SSD,
not just the cheap SSDs.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
=N82E16817994075
The doors are a bit light perhaps, but it works just fine for my
needs and holds drives securely. The small fans are a bit noisy, but
since the box lives in the basement I don't really care.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Wed, Apr 7 at 12:41, Jason S wrote:
And just to clarify as far as expanding this pool in the future my
only option is to add another 7 spindle RaidZ2 array correct?
That is correct, unless you want to use the -f option to force-allow
an asymmetric expansion of your pool.
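For example (pool and device names hypothetical), forcing in a vdev whose
layout doesn't match the existing raidz2 vdevs:

  zpool add -f tank raidz c3t0d0 c3t1d0 c3t2d0   # generally not recommended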
--eric
--
Eric D
integrated part?
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
, and the L2ARC serves data at greater
than 100MB/s (wire speed) without stressing much of anything.
The BIOS settings in our T610 are exactly as they arrived from Dell
when we bought it over a year ago.
Thoughts?
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
HPA or
DCO) that is changing the capacity. It's possible one of these is in
effect.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
mitigate or exacerbate.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
that is PCI-e x4 which should be a lot faster.
Doesn't matter for rotating drives, but for SSDs it's important.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
than ~5GB
full with our workloads, most every file access saturates the wire
(1.0 Gb/s ethernet) once the cache has warmed up, resulting in very
little IO to our spindles.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
in
ZFS. Are there any resources available that will show me how this
is done?
You could try zdb.
Or just look at the source code.
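A hedged zdb example (dataset name made up); each extra -d increases the
verbosity of the dump:

  zdb -dddd tank/home    # dump dataset, object and dnode details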
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
, that
is already handled through another program.
I'm pretty sure the configuration is embedded in the pool itself.
Just import it on the new machine. You may need --force/-f if the pool
wasn't exported on the old system properly.
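A minimal sketch (pool name hypothetical):

  zpool import           # list pools visible on the attached devices
  zpool import -f tank   # -f only if the pool wasn't cleanly exported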
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
the same device? I'm barely familiar with solaris
partitioning and labels... what's the difference between a slice and a
partition?
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Sat, Mar 6 at 15:04, Richard Elling wrote:
On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:
On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
hdd ONLINE 0 0 0
c7t0d0p3 ONLINE 0 0 0
rpool ONLINE 0 0 0
D. Mudama
edmud...@mail.bounceswoosh.org
boot time is more than just impact. That's
getting hit by a train.
Might be useful for folks, if the above document listed a few concrete
datapoints of boot time scaling with the number of filesystems or
something similar.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
drives, you're looking at a TON of spindles to move through 400
million 1KB files quickly.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
, with a separate box as a live backup using raidz of
larger SATA drives.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
the
parts I could use effectively and comfortably.
no one is selling disk brackets without disks. not Dell, not EMC, not
NetApp, not IBM, not HP, not Fujitsu, ...
http://discountechnology.com/Products/SCSI-Hard-Drive-Caddies-Trays
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
here, or something less complex
entirely?
sounds like dedupe to me... My non-dedupe zpools are scrubbing at the
same rate as ever in b130 on multiple servers.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org