rfect use case for an L2ARC on SSD.
--
Eric D. Mudama
edmud...@bounceswoosh.org
don't read/write that fast when
pulling snapshot contents off the disks, since they're essentially
random access on a server that's been creating/deleting snapshots for
a long time.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
sizes - i.e. when caching ZFS metadata.
Would an ashift of 12 conceivably address that issue?
--
Eric D. Mudama
edmud...@bounceswoosh.org
and
until I go over some of those I don't want to ask about that subject just
yet. Thanks for the help.
Most of the supermicro stuff works great for me.
--
Eric D. Mudama
edmud...@bounceswoosh.org
H200/H700 adapters, since we don't really need 6Gbit/s and are
still ordering our systems with SAS 6/iR.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
hat got us with the HP boxes was the unsupported RAID cards.
We ended up getting Dell T610 boxes with SAS6i/R cards, which are
properly supported in Solaris/OI.
Supposedly the H200/H700 cards are just their name for the 6gbit LSI
SAS cards, but I haven't tested them personally.
--eric
ate whether his 160GB file test was done on a virgin
pool, or whether it was allocated out of an existing pool. If the
latter, your comment is the likely explanation. If the former, your
comment wouldn't explain the slow performance.
--eric
--
for
'send' and 70 Mbytes for 'recv'.
http://wikitech-static.wikimedia.org/articles/z/f/s/Zfs_replication.html
Their data doesn't match mine.
--
Eric D. Mudama
edmud...@bounceswoosh.org
ssuming a write
amplification of 1.0) In practice, wAmp is often much higher,
depending on the workload.
How long do you plan on having this device last? How much retention
do you need in your application? What is your workload?
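As a rough check (the numbers below are illustrative assumptions, not
figures from this thread), rated endurance scales with capacity and
program/erase cycles and shrinks with write amplification:

  # assumed: 100GB drive, 5000 P/E cycles, 20GB host writes/day, wAmp of 3
  CAP_GB=100; PE_CYCLES=5000; HOST_GB_DAY=20; WAMP=3
  echo "scale=1; ($CAP_GB * $PE_CYCLES) / ($HOST_GB_DAY * $WAMP) / 365" | bc
  # prints roughly 22.8, i.e. about 23 years of rated endurance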
--eric
--
Eric D. Mudama
edmud...@b
it to 80GB. Assuming the
original drive was a 100GB/100GiB design, you now have (100*0.07)+20
GB of spare area, which depending on the design, may significantly
lower write amplification and thus increase performance on a device
that is "full."
--eric
--
Eric D. Mudama
edmud
eas via NFS or CIFS, and give them a
time-machine like picture of history for that work area (hourly for a
day, daily for a week, weekly for a month, monthly for a year, etc.)
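A minimal sketch of that kind of rolling schedule, driven by cron
against a hypothetical dataset tank/work (the names and times are
mine, not from the original post); a companion script would prune each
tier past its retention window:

  0 * * * *  /usr/sbin/zfs snapshot tank/work@hourly-`date +\%Y\%m\%d\%H`
  0 0 * * *  /usr/sbin/zfs snapshot tank/work@daily-`date +\%Y\%m\%d`
  0 0 * * 0  /usr/sbin/zfs snapshot tank/work@weekly-`date +\%Y\%W`
  0 0 1 * *  /usr/sbin/zfs snapshot tank/work@monthly-`date +\%Y\%m`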
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
tions_space_vs_mttdl
http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html
I think the second picture is the one you were thinking of. The 3rd
link adds raidz3 data to the charts.
--
Eric D. Mudama
edmud...@bounceswoosh.org
maybe 150-200 for cache
enabled writes. The above are all full-stroke, so the average seek is
1/3 stroke (unqueued). On a smaller data set where the drive dwarfs
the data set, average seek distance is much shorter and the resulting
IOPS can be quite a bit higher.
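For a ballpark figure, unqueued random IOPS is roughly one over
(average seek plus half a rotation); the values below are generic
7200rpm assumptions, not measurements from this thread:

  # ~8.5ms average (1/3-stroke) seek + ~4.2ms half rotation at 7200rpm
  echo "scale=0; 1000 / (8.5 + 4.2)" | bc    # about 78 random IOPS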
--eric
--
Eric D. Mu
eliminate #2.
--
Eric D. Mudama
edmud...@bounceswoosh.org
' with the WWID
printed on the label of the disk. It's likely not visible, however,
if you had a maintenance window you could pull the disks to write them
down and just keep the paper handy.
That, or use the trusty 'dd' to read from it and find the solid light.
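Something along these lines (device name invented) keeps the activity
LED lit so you can spot the drive, then ctrl-C once you've found it:

  dd if=/dev/rdsk/c3t2d0p0 of=/dev/null bs=1024k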
--eric
--
Eric
than bubble
wrap.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
years ago and Newegg
may have changed their packaging since then.
Newegg packaging is exactly what you describe, unchanged in the last
few years. My most recent Newegg drive purchase was last week.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
On Wed, May 4 at 12:21, Adam Serediuk wrote:
Both iostat and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
is 'iostat -en' error free?
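For anyone unfamiliar with the flags, these report per-device error
counters; non-zero soft/hard/transport counts usually point at a sick
drive or cabling:

  iostat -en    # one summary line of error counters per device
  iostat -En    # verbose per-device detail, including vendor and serial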
--
Eric D. Mudama
edmud...@bounce
nty fast and your CPU should be
fine too.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
On Mon, May 2 at 15:30, Brandon High wrote:
On Mon, May 2, 2011 at 1:56 PM, Eric D. Mudama
wrote:
that the application would have done the seek+write combination, since
on NTFS (which doesn't support sparse) these would have been real
1.5GB files, and there would be hundreds or thousan
On Mon, May 2 at 20:50, Darren J Moffat wrote:
On 05/ 2/11 08:41 PM, Eric D. Mudama wrote:
On Mon, May 2 at 14:01, Bob Friesenhahn wrote:
On Mon, 2 May 2011, Eric D. Mudama wrote:
Hi. While doing a scan of disk usage, I noticed the following oddity.
I have a directory of files (named
On Mon, May 2 at 14:01, Bob Friesenhahn wrote:
On Mon, 2 May 2011, Eric D. Mudama wrote:
Hi. While doing a scan of disk usage, I noticed the following oddity.
I have a directory of files (named file.dat for this example) that all
appear as ~1.5GB when using 'ls -l', but that
ck counts
from what I can tell.
edmudama$ zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  29.8G  22.0G  7.79G  73%  1.00x  ONLINE  -
tank   1.81T   879G   977G  47%  1.00x  ONLINE  -
Is something broken? Any idea why I am seeing the wrong sizes in ls?
--
he documentation how many
6Gbit/s SAS lanes are connected for that many devices though. Maybe
that plus a support contract from Sun would be a worthy replacement,
though you definitely won't have a single vendor to contact for
service issues.
--eric
--
Eric D. Mudama
cally an
LSI 9211-8i, which also works well.
I can't comment on HP's support, I have no experience with it. We now
self-support our software (OpenIndiana b148)
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
have had.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
orts of
short-stroking the amount of time it accumulates write data resulting
in improved performance in some workloads.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
're overwriting looked like, plus how fragmented the free
space is. Into a device with plenty of free space, small writes
should be significantly faster than write-in-place.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
tached SAS
JBODs. If you do it right, while your chance of any single hardware
failure occurring goes up with the number of components in the whole
system, your probability of a failure taking you offline should be <=
that of the unified solution.
--
Eric D. Mudama
edmud...@bounceswoosh.org
was a pci and a pci-x version of
the 3124, so watch out.)
Most 3124s I've seen are natively PCI-X, but they work fine in PCI
slots, albeit with less bandwidth available.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
On Wed, Feb 23 at 13:16, Mauricio Tavares wrote:
On Wed, Feb 23, 2011 at 12:53 PM, Eric D. Mudama
wrote:
On Wed, Feb 23 at 13:29, Andrew Gabriel wrote:
Mauricio Tavares wrote:
Perhaps a bit off-topic (I asked on the rescue list --
http://web.archiveorange.com/archive/v/OaDWVGdLhxWVWIEabz4F
ies, resulting in numerous
reports of compatibility and performance problems with 3112/3114
hardware.
I +1 the suggestion to find something more modern if at all possible.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
stream that has been converted to use the new pool when you recv
it.
I could be wrong though; we update our pools in lockstep and err on
the side of backwards compatibility with our multi-system backup.
--
Eric D. Mudama
edmud...@bounceswoosh.org
need to be maintained by
the garbage collection engine. Depending on the design of the SSD,
this can significantly reduce the write amplification of the SSD.
--
Eric D. Mudama
edmud...@bounceswoosh.org
nel.
If they could just get their channel working at 3GHz instead of 2GHz
or whatever, they'd use that capability to pack even more bits into
the consumer drives to lower costs.
--
Eric D. Mudama
edmud...@bounceswoosh.org
At home now so can't test it.
--
Eric D. Mudama
edmud...@bounceswoosh.org
operations.
Me too, though not everyone realizes how much overhead there can be in
small operations, even sequential ones.
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
anufacture at high yields as I
understood it.
http://www.tomshardware.com/news/seagate-hdd-harddrive,8279.html
--
Eric D. Mudama
edmud...@bounceswoosh.org
t of new
data since the last time you sent incrementals, similar to rsync or a
half dozen other techniques.
At work we always use -i, and our send|recv is anywhere from 5-20
minutes, depending on what data was added or modified.
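As a concrete sketch (pool, snapshot, and host names are made up, not
from the thread):

  zfs snapshot tank/data@2011-05-20
  zfs send -i tank/data@2011-05-19 tank/data@2011-05-20 | \
      ssh backuphost zfs recv -F backuppool/data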
--eric
--
Eric D. Mudama
edmud...@bounceswoosh.org
he snapshot time is the time of the
initial command across all filesystems in the tree, even if it takes
10 seconds to actually complete the command. However, I have no such
system where I can prove this guess as correct or not.
--eric
--
Eric D. Mu
ldn't be having this discussion.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
think you only need 2 adapters per stack of JBODs.
With adapters A0 and A1, and JBODs J0 through J3, you get:
A0 -> J0 -> J1 -> J2 -> J3
A1 -> J3 -> J2 -> J1 -> J0
Yes, all the above are daisy-chained, starting at a different side of
the stack with each adapt
performance counters
we should look for to identify the bottlenecks. Don't want to
replace a component just to find that there was no improvement in
iometer reading.
fsstat zfs 1
zpool iostat 1
any suggestions beyond that will require a lot more detail on your
setup and target workload
--
E
ions.html
Now, obviously the above is in the context of having to restore from
backup, which is rare, however in live usage I don't think the math
changes a whole lot.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
Vertex 2 Pro.
Okay, I understand where you're coming from.
Yes, buyers must be aware of the test methodologies for published
benchmark results, especially those used to sell drives by the vendors
themselves. "Up to" is generally a poor thing to base a buying
decision.
--eric
--
just don't think the math
works out. At that point, you're probably better-off not having a
dedicated ZIL, instead of burning 10 slots and 150W.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ch were still in use.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
appen using NAND technology.
Non-NAND SSDs may or may not have similar or related limitations.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
'll still provide a huge performance boost when
used as a ZIL in their system.
For a huge ZFS box providing tens of ZFS filesystems in a pool all
with huge user loads, sure, a RAM based device makes sense, but it's
overkill for some large percentage of ZFS users, I imagine.
On Tue, Dec 21 at 8:24, Edward Ned Harvey wrote:
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
>If there is no correlation between on-disk order of blocks for different
>disks
the system.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ast in any configuration with lots of drives.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ax_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
supported_max_cstates 1
--eric
--
Eric D. Mudama
edmud...@mail.bounce
server in the future.
Out of curiosity, did you run into this:
http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-out-on-solaris-10/
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
pth, there is
opportunity for improvement, to an asymptotic limit driven by servo
settle speed.
Obviously this performance improvement comes with the standard WB
risks, and YMMV, IANAL, etc.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
'ps aux | grep tank'
Am I missing something?
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
hour? Our boot drives (32GB X25-E) will resilver in about 1 minute.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ut that's for a 4k rotating drive, which has a much different
latency profile than an SSD. I was wondering if anyone had a
benchmarking showing this alignment mattered on the latest SSDs. My
guess is no, but I have no data.
--
Eric D. Mudama
edmud..
Do you specifically have benchmark data indicating unaligned or
aligned+offset access on the X25-E is significantly worse than aligned
access?
I'd thought the "tier1" SSDs didn't have problems with these workloads.
--eric
--
Eric
to create
the scenario you linked due to the limited data set size.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
nal reference, but I believe he was arguing that
by moving code into the kernel and marking as experimental, it's more
likely to be tested and have the bugs worked out, than if it forever
lives as patchsets.
Given the test environment, can't say I can argue against that point
of
explained to me, a non-compete cannot legally prevent you from earning
a living. If your one skill is in writing filesystems, you cannot be
prevented from doing so by a noncompete.
However, please get your own legal advice, as it varies significantly
state-to-state.
--
Eric D. Mu
would btrfs serve Oracle outside of the Linux kernel?
Maybe allowing SANs built upon btrfs to be natively used within
Solaris/Oracle at some point in the future? Adding btrfs->zfs
conversion utilities that do things like maintain snapshots, data set
p
t into the "main" OS.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
in theory, the "gateway drug" where people
can experiment inexpensively to try out new technologies (ZFS, dtrace,
crossbow, comstar, etc.) and eventually step up to Oracle's "big iron"
as their business grows.
--eric
--
Eric D. Mu
A+x+C = P
A+x+C-A-C = P-A-C
x = P-A-C
and voila, you now have your original B contents, since B=x.
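A toy single-byte version of the same algebra (values are arbitrary;
for raidz/RAID-5 parity the 'add' and 'subtract' are both XOR):

  A=0x3c; B=0x0f; C=0xa5
  P=$(( A ^ B ^ C ))      # parity computed when the stripe was written
  x=$(( P ^ A ^ C ))      # reconstruct the lost column
  printf 'B=0x%02x x=0x%02x\n' "$B" "$x"    # the two match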
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ot as a tail to each stored byte on individual
devices.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
hatever
frequency you choose.
You don't even need the same pool layout on the backup machine.
Primary can be a stripe of mirrors, while your backup can be a wide
raidz2 setup.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
pport contracts associated with them)
Any company that believes it can add more value in their IT supply
chain than the vendor they'd be buying from would be foolish not to
put energy into that space (if they can "afford" to.) Google is but a
single example, though I am sure there a
alk just fine to the 9211 HBAs.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
covery allows a reinstall of the OS, and the
amount of custom configuration is minimal in our rpool.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
www.wdc.com/en/products/products.asp?DriveID=610 [Green]
* 6x WD1002FBYS -
[4]http://www.wdc.com/en/products/Products.asp?DriveID=503 [RE3]
We use the WD1002FBYS (1.0TB WD RE3) and haven't had an issue yet in
our Dell T610 chassis.
--
Eric
the raidz
variants, you really need to use multiple physical devices to provide
that capability.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
peers in this workload.
As a replacement recommendation, we've been beating on the WD 1TB RE3
drives for 18 months or so, and we're happy with both performance and
the price for what we get. $160/ea with a 5 year warranty.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
've got the cases where people
complain that their scrub takes too long. There may be knobs for
individuals to use, but I don't think overall there's a magic answer.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Tue, Apr 20 at 11:41, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have registered
that gmail account and supplied that answer. It would be a lot more
believable from Mr Kay's Oracle or Sun account.
+1
Glad I wasn't the only one who noticed.
--
Eric
On Fri, Apr 16 at 14:42, Miles Nordin wrote:
"edm" == Eric D Mudama writes:
edm> How would you stripe or manage a dataset across a mix of
edm> devices with different geometries?
the ``geometry'' discussed is 1-dimensional: sector size.
The way that you do it i
it. I can virtually guarantee
every storage, SSD and OS vendor is generating that data internally
however.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ystem that can have its snapshots
managed with a specific policy for addressing the usage model.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
does JBOD no problem.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
On Tue, Apr 13 at 9:52, Bob Friesenhahn wrote:
On Mon, 12 Apr 2010, Eric D. Mudama wrote:
The advantage of TRIM, even in high end SSDs, is that it allows you to
effectively have additional "considerable extra space" available to
the device for garbage collection and wear managemen
But now I'm feeling hopeful that they're fixed in what I'm
likely to be upgrading to next.
Yes, hopefully.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
for
anything but tracking the data that is no longer active.
Based on the above, I think TRIM has the potential to help every SSD,
not just the "cheap" SSDs.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
's no "best guess" work at
locating the wells.
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
Product/Product.aspx?Item=N82E16817994075
The doors are a bit "light" perhaps, but it works just fine for my
needs and holds drives securely. The small fans are a bit noisy, but
since the box lives in the basement I don't really care.
--eric
--
Eric
On Wed, Apr 7 at 12:41, Jason S wrote:
And just to clarify as far as expanding this pool in the future my
only option is to add another 7 spindle RaidZ2 array correct?
That is correct, unless you want to use the -f option to force-allow
an asymmetric expansion of your pool.
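For example (device names invented), the forced add looks like the
following, with the caveat that the pool's redundancy is then no
longer uniform across vdevs:

  zpool add -f tank raidz1 c4t0d0 c4t1d0 c4t2d0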
--eric
--
Eric D
this server is relatively low, and the L2ARC serves data at greater
than 100MB/s (wire speed) without stressing much of anything.
The BIOS settings in our T610 are exactly as they arrived from Dell
when we bought it over a year ago.
Thoughts?
--eric
--
Eric D. Mudama
using the same integrated part?
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
ure on the drives (like HPA or
DCO) that is changing the capacity. It's possible one of these is in
effect.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
expander)
firmware changes can mitigate or exacerbate.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
a 3134 variant that is PCI-e x4 which should be a lot faster.
Doesn't matter for rotating drives, but for SSDs it's important.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
referenced in
ZFS. Are there any resources available that will show me how this
is done?
You could try zdb.
Or just look at the source code.
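If you go the zdb route, something like the following (dataset name
and object number are placeholders) dumps an object's block pointers
and indirect blocks:

  zdb -dddd tank/home 12345    # 12345 = object number, same as 'ls -i'
  zdb -bbbb tank               # pool-wide block statistics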
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
We have a 32GB X25-E as L2ARC and though it's never more than ~5GB
full with our workloads, most every file access saturates the wire
(1.0 Gb/s ethernet) once the cache has warmed up, resulting in very
little IO to our spindles.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
pool, that
is already handled through another program.
I'm pretty sure the configuration is embedded in the pool itself.
Just import on the new machine. You may need --force/-f if the pool
wasn't exported properly on the old system.
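A minimal sketch, assuming the pool is named tank:

  zpool import          # scan attached devices and list importable pools
  zpool import -f tank  # force it if it still looks active to the old host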
--eric
--
Eric D. Mudama
edmud...@mail.b
On Sat, Mar 6 at 15:04, Richard Elling wrote:
On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:
On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
hdd ONLINE 0 0 0
c7t0d0p3 ONLINE 0 0 0
rpool ONLINE 0 0 0
same device? I'm barely familiar with solaris
partitioning and labels... what's the difference between a slice and a
partition?
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
D. Mudama
edmud...@mail.bounceswoosh.org
ot time to a 4+ hour boot time is more than just "impact". That's
getting hit by a train.
Might be useful for folks, if the above document listed a few concrete
datapoints of boot time scaling with the number of filesystems or
something similar.
--eric
--
Eric D. Mu
d SAS
drives, you're looking at a TON of spindles to move through 400
million 1KB files quickly.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
vs at work, with a separate box as a "live" backup using raidz of
larger SATA drives.
--eric
--
Eric D. Mudama
edmud...@mail.bounceswoosh.org