with
Unbuffered RAM and still keep everything stable.
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
into the older AMD
Barcelona-based Opterons. They're equivalent to the Phenom, plus their
motherboards come with just stupid numbers of DIMM slots.
:-)
--
Erik Trimble
comes down to the amount of
L3 cache and HT speed. I'd be interested in doing some benchmarking to
see exactly how the variations make a difference.
--
Erik Trimble
hosts from using different component
links; e.g., you could have an HTTP and an FTP connection each use
different links, even though both have the same two machines involved.
But, someone, please correct me on this if I'm wrong.
And, we're getting pretty far off topic here...
--
Erik Trimble
--
Erik Trimble
http://supermicro.com/products/accessories/mobilerack/CSE-M28E2.cfm
I'm aware of the Supermicro chassis, and, while they're nice, I'm after
an external JBOD chassis, not a server chassis.
--
Erik Trimble
really be helpful.
--
Erik Trimble
would need more than a GB
or two as reserve space...
--
Erik Trimble
Richard Elling wrote:
On Jan 18, 2010, at 3:25 PM, Erik Trimble wrote:
Given my (imperfect) understanding of the internals of ZFS, the non-ZIL
portions of the reserved space are there mostly to ensure that there is
sufficient (reasonably) contiguous space for doing COW. Hopefully, once BP
Daniel Carosone wrote:
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote:
Hopefully, once BP rewrite materializes (I know, I'm treating this
much too much as a Holy Grail, here to save us from all the ZFS
limitations, but really...), we can implement defragmentation which
Tim Cook wrote:
On Tue, Jan 19, 2010 at 12:16 AM, Erik Trimble erik.trim...@sun.com
mailto:erik.trim...@sun.com wrote:
A poster in another forum mentioned that Seagate (and Hitachi,
amongst others) is now selling something labeled as NearLine SAS
storage (e.g. Seagate's NL35
in implementation appears to
occur sometime shortly after the introduction of the Indilinx
controllers. My fault for not catching this.
-Erik
Eric D. Mudama wrote:
On Sat, Jan 2 at 22:24, Erik Trimble wrote:
In MLC-style SSDs, you typically have a block size of 2k or 4k.
However, you have a Page
the single Fusion-IO card eats about 1/4 the CPU power that an
8Gbit Fibre Channel HBA does, and roughly the same as a 10Gbit
Ethernet card. So, it's not out of line with comparable throughput
add-in cards. It does need significantly more CPU than a SAS or SCSI
controller, though.
--
Erik
Eric D. Mudama wrote:
On Fri, Jan 1 at 21:21, Erik Trimble wrote:
That all said, it certainly would be really nice to get an SSD
controller which can really push the bandwidth, and the only way I
see this happening now is to go the stupid route, and dumb down the
controller as much
sections,
though. Which would be interesting: ZFS would write in Page Size
increments, and read in Block Size amounts.
--
Erik Trimble
Joerg Schilling wrote:
Erik Trimble erik.trim...@sun.com wrote:
From ZFS's standpoint, the optimal configuration would be for the SSD
to inform ZFS as to its PAGE size, and ZFS would use this as the
fundamental BLOCK size for that device (i.e. all writes are in integer
It seems
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD as to which blocks are deleted/erased, so the SSD's internal free list can be updated (that is, it allows formerly-in-use blocks
filesystem (didn't make it into Windows 2008, but maybe Win2011), so
we'll have to see what that entails.
All that said, it would certainly be limited to Enterprise SSDs, which
are low-volume. But, on the up side, they're High Margin, so maybe we
can hope...
--
Erik Trimble
Ragnar Sundblad wrote:
On 2 jan 2010, at 22.49, Erik Trimble wrote:
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD as to which
blocks are deleted/erased, so
David Magda wrote:
On Jan 2, 2010, at 16:49, Erik Trimble wrote:
My argument is that the OS has a far better view of the whole data
picture, and access to much higher performing caches (i.e.
RAM/registers) than the SSD, so not only can the OS make far better
decisions about the data and how
Ragnar Sundblad wrote:
On 3 jan 2010, at 04.19, Erik Trimble wrote:
Let's say I have 4k blocks, grouped into a 128k page. That is, the SSD's
fundamental minimum unit size is 4k, but the minimum WRITE size is 128k. Thus,
32 blocks in a page.
Do you know of SSD disks that have
Erik Trimble wrote:
Ragnar Sundblad wrote:
Yes, there is something to worry about, as you can only
erase flash in large pages - you can not erase them only where
the free data blocks in the Free List are.
I'm not sure that SSDs actually _have_ to erase - they just overwrite
anything
filesystem makers worry about
scheduling writes appropriately, doing redundancy, etc.
Oooh! Oooh! a whole cluster of USB thumb drives! Yeah! <wink>
--
Erik Trimble
Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Erik Trimble wrote:
Maybe it's approaching time for vendors to just produce really stupid
SSDs: that is, ones that just do wear-leveling, and expose their true
page-size info (e.g. for MLC, how many blocks of X size have to be
written at once
to L2ARC. I would
disable any swap volume on the SSDs, however. If you need swap, put it
somewhere else.
--
Erik Trimble
In short, Checksumming is how ZFS /determines/ data corruption, and
Redundancy is how ZFS /fixes/ it. Checksumming is /always/ present,
while redundancy depends on the pool layout and options (cf. copies
property).
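For illustration (a sketch with a hypothetical pool/dataset name), the
copies property and a checksum-verifying scrub look like this:
# zfs set copies=2 tank/important   (keep two copies of every block in this dataset)
# zpool scrub tank                  (walk the pool, verify checksums, repair from redundancy)
# zpool status -v tank              (list any files with unrecoverable errors)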
--
Erik Trimble
Richard Elling wrote:
On Dec 25, 2009, at 4:15 PM, Erik Trimble wrote:
I haven't seen this mentioned before, but the OCZ Vertex Turbo is
still an MLC-based SSD, and is /substantially/ inferior to an Intel
X25-E in terms of random write performance, which is what a ZIL
device does almost
in the case of NFS traffic.
In fact, I think that the Vertex's sustained random write IOPs
performance is actually inferior to a 15k SAS drive.
--
Erik Trimble
as a mirrored ZIL into the zpool.
It's a (relatively) simple and ingenious suggestion.
-Erik
On Wed, Dec 23, 2009 at 9:40 AM, Erik Trimble erik.trim...@sun.com wrote:
Charles Hedrick wrote:
Is ISCSI reliable enough for this?
YES.
The original idea is a good one, and one
--
Erik Trimble
in a straight-through cable between the
two machines is the best idea here, rather than going through a switch.
--
Erik Trimble
are made after a snapshot.
--
Erik Trimble
on the
SSD than reads are.
--
Erik Trimble
for that level of IOPS to wear out the SSDs
(which are likely OEM Intel X25-E). Something else is wrong.
--
Erik Trimble
, if your drive really is taking 10-15 seconds to remap bad
sectors, maybe you _should_ replace it.
--
Erik Trimble
the pool, remove the device, remake the pool, then reimport
the pool) to even bother with?
--
BP rewrite is key to several oft-asked features: vdev removal, defrag,
raidz expansion, among others.
--
Erik Trimble
with it for the time being. The
differences for something like FreeNAS are relatively minor, and it's
better to Go With What You Know. Exploring OpenSolaris for a future
migration would be good, but for right now, I'd stick to FreeBSD.
--
Erik Trimble
hosts, and it
auto-picks the correct c1t1d0 drive.
--
Erik Trimble
--
Erik Trimble
Miles Nordin wrote:
et == Erik Trimble erik.trim...@sun.com writes:
et I'd still get the 7310 hardware.
et Worst case scenario is that you can blow away the AmberRoad
okay but, AIUI he was saying pricing is 6% more for half as much
physical disk. This is also why
is that you can blow away the AmberRoad software
load, and install OpenSolaris/Solaris. The hardware is a standard X4140
and J4200.
Note that if you do that, well, you can't re-load A-R without a support
contract.
--
Erik Trimble
Erik Trimble wrote:
Miles Nordin wrote:
lz == Len Zaifman leona...@sickkids.ca writes:
lz So I now have 2 disk paths and two network paths as opposed to
lz only one in the 7310 cluster.
confused
You're configuring all your failover on the client, so the HA stuff
--
Erik Trimble
it boils down to is what
is the access time/throughput of a single local 15k SCSI drive vs a GigE
iSCSI volume?
--
Erik Trimble
be no gotchas on the zpool import (of course,
remember to zpool export from the original machines first as a good
practice).
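A minimal sketch of that sequence, using a hypothetical pool name:
# zpool export tank    (on the original machine; releases the pool cleanly)
# zpool import         (on the new machine; lists pools visible on attached devices)
# zpool import tank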
--
Erik Trimble
the 'shareiscsi' property BEFORE you export them (or, after you
import them, then reboot). This prevents a potential conflict between
the old iSCSI implementation and COMSTAR.
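Something like the following, assuming hypothetical names and the legacy
(pre-COMSTAR) shareiscsi property, presumably turned off before the move:
# zfs set shareiscsi=off tank/iscsivol   (stop the old-style iSCSI sharing)
# zpool export tank                      (now safe to move the pool)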
--
Erik Trimble
here for a Readzilla.
--
Erik Trimble
:-)
-- richard
oooh, then I must be ecstatically happy!
--
Erik Trimble
sold in.
--
Erik Trimble
c1t0d0s1 c1t1d0s1 c1t2d0s1
--
Erik Trimble
in
creating a new v10 filesystem on the 10u8 machine. However, you can't
send a v12 filesystem from the 10u8 machine to the 10u6 machine. If you
explicitly create a v10 filesystem on the 10u8 machine, you can send
that filesystem to the 10u6 machine.
I hope that's clear.
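As a rough sketch with hypothetical names (specifying the version at
creation time, as described above):
# zfs create -o version=10 tank/compat   (explicitly a v10 filesystem, even on 10u8)
# zfs get version tank/compat            (confirm it)
# zfs upgrade -v                         (list the filesystem versions this release supports)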
--
Erik Trimble
primary problem is
that I have to keep both schemes in memory during the migration, and if
something should happen (i.e. reboot, panic, etc) then I lose the
current state of the zpool, and everything goes to hell in a handbasket.
--
Erik Trimble
is to
offline (i.e. export) the whole pool, and then pray that nothing
interrupts the expansion process.
That all said, I'm not a /real/ developer, so maybe someone else has
some free time to try.
--
Erik Trimble
to complete the relevant transaction
to the calling software.
--
Erik Trimble
Victor Latushkin wrote:
Erik Trimble wrote:
ZFS no longer has the issue where loss of a single device (even
intermittently) causes pool corruption. That's been fixed.
Erik, it does not help at all when you are talking about some issue
being fixed but do not provide the corresponding CR number
the preferred method of arranging
things in ZFS, even with hardware raid backing the underlying LUN
(whether the LUN is from a SAN or local HBA doesn't matter).
--
Erik Trimble
clone' function is for. Clone your snapshot, promote it, and make
your modifications.
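A minimal sketch with hypothetical names:
# zfs clone tank/fs@snap1 tank/fs-edit   (writable clone of the snapshot)
# zfs promote tank/fs-edit               (so the clone no longer depends on the origin snapshot)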
--
Erik Trimble
, not a
single vdev.
--
Erik Trimble
for Sun kit, but I'd be very wary of using any
no-service-contract hardware for something that is business critical,
which I have to imagine your digital editing system is. Don't be
penny-wise and pound-foolish.
--
Erik Trimble
(I'd probably go with Socket AM3, with ECC, of course)
I'd sell them in both fully loaded with the Amber Road software (and
mandatory Service Contract), and no-OS Loaded, no-Service Contract
appliance versions.
--
Erik Trimble
the original boot environment.
--
Erik Trimble
in advance!!
Steffen
--
Erik Trimble
better.
--
Erik Trimble
Darren J Moffat wrote:
Erik Trimble wrote:
So SSDs for ZIL/L2ARC don't bring that much when used with
raidz2/raidz3,
if I write a lot, at least, and don't access the cache very much,
according
to some recent posts on this list.
Not true.
Remember: ZIL = write cache
ZIL is NOT a write
Carson Gaspar wrote:
Erik Trimble wrote:
I haven't seen this specific problem, but it occurs to me thus:
For the reverse of the original problem, where (say) I back up a 'zfs
send' stream to tape, then later on, after upgrading my system, I
want to get that stream back.
Does 'zfs receive
', and modifying
'zfs send' to be able to specify a zfs filesystem version during stream
creation. As per Lori's original RFE CR.
--
Erik Trimble
?
If not, frankly, that's a higher priority than the reverse.
--
Erik Trimble
best with groups of identical disks, and can be expanded by
adding groups of identical disks (not necessarily of the same size as
the originals).
Once again, please read the archives for more information about
expanding zpools.
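For instance (hypothetical device names), you expand by adding a whole
new raidz group as another top-level vdev:
# zpool add tank raidz c3t0d0 c3t1d0 c3t2d0   (new data stripes across old and new groups)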
--
Erik Trimble
Eric D. Mudama wrote:
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I put them all into one big array and have data
redundancy, etc? (RAID-Z?)
Yes. RAID-Z requires a minimum of 3 drives, and it can use
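As a hedged sketch of that answer (hypothetical devices; note RAID-Z
counts only the smallest member's size on each disk, so mixed sizes
waste the difference):
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0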
-height (CDROM size) form factor OR 3.5" form factor
(4) preferably SAS interface, though 3.0Gbps SATA is OK, too.
(5) battery backup
(6) sync to dedicated Compact Flash, Flash SSD, or hard drive on power
failure
(7) UNDER $500, without RAM.
--
Erik Trimble
use it for stuff that
is WORM (or at least, hardly ever changes). A sharable /usr/local or
/opt springs to mind...
--
Erik Trimble
that - several actual SAS connections in a
single plug. The other 6 ports next to them (in black) are SATA ports
connected to the ICH9R.
--
Erik Trimble
Master. No setting required.
--
Erik Trimble
from them support unregistered,
unbuffered ECC. I suspect it's the same for the other board makers, too.
--
Erik Trimble
Erik Trimble wrote:
I _believe_ all socket AM2, AM2+ and AM3 consumer chips (Phenom,
Phenom II, Athlon X2, Athlon X3 and Athlon X4) also support unbuffered
non-registered ECC. The AMD Specs page for the above processors
indicates I'm right about those CPUs.
Quick correction
into a cheap tape drive or
consider the external USB drive. In either case, your parents will need
to back up the machine nightly and take the tape/USB drive home with
them at night (and bring it back in the morning).
--
Erik Trimble
to the IOPS
rating for things, than the sync read/write speeds.
I'm testing that set up right now for iSCSI-based xVM guests, so we'll
see if it can stand the IOPs.
--
Erik Trimble
- i.e. $100 or so).
The Supermicro X7SBL-LN[12] boards also look good, though they won't
support the network KVM option.
--
Erik Trimble
--
Erik Trimble
7 x 1.364TiB ~ 9.546TiB
Lose 2.2% for ZFS overhead: 9.546TiB x 0.978 ~ 9.34TiB
That's today's math lesson!
:-)
--
Erik Trimble
space, give or take a hundred or two GB.
--
Erik Trimble
to
it for /any/ service outage. Large enough batteries to handle anything
more than a couple of minutes are frankly a fire-hazard for the home,
not to mention a maintenance PITA.
--
Erik Trimble
-rp' or 'rsync' is
a good idea.
We really should have something like 'zpool scrub' do this automatically.
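A hedged sketch of that manual rewrite, with hypothetical paths (the
copy pushes the data back through the pool's current settings):
# rsync -a /tank/data/ /tank/data.new/
# rm -rf /tank/data && mv /tank/data.new /tank/data   (only after verifying the copy!)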
--
Erik Trimble
under the hood (regardless of whose name is on the outside), I
_hope_ it was just a HD-specific firmware bug.
--
Erik Trimble
configured. With no
RAID devices configured, it runs as a pure HBA (i.e. in JBOD mode).
--
Erik Trimble
, AND the nVidia MCP55-based 6-port
SATA controller, no need for any more PCI-cards, and it supports the
add-in card for remote KVM console; it's a dual-socket, Extended ATX
size, though).
The MCP55 is the chipset currently in use in the Sun X2200 M2 series of
servers.
--
Erik Trimble
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must
reasonable total size (say 1MB or so). That way, you could get
reconstruction rates of 100MB/s (that is, reconstruct the parity for
100MB of data, NOT writing 100MB/s). 1TB of data @ 100MB/s is only 3
hours.
--
Erik Trimble
used blocks are rebuilt, but exactly what is the methodology being
used?
--
Erik Trimble
for Readzilla/Logzilla :
http://www.ocztechnology.com/products/flash_drives/ocz_summit_series_sata_ii_2_5-ssd
--
Erik Trimble
two
slices instead of whole-drives? That is, one slice for Read and the
other for ZIL?
My main concern is exactly how the on-drive cache would be used in a
two-slices configuration. In order to get decent performance, I really
need the on-drive cache to be used properly.
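The split-slice layout in question would be something like this
(hypothetical device/slice names, slices laid out beforehand with
format(1M)):
# zpool add tank log c2t0d0s0     (one slice as the ZIL / slog device)
# zpool add tank cache c2t0d0s1   (the other slice as L2ARC)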
--
Erik Trimble
Richard Elling wrote:
Erik Trimble wrote:
I just looked at pricing for the higher-end MLC devices, and it looks
like I'm better off getting a single drive of 2X capacity than two
with X capacity.
Leaving aside the issue that by using 2 drives I get 2 x 3.0Gbps
SATA performance instead of 1
James Lever wrote:
Hi Erik,
On 22/06/2009, at 1:15 PM, Erik Trimble wrote:
I just looked at pricing for the higher-end MLC devices, and it looks
like I'm better off getting a single drive of 2X capacity than two
with X capacity.
Leaving aside the issue that by using 2 drives I get 2 x
Erik Trimble wrote:
Fajar A. Nugraha wrote:
Are they feasible targets for zfs?
The N610N that I have (BCM3302, 300MHz, 64MB) isn't even powerful
enough to saturate either the gigabit wired or 802.11n wireless. It
only goes about 25Mbps.
Last time I tested on an EEPC 2G's Celeron, ZFS was slow
--
Erik Trimble
you?
Each OS has its strengths and weaknesses; pick your poison. It's
actually NOT a good idea for all OSes to have the same feature set.
--
Erik Trimble
you.
--
Erik Trimble
:
# zpool replace /some/path/here c0t4d0
You can do something similar for RAIDZ2 pools.
Obviously, you can only have 1 fake drive in a RAIDZ1 pool, and 2
fake drives in a RAIDZ2 pool.
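Spelled out end-to-end as a sketch (hypothetical names; mkfile -n makes
a sparse file, so it takes no real space):
# mkfile -n 1000g /other/fake                   (sparse file sized like the real disks)
# zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 /other/fake
# zpool offline tank /other/fake                (run degraded so nothing is ever written to it)
# zpool replace tank /other/fake c0t4d0         (later, swap in the real drive and resilver)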
--
Erik Trimble