Hello.
zpool lists my pool as having two disks with identical names. One
is offline, the other is online. How do I tell zpool to replace the
offline one? I want to replace the offline c5t5d0 with c5t2d0, but
when I run the command:
pfexec zpool replace -f space c5t5d0 c5t2d0
I get the
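When two vdevs report the same device name, one workaround (a sketch; the GUID shown below is a placeholder, not a real value — read the actual GUID from your own zpool status / zdb output) is to address the OFFLINE disk by its numeric vdev GUID, which zpool replace accepts in place of an ambiguous name:

```shell
# Find the GUID of the OFFLINE c5t5d0:
zpool status space
zdb -l /dev/dsk/c5t5d0s0 | grep -w guid
# Replace by GUID instead of by the ambiguous device name
# (the GUID here is a hypothetical placeholder):
pfexec zpool replace -f space 1234567890123456789 c5t2d0
```

Because the GUID is unique per vdev, zpool can no longer confuse the two same-named disks.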
On 18 Apr 2010, at 06.43, Richard Elling wrote:
On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Vrona
1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be
Hi.
I'm using two SIIG eSATA II PCIe PRO adapters on a Sun Ultra 24
workstation, too. The adapters are connected to four external eSATA
drives that made up a zpool used for scheduled back-up purposes. I'm
now running SXCE b129, live upgraded from b116. Before the live
upgrade the external disks
On 18 Apr 2010, at 00.52, Dave Vrona wrote:
Ok, so originally I presented the X-25E as a
reasonable approach. After reading the follow-ups,
I'm second guessing my statement.
Any decent alternatives at a reasonable price?
How much is reasonable? :-)
How about $1000 per device?
The Acard device mentioned in this thread looks interesting:
http://opensolaris.org/jive/thread.jspa?messageID=401719#401719
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
From: Richard Elling [mailto:richard.ell...@gmail.com]
On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
For zpool 19, which includes all present releases of Solaris 10 and
Opensolaris 2009.06, it is critical to mirror your ZIL log device. A
failed
unmirrored log device would
Or, DDRDrive X1 ? Would the X1 need to be mirrored?
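For reference, a single log device can also be mirrored after the fact; a sketch, assuming a pool named space and hypothetical device names:

```shell
# Attach a second device to an existing single-disk log vdev,
# turning it into a mirrored log:
pfexec zpool attach space c5t0d0 c5t1d0
# Or, when adding the slog in the first place, add it already mirrored:
#   pfexec zpool add space log mirror c5t0d0 c5t1d0
```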
Erik Trimble erik.trim...@oracle.com writes:
Since we're talking about an old PCI slot here, I'd say there are really
two good options:
A SiliconImage Sil3114-based card, which is a 32-bit/66MHz card, with
4 SATA-1 ports, usually for $25
A Supermicro AOC-SAT2-MV8 card, which is a
IMHO, whether a dedicated log device needs redundancy (mirrored), should
be determined by the dynamics of each end-user environment (zpool version,
goals/priorities, and budget).
If mirroring is deemed important, a key benefit of the DDRdrive X1 is the
HBA / storage device integration. For
re == Richard Elling richard.ell...@gmail.com writes:
A failed unmirrored log device would be the
permanent death of the pool.
re It has also been shown that such pools are recoverable, albeit
re with tedious, manual procedures required.
for the 100th time, No, they're not,
IMHO, whether a dedicated log device needs redundancy (mirrored), should
be determined by the dynamics of each end-user environment (zpool version,
goals/priorities, and budget).
Well, I populate a chassis with dual HBAs because my _perception_ is they tend
to fail more than other cards.
Carson Gaspar wrote:
I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to
California. That should be more than large enough for my ZIL @home,
based on zilstat.
The web site says EOL, limited to current stock.
On Sun, 18 Apr 2010, Carson Gaspar wrote:
Before (Mac OS 10.6.3 NFS client over GigE, local subnet, source file in
RAM):
carson:arthas 0 $ time tar jxf /Volumes/RamDisk/gcc-4.4.3.tar.bz2
real    92m33.698s
user    0m20.291s
sys     0m37.978s
That's awful!
carson:arthas 130 $ time tar
There is no definitive answer (yes or no) on whether to mirror a dedicated
log device, as reliability is one of many variables. This leads me to the
frequently given but never satisfying "it depends".
In a time when too many good questions go unanswered, let me take
advantage of our less rigid
On Sun, 18 Apr 2010, Christopher George wrote:
In summary, the DDRdrive X1 is designed, built and tested with immense
pride and an overwhelming attention to detail.
Sounds great. What performance does DDRdrive X1 provide for this
simple NFS write test from a single client over gigabit
On Sun, Apr 18, 2010 at 20:08, Harry Putnam rea...@newsguy.com wrote:
Seems like you can get some pretty large discrepancies in the sizes of
pools and directories.
They all answer different things, sure, but they're all things that an
administrator might want to know.
zpool list
How many bytes
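Part of the discrepancy is just accounting: on raidz pools, zpool list reports raw capacity including parity, while zfs list (and du) report usable space after parity. A toy calculation, assuming a hypothetical 4-disk raidz1 of 1 TB disks:

```shell
disks=4; disk_gb=1000
raw=$((disks * disk_gb))            # roughly what `zpool list` reports
usable=$(((disks - 1) * disk_gb))   # roughly what `zfs list` reports
echo "raw=${raw}GB usable=${usable}GB"
```

This prints raw=4000GB usable=3000GB: the two tools disagree by about one disk per raidz1 vdev, before compression and metadata overhead even enter the picture.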
re == Richard Elling richard.ell...@gmail.com writes:
re a well managed system will not lose zpool.cache or any other
re file.
I would complain this was circular reasoning if it weren't such
obvious chest-puffing bullshit.
It's normal even to the extent of being a best practice to have
So if the Intel X25E is a bad device- can anyone recommend an SLC device with
good firmware? (Or an MLC drive that performs as well?)
I've got 80 spindles in 5 16-bay drive shelves (76 15k RPM SAS drives in 19
4-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL)
connected
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
On Sun, 18 Apr 2010, Christopher George wrote:
In summary, the DDRdrive X1 is designed, built and tested with
immense
pride and an overwhelming attention to detail.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don
I've got 80 spindles in 5 16-bay drive shelves (76 15k RPM SAS drives
in 19 4-disk raidz sets, 2 hot spares, and 2 bays set aside for a
mirrored ZIL) connected to two servers (so if one
If you have a pair of heads talking to shared disks with ZFS- what can you do
to ensure the second head always has a current copy of the zpool.cache file?
I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't
import the pool on my second head.
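One way to sidestep the cache file question entirely (a sketch; pool and host names are hypothetical) is to have the standby head import by scanning the shared devices rather than trusting its copy of zpool.cache:

```shell
# On the standby head, import without relying on zpool.cache,
# scanning the shared devices directly:
pfexec zpool import -d /dev/dsk space
# Alternatively, manage the cache file explicitly and copy it across:
#   pfexec zpool set cachefile=/etc/zfs/zpool.cache space
#   scp /etc/zfs/zpool.cache standby:/etc/zfs/zpool.cache
```

The scan-based import is slower on large configurations but never depends on a stale cache.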
But if the X25E doesn't honor cache flushes then it really doesn't matter if
they are mirrored- they both may cache the data, not write it out, and leave me
screwed.
I'm running 2009.06 and not one of the newer developer candidates that handle
ZIL losses gracefully (or at all- at least as far
On Sun, Apr 18, 2010 at 07:02:38PM -0700, Don wrote:
If you have a pair of heads talking to shared disks with ZFS- what can you do
to ensure the second head always has a current copy of the zpool.cache file?
I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't
Hullo All:
I'm having a problem importing a ZFS pool. When I first built my fileserver
I created two VDEVs and a log device as follows:
raidz1-0   ONLINE
  c12t0d0  ONLINE
  c12t1d0  ONLINE
  c12t2d0  ONLINE
  c12t3d0  ONLINE
raidz1-2   ONLINE
  c12t4d0  ONLINE
I'm not sure to what you are referring when you say my running BE
I haven't looked at the zpool.cache file too closely but if the devices don't
match between the two systems for some reason- isn't that going to cause a
problem? I was really asking if there is a way to build the cache file
Harry Putnam wrote:
Erik Trimble erik.trim...@oracle.com writes:
Bottom line: if you can live without true hot-swap capability
(i.e. shutdown the machine to change a drive), then save yourself $75
and go with 2 3114 cards.
That sounds like it would do all I need. I currently have
On Sun, 18 Apr 2010, Edward Ned Harvey wrote:
This seems to be the test of the day.
time tar jxf gcc-4.4.3.tar.bz2
I get 22 seconds locally and about 6-1/2 minutes from an NFS client.
There's no point trying to accelerate your disks if you're only going to use
a single client over
3 - community edition
Andrew
On Apr 18, 2010, at 11:15 PM, Richard Elling richard.ell...@gmail.com wrote:
Nexenta version 2 or 3?
-- richard
On Apr 18, 2010, at 7:13 PM, Andrew Kener wrote:
Hullo All:
I'm having a problem importing a ZFS pool. When I first built my fileserver
I
Any comments on NexentaStor Community/Developer Edition vs EON for
NAS/small server/home server usage? It seems like Nexenta has been
around longer or at least received more press attention. Are there
strong reasons to recommend one over the other? (At one point usable
space would have been
On Sun, Apr 18, 2010 at 10:33:36PM -0500, Bob Friesenhahn wrote:
Probably the DDRDrive is able to go faster since it should have lower
latency than a FLASH SSD drive. However, it may have some bandwidth
limits on its interface.
It clearly has some. They're just as clearly well in excess
On Apr 18, 2010, at 7:02 PM, Don wrote:
If you have a pair of heads talking to shared disks with ZFS- what can you do
to ensure the second head always has a current copy of the zpool.cache file?
By definition, the zpool.cache file is always up to date.
I'd prefer not to lose the ZIL, fail
On Sun, Apr 18, 2010 at 07:37:10PM -0700, Don wrote:
I'm not sure to what you are referring when you say my running BE
Running boot environment - the filesystem holding /etc/zpool.cache
--
Dan.
On Mon, Apr 19, 2010 at 03:37:43PM +1000, Daniel Carosone wrote:
the filesystem holding /etc/zpool.cache
or, indeed, /etc/zfs/zpool.cache :-)
--
Dan.