From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Albert Frenz
Since I am really new to ZFS, I have two important questions for starting out.
I have a NAS up and running ZFS in stripe mode with 2x 1.5 TB HDDs. My
question regarding future-proofing would be,
From: Richard Elling [mailto:richard.ell...@gmail.com]
On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
For zpool <19, which includes all present releases of Solaris 10 and
OpenSolaris 2009.06, it is critical to mirror your ZIL log device. A failed
unmirrored log device would
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
On Sun, 18 Apr 2010, Christopher George wrote:
In summary, the DDRdrive X1 is designed, built and tested with immense
pride and an overwhelming attention to detail.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don
I've got 80 spindles in five 16-bay drive shelves (76 15k RPM SAS drives
in 19 four-disk raidz sets, 2 hot spares, and 2 bays set aside for a
mirrored ZIL) connected to two servers (so if one
From: Richard Elling [mailto:richard.ell...@gmail.com]
Um ... All the same time.
Even if I stat those directories ...
Access: Modify: and Change: are all useless...
which is why you need to stat the destination :-)
Ahh. I see it now.
By stat'ing the destination instead of the
From: Erik Trimble [mailto:erik.trim...@oracle.com]
So the suggestion, or question, is: Is it possible or planned to implement a
rollback command that works as fast as a link or re-link operation,
implemented at a file or directory level instead of on the entire
filesystem?
so why
From: Ian Collins [mailto:i...@ianshome.com]
But it is a fundamental of zfs:
snapshot
A read-only version of a file system or volume at a
given point in time. It is specified as filesystem@name
or volume@name.
Erik Trimble's assessment that it
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Vrona
1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be
mirrored?
IMHO, the best answer to this question is the one from the ZFS Best
Practices guide. (I wrote
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Vrona
2) ZIL write cache. It appears some have disabled
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tonmaus
are the drives properly configured in cfgadm?
I agree. You need to do these:
devfsadm -Cv
cfgadm -al
AFAIK, if you want to restore a snapshot version of a file or directory, you
need to use cp or such commands, to copy the snapshot version into the
present. This is not done in-place, meaning, the cp or whatever tool must
read the old version of objects and write new copies of the objects. You
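For illustration, a minimal sketch of that kind of copy-back restore (the dataset, snapshot, and file names here are hypothetical):

  # snapshot contents are exposed read-only under .zfs/snapshot
  cp /tank/home/.zfs/snapshot/2010-04-01/report.txt /tank/home/report.txt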
If you've got nested zfs filesystems, and you're in some subdirectory where
there's a file or something you want to rollback, it's presently difficult
to know how far back up the tree you need to go, to find the correct .zfs
subdirectory, and then you need to figure out the name of the snapshots
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is available
again.
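A hedged sketch of the usual cleanup in that scenario (dataset and snapshot names are hypothetical):

  # list snapshots with the space each one pins, smallest to largest
  zfs list -t snapshot -o name,used -s used -r tank/home
  # destroy the snapshots that still reference the deleted files
  zfs destroy tank/home@2010-03-01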
It would be nice if you could rm files from snapshots, without
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Willard Korfhage
devfsadm -Cv gave a lot of "removing file" messages, apparently for items
that were not relevant.
That's good. If there were no necessary changes, devfsadm would say nothing.
From: Richard Elling [mailto:richard.ell...@gmail.com]
There are some interesting design challenges here. For the general case, you
can't rely on the snapshot name to be in time order, so you need to sort by
the mtime of the destination.
Actually ...
drwxr-xr-x 16 root root 20 Mar 29
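A related sketch: ZFS can also report and sort by each snapshot's creation time directly, which avoids depending on names or directory mtimes (the dataset name is hypothetical):

  zfs list -t snapshot -o name,creation -s creation -r tank/home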
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Not to be a contrary person, but the job you describe above is properly
the duty of a BACKUP system. Snapshots *aren't* traditional backups,
though some people use them as such. While I see no technical reason
why snapshots couldn't
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Friday, April 16, 2010 7:35 PM
Doesn't that defeat the purpose of a snapshot?
Eric hits the nail right on the head: you *don't* want to support such a
feature, as it breaks the fundamental assumption about what a snapshot is
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
you should send your snapshots to backup and clean them out from
time to time anyways.
When using ZFS as a filesystem on a fileserver, the desired auto-snapshot
configuration is something like:
Every 15 mins for the
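A hedged sketch of how such a schedule is commonly wired up with the OpenSolaris auto-snapshot service (dataset name hypothetical; assumes the zfs-auto-snapshot/Time Slider services are enabled):

  # opt the dataset in to auto-snapshots, including the frequent (every-15-minute) schedule
  zfs set com.sun:auto-snapshot=true tank/home
  zfs set com.sun:auto-snapshot:frequent=true tank/home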
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of John
Just to add more details, the issue only occurred for the first direct
access to the file.
From a Windows client that has never accessed the file, you can issue:
dir
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Daniel
I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of
research but can't find anything on what I need.
I am thinking of making myself a home file server running
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eric D. Mudama
I believe the reason strings of bits leak on rotating drives you've
overwritten (other than grown defects) is because of minute off-track
occurrences while writing (vibration,
Carson Gaspar wrote:
Does anyone who understands the internals better than I do care to take a
stab at what happens if:
- ZFS writes data to /dev/foo
- /dev/foo loses power and the data from the above write is not yet
flushed to rust (say a field tech pulls the wrong drive...)
- /dev/foo
From: Tim Cook [mailto:t...@cook.ms]
Awesome! Thanks for letting us know the results of your tests Ed,
that's extremely helpful. I was actually interested in grabbing some
of the cheaper intel SSD's for home use, but didn't want to waste my
money if it wasn't going to handle the various
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Thanks for the testing. So FINALLY with version 19 does ZFS
demonstrate production-ready status in my book. How long is it going to
take Solaris to catch up?
Oh, it's been production-worthy for some time - just don't use
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
r...@karlsbakk.net wrote:
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool
once it's been snapshotted, meaning delete (and perhaps overwrite) all
copies of this file?
No, until all snapshots
From: Richard Elling [mailto:richard.ell...@gmail.com]
On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
In the event a pool is faulted, I wish you didn't have to power cycle the
machine. Let all the zfs filesystems that are in that pool simply
disappear, and when somebody does
From: Daniel Carosone [mailto:d...@geek.com.au]
Please look at the pool property failmode. Both of the preferences
you have expressed are available, as well as the default you seem so
unhappy with.
I ... did not know that. :-)
Thank you.
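For reference, a hedged sketch of inspecting and changing that property (the pool name is hypothetical; the documented values are wait, continue, and panic):

  zpool get failmode tank
  zpool set failmode=continue tank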
Due to recent experiences, and discussion on this list, my colleague and I
performed some tests:
Using Solaris 10, fully upgraded (zpool 15 is the latest, which does not have
the log device removal that was introduced in zpool 19). In any way possible,
you lose an unmirrored log device, and the OS
Neil or somebody? Actual ZFS developers? Taking feedback here? ;-)
While I was putting my poor little server through cruel and unusual
punishment as described in my post a moment ago, I noticed something
unexpected:
I expected that while I'm stressing my log device by infinite sync
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
I don't think that the BIOS and rebooting part ever has to be true,
at least I don't hope so. You shouldn't have to reboot just because
you replace a hot plug disk.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
I don't know how to identify what card is installed in your system.
Actually, this is useful:
prtpicl -v | less
Search for RAID. On my system, I get this snippet (out
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eric Andersen
I back up my pool to 2 external 2TB drives that are simply striped, using
zfs send/receive followed by a scrub. As of right now, I only have
1.58TB of actual data. ZFS send
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Daniel Bakken
My zfs filesystem hangs when transferring large filesystems (500GB)
with a couple dozen snapshots between servers using zfs send/receive
with
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeroen Roodhart
If you're running Solaris proper, you'd better mirror your ZIL log device.
...
I plan to get to test this as well; won't be until late next week though.
Running
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
If you have a system crash, *and* a failed log device at the same time, this
is an important
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
It is also worth pointing out that in normal operation the slog is
essentially a write-only device which is only read at boot time. The
writes are assumed to work if the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chris Dunbar
like to clarify something. If read performance is paramount, am I
correct in thinking RAIDZ is not the best way to go? Would not the ZFS
equivalent of RAID 10 (striped mirror
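For illustration, a hedged sketch of the ZFS "RAID 10" layout being asked about (device names are hypothetical):

  # two 2-way mirrors, striped together by the pool
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0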
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to look
into the multi-platform pkgsrc for third-party open source software:
http://www.pkgsrc.org/
I have reason to believe that both the drive and the OS are correct.
I suspect that the HBA simply handled the creation of this volume somehow
differently than how it handled the original. Don't know the answer for
sure yet.
Ok, that's confirmed now. Apparently when the drives ship
We ran into something similar with these drives in an X4170 that turned out
to be an issue of the preconfigured logical volumes on the drives. Once we
made sure all of our Sun PCI HBAs were running the exact same version of
firmware and recreated the volumes on new drives arriving
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
I have a problem with my zfs system; it's getting slower and slower
over time. When the OpenSolaris machine is rebooted and just started, I
get about 30-35MB/s in read and
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
I would like to remove the two SSDs as log devices from the pool and
instead add them as a separate pool for sole use by the database, to
see how this enhances performance.
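A hedged sketch of what that could look like, assuming the pool version supports log device removal (device names are taken from later in the thread; the new pool name is hypothetical):

  # detach the SSDs from tank's log, then build a small dedicated pool from them
  zpool remove tank c1t6d0
  zpool remove tank c1t7d0
  zpool create fastpool mirror c1t6d0 c1t7d0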
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
pool: s1
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        s1          ONLINE       0     0     0
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
Thanks for the clarification! This is very annoying. My intent was to
create a log mirror. I used
zpool add tank log c1t6d0 c1t7d0
and this was obviously wrong.
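For comparison, a hedged sketch of the syntax that would have produced a mirrored log instead of two independent log devices:

  zpool add tank log mirror c1t6d0 c1t7d0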
From: Kyle McDonald [mailto:kmcdon...@egenera.com]
So does your HBA have newer firmware now than it did when the first disk
was connected?
Maybe it's the HBA that is handling the new disks differently now than
it did when the first one was plugged in?
Can you down-rev the HBA FW? Do you
When running the card in copyback write cache mode, I got horrible
performance (with zfs), much worse than with copyback disabled
(which I believe should mean it does write-through), when tested
with filebench.
When I benchmark my disks, I also find that the system is slower with
WriteBack
Your experience is exactly why I suggested ZFS start doing some
right-sizing, if you will. Chop off a bit from the end of any disk so that
we're guaranteed to be able to replace drives from different
manufacturers. The excuse being there's no reason to, since Sun drives are
always of identical size. If
CR 6844090, zfs should be able to mirror to a smaller disk
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
b117, June 2009
Awesome. Now if someone would only port that to Solaris, I'd be a happy
man. ;-)
Hmm, when you did the write-back test, was the ZIL SSD included in the
write-back?
What I was proposing was write-back only on the disks, and ZIL SSD with
no write-back.
The tests I did were:
All disks write-through
All disks write-back
With/without SSD for ZIL
All the permutations of the
Actually, it's my experience that Sun (and other vendors) do exactly
that for you when you buy their parts - at least for rotating drives; I
have no experience with SSDs.
The Sun disk label shipped on all the drives is set up to make the drive
the standard size for that Sun part number.
There is some question about performance. Is there any additional
overhead caused by using a slice instead of the whole physical device?
No.
If the disk is only used for ZFS, then it is OK to enable volatile disk
write caching if the disk also supports write cache flush requests.
If
I haven't taken that approach, but I guess I'll give it a try.
From: Tim Cook [mailto:t...@cook.ms]
Sent: Sunday, April 04, 2010 11:00 PM
To: Edward Ned Harvey
Cc: Richard Elling; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] To slice, or not to slice
On Sun, Apr 4, 2010
Momentarily, I will begin scouring the omniscient interweb for information, but
I'd like to know a little bit of what people would say here. The question is
to slice, or not to slice, disks before using them in a zpool.
One reason to slice comes from recent personal experience. One disk of a
One reason to slice comes from recent personal experience. One disk of
a mirror dies. Replaced under contract with an identical disk. Same
model number, same firmware. Yet when it's plugged into the system,
for an unknown reason, it appears 0.001 GB smaller than the old disk,
and
And finally, does anyone have experience doing this, and any process
recommendations? That is
My next task is to go read documentation
again, to refresh my memory from years ago, about the difference
between format, partition, label, and fdisk, because those terms
don't have the same meaning
On Apr 2, 2010, at 2:29 PM, Edward Ned Harvey wrote:
I've also heard that the risk for unexpected failure of your pool is
higher if/when you reach 100% capacity. I've heard that you should
always create a small ZFS filesystem within a pool, and give it some
reserved space, along
I would return the drive to get a bigger one before doing something as
drastic as that. There might have been a hiccup in the production line,
and that's not your fault.
Yeah, but I already have 2 of the replacement disks, both doing the same
thing. One has a firmware newer than my old disk
Your original zpool status says that this pool was last accessed on
another system, which I believe is what caused the pool to fail,
particularly if it was accessed simultaneously from two systems.
The message "last accessed on another system" is the normal behavior if the
pool is
On OpenSolaris? Did you try deleting any old BEs?
Don't forget to zfs destroy rp...@snapshot
In fact, you might start with destroying snapshots ... if there are any
occupying space.
Seriously, all disks configured WriteThrough (spindle and SSD disks alike)
using the dedicated ZIL SSD device is very noticeably faster than enabling
the WriteBack.
What do you get with both SSD ZIL and WriteBack disks enabled?
I mean, if you have both, why not use both? Then both
I know it is way after the fact, but I find it best to coerce each
drive down to a whole-GB boundary using format (create a Solaris
partition just up to the boundary). Then if you ever get a drive a
little smaller, it should still fit.
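As a hedged sketch of using such slices once they exist (device and slice names are hypothetical; the slice itself would be created interactively with format/partition first):

  # check the existing label/geometry before deciding on the slice size
  prtvtoc /dev/rdsk/c1t0d0s2
  # build the mirror from the slightly undersized slices rather than the whole disks
  zpool create tank mirror c1t0d0s0 c1t1d0s0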
It seems like it should be unnecessary. It seems like
http://nfs.sourceforge.net/
I think B4 is the answer to Casper's question:
We were talking about ZFS, and under what circumstances data is flushed to
disk, in what way sync and async writes are handled by the OS, and what
happens if you disable ZIL and lose power to your system.
We were
I am envisioning a database which issues a small sync write, followed by a
larger async write. Since the sync write is small, the OS would prefer to
defer the write and aggregate it into a larger block. So the possibility of
the later async write being committed to disk before the older
Hello,
I have had this problem this week. Our ZIL SSD died (APT SLC SSD, 16GB).
Because we had no spare drive in stock, we ignored it.
Then we decided to update our Nexenta 3 alpha to beta, exported the
pool, and made a fresh install to have a clean system, and tried to
import the pool. We
ZFS recovers to a crash-consistent state, even without the slog,
meaning it recovers to some state through which the filesystem passed
in the seconds leading up to the crash. This isn't what UFS or XFS
do.
The on-disk log (slog or otherwise), if I understand right, can
actually make the
If you have a zpool less than version 19 (when the ability to remove a log
device was introduced) and you have a non-mirrored log device that failed,
you had better treat the situation as an emergency.
Instead, do "man zpool" and look for "zpool remove."
If it says it supports removing log devices
Dude, don't be so arrogant. Acting like you know what I'm talking about
better than I do. Face it that you have something to learn here.
You may say that, but then you post this:
Acknowledged. I read something arrogant, and I replied even more arrogantly.
That was dumb of me.
Only a broken application uses sync writes
sometimes, and async writes at other times.
Suppose there is a virtual machine, with virtual processes inside it. Some
virtual process issues a sync write to the virtual OS, meanwhile another
virtual process issues an async write. Then the virtual OS
The purpose of the ZIL is to act like a fast log for synchronous
writes. It allows the system to quickly confirm a synchronous write
request with the minimum amount of work.
Bob and Casper and some others clearly know a lot here. But I'm hearing
conflicting information, and don't know what
with the filesystem that you actually plan to use in your pool. Anyone care
to offer any comments on that?
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Sent: Friday, April 02, 2010 5:23 PM
To: zfs-discuss@opensolaris.org
Subject
If you disable the ZIL, the filesystem still stays correct in RAM, and the
only way you lose any data such as you've described is to have an
ungraceful power down or reboot.
The advice I would give is: Do zfs autosnapshots frequently (say ... every
5 minutes, keeping the most recent 2
Can you elaborate? Just today, we got the replacement drive that has
precisely the right version of firmware and everything. Still, when we
plugged in that drive and created a simple volume in the StorageTek RAID
utility, the new drive is 0.001 GB smaller than the old drive. I'm still
If you have an ungraceful shutdown in the middle of writing stuff, while
the ZIL is disabled, then you have corrupt data. Could be files that are
partially written. Could be wrong permissions or attributes on files.
Could be missing files or directories. Or some other problem.
Some
This approach does not solve the problem. When you do a snapshot,
the txg is committed. If you wish to reduce the exposure to loss of
sync data and run with ZIL disabled, then you can change the txg commit
interval -- however changing the txg commit interval will not eliminate
the
Is that what sync means in Linux?
A sync write is one in which the application blocks until the OS acks that
the write has been committed to disk. An async write is given to the OS,
and the OS is permitted to buffer the write to disk at its own discretion.
Meaning the async write function
Use something other than Open/Solaris with ZFS as an NFS server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds, of threads.
Getting halfway decent performance from NFS
Nobody knows any way for me to remove my unmirrored
log device. Nobody knows any way for me to add a mirror to it (until
Since snv_125 you can remove log devices. See
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
I've used this all the time during my testing and was able to
Would your users be concerned if there was a possibility that,
after extracting a 50 MB tarball, files are incomplete, whole
subdirectories are missing, or file permissions are incorrect?
Correction: Would your users be concerned if there was a possibility that
after extracting a 50MB
I did those tests and here are the results:
r...@sl-node01:~# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
mypool01            91.9G   136G    23K  /mypool01
mypool01/storage01  91.9G   136G  91.7G  /mypool01/storage01
I see the source of some confusion. On the ZFS Best Practices page:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
Failure of the log device may cause the storage pool to be inaccessible
if you are running the Solaris Nevada release prior to build 96 and
A MegaRAID card with write-back cache? It should also be cheaper than
the F20.
I haven't posted results yet, but I just finished a few weeks of extensive
benchmarking of various configurations. I can say this:
WriteBack cache is much faster than naked disks, but if you can buy an SSD
or two for
We ran into something similar with these drives in an X4170 that turned out
to be an issue of the preconfigured logical volumes on the drives. Once we
made sure all of our Sun PCI HBAs were running the exact same version of
firmware and recreated the volumes on new drives arriving from
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Or to generate the list of files for incremental backups via NetBackup
or similar. This is especially important
But the speedup of disabling the ZIL altogether is appealing (and would
probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed cpu, or kernel halt, then you're likely to
have a corrupt unusable
standard ZIL: 7m40s (ZFS default)
1x SSD ZIL: 4m07s (Flash Accelerator F20)
2x SSD ZIL: 2m42s (Flash Accelerator F20)
2x SSD mirrored ZIL: 3m59s (Flash Accelerator F20)
3x SSD ZIL: 2m47s (Flash Accelerator F20)
4x SSD
The problem that I have now is that each created snapshot is always
equal to zero... zfs is just not storing the changes that I have made to
the file system before making a snapshot.
r...@sl-node01:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool01
Again, we can't get a straight answer on this one...
(or at least not one straight answer...)
Since the ZIL logs are committed atomically, they are either committed
in FULL or NOT at all (by way of rollback of incomplete ZIL applies at
zpool mount time, or transaction rollbacks if things
Anyway, my question is, [...]
as expected I can't import it because the pool was created
with a newer version of ZFS. What options are there to import?
I'm quite sure there is no option to import or receive or downgrade a zfs
filesystem from a later version. I'm pretty sure your only option
If the ZIL device goes away then zfs might refuse to use the pool
without user affirmation (due to potential loss of uncommitted
transactions), but if the dedicated ZIL device is gone, zfs will use
disks in the main pool for the ZIL.
This has been clarified before on the list by top zfs
So you think it would be OK to shut down, physically remove the log
device, and then power back on again, and force-import the pool? So
although there may be no live way to remove a log device from a pool, it
might still be possible if you offline the pool to ensure writes are all
completed
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed cpu, or kernel halt, then you're likely
to have a corrupt unusable zpool, or at least data corruption. If that is
indeed acceptable to you, go nuts. ;-)
I believe that the
You can't share a device (either as ZIL or L2ARC) between multiple pools.
Discussion here some weeks ago suggested that an L2ARC device was used
for all ARC evictions, regardless of the pool.
I'd very much like an authoritative statement (and corresponding
documentation
Using fewer than 4 disks in a raidz2 defeats the purpose of raidz2, as
you will always be in a degraded mode.
Freddie, are you nuts? This is false.
Sure you can use raidz2 with 3 disks in it. But it does seem pointless to do
that instead of a 3-way mirror.
Coolio. Learn something new every day. One more way that raidz is
different from RAID5/6/etc.
Freddie, again, you're wrong. Yes, it's perfectly acceptable to create either
raid-5 or raidz using 2 disks. It's not degraded, but it does seem pointless
to do this instead of a mirror.
Just because most people are probably too lazy to click the link, I’ll paste a
phrase from that sun.com webpage below:
“Creating a single-parity RAID-Z pool is identical to creating a mirrored pool,
except that the ‘raidz’ or ‘raidz1’ keyword is used instead of ‘mirror’.”
And
“zpool create
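To make the comparison concrete, a hedged sketch of the two commands being contrasted (device names are hypothetical):

  # single-parity raidz across three disks
  zpool create tank raidz c1t0d0 c2t0d0 c3t0d0
  # the same three disks as a 3-way mirror instead
  zpool create tank mirror c1t0d0 c2t0d0 c3t0d0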
OK, I have 3Ware looking into a driver for my cards (3ware 9500S-8) as
I don't see an OpenSolaris driver for them.
But this tells me that they do have a FreeBSD driver, so I could still
use ZFS.
What does everyone think about that? I bet it is not as mature as on
OpenSolaris.
mature is
It seems like the zpool export will quiesce the drives and mark the pool
as exported. This would be good if we wanted to move the pool at that
time, but we are thinking of a disaster recovery scenario. It would be
nice to export just the config, so that if our controller dies, we can
use the
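Worth noting as a hedged aside: the pool configuration also lives in the on-disk labels, so after a controller or host failure the pool can usually be brought up without a prior export (pool name hypothetical):

  # scan attached devices for importable pools
  zpool import
  # force the import if the pool is flagged as last used on another system
  zpool import -f tank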
In the "Thoughts on ZFS Pool Backup Strategies" thread it was stated
that zfs send sends uncompressed data and uses the ARC.
If zfs send sends uncompressed data which has already been compressed,
this is not very efficient, and it would be *nice* to see it send the
original compressed data (or an
While I use zfs with FreeBSD (FreeNAS appliance with 4x SATA 1 TByte
drives), it is trailing OpenSolaris by at least a year if not longer and
hence lacks many of the key features for which people pick zfs over other
file systems. The performance, especially CIFS, is quite lacking.
Purportedly (I have never
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Can you share it?
Sure. Just go to http://nedharvey.com and you'll see four links on the left
side,