s you are using, but I think that there are
recent updates to Solaris 10 and OpenSolaris which are supposed to
solve this problem (zfs blocking access to CPU by applications).
From Solaris 10 x86 (kernel 142901-09):
6586537 async zio taskqs can block out userland commands
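You can check whether that patch is installed with something like (assuming the stock Solaris 10 patch tools):
showrev -p | grep 142901
If it shows up at revision 09 or later, the fix should be present.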
Bob
traditional backup tools
like tar, cpio, etc…?
The whole stream will be rejected if a single bit is flipped. Tar and
cpio will happily barge on through the error.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
early enough that it will
always be there when needed. The profiling application might need to
drive a disk for several hours (or a day) in order to fully understand
how it behaves. Remapped failed sectors would cause this micro-timing
to fail, but only for the remapped sectors.
Bob
Perhaps if you
were talking about a maximum ambient temperature specification or
maximum allowed elevation, then a maximum specification makes sense.
Perhaps the device is fine (I have no idea) but these posted
specifications are virtually useless.
Bob
cache, even if it is just a 4K working buffer
representing one SSD erasure block.
Bob
under extremely ideal circumstances. It seems that toilet paper may be of much
more practical use than these specifications. In fact, I reject them
as being specifications at all.
The Apollo reentry vehicle was able to reach amazing speeds, but only
for a single use.
Bob
is necessarily best for this. Perhaps the
load issued to these two disks contains more random access requests.
Bob
supplying power to the
drive before the capacitor is sufficiently charged, and some circuitry
which shuts off the flow of energy back into the power supply when the
power supply shuts off (could be a silicon diode if you don't mind the
0.7 V drop).
Bob
adding this to your /etc/system file:
* Set device I/O maximum concurrency
* http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
set zfs:zfs_vdev_max_pending = 5
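The /etc/system setting only takes effect at the next boot. To inspect (or experimentally poke) the live value, something like this should work as root:
echo zfs_vdev_max_pending/D | mdb -k
echo zfs_vdev_max_pending/W0t5 | mdb -kw
The second command modifies the running kernel, so treat it as a temporary experiment.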
Bob
then be overwritten.
Bob
depends on
plenty of already erased blocks) would stop once the spare space in
the SSD has been consumed.
Bob
the
vendor has historically published reliable specification sheets.
This may not be the same as money in the bank, but it is better than
relying on thoughts from some blog posting.
Bob
blind assumptions in the above. The only good choice for a ZIL device
is one whose behavior you know with certainty, not from assumptions
based on third-party articles and blog postings. Otherwise it is like
assuming that if you jump through an open window there will be firemen
down below to catch you.
Bob
the
maximum number of files per directory all make a difference to how
much RAM you should have for good performance. If you have 200TB of
stored data, but only actually access 2GB of it at any one time, then
the caching requirements are not very high.
Bob
You're not
going to get this if any part of the log device is at the other side of a
WAN. So either add a mirror of log devices locally and not across the WAN,
or don't do it at all.
This depends on the nature of the WAN. The WAN latency may still be
relatively low as compared with drive latency.
would look just as bad. Regardless, the first step would be to
investigate 'sd5'. If 'sd4' is also a terrible performer, then
resilvering a replacement for 'sd5' may take a very long time.
Use 'iostat -xen' to obtain more information.
are using, and the
controller type.
It seems likely that one or more of your disks has been barely working
since the time of initial installation. Even 20 MB/s is quite slow.
Use 'iostat -x 30' with an I/O load to see if one disk is much slower
than the others.
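For example:
iostat -xn 30
(-n gives readable device names). Compare the asvc_t (average service time in milliseconds) and %b (percent busy) columns across devices; a disk whose asvc_t is far higher than its siblings is the likely culprit.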
Bob
unmounting the filesystem is sufficient. For
my own little cpio-based "benchmark", umount/mount was sufficient to
restore uncached behavior.
Bob
I certainly would not
want to use a system which is in this dire condition.
Bob
things up again by not caching file data
via the "unified page cache" and using a specialized ARC instead. It
seems that simple paging and MMU control were found not to be smart
enough.
Bob
/proc/sys/vm/swappiness.
Does anyone know how this is tuned in osol, btw?
While this is the zfs-discuss list, usually we are talking about
Solaris/OpenSolaris here rather than "most OSes". No?
Bob
With swap enabled, the kernel is given another degree of freedom, to choose
which is colder: idle process memory, or cold cached files.
Are you sure about this? It is always good to be sure ...
Bob
apps that need memory, but is vxfs
asking for memory in a way that zfs is pushing it into the corner?
Zfs does not push very hard (although it likes to consume).
Bob
be even better.
The good news is that if you do slice your disks, you can change your
mind in the future if/when you add more disks.
Bob
the number of simultaneous requests.
Currently available L2ARC SSD devices are very good with a high number
of I/Os, but they are quite a bottleneck for bulk reads as compared
with the L1 ARC in RAM.
Bob
useful to me.
Bob
e far
different. Solaris 10 can then play catch-up with the release of U9
in 2012.
Bob
percolate down to Solaris 10
so I don't feel as left out as Richard would like me to feel.
From a zfs standpoint, Solaris 10 does not seem to be behind the
currently supported OpenSolaris release.
Bob
although 'zpool status' does not mention it and says the disk is
resilvered. The flashing lights annoyed me, so I exported and imported
the pool and then the flashing lights were gone.
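For a pool named (say) 'tank', that was just:
zpool export tank
zpool import tank
Note that the pool is unavailable between the export and the import.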
Bob
is quite doable since it is the normal case when a system is
installed onto a mirrored pair of disks.
Bob
not like in OpenSolaris. Solaris 10 Live Upgrade is dramatically improved
in conjunction with zfs boot. I am not sure how far it is behind
OpenSolaris' new boot administration tools, but under zfs its function
cannot be terribly different.
Bob
90-day eval
entitlement) has expired. As a result, it is wise for Solaris 10
users to maintain a local repository of licensed patches in case their
service contract should expire.
Bob
would be somewhat problematic. Once the new drive is functional,
you can detach the failing one. Make sure that GRUB will boot from
the new drive.
Bob
will usually hit the (old) hardware less severely
than scrub. Resilver does not have to access any of the redundant
copies of data or metadata, unless they are the only remaining good
copy.
Bob
On Sun, 2 May 2010, Roy Sigurd Karlsbakk wrote:
Any guidance on how to do it? I tried to do zfs snapshot
You can't boot off raidz. That's for data only.
Unless you use FreeBSD ...
Bob
s in return for wisdom.
Bob
flaws, become dominant. The human factor is often the most significant
when it comes to data loss: most of the data loss problems reported
here are due to human error or hardware design flaws.
Bob
all", then you would be ok.
Bob
fragile and easily broken.
It is necessary to look at all the factors which might result in data
loss before deciding what the most effective steps are to minimize
the probability of loss.
Bob
Then
you would update to the minimum version required to support that
feature. Note that if the default filesystem version changes and you
create a new filesystem, this may also cause problems (I have been
bitten by that before).
Bob
If there is more than one level of redundancy,
scrubs are not really warranted. With just one level of redundancy it
becomes much more important to verify that both copies were written to
disk correctly.
Bob
that you will be able
to reinstall the OS and achieve what you had before. An exact
recovery method (dd of partition images or recreate pool with 'zfs
receive') seems like the only way to be assured of recovery moving
forward.
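As a rough sketch of the 'zfs receive' route (pool and host names are only examples):
zfs snapshot -r rpool@backup
zfs send -R rpool@backup | ssh backuphost zfs receive -Fdu backup/rpool
The -R flag preserves the whole dataset tree along with its properties, which is what makes the recovery exact.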
Bob
useful for discovering bit-rot in singly-redundant
pools.
Bob
many hours. I would rather have the disk finish resilvering before
I have a chance to replace the bad disk than risk more disks
failing before it has had a chance to resilver.
Would your opinion change if the disks you used took 7 days to
resilver?
Bob
possible to
create an older pool version using newer software. Of course, it is
also necessary to make sure that any created filesystems are a
sufficiently low version.
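For example (device names and version numbers are just illustrations):
zpool create -o version=15 tank mirror c0t0d0 c0t1d0
zfs create -o version=4 tank/fs
That should yield a pool and filesystem which an older zfs implementation can still import and mount.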
Bob
made that smaller, less capable, simplex-routed
shelves may be a more cost-effective and reliable solution when used
carefully with zfs. For example, mini-shelves which support 8 drives
each.
Bob
would remain alive. Likewise, there is
likely more I/O bandwidth available if the vdevs are spread across
controllers.
Bob
become IOPS bound.
It is true that all HDDs become IOPS bound, but the mirror
configuration offers more usable IOPS, and therefore the user waits
less time for a request to be satisfied.
Bob
immediately when the transfer starts?
Please do me a favor and check this for me.
Thanks,
Bob
form of replace like 'zpool replace
tank c1t1d0 c1t1d7'. If I understand things correctly, this allows
you to replace one good disk with another without risking the data in
your pool.
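Something like this, with 'zpool status' to watch the resilver (pool and device names from the example above):
zpool replace tank c1t1d0 c1t1d7
zpool status tank
Once the resilver completes, the old disk is detached from the pool automatically.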
Bob
disk resources. A pool based on mirror devices will behave much
more nicely while being scrubbed than one based on RAIDz2.
Bob
If there is space for the new drive, it is not necessary to
degrade the array and lose redundancy while replacing a device. As
long as you can physically add a drive to the system (even
temporarily) it is not necessary to deliberately create a fault
situation.
Bob
and the type of drives used.
All types of drives fail, but typical SATA drives fail more often.
Bob
Sweet! My primary thought
is that your working set is currently smaller than the available RAM.
Notice that this particular 'zpool iostat' is not showing any reads.
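For example (with 'tank' standing in for your pool):
zpool iostat -v tank 5
shows per-vdev operations and bandwidth every five seconds; a persistently zero read column suggests the reads are all being satisfied from the ARC.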
Bob
will still be available.
You are not going to get this using zfs. Zfs spreads the file data
across all of the drives in the pool. You need to have vdev
redundancy in order to be able to lose a drive without losing the
whole pool.
Bob
space for double the data, then I recommend creating
the destination directory and then using this in the source directory:
find . -depth -print | cpio -pdum /destdir
Using something like 'mv' would be even more dreadfully slow.
Thankfully, zfs is pretty quick at recursively removing files.
Bob
ces at a time
(2GB allocation each) on my system with no problem at all.
Bob
would also need to verify that this feature is not protected
by a patent.
Bob
not included.
Bob
trustworthy as your own temporary yahoo.com email account.
Next, someone with a yahoo.com email account will be posting that Ford
no longer supports round tires on their trucks. Statements to
the contrary will not be accepted unless they come from a @ford.com
address.
Bob
" (with FLASH backup) will behave
dramatically differently than a typical SSD.
If the SSD employed supports sufficient IOPS and bandwidth, then
adding more will not help since it is not the bottleneck.
Bob
the underlying store is concerned) in order to
assure proper ordering. This would result in a very high TXG issue
rate. Pool fragmentation would be increased.
I am sure that someone will correct me if this is wrong.
Bob
evaluate what really happens on
your system before you invest in extra hardware.
Bob
convert a mirror vdev into a
single-disk vdev. This means that you can upgrade your simple
"stripe" into a stripe of mirrors.
Bob
increases or decreases the
available data flow to the network.
Bob
different system, then the zpool.cache file
generated on that system will be different due to
differing device names and a different host name.
Bob
faster since it should have lower
latency than a FLASH SSD drive. However, it may have some bandwidth
limits on its interface.
Bob
ethernet?
This seems to be the test of the day.
time tar jxf gcc-4.4.3.tar.bz2
I get 22 seconds locally and about 6-1/2 minutes from an NFS client.
Bob
No SSDs are used for the intent log.
The StorageTek 2540 seems to offer 330MB of battery-backed cache per
controller.
Bob
desired behavior:
--inplace --no-whole-file
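A full invocation might look like (paths are examples):
rsync -a --inplace --no-whole-file /data/src/ /data/dst/
so that rsync updates only the changed blocks within files instead of rewriting whole files.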
Bob
properly to cache flush requests. The intent log is flushed
frequently. Previously some have reported (based on testing) that the
X-25E does not flush the write cache reliably when it is enabled. It
may be that some X-25E versions work better than others.
Bob
On Fri, 16 Apr 2010, Eric D. Mudama wrote:
On Fri, Apr 16 at 10:05, Bob Friesenhahn wrote:
It is much more efficient (from a housekeeping perspective) if filesystem
sectors map directly to SSD pages, but we are not there yet.
How would you stripe or manage a dataset across a mix of devices
performance advantages of
TRIM.
Bob
has to do some heavy lifting. It has to keep track of many
small "holes" in the FLASH pages. This seems pretty complicated since
all of this information needs to be well-preserved in non-volatile
storage.
Bob
This is also the reason why zfs encourages that all
vdevs use the same organization.
Bob
are the new 4K sector variety, even
you might notice.
Bob
how they behave. If they
behave well, then destroy the temporary pool and add the drives to
your main pool.
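As a sketch (pool and device names are examples):
zpool create testpool mirror c3t0d0 c3t1d0
zpool destroy testpool
zpool add tank mirror c3t0d0 c3t1d0
Exercise the temporary pool with some I/O load between the create and the destroy.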
Bob
block to the device. This reduces the
probability that the FLASH device will need to update an existing
FLASH block. COW increases the total amount of data written, but it
also reduces the FLASH read/update/re-write cycle.
Bob
e becomes a
pervasive part of the operating system. Every article I have read
about the value of TRIM is pure speculation.
Perhaps it will be found that TRIM has more value for SAN storage (to
reclaim space for accounting purposes) than for SSDs.
Bob
default? Please?
There are some situations where many reports may be sent per second,
so it is not necessarily wise for this to be enabled by default.
Bob
e (as used by media experts) when
discussing the feature on a Solaris list. ;-)
Bob
intermittent benchmark. Of
course, the background TRIM commands might clog other ongoing
operations, so they might hurt the benchmark.
Bob
much smarts or do very good
wear leveling, and these devices might benefit from the Windows 7 TRIM
command.
Bob
grade devices like USB
dongles and compact flash.
Bob
On Sat, 10 Apr 2010, Harry Putnam wrote:
Would you mind expanding the abbrevs: ssd, zil, l2arc?
SSD = Solid State Device
ZIL = ZFS Intent Log (log of pending synchronous writes)
L2ARC = Level 2 Adaptive Replacement Cache
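Adding them to a pool looks something like (device names are examples):
zpool add tank log c4t0d0
zpool add tank cache c4t1d0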
Bob
On Sat, 10 Apr 2010, Bob Friesenhahn wrote:
Since he is already using mirrors, he already has enough free space since he
can move one disk from each mirror to the "main" pool (which, unfortunately,
can't be the boot 'rpool' pool), send the data, and then move the sec
most convenient in the long run for the root pool
to be physically separate from the data storage though.
Bob
However, the maximum
synchronous bulk write rate will still be limited by the bandwidth of
your intent log devices. Huge synchronous bulk writes are pretty rare
since usually the bottleneck is elsewhere, such as the ethernet.
Bob
, even after reboots.
Is anyone willing to share what zfs version will be included with
Solaris 10 U9? Will graceful intent log removal be included?
Bob
X the
performance.
Luckily, since you are using mirrors, you can easily migrate disks
from your existing extra pools to the coalesced pool. Just make sure
to scrub first in order to have confidence that there won't be data
loss.
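That is, something like (for a pool named 'tank'):
zpool scrub tank
zpool status -v tank
and wait for the scrub to complete with no errors reported before pulling disks out of the mirrors.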
Bob
that reads/writes are
temporarily stalled during part of the TXG write cycle.
Bob
Now, in the Real Near Future when we have 1TB+ SSDs that are 1cent/GB, well,
then, it will be nice to swap up. But not until then...
I don't see that happening any time soon. FLASH is close to hitting
the wall on device geometries, and tri-level and quad-level cells only get
you so far. A ne
I did have to apply a source patch to the FreeBSD kernel
the last time around.
Bob
for someone to produce binaries.
Bob
Because of this, raidz2 should be seen as a way
to improve storage efficiency and data reliability, and not so much as
a way to improve sequential performance.
Bob
number of vdevs if you go for the two vdev solution.
With two vdevs and four readers, there will have to be disk seeking
for data even if the data is perfectly sequentially organized.
Bob
tions to reduce its effect.
Bob