and the type of drives used.
All types of drives fail, but typical SATA drives fail more often.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
If there is space for the new drive, it is not necessary to
degrade the array and lose redundancy while replacing a device. As
long as you can physically add a drive to the system (even
temporarily), it is not necessary to deliberately create a fault
situation.
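A minimal sketch of what I mean (pool and device names here are
only examples):

   # Replace a still-healthy disk with a temporarily-added new one;
   # redundancy is preserved throughout the resilver.
   zpool replace tank c1t1d0 c1t2d0
   zpool status tank     # watch the resilver progress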
Bob
disk resources. A pool based on mirror devices will behave much
more nicely while being scrubbed than one based on RAIDz2.
Bob
form of replace, like 'zpool replace
tank c1t1d0 c1t1d7'. If I understand things correctly, this allows
you to replace one good disk with another without risking the data in
your pool.
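For illustration (names are hypothetical): while such a replace is
underway, 'zpool status' shows both disks grouped under a temporary
'replacing' vdev until the resilver completes:

   zpool replace tank c1t1d0 c1t1d7
   zpool status tank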
Bob
immediately when the transfer starts?
Please do me a favor and check this for me.
Thanks,
Bob
become IOPS bound.
It is true that all HDDs become IOPS bound, but the mirror
configuration offers more usable IOPS, and therefore the user waits
less time for their request to be satisfied.
Bob
would remain alive. Likewise, there is
likely more I/O bandwidth available if the vdevs are spread across
controllers.
Bob
made that smaller, less capable simplex-routed
shelves may be a more cost-effective and reliable solution when used
carefully with zfs; for example, mini-shelves which support 8 drives
each.
Bob
possible to
create an older pool version using newer software. Of course, it is
also necessary to make sure that any created filesystems are a
sufficiently low version.
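A sketch of doing that at creation time (version numbers and names
are placeholders; 'zpool upgrade -v' and 'zfs upgrade -v' list what
your software supports):

   # Create a pool at an older on-disk version for compatibility:
   zpool create -o version=15 tank mirror c1t1d0 c1t2d0
   # Create the filesystem at an older version too (assuming your
   # release allows setting the version property at creation):
   zfs create -o version=3 tank/home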
Bob
many hours. I would rather have the disk finish resilvering before
I have the chance to replace the bad disk than risk more disks
failing before it has a chance to resilver.
Would your opinion change if the disks you used took 7 days to
resilver?
Bob
useful for discovering bit-rot in singly-redundant
pools.
Bob
that you will be able
to reinstall the OS and achieve what you had before. An exact
recovery method (dd of partition images or recreate pool with 'zfs
receive') seems like the only way to be assured of recovery moving
forward.
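A sketch of the 'zfs receive' approach (pool names and paths are
hypothetical):

   # Capture a recursive replication stream of the root pool:
   zfs snapshot -r rpool@backup
   zfs send -R rpool@backup > /backup/rpool.zsend
   # After reinstalling and recreating the pool, restore everything:
   zfs receive -Fd rpool < /backup/rpool.zsend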
Bob
If there is more than one level of redundancy,
scrubs are not really warranted. With just one level of redundancy it
becomes much more important to verify that both copies were written to
disk correctly.
Bob
Then
you would update to the minimum version required to support that
feature. Note that if the default filesystem version changes and you
create a new filesystem, this may also cause problems (I have been
bit by that before).
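For example (version numbers are placeholders):

   # Upgrade the pool only to the minimum version the feature needs:
   zpool upgrade -V 15 tank
   # Likewise for the filesystem version:
   zfs upgrade -V 3 tank/home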
Bob
fragile and easily broken.
It is necessary to look at all the factors which might result in data
loss before deciding what the most effective steps are to minimize
the probability of loss.
Bob
all", then you would be ok.
Bob
flaws, become dominant. The human factor is
often the most significant when it comes to data loss, since most
data loss is still due to human error. Most of the problems we see
reported here are due to human error or hardware design flaws.
Bob
s in return for wisdom.
Bob
On Sun, 2 May 2010, Roy Sigurd Karlsbakk wrote:
Any guidance on how to do it? I tried to do zfs snapshot
You can't boot off raidz. That's for data only.
Unless you use FreeBSD ...
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/user
Resilver will usually hit the (old) hardware less severely
than scrub. Resilver does not have to access any of the redundant
copies of data or metadata, unless they are the only remaining good
copy.
Bob
would be somewhat problematic. Once the new drive is functional,
you can detach the failing one. Make sure that GRUB will boot from
the new drive.
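A sketch for a mirrored root pool on Solaris x86 (device names are
hypothetical):

   # Attach the new drive as a mirror of the failing one:
   zpool attach rpool c0t0d0s0 c0t1d0s0
   # ... wait until 'zpool status rpool' shows the resilver is done ...
   zpool detach rpool c0t0d0s0
   # Install GRUB on the new drive so it can boot the system:
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0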
Bob
90-day eval
entitlement) has expired. As a result, it is wise for Solaris 10
users to maintain a local repository of licensed patches in case their
service contract should expire.
Bob
like in OpenSolaris. Solaris 10 Live Upgrade is dramatically improved
in conjunction with zfs boot. I am not sure how far behind it is from
OpenSolaris' new boot administration tools, but under zfs its function
cannot be terribly different.
Bob
It is quite doable since it is the normal case, as when a system is
installed onto a mirrored pair of disks.
Bob
though 'zpool status' does not mention it and says the disk is
resilvered. The flashing lights annoyed me, so I exported and imported
the pool, and then the flashing lights were gone.
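That is, simply (pool name is an example):

   zpool export tank
   zpool import tank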
Bob
percolate down to Solaris 10
so I don't feel as left out as Richard would like me to feel.
From a zfs standpoint, Solaris 10 does not seem to be behind the
currently supported OpenSolaris release.
Bob
e far
different. Solaris 10 can then play catch-up with the release of U9
in 2012.
Bob
ful to me.
Bob
the number of simultaneous requests.
Currently available L2ARC SSD devices handle a high number of I/Os
very well, but they are quite a bottleneck for bulk reads as compared
to the L1ARC in RAM.
Bob
be even better.
The good news is that if you do slice your disks, you can change your
mind in the future if/when you add more disks.
Bob
apps that need memory, but is vxfs
asking for memory in a way that zfs is pushing it into the corner?
Zfs does not push very hard (although it likes to consume).
Bob
swap enabled, the kernel is given another degree of freedom, to choose
which is colder: idle process memory, or cold cached files.
Are you sure about this? It is always good to be sure ...
/proc/sys/vm/swappiness.
Does anyone know how this is tuned in osol, btw?
While this is the zfs-discuss list, usually we are talking about
Solaris/OpenSolaris here rather than "most OSes". No?
Bob
things up again by not caching file data
via the "unified page cache" and using a specialized ARC instead. It
seems that simple paging and MMU control were found not to be smart
enough.
Bob
I certainly would not
want to use a system which is in this dire condition.
Bob
mounting the filesystem is sufficient. For
my own little cpio-based "benchmark", umount/mount was sufficient to
restore uncached behavior.
Bob
are using, and the
controller type.
It seems likely that one or more of your disks have been barely working
since the time of initial installation. Even 20 MB/s is quite slow.
Use 'iostat -x 30' with an I/O load to see if one disk is much slower
than the others.
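Something like this; look for one device with a much higher svc_t
(service time) or %b (busy) than its peers:

   # Extended device statistics, sampled every 30 seconds:
   iostat -x 30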
Bob
would look just as bad. Regardless, the first step would be to
investigate 'sd5'. If 'sd4' is also a terrible performer, then
resilvering a disk replacement of 'sd5' may take a very long time.
Use 'iostat -xen' to obtain more information.
You're not
going to get this if any part of the log device is at the other side of a
WAN. So either add a mirror of log devices locally and not across the WAN,
or don't do it at all.
This depends on the nature of the WAN. The WAN latency may still be
relatively low as compared with drive latency.
the maximum number of files per directory all make a difference to how
much RAM you should have for good performance. If you have 200TB of
stored data, but only actually access 2GB of it at any one time, then
the caching requirements are not very high.
Bob
blind assumptions in the above. The
only good choice for ZIL is when you know with certainty, not from
assumptions based on third-party articles and blog postings. Otherwise
it is like assuming that if you jump through an open window there
will be firemen down below to catch you.
Bob
the
vendor has historically published reliable specification sheets.
This may not be the same as money in the bank, but it is better than
relying on thoughts from some blog posting.
Bob
depends on
plenty of already erased blocks) would stop once the spare space in
the SSD has been consumed.
Bob
en be overwritten.
Bob
adding this to your /etc/system file:
* Set device I/O maximum concurrency
*
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
set zfs:zfs_vdev_max_pending = 5
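Settings in /etc/system take effect at the next reboot. For trying
the value on a live system first, the Evil Tuning Guide's mdb method
can be used:

   # Set the tunable on the running kernel (immediate, not persistent):
   echo zfs_vdev_max_pending/W0t5 | mdb -kw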
Bob
supplying power to the
drive before the capacitor is sufficiently charged, and some circuitry
which shuts off the flow of energy back into the power supply when the
power supply shuts off (could be a silicon diode if you don't mind the
0.7 V drop).
Bob
is necessarily best for this. Perhaps the
load issued to these two disks contains more random access requests.
Bob
under
extremely ideal circumstances. It seems that toilet paper may be of much
more practical use than these specifications. In fact, I reject them
as being specifications at all.
The Apollo reentry vehicle was able to reach amazing speeds, but only
for a single use.
Bob
cache, even if it is just a 4K working buffer
representing one SSD erasure block.
Bob
Perhaps if you
were talking about a maximum ambient temperature specification or
maximum allowed elevation, then a maximum specification makes sense.
Perhaps the device is fine (I have no idea) but these posted
specifications are virtually useless.
Bob
early enough that it will
always be there when needed. The profiling application might need to
drive a disk for several hours (or a day) in order to fully understand
how it behaves. Remapped failed sectors would cause this micro-timing
to fail, but only for the remapped sectors.
Bob
traditional backup tools
like tar, cpio, etc…?
The whole stream will be rejected if a single bit is flipped. Tar and
cpio will happily barge on through the error.
Bob
s you are using, but I think that there are
recent updates to Solaris 10 and OpenSolaris which are supposed to
solve this problem (zfs blocking access to CPU by applications).
From Solaris 10 x86 (kernel 142901-09):
6586537 async zio taskqs can block out userland commands
Bob
performance from zfs using these drives. I
suggest getting rid of it and replacing it with a drive which still
uses standard 512-byte sectors internally. It will be like night and day.
Bob
te in each burst. I
think that (with multiple writers) the zfs pool will be "healthier"
and less fragmented if you can offer zfs more RAM and accept some
stalls during writing. There are always tradeoffs.
Bob
h may seem like more CPU but
could be related to PCI-E access, interrupts, or a controller
bottleneck.
Bob
Depending on your system, this might not be an issue, but it is
possible that there is an I/O threshold beyond which something
(probably hardware) causes a performance issue.
Bob
service a drive which has completely failed. Some arrays
may lose the LUN entirely and require that it be recreated via a
management interface rather than immediately making a new drive
available via the original LUN ID.
Bob
from raidz3 is
unlikely to be borne out in practice. Other potential failure modes
will completely drown out the on-paper reliability improvement
provided by raidz3.
Bob
program arranges to modify a byte in each page of allocated
memory, then there is a better chance of success (but not assured).
Expect system performance to suffer dramatically. SunOS 4 provided a
program named "chill" which performed this function.
Bob
, or a tree can fall
on the computer during a storm.
Bob
ld be able to
get a gigabit of traffic in both directions at once, but this depends
on the quality of your ethernet switch, ethernet adaptor card, device
driver, and capabilities of where the data is read and written to.
Bob
based on recent activity and
should dynamically tune prefetch based on that knowledge.
Bob
the
next transaction group. If the disk fails to sync its cache and
writes data out of order (data from multiple transaction groups), then
zfs loses consistency.
Bob
local-link level, and
with long delays. You can be sure that companies like cisco will be
(or are) selling FCoE hardware to compete with FC SANs. The intention
is that ethernet will put fibre channel out of business. We shall see
if history repeats itself.
Bob
release/patch level
they were at before.
Bob
the weeds although there are probably plenty of
weeds growing at the ranch.
Bob
RAM also help immensely.
Bob
n).
These rules of thumb are not terribly accurate. If performance is
important, then there is no substitute for actual testing.
Bob
the kernel and other caches use a large part
of the memory.
Don't forget that virtual memory pages may also come from
memory-mapped files in the filesystem. However, it seems that zfs is
effectively diminishing this.
Bob
very good at hiding existing text in
its user interface so people think nothing of including most/all of
the email they are replying to.
Bob
the CDDLd original ZFS implementation into the Linux
kernel.
+1
The issues are largely philosophical.
Bob
compatibility"
might actually mean.
Bob
ng these things since if it were all
actually true, then Linux, *BSD, and Solaris distributions could not
legally exist. Thankfully, only part of the above is true.
Bob
On Fri, 11 Jun 2010, Freddie Cash wrote:
On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn wrote:
On Fri, 11 Jun 2010, Freddie Cash wrote:
For the record, the following paragraph was incorrectly quoted by Bob. This
paragraph was originally
It would not have been incorrectly quoted
www.fcoe.com/
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-462176.html
http://www.brocade.com/products-solutions/solutions/connectivity/FCoE/index.page
http://www.emulex.com/products/converged-network-adapters.html
Bob
GPL license. GPLv3 was written over a span of quite a few
years, with many lawyers involved. Opinions/advice on the FSF/GNU web
site are now based on GPLv3 since it is the current GPL license.
Linux is locked into the GPLv2 license since Linus did not trust the
FSF.
Bob
highly unlikely. Also, that the zil is not
read back unless the system is improperly shut down.
Bob
from the original zfs design.
Bob
to link a computer containing
GPLed code to the Internet. I think I heard on usenet or a blog that
it was illegal to link GPLed code with non-GPLed code. The Internet
itself is obviously a derived work and is therefore subject to the
GPL.
Bob
response latency). Battery-backed RAM in the
adaptor card or storage array can do almost as well as the SSD as long
as the amount of data does not overrun the limited write cache.
Bob
controller acks need to be
ultimately delivered to the disk or else there WILL be data loss.
The RAID controller should not purge its own record until the disk
reports that it has flushed its cache. Once the RAID controller's
cache is full, then it should start stalling writes.
Bob
?
No. Normally you would not want to use an MLC SSD as a slog. MLC
SSDs wear out quicker than one would like under heavy repeated writes.
Over-provisioning the slog SSD storage size should help.
Bob
or thought.
More food!
The MLC drives seem to usually have more write latency than the SLC
drives.
Bob
The system should be stuffed with RAM first as long as
the budget can afford it.
Luckily, most servers experience mostly repeated reads.
Bob
help and result in more
vdevs in the pool.
Bob
to be lost. Data would only be recovered up to the first
point of loss, even though some newer data is still available on a
different SSD.
Bob
means) if you study how it works in sufficient detail. At
least that is what we have been told.
The slog does not do micro-striping, nano-striping, pico-striping, or
femto-striping (not even at the sub-bit level) but it does do
mega-striping.
Bob
around without
first exporting the pool is something which is best avoided.
Bob
is composed entirely of FLASH SSDs. This should help with the IOPS,
particularly when reading.
Bob
have suggested, perhaps you should try FreeBSD?
As long as the hardware supports 64-bits, I definitely second that
suggestion. FreeBSD is often severely underestimated by those who
have never used it.
Bob
You specify in the Bacula config file how long you want to keep
the tapes around. So it really comes down to your use case.
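A hypothetical sketch of the relevant Pool resource in
bacula-dir.conf (directive values are just examples):

   Pool {
     Name = TapePool
     Pool Type = Backup
     Volume Retention = 30 days   # how long to keep the tapes around
     Recycle = yes                # reuse volumes once retention expires
   }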
On Wed, 30 Jun 2010, Fred Liu wrote:
Any duration limit on the supercap? How long can it sustain the data?
A supercap on a SSD drive only needs to sustain the data until it has
been saved (perhaps 10 milliseconds). It is different than a RAID
array battery.
Bob
y being in cache. A quite busy
system can still report very little via 'zpool iostat' if it has
enough RAM to cache the requested data.
Bob
tremendously smaller than today's zfs
storage pools, and they might even be on just one disk. Regardless,
only someone with severely failing memory might think that "old-time
filesystems" are somehow less failure prone than a zfs storage pool.
The "good old days"
used for synchronous writes, and a
local file copy is not normally going to use synchronous writes.
Also, even if the slog was used, it gets emptied pretty quickly.
Bob
compelled
in any way to offer a license for use of the patent. Without a patent
license, shipping products can be stopped dead in their tracks.
Bob
not quite necessary to fork
and create a truly independent distribution just yet.
OpenSolaris is currently not left any more in the lurch than Solaris
10 is since paying Solaris 10 users are still wondering what happened
to the U9 release which Sun had already promised.
Bob
ally leaner but FreeBSD plus zfs is not
(yet) as memory efficient as Solaris. Solaris and zfs do the Vulcan
mind-meld when it comes to memory but FreeBSD does not.
Bob