issues.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
at are the zilstat.ksh and
arc_summary.pl scripts which you should find mentioned in the list
archives.
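For example, once downloaded from the list archives, the scripts can
be run directly. This is a minimal sketch; the invocation arguments
and script locations are assumptions based on common usage:

  # sample ZIL activity once per second, ten times
  ksh ./zilstat.ksh 1 10
  # print a one-shot summary of ARC size and hit rates
  perl ./arc_summary.pl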
Bob
/messages?
Use
/usr/sbin/fmadm faulty
to see any existing fault reports. Use
/usr/sbin/fmdump
to dump error reports. Use
/usr/sbin/fmdump -f
to do a sort of 'tail' on error reports as they arrive.
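Strung together, a typical checking session might look like this (a
sketch; adjust to taste):

  /usr/sbin/fmadm faulty    # list any existing fault reports
  /usr/sbin/fmdump          # dump the log of reports so far
  /usr/sbin/fmdump -f       # 'tail' new reports as they arrive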
Bob
, deleting the file does
not cause its blocks to be freed if a snapshot still references them.
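A quick way to see this effect (a sketch, assuming a pool named
'tank' with a filesystem 'fs'):

  zfs snapshot tank/fs@before   # snapshot references the file's blocks
  rm /tank/fs/bigfile           # gone from the live filesystem only
  zfs list -o name,used,refer tank/fs tank/fs@before
  # the space shows up as 'used' by the snapshot instead of being freed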
Bob
On Sun, 24 Oct 2010, Stephan Budach wrote:
I believe that the lower ones were files in snapshots that have been deleted,
but why are they still
referenced like this?
Have you used 'zpool clear' to clear the errors in the pool?
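If not, something like the following clears the error counters and
then shows whether the complaints come back (the pool name is an
example):

  zpool clear tank
  zpool status -v tank    # -v lists any files still affected by errors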
Bob
with sufficiently large RAM and fast I/O.
Bob
and tuned array could support more disks per
vdev.
Bob
, it would
make understanding the problem a bit more complex. A 1:1 mapping
between zfs devices and actual hardware makes things much easier to
manage.
Bob
a gross
underestimate of the time required. The time required depends quite a
lot on the performance of the disks involved, on whether the pool was
made more complex by using snapshots, and on whether the original pool
was ever allowed to become excessively full (and thereby fragmented).
Bob
On Fri, 15 Oct 2010, Gerry Bragg wrote:
Is it possible for a read to bypass the write cache and fetch from
disk before the flush of the cache to disk occurs?
No. Zfs is fully coherent in memory. On a server, most accesses are
to the data in memory rather than from disk.
Bob
and the checksum adds to the latency. The risk of block corruption is
increased. 128K is already quite large for a block.
Bob
written inefficiently?
Bob
to write sequentially.
Otherwise it would offer much less benefit. Maybe this random-write
issue with SandForce would not be a problem?
Bob
On Sat, 9 Oct 2010, Richard Elling wrote:
On Oct 8, 2010, at 10:01 AM, Bob Friesenhahn wrote:
Regardless, nothing beats raidz3 based on computable statistics.
Well, no, not really. It all depends on the number of sets and the MTTR.
Well, ok. I should have appended "except for 3-way mirrors".
algorithm is closely aligned to the zfs data storage
model so it is unlikely to dramatically improve.
Bob
the
system or the hardware, then they are more likely to do something
wrong which damages the data.
It also does not account for an OS kernel which caches quite a lot of
data in memory (relying on ECC for reliability), and which may have
bugs.
Bob
scrub in
the plan. The good news is that mirrors scrub quickly, with far fewer
I/Os and less system impact than raidz?.
Regardless, nothing beats raidz3 based on computable statistics.
Bob
to that scenario.
Bob
block will be omitted from the
output.
The copy could be to a new zvol in the same pool (assuming you trust
the disks) or you could pipe it over ssh to another 'dd'.
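A rough sketch of the ssh variant, assuming Solaris zvol device paths
and example pool/volume names:

  # conv=noerror continues past bad sectors; the unreadable blocks are
  # omitted from the output unless you also add ',sync' to pad them
  dd if=/dev/zvol/rdsk/tank/vol1 bs=128k conv=noerror | \
      ssh otherhost 'dd of=/dev/zvol/rdsk/tank2/vol1 bs=128k'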
Bob
and
hope for the best. Expect the replacement to take a very long time.
It is wise to restart the pool from scratch with multiple vdevs
comprised of fewer devices.
Bob
, the zfs
blocks are already broken up into smaller chunks, using a smaller
alignment than the zfs record size. For zfs send, the data is
uncompressed to full records prior to sending.
Bob
. Oracle's handshake
agreement with NetApp does not in any way shield other zfs commercial
users from a patent lawsuit from NetApp.
Bob
... but ultimately those conversations
are between Coraid and NetApp.
There should be little doubt that NetApp's goal was to make money by
suing Sun. Nexenta does not have enough income/assets to make a risky
lawsuit worthwhile.
Bob
that you are misusing the
term NVRAM.
Bob
is considered volatile and data loss is not possible should it fail.
What gets scrubbed in the slog? The slog contains transient
data which exists for only seconds at a time. The slog is quite
likely to be empty at any given point in time.
Bob
oriented languages. Most of these people should not be
programming at all.
Zfs could have been implemented in C++, but it would not be as
friendly in a kernel which is already implemented in C.
Bob
GPL if it was developed in such a way that it
specifically depends on GPL components.
Bob
in the same program. It is a mindset
issue with the Linux developers rather than a legal one.
If ZFS were not tied to a big greedy controlling company then the Linux
kernel developers would be more likely to change their minds.
Bob
signed a contract assigning copyrights to Sun, Oracle may be forced to
distribute source updates to zfs if it has been 'tainted' by
contributions by outside developers.
Bob
to be able to afford support.
Bob
compared to current Intel and AMD CPUs. There is
little indication that Oracle will change this situation.
Bob
this and may feel betrayed if
Oracle (again) does not do what it said it was going to do. Betrayed
engineers may jump ship for the competition. High caliber engineers
are very difficult to obtain.
Bob
trade-offs are now often resulting in larger capacity
drives with reduced performance.
Bob
now
Odd. What type of applications are you running on this system? Are
applications running on the server competing with client accesses?
Bob
idea to use rsync v3. Previous versions had
to recurse the whole tree on both sides (storing what was
learned in memory) before doing anything.
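With rsync 3.x the incremental file-list behavior is the default, so
an ordinary invocation benefits (host and paths are examples):

  # -a preserves permissions/times, -H preserves hard links
  rsync -aH /tank/data/ backuphost:/tank/data/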
Bob
by an NVRAM ZIL.
Smart software developers will access the source code via NFS but
write the object files to local client disk.
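In practice that just means pointing the compiler's output at a local
path (an illustrative sketch; the NFS automounter path is an
assumption):

  # read the source over NFS, write the object file to local disk
  gcc -c /net/fileserver/export/src/foo.c -o /var/tmp/obj/foo.o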
Bob
two zfs
zvols (volumes) which are hopefully in two different raidz-based zfs
pools, and then create a new zfs pool using those two devices. The
end result would be three zfs pools. It is probably not a wise idea
to use this layered approach.
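For the record, the layering would look something like this (pool
names and sizes are examples; again, not recommended):

  zfs create -V 100g poolA/vol    # zvol in the first raidz pool
  zfs create -V 100g poolB/vol    # zvol in the second raidz pool
  zpool create layered /dev/zvol/dsk/poolA/vol /dev/zvol/dsk/poolB/vol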
Bob
. Those that do, have had a
I am still using applications built under Solaris 2.1 in 1993. :-)
Bob
.
There is also no GPL requirement for the scripts to work for anyone
other than the person who wrote them. The only requirement is that
they are what is normally used for development.
Bob
the dominant
factor.
Bob
, then it is possible that hard
errors could be counted. FMA usually waits until several errors have
been reported over a period of time before reporting a fault.
Bob
necessary to fork
and create a truly independent distribution just yet.
OpenSolaris is currently not left any more in the lurch than Solaris
10 is, since paying Solaris 10 users are still wondering what happened
to the U9 release which Sun had already promised.
Bob
is not compelled
in any way to offer a license for use of the patent. Without a patent
license, shipping products can be stopped dead in their tracks.
Bob
for synchronous writes, and a
local file copy is not normally going to use synchronous writes.
Also, even if the slog was used, it gets emptied pretty quickly.
Bob
than today's zfs
storage pools, and they might even be on just one disk. Regardless,
only someone with severely failing memory might think that old-time
filesystems are somehow less failure prone than a zfs storage pool.
The "good old days" do not apply to filesystems.
Bob
. A quite busy
system can still report very little via 'zpool iostat' if it has
enough RAM to cache the requested data.
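So when sampling, watch over an interval rather than trusting a
single reading (the pool name is an example):

  # report per-vdev I/O every 5 seconds; a mostly-cached workload
  # shows little disk activity here even when the system is busy
  zpool iostat -v tank 5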
Bob
On Wed, 30 Jun 2010, Fred Liu wrote:
Any duration limit on the supercap? How long can it sustain the data?
A supercap on an SSD drive only needs to sustain the data until it has
been saved (perhaps 10 milliseconds). It is different from a RAID
array battery.
Bob
have suggested, perhaps you should try FreeBSD?
As long as the hardware supports 64 bits, I definitely second that
suggestion. FreeBSD is often severely underestimated by those who
have never used it.
Bob
is comprised entirely of FLASH SSDs. This should help with the IOPS,
particularly when reading.
Bob
around without
first exporting the pool is something which is best avoided.
Bob
be stuffed with RAM first as long as
the budget can afford it.
Luckily, most servers experience mostly repeated reads.
Bob
and result in more
vdevs in the pool.
Bob
. Data would only be recovered up to the first
point of loss, even though some newer data is still available on a
different SSD.
Bob
detail. At
least that is what we have been told.
The slog does not do micro-striping, nano-striping, pico-striping, or
femto-striping (not even at the sub-bit level) but it does do
mega-striping.
Bob
.
More food!
The MLC drives seem to usually have more write latency than the SLC
drives.
Bob
?
No. Normally you would not want to use an MLC SSD as a slog. The MLC
SSDs wear out quicker than one would like under heavy repeated writes.
Over-provisioning the slog SSD storage size should help.
Bob
acks need to be
ultimately delivered to the disk or else there WILL be data loss.
The RAID controller should not purge its own record until the disk
reports that it has flushed its cache. Once the RAID controller's
cache is full, it should start stalling writes.
Bob
to link a computer containing
GPLed code to the Internet. I think I heard on usenet or a blog that
it was illegal to link GPLed code with non-GPLed code. The Internet
itself is obviously a derived work and is therefore subject to the
GPL.
Bob
response latency). Battery-backed RAM in the
adaptor card or storage array can do almost as well as the SSD as long
as the amount of data does not overrun the limited write cache.
Bob
) is highly unlikely. Also, note that the zil is not
read back unless the system is improperly shut down.
Bob
the original zfs design.
Bob
the
FSF.
Bob
the CDDL'd original ZFS implementation into the Linux
kernel.
+1
The issues are largely philosophical.
Bob
mean.
Bob
actually true, then Linux, *BSD, and Solaris distributions could not
legally exist. Thankfully, only part of the above is true.
Bob
On Fri, 11 Jun 2010, Freddie Cash wrote:
On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 11 Jun 2010, Freddie Cash wrote:
For the record, the following paragraph was incorrectly quoted by Bob. This
paragraph was originally
It would
/collateral/switches/ps9441/ps9670/white_paper_c11-462176.html
http://www.brocade.com/products-solutions/solutions/connectivity/FCoE/index.page
http://www.emulex.com/products/converged-network-adapters.html
Bob
RAM also help immensely.
Bob
are not terribly accurate. If performance is
important, then there is no substitute for actual testing.
Bob
and other caches use a large part
of the memory.
Don't forget that virtual memory pages may also come from
memory-mapped files from the filesystem. However, it seems that zfs is
effectively diminishing this.
Bob
is very good at hiding existing text in
its user interface so people think nothing of including most/all of
the email they are replying to.
Bob
plenty of
weeds growing at the ranch.
Bob
-link level, and
with long delays. You can be sure that companies like Cisco will be
(or are) selling FCoE hardware to compete with FC SANs. The intention
is that ethernet will put fibre channel out of business. We shall see
if history repeats itself.
Bob
release/patch level
they were at before.
Bob
transaction group. If the disk fails to sync its cache and
writes data out of order (data from multiple transaction groups), then
zfs loses consistency.
Bob
based on recent activity and
should dynamically tune prefetch based on that knowledge.
Bob
be able to
get a gigabit of traffic in both directions at once, but this depends
on the quality of your ethernet switch, ethernet adaptor card, device
driver, and the capabilities of whatever the data is read from and
written to.
Bob
raidz3 is
unlikely to be borne out in practice. Other potential failure modes
will completely drown out the on-paper reliability improvement
provided by raidz3.
Bob
in each page of allocated
memory, then there is a better chance of success (but not assured).
Expect system performance to suffer dramatically. SunOS 4 provided a
program named chill which performed this function.
Bob
on the computer during a storm.
Bob
on your system, this might not be an issue, but it is
possible that there is an I/O threshold beyond which something
(probably hardware) causes a performance issue.
Bob
and service a drive which has completely failed. Some arrays
may lose the LUN entirely and require that it be recreated via a
management interface rather than immediately making a new drive
available via the original LUN ID.
Bob
are using, but I think that there are
recent updates to Solaris 10 and OpenSolaris which are supposed to
solve this problem (zfs blocking access to CPU by applications).
From Solaris 10 x86 (kernel 142901-09):
6586537 async zio taskqs can block out userland commands
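On Solaris 10 you can check whether that patch is installed with
something like (a sketch):

  showrev -p | grep 142901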
Bob
performance from zfs using these drives. I
suggest getting rid of it and replacing it with a drive which still
uses standard 512-byte sectors internally. It will be like night and day.
Bob
burst. I
think that (with multiple writers) the zfs pool will be healthier
and less fragmented if you can offer zfs more RAM and accept some
stalls during writing. There are always tradeoffs.
Bob
but
could be related to PCI-E access, interrupts, or a controller
bottleneck.
Bob
traditional backup tools
like tar, cpio, etc…?
The whole stream will be rejected if a single bit is flipped. Tar and
cpio will happily barge on through the error.
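That end-to-end stream checksum is why a send/receive pipeline either
succeeds completely or fails outright (names are examples):

  # the receive aborts if any bit of the stream was corrupted
  zfs send tank/fs@today | ssh backuphost zfs receive backup/fs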
Bob
enough that it will
always be there when needed. The profiling application might need to
drive a disk for several hours (or a day) in order to fully understand
how it behaves. Remapped failed sectors would cause this micro-timing
to fail, but only for the remapped sectors.
Bob
circumstances. It seems that toilet paper may be of much
more practical use than these specifications. In fact, I reject them
as being specifications at all.
The Apollo reentry vehicle was able to reach amazing speeds, but only
for a single use.
Bob
if you
were talking about a maximum ambient temperature specification or
maximum allowed elevation, then a maximum specification makes sense.
Perhaps the device is fine (I have no idea) but these posted
specifications are virtually useless.
Bob
.
This may not be the same as money in the bank, but it is better than
relying on thoughts from some blog posting.
Bob
on
plenty of already erased blocks) would stop once the spare space in
the SSD has been consumed.
Bob
be overwritten.
Bob
adding this to your /etc/system file:
* Set device I/O maximum concurrency
*
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
set zfs:zfs_vdev_max_pending = 5
Bob
power to the
drive before the capacitor is sufficiently charged, and some circuitry
which shuts off the flow of energy back into the power supply when the
power supply shuts off (could be a silicon diode if you don't mind the
0.7 V drop).
Bob
best for this. Perhaps the
load issued to these two disks contains more random access requests.
Bob