to
synchronously. The loss of IOPS, and the risk of slowdowns from
imperfectly-matched hardware, mean that putting too many devices in a
raidz1/raidz2 vdev increases the risk of poor performance.
Bob
pools on one system, too many
filesystems in a pool, and too many disks in one raidz2 vdev.
Bob
. You could be encountering a bug which has already been fixed.
Bob
just bad luck.
As I recall, Albert Chin-A-Young posted about a pool failure where
many devices in the same raidz2 vdev spontaneously failed somehow (in
his case the whole pool was lost). He is using different hardware but
this looks somewhat similar.
Bob
problem, or a bad batch of disks.
Since you are using only raidz1, it is wise to scrub periodically in
order to uncover any failing data before it might be needed to support
a resilver.
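For example (illustrative pool name and schedule), a weekly scrub can be
scheduled from root's crontab:
  0 3 * * 0 /usr/sbin/zpool scrub tank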
Bob
advantage of
concurrency.
Bob
operations which are timing out directory lookups, 'stat',
or 'open' calls?
If files are also being created at a rapid pace, the reader may be
blocked from accessing the directory while it is updated.
Bob
On Fri, 20 Nov 2009, Richard Elling wrote:
> Buy a large, read-optimized SSD (or several) and add it as a cache device :-)
But first install as much RAM as the machine will accept. :-)
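For reference, adding an SSD as an L2ARC cache device is a one-liner
(hypothetical pool and device names):
  zpool add tank cache c2t0d0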
Bob
. With the size of your data, it
seems inconvenient to restart the pool from scratch.
Bob
and
decisions in zfs tend to change over time based on bug reports and the
zfs implementor's accumulated experience.
Bob
, which is deadly to
zfs integrity. I am using LaCie d2 Quadra drives and have not
observed any zfs issues at all. However, the external power supplies
on these drives tend to fail so I am not sure if I would recommend
them (my solution was to buy a box of spare power supplies).
Bob
the 2x500GB disks into a larger device, which could then be
used as a single device by zfs.
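One way to build such a device, assuming Solaris Volume Manager is the tool in
question (device names are hypothetical; the elided text may have had another
method in mind):
  metainit d10 2 1 c1t2d0s0 1 c1t3d0s0
which creates a roughly 1TB concatenation d10 that zfs can then use as a single
device.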
Bob
and
implementation are quite solid, it seems that dedupe should increase
data reliability.
Bob
, it seems likely that
this system is both paging badly and failing to cache enough data to
operate efficiently. Zfs is re-reading from disk data which would
normally be cached.
The simple solution is to install a lot more RAM. 2GB is a good
starting point.
Bob
to simultaneous writes (to each side of the mirror) rather than reads.
If it is using parallel SCSI, perhaps there is a problem with the SCSI
bus termination or a bad cable?
Bob
to try
exotic technologies.
Does PATA daisy-chain disks onto the same cable and controller?
If the PATA bus and drives are becoming overwhelmed, it may help to
tune zfs:zfs_vdev_max_pending down to a very small value in the
kernel.
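For example, a line like the following in /etc/system (the value is
illustrative and a reboot is required) limits how many I/Os zfs keeps queued
to each device:
  set zfs:zfs_vdev_max_pending = 1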
Bob
with significant phrases
like "tritium core". Some secrets are very great and should not be
trusted to a marketing department.
Bob
on how fast the
device can erase blocks. Some server environments will write to the
device at close to 100% utilization most of the time, which is
especially hard on relatively slow devices like the X25-E.
Bob
offered by TRIM.
Bob
may be subject to freeze-spray attack if the whole computer is
compromised while it is still running. Otherwise use a sledge-hammer
followed by incineration.
Bob
to trust assurances from a
product vendor that their product never leaves behind copies of data.
Bob
reads will be read from
different disks in a mirror pair. Sometimes sequential reads may be
from the same side of the mirror.
Bob
to be
automatically repaired and so there was no data loss.
Metadata always has a redundant copy, and if you are using something
like raidz2, then your data still has a redundant copy while
resilvering a disk.
Bob
of
total data.
Bob
environment, or perhaps FreeBSD offers a different mechanism
to mirror the root partition which is also bootable.
If maximizing disk space is important to you, you don't have to use
zfs mirroring, but most people here would recommend it.
Bob
is the
most common problem. Using the time() system call is no longer good
enough if multiple processes are somehow involved. It is useful to
include additional information such as PID and microseconds. Reading a
few characters from /dev/random to create the seed is even better.
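For example, this (illustrative) shell one-liner pulls four bytes from
/dev/random and prints them as an unsigned integer suitable for use as a seed:
  od -A n -t u4 -N 4 /dev/random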
Bob
I clear / recover from the error?
It seems likely that this command will clear it:
  zpool clear zpool01
and then you can pretend it did not happen. Definitely do
  zpool scrub zpool01
to see if there is any other decay.
Bob
happen maybe every 15-30 seconds.
Bob
of where that 8504 KB
of data comes from, and it is due to a daemon I have running. On
another system (which still uses UFS root, but with most data in a ZFS
pool), only a few tiny writes (biggest 810KB).
Bob
, but with the
ability to raise the priority if user processes continue to hog all
CPU. This means that it requires more than a simple zfs fix.
Bob
ever run a ZFS pool for a
long duration of time at very close to full since it will become
excessively fragmented.
Bob
at a reasonable price-point.
Everything needed is already in OpenSolaris. It is not necessary to
depend on Sun for everything.
Bob
will run on such CPUs so their respective ports of
ZFS would be available. It would be useful if OpenSolaris was ported
to ARM.
Bob
(with -pdvum options) seems like the best way to copy
files at the moment.
Bob
OpenSolaris where the end user gets to
experiment with hardware configurations and tunings to get the best
performance (but might not achieve it).
Fishworks engineers are even known to holler at the drives as part
of the rigorous product testing.
Bob
be disabled.
It is naive to think that no one else will ever access your system and
appreciate what they can find in a second, when otherwise it might
have taken hours or days.
Bob
be a problem
with the drive, or the OS if it is not issuing the cache flush
request.
> Is solaris incapable of issuing a SATA command FLUSH CACHE EXT?
It issues one for each update to the intent log.
Bob
On Sat, 24 Oct 2009, Bob Friesenhahn wrote:
>> Is solaris incapable of issuing a SATA command FLUSH CACHE EXT?
> It issues one for each update to the intent log.
I should mention that FLASH SSDs without a capacitor/battery-backed
cache flush (like the X25-E) are likely to get burned out pretty
, and consuming them
at a 5X elevated rate with a 5-disk raidz2. It seems that an SSD for
the intent log would help quite a lot for this situation so that zfs
can aggregate the writes. If the typical writes are small, it would
also help to reduce the filesystem blocksize to 8K.
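For reference (hypothetical pool/filesystem names), the blocksize is a
per-filesystem property and only affects newly-written files:
  zfs set recordsize=8k tank/data
and a dedicated intent-log SSD is added with:
  zpool add tank log c3t0d0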
Bob
which has no linkages to other code,
yet can still be successfully loaded and used. In this case it seems
that the module could be loaded into the Linux kernel without itself
being distributed under GPL terms.
Bob
the data block, or to reconstruct a disk.
Bob
involved. :-)
There are a few vendors who have managed to distribute proprietary
drivers as binaries for Linux. Nvidia is one such vendor.
Bob
On Thu, 22 Oct 2009, Marc Bevand wrote:
Bob Friesenhahn bfriesen at simple.dallas.tx.us writes:
For random write I/O, caching improves I/O latency, not sustained I/O
throughput (which is what random write IOPS usually refer to). So Intel can't
cheat with caching. However they can cheat
On Wed, 21 Oct 2009, Marc Bevand wrote:
> Bob Friesenhahn bfriesen at simple.dallas.tx.us writes:
>> [...]
>> X25-E's write cache is volatile), the X25-E has been found to offer a
>> bit more than 1000 write IOPS.
> I think this is incorrect. On paper, the X25-E offers 3300 random write
> 4kB IOPS
not have much to
do with its steady-state performance since the peak performance is
often defined by the hard drive cache size and the interface type and
clock rate.
Bob
requirements. 1K non-volatile write IOPS vs 84k non-volatile write
IOPS. Seems like night and day to me (and I am sure that Sun prices
accordingly).
The only thing I agree with is the need to perform real world testing
for the intended application.
Bob
can hear about their experiences.
Bob
disks are very
full, then more traffic may be sent to the new disks, which results in
less benefit.
Bob
to administer and repair.
This is why there is indeed such a thing as too much redundancy.
Bob
1000 write IOPS. With 16GB of RAM, you should not need
an L2ARC for a backup-to-disk target (a write-mostly application).
The ZFS ARC will be able to expand to 14GB or so, which is quite a lot
of read caching already.
Bob
of RAM later?
The write performance of the X25-E is likely to be the bottleneck for a
write-mostly storage server if the storage server has excellent
network connectivity.
Bob
are satisfied in 50us. Limitations of
existing software stacks are likely reasons why Sun is designing
hardware with more device interfaces and more independent devices.
Bob
firmware? Certain products (e.g.
particular Seagate models) are known to spontaneously expire due to
firmware bugs.
Bob
, screw up the data layout, and lead to poor
write performance.
The primarycache=none option seems to simply disable the data cache,
which means that written data also does not remain in the ARC. That
does not mean that written data is not buffered before it is written.
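For reference (hypothetical dataset name), the property is set per filesystem
and also accepts the values 'all' and 'metadata':
  zfs set primarycache=none tank/backup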
Bob
as
well for operations like writes and file/directory deletes as ext4 or
XFS. The ext4 option is to be avoided for obvious reasons.
Bob
to be in /any/ cache? If this isn't
The MMU page cache (memory) is the common interface to swap so swap
is often cached. It does not make sense to cache it twice though.
Bob
to do a full 'zpool scrub' before voluntarily replacing the
suspect drive in case there is some undetected data error on one of
the other drives which can still be corrected.
Bob
be able to intelligently schedule I/O for multiple drives, so
performance is reduced.
Bob
update. However, new firmware may not
provide the same interpretation of the values.
Bob
drives as active on the
other controller, and the drives are individually exported with a LUN
per drive. I used CAM to do that. MPXIO sees the changes and maps
half of the paths down each FC link, for more performance than one FC
link offers.
Bob
inventors. As with most
things, it is not a black/white issue and there are plenty of valid
reasons to put zfs on a big-LUN SAN device. It does not necessarily
end badly.
Bob
in a different chassis.
Bob
really are getting something close to 10Gbit/s of bandwidth. It is
quite possible that you have a broken network.
Bob
, with
practically no reads.
Bob
by Richard Elling. This tool will tell you how much and what
type of synchronous write traffic you have.
It is currently difficult to remove slog devices, so it is safer to add
them only after you have determined that they will help rather than
reduce performance.
Bob
(including
patch version if using Solaris 10) can make a big difference.
Bob
Solaris 10 U4, maybe you are using the dinosaur
version of fletcher2?
Bob
there is usually no penalty for enabling fletcher4. It
does seem like there could be some CPU impact for synchronous writes
from fletcher4 since it is more likely that the data is in cache for a
synchronous write.
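Enabling it is a one-line property change (hypothetical dataset name) and only
affects blocks written afterwards:
  zfs set checksum=fletcher4 tank/data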
Bob
, there is a
point where disk storage size becomes unmanageable. This is the point
where we should transition from 3.5" disks to 2.5" disks with
smaller storage sizes. I see that 2.5" disks are already up to
500GB.
Bob
to silence is to keep the
equipment cool enough that its fans run on low speed. If the doors
were replaced with solid core doors, then there would be a lot more
silence.
Bob
be
better. Also, make sure that you have plenty of RAM installed.
What disk configuration (number of disks, and RAID topology) is the
NetApp using?
Bob
.
Hanging everything via the nylon straps that you will find in the
plumbing/AC section of the hardware store is by far the best way to
eliminate transmission of vibration.
Bob
. For
example, the duplicate metadata copy might be corrupt but the problem
is not detected since it did not happen to be used.
Bob
On Mon, 28 Sep 2009, Bob Friesenhahn wrote:
> This should work but it does not verify the redundant metadata. For example,
> the duplicate metadata copy might be corrupt but the problem is not detected
> since it did not happen to be used.
I am finding that your tar incantation is reading hardly
active
I/O. Of course this is not a green energy efficient solution.
Bob
. But when reading one-at-a-time from individual 5 or 8MB files,
the data rate is much less (around 130MB/second).
I am using Solaris 10. OpenSolaris performance seems to be better
than Solaris 10.
Bob
379809 385453 549364 553948
67108864 256 380286 377397 551060 550414
67108864 512 378225 385588 550131 557150
It seems like every time I run the benchmark, the numbers have
improved.
Bob
be a fun experiment.
Others have done similar experiments with considerable success.
Bob
pools once a week.
Bob
in VirtualBox's local filesystem access to the host's files.
Bob
write bursts of at least double that or
else it will not be helping bulk-write performance.
Bob
for writes.
Bob
. Previously we were advised that the slog is basically a
log of uncommitted system calls so the size of the data chunks written
to the slog should be similar to the data sizes in the system calls.
Bob
is to turn this single
drive into a mirror. It seems that this sort of human error occurs
pretty often and there is not yet a way to properly fix it.
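For reference, converting a lone drive into a mirror is done with attach
rather than add (hypothetical pool and device names):
  zpool attach tank c1t0d0 c1t1d0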
Bob
block
size. The /proc/mounts file for my Debian install shows that 1048576
is being used. This is quite large and perhaps a smaller value would
help. If you are willing to accept the risk, using the Linux 'async'
mount option may make things seem better.
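A sketch of a Linux-side mount with a smaller block size and the async option
(server, export path, and values are hypothetical and need testing):
  mount -t nfs -o rsize=32768,wsize=32768,async server:/export/data /mnt/data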
Bob
is slow copying of many small files, this
COMMIT approach does not help very much since very little data is sent
per file and most time is spent creating directories and files.
Bob
.
This is correct. The same applies to blocksize and compression.
> I need to corroborate this understanding. Could someone please
> point me to a document that states this? I have searched and
> searched and cannot find this.
Sorry, I am not aware of a document and don't have time to look.
Bob
about your SAN device. If your SAN
device fails, the whole ZFS pool may be lost, and if the failure is
temporary, then the pool will be down until the SAN is restored.
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
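A minimal sketch of that layout, assuming one LUN from each SAN device
(hypothetical device names):
  zpool create tank mirror c4t0d0 c5t0d0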
Bob
checksums.
This only helps for block-level corruption. It does not help much at
all if a whole LUN goes away. It seems best for single disk rpools.
Bob
, and then recreate it with your zfs send file.
Bob
will see far more complaints about
raidz taking a long time.
Resilver of mirrors will surely do better for large pools which
continue to be used during the resilvering.
Bob
, mirrors are known to be more resilient to temporary path
failures.
Bob
faces kernel panics with recent U7+ kernel
patches (on AMD64 and SPARC) related to PCI bus upset, I expect that
Sun will take the time to make sure that the implementation is as good
as it can be and is thoroughly tested before release.
Bob
be a real hardware
problem.
Regardless, when the integrity of our data is involved, I prefer to
wait for more testing rather than to potentially have to recover the
pool from backup.
Bob