to work quite
well. What have you heard is wrong with it?
Bob
disk to a raidz vdev, but you can add additional vdevs
to your pool (typically requires more than one disk) in order to
expand the pool size.
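For example, a minimal sketch (pool and device names are hypothetical):
% zpool add tank raidz c2t0d0 c2t1d0 c2t2d0   # add a whole new raidz vdev
% zpool list tank                             # capacity grows by the new vdev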
Bob
and write more data than absolutely required so that zfs does not
need to read an existing block in order to update it. This also
explains why the L2ARC can be so valuable, if the data then fits in
the ARC.
Bob
needs to send the data. It won't read a redundant copy if it does not
have to. It won't traverse metadata that it does not have to. A
scrub reads/verifies all data and metadata.
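For example, a sketch with a hypothetical pool name:
% zpool scrub tank
% zpool status -v tank   # shows scrub progress and any errors found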
Bob
sufficiently distant to protect against any local disaster. If there
is a disaster in the local data center, that system could be
immediately brought online (assuming adequate connectivity), or that
system could be loaded on a truck for overnight delivery as a
replacement to the data center.
Bob
/messages containing the trace.
Is there any point in submitting a bug report?
I seem to recall that you are not using ECC memory. If so, maybe the
panic is a good thing.
Bob
I use zfs_arc_max. The reason is that this system tends to run
applications for a short period of time which require quite a lot of
memory but also do a lot of disk I/O. It is useful to hold some
memory in reserve.
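As an illustration only (the 4 GB cap value here is hypothetical), the
limit is set in /etc/system and takes effect after a reboot:
* /etc/system: cap the ZFS ARC at 4 GB to hold memory in reserve
set zfs:zfs_arc_max = 0x100000000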
Bob
StorageTek 2540 here.
I agree with Ray Van Dolson that the evidence supplied thus far points
to an issue with the SSD. Perhaps the system is noticing a problem
and is continually resetting it. Check for messages in
/var/adm/messages.
Bob
somewhere else in the kernel.
Bob
transitive protection offered by vendor
"support" are interesting, I will be glad to meet you in the
unemployment line, and then we can share some coffee and discuss the
good old days.
Bob
which does not attempt to strictly balance the reads.
This does provide more performance than one disk, but not twice the
performance.
Is it even possible to do a raid 0+1?
No.
Bob
that does not make them right.
At some point, using the wrong terminology becomes foolish and
counterproductive.
Striping and load-share seem quite different to me. The difference is
immediately apparent when watching the drive activity LEDs.
Bob
In other words, I think that you are making a wise choice. :-)
Bob
that there are enough working drives remaining to keep up with RMAed
units.
Be sure to mark any failed drive using a sledgehammer so that you
don't accidentally use it again.
Bob
Why mirrors instead of something like RAID-Z2 / RAID-Z3?
Because raidz3 only supports triple redundancy, but mirrors can
support much more.
And how many drives do you (recommend to) use within each mirror vdev?
Ten for this model of drive.
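A minimal sketch of a pool built from mirror vdevs (device names are
hypothetical):
% zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
Each additional disk attached to a mirror vdev adds another full copy
of that vdev's data.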
Bob
ed by
design.
Bob
or does it actually exist? I am
using s10_u8.
The Solaris 10 automounter should handle this for you:
% cat /etc/auto_home
# Home directory map for automounter
#
#+auto_home
* myserver:/export/home/&
Notice that the referenced path is subordinate to the exported zfs
filesystem.
Bob
The filesystems are NFS exported due to the inheritance of zfs
properties from their parent directory. The property is only set in
one place.
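A minimal sketch (dataset names are hypothetical):
% zfs set sharenfs=on tank/home        # set once on the parent
% zfs get -r sharenfs tank/home        # children report it as inherited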
Bob
Anyone who has played with audio frequency sweeps and a large
subwoofer soon becomes familiar with resonance and that the lower
frequencies often cause more problems than the higher ones.
Bob
summary for our edification?
Thanks,
Bob
algorithm does
consume noticeable amounts of CPU, particularly since the checksums
are computed when a transaction group is saved.
Bob
mechanism can synchronise its data to disk before requesting a
snapshot.
Bob
ences thus far is that if you pay for a Sun service contract,
then you should definitely pay extra for Sun branded parts.
Hopefully Oracle will do better than Sun at explaining the benefits
and services provided by a service contract.
Bob
specially designed to support it, whereas iSCSI is a TCP-based
protocol. FCoE is basically the Fibre Channel "SAN" protocol over
Ethernet.
Bob
built open standards to solve it, is out there desperately trying to
push some baloney called Etherband or something because all you bank
admins are too daft to buy anything that does not have Ether in the
name. :(
filesystem, which stores a cache of those files on the local system.
Clearcase instruments access to its versioning filesystem so it knows
all of the actions which resulted in a built object. This means that
there are two places (server and client) where zfs may be involved.
Bob
single disk. For random access, the stripe performance cannot be
faster than the slowest disk, though.
Bob
our data.
The X25-M is about as valuable as a paper weight for use as a zfs
slog. Toilet paper would be a step up.
Bob
umber of people have verified this for themselves and
posted results. Even the X25-E has been shown to lose some
transactions.
Bob
Regardless, for zfs, memory is more important than raw CPU
performance.
Bob
"Enterprise" as some tend to
believe.
Bob
how much it costs. After that comes multi-threaded
memory I/O performance and power consumption. Raw CPU computational
performance should be way down in the priority level. Even a fairly
slow CPU should be able to saturate gigabit ethernet.
Bob
ors.
Bob
On Mon, 8 Feb 2010, Richard Elling wrote:
If there is insufficient controller bandwidth capacity, then the
controller becomes the bottleneck.
We don't tend to see this for HDDs, but SSDs can crush a controller and
channel.
It is definitely seen with older PCI hardware.
Bob
not aggressive enough.
I have observed that there may still be considerably more read
performance available (to another program/thread) even while a
benchmark program is reading sequentially as fast as it can.
Try running two copies of your benchmark program at once and see what
happens.
Bob
Some of us (outside
of Moscow) are keenly aware of the economic downturn. There were
also grave errors in judgement from certain people in Sun management.
The only winner in the server-wars has been IBM. All the other
big players have been losing. Even Dell has been losing.
Bob
ppy Oracle product summary page which provides practically no
useful information at all.
As a long-time devoted Sun customer who selects products based on web
sites, I would not buy anything from Oracle until this gets fixed.
Bob
'iostat -xe'.
Bob
Make sure to also test with a command like
iozone -m -t 8 -T -O -r 128k -o -s 12G
I am eager to read your test report.
Bob
On Sat, 13 Feb 2010, Bob Friesenhahn wrote:
Make sure to also test with a command like
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
since it creates a 4GB test file for each thread, with 8 threads.
Bob
, which does work on Solaris 10. See
http://www.brendangregg.com/dtrace.html.
Bob
g if there is a limit to the maximum size of an IDE-based
device and so some devices are claimed larger than others.
Bob
l), he has no qualms with
suggesting to test the unstable version. ;-)
Regardless of denials to the contrary, Solaris 10 is still the stable
enterprise version of Solaris, and will be for quite some time. It
has not yet achieved the status of Solaris 8.
Bob
memory to work with since that is how it is expected to be used.
The performance of Solaris when it is given enough memory to do
reasonable caching is astounding.
Bob
The OpenSolaris performance postings I have seen are not terribly far from
Solaris 10.
Bob
talled. Others have relied on patience. A few have given up and
considered their pool totally lost.
Bob
about read
latency, but L2ARC does not necessarily help with read bandwidth. It
is also useful to keep in mind that L2ARC offers at least 40x less
bandwidth than ARC in RAM. So always populate RAM first if you can
afford it.
Bob
nk that this is what you need to prepare for,
particularly with hardware going out on a truck to the field.
Bob
may cause one to want fewer disks per raidz-N vdev, or
to use a higher level of raidz protection (e.g. raidz2 rather than
raidz1, or raidz3 rather than raidz2).
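For example (device names hypothetical), two 6-disk raidz2 vdevs
rather than one wide 12-disk raidz1:
% zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0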
Bob
while
providing the best service.
Bob
P.S. NASA is tracking large asteroids and meteors with the hope that
they will eventually be able to deflect any which will strike our
planet, in an effort to save your precious data.
using another pool.
The vast majority of complaints to this list are about pool-wide
problems and not lost files due to media/disk failure.
Bob
On Wed, 17 Feb 2010, Daniel Carosone wrote:
These small numbers just tell you to be more worried about defending
against the other stuff.
Let's not forget that the most common cause of data loss is human
error!
Bob
from human memory while it is still
underway. If an impeccable log book is not kept and understood, then
it is up to (potentially) multiple administrators with varying levels
of experience to correctly understand and interpret the output of
'zpool status'.
Bob
ed in the Solaris release notes (maybe U5 or U6?)
and it happened to me. A fix to /etc/power.conf was required.
Perhaps that is what is happening to you.
Bob
Solaris/zfs defaults. This would also allow you to expand the
partition size a bit for a larger pool.
Bob
n" drive when it comes to update performance.
Bob
e requests and orders them on disk in such a way that
subsequent "sequential" reads by the same number of threads in a
roughly similar order would see a performance benefit.
Bob
resilver should complete within an 8-hour
work day so that a maintenance action can be performed in the morning,
and another in the evening.
Computers should be there to serve the attendant humans, not the other
way around. :-)
Bob
that measuring this is useless since results like this are posted all
over the internet, I challenge that person to find this data already
published somewhere.
Bob
Synchronous writes cost more.
Bob
mance drives were hideously expensive and
rather brute force). Which was relatively recently. The industry is
still evolving rapidly.
What is the problem? Is it that the X25-M cracked? The X25-M is
demonstrated to ignore cache sync and toss transactions. As such, it
is useless for a ZIL.
Bob
transaction group is written, then all of that
transient activity at the larger size is as if it never happened.
Eventually this is seen as a blessing.
Bob
escription. -- J. R. R. Tolkien (The Hobbit)
I am glad to be able to contribute positively and constructively to
this discussion.
Bob
like with HP? Is there a loss of bandwidth
or reliability due to their approach?
Bob
CPU
consumption is unexpected.
Are compression, sha256 checksums, or deduplication enabled for the
filesystem you are using?
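A quick way to check (dataset name hypothetical):
% zfs get compression,checksum,dedup tank/fs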
Bob
OpenSolaris is the king of multi-threading and excels on multiple
cores. Without this fine level of threading, SPARC CMT hardware would
be rendered useless.
With this in mind, some older versions of OpenSolaris did experience a
thread priority problem when compression was used.
Bob
PCIe (4 lane) Fibre Channel card and its duplex connection to the
storage array.
Bob
Another option is to try the latest OpenSolaris LiveCD from
genunix.org, and try to import it there.
Just a couple of days ago there was discussion of importing disks from
Linux FUSE zfs. The import was successful. The same method was used
(a directory containing symbolic links to the desired devices).
I would hate to use anything but mirrors with so many tiny files.
Bob
zfs filesystem which has its recordsize property
set to a size not much larger than the size of the files. This should
reduce waste, resulting in reduced potential for fragmentation in the
rest of the pool.
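A minimal sketch (names hypothetical; 8K is an example record size,
chosen to be close to the file size):
% zfs create -o recordsize=8k tank/smallfiles
% zfs get recordsize tank/smallfiles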
Bob
behavior has been changed)? Did I misunderstand?
Bob
large numbers of file systems; in my configuration I have about 16K
file systems to share, and boot times can be several hours. There is
an open bug.
Is boot performance with 16K mounted and exported file systems a whole
lot better if you use UFS instead?
Bob
which writes continuously. The main thing you can do is
to adjust zfs tunables to limit the size of a transaction group, or to
increase the frequency of transaction group commits. One such tunable
is
zfs:zfs_write_limit_override
set in /etc/system.
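For illustration (the 512 MB value is an example only):
* /etc/system: cap the amount of dirty data accepted per transaction group
set zfs:zfs_write_limit_override = 0x20000000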
Bob
able to
reasonably and efficiently support 16K mounted and exported file
systems.
Eventually Solaris is likely to work much better for this than it does
today, but most likely there are higher priorities at the moment.
Bob
exceed what the backing disks can sustain. Unfortunately, this
may increase the total amount of data written to underlying storage.
Bob
to see if there are
unusually slow (or overloaded) disks or increasing error counts? Is
the CPU load unusually high?
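For example:
% iostat -xn 5    # per-device service times and %busy, every 5 seconds
% iostat -xe      # cumulative soft/hard/transport error counts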
Bob
n sounds like a good idea. I doubt that GRUB
supports gzip compression, so take care that you use a compression
algorithm that GRUB understands or your system won't boot.
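A sketch, assuming lzjb (which the Solaris GRUB of that era could
read) and a hypothetical root dataset name:
% zfs set compression=lzjb rpool/ROOT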
Bob
large pool written to a single huge
SAN LUN suffers from concurrency issues. ZFS loses the ability to
intelligently schedule I/O for individual disks and instead must fall
back to posting a lot of (up to 35) simultaneous I/Os and hoping for
the best.
Bob
P.S. The term "zero" is quoted si
References to the same block need to be updated whenever that block is
updated (copied).
Bob
be due to writing
metadata.
Bob
rge drives.
Bob
If you install enough memory, then this becomes a non-issue.
If you are planning to build an NFS server, then it is good to know
that Solaris does NFS better than Linux or FreeBSD.
Bob
On Wed, 19 Oct 2011, Peter Jeremy wrote:
Doesn't a scrub do more than what 'fsck' does?
It does different things. I'm not sure about "more".
Zfs scrub validates user data while 'fsck' does not. I consider that
as being definitely "more".
It's only mailman email now.
I notice that the mail activity has diminished substantially since the
forums were shut down. Apparently they were still in use.
Bob
partitions, and
resilvering will be slow due to the drive heads flailing back and
forth between partitions. There is also the issue that the block
allocation is not likely to be very efficient in terms of head
movement if two partitions are used.
Bob
will still "resilver" blocks which
failed to read as long as there is a redundant copy.
If you do want to increase reliability then you should mirror between
disks, even if you feel that this will be slow. It will still be
faster (for reads) than using just one disk.
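For example (device names hypothetical), converting a single-disk vdev
into a two-way mirror:
% zpool attach tank c1t0d0 c2t0d0   # c2t0d0 resilvers as a mirror of c1t0d0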
sed until the next transaction has
been successfully started by writing the previous TXG group record to
disk. Given properly working hardware, the worst case scenario is
losing the whole transaction group and no "corruption" occurs.
Loss of data as seen by the client can definitely occur.
Bob
written in
a zfs transaction group not being representative of a coherent
database transaction.
Bob
d in Solaris 11?
Bob
ly check if low-level faults are being reported to fmd.
Bob
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
Anything else you suggest I'd check for faults? (Though I'm sort of
doubting it is an issue, I'm happy to be thorough.)
Try running
fmdump -ef
and see if new low-level fault events are coming in during the zfs
receive.
Bob
disk was still working.
Raidz1 is not very robust when used with large disks and with one
drive totally failed.
Bob
On Thu, 22 Dec 2011, Gareth de Vaux wrote:
On Thu 2011-12-22 (10:09), Bob Friesenhahn wrote:
One of your disks failed to return a sector. Due to redundancy, the
original data was recreated from the remaining disks. This is normal
good behavior (other than the disk failing to read the sector).
But it would be a grievous error if the zpool version supported by the
BootCD were newer than what the installed GRUB and OS can support.
Bob
automatically removed (from existing vdevs), and used to
add more vdevs. Eventually a limit would be hit so that no more
mirrors are allowed to be removed.
Obviously this approach works with simple mirrors but not for raidz.
Bob
re) which
provide an accurate description of how the ZIL works.
Bob
the file is written.
Bob
(e.g. 8K) then it can become a significant
issue. As Richard Elling points out, a database layered on top of zfs
may already be fragmented by design.
Bob
d on underlying disk sectors
rather than filesystem blocks.
Bob