Re: [zfs-discuss] ZFS ARC Cache and Solaris u8

2010-01-14 Thread Bob Friesenhahn
to work quite well. What have you heard is wrong with it? Bob

Re: [zfs-discuss] Add disk to raidz pool

2010-01-15 Thread Bob Friesenhahn
disk to a raidz vdev, but you can add additional vdevs to your pool (typically requires more than one disk) in order to expand the pool size. Bob
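For reference, a minimal sketch of growing a pool by adding a second vdev rather than growing the existing raidz vdev; the pool name tank and the c2t*d0 device names are hypothetical:
# add another raidz vdev of five disks to an existing pool
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
# confirm the new vdev appears alongside the original one
zpool status tank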

Re: [zfs-discuss] Recordsize...

2010-01-17 Thread Bob Friesenhahn
and write more data than absolutely required so that zfs does not need to read an existing block in order to update it. This also explains why the l2arc can be so valuable, if the data then fits in the ARC. Bob

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Bob Friesenhahn
needs to send the data. It won't read a redundant copy if it does not have to. It won't traverse metadata that it does not have to. A scrub reads/verifies all data and metadata. Bob

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Bob Friesenhahn
sufficiently distant to protect against any local disaster. If there is a disaster in the local data center, that system could be immediately put on line (assuming adequate connectivity), or that system could be loaded on a truck for overnight delivery as a replacement to the data center. Bob

Re: [zfs-discuss] Panic running a scrub

2010-01-19 Thread Bob Friesenhahn
/messages containing the trace. Is there any point in submitting a bug report? I seem to recall that you are not using ECC memory. If so, maybe the panic is a good thing. Bob

Re: [zfs-discuss] ZFS System Tuning - Solaris 10 u8

2010-01-19 Thread Bob Friesenhahn
I use zfs_arc_max. The reason is that this system tends to run applications for a short period of time which require quite a lot of memory but also do a lot of disk I/O. It is useful to hold some memory in reserve. Bob
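A minimal sketch of capping the ARC via /etc/system (takes effect after a reboot); the 4 GiB value is only an illustration, not a recommendation:
* /etc/system: limit the ZFS ARC to 4 GiB (value is in bytes)
set zfs:zfs_arc_max = 0x100000000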

Re: [zfs-discuss] ZFS/NFS/LDOM performance issues

2010-01-19 Thread Bob Friesenhahn
StorageTek 2540 here. I agree with Ray Van Dolson that the evidence supplied thus far points to an issue with the SSD. Perhaps the system is noticing a problem and is continually resetting it. Check for messages in /var/adm/messages. Bob

Re: [zfs-discuss] ZFS/NFS/LDOM performance issues

2010-01-19 Thread Bob Friesenhahn
somewhere else in the kernel. Bob

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Bob Friesenhahn
transitive protection offered by vendor "support" are interesting, I will be glad to meet you in the unemployment line, and then we can share some coffee and discuss the good old days. Bob

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Bob Friesenhahn
which does not attempt to strictly balance the reads. This does provide more performance than one disk, but not twice the performance. Is it even possible to do a raid 0+1? No. Bob

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Bob Friesenhahn
that does not make them right. At some point, using the wrong terminology becomes foolish and counterproductive. Striping and load-share seem quite different to me. The difference is immediately apparent when watching the drive activity LEDs. Bob

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Bob Friesenhahn
words, I think that you are making a wise choice. :-) Bob

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Bob Friesenhahn
at there are enough working drives remaining to keep up with RMAed units. Be sure to mark any failed drive using a sledgehammer so that you don't accidentally use it again. Bob

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Bob Friesenhahn
mirrors instead of something like RAID-Z2 / RAID-Z3? Because raidz3 only supports triple redundancy but mirrors can support much more. And how many drives do you (recommend to) use within each mirror vdev? Ten for this model of drive. Bob

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Bob Friesenhahn
ed by design. Bob

Re: [zfs-discuss] nfs mounts don't follow child filesystems?

2010-01-23 Thread Bob Friesenhahn
or does it actually exist? I am using s10_u8. The Solaris 10 automounter should handle this for you:
% cat /etc/auto_home
# Home directory map for automounter
#
#+auto_home
*       myserver:/export/home/&
Notice that the referenced path is subordinate to the exported zfs filesystem. Bob

Re: [zfs-discuss] nfs mounts don't follow child filesystems?

2010-01-23 Thread Bob Friesenhahn
filesystems are NFS exported due to the inheritance of zfs properties from their parent directory. The property is only set in one place. Bob
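A sketch of that inheritance, assuming a hypothetical tank/export/home hierarchy: setting sharenfs once on the parent causes every child filesystem to be exported as well.
# set the property in one place; children inherit it
zfs set sharenfs=on tank/export/home
# verify: children report the value as inherited from the parent
zfs get -r sharenfs tank/export/home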

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Bob Friesenhahn

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Bob Friesenhahn
Anyone who has played with audio frequency sweeps and a large subwoofer soon becomes familiar with resonance and that the lower frequencies often cause more problems than the higher ones. Bob

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-28 Thread Bob Friesenhahn
summary for our edification? Thanks, Bob

Re: [zfs-discuss] Checksum fletcher4 or sha256 ?

2010-01-29 Thread Bob Friesenhahn
algorithm does consume noticeable amounts of CPU, particularly since the checksums are computed when a transaction group is saved. Bob
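For reference, the checksum algorithm is a per-dataset property; a sketch with a hypothetical dataset name:
# switch new writes to sha256 checksums (existing blocks keep their old checksum)
zfs set checksum=sha256 tank/data
zfs get checksum tank/data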

Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Bob Friesenhahn
mechanism can synchronise its data to disk before requesting a snapshot. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Bob Friesenhahn
experiences thus far is that if you pay for a Sun service contract, then you should definitely pay extra for Sun branded parts. Hopefully Oracle will do better than Sun at explaining the benefits and services provided by a service contract. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Bob Friesenhahn
specially designed to support it whereas iSCSI is a TCP-based protocol. FCoE is basically fiber channel "SAN" protocol over ethernet. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Bob Friesenhahn
built open standards to solve it, is out there desperately trying to push some baloney called Etherband or something because all you bank admins are too daft to buy anything that does not have Ether in the name. :(

Re: [zfs-discuss] ZFS compression on Clearcase

2010-02-04 Thread Bob Friesenhahn
filesystem, which stores a cache of those files on the local system. Clearcase instruments access to its versioning filesystem so it knows all of the actions which resulted in a built object. This means that there are two places (server and client) where zfs may be involved. Bob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Bob Friesenhahn
single disk. For random access, the stripe performance cannot be faster than the slowest disk though. Bob

Re: [zfs-discuss] Impact of an enterprise class SSD on ZIL performance

2010-02-04 Thread Bob Friesenhahn
our data. The X25-M is about as valuable as a paperweight for use as a zfs slog. Toilet paper would be a step up. Bob

Re: [zfs-discuss] Impact of an enterprise class SSD on ZIL performance

2010-02-04 Thread Bob Friesenhahn
A number of people have verified this for themselves and posted results. Even the X25-E has been shown to lose some transactions. Bob

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Bob Friesenhahn
Regardless, for zfs, memory is more important than raw CPU performance. Bob

Re: [zfs-discuss] Impact of an enterprise class SSD on ZIL performance

2010-02-05 Thread Bob Friesenhahn
"Enterprise" as some tend to believe. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ ___ zfs-discuss mailing list zfs-discuss@open

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Bob Friesenhahn
how much it costs. After that comes multi-threaded memory I/O performance and power consumption. Raw CPU computational performance should be way down the priority list. Even a fairly slow CPU should be able to saturate gigabit ethernet. Bob

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-08 Thread Bob Friesenhahn

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-08 Thread Bob Friesenhahn
On Mon, 8 Feb 2010, Richard Elling wrote: If there is insufficient controller bandwidth capacity, then the controller becomes the bottleneck. We don't tend to see this for HDDs, but SSDs can crush a controller and channel. It is definitely seen with older PCI hardware. Bob

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-08 Thread Bob Friesenhahn
not aggressive enough. I have observed that there may still be considerably more read performance available (to another program/thread) even while a benchmark program is reading sequentially as fast as it can. Try running two copies of your benchmark program at once and see what happens. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-09 Thread Bob Friesenhahn
Some of us (outside of Moscow) are keenly aware of the economic down-turn. There were also grave errors in judgement from certain people in Sun management. The only winner in the server-wars has been IBM. All the other big players have been losing. Even Dell has been losing. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-09 Thread Bob Friesenhahn
ppy Oracle product summary page which provides practically no useful information at all. As a long-time devoted Sun customer who selects products based on web sites, I would not buy anything from Oracle until this gets fixed. Bob

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Bob Friesenhahn
'iostat -xe'. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
Make sure to also test with a command like iozone -m -t 8 -T -O -r 128k -o -s 12G. I am eager to read your test report. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
On Sat, 13 Feb 2010, Bob Friesenhahn wrote: Make sure to also test with a command like iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G since it creates a 4GB test file for each thread, with 8 threads. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
, which does work on Solaris 10. See "http://www.brendangregg.com/dtrace.html". Bob

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Bob Friesenhahn
g if there is a limit to the maximum size of an IDE-based device and so some devices are claimed larger than others. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
l), he has no qualms with suggesting to test the unstable version. ;-) Regardless of denials to the contrary, Solaris 10 is still the stable enterprise version of Solaris, and will be for quite some time. It has not yet achieved the status of Solaris 8. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
memory to work with since that is how it is expected to be used. The performance of Solaris when it is given enough memory to do reasonable caching is astounding. Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
OpenSolaris performance postings I have seen are not terribly far from Solaris 10. Bob

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Bob Friesenhahn
talled. Others have relied on patience. A few have given up and considered their pool totally lost. Bob

Re: [zfs-discuss] SSD and ZFS

2010-02-16 Thread Bob Friesenhahn
about read latency, but L2ARC does not necessarily help with read bandwidth. It is also useful to keep in mind that L2ARC offers at least 40x less bandwidth than ARC in RAM. So always populate RAM first if you can afford it. Bob

Re: [zfs-discuss] Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3

2010-02-16 Thread Bob Friesenhahn
think that this is what you need to prepare for, particularly with hardware going out on a truck to the field. Bob

Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-16 Thread Bob Friesenhahn
may cause one to want fewer disks per raidz-N vdev, or to use a higher level of raidz protection (e.g. raidz2 rather than raidz1, or raidz3 rather than raidz2). Bob
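A sketch of trading capacity for protection at pool-creation time; the pool and device names are hypothetical:
# six disks as one raidz2 vdev: any two drives may fail without data loss
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0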

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Bob Friesenhahn
while providing the best service. Bob P.S. NASA is tracking large asteroids and meteors with the hope that they will eventually be able to deflect any which will strike our planet, in an effort to save your precious data.

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Bob Friesenhahn
g another pool. The vast majority of complaints to this list are about pool-wide problems and not lost files due to media/disk failure. Bob

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-17 Thread Bob Friesenhahn
On Wed, 17 Feb 2010, Daniel Carosone wrote: These small numbers just tell you to be more worried about defending against the other stuff. Let's not forget that the most common cause of data loss is human error! Bob

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-17 Thread Bob Friesenhahn
from human memory while it is still underway. If an impeccable log book is not kept and understood, then it is up to (potentially) multiple administrators with varying levels of experience to correctly understand and interpret the output of 'zpool status'. Bob

Re: [zfs-discuss] false DEGRADED status based on "cannot open" device at boot.

2010-02-17 Thread Bob Friesenhahn
ed in the Solaris release notes (maybe U5 or U6?) and it happened to me. A fix to /etc/power.conf was required. Perhaps that is what is happening to you. Bob

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Bob Friesenhahn
Solaris/zfs defaults. This would also allow you to expand the partition size a bit for a larger pool. Bob

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Bob Friesenhahn
n" drive when it comes to update performance. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/___ zfs-discuss mailing list zfs-discuss@op

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
e requests and orders them on disk in such a way that subsequent "sequential" reads by the same number of threads in a roughly similar order would see a performance benefit. Bob

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread Bob Friesenhahn
resilver should complete within an 8-hour work day so that a maintenance action can be performed in the morning, and another in the evening. Computers should be there to serve the attendant humans, not the other way around. :-) Bob

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
that measuring this is useless since results like this are posted all over the internet, I challenge that someone to find this data already published somewhere. Bob

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Bob Friesenhahn
Synchronous writes cost more. Bob

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Bob Friesenhahn
performance drives were hideously expensive and rather brute force). Which was relatively recently. The industry is still evolving rapidly. What is the problem, is it that the X25-M cracked? The X25-M is demonstrated to ignore cache sync and toss transactions. As such, it is useless for a ZIL. Bob

Re: [zfs-discuss] More performance questions [on zfs over nfs]

2010-02-21 Thread Bob Friesenhahn
transaction group is written, then all of that transient activity at the larger size is as if it never happened. Eventually this is seen as a blessing. Bob

Re: [zfs-discuss] future of OpenSolaris

2010-02-23 Thread Bob Friesenhahn
description. --J. R. R. Tolkien (The Hobbit) I am glad to be able to contribute positively and constructively to this discussion. Bob

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-23 Thread Bob Friesenhahn
like with HP? Is there a loss of bandwidth or reliability due to their approach? Bob

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bob Friesenhahn
CPU consumption is unexpected. Are compression, sha256 checksums, or deduplication enabled for the filesystem you are using? Bob
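A quick way to answer that question for a given filesystem (the dataset name is hypothetical):
# show whether the CPU-hungry features are enabled
zfs get compression,checksum,dedup tank/myfs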

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bob Friesenhahn
OpenSolaris is the king of multi-threading and excels on multiple cores. Without this fine level of threading, SPARC CMT hardware would be rendered useless. With this in mind, some older versions of OpenSolaris did experience a thread priority problem when compression was used. Bob

Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Bob Friesenhahn
PCIe (4 lane) fiber channel card and its duplex connection to the storage array. Bob

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-23 Thread Bob Friesenhahn
Another option is to try the latest OpenSolaris livecd from genunix.org, and try to import it there. Just a couple of days ago there was discussion of importing disks from Linux FUSE zfs. The import was successful. The same methods used (directory containing symbolic links to desired devices)
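A sketch of the symbolic-link method mentioned above; the directory, device path, and pool name are hypothetical:
# point zpool import at a directory containing links to only the wanted devices
mkdir /tmp/import-devs
ln -s /dev/dsk/c5t0d0s0 /tmp/import-devs/
zpool import -d /tmp/import-devs tank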

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
hate to use anything but mirrors with so many tiny files. Bob

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
zfs filesystem which has its recordsize property set to a size not much larger than the size of the files. This should reduce waste, resulting in reduced potential for fragmentation in the rest of the pool. Bob
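A sketch of matching recordsize to the files, assuming files of roughly 8 KB and a hypothetical dataset name:
# create a dataset whose recordsize is close to the file size
zfs create -o recordsize=8k tank/smallfiles
# the property can also be changed later; it only affects newly written files
zfs set recordsize=8k tank/smallfiles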

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Bob Friesenhahn
behavior has been changed)? Did I misunderstand? Bob

Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Bob Friesenhahn
large numbers of file systems, in my configuration I have about 16K file systems to share and boot times can be several hours. There is an open bug. Is boot performance with 16K mounted and exported file systems a whole lot better if you use UFS instead? Bob

Re: [zfs-discuss] application writes are blocked near the end of spa_sync

2010-02-25 Thread Bob Friesenhahn
which writes continuously. The main thing you can do is to adjust zfs tunables to limit the size of a transaction group, or to increase the frequency of transaction group commits. One such tunable is zfs:zfs_write_limit_override set in /etc/system. Bob
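A sketch of those /etc/system tunables (reboot required); the byte and second values are only illustrations, and zfs:zfs_txg_timeout is assumed here as the knob for commit frequency:
* limit the size of a transaction group to roughly 1 GiB (bytes)
set zfs:zfs_write_limit_override = 0x40000000
* commit transaction groups more frequently (seconds)
set zfs:zfs_txg_timeout = 5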

Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Bob Friesenhahn
able to reasonably and efficiently support 16K mounted and exported file systems. Eventually Solaris is likely to work much better for this than it does today, but most likely there are higher priorities at the moment. Bob

Re: [zfs-discuss] application writes are blocked near the end of spa_sync

2010-02-26 Thread Bob Friesenhahn
exceed what the backing disks can sustain. Unfortunately, this may increase the total amount of data written to underlying storage. Bob

Re: [zfs-discuss] slow zfs scrub?

2010-02-28 Thread Bob Friesenhahn
to see if there are unusually slow (or overloaded) disks or increasing error counts? Is the CPU load unusually high? Bob
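One way to watch for a slow or erroring disk during the scrub (the 10-second interval is arbitrary):
# extended statistics, error counters, and descriptive device names, every 10 seconds
iostat -xen 10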

Re: [zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-02-28 Thread Bob Friesenhahn
Compression sounds like a good idea. I doubt that GRUB supports gzip compression so take care that you use a compression algorithm that GRUB understands or your system won't boot. Bob
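A sketch, assuming a hypothetical boot-environment dataset name; lzjb is used here as the lightweight algorithm GRUB is generally expected to understand, per the caution above:
# enable lightweight compression on the root filesystem dataset
zfs set compression=lzjb rpool/ROOT/s10u8
zfs get compression rpool/ROOT/s10u8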

Re: [zfs-discuss] ZFS Large scale deployment model

2010-03-02 Thread Bob Friesenhahn
A large pool written to a single huge SAN LUN suffers from concurrency issues. ZFS loses the ability to intelligently schedule I/O for individual disks and instead must use the strategy of posting a lot of (up to 35) simultaneous I/Os and hoping for the best. Bob P.S. The term "zero" is quoted si

Re: [zfs-discuss] zvol space consumption vs ashift, metadata packing

2011-10-10 Thread Bob Friesenhahn
references to the same block need to be updated whenever that block is updated (copied). Bob

Re: [zfs-discuss] zvol space consumption vs ashift, metadata packing

2011-10-10 Thread Bob Friesenhahn
be due to writing metadata. Bob

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-15 Thread Bob Friesenhahn
large drives. Bob

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Bob Friesenhahn
install enough memory, then this becomes a non-issue. If you are planning to build an NFS server, then it is good to know that Solaris does NFS better than Linux or FreeBSD. Bob

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Bob Friesenhahn
On Wed, 19 Oct 2011, Peter Jeremy wrote: Doesn't a scrub do more than what 'fsck' does? It does different things. I'm not sure about "more". Zfs scrub validates user data while 'fsck' does not. I consider that as being definitel

Re: [zfs-discuss] Log disk with all ssd pool?

2011-11-01 Thread Bob Friesenhahn
It's only mailman email now. I notice that the mail activity has diminished substantially since the forums were shut down. Apparently they were still in use. Bob

Re: [zfs-discuss] Couple of questions about ZFS on laptops

2011-11-08 Thread Bob Friesenhahn
partitions and resilvering will be slow due to the drive heads flailing back and forth between partitions. There is also the issue that the block allocation is not likely to be very efficient in terms of head movement if two partitions are used. Bob

Re: [zfs-discuss] Couple of questions about ZFS on laptops

2011-11-08 Thread Bob Friesenhahn
will still "resilver" blocks which failed to read as long as there is a redundant copy. If you do want to increase reliability then you should mirror between disks, even if you feel that this will be slow. It will still be faster (for reads) than using just one disk.

Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Bob Friesenhahn
sed until the next transaction has been successfully started by writing the previous TXG group record to disk. Given properly working hardware, the worst case scenario is losing the whole transaction group and no "corruption" occurs. Loss of data as seen by the client can definitely occur. Bob
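For reference, the property under discussion is per-dataset and can be reverted at any time; the dataset name is hypothetical:
# accept the data-loss window described above in exchange for faster writes
zfs set sync=disabled tank/scratch
# restore the normal synchronous semantics
zfs set sync=standard tank/scratch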

Re: [zfs-discuss] zfs sync=disabled property

2011-11-11 Thread Bob Friesenhahn
written in a zfs transaction group not being representative of a coherent database transaction. Bob

Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Bob Friesenhahn
fixed in Solaris 11? Bob

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn
ly check if low-level faults are being reported to fmd. Bob

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote: Anything else you suggest I'd check for faults? (Though I'm sort of doubting it is an issue, I'm happy to be thorough) Try running fmdump -ef and see if new low-level fault events are coming in during the zfs receive. Bob

Re: [zfs-discuss] Corrupt Array

2011-12-22 Thread Bob Friesenhahn
disk was still working. Raidz1 is not very robust when used with large disks and with one drive totally failed. Bob

Re: [zfs-discuss] Corrupt Array

2011-12-22 Thread Bob Friesenhahn
On Thu, 22 Dec 2011, Gareth de Vaux wrote: On Thu 2011-12-22 (10:09), Bob Friesenhahn wrote: One of your disks failed to return a sector. Due to redundancy, the original data was recreated from the remaining disks. This is normal good behavior (other than the disk failing to read the sector

Re: [zfs-discuss] ZFS Upgrade

2012-01-07 Thread Bob Friesenhahn
but it would be a grievous error if the zpool version supported by the BootCD was newer than what the installed GRUB and OS can support. Bob

Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-07 Thread Bob Friesenhahn
automatically removed (from existing vdevs), and used to add more vdevs. Eventually a limit would be hit so that no more mirrors are allowed to be removed. Obviously this approach works with simple mirrors but not for raidz. Bob

Re: [zfs-discuss] ZIL on a dedicated HDD slice (1-2 disk systems)

2012-01-08 Thread Bob Friesenhahn
re) which provide an accurate description of how the ZIL works. Bob

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-08 Thread Bob Friesenhahn
the file is written. Bob

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-09 Thread Bob Friesenhahn
(e.g. 8K) then it can become a significant issue. As Richard Elling points out, a database layered on top of zfs may already be fragmented by design. Bob

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-13 Thread Bob Friesenhahn
d on underlying disk sectors rather than filesystem blocks. Bob
