Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-26 Thread Bob Friesenhahn
and the type of drives used. All types of drives fail but typical SATA drives fail more often. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
there is space for the new drive, it is not necessary to degrade the array and lose redundancy while replacing a device. As long as you can physically add a drive to the system (even temporarily) it is not necessary to deliberately create a fault situation. Bob -- Bob Friesenhahn

Re: [zfs-discuss] Performance drop during scrub?

2010-04-27 Thread Bob Friesenhahn
disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one based on RAIDz2. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagic

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
orm of replace like 'zpool replace tank c1t1d0 c1t1d7'. If I understand things correctly, this allows you to replace one good disk with another without risking the data in your pool. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen
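
A minimal sketch of that replacement, using the pool and device names quoted above (adjust to your own system):

  # both disks stay online while the data is copied, so no redundancy is lost
  zpool replace tank c1t1d0 c1t1d7
  # watch the resilver complete before pulling the old disk
  zpool status tank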

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
immediately when the transfer starts? Please do me a favor and check this for me. Thanks, Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Performance drop during scrub?

2010-04-28 Thread Bob Friesenhahn
become IOPS bound. It is true that all HDDs become IOPS bound, but the mirror configuration offers more usable IOPS and therefore the user waits less time for their request to be satisfied. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
uld remain alive. Likewise, there is likely more I/O bandwidth available if the vdevs are spread across controllers. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Graphics

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
made that smaller less capable simplex-routed shelves may be a more cost effective and reliable solution when used carefully with zfs. For example, mini-shelves which support 8 drives each. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen

Re: [zfs-discuss] backwards/forward compatibility

2010-04-28 Thread Bob Friesenhahn
possible to create an older pool version using newer software. Of course, it is also necessary to make sure that any created filesystems are a sufficiently low version. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer
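
A hedged sketch of doing this; the version numbers are illustrative, 'zpool upgrade -v' lists what your software actually supports, and this assumes your zfs build accepts a version property at create time:

  # create a pool at an older on-disk version
  zpool create -o version=14 tank mirror c1t0d0 c1t1d0
  # create filesystems at a matching older version as well
  zfs create -o version=3 tank/data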

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Bob Friesenhahn
many hours. I would rather have the disk finish resilvering before I get the chance to replace the bad disk than risk more disks failing before it has a chance to resilver. Would your opinion change if the disks you used took 7 days to resilver? Bob -- Bob Friesenhahn bfrie

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Bob Friesenhahn
l for discovering bit-rot in singly-redundant pools. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
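
For reference, kicking off and checking such a scrub takes only two commands (the pool name tank is illustrative):

  zpool scrub tank
  zpool status -v tank    # shows scrub progress and lists any files with errors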

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-04-30 Thread Bob Friesenhahn
that you will be able to reinstall the OS and achieve what you had before. An exact recovery method (dd of partition images or recreate pool with 'zfs receive') seems like the only way to be assured of recovery moving forward. Bob -- Bob Friesenhahn bfrie...@simple.dal
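
A rough sketch of the 'zfs receive' approach, assuming a zfs version with 'zfs send -R' (pool, snapshot, and file names are illustrative):

  # capture the pool, its properties, and all descendant filesystems
  zfs snapshot -r rpool@backup
  zfs send -R rpool@backup > /backup/rpool.zstream
  # later, restore into a freshly created pool
  zfs receive -Fd rpool < /backup/rpool.zstream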

Re: [zfs-discuss] Performance drop during scrub?

2010-04-30 Thread Bob Friesenhahn
re is more than one level of redundancy, scrubs are not really warranted. With just one level of redundancy it becomes much more important to verify that both copies were written to disk correctly. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfr

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-01 Thread Bob Friesenhahn
Then you would update to the minimum version required to support that feature. Note that if the default filesystem version changes and you create a new filesystem, this may also cause problems (I have been bit by that before). Bob -- Bob Friesenhahn bfrie...@simple.dallas.t

Re: [zfs-discuss] Performance drop during scrub?

2010-05-01 Thread Bob Friesenhahn
ragile and easily broken. It is necessary to look at all the factors which might result in data loss before deciding what the most effective steps are to minimize the probability of loss. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ Grap

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-01 Thread Bob Friesenhahn
all", then you would be ok. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
flaws, become dominant. The human factor is often the dominant one, since most data loss is still due to human error. Most of the data loss problems reported here stem from human error or hardware design flaws. Bob -- Bob Friesenhahn bfrie

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
s in return for wisdom. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Default 'zpool' I want to move it to my new raidz pool 'gpool' how?

2010-05-02 Thread Bob Friesenhahn
On Sun, 2 May 2010, Roy Sigurd Karlsbakk wrote: Any guidance on how to do it? I tried to do zfs snapshot You can't boot off raidz. That's for data only. Unless you use FreeBSD ... Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/user

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
ill usually hit the (old) hardware less severely than scrub. Resilver does not have to access any of the redundant copies of data or metadata, unless they are the only remaining good copy. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/

Re: [zfs-discuss] Default 'zpool' I want to move it to my new raidz pool 'gpool' how?

2010-05-02 Thread Bob Friesenhahn
ld be somewhat problematic. Once the new drive is functional, you can detach the failing one. Make sure that GRUB will boot from the new drive. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagic
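
A sketch of that sequence for an x86 zfs root (device and slice names are illustrative):

  # attach the new drive alongside the failing one and let it resilver
  zpool attach rpool c0t0d0s0 c0t2d0s0
  # once 'zpool status' shows the resilver is done, drop the failing drive
  zpool detach rpool c0t0d0s0
  # make the new drive bootable
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0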

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
90-day eval entitlement) has expired. As a result, it is wise for Solaris 10 users to maintain a local repository of licensed patches in case their service contract should expire. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ Grap

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
t like in OpenSolaris. Solaris 10 Live Upgrade is dramatically improved in conjunction with zfs boot. I am not sure how far behind it is from the new OpenSolaris boot administration tools, but under zfs its function cannot be terribly different. Bob -- Bob Friesenhahn bfrie...@simple.dallas.t

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Bob Friesenhahn
s quite doable since it is the normal case, as when a system is installed onto a mirrored pair of disks. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Bob Friesenhahn
h 'zpool status' does not mention it and it says the disk is resilvered. The flashing lights annoyed me so I exported and imported the pool and then the flashing lights were gone. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
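
For reference, the export/import cycle mentioned is just (pool name illustrative):

  zpool export tank
  zpool import tank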

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Bob Friesenhahn
percolate down to Solaris 10 so I don't feel as left out as Richard would like me to feel. From a zfs standpoint, Solaris 10 does not seem to be behind the currently supported OpenSolaris release. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfr

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Bob Friesenhahn
e far different. Solaris 10 can then play catch-up with the release of U9 in 2012. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ ___ zfs-di

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
ful to me. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
the number of simultaneous requests. Currently available L2ARC SSD devices are very good with a high number of I/Os, but they are quite a bottleneck for bulk reads as compared to L1ARC in RAM. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen

Re: [zfs-discuss] Disks configuration for a zfs based samba/cifs file server

2010-05-07 Thread Bob Friesenhahn
be even better. The good news is that if you do slice your disks, you can change your mind in the future if/when you add more disks. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Gra

Re: [zfs-discuss] ZFS root ARC memory usage on VxFS system...

2010-05-07 Thread Bob Friesenhahn
pps that need memory, but is vxfs asking for memory in a way that zfs is pushing it into the corner? Zfs does not push very hard (although it likes to consume). Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ Grap

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-08 Thread Bob Friesenhahn
ap enabled, the kernel is given another degree of freedom, to choose which is colder: idle process memory, or cold cached files. Are you sure about this? It is always good to be sure ... -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Bob Friesenhahn
/proc/sys/vm/swappiness. Anyone that knows how this is tuned in osol, btw? While this is the zfs-discuss list, usually we are talking about Solaris/OpenSolaris here rather than "most OSes". No? Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/user

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Bob Friesenhahn
things up again by not caching file data via the "unified page cache" and using a specialized ARC instead. It seems that simple paging and MMU control was found not to be smart enough. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfri

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-10 Thread Bob Friesenhahn
I certainly would not want to use a system which is in this dire condition. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Storage 7410 Flush ARC for filebench

2010-05-12 Thread Bob Friesenhahn
nting the filesystem is sufficient. For my own little cpio-based "benchmark", umount/mount was sufficient to restore uncached behavior. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Gra
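
A minimal sketch of that reset between benchmark runs (dataset name illustrative):

  # remounting discards this filesystem's cached data, so the next run starts cold
  zfs unmount tank/bench
  zfs mount tank/bench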

Re: [zfs-discuss] Very serious performance degradation

2010-05-18 Thread Bob Friesenhahn
are using, and the controller type. It seems likely that one or more of your disks has been barely working since the initial installation. Even 20 MB/s is quite slow. Use 'iostat -x 30' with an I/O load to see if one disk is much slower than the others. Bob -- Bob Friesen

Re: [zfs-discuss] Very serious performance degradation

2010-05-18 Thread Bob Friesenhahn
would look just as bad. Regardless, the first step would be to investigate 'sd5'. If 'sd4' is also a terrible performer, then resilvering a disk replacement of 'sd5' may take a very long time. Use 'iostat -xen' to obtain more informatio
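
A hedged example of the kind of check being suggested (30-second interval; device names will differ):

  # extended per-device statistics with error counts
  iostat -xen 30
  # look for one device with asvc_t (service time) far above its peers,
  # or with nonzero error columns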

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread Bob Friesenhahn
;re not going to get this if any part of the log device is at the other side of a WAN. So either add a mirror of log devices locally and not across the WAN, or don't do it at all. This depends on the nature of the WAN. The WAN latency may still be relatively low as compared with drive lat

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Bob Friesenhahn
e maximum number of files per directory all make a difference to how much RAM you should have for good performance. If you have 200TB of stored data, but only actually access 2GB of it at any one time, then the caching requirements are not very high. Bob -- Bob Friesenhahn bfrie...@simple.dallas.t

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-21 Thread Bob Friesenhahn
blind assumptions in the above. The only good choice for a ZIL device is one you know about with certainty, not from assumptions based on 3rd-party articles and blog postings. Otherwise it is like assuming that if you jump through an open window there will be firemen down below to catch you. Bob

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
he vendor has historically published reliable specification sheets. This may not be the same as money in the bank, but it is better than relying on thoughts from some blog posting. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
depends on plenty of already erased blocks) would stop once the spare space in the SSD has been consumed. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Graphics

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
en be overwritten. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-22 Thread Bob Friesenhahn
adding this to your /etc/system file:

  * Set device I/O maximum concurrency
  * http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
  set zfs:zfs_vdev_max_pending = 5

Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http

Re: [zfs-discuss] New SSD options

2010-05-22 Thread Bob Friesenhahn
lying power to the drive before the capacitor is sufficiently charged, and some circuitry which shuts off the flow of energy back into the power supply when the power supply shuts off (could be a silicon diode if you don't mind the 0.7 V drop). Bob -- Bob Friesenhahn bfrie...@simple.da

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Bob Friesenhahn
is necessarily best for this. Perhaps the load issued to these two disks contains more random access requests. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Gra

Re: [zfs-discuss] questions about zil

2010-05-25 Thread Bob Friesenhahn
der extremely ideal circumstances. It seems that toilet paper may be of much more practical use than these specifications. In fact, I reject them as being specifications at all. The Apollo reentry vehicle was able to reach amazing speeds, but only for a single use. Bob -- Bob Friesenhahn bfrie...

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-25 Thread Bob Friesenhahn
che, even if it is just a 4K working buffer representing one SSD erasure block. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] questions about zil

2010-05-25 Thread Bob Friesenhahn
Perhaps if you were talking about a maximum ambient temperature specification or maximum allowed elevation, then a maximum specification makes sense. Perhaps the device is fine (I have no idea) but these posted specifications are virtually useless. Bob -- Bob Friesenhahn bfrie...@simple.da

Re: [zfs-discuss] creating a fast ZIL device for $200

2010-05-26 Thread Bob Friesenhahn
early enough that it will always be there when needed. The profiling application might need to drive a disk for several hours (or a day) in order to fully understand how it behaves. Remapped failed sectors would cause this micro-timing to fail, but only for the remapped sectors. Bob -

Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Bob Friesenhahn
traditional backup tools like tar, cpio, etc…? The whole stream will be rejected if a single bit is flipped. Tar and cpio will happily barge on through the error. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Bob Friesenhahn
s you are using, but I think that there are recent updates to Solaris 10 and OpenSolaris which are supposed to solve this problem (zfs blocking access to CPU by applications). From Solaris 10 x86 (kernel 142901-09): 6586537 async zio taskqs can block out userland commands Bob -- Bob Friese

Re: [zfs-discuss] mirror writes 10x slower than individual writes

2010-05-31 Thread Bob Friesenhahn
performance from zfs using these drives. I suggest getting rid of it and replacing it with a drive which still uses standard 512-byte sectors internally. It will be like night and day. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Bob Friesenhahn
te in each burst. I think that (with multiple writers) the zfs pool will be "healthier" and less fragmented if you can offer zfs more RAM and accept some stalls during writing. There are always tradeoffs. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystem

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Bob Friesenhahn
h may seem like more CPU but could be related to PCI-E access, interrupts, or a controller bottleneck. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ ___

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-06-01 Thread Bob Friesenhahn
. Depending on your system, this might not be an issue, but it is possible that there is an I/O threshold beyond which something (probably hardware) causes a performance issue. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Main

Re: [zfs-discuss] HDD best practices

2010-06-01 Thread Bob Friesenhahn
service a drive which has completely failed. Some arrays may lose the LUN entirely and require that it be recreated via a management interface rather than immediately making a new drive available via the original LUN ID. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Bob Friesenhahn
from raidz3 is unlikely to be borne out in practice. Other potential failure modes will completely drown out the on-paper reliability improvement provided by raidz3. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,

Re: [zfs-discuss] ZFS ARC cache issue

2010-06-03 Thread Bob Friesenhahn
gram arranges to modify a byte in each page of allocated memory, then there is a better chance of success (but not assured). Expect system performance to suffer dramatically. SunOS 4 provided a program named "chill" which performed this function. Bob -- Bob Friesenhahn

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Bob Friesenhahn
, or a tree can fall on the computer during a storm. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-06-04 Thread Bob Friesenhahn
ld be able to get a gigabit of traffic in both directions at once, but this depends on the quality of your ethernet switch, ethernet adaptor card, device driver, and capabilities of where the data is read and written to. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystem

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-06-05 Thread Bob Friesenhahn
based on recent activity and should dynamically tune prefetch based on that knowledge. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-05 Thread Bob Friesenhahn
-- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] zfs corruptions in pool

2010-06-06 Thread Bob Friesenhahn
the next transaction group. If the disk fails to sync its cache and writes data out of order (data from multiple transaction groups), then zfs loses consistency. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintaine

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Bob Friesenhahn
local-link level, and with long delays. You can be sure that companies like cisco will be (or are) selling FCoE hardware to compete with FC SANs. The intention is that ethernet will put fibre channel out of business. We shall see if history repeats itself. Bob -- Bob Friesenhahn

Re: [zfs-discuss] NOTICE: spa_import_rootpool: error 5

2010-06-07 Thread Bob Friesenhahn
release/patch level they were at before. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-08 Thread Bob Friesenhahn
he weeds although there are probably plenty of weeds growing at the ranch. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-10 Thread Bob Friesenhahn
RAM also help immensely. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-10 Thread Bob Friesenhahn
n). These rules of thumb are not terribly accurate. If performance is important, then there is no substitute for actual testing. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,
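
Under that caveat, even a crude dd run can expose gross problems before a real workload test (paths and sizes illustrative; note the read-back may be served largely from ARC if the file fits in RAM):

  # sequential write test, then read it back
  dd if=/dev/zero of=/tank/testfile bs=1048576 count=8192
  dd if=/tank/testfile of=/dev/null bs=1048576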

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Bob Friesenhahn
the kernel and other caches use a large part of the memory. Don't forget that virtual memory pages may also come from memory mapped files from the filesystem. However, it seems that zfs is effectively diminishing this. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.si

Re: [zfs-discuss] Please trim posts

2010-06-10 Thread Bob Friesenhahn
very good at hiding existing text in its user interface so people think nothing of including most/all of the email they are replying to. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Bob Friesenhahn
the CDDLd original ZFS implementation into the Linux kernel. +1 The issues are largely philosophical. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Bob Friesenhahn
ompatibility" might actually mean. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Bob Friesenhahn
ng these things since if it was all actually true, then Linux, *BSD, and Solaris distributions could not legally exist. Thankfully, only part of the above is true. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintaine

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Bob Friesenhahn
On Fri, 11 Jun 2010, Freddie Cash wrote: On Fri, Jun 11, 2010 at 12:25 PM, Bob Friesenhahn wrote: On Fri, 11 Jun 2010, Freddie Cash wrote: For the record, the following paragraph was incorrectly quoted by Bob.  This paragraph was originally It would not have been incorrectly quoted

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-11 Thread Bob Friesenhahn
www.fcoe.com/
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-462176.html
http://www.brocade.com/products-solutions/solutions/connectivity/FCoE/index.page
http://www.emulex.com/products/converged-network-adapters.html
Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us

Re: [zfs-discuss] Native ZFS for Linux

2010-06-12 Thread Bob Friesenhahn
GPL license. GPLv3 was written over a span of quite a few years, with many lawyers involved. Opinions/advice on the FSF/GNU web site are now based on GPLv3 since it is the current GPL license. Linux is locked into the GPLv2 license since Linus did not trust the FSF. Bob -- Bob Friesenhahn

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Bob Friesenhahn
highly unlikely. Also, that the zil is not read back unless the system is improperly shut down. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Bob Friesenhahn
rom the original zfs design. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] Native ZFS for Linux

2010-06-15 Thread Bob Friesenhahn
to link a computer containing GPLed code to the Internet. I think I heard on usenet or a blog that it was illegal to link GPLed code with non-GPLed code. The Internet itself is obviously a derived work and is therefore subject to the GPL. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us

Re: [zfs-discuss] SSDs adequate ZIL devices?

2010-06-15 Thread Bob Friesenhahn
response latency). Battery-backed RAM in the adaptor card or storage array can do almost as well as the SSD as long as the amount of data does not overrun the limited write cache. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick

Re: [zfs-discuss] SSDs adequate ZIL devices?

2010-06-16 Thread Bob Friesenhahn
oller acks need to be ultimately delivered to the disk or else there WILL be data loss. The RAID controller should not purge its own record until the disk reports that it has flushed its cache. Once the RAID controller's cache is full, then it should start stalling writes. Bob -- Bob F

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-19 Thread Bob Friesenhahn
? No. Normally you would not want to use an MLC SSD as a slog. Even the SLC SSDs wear out quicker than one would like under heavy repeated writes. Over-provisioning the slog SSD storage size should help. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users
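
One hedged way to over-provision is to hand zfs only a small slice of the SSD (sizes and device names illustrative):

  # after creating a small slice (e.g. 4 GB of a 32 GB SSD) with format(1M):
  zpool add tank log c2t0d0s0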

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-20 Thread Bob Friesenhahn
or thought. More food! The MLC drives seem to usually have more write latency than the SLC drives. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Bob Friesenhahn
tem should be stuffed with RAM first as long as the budget can afford it. Luckily, most servers experience mostly repeated reads. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Graphics

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Bob Friesenhahn
help and result in more vdevs in the pool. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Bob Friesenhahn
to be lost. Data would only be recovered up to the first point of loss, even though some newer data is still available on a different SSD. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Graphics
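
That risk is why a mirrored slog is the usual suggestion (device names illustrative):

  # mirror the log devices rather than striping them
  zpool add tank log mirror c2t0d0 c2t1d0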

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Bob Friesenhahn
eans) if you study how it works in sufficient detail. At least that is what we have been told. The slog does not do micro-striping, nano-striping, pico-striping, or femto-striping (not even at the sub-bit level) but it does do mega-striping. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us

Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Bob Friesenhahn
around without first exporting the pool is something which is best avoided. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Bob Friesenhahn
is comprised entirely of FLASH SSDs. This should help with the IOPS, particularly when reading. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-27 Thread Bob Friesenhahn
have suggested, perhaps you should try FreeBSD? As long as the hardware supports 64-bits, I definitely second that suggestion. FreeBSD is often severely underestimated by those who have never used it. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users

Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 126

2010-06-30 Thread Bob Friesenhahn
you specify in the Bacula config file how long you want to keep the tapes around. So it really comes down to your use-case.

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-30 Thread Bob Friesenhahn
On Wed, 30 Jun 2010, Fred Liu wrote: Any duration limit on the supercap? How long can it sustain the data? A supercap on a SSD drive only needs to sustain the data until it has been saved (perhaps 10 milliseconds). It is different than a RAID array battery. Bob -- Bob Friesenhahn bfrie

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Bob Friesenhahn
y being in cache. A quite busy system can still report very little via 'zpool iostat' if it has enough RAM to cache the requested data. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,
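
A sketch of checking whether reads are reaching the pool at all (pool name illustrative):

  # pool-level I/O every 5 seconds; near-zero reads under a busy workload
  # suggest the working set is being served from the ARC
  zpool iostat tank 5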

Re: [zfs-discuss] Cache flush (or the lack of such) and corruption

2010-07-10 Thread Bob Friesenhahn
tremendously smaller than today's zfs storage pools, and they might even be on just one disk. Regardless, only someone with severely failing memory might think that "old-time filesystems" are somehow less failure prone than a zfs storage pool. The "good old days"

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-11 Thread Bob Friesenhahn
used for synchronous writes, and a local file copy is not normally going to use synchronous writes. Also, even if the slog was used, it gets emptied pretty quickly. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Bob Friesenhahn
compelled in any way to offer a license for use of the patent. Without a patent license, shipping products can be stopped dead in their tracks. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http

Re: [zfs-discuss] Encryption?

2010-07-14 Thread Bob Friesenhahn
not quite necessary to fork and create a truly independent distribution just yet. OpenSolaris is currently not left any more in the lurch than Solaris 10 is, since paying Solaris 10 users are still wondering what happened to the U9 release which Sun had already promised. Bob -- Bob Friesen

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Bob Friesenhahn
ally leaner but FreeBSD plus zfs is not (yet) as memory efficient as Solaris. Solaris and zfs do the Vulcan mind-meld when it comes to memory but FreeBSD does not. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Main
