Re: [zfs-discuss] Small stalls preventing rsync from holding network saturation every 5 seconds

2010-05-31 Thread Bob Friesenhahn
s you are using, but I think that there are recent updates to Solaris 10 and OpenSolaris which are supposed to solve this problem (zfs blocking access to CPU by applications). From Solaris 10 x86 (kernel 142901-09): 6586537 async zio taskqs can block out userland commands. Bob

Re: [zfs-discuss] zfs send/recv reliability

2010-05-28 Thread Bob Friesenhahn
traditional backup tools like tar, cpio, etc…? The whole stream will be rejected if a single bit is flipped. Tar and cpio will happily barge on through the error. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

Re: [zfs-discuss] creating a fast ZIL device for $200

2010-05-26 Thread Bob Friesenhahn
early enough that it will always be there when needed. The profiling application might need to drive a disk for several hours (or a day) in order to fully understand how it behaves. Remapped failed sectors would cause this micro-timing to fail, but only for the remapped sectors. Bob

Re: [zfs-discuss] questions about zil

2010-05-25 Thread Bob Friesenhahn
Perhaps if you were talking about a maximum ambient temperature specification or maximum allowed elevation, then a maximum specification makes sense. Perhaps the device is fine (I have no idea) but these posted specifications are virtually useless. Bob

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-25 Thread Bob Friesenhahn
che, even if it is just a 4K working buffer representing one SSD erasure block. Bob

Re: [zfs-discuss] questions about zil

2010-05-25 Thread Bob Friesenhahn
der extremely ideal circumstances. It seems that toilet paper may be of much more practical use than these specifications. In fact, I reject them as being specifications at all. The Apollo reentry vehicle was able to reach amazing speeds, but only for a single use. Bob

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Bob Friesenhahn
is necessarily best for this. Perhaps the load issued to these two disks contains more random access requests. Bob

Re: [zfs-discuss] New SSD options

2010-05-22 Thread Bob Friesenhahn
lying power to the drive before the capacitor is sufficiently charged, and some circuitry which shuts off the flow of energy back into the power supply when the power supply shuts off (could be a silicon diode if you don't mind the 0.7 V drop). Bob

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-22 Thread Bob Friesenhahn
adding this to your /etc/system file: * Set device I/O maximum concurrency * http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29 set zfs:zfs_vdev_max_pending = 5 Bob
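
Laid out as it would appear in /etc/system, the tuning quoted above reads:

```
* Set device I/O maximum concurrency
* http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
set zfs:zfs_vdev_max_pending = 5
```

Lines beginning with `*` are comments in /etc/system, and the change takes effect after a reboot.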

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
en be overwritten. Bob

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
depends on plenty of already erased blocks) would stop once the spare space in the SSD has been consumed. Bob

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn
he vendor has historically published reliable specification sheets. This may not be the same as money in the bank, but it is better than relying on thoughts from some blog posting. Bob

Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-21 Thread Bob Friesenhahn
blind assumptions in the above. The only good choice for ZIL is when you know for a certainty, not when you rely on assumptions based on 3rd-party articles and blog postings. Otherwise it is like assuming that if you jump out an open window there will be firemen down below to catch you. Bob

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Bob Friesenhahn
e maximum number of files per directory all make a difference to how much RAM you should have for good performance. If you have 200TB of stored data, but only actually access 2GB of it at any one time, then the caching requirements are not very high. Bob

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread Bob Friesenhahn
;re not going to get this if any part of the log device is at the other side of a WAN. So either add a mirror of log devices locally and not across the WAN, or don't do it at all. This depends on the nature of the WAN. The WAN latency may still be relatively low as compared with drive lat

Re: [zfs-discuss] Very serious performance degradation

2010-05-18 Thread Bob Friesenhahn
would look just as bad. Regardless, the first step would be to investigate 'sd5'. If 'sd4' is also a terrible performer, then resilvering a disk replacement of 'sd5' may take a very long time. Use 'iostat -xen' to obtain more information

Re: [zfs-discuss] Very serious performance degradation

2010-05-18 Thread Bob Friesenhahn
are using, and the controller type. It seems likely that one or more of your disks have been barely working since the time of initial installation. Even 20 MB/s is quite slow. Use 'iostat -x 30' with an I/O load to see if one disk is much slower than the others. Bob
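
A rough way to automate that triage is to parse captured 'iostat -x' rows and flag any disk whose average service time is far above the healthiest disk's. This is an illustrative sketch only: the column selection, device names, and numbers below are invented assumptions, not output from the poster's system.

```python
# Hypothetical pre-parsed 'iostat -x' rows: (device, reads/s, writes/s, svc_t ms).
# All numbers are invented for illustration.
rows = [
    ("sd0", 120.0, 80.0, 6.2),
    ("sd1", 118.0, 82.0, 5.9),
    ("sd4", 14.0, 9.0, 310.5),
    ("sd5", 12.0, 8.0, 402.7),
]

def suspect_disks(rows, factor=5.0):
    """Flag devices whose service time exceeds `factor` times the best disk's.

    Using the minimum as the healthy baseline keeps the check meaningful
    even when several disks are slow at once.
    """
    baseline = min(svc for _, _, _, svc in rows)
    return [dev for dev, _, _, svc in rows if svc > factor * baseline]

print(suspect_disks(rows))  # the two slow disks stand out
```

With the sample rows above, only the two disks with service times hundreds of times the baseline are reported.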

Re: [zfs-discuss] Storage 7410 Flush ARC for filebench

2010-05-12 Thread Bob Friesenhahn
nting the filesystem is sufficient. For my own little cpio-based "benchmark", umount/mount was sufficient to restore uncached behavior. Bob

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-10 Thread Bob Friesenhahn
I certainly would not want to use a system which is in this dire condition. Bob

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Bob Friesenhahn
things up again by not caching file data via the "unified page cache" and using a specialized ARC instead. It seems that simple paging and MMU control was found not to be smart enough. Bob

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Bob Friesenhahn
/proc/sys/vm/swappiness. Anyone that knows how this is tuned in osol, btw? While this is the zfs-discuss list, usually we are talking about Solaris/OpenSolaris here rather than "most OSes". No? Bob

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-08 Thread Bob Friesenhahn
ap enabled, the kernel is given another degree of freedom, to choose which is colder: idle process memory, or cold cached files. Are you sure about this? It is always good to be sure ...

Re: [zfs-discuss] ZFS root ARC memory usage on VxFS system...

2010-05-07 Thread Bob Friesenhahn
pps that need memory, but is vxfs asking for memory in a way that zfs is pushing it into the corner? Zfs does not push very hard (although it likes to consume). Bob

Re: [zfs-discuss] Disks configuration for a zfs based samba/cifs file server

2010-05-07 Thread Bob Friesenhahn
be even better. The good news is that if you do slice your disks, you can change your mind in the future if/when you add more disks. Bob

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
the number of simultaneous requests. Currently available L2ARC SSD devices are very good with a high number of I/Os, but they are quite a bottleneck for bulk reads as compared to L1ARC in RAM. Bob

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
ful to me. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Bob Friesenhahn
e far different. Solaris 10 can then play catch-up with the release of U9 in 2012. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-05 Thread Bob Friesenhahn
percolate down to Solaris 10 so I don't feel as left out as Richard would like me to feel. From a zfs standpoint, Solaris 10 does not seem to be behind the currently supported OpenSolaris release. Bob

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Bob Friesenhahn
h 'zpool status' does not mention it and it says the disk is resilvered. The flashing lights annoyed me so I exported and imported the pool and then the flashing lights were gone. Bob

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Bob Friesenhahn
s quite doable since it is the normal case, as when a system is installed onto a mirror pair of disks. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
t like in OpenSolaris. Solaris 10 Live Upgrade is dramatically improved in conjunction with zfs boot. I am not sure how far behind it is from OpenSolaris' new boot administration tools, but under zfs its function cannot be terribly different. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
90-day eval entitlement) has expired. As a result, it is wise for Solaris 10 users to maintain a local repository of licensed patches in case their service contract should expire. Bob

Re: [zfs-discuss] Default 'zpool' I want to move it to my new raidz pool 'gpool' how?

2010-05-02 Thread Bob Friesenhahn
ld be somewhat problematic. Once the new drive is functional, you can detach the failing one. Make sure that GRUB will boot from the new drive. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
ill usually hit the (old) hardware less severely than scrub. Resilver does not have to access any of the redundant copies of data or metadata, unless they are the only remaining good copy. Bob

Re: [zfs-discuss] Default 'zpool' I want to move it to my new raidz pool 'gpool' how?

2010-05-02 Thread Bob Friesenhahn
On Sun, 2 May 2010, Roy Sigurd Karlsbakk wrote: Any guidance on how to do it? I tried to do zfs snapshot You can't boot off raidz. That's for data only. Unless you use FreeBSD ... Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
s in return for wisdom. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-05-02 Thread Bob Friesenhahn
flaws, become dominant. The human factor is often the most dominant when it comes to data loss: most data loss problems we see reported here are due to human error or hardware design flaws. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-01 Thread Bob Friesenhahn
all", then you would be ok. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-05-01 Thread Bob Friesenhahn
ragile and easily broken. It is necessary to look at all the factors which might result in data loss before deciding what the most effective steps are to minimize the probability of loss. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-01 Thread Bob Friesenhahn
Then you would update to the minimum version required to support that feature. Note that if the default filesystem version changes and you create a new filesystem, this may also cause problems (I have been bit by that before). Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-04-30 Thread Bob Friesenhahn
re is more than one level of redundancy, scrubs are not really warranted. With just one level of redundancy it becomes much more important to verify that both copies were written to disk correctly. Bob

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-04-30 Thread Bob Friesenhahn
that you will be able to reinstall the OS and achieve what you had before. An exact recovery method (dd of partition images or recreate pool with 'zfs receive') seems like the only way to be assured of recovery moving forward. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Bob Friesenhahn
l for discovering bit-rot in singly-redundant pools. Bob

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Bob Friesenhahn
many hours. I would rather have the disk finish resilvering before I have the chance to replace the bad disk than risk more disks failing before it had a chance to resilver. Would your opinion change if the disks you used took 7 days to resilver? Bob

Re: [zfs-discuss] backwards/forward compatibility

2010-04-28 Thread Bob Friesenhahn
possible to create an older pool version using newer software. Of course, it is also necessary to make sure that any created filesystems are a sufficiently low version. Bob

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
made that smaller, less capable simplex-routed shelves may be a more cost-effective and reliable solution when used carefully with zfs. For example, mini-shelves which support 8 drives each. Bob

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
uld remain alive. Likewise, there is likely more I/O bandwidth available if the vdevs are spread across controllers. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-04-28 Thread Bob Friesenhahn
become IOPS bound. It is true that all HDDs become IOPS bound, but the mirror configuration offers more usable IOPS, and therefore the user waits less time for their request to be satisfied. Bob
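
The IOPS argument can be put in back-of-envelope numbers. Assuming 12 disks at roughly 100 random IOPS each (both figures invented for illustration), one wide raidz2 vdev behaves like about a single disk for small random reads, while six 2-way mirrors can serve reads from either side of each mirror:

```python
# Back-of-envelope random-read IOPS model; all figures are illustrative
# assumptions, not measurements.
DISK_IOPS = 100   # small random reads per second per spindle
DISKS = 12

# One wide raidz2 vdev: roughly a single disk's worth of random-read IOPS.
raidz2_pool_iops = 1 * DISK_IOPS

# Six 2-way mirrors: each vdev can serve a read from either disk.
mirror_pool_iops = (DISKS // 2) * 2 * DISK_IOPS

print(raidz2_pool_iops, mirror_pool_iops)
```

Even as a crude model, the order-of-magnitude gap is why a mirrored pool stays responsive under scrub while a wide raidz2 pool does not.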

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
immediately when the transfer starts? Please do me a favor and check this for me. Thanks, Bob

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
orm of replace like 'zpool replace tank c1t1d0 c1t1d7'. If I understand things correctly, this allows you to replace one good disk with another without risking the data in your pool. Bob

Re: [zfs-discuss] Performance drop during scrub?

2010-04-27 Thread Bob Friesenhahn
disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one based on RAIDz2. Bob

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Bob Friesenhahn
there is space for the new drive, it is not necessary to degrade the array and lose redundancy while replacing a device. As long as you can physically add a drive to the system (even temporarily) it is not necessary to deliberately create a fault situation. Bob

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-26 Thread Bob Friesenhahn
and the type of drives used. All types of drives fail, but typical SATA drives fail more often. Bob

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Bob Friesenhahn

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Bob Friesenhahn
Sweet! My primary thought is that your working set is currently smaller than the available RAM. Notice that this particular 'zpool iostat' is not showing any reads. Bob

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Bob Friesenhahn
will still be available. You are not going to get this using zfs. Zfs spreads the file data across all of the drives in the pool. You need to have vdev redundancy in order to be able to lose a drive without losing the whole pool. Bob

Re: [zfs-discuss] Data movement across filesystems within a pool

2010-04-24 Thread Bob Friesenhahn
space for double the data, then I recommend creating the destination directory and then using this in the source directory: find . -depth -print | cpio -pdum /destdir Using something like 'mv' would be dreadfully (even) slower. Thankfully, zfs is pretty quick at recursively removing
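
For readers more comfortable in Python, a rough analog of the cpio pass-through copy (preserving modes and timestamps, as 'cpio -pdum' does) might look like the sketch below. It is an illustration, not a replacement for the one-liner above.

```python
import os
import shutil

def copy_tree(src, dst):
    """Replicate src under an existing dst, preserving file metadata,
    roughly like `find . -depth -print | cpio -pdum /destdir`."""
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            # copy2 preserves permissions and timestamps
            shutil.copy2(os.path.join(root, name), os.path.join(target, name))
        shutil.copystat(root, target)  # directory modes/times too
```

Unlike a cross-filesystem 'mv', this leaves the source intact; removing the source is then a separate (and, on zfs, fast) step.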

Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Bob Friesenhahn

Re: [zfs-discuss] Severe Problems on ZFS server

2010-04-22 Thread Bob Friesenhahn
ces at a time (2GB allocation each) on my system with no problem at all. Bob

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-22 Thread Bob Friesenhahn
would also need to verify that this feature is not protected by a patent. Bob

[zfs-discuss] Solaris 10 zfs updates

2010-04-21 Thread Bob Friesenhahn
not included. Bob

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-20 Thread Bob Friesenhahn
orthy as your own temporary yahoo.com email account. Next someone with a yahoo.com email account will be posting that Ford no longer supports round tires on their trucks. Statements to the contrary will not be accepted unless they come from a @ford.com address. Bob

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
" (with FLASH backup) will behave dramatically differently than a typical SSD. If the SSD employed supports sufficient IOPS and bandwidth, then adding more will not help since it is not the bottleneck. Bob

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
the underlying store is concerned) in order to assure proper ordering. This would result in a very high TXG issue rate. Pool fragmentation would be increased. I am sure that someone will correct me if this is wrong. Bob

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
utate what really happens on your system before you invest in extra hardware. Bob

Re: [zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Bob Friesenhahn
convert a mirror vdev into a single-disk vdev. This means that you can upgrade your simple "stripe" into a stripe of mirrors. Bob

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
es or decreases the available data flow to the network. Bob

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
different system, then the zpool.cache file generated on that system will be different due to differing device names and a different host name. Bob

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
ster since it should have lower latency than a FLASH SSD drive. However, it may have some bandwidth limits on its interface. Bob

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
ethernet? This seems to be the test of the day. time tar jxf gcc-4.4.3.tar.bz2 I get 22 seconds locally and about 6-1/2 minutes from an NFS client. Bob

Re: [zfs-discuss] SSD sale on newegg

2010-04-18 Thread Bob Friesenhahn
No SSDs are used for the intent log. The StorageTek 2540 seems to offer 330MB of battery-backed cache per controller. Bob

Re: [zfs-discuss] zfs snapshots and rsync

2010-04-17 Thread Bob Friesenhahn
desired behavior: --inplace --no-whole-file Bob
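
Those two rsync flags matter against a snapshotted zfs target: rewriting a whole file makes every block diverge from the snapshot, while updating only the changed spans keeps the copy-on-write cost proportional to the real change. A toy sketch of that idea follows; the block size and file handling are arbitrary choices for illustration, not rsync's actual delta algorithm.

```python
BLOCK = 4096  # arbitrary span size for the sketch

def patch_in_place(src_path, dst_path):
    """Overwrite only the BLOCK-sized spans of dst that differ from src.

    Returns the number of spans rewritten; untouched spans remain shared
    with any snapshot of dst."""
    changed = 0
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        offset = 0
        while True:
            new = src.read(BLOCK)
            if not new:
                dst.truncate(offset)  # src ended; drop any leftover tail
                break
            dst.seek(offset)
            if dst.read(len(new)) != new:
                dst.seek(offset)
                dst.write(new)
                changed += 1
            offset += len(new)
    return changed
```

A one-block change in a large file then costs one rewritten block of snapshot space rather than the whole file.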

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Bob Friesenhahn
ly to cache flush requests. The intent log is flushed frequently. Previously some have reported (based on testing) that the X-25E does not flush the write cache reliably when it is enabled. It may be that some X-25E versions work better than others. Bob

Re: [zfs-discuss] Secure delete?

2010-04-16 Thread Bob Friesenhahn
On Fri, 16 Apr 2010, Eric D. Mudama wrote: On Fri, Apr 16 at 10:05, Bob Friesenhahn wrote: It is much more efficient (from a housekeeping perspective) if filesystem sectors map directly to SSD pages, but we are not there yet. How would you stripe or manage a dataset across a mix of devices

Re: [zfs-discuss] Secure delete?

2010-04-16 Thread Bob Friesenhahn
erformance advantages of TRIM. Bob

Re: [zfs-discuss] Secure delete?

2010-04-16 Thread Bob Friesenhahn
as to do some heavy lifting. It has to keep track of many small "holes" in the FLASH pages. This seems pretty complicated since all of this information needs to be well-preserved in non-volatile storage. Bob

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bob Friesenhahn
s. This is also the reason why zfs encourages that all vdevs use the same organization. Bob

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bob Friesenhahn
are the new 4K sector variety, even you might notice. Bob

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-13 Thread Bob Friesenhahn
how they behave. If they behave well, then destroy the temporary pool and add the drives to your main pool. Bob

Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Bob Friesenhahn
ock to the device. This reduces the probability that the FLASH device will need to update an existing FLASH block. COW increases the total amount of data written, but it also reduces the FLASH read/update/re-write cycle. Bob

Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Bob Friesenhahn
e becomes a pervasive part of the operating system. Every article I have read about the value of TRIM is pure speculation. Perhaps it will be found that TRIM has more value for SAN storage (to reclaim space for accounting purposes) than for SSDs. Bob

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Bob Friesenhahn
default? Please? There are some situations where many reports may be sent per second so it is not necessarily a wise idea for this to be enabled by default. Bob

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
e (as used by media experts) when discussing the feature on a Solaris list. ;-) Bob

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
intermittent benchmark. Of course, the background TRIM commands might clog other ongoing operations, so it might hurt the benchmark. Bob

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
h smarts or do very good wear leveling, and these devices might benefit from the Windows 7 TRIM command. Bob

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
grade devices like USB dongles and compact flash. Bob

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Bob Friesenhahn
On Sat, 10 Apr 2010, Harry Putnam wrote: Would you mind expanding the abbrevs: ssd zil 12arc? SSD = Solid State Device ZIL = ZFS Intent Log (log of pending synchronous writes) L2ARC = Level 2 Adaptive Replacement Cache Bob

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Bob Friesenhahn
On Sat, 10 Apr 2010, Bob Friesenhahn wrote: Since he is already using mirrors, he already has enough free space since he can move one disk from each mirror to the "main" pool (which unfortunately, can't be the boot 'rpool' pool), send the data, and then move the sec

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Bob Friesenhahn
ost convenient in the long run for the root pool to be physically separate from the data storage though. Bob

Re: [zfs-discuss] Sync Write - ZIL log performance - Feedback for ZFS developers?

2010-04-10 Thread Bob Friesenhahn
. However, the maximum synchronous bulk write rate will still be limited by the bandwidth of your intent log devices. Huge synchronous bulk writes are pretty rare since usually the bottleneck is elsewhere, such as the ethernet. Bob
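A back-of-envelope check of which side sets the ceiling; the throughput figures below are assumptions for illustration (a single slog SSD at roughly 200 MB/s versus gigabit ethernet at roughly 117 MB/s), not measurements from the thread:

```shell
# Compare assumed sustained write rates to find the sync-write bottleneck.
SLOG_MBS=200   # assumed slog (intent log) device bandwidth, MB/s
NET_MBS=117    # assumed gigabit ethernet payload rate, MB/s

if [ "$NET_MBS" -lt "$SLOG_MBS" ]; then
    echo "ceiling set by the network: ${NET_MBS} MB/s"
else
    echo "ceiling set by the slog: ${SLOG_MBS} MB/s"
fi
```

With these assumed numbers the network is the limit, matching Bob's point that the bottleneck is usually elsewhere.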

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-10 Thread Bob Friesenhahn
, even after reboots. Is anyone willing to share what zfs version will be included with Solaris 10 U9? Will graceful intent log removal be included? Bob

Re: [zfs-discuss] about backup and mirrored pools

2010-04-10 Thread Bob Friesenhahn
Bob

Re: [zfs-discuss] about backup and mirrored pools

2010-04-09 Thread Bob Friesenhahn
X the performance. Luckily, since you are using mirrors, you can easily migrate disks from your existing extra pools to the coalesced pool. Just make sure to scrub first in order to have confidence that there won't be data loss. Bob
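The pre-migration scrub Bob recommends might look like this, with "extra" standing in for whichever source pool is about to lose a mirror side (pool name is an example, not from the thread):

```shell
# Verify every block's checksum on the source pool before breaking mirrors.
zpool scrub extra

# Poll until the scrub finishes; look for "scrub repaired 0" and no errors
# before detaching any disks.
zpool status -v extra
```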

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Bob Friesenhahn
that reads/writes are temporarily stalled during part of the TXG write cycle. Bob

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Bob Friesenhahn
ow, in the Real Near Future when we have 1TB+ SSDs that are 1cent/GB, well, then, it will be nice to swap up. But not until then... I don't see that happening any time soon. FLASH is close to hitting the wall on device geometries and tri-level and quad-level only gets you so far. A ne

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Bob Friesenhahn
s. I did have to apply a source patch to the FreeBSD kernel the last time around. Bob

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Bob Friesenhahn
for someone to produce binaries. Bob

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Bob Friesenhahn
this, raidz2 should be seen as a way to improve storage efficiency and data reliability, and not so much as a way to improve sequential performance. Bob
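The storage-efficiency point is simple arithmetic: a raidz2 vdev of N disks yields (N - 2)/N of raw capacity, since two disks' worth of space goes to parity. A quick sketch with illustrative numbers (8 disks of 1000 GB, not figures from the thread):

```shell
# Usable capacity of a raidz2 vdev: (disks - 2 parity) * disk size.
DISKS=8
PARITY=2        # raidz2 dedicates two disks' worth of space to parity
SIZE_GB=1000

USABLE=$(( (DISKS - PARITY) * SIZE_GB ))
RAW=$(( DISKS * SIZE_GB ))
echo "raidz2 of ${DISKS} disks: ${USABLE} GB usable of ${RAW} GB raw"
```

For comparison, mirroring the same eight disks would leave only 4000 GB usable, which is why raidz2 wins on efficiency even though mirrors win on random-read performance.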

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Bob Friesenhahn
ber of vdevs if you go for the two vdev solution. With two vdevs and four readers, there will have to be disk seeking for data even if the data is perfectly sequentially organized. Bob

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Bob Friesenhahn
tions to reduce its effect. Bob
