Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-16 Thread Edward Ned Harvey
From: Daniel Carosone [mailto:d...@geek.com.au] Sent: Thursday, June 16, 2011 10:27 PM Is it still the case, as it once was, that allocating anything other than whole disks as vdevs forces NCQ / write cache off on the drive (either or both, forget which, guess write cache)? I will only

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-17 Thread Edward Ned Harvey
From: Daniel Carosone [mailto:d...@geek.com.au] Sent: Thursday, June 16, 2011 10:27 PM Is it still the case, as it once was, that allocating anything other than whole disks as vdevs forces NCQ / write cache off on the drive (either or both, forget which, guess write cache)? I will only

Re: [zfs-discuss] # disks per vdev

2011-06-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Lanky Doodle or is it completely random leaving me with some trial and error to work out what disk is on what port? It's highly desirable to have drives with lights on them. So you can

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-17 Thread Edward Ned Harvey
From: Daniel Carosone [mailto:d...@geek.com.au] Sent: Thursday, June 16, 2011 11:05 PM the [sata] channel is idle, blocked on command completion, while the heads seek. I'm interested in proving this point. Because I believe it's false. Just hand waving for the moment ... Presenting the

Re: [zfs-discuss] # disks per vdev

2011-06-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marty Scholes On a busy array it is hard even to use the leds as indicators. Offline the disk. Light stays off. Use dd to read the disk. Light stays on. That should make it easy enough.
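A minimal sketch of the trick described above (pool and device names are hypothetical):

    # Offline the suspect disk: its activity light goes dark on a busy array
    zpool offline tank c1t2d0
    # Stream reads from the raw device: the one light that stays lit is your disk
    dd if=/dev/rdsk/c1t2d0s0 of=/dev/null bs=1024k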

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-19 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] Sent: Saturday, June 18, 2011 7:47 PM Actually, all of the data I've gathered recently shows that the number of IOPS does not significantly increase for HDDs running random workloads. However the response time does :-( Could you

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] Sent: Sunday, June 19, 2011 11:03 AM I was planning, in the near future, to go run iozone on some system with, and without the disk cache enabled according to format -e. If my hypothesis is right, it shouldn't significantly affect
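For reference, the per-disk write cache toggle mentioned here lives in format's expert mode; an interactive session looks roughly like this (disk selection from the menu omitted):

    format -e              # pick the disk, then:
    format> cache
    cache> write_cache
    write_cache> display   # show whether the on-disk write cache is enabled
    write_cache> enable    # or: disable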

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-22 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave U.Random My personal preference, assuming 4 disks, since the OS is mostly reads and only a little bit of writes, is to create a 4-way mirrored 100G partition for the OS, and the

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Nomen Nescio Hello Bob! Thanks for the reply. I was thinking about going with a 3 way mirror and a hot spare. But I don't think I can upgrade to larger drives unless I do it all at once, is

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave U.Random If I am going to make a new install of Solaris 10 does it give me the option to slice and dice my disks and to issue zpool commands? No way that I know of, to install Solaris

Re: [zfs-discuss] replace zil drive

2011-06-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Carsten John Now I'm wondering about the best option to replace the HDD with the SSD: What version of zpool are you running? If it's >= 19, then you could actually survive a complete ZIL
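Pool version 19 is the one that added log-device removal, which is what makes losing (or swapping) a slog survivable; a hedged sketch with hypothetical names:

    zpool get version tank        # confirm the pool is at version 19 or later
    zpool add tank log c2t0d0     # add the SSD as a dedicated log device
    zpool remove tank c2t0d0      # v19+: a slog can be removed again outright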

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-07-02 Thread Edward Ned Harvey
From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Friday, June 17, 2011 9:48 PM The on-disk buffer is there so data is ready when the hard drive head lands, without it the drive's average rotational latency will trend higher due to missed landings because the data wasn't in buffer at the

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-07-02 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey Conclusion: Yes it matters to enable the write_cache. Now the question of whether or not it matters to use the whole disk versus partitioning, and how to enable

Re: [zfs-discuss] Changed to AHCI, can not access disk???

2011-07-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Orvar Korvar Here is my problem: I have an 1.5TB disk with OpenSolaris (b134, b151a) using non AHCI. I then changed to AHCI in BIOS, which results in severe problems: I can not boot the

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-09 Thread Edward Ned Harvey
Given the abysmal performance, I have to assume there is a significant number of overhead reads or writes in order to maintain the DDT for each actual block write operation. Something I didn't mention in the other email is that I also tracked iostat throughout the whole operation. It's all

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey When you read back duplicate data that was previously written with dedup, then you get a lot more cache hits, and as a result, the reads go faster.  Unfortunately

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-09 Thread Edward Ned Harvey
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net] Sent: Saturday, July 09, 2011 2:33 PM Could you test with some SSD SLOGs and see how well or bad the system performs? These are all async writes, so slog won't be used. Async writes that have a single fflush() and fsync() at the end

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey --- Performance loss: I ran one more test, that is rather enlightening. I repeated test #2 (tweak arc_meta_limit, use the default primarycache=all) but this time I wrote

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-10 Thread Edward Ned Harvey
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net] Sent: Saturday, July 09, 2011 3:44 PM Could you test with some SSD SLOGs and see how well or bad the system performs? These are all async writes, so slog won't be used. Async writes that have a single fflush() and fsync() at

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-10 Thread Edward Ned Harvey
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net] Sent: Saturday, July 09, 2011 3:44 PM Sorry, my bad, I meant L2ARC to help buffer the DDT Also, bear in mind, the L2ARC is only for reads. So it can't help accelerate writing updates to the DDT. Those updates need to hit the pool,

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov By the way, did you estimate how much dedup's overhead is in terms of metadata blocks? For example it was often said on the list that you shouldn't bother with dedup unless you

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Edward Ned Harvey
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Tuesday, July 12, 2011 9:58 AM You know what? A year ago I would have said dedup still wasn't stable enough for production. Now I would say it's plenty stable enough... But it needs performance enhancement before it's

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-14 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey I understand the argument, DDT must be stored in the primary storage pool so you can increase the size of the storage pool without running out of space to hold the DDT

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole This might be a stupid question, but here goes... Would adding, say, four 4GB or 8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared vol? I am
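For what it's worth, log devices can be mirrored when added, so the layout being asked about would look something like this (hypothetical USB device names):

    zpool add tank log mirror c4t0d0 c5t0d0   # mirrored slog built from two keys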

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Edward Ned Harvey
From: Tiernan OToole [mailto:lsmart...@gmail.com] Sent: Saturday, July 16, 2011 7:46 AM I have 2 500Gb internal drives and 2 300Gb USB drives. If I were to create 2 pools, a 300Gb and a 500Gb in each, and then mirror over them, would that work? Is it even possible? or what would you

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Well, in terms of mirroring over stripes, if any component of any stripe breaks, the whole half of the mirror is degraded. If another drive from another half also breaks,

[zfs-discuss] latest zpool version in solaris 11 express

2011-07-16 Thread Edward Ned Harvey
I recently installed a system, but it seems like the system update process isn't working right, or else I have wrong expectations. What I really want is to ensure I have the latest... It says zpool version 31 and zfs version 5. Can anybody please confirm or deny that this is the absolute

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-16 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Maurice R Volaski You need to point to the support repository and install a certificate. /usr/bin/pkg set-publisher -k /var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem \ -c
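The command above is truncated by the archive; a hedged reconstruction following the standard Oracle instructions of the time (certificate filename and repository URL are assumptions):

    /usr/bin/pkg set-publisher \
        -k /var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem \
        -c /var/pkg/ssl/Oracle_Solaris_11_Express_Support.certificate.pem \
        -O https://pkg.oracle.com/solaris/support/ solaris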

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov if the OP were so inclined, he could craft a couple of striped pools (300+500) and then make a ZFS pool over these two. Actually, you can't do that. You can't make a vdev from

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-18 Thread Edward Ned Harvey
From: Edward Ned Harvey [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] Intuitive. ;-) Thank you. Kidding aside, for anyone finding this thread at a later time, here's the answer. It sounds unnecessarily complex at first, but then I went through it ... Only took like a minute

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-18 Thread Edward Ned Harvey
From: Edward Ned Harvey [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] It says zpool version 31 and zfs version 5.  Can anybody please confirm or deny that this is the absolute latest version available to the public in any way? After applying all updates, it's still zpool 31

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-21 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Gordon Ross Anyone have experience with either one?  (good or bad) Opinions whether the lower capacity and higher cost of the SSD is justified in terms of performance for things like

[zfs-discuss] add device to mirror rpool in sol11exp

2011-07-22 Thread Edward Ned Harvey
In my new oracle server, sol11exp, it's using multipath device names... Presently I have two disks attached: (I removed the other 10 disks for now, because these device names are so confusing. This way I can focus on *just* the OS disks.) 0. c0t5000C5003424396Bd0

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-23 Thread Edward Ned Harvey
From: Ian Collins [mailto:i...@ianshome.com] Sent: Saturday, July 23, 2011 4:02 AM Can you provide more details of your tests? Here's everything: http://dl.dropbox.com/u/543241/dedup%20tests/dedup%20tests.zip In particular: Under the work server directory. The basic concept goes like

Re: [zfs-discuss] add device to mirror rpool in sol11exp

2011-07-23 Thread Edward Ned Harvey
From: Edward Ned Harvey [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] Disk 0 is the one where the OS is installed.  During installation, I opted to install the OS into a partition.  Now I'm trying to replicate the fdisk partition tables (and partition slice tables) onto

Re: [zfs-discuss] add device to mirror rpool in sol11exp

2011-07-23 Thread Edward Ned Harvey
From: Edward Ned Harvey [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] sudo zpool attach -f rpool ${firstdisk}s0 ${seconddisk}s0 I assume this is still important too: sudo installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${seconddisk}s0
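Pulling the pieces of this thread together, the usual recipe is: copy the slice table, attach the mirror, install grub. A sketch, with ${firstdisk}/${seconddisk} as in the quoted message:

    # replicate the slice table from the first disk onto the second
    prtvtoc /dev/rdsk/${firstdisk}s2 | sudo fmthard -s - /dev/rdsk/${seconddisk}s2
    # attach the second disk to the root pool and make it bootable
    sudo zpool attach -f rpool ${firstdisk}s0 ${seconddisk}s0
    sudo installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${seconddisk}s0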

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Orvar Korvar So boot is much quicker. Everyday use, I dont notice anything. Every application boots quick, and I dont think about application boot time anymore. OT but... At work, we

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-25 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins Add to that: if running dedup, get plenty of RAM and cache. Add plenty of RAM. And tweak your arc_meta_limit. You can at least get dedup performance that's on the same order of

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-25 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Erik Trimble Honestly, I think TRIM isn't really useful for anyone. I'm going to have to disagree. There are only two times when TRIM isn't useful: 1) Your demand of the system is

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-26 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha Shouldn't modern SSD controllers be smart enough already that they know: - if there's a request to overwrite a sector, then the old data on that sector is no longer needed

[zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Edward Ned Harvey
After a certain rev, I know you can set the sync property, and it takes effect immediately, and it's persistent across reboots. But that doesn't apply to Solaris 10. My question: Is there any way to make Disabled ZIL a normal mode of operations in solaris 10? Particularly: If I do

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Edward Ned Harvey
From: Darren J Moffat [mailto:darr...@opensolaris.org] Sent: Friday, August 05, 2011 10:14 AM echo "set zfs:zil_disable = 1" >> /etc/system This is a great way to cure /etc/system viruses :-) LOL! :-) Thank you.
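The two mechanisms this thread contrasts, side by side (pool/dataset names hypothetical):

    # Newer releases: per-dataset sync property, immediate and persistent
    zfs set sync=disabled tank/scratch
    # Solaris 10: global /etc/system tunable, takes effect at next boot
    echo "set zfs:zil_disable = 1" >> /etc/system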

Re: [zfs-discuss] Problem booting after zfs upgrade

2011-08-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins On 08/ 6/11 11:48 AM, stuart anderson wrote: After upgrading to zpool version 29/zfs version 5 on a S10 test system via the kernel patch 144501-19 it will now boot only as far

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Orvar Korvar Ok, so mirrors resilver faster. But, it is not uncommon that another disk shows problem during resilver (for instance r/w errors), this scenario would mean your entire raid is

Re: [zfs-discuss] Mirrored rpool

2011-08-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Lanky Doodle Before I go ahead and continue building my server (zpools) I want to make sure the above guide is correct for S11E? You should simply boot from each disk, while the other disk

Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins I am facing an issue with zfs destroy; it takes almost 3 hours to delete a snapshot of size 150G. Do you have dedup enabled? I have always found, zfs destroy takes some

Re: [zfs-discuss] zfs destory snapshot takes an hours.

2011-08-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bob Friesenhahn Unfortunately, if dedup was previously enabled, the damage was already done since dedup is baked into your pool. The situation may improve over time as dedup blocks are

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ray Van Dolson Are any of you using the Intel 320 as ZIL? It's MLC based, but I understand its wear and performance characteristics can be bumped up significantly by increasing the

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-13 Thread Edward Ned Harvey
From: Ian Collins [mailto:i...@ianshome.com] Sent: Friday, August 12, 2011 11:24 PM For ZIL, I suppose we could get the 300GB drive and overcommit to 95%! What kind of benefit does that offer? I suppose, if you have a 300G drive and the OS can only see 30G of it, then the drive can

Re: [zfs-discuss] Space usage

2011-08-14 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Freddie Cash zpool list output show raw disk usage, including all redundant copies of metadata, all redundant copies of data blocks, all redundancy accounted for (mirror, raidz), etc.
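The practical upshot: compare the two views on the same pool (pool name hypothetical):

    zpool list tank   # raw capacity; redundancy overhead is counted as used
    zfs list tank     # usable space, after mirror/raidz overhead is subtracted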

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Edward Ned Harvey
From: Ray Van Dolson [mailto:rvandol...@esri.com] Sent: Monday, August 15, 2011 12:26 PM On the Intel SSD 320 Series, the spare capacity reserved at the factory is 7% to 11% (depending on the SKU) of the full NAND capacity. For better random write performance and endurance, the

Re: [zfs-discuss] zfs send and zfs destroy at the same time

2011-08-29 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] What do you expect to happen if you're in progress doing a zfs send, and then simultaneously do a zfs destroy of the snapshot you're sending? It depends on the release. For modern implementations, a hold is placed on the snapshot and
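Those holds are visible with the user-hold commands that arrived alongside this behavior (snapshot name and hold tag hypothetical):

    zfs hold mybackup tank/fs@today      # place a user hold on the snapshot
    zfs holds tank/fs@today              # list holds; an in-flight send shows here
    zfs destroy tank/fs@today            # refused while any hold remains
    zfs release mybackup tank/fs@today   # drop the hold, destroy now succeeds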

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-29 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Daniel Carosone On Sat, Aug 27, 2011 at 08:44:13AM -0700, Richard Elling wrote: I'm getting a bit tired of people designing for fast resilvering. It is a design consideration, regardless,

Re: [zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-08-30 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jesus Cea 1. Is the L2ARC data stored in the SSD checksummed?. If so, can I expect that ZFS goes directly to the disk if the checksum is wrong?. Yup. 2. Can I import a POOL if one/both

Re: [zfs-discuss] zfs send and dedupe

2011-09-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard Elling For example, does 'zfs send -D' use the same DDT as the pool? No. Or does it require more memory for its own DDT, thus impacting performance of both? Yes, no. How

Re: [zfs-discuss] zfs send and dedupe

2011-09-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Freddie Cash Will be interesting to see whether or not -D works with ZFSv28 in FreeBSD 8-STABLE/9-BETA. And whether or not zfs send is faster/better/easier/more reliable than rsyncing

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Fred Liu For my carelessness, I added two disks into a raid-z2 zpool as normal data disk, but in fact I want to make them as zil devices. That's a huge bummer, and it's the main reason why

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
From: Fred Liu [mailto:fred_...@issi.com] Yeah, I also realized this when I send out this message. In NetApp, it is so easy to change raid group size. There is still a long way for zfs to go. Hope I can see that in the future. This one missing feature of ZFS, IMHO, does not result in a long

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
From: Krunal Desai [mailto:mov...@gmail.com] On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu fred_...@issi.com wrote: Yes. I have connected them back to server. But it does not help. I am really sad now... I'll tell you what does not help. This email. Now that you know what you're trying to

Re: [zfs-discuss] Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool

2011-09-23 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Raúl Valencia I must configure a small file server. It only has two disk drives, and they are (forcibly) destined to be used in a mirrored, hot-spare configuration. I think you just

Re: [zfs-discuss] Mirror Gone

2011-09-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tony MacDoodle Original:

      mirror-0  ONLINE  0 0 0
        c1t2d0  ONLINE  0 0 0
        c1t3d0  ONLINE  0 0 0
      mirror-1  ONLINE

Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Matt Banks Am I crazy for putting something like this into production using Solaris 10/11? On paper, it really seems ideal for our needs. Do you have an objection to solaris 10/11 for some

Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-28 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] Also, the default settings for the resilver throttle are set for HDDs. For SSDs, it is a good idea to change the throttle to be more aggressive. You mean... Be more aggressive, resilver faster? or Be more aggressive, throttling the
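A hedged sketch of what "more aggressive" tuning looked like in this era; the tunable names are assumptions from illumos-derived kernels of the time, poked live via mdb:

    # assumption: zfs_resilver_delay / zfs_resilver_min_time_ms exist on this build
    echo zfs_resilver_delay/W0t0 | mdb -kw           # remove the inter-I/O delay
    echo zfs_resilver_min_time_ms/W0t3000 | mdb -kw  # give resilver more time per txg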

Re: [zfs-discuss] S10 version question

2011-09-29 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins Got a quick question: what are the latest zpool and zfs versions supported in Solaris 10 Update 10? In update 10: pool version 29, ZFS version 5. I don't know what the other
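An easy way to check what any given kernel supports:

    zpool upgrade -v   # lists every pool version this release understands
    zfs upgrade -v     # same for filesystem versions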

Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-09-30 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha Does anyone know a good commercial zfs-based storage replication software that runs on Solaris (i.e. not an appliance, not another OS based on solaris)? Kinda like Amanda,

Re: [zfs-discuss] ZFS issue on read performance

2011-10-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of deg...@free.fr I'm not familiar with ZFS stuff, so I'll try to give you as much info as I can about our environment. We are using a ZFS pool as a VLS for a backup server (Sun V445 Solaris

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-13 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Cindy Swearingen In the steps below, you're missing a zpool import step. I would like to see the error message when the zpool import step fails. I see him doing this... # truss -t open

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-13 Thread Edward Ned Harvey
From: casper@oracle.com [mailto:casper@oracle.com] What is the partition table? He also said this... -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of John D Groenveld # zpool create foo c1t0d0

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-14 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I guess Richard was correct about the usecase description - I should detail what I'm thinking about, to give some illustration. After reading all this, I'm still unclear on what

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov The idea is you would dedicate one of the servers in the chassis to be a Solaris system, which then presents NFS out to the rest of the hosts.   Actually, I looked into a

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tim Cook In my example - probably not a completely clustered FS. A clustered ZFS pool with datasets individually owned by specific nodes at any given time would suffice for such VM farms.

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Toby Thain Hmm, of course the *latency* of Ethernet has always been much less, but I did not see it reaching the *throughput* of a single direct attached disk until gigabit. Nobody runs a

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Harry Putnam FreeNAS and freebsd. Maybe you can give a little synopsis of those too. I mean when it comes to utilizing zfs; is it much the same as if running it on solaris? For somebody

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Paul Kraus I have done a poor man's rebalance by copying data after adding devices. I know this is not a substitute for a real online rebalance, but it gets the job done (if you can take the
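A hedged sketch of that poor man's rebalance (dataset names hypothetical): the send/receive rewrites every block, which spreads the data across all vdevs, including the newly added ones:

    zfs snapshot tank/data@move
    zfs send tank/data@move | zfs receive tank/data.new   # rewrites every block
    zfs destroy -r tank/data                              # needs a downtime window
    zfs rename tank/data.new tank/data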

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tim Cook I had and have redundant storage, it has *NEVER* automatically fixed it.  You're the first person I've heard that has had it automatically fix it. That's probably just because it's

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bob Friesenhahn On Wed, 19 Oct 2011, Peter Jeremy wrote: Doesn't a scrub do more than what 'fsck' does? It does different things. I'm not sure about more. Zfs scrub validates user

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Edward Ned Harvey
From: Fajar A. Nugraha [mailto:w...@fajar.net] Sent: Tuesday, October 18, 2011 7:46 PM * In btrfs, there is no equivalent or alternative to zfs send | zfs receive Planned. No actual working implementation yet. In fact, I saw, actual work started on this task about a month ago. So it's

Re: [zfs-discuss] Stream versions in Solaris 10.

2011-10-20 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins I just tried sending from an oi151a system to a Solaris 10 backup server and the server barfed with "zfs_receive: stream is unsupported version 17". I can't find any

Re: [zfs-discuss] Growing CKSUM errors with no READ/WRITE errors

2011-10-20 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov new CKSUM errors are being found. There are zero READ or WRITE error counts, though. Should we be worried about replacing the ex-hotspare drive ASAP as well? You should not be

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Watzlavick What failure scenario could have caused this? The file was obviously initially good on the raidz because it got backed up to the USB drive and that matches the good

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-24 Thread Edward Ned Harvey
From: Robert Watzlavick [mailto:rob...@watzlavick.com] Sent: Sunday, October 23, 2011 4:36 PM Now on to find out why the 3 Acronis Backup files got modified. This is good news so far... I expect you'll find the same thing for Acronis. Acronis updates those individual files to make them

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Watzlavick I have two external USB hard drives that I use to back up the contents of the ZFS raidz on alternating months. The USB hard drives use EXT3 so they are connected to a

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of weiliam.hong 3. All 4 drives are connected to a single HBA, so I assume the mpt_sas driver is used. Are SAS and SATA drives handled differently ? If they're all on the same HBA, they may be

Re: [zfs-discuss] Log disk with all ssd pool?

2011-10-28 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Mark Wolek Still kicking around this idea and didn’t see it addressed in any of the threads before the forum closed. If one made an all ssd pool, would a log/cache drive just slow you

Re: [zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-29 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov summer, and came up with a new question. In short, is it possible to add restartability to ZFS SEND, for example Rather than building something new and special into the
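One way to approximate restartability with the existing tools, per the suggestion above: chain small incrementals so a failure only loses the current step (host and dataset names hypothetical):

    zfs send tank/fs@s1 | ssh backup zfs receive pool/fs           # full stream, once
    zfs send -i @s1 tank/fs@s2 | ssh backup zfs receive pool/fs    # small increments
    zfs send -i @s2 tank/fs@s3 | ssh backup zfs receive pool/fs    # restart here on failure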

Re: [zfs-discuss] Log disk with all ssd pool?

2011-11-01 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Karl Rossing On 10/28/2011 01:04 AM, Mark Wolek wrote: before the forum closed. Did I miss something? Yes. The forums no longer exist. It's only mailman email now.

Re: [zfs-discuss] Log disk with all ssd pool?

2011-11-01 Thread Edward Ned Harvey
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] I notice that the mail activity has diminished substantially since the forums were shut down. Apparently they were still in use. I'm sure nobody thought they were unused. I'm sure it was a cost saving measure. Jive forums start

[zfs-discuss] (OT) forums and email was RE: Log disk with all ssd pool?

2011-11-01 Thread Edward Ned Harvey
From: Daniel Carosone [mailto:d...@geek.com.au] On Tue, Nov 01, 2011 at 06:17:57PM -0400, Edward Ned Harvey wrote: You can do both poorly for free, or you can do both very well for big bucks. That's what opensolaris was doing. That mess was costing someone money and considered very well

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-11-03 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Lachlan Mulcahy I have been having issues with Solaris kernel based systems locking up and am wondering if anyone else has observed a similar symptom before. ... Dell R710 / 80G Memory

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-11-03 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Lachlan Mulcahy
* Recommendation from Sun (Oracle) to work around a bug:
* 6958068 - Nehalem deeper C-states cause erratic scheduling behavior
      set idle_cpu_prefer_mwait = 0
      set

Re: [zfs-discuss] Remove corrupt files from snapshot

2011-11-04 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of sbre...@hotmail.com However, snapshots seem to be read-only: Is there any way to force the file removal? You need to destroy the snapshot completely - But if you want to selectively delete
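The clone-and-promote route hinted at above, sketched with hypothetical names; the clone is writable even though the snapshot is not:

    zfs clone tank/fs@bad tank/fs.clean   # writable view of the snapshot's contents
    rm /tank/fs.clean/path/to/corrupt.file
    zfs promote tank/fs.clean             # @bad migrates to hang off the clone
    zfs destroy tank/fs                   # old fs (now a dependent clone) can go
    zfs destroy tank/fs.clean@bad         # finally drop the offending snapshot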

Re: [zfs-discuss] zpool scrub bad block list

2011-11-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Didier Rebeix from ZFS documentation it appears unclear to me if a zpool scrub will black list any found bad blocks so they won't be used anymore. If there are any physically bad

Re: [zfs-discuss] zfs sync=disabled property

2011-11-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Evaldas Auryla I'm trying to evaluate what are the risks of running NFS share of zfs dataset with sync=disabled property. The clients are vmware hosts in our environment and server is
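The setting itself is one line, which is what makes it tempting (dataset name hypothetical); the risk is that NFS COMMITs are acknowledged before the data reaches stable storage, so a server crash silently loses writes the clients believe are safe:

    zfs set sync=disabled tank/vmware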

Re: [zfs-discuss] Couple of questions about ZFS on laptops

2011-11-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov 1) Use a ZFS mirror of two SSDs - seems too pricey 2) Use a HDD with redundant data (copies=2 or mirroring over two partitions), and an SSD for L2ARC (+maybe ZIL) -

Re: [zfs-discuss] Data distribution not even between vdevs

2011-11-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ding Honghui But now, as show below, the first 2 raidz1 vdev usage is about 78% and the last 2 raidz1 vdev usage is about 93%. In this case, when you write, it should be writing to the first

Re: [zfs-discuss] zfs sync=disabled property

2011-11-09 Thread Edward Ned Harvey
From: Evaldas Auryla [mailto:evaldas.aur...@edqm.eu] Sent: Wednesday, November 09, 2011 8:55 AM I was thinking about STEC ZeusRAM, but unfortunately it's SAS only device, and it won't make into X4540 (SATA ports only), so another option could be STEC MACH16iops (50GB SLC SATA SSD). Perhaps

Re: [zfs-discuss] Data distribution not even between vdevs

2011-11-10 Thread Edward Ned Harvey
From: Gregg Wonderly [mailto:gregg...@gmail.com] There is no automatic way to do it. For me, this is a key issue. If there was an automatic rebalancing mechanism, that same mechanism would work perfectly to allow pools to have disk sets removed. It would provide the needed basic

Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jeff Savit Also, not a good idea for performance to partition the disks as you suggest. Not totally true. By default, if you partition the disks, then the disk write cache gets disabled.

Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of darkblue
    1 * XEON 5606
    1 * Supermicro X8DT3-LN4F
    6 * 4G RECC RAM
    22 * WD RE3 1T harddisk
    4 * Intel 320 (160G) SSD
    1 * Supermicro 846E1-900B chassis
I just want to say, this isn't

Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of darkblue Why would you want your root pool to be on the SSD? Do you expect an extremely high I/O rate for the OS disks? Also, not a good idea for performance to partition the disks as you
