Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Brent Jones
On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones br...@servuhome.net wrote: On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach stephan.bud...@jvm.de wrote: Brent, I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread przemolicc
On Mon, Dec 28, 2009 at 01:40:03PM -0800, Brad wrote: This doesn't make sense to me. You've got 32 GB, why not use it? Artificially limiting the memory use to 20 GB seems like a waste of good money. I'm having a hard time convincing the DBAs to increase the size of the SGA to 20GB because

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Stephan Budach
Hi Brent, what you have noticed makes sense, and that behaviour has been present since v127, when dedupe was introduced in OpenSolaris. This also fits with my observations. I thought I had totally messed up one of my OpenSolaris boxes which I used to take my first steps with ZFS/dedupe and

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2009-12-29 Thread Robert Milkowski
I included networking-discuss@ On 28/12/2009 15:50, Saso Kiselkov wrote: Thank you for the advice. After trying flowadm the situation improved somewhat, but I'm still getting occasional packet overflow (10-100 packets about every 10-15 minutes).
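
A minimal flowadm sketch of the kind of flow being tried here (the link name e1000g0, the UDP transport, and the 200M cap are illustrative assumptions, not taken from the thread):

    # Classify UDP traffic on the link into its own flow and cap its
    # bandwidth so recording bursts don't monopolize the NIC.
    flowadm add-flow -l e1000g0 -a transport=udp -p maxbw=200M recflow
    # Inspect configured flows and per-flow properties.
    flowadm show-flow
    flowadm show-flowprop recflow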

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2009-12-29 Thread Saso Kiselkov
I tried removing the flow and, subjectively, packet loss occurs a bit less often, but it is still happening. Right now I'm trying to figure out if it's due to the load on the server or not - I've left only about 15 concurrent recording instances,

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
Thanks for the suggestion! I have heard mirrored vdev configurations are preferred for Oracle, but what's the difference between a raidz mirrored vdev vs a raid10 setup? We have tested a zfs stripe configuration before with 15 disks and our tester was extremely happy with the performance.

[zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Duane Walker
I tried running an OpenSolaris server so I could use ZFS but SMB Serving wasn't reliable (it would only work for about 15 minutes). I also couldn't get Cacti working (No PHP-SNMP support and I tried building PHP with SNMP but it failed). So now I am going to run Ubuntu with RAID1 drives. I am

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 7:55 AM, Brad bene...@yahoo.com wrote: Thanks for the suggestion! I have heard mirrored vdev configurations are preferred for Oracle, but what's the difference between a raidz mirrored vdev vs a raid10 setup? A mirrored raidz provides redundancy at a steep cost to

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 4:55, Brad wrote: Thanks for the suggestion! I have heard mirrored vdev configurations are preferred for Oracle, but what's the difference between a raidz mirrored vdev vs a raid10 setup? We have tested a zfs stripe configuration before with 15 disks and our tester was

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@ross Because each write of a raidz is striped across the disks, the effective IOPS of the vdev is equal to that of a single disk. This can be improved by utilizing multiple (smaller) raidz vdevs which are striped, but not by mirroring them. So with random reads, would it perform better on a

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give roughly the performance as a stripe of six bare drives, for random IO. It sounds like we'll need 16 vdevs striped in a pool to at
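
A rough worked version of that rule of thumb, using the ~133 random read IOPS per disk figure quoted later in this thread (vdev counts are illustrative):

    1 raidz vdev             ~ 133 IOPS (about one member disk)
    6 raidz vdevs, striped   ~ 6 * 133  ~  800 IOPS
    16 raidz vdevs, striped  ~ 16 * 133 ~ 2100 IOPS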

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Bob Friesenhahn
On Tue, 29 Dec 2009, Ross Walker wrote: A mirrored raidz provides redundancy at a steep cost to performance and might I add a high monetary cost. I am not sure what a mirrored raidz is. I have never heard of such a thing before. With raid10 each mirrored pair has the IOPS of a single

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Richard Elling
On Dec 29, 2009, at 12:34 AM, Brent Jones wrote: On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones br...@servuhome.net wrote: On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach stephan.bud...@jvm.de wrote: Brent, I had known about that bug a couple of weeks ago, but that bug has been filed

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Mattias Pantzare
On Tue, Dec 29, 2009 at 18:16, Brad bene...@yahoo.com wrote: @eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give roughly the performance as a stripe of six bare drives, for

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 9:16, Brad wrote: @eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give roughly the performance as a stripe of six bare drives, for random IO. It sounds like we'll

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 9:50, Richard Elling wrote: I don't believe compression matters. But dedup can really make a big difference. When you enable dedup, the deduplication table (DDT) is created to keep track of the references to blocks. When you remove a Are there any published notes on

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Richard Elling
On Dec 29, 2009, at 9:16 AM, Brad wrote: @eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give roughly the performance as a stripe of six bare drives, for random IO. This model

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Richard Elling
On Dec 29, 2009, at 10:03 AM, Eric D. Mudama wrote: On Tue, Dec 29 at 9:50, Richard Elling wrote: I don't believe compression matters. But dedup can really make a big difference. When you enable dedup, the deduplication table (DDT) is created to keep track of the references to blocks. When
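
A back-of-the-envelope DDT sizing example (the per-entry size is an assumption; figures of a few hundred bytes per entry are commonly quoted for ZFS dedup):

    1 TB unique data / 128 KB average block size  ~ 8 million DDT entries
    8 million entries * ~300 bytes per entry      ~ 2.4 GB of DDT

If the DDT does not fit in ARC (or L2ARC), freeing each block during a destroy can cost a random read, which is consistent with the stalls described above.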

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Tim Cook
On Tue, Dec 29, 2009 at 12:07 PM, Richard Elling richard.ell...@gmail.com wrote: On Dec 29, 2009, at 9:16 AM, Brad wrote: @eric As a general rule of thumb, each vdev has the random performance roughly the same as a single member of that vdev. Having six RAIDZ vdevs in a pool should give

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Tim Cook
On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker du...@walker-family.org wrote: I tried running an OpenSolaris server so I could use ZFS but SMB Serving wasn't reliable (it would only work for about 15 minutes). I've been running native cifs on Opensolaris for 3 years with about 15 minutes of

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 12:40, Tim Cook wrote: On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker du...@walker-family.org wrote: I tried running an OpenSolaris server so I could use ZFS but SMB Serving wasn't reliable (it would only work for about 15 minutes). I've been running native cifs on

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Tim Cook
On Tue, Dec 29, 2009 at 12:48 PM, Eric D. Mudama edmud...@bounceswoosh.org wrote: On Tue, Dec 29 at 12:40, Tim Cook wrote: On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker du...@walker-family.org wrote: I tried running an OpenSolaris server so I could use ZFS but SMB Serving wasn't reliable

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Erik Trimble
Eric D. Mudama wrote: On Tue, Dec 29 at 9:16, Brad wrote: The disk cost of a raidz pool of mirrors is identical to the disk cost of raid10. ZFS can't do a raidz of mirrors or a mirror of raidz. Members of a mirror or raidz[123] must be a fundamental device (i.e. file or drive) This

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-29 Thread scottford
I booted the snv_130 live cd and ran zpool import -fFX and it took a day, but it imported my pool and rolled it back to a previous version. I haven't looked to see what was missing, but I didn't need any of the changes over the last few weeks. Scott
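
For reference, a sketch of that recovery import (pool name is illustrative; flag meanings below are the commonly described ones, and -X has not always been documented):

    # From the live CD:
    #   -f  force the import even if the pool appears in use
    #   -F  recovery mode: discard the last few transactions to reach a
    #       consistent state (the rollback described above)
    #   -X  with -F, allow rewinding further back than normal
    zpool import -fFX tank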

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Brad
@relling For small, random read IOPS the performance of a single, top-level vdev is: performance = performance of a disk * (N / (N - P)) = 133 * (12 / (12 - 1)) = 133 * 12/11, where N = number of disks in the vdev, P = number of parity devices in the vdev, performance of a disk
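
Working those numbers through: 133 * 12/11 is roughly 145 IOPS for the whole 12-disk raidz1 vdev, barely more than a single bare disk. For comparison (a common rule of thumb, not a figure from this thread), the same 12 disks as 6 striped 2-way mirrors would give roughly 6 * 133 ~ 800 random write IOPS, and up to 12 * 133 ~ 1600 random read IOPS, since both halves of a mirror can serve reads.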

Re: [zfs-discuss] Zfs upgrade freezes desktop

2009-12-29 Thread roland
I have a problem which is perhaps related. I installed OpenSolaris snv_130. After adding 4 additional disks and creating a raidz on them with compression=gzip and dedup enabled, I got a reproducible system freeze (not sure, but the desktop/mouse cursor froze) directly after login - without

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-29 Thread tom wagner
I booted the snv_130 live cd and ran zpool import -fFX and it took a day, but it imported my pool and rolled it back to a previous version. I haven't looked to see what was missing, but I didn't need any of the changes over the last few weeks. Scott I'll give it a shot. Hope this works,

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Richard Elling
On Dec 29, 2009, at 11:26 AM, Brad wrote: @relling For small, random read IOPS the performance of a single, top-level vdev is: performance = performance of a disk * (N / (N - P)) = 133 * (12 / (12 - 1)) = 133 * 12/11, where N = number of disks in the vdev, P = number of parity

[zfs-discuss] Scrub slow (again) after dedupe

2009-12-29 Thread Michael Herf
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80 hours to about 14 by reducing the number of snapshots, adding RAM, turning off atime and compression, and some other tweaks. This week (after replaying a large volume with dedup=on) it's back up, way up. I replayed a 700G filesystem
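
One way to check whether the DDT is what the scrub is chewing on (pool name is illustrative; zdb output varies by build):

    # Summarize deduplication table statistics: entry counts, and
    # in-core vs. on-disk entry sizes.
    zdb -DD tank
    # Simulate dedup savings and DDT size without enabling dedup.
    zdb -S tank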

Re: [zfs-discuss] zfs send is very slow

2009-12-29 Thread Brandon High
On Wed, Dec 16, 2009 at 8:19 AM, Brandon High bh...@freaks.com wrote: On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:  In his case 'zfs send' to /dev/null was still quite fast and the network was also quite fast (when tested with benchmark software).  The

[zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
Hi! I'm attempting to understand the pros/cons between raid5 and raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0). I would appreciate some feedback on what I've understood so far: WRITES raid5 - A FS block is

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread A Darren Dunham
On Tue, Dec 29, 2009 at 02:37:20PM -0800, Brad wrote: I would appreciate some feedback on what I've understood so far: WRITES raid5 - A FS block is written on a single disk (or multiple disks depending on data size???) There is no direct relationship between a filesystem and the RAID
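
A concrete illustration of the difference, assuming a 128 KB ZFS record on a 12-disk raidz1 (numbers are illustrative):

    raid5: fixed stripe geometry; a write smaller than the full stripe
           forces a read-modify-write to update parity.
    raidz: every write is a full, variable-width stripe; the 128 KB
           record is split into ~12 KB chunks across up to 11 data
           disks plus 1 parity chunk, so no read-modify-write occurs.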

Re: [zfs-discuss] Can I destroy a Zpool without importing it?

2009-12-29 Thread A Darren Dunham
On Sun, Dec 27, 2009 at 06:02:18PM +0100, Colin Raven wrote: Are there any negative consequences as a result of a force import? I mean STUNT; Sudden Totally Unexpected and Nasty Things -Me If the pool is not in use, no. It's a safety check to avoid problems that can easily crop up when

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 12:49, Tim Cook wrote: Serious CIFS work meaning what? I've got a system that's been running 2009.06 for 6 months in a small office setting and it hasn't been unusable for anything I've needed. Weird. Win7-x64 clients crashed my 2009.06 installation within 30 seconds of

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 11:14, Erik Trimble wrote: Eric D. Mudama wrote: On Tue, Dec 29 at 9:16, Brad wrote: The disk cost of a raidz pool of mirrors is identical to the disk cost of raid10. ZFS can't do a raidz of mirrors or a mirror of raidz. Members of a mirror or raidz[123] must be a

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Duane Walker
I was trying to get Cacti running and it was all working except the PHP-SNMP. I installed it but the SNMP support wasn't recognised (in phpinfo()). I was reading the posts for the Cacti package and they said they were planning to add the SNMP support. I am running a combination of Win7-64 and

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2009-12-29 Thread James Dickens
Not sure of your experience level, but did you try running devfsadm and then checking in format for your new disks? James Dickens uadmin.blogspot.com On Sun, Dec 27, 2009 at 3:59 AM, Muhammed Syyid opensola...@syyid.net wrote: Hi I just picked up one of these cards and had a few questions

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Eric D. Mudama
On Tue, Dec 29 at 17:00, Duane Walker wrote: I am running a combination of Win7-64 and 32 bit computers and someone else mentioned that win7 64 causes problems. The server itself was very stable and SCP (WinSCP) worked fine but SMB wouldn't stay up. I tried restarting the services but only a

Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread Ross Walker
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 29 Dec 2009, Ross Walker wrote: A mirrored raidz provides redundancy at a steep cost to performance and might I add a high monetary cost. I am not sure what a mirrored raidz is. I have never heard

Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Sriram Narayanan
Each of these problems that you faced can be solved. Please ask for help on each of these via separate emails to osol-discuss and you'll get help. I say so because I'm moving my infrastructure to opensolaris for these services, among others. -- Sriram On 12/29/09, Duane Walker

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2009-12-29 Thread Muhammed Syyid
Thanks a bunch - that did the trick :)

Re: [zfs-discuss] raidz vs raid5 clarity needed

2009-12-29 Thread Brad
@ross If the write doesn't span the whole stripe width then there is a read of the parity chunk, a write of the block, and a write of the parity chunk, which is the write hole penalty/vulnerability, and is 3 operations (if the data spans more than 1 chunk then it is written in parallel so you can
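
For comparison, the classic raid5 read-modify-write is usually counted as four I/Os (the three-operation count above presumably assumes the old data need not be re-read):

    1. read old data chunk        3. write new data chunk
    2. read old parity chunk      4. write new parity chunk
    new parity = old parity XOR old data XOR new data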

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-29 Thread Jack Kielsmeier
I got my pool back! Did a rig upgrade (new motherboard, processor, and 8 GB of RAM), re-installed OpenSolaris 2009.06, did an upgrade to snv_130, and did the import! The import only took about 4 hours! I have a hunch that I was running into some sort of issue with not having enough RAM

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-29 Thread Jack Kielsmeier
I should note that my import command was: zpool import -f vault