Re: [zfs-discuss] how do i prevent changing device names? is this even a problem in ZFS

2010-01-04 Thread Mark Bennett
I'd recommend a SAS non-RAID controller (with a SAS backplane) over SATA. It has better hot-plug support. I use the Supermicro SC836E1 and an AOC-USAS-L4i with a UIO motherboard. Mark.

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2010-01-04 Thread Roch
Tim Cook writes: > On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn < > bfrie...@simple.dallas.tx.us> wrote: > > > On Sun, 27 Dec 2009, Tim Cook wrote: > > > >> > >> That is ONLY true when there's significant free space available/a fresh > >> pool. Once those files have been deleted and

Re: [zfs-discuss] how do i prevent changing device names? is this even a problem in ZFS

2010-01-04 Thread Thomas Burgess
It's too late. I ordered 3 AOC-SAT2-MV8 cards. When you say "better hot plug support", what exactly does that mean? It was my understanding that the AOC-SAT2-MV8 was the same (or a similar) controller to the one Sun used in the X4500, and I know THAT has hotplug support (right?!?!). The case I used has 20 h

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2010-01-04 Thread Ross Walker
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones wrote: > On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote: >> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner" >> wrote: >> >> Hello, >> >> I was doing performance testing, validating zvol performance in >> particular, and found that zvol write perf

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-04 Thread Ross Walker
On Mon, Jan 4, 2010 at 2:27 AM, matthew patton wrote: > I find it baffling that RaidZ(2,3) was designed to split a record-size block > into N (N=# of member devices) pieces and send the uselessly tiny requests to > spinning rust when we know the massive delays entailed in head seeks and > rotat

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2010-01-04 Thread tom wagner
In this last iteration, I switched to a completely different box with twice the resources. Somehow, from the symptoms, I don't think trying it on one of the 48 or 128GB servers at work is going to change the outcome. The hang happens too fast. It seems like something in the destroy is causing

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-01-04 Thread tom wagner
I'm also curious, but for b130 of OpenSolaris. Any way to try to import a pool without the log device? Seems like the rollback ability of the pool-recovery import should help with this scenario, if you are willing to take data loss to get to a consistent state with a failed or physically rem
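
For reference, a sketch of the recovery-style import that landed around b128 (pool name hypothetical); whether it helps with a missing log device specifically is exactly the open question here:

    # plain import first
    zpool import tank
    # recovery mode: roll back the last few transaction groups to reach
    # a consistent state, accepting some recent-data loss
    zpool import -F tank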

Re: [zfs-discuss] (snv_129, snv_130) can't import zfs pool

2010-01-04 Thread tom wagner
Bump. Any devs want to take him up on his offer? Obviously this is affecting a few users and, judging from the view counts of the other threads about this problem, many more. This would probably affect the 7000 series as well. Thanks.

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-04 Thread Giovanni Tirloni
On Sat, Jan 2, 2010 at 4:07 PM, R.G. Keen wrote: > OK. From the above suppositions, if we had a desktop (infinitely > long retry on fail) disk and a soft-fail error in a sector, then the > disk would effectively hang each time the sector was accessed. > This would lead to > (1) ZFS->SD-> disk read

Re: [zfs-discuss] image-update failed; "ZFS returned an error"

2010-01-04 Thread Cindy Swearingen
Hi Garen, Does this system have a mirrored root pool and if so, is a p0 device included as a root pool device instead of an s0 device? Thanks, Cindy On 12/22/09 18:56, Garen Parham wrote: Never seen this before: # pkg image-update DOWNLOAD PKGS FILES
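
A quick way to see which devices back the root pool (assuming the default name rpool):

    zpool status rpool

If a p0 (whole-disk) device shows up where an s0 slice is expected, that would match the failure mode Cindy is asking about.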

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-04 Thread Joerg Schilling
Giovanni Tirloni wrote: > We use Seagate Barracuda ES.2 1TB disks and every time the OS starts > to bang on a region of the disk with bad blocks (which essentially > degrades the performance of the whole pool) we get a call from our > clients complaining about NFS timeouts. They usually last for

[zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Thomas Burgess
I'm not 100% sure I'm going to need a separate SSD for my ZIL, but if I did want to look for one, I was wondering if anyone could suggest/recommend a few budget options. My current hardware is something like this: Intel Core 2 Quad 9550, 8 GB DDR2-800 unbuffered ECC, 3 AOC-SAT2-MV8 controllers, 21 720

Re: [zfs-discuss] image-update failed; "ZFS returned an error"

2010-01-04 Thread Lori Alt
Also, you might want to pursue this question at caiman-disc...@opensolaris.org, since that's where you'll find the experts on beadm. Lori On 01/04/10 10:46, Cindy Swearingen wrote: Hi Garen, Does this system have a mirrored root pool and if so, is a p0 device included as a root pool devic

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Richard Elling
On Jan 4, 2010, at 10:00 AM, Thomas Burgess wrote: I'm not 100% sure i'm going to need a separate SSD for my ZIL but if i did want to look for one, i was wondering if anyone could suggest/ recommend a few budget options. Start with zilstat, which will help you determine if your workload uses
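
zilstat is Richard Elling's DTrace-based script for measuring synchronous write traffic headed for the ZIL; the invocation below is a guess at typical arguments (interval and count):

    # sample ZIL activity once per second, ten samples
    ./zilstat.ksh 1 10

If it reports little or no ZIL traffic, a separate log device will buy nothing.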

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-04 Thread Giovanni Tirloni
On Mon, Jan 4, 2010 at 3:51 PM, Joerg Schilling wrote: > Giovanni Tirloni wrote: > >> We use Seagate Barracuda ES.2 1TB disks and every time the OS starts >> to bang on a region of the disk with bad blocks (which essentially >> degrades the performance of the whole pool) we get a call from our >>

[zfs-discuss] Can't export pool after zfs receive

2010-01-04 Thread David Dyer-Bennet
I initialized a new whole-disk pool on an external USB drive, and then did zfs send from my big data pool and zfs recv onto the new external pool. Sometimes this fails, but this time it completed. Zpool status showed no errors on the external pool. Zpool export hangs, with no apparent disk acti
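
A sketch of the workflow being described, with hypothetical pool and snapshot names:

    zfs snapshot bigpool/data@backup1
    zfs send bigpool/data@backup1 | zfs recv extpool/data
    zpool export extpool        # this is the step that hangs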

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Chris Du
You need an SLC SSD for the ZIL. The only SLC SSD I'd recommend is the Intel X25-E. Others are either too expensive or much slower than Intel's.
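
For reference, a separate log device is added with 'zpool add' (pool and device names hypothetical):

    zpool add tank log c2t0d0
    # a mirrored slog protects in-flight synchronous writes
    # against the death of a single SSD
    zpool add tank log mirror c2t0d0 c3t0d0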

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-04 Thread matthew patton
Chris Siebenmann wrote: > People have already mentioned the RAID-[56] write hole, > but it's more > than that; in a never-overwrite system with multiple blocks > in one RAID > stripe, how do you handle updates to some of the blocks? > > See: >     http://utcc.utoronto.ca/~cks/space/blog/solar

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Thomas Burgess
Slightly outside of my price range. I'll either do without or wait till they drop in price. Is there a "second best" option, or is this pretty much it? On Mon, Jan 4, 2010 at 1:27 PM, Chris Du wrote: > You need an SLC SSD for the ZIL. The only SLC SSD I'd recommend is the Intel X25-E. > Others are eithe

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-04 Thread Richard Elling
On Jan 3, 2010, at 11:27 PM, matthew patton wrote: I find it baffling that RaidZ(2,3) was designed to split a record-size block into N (N=# of member devices) pieces and send the uselessly tiny requests to spinning rust when we know the massive delays entailed in head seeks and rotational d

Re: [zfs-discuss] Can't export pool after zfs receive

2010-01-04 Thread Richard Elling
On Jan 4, 2010, at 10:26 AM, David Dyer-Bennet wrote: I initialized a new whole-disk pool on an external USB drive, and then did zfs send from my big data pool and zfs recv onto the new external pool. Sometimes this fails, but this time it completed. Zpool status showed no errors on the e

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Menno Lageman
On 01/04/10 19:35, Thomas Burgess wrote: slightly outside of my price range. I'll either do without or wait till they drop in price. Is there a "second best" option or is this pretty much it? I guess it depends on your workload and your performance expectations/requirements vs. budget. For e

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Richard Elling
On Jan 4, 2010, at 10:35 AM, Thomas Burgess wrote: slightly outside of my price range. I'll either do without or wait till they drop in price. Is there a "second best" option or is this pretty much it? If you need the separate log, then you can figure the relative latency gain for latency

Re: [zfs-discuss] Can't export pool after zfs receive

2010-01-04 Thread Ross
> I initialized a new whole-disk pool on an external > USB drive, and then did zfs send from my big data pool and zfs recv onto the > new external pool. > Sometimes this fails, but this time it completed. That's the key bit for me - zfs send/receive should not just fail at random. It sounds li

[zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-04 Thread Carl Rathman
I have a zpool raidz1 array (called storage) that I created under snv_118. I then created a zfs filesystem called storage/vmware which I shared out via iscsi. I then deleted the vmware filesystem, using 'zpool destroy -f storage/vmware' -- which resulted in heavy disk activity, and then hard lo

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-04 Thread tom wagner
Did you happen to set dedup on that zvol that you destroyed? Your symptoms sound just like mine. Check out the threads concerning losing a pool after destroying a deduped dataset; there are 3 or 4 of them. I get heavy read activity for about 4 minutes and then the system just hangs and I can'
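
For anyone in the same boat before running the destroy, a quick way to check whether dedup is in play (using the pool name from the original post):

    # dedupratio above 1.00x means deduped blocks exist in the pool
    zpool get dedupratio storage
    # per-dataset setting, for any datasets that still exist
    zfs get dedup storage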

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread tom wagner
Others and I have had good luck with the OCZ Vertex. I use two 30GB versions and they have very high write and read throughput for such a cheap MLC drive.

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-04 Thread Carl Rathman
On Mon, Jan 4, 2010 at 4:22 PM, tom wagner wrote: > Did you happen to set dedup on that zvol that you destroyed? Your symptoms > sound just like mine. Check out the threads concerning losing a pool after > destroying a deduped dataset; there are 3 or 4 of them. I get heavy read > activity for

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Thomas Burgess
I'm PRETTY sure the Kingston drives I ordered are as good or better; I just didn't know that they weren't "good enough". Basically, if I have 3 raidz2 groups or 4 raidz groups with a total of 20 7200 RPM drives, is using a cheaper MLC drive going to make things WORSE? Thanks for the idea though, I may

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Chris Du
They are fast when they are new. Once all the blocks have been written, performance degrades significantly. SLC will also degrade over time, but when it needs to erase blocks and rewrite, it is much faster than MLC. That's why an SLC SSD is preferred for the ZIL. It's possible to remove an MLC ZIL and use wipe

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-04 Thread Wes Felter
Eric D. Mudama wrote: I am not convinced that a general purpose CPU, running other software in parallel, will be able to be timely and responsive enough to maximize bandwidth in an SSD controller without specialized hardware support. Fusion-io would seem to be a counter-example, since it use

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Thomas Burgess
So are you saying that the "degrading problem" with SSDs can be fixed completely with such a utility? Don't they STILL wear out and become more or less broken after heavy use? On Mon, Jan 4, 2010 at 5:43 PM, Chris Du wrote: > They are fast when they are new. Once all the blocks have been written, >

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Chris Du
You can use the utility to erase all blocks and regain performance, but it's a manual process and quite complex. Windows 7 supports TRIM; if the SSD firmware also supports it, the process runs in the background so you will not notice the performance degradation. I don't think any other OS supports TRIM. I
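
A sketch of the manual erase path on a Linux box, assuming the drive supports ATA Secure Erase (device name hypothetical; this irrevocably wipes the drive):

    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX

Secure Erase resets all flash blocks to their empty state, which is what restores write performance on a TRIM-less OS.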

Re: [zfs-discuss] Zones on shared storage - a warning

2010-01-04 Thread Cindy Swearingen
Hi Mike, It is difficult to comment on the root cause of this failure since the interactions between these features are unknown. You might consider seeing how Ed's proposal plays out and letting him do some more testing... If you are interested in testing this with NFSv4 and it still fails the sa

[zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
If an 8K filesystem block is written on a 9-disk raidz vdev, how is the data distributed (written) between all devices in the vdev, since a zfs write is one continuous I/O operation? Is it distributed evenly (1.125KB) per device?

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread tom wagner
Fast is a relative term, because even after the first write to the end, they are still really fast for a small server, and the latency is still low (<1 ms), which is often more important than throughput. The topic said poor man's slog. The Vertexes can be had for $100 and the Vertex Turbo a little mo

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Adam Leventhal
Hi Brad, RAID-Z will carve up the 8K blocks into chunks at the granularity of the sector size -- today 512 bytes but soon going to 4K. In this case a 9-disk RAID-Z vdev will look like this:

| P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 |
| P | D08 | D09 | D10 | D11 | D12 | D13 | D14 | D15 |
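
The arithmetic behind the two rows, assuming 512-byte sectors:

    8K block / 512 B per sector       = 16 data sectors
    9-disk raidz1                     = 8 data columns + 1 parity per stripe
    16 data sectors / 8 data columns  = 2 stripes
    total written: 16 data + 2 parity = 18 sectors (9K)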

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-04 Thread tom wagner
Interesting, I had assumed the cause of my problem was dedup because the symptoms are similar to what others have reported destroying their deduped datasets, but their system hangs didn't happen for hours, while my system hard-locks in 3 to 4 minutes. But now you have me thinking, because the d

[zfs-discuss] Solaris 10 and ZFS dedupe status

2010-01-04 Thread Tony Russell
I am under the impression that dedupe is still only in OpenSolaris and that support for dedupe is limited or non-existent. Is this true? I would like to use ZFS and the dedupe capability to store multiple virtual machine images. The problem is that this will be in a production environment and

Re: [zfs-discuss] Solaris 10 and ZFS dedupe status

2010-01-04 Thread Ray Van Dolson
On Mon, Jan 04, 2010 at 03:51:17PM -0800, Tony Russell wrote: > I am under the impression that dedupe is still only in OpenSolaris and > that support for dedupe is limited or non existent. Is this true? I > would like to use ZFS and the dedupe capability to store multiple > virtual machine images
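
Where a build does support it, dedup is a per-dataset property on a pool of version 21 or later (pool/dataset names hypothetical):

    zpool upgrade -v            # lists the pool versions this build supports
    zfs set dedup=on tank/vmimages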

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Al Hopper
On Mon, Jan 4, 2010 at 4:39 PM, Thomas Burgess wrote: > > I'm PRETTY sure the kingston drives i ordered are as good/better > > i just didnt' know that they weren't "good enough" I disagree that those drives are "good enough". That particular drive uses the dreaded JMicron controller - which has

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-04 Thread Richard Elling
On Jan 4, 2010, at 6:40 AM, Carl Rathman wrote: I have a zpool raidz1 array (called storage) that I created under snv_118. I then created a zfs filesystem called storage/vmware which I shared out via iscsi. I then deleted the vmware filesystem, using 'zpool destroy -f storage/vmware' -- w

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-04 Thread Thomas Burgess
> I disagree that those drives are "good enough". That particular drive > uses the dreaded JMicron controller - which has a really bad > reputation. And a poor reputation that it *earned* and deserves. > Even though these drives use a newer revision of the original JMicron > part (that basically

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Brad
Hi Adam, From the picture, it looks like the data is distributed evenly (with the exception of parity) across each spindle, then wrapping around again (final 4K) - is this one single write operation or two?

| P | D00 | D01 | D02 | D03 | D04 | D05 | D06 | D07 | <- one write op