Re: [zfs-discuss] Restricting smb share to specific interfaces

2010-01-05 Thread Jerome Warnier
Tim Cook wrote: On Sun, Jan 3, 2010 at 6:58 PM, Jerome Warnier jwarn...@beeznest.net wrote: Hi, I'm smbsharing ZFS filesystems. I know how to restrict access to them to some hosts (and users), but did not find any way to forbid the smb protocol

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Eric D. Mudama
On Mon, Jan 4 at 22:01, Thomas Burgess wrote: I guess I got some bad advice then. I was told the Kingston SNV125-S2 used almost the exact same hardware as an X25-M and should be considered the poor man's X25-M ... Right, I couldn't find any of the 40 GB models in stock so I ordered the 64

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-05 Thread Eric D. Mudama
On Mon, Jan 4 at 16:43, Wes Felter wrote: Eric D. Mudama wrote: I am not convinced that a general purpose CPU, running other software in parallel, will be able to be timely and responsive enough to maximize bandwidth in an SSD controller without specialized hardware support. Fusion-io would

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Thomas Burgess
The SNV125-S2/40GB is the half-an-X25-M drive, which can often be found as a bare OEM drive for about $85 w/ rebate. Kingston does sell rebranded Intel SLC drives as well, but under a different model number: SNE-125S2/32 or SNE-125S2/64. I don't believe the 64GB Kingston MLC (SNV-125S2/64)

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-05 Thread Andrey Kuzmin
600? I've heard 1.5GBps reported. On 1/5/10, Eric D. Mudama edmud...@bounceswoosh.org wrote: On Mon, Jan 4 at 16:43, Wes Felter wrote: Eric D. Mudama wrote: I am not convinced that a general purpose CPU, running other software in parallel, will be able to be timely and responsive enough to

[zfs-discuss] send/recv, apparent data loss

2010-01-05 Thread Michael Herf
I replayed a bunch of filesystems in order to get dedupe benefits. Only thing is a couple of them are rolled back to November or so (and I didn't notice before destroy'ing the old copy). I used something like: zfs snapshot pool/f...@dd zfs send -Rp pool/f...@dd | zfs recv -d pool/fs2 (after
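
For reference, the replay pattern being described looks roughly like the sketch below. The dataset names in the original message are truncated, so pool/fs and pool/fs2 here are hypothetical stand-ins:

    # Hypothetical dataset names. -R sends the snapshot along with all
    # descendant datasets, -p preserves properties, and recv -d rebuilds
    # the dataset name under the target.
    # (With -R, any child datasets need the same snapshot; zfs snapshot -r
    # creates it recursively.)
    zfs snapshot pool/fs@dd
    zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2

    # Before destroying the original, compare what actually arrived:
    zfs list -r -t snapshot pool/fs2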

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Joerg Schilling
Chris Du dilid...@gmail.com wrote: You can use the utility to erase all blocks and regain performance, but it's a manual process and quite complex. Windows 7 supports TRIM; if the SSD firmware also supports it, the process runs in the background so you will not notice the performance degradation. I

Re: [zfs-discuss] send/recv, apparent data loss

2010-01-05 Thread Ian Collins
Michael Herf wrote: I replayed a bunch of filesystems in order to get dedupe benefits. Only thing is a couple of them are rolled back to November or so (and I didn't notice before destroy'ing the old copy). I used something like: zfs snapshot pool/f...@dd zfs send -Rp pool/f...@dd | zfs recv

Re: [zfs-discuss] send/recv, apparent data loss

2010-01-05 Thread Michael Herf
I didn't use -v, so I don't know. I just waited until the process exited, assuming it would succeed or fail. The sizes looked equivalent, so I went ahead with the destroy, rename. For the jobs a couple weeks ago, I turned off the snapshot service. For this one, I probably left it on. Anything
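
On OpenSolaris the periodic snapshots come from the time-slider auto-snapshot SMF instances, so "turning off the snapshot service" presumably means something like the sketch below (instance names may vary by build):

    # List the auto-snapshot instances, then disable them for the
    # duration of a large send/recv job:
    svcs -a | grep auto-snapshot
    svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
    svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
    svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily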

[zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mikko Lammi
Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of them (because they eat 80% of disk space) it
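
At that scale, one common approach is to stream the names rather than glob them, since a single rm * would overflow the argument list. A minimal sketch, with a hypothetical path:

    # /tank/baddir is a hypothetical path. find streams entries while
    # xargs batches them into rm invocations of a manageable size.
    # (Solaris find has no -delete; this assumes plain filenames
    # without embedded whitespace.)
    cd /tank/baddir && find . -type f | xargs -n 10000 rm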

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Markus Kovero
Hi, while not providing a complete solution, I'd suggest turning atime off so find/rm does not change access time, and possibly destroying unnecessary snapshots before removing files; that should be quicker. Yours Markus Kovero
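
The atime suggestion translates to a one-line property change, e.g. (dataset name hypothetical):

    zfs set atime=off tank/baddir   # stop updating access times during the sweep
    zfs get atime tank/baddir       # verify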

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mike Gerdts
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues.

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-05 Thread Joerg Schilling
Juergen Nickelsen n...@jnickelsen.de wrote: joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes: The netapps patents contain claims on ideas that I invented for my Diploma thesis work between 1989 and 1991, so the netapps patents only describe prior art. The new ideas

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mikko Lammi
On Tue, January 5, 2010 17:08, David Magda wrote: On Tue, January 5, 2010 05:34, Mikko Lammi wrote: As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any

Re: [zfs-discuss] Solaris 10 and ZFS dedupe status

2010-01-05 Thread Bob Friesenhahn
On Mon, 4 Jan 2010, Tony Russell wrote: I am under the impression that dedupe is still only in OpenSolaris and that support for dedupe is limited or non-existent. Is this true? I would like to use ZFS and the dedupe capability to store multiple virtual machine images. The problem is that
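
At the time of this thread, dedup existed only in OpenSolaris builds with pool version 21 or later (b128+), not in Solaris 10. A minimal sketch of enabling it, with a hypothetical dataset name:

    zpool upgrade -v | tail          # check the pool versions this build supports
    zfs set dedup=on tank/vmimages   # hypothetical dataset for the VM images
    zpool list tank                  # the DEDUP column reports the achieved ratio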

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk
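
Spelled out as commands, the suggestion is (dataset name as given in the message):

    zfs create rpool/junk
    mv /opt/MYapp/data /rpool/junk/   # crosses a dataset boundary
    zfs destroy rpool/junk

As the replies below note, mv across dataset boundaries is a copy-then-unlink of every file, not a rename, so this does not actually avoid the per-file work.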

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Dennis Clarke
On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs destroy rpool/junk

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-05 Thread Carl Rathman
On Mon, Jan 4, 2010 at 8:59 PM, Richard Elling richard.ell...@gmail.com wrote: On Jan 4, 2010, at 6:40 AM, Carl Rathman wrote: I have a zpool raidz1 array (called storage) that I created under snv_118. I then created a zfs filesystem called storage/vmware which I shared out via iscsi. I
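
In that era (snv_118), sharing ZFS storage over iSCSI typically meant a zvol plus the legacy shareiscsi property, roughly as below; the size is hypothetical, and later builds moved to COMSTAR instead:

    zfs create -V 100G storage/vmware   # a zvol, not a plain filesystem
    zfs set shareiscsi=on storage/vmware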

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:50, Michael Schuster wrote: David Magda wrote: Normally when you do a move with-in a 'regular' file system all that's usually done is the directory pointer is shuffled around. This is not the case with ZFS data sets, even though they're on the same pool? no - mv

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Roch
Richard Elling writes: On Jan 3, 2010, at 11:27 PM, matthew patton wrote: I find it baffling that RaidZ(2,3) was designed to split a record-size block into N (N=# of member devices) pieces and send the uselessly tiny requests to spinning rust when we know the massive delays

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 2:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
Michael Schuster michael.schus...@sun.com wrote: rm -rf would be at least as quick. Normally when you do a move with-in a 'regular' file system all that's usually done is the directory pointer is shuffled around. This is not the case with ZFS data sets, even though they're on the same

Re: [zfs-discuss] Can't export pool after zfs receive

2010-01-05 Thread David Dyer-Bennet
On Mon, January 4, 2010 13:51, Ross wrote: I initialized a new whole-disk pool on an external USB drive, and then did zfs send from my big data pool and zfs recv onto the new external pool. Sometimes this fails, but this time it completed. That's the key bit for me - zfs send/receive

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
no - mv doesn't know about zpools, only about posix filesystems. mv doesn't care about filesystems, only about the interface provided by POSIX. There is no zfs-specific interface which allows you to move a file from one zfs to the next. Casper

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote: I didn't mean to destroy the pool. I used zpool destroy on a zvol, when I should have used zfs destroy. When I used zpool destroy -f mypool/myvolume the machine hard locked after about 20 minutes. This would be a bug. zpool destroy should
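
The distinction at the root of this thread: zfs destroy operates on datasets and zvols, zpool destroy on entire pools. Given a dataset argument, zpool destroy should simply be rejected rather than hang:

    zfs destroy mypool/myvolume   # destroys just the zvol
    zpool destroy mypool          # destroys the whole pool and all its datasets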

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down to 25 days, 100x would get it down to 2.5 days (assuming

Re: [zfs-discuss] Solaris 10 and ZFS dedupe status

2010-01-05 Thread Henrik Johansson
On Jan 5, 2010, at 4:38 PM, Bob Friesenhahn wrote: On Mon, 4 Jan 2010, Tony Russell wrote: I am under the impression that dedupe is still only in OpenSolaris and that support for dedupe is limited or non existent. Is this true? I would like to use ZFS and the dedupe capability to store

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-05 Thread Kjetil Torgrim Homme
Brad bene...@yahoo.com writes: Hi Adam, I'm not Adam, but I'll take a stab at it anyway. BTW, your crossposting is a bit confusing to follow, at least when using gmane.org. I think it is better to stick to one mailing list anyway? From your picture, it looks like the data is distributed

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down

Re: [zfs-discuss] zpool destroy -f hangs system, now zpool import hangs system.

2010-01-05 Thread Carl Rathman
On Tue, Jan 5, 2010 at 10:12 AM, Richard Elling richard.ell...@gmail.com wrote: On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote: I didn't mean to destroy the pool. I used zpool destroy on a zvol, when I should have used zfs destroy. When I used zpool destroy -f mypool/myvolume the machine

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2010-01-05 Thread Cindy Swearingen
Hi Paul, I opened 6914208 to cover the sysevent/zfsdle problem. If the system crashed due to a power failure and the disk labels for this pool were corrupted, then I think you will need to follow the steps to get the disks relabeled correctly. You might review some previous postings by Victor

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Tim Cook
On Tue, Jan 5, 2010 at 11:25 AM, Richard Elling richard.ell...@gmail.com wrote: On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 16:00, Roch wrote: That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects are stored efficiently. Have you got any

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Daniel Rock
On 05.01.2010 16:22, Mikko Lammi wrote: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the rm -rf directory method started to perform significantly faster. Now it's deleting around 4,000 files/minute (240,000/h - quite an

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joe Blount
On 01/ 5/10 10:01 AM, Richard Elling wrote: How are the files named? If you know something about the filename pattern, then you could create subdirs and mv large numbers of files to reduce the overall size of a single directory. Something like: mkdir .A; mv A* .A; mkdir .B; mv B*
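
A sketch of that fan-out as a loop; note that if a single prefix still matches millions of names, the shell glob itself will overflow ARG_MAX, in which case find/xargs is needed instead:

    # Hypothetical: split the flat directory by leading character.
    for c in A B C D E; do        # extend over the real prefixes
        mkdir .$c
        mv ${c}* .$c/             # fails with "arg list too long" past ARG_MAX
    done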

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Paul Gress
On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of them

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Paul Gress wrote: On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Fajar A. Nugraha
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster michael.schus...@sun.com wrote: we need to get rid of them (because they eat 80% of disk space) it seems to be quite challenging. I've been following this thread. Would it be faster to do the reverse? Copy the 20% of disk, then format, then

Re: [zfs-discuss] zpool import -f not forceful enough?

2010-01-05 Thread Cindy Swearingen
Hi Dan, Can you describe what you are trying to recover from with more details, because we can't quite follow what steps might have led to this scenario. For example, your hdc pool had two disks, c1t0d0s0 and c8t1d0s0, and your rpool has c8t0d0s0, so c8t0d0s0 cannot be wiped clean. Maybe you

Re: [zfs-discuss] zpool import -f not forceful enough?

2010-01-05 Thread Dan McDonald
Hi Dan, Can you describe what you are trying to recover from with more details because we can't quite follow what steps might have led to this scenario. Sorry. I was running Nevada 103 with a root zpool called hdc with c1t0d0s0 and c1t1d0s0. I first uttered: zpool detach hdc

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread A Darren Dunham
On Tue, Jan 05, 2010 at 04:49:00PM +, Robert Milkowski wrote: A possible *workaround* is to use SVM to set-up RAID-5 and create a zfs pool on top of it. How does SVM handle R5 write hole? IIRC SVM doesn't offer RAID-6. As far as I know, it does not address it. It's possible that adding a

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Roch Bourbonnais
On Jan 5, 2010, at 17:49, Robert Milkowski wrote: On 05/01/2010 16:00, Roch wrote: That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote: On 05/01/2010 16:00, Roch wrote: That said, I truly am for a evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-05 Thread Richard Elling
On Jan 4, 2010, at 7:08 PM, Brad wrote: Hi Adam, From your the picture, it looks like the data is distributed evenly (with the exception of parity) across each spindle then wrapping around again (final 4K) - is this one single write operation or two? | P | D00 | D01 | D02 | D03 | D04 |

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:52 AM, Daniel Rock wrote: On 05.01.2010 16:22, Mikko Lammi wrote: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the rm -rf directory method started to perform significantly faster. Now it's deleting

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 18:37, Roch Bourbonnais wrote: Writes are not the problem and we have log devices to offload them. It's really about maintaining the integrity of a raid-5 type layout in the presence of bit-rot, even if such bit-rot occurs within free space. How is it addressed in RAID-DP? --

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 18:49, Richard Elling wrote: On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote: The problem is that while RAID-Z is really good for some workloads it is really bad for others. Sometimes having L2ARC might effectively mitigate the problem but for some workloads it won't (due to

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:25, Richard Elling wrote: On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: It's interesting how our ability to build larger disks, and our software's ability to do things like create really large numbers of files, comes back to bite us on the ass every now

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Tristan Ball
On 6/01/2010 3:00 AM, Roch wrote: Richard Elling writes: On Jan 3, 2010, at 11:27 PM, matthew patton wrote: I find it baffling that RaidZ(2,3) was designed to split a record-size block into N (N=# of member devices) pieces and send the uselessly tiny requests to

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 11:30 AM, Robert Milkowski wrote: On 05/01/2010 18:49, Richard Elling wrote: On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote: The problem is that while RAID-Z is really good for some workloads it is really bad for others. Sometimes having L2ARC might effectively

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 11:56 AM, Tristan Ball wrote: On 6/01/2010 3:00 AM, Roch wrote: That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Bob Friesenhahn
On Tue, 5 Jan 2010, Richard Elling wrote: Since there are already 1 TB SSDs on the market, the only thing keeping the HDD market alive is the low $/TB. Moore's Law predicts that cost advantage will pass. SSDs are already the low $/IOPS winners. SSD vendors are still working to stabilize

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Tristan Ball
On 6/01/2010 7:19 AM, Richard Elling wrote: If you are doing small, random reads on dozens of TB of data, then you've got a much bigger problem on your hands... kinda like counting grains of sand on the beach during low tide :-). Hopefully, you do not have to randomly update that data

[zfs-discuss] (Practical) limit on the number of snapshots?

2010-01-05 Thread Juergen Nickelsen
Is there any limit on the number of snapshots in a file system? The documentation -- manual page, admin guide, troubleshooting guide -- does not mention any. That seems to confirm my assumption that it is probably not a fixed limit, but there may still be a practical one, just like there is no

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 20:19, Richard Elling wrote: On Jan 5, 2010, at 11:30 AM, Robert Milkowski wrote: On 05/01/2010 18:49, Richard Elling wrote: On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote: The problem is that while RAID-Z is really good for some workloads it is really bad for others.

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 20:19, Richard Elling wrote: [...] Fortunately, most workloads are not of that size and scope. Forgot to mention it in my last email - yes, I agree. The environment I'm talking about is rather unusual and in most other cases where RAID-5/6 was considered the performance of

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-05 Thread Miles Nordin
dm == David Magda dma...@ee.ryerson.ca writes: dm 4096-to-512 blocks aiui NAND flash has a minimum write size (determined by ECC OOB bits) of 2 - 4kB, and a minimum erase size that's much larger. Remapping cannot abstract away the performance implication of the minimum write size if you

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Michael Herf
Many large-scale photo hosts start with netapp as the default good enough way to handle multiple-TB storage. With a 1-5% cache on top, the workload is truly random-read over many TBs. But these workloads almost assume a frontend cache to take care of hot traffic, so L2ARC is just a nice

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-05 Thread Wes Felter
Eric D. Mudama wrote: On Mon, Jan 4 at 16:43, Wes Felter wrote: Eric D. Mudama wrote: I am not convinced that a general purpose CPU, running other software in parallel, will be able to be timely and responsive enough to maximize bandwidth in an SSD controller without specialized hardware

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Robert Milkowski
On 05/01/2010 23:31, Michael Herf wrote: The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be used instead of netapp. But this certainly reduces the cost advantage significantly. This is true to some extent. I didn't want to bring it up as I wanted to focus only on

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread David Magda
On Jan 5, 2010, at 16:06, Bob Friesenhahn wrote: Perhaps innovative designers like Suncast will figure out how to build reliable SSDs based on parts which are more likely to wear out and forget. At which point we'll probably start seeing the memristor start making an appearance in various

Re: [zfs-discuss] (Practical) limit on the number of snapshots?

2010-01-05 Thread Ian Collins
Juergen Nickelsen wrote: Is there any limit on the number of snapshots in a file system? The documentation -- manual page, admin guide, troubleshooting guide -- does not mention any. That seems to confirm my assumption that it is probably not a fixed limit, but there may still be a practical

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-05 Thread R.G. Keen
One reason I was so interested in this issue was the double price of RAID-enabled disks. However, I realized that I am doing the initial proving, not production - even if personal - of the system I'm building. So for that purpose, an array of smaller and cheaper disks might be good. In the

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Tristan Ball
For those searching list archives, the SNV125-S2/40GB given below is not based on the Intel controller. I queried Kingston directly about this because there appears to be so much confusion (and I'm considering using these drives!), and I got back that: The V series uses a JMicron controller. The

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-05 Thread Erik Trimble
As a further update, I went back and re-read my SSD controller info, and then did some more Googling. Turns out, I'm about a year behind on State-of-the-SSD. Eric is correct on the way current SSDs implement writes (both SLC and MLC), so I'm issuing a mea culpa here. The change in

Re: [zfs-discuss] preview of new SSD based on SandForce controller

2010-01-05 Thread Erik Trimble
Wes Felter wrote: Eric D. Mudama wrote: On Mon, Jan 4 at 16:43, Wes Felter wrote: Eric D. Mudama wrote: I am not convinced that a general purpose CPU, running other software in parallel, will be able to be timely and responsive enough to maximize bandwidth in an SSD controller without

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Eric D. Mudama
On Wed, Jan 6 at 14:56, Tristan Ball wrote: For those searching list archives, the SNV125-S2/40GB given below is not based on the Intel controller. I queried Kingston directly about this because there appears to be so much confusion (and I'm considering using these drives!), and I got back

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-05 Thread Thomas Burgess
I think the confusing part is that the 64gb version seems to use a different controller altogether. I couldn't find any SNV125-S2/40's in stock so I got 3 SNV125-S2/64's thinking it would be the same, only bigger. Looks like it was stupid on my part. Now I understand why I got such a good

[zfs-discuss] ZFS filesystem size mismatch

2010-01-05 Thread Nils K. Schøyen
A ZFS file system reports 1007 GB being used (df -h / zfs list). When doing a 'du -sh' on the filesystem root, I only get approx. 300 GB, which is the correct size. The file system became full during Christmas and I increased the quota from 1 to 1.5 to 2 TB and then decreased to 1.5 TB. No
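
A likely cause of a df/du gap like this is snapshots pinning the freed blocks: df reports what the dataset holds, du only what the live file tree references. On builds that support it, zfs list -o space breaks the usage down (dataset name hypothetical):

    zfs list -o space tank/fs          # shows usedbysnapshots vs usedbydataset
    zfs list -t snapshot -r tank/fs    # list the snapshots holding the space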