Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-02 Thread Ragnar Sundblad
2012, at 03:06, Ragnar Sundblad wrote: On 1 feb 2012, at 02:43, Edmund White wrote: You will definitely want to have a Smart Array card (P411 or P811) on hand to update the firmware on the enclosure. Make sure you're on firmware version 0131. You may also want to update the disk firmware

[zfs-discuss] zfs send and receive - how are people driving them?

2012-01-31 Thread Ragnar Sundblad
I guess many of you on this list are using zfs send and receive to move data from one machine to another, or between pools on the same machine, for redundancy or for other purposes, and perhaps over ssh or other channels. Is there any standard way of doing this that people use, or has everyone
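
For illustration, the common pattern the thread asks about looks roughly like this; pool, dataset and host names here are hypothetical:

    # Full initial replication of one snapshot:
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | ssh backuphost zfs receive -F backup/data
    # Later runs send only the delta between two snapshots:
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data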

Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad
Just to follow up on this, in case there are others interested: The D2700s seem to work quite OK for us. We have four issues with them, all of which we will ignore for now: - They hang when I insert an Intel SATA (!) SSD (I wanted to test, both for log device and cache device, and I had

Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad
On 1 feb 2012, at 02:38, Hung-Sheng Tsao (laoTsao) wrote: what is the server you attach to D2700? It is different Sun/Oracle X4NN0s, x86-64 boxes. The HP spec for D2700 did not include Solaris, so not sure how you get support from HP :-( We don't. :-( /ragge

Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad
On 1 feb 2012, at 02:43, Edmund White wrote: You will definitely want to have a Smart Array card (P411 or P811) on hand to update the firmware on the enclosure. Make sure you're on firmware version 0131. You may also want to update the disk firmware at the same time. I have multipath and

Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad
Hello Rocky! On 1 feb 2012, at 03:07, Rocky Shek wrote: Ragnar, Which Intel SSD do you use? We use 320 and 710. We have had bad experience with the 510 in the past. I tried with an Intel X25-M 160 and 80 GB and an X25-E 64 GB (only because that was what I had in my drawer). I am not sure which one of

Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad
Hello James! On 1 feb 2012, at 02:43, James C. McPherson wrote: The supported way to enable MPxIO is to run # /usr/sbin/stmsboot -e You shouldn't need to do this for mpt_sas HBAs such as your 9205 controllers; we enable MPxIO by default on them. If you _do_ edit scsi_vhci.conf, you
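
For reference, the command James refers to, plus one way to check the device-name mappings afterwards (a sketch, not a full procedure):

    # Enable MPxIO globally (the tool prompts for the reboot it needs):
    /usr/sbin/stmsboot -e
    # After the reboot, list non-STMS to STMS device name mappings:
    /usr/sbin/stmsboot -L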

[zfs-discuss] questions about the DDT and other things

2011-12-01 Thread Ragnar Sundblad
I am sorry if these are dumb questions. If there are explanations available somewhere for those questions that I just haven't found, please let me know! :-) 1. It has been said that when the DDT entries, some 376 bytes or so each, are rolled out to the L2ARC, there are still some 170 bytes in the ARC to

Re: [zfs-discuss] questions about the DDT and other things

2011-12-01 Thread Ragnar Sundblad
Thanks for your answers! On 2 dec 2011, at 02:54, Erik Trimble wrote: On 12/1/2011 4:59 PM, Ragnar Sundblad wrote: I am sorry if these are dumb questions. If there are explanations available somewhere for those questions that I just haven't found, please let me know! :-) 1. It has been

[zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Ragnar Sundblad
based), which are 3 Gb/s. Would those (probably) work OK even if we should consider switching to 6 Gb/s HBAs? What 6 Gb/s HBA is currently recommended (LSI 920[05]s?). Thanks for any advice and/or thoughts! Ragnar Sundblad Royal Institute of Technology Stockholm, Sweden

Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Ragnar Sundblad
On 30 nov 2011, at 14:40, Edmund White wrote: Absolutely. I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server running NexentaStor. On the HBA side, I used the LSI 9211-8i 6G controllers for the server's internal disks (boot, a handful of large disks, Pliant SSDs for

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-08 Thread Ragnar Sundblad
On 8 jul 2010, at 17.23, Garrett D'Amore wrote: You want the write cache enabled, for sure, with ZFS. ZFS will do the right thing about ensuring write cache is flushed when needed. That is not for sure at all; it all depends on what the right thing is, which depends on the application and/or
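
For context, a sketch of inspecting the cache setting by hand on a SCSI/SAS disk (ZFS enables the write cache itself when given whole disks):

    # Interactive; select the disk, then walk the cache menu:
    format -e
    #   format> cache
    #   cache> write_cache
    #   write_cache> display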

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ragnar Sundblad
On 12 apr 2010, at 22.32, Carson Gaspar wrote: Carson Gaspar wrote: Miles Nordin wrote: re == Richard Elling richard.ell...@gmail.com writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ragnar Sundblad
On 30 jun 2010, at 22.46, Garrett D'Amore wrote: On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote: To be safe, the protocol needs to be able to discover that the device (host or disk) has been disconnected and reconnected or has been reset and that either part's assumptions about

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-17 Thread Ragnar Sundblad
On 17 jun 2010, at 18.17, Richard Jahnel wrote: The EX specs page does list the supercap. The Pro specs page does not. They do for both on the Specifications tab on the web page:

Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Ragnar Sundblad
On 30 maj 2010, at 01.53, morris hooten wrote: I have 6 zfs pools and after rebooting (init 6) the vpath device path names have changed for some unknown reason. But I can't detach, remove and reattach to the new device names. ANY HELP, please! pjde43m01 - - - -

Re: [zfs-discuss] zfs replace multiple drives

2010-05-24 Thread Ragnar Sundblad
On 24 maj 2010, at 02.44, Erik Trimble wrote: On 5/23/2010 5:00 PM, Andreas Iannou wrote: Is it safe or possible to do a zpool replace for multiple drives at once? I think I have one of the troublesome WD Green drives, as replacing it has taken 39 hrs and only resilvered 58 GB. I have another
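
For illustration, replacing two drives at once is just two replace commands; device names here are hypothetical:

    zpool replace tank c2t1d0 c3t1d0
    zpool replace tank c2t2d0 c3t2d0
    zpool status tank    # each replace runs its own resilver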

Re: [zfs-discuss] zfs replace multiple drives

2010-05-24 Thread Ragnar Sundblad
On 24 maj 2010, at 10.26, Brandon High wrote: On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad ra...@csc.kth.se wrote: Is that really true if you use the zpool replace command with both the old and the new drive online? Yes. (Don't you mean no then? :-) zpool replace [-f] pool

Re: [zfs-discuss] New SSD options

2010-05-22 Thread Ragnar Sundblad
On 22 maj 2010, at 07.40, Don wrote: The SATA power connector supplies 3.3, 5 and 12 V. A complete solution will have all three. Most drives use just the 5 V rail, so you can probably ignore 3.3 and 12 V. I'm not interested in building something that's going to work for every possible drive

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ragnar Sundblad
On 20 maj 2010, at 00.20, Don wrote: You can lose all writes from the last committed transaction (i.e., the one before the currently open transaction). And I don't think that bothers me. As long as the array itself doesn't go belly up, then a few seconds of lost transactions are largely

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ragnar Sundblad
On 20 maj 2010, at 20.35, David Magda wrote: On Thu, May 20, 2010 14:12, Travis Tabbal wrote: On May 19, 2010, at 2:29 PM, Don wrote: The data risk is a few moments of data loss. However, if the order of the uberblock updates is not preserved (which is why the caches are flushed) then

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ragnar Sundblad
On 21 maj 2010, at 00.53, Ross Walker wrote: On May 20, 2010, at 6:25 PM, Travis Tabbal tra...@tabbal.net wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems to be one SSD that is considered

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Ragnar Sundblad
On 2010-05-19 08.32, sensille wrote: Don wrote: With that in mind- Is anyone using the new OCZ Vertex 2 SSD's as a ZIL? They're claiming 50k IOPS (4k Write- Aligned), 2 million hour MTBF, TRIM support, etc. That's more write IOPS than the ZEUS (40k IOPS, $) but at half the price of an

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-14 Thread Ragnar Sundblad
On 12 maj 2010, at 22.39, Miles Nordin wrote: bh == Brandon High bh...@freaks.com writes: bh If you boot from usb and move your rpool from one port to bh another, you can't boot. If you plug your boot sata drive into bh a different port on the motherboard, you can't bh boot.

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-12 Thread Ragnar Sundblad
On 10 maj 2010, at 20.04, Miles Nordin wrote: bh == Brandon High bh...@freaks.com writes: bh The drive should be on the same USB port because the device bh path is saved in the zpool.cache. If you removed the bh zpool.cache, it wouldn't matter where the drive was plugged bh

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-12 Thread Ragnar Sundblad
On 12 maj 2010, at 05.31, Brandon High wrote: On Tue, May 11, 2010 at 8:17 PM, Richard Elling richard.ell...@gmail.com wrote: boot single user and mv it (just like we've done for fstab/vfstab for the past 30+ years :-) It would be nice to have a grub menu item that ignores the cache, so
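
A minimal sketch of the single-user workaround mentioned above:

    # Move the stale cache aside; the pools are then found by a
    # device scan on the next boot/import instead of by cached paths:
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak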

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ragnar Sundblad
On 6 maj 2010, at 08.17, Pasi Kärkkäinen wrote: On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski if you can disable ZIL and compare the performance to

Re: [zfs-discuss] Mapping inode numbers to file names

2010-04-28 Thread Ragnar Sundblad
On 28 apr 2010, at 14.06, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey Look up the inode number of README. (for example, ls -i README) (suppose it’s inode 12345) find
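
The approach being quoted, as a sketch (the mount point is hypothetical):

    ls -i README                 # e.g. prints: 12345 README
    find /tank/fs -inum 12345    # brute-force walk back from inode to path(s)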

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-26 Thread Ragnar Sundblad
On 25 apr 2010, at 20.12, Richard Elling wrote: On Apr 25, 2010, at 5:45 AM, Edward Ned Harvey wrote: From: Richard Elling [mailto:richard.ell...@gmail.com] Sent: Saturday, April 24, 2010 7:42 PM Next, mv /a/e /a/E ls -l a/e/.snapshot/snaptime ENOENT? ls -l

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-24 Thread Ragnar Sundblad
On 24 apr 2010, at 16.43, Richard Elling wrote: I do not recall reaching that conclusion. I think the definition of the problem is what you continue to miss. Me too then, I think. Can you please enlighten us about the definition of the problem? The .snapshot directories do precisely what

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Ragnar Sundblad
On 18 apr 2010, at 06.43, Richard Elling wrote: On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be
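
For illustration, a sketch with hypothetical devices; log devices can be mirrored, cache devices cannot (an L2ARC loss is harmless to pool integrity):

    zpool add tank log mirror c4t0d0 c4t1d0
    zpool add tank cache c4t2d0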

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Ragnar Sundblad
On 17 apr 2010, at 20.51, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored ? ... Personally, I recommend the latest build

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Ragnar Sundblad
On 18 apr 2010, at 00.52, Dave Vrona wrote: Ok, so originally I presented the X-25E as a reasonable approach. After reading the follow-ups, I'm second guessing my statement. Any decent alternatives at a reasonable price? How much is reasonable? :-) I guess there are STEC drives that

Re: [zfs-discuss] Secure delete?

2010-04-16 Thread Ragnar Sundblad
On 16 apr 2010, at 17.05, Bob Friesenhahn wrote: On Fri, 16 Apr 2010, Kyle McDonald wrote: But doesn't the TRIM command help here? If, as the OS goes along, it marks sectors as unused, then the SSD will have a lighter weight lift to only need to read for example 1 out of 8 (assuming sectors

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-13 Thread Ragnar Sundblad
On 12 apr 2010, at 19.10, Kyle McDonald wrote: On 4/12/2010 9:10 AM, Willard Korfhage wrote: I upgraded to the latest firmware. When I rebooted the machine, the pool was back, with no errors. I was surprised. I will work with it more, and see if it stays good. I've done a scrub, so now
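
For reference, the usual scrub-and-check cycle (pool name hypothetical):

    zpool scrub tank
    zpool status -v tank    # inspect per-device error counters
    zpool clear tank        # reset the counters once satisfied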

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Ragnar Sundblad
On 9 apr 2010, at 10.58, Andreas Höschler wrote: Hi all, I need to replace a disk in a zfs pool on a production server (X4240 running Solaris 10) today and won't have access to my documentation there. That's why I would like to have a good plan on paper before driving to that location.
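
A sketch of such a plan, with hypothetical device names; the exact cfgadm ap_ids depend on the controller:

    zpool status tank                      # identify the failing disk
    zpool offline tank c1t2d0
    cfgadm -c unconfigure c1::dsk/c1t2d0   # ready it for removal
    # ...physically swap the disk...
    cfgadm -c configure c1::dsk/c1t2d0
    zpool replace tank c1t2d0              # resilver onto the new disk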

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Ragnar Sundblad
On 9 apr 2010, at 12.04, Andreas Höschler wrote: Hi Ragnar, I need to replace a disk in a zfs pool on a production server (X4240 running Solaris 10) today and won't have access to my documentation there. That's why I would like to have a good plan on paper before driving to that

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Ragnar Sundblad
On 9 apr 2010, at 14.17, Edward Ned Harvey wrote: ... I recently went through an exercise very similar to this on an x4275. I also tried to configure the HBA via the ILOM but couldn't find any way to do it. ... Oh no, this is a BIOS system. The card is an autonomous entity that lives a life

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-04-08 Thread Ragnar Sundblad
On 12 mar 2010, at 03.58, Damon Atkins wrote: ... Unfortunately DNS spoofing exists, which means forward lookups can be poisoned. And IP address spoofing, and... The best (maybe only) way to make NFS secure is NFSv4 and Krb5 used together. Amen! DNS is NOT an authentication system! IP is NOT
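
For illustration, sharing with Kerberos authentication instead of host-based trust (dataset name hypothetical):

    zfs set sharenfs='sec=krb5,rw' tank/export/home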

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-04-08 Thread Ragnar Sundblad
On 8 apr 2010, at 23.21, Miles Nordin wrote: rs == Ragnar Sundblad ra...@csc.kth.se writes: rs use IPSEC to make IP address spoofing harder. IPsec with channel binding is win, but not until SA's are offloaded to the NIC and all NIC's can do IPsec AES at line rate. Until this happens

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Ragnar Sundblad
On 7 apr 2010, at 14.28, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jeroen Roodhart If you're running solaris proper, you better mirror your ZIL log device. ... I plan to get to test this as well, won't

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Ragnar Sundblad
On 7 apr 2010, at 18.13, Edward Ned Harvey wrote: From: Ragnar Sundblad [mailto:ra...@csc.kth.se] Rather: ... would be ... if you don't mind losing data written in the ~30 seconds before the crash, you don't have to mirror your log device. If you have a system crash, *and* a failed

Re: [zfs-discuss] Problems with zfs and a STK RAID INT SAS HBA

2010-04-05 Thread Ragnar Sundblad
On 5 apr 2010, at 04.35, Edward Ned Harvey wrote: When running the card in copyback write cache mode, I got horrible performance (with zfs), much worse than with copyback disabled (which I believe should mean it does write-through), when tested with filebench. When I benchmark my disks, I

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Ragnar Sundblad
On 4 apr 2010, at 06.01, Richard Elling wrote: Thank you for your reply! Just wanted to make sure. Do not assume that power outages are the only cause of unclean shutdowns. -- richard Thanks, I have seen that mistake several times with other (file)systems, and hope I'll never ever make it

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-03 Thread Ragnar Sundblad
On 1 apr 2010, at 06.15, Stuart Anderson wrote: Assuming you are also using a PCI LSI HBA from Sun that is managed with a utility called /opt/StorMan/arcconf and reports itself as the amazingly informative model number Sun STK RAID INT what worked for me was to run, arcconf delete (to delete
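
For orientation, a read-only starting point with the same utility (controller number 1 assumed):

    # Dump controller, logical-device and physical-device state:
    /opt/StorMan/arcconf getconfig 1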

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-03 Thread Ragnar Sundblad
On 2 apr 2010, at 22.47, Neil Perrin wrote: Suppose there is an application which sometimes does sync writes, and sometimes async writes. In fact, to make it easier, suppose two processes open two files, one of which always writes asynchronously, and one of which always writes

[zfs-discuss] Problems with zfs and a STK RAID INT SAS HBA

2010-04-03 Thread Ragnar Sundblad
Hello, Maybe this question should be put on another list, but since there are a lot of people here using all kinds of HBAs, this could be right anyway; I have an X4150 running snv_134. It was shipped with an STK RAID INT Adaptec/Intel/StorageTek/Sun SAS HBA. When running the card in copyback

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI client assumes all writes are synchronised to nonvolatile storage,

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote: The PERC cache measurably and significantly accelerates small disk writes. However, for read operations, it is insignificant compared to system ram, both in terms of size and speed. There is no significant performance improvement by

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.40, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive-bay SATA RAM disk with battery backup and auto-backup to CompactFlash at power failure. Promises 65,000 IOPS and thus

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.20, Ross Walker wrote: On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.22, Phil Harman wrote: On 19/02/2010 21:57, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Ragnar Sundblad
On 20 feb 2010, at 02.34, Rob Logan wrote: A UPS plus disabling the ZIL, or disabling synchronization, could possibly achieve the same result (or maybe better) IOPS-wise. Even with the fastest slog, disabling the ZIL will always be faster... (fewer bytes to move) This would probably work given
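
For context, disabling the ZIL on builds of that era was a global /etc/system tunable; later builds replaced it with a per-dataset property (dataset name hypothetical):

    # /etc/system (takes effect at next boot, affects all pools):
    set zfs:zil_disable = 1
    # Later builds, per dataset:
    zfs set sync=disabled tank/scratch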

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Ragnar Sundblad
On 15 feb 2010, at 23.33, Bob Beverage wrote: On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there completely idle. I am also seeing this behaviour.

Re: [zfs-discuss] zero out block / sectors

2010-01-29 Thread Ragnar Sundblad
On 28 jan 2010, at 12.11, Björn JACKE wrote: On 2010-01-28 at 00:30 +0100 Ragnar Sundblad sent off: Are there any plans to add unwritten extent support into ZFS or any reason why not? I have no idea, but just out of curiosity - when do you want that? when you have many data streams

Re: [zfs-discuss] zero out block / sectors

2010-01-27 Thread Ragnar Sundblad
On 27 jan 2010, at 10.44, Björn JACKE wrote: On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off: You are missing the point. Compression and dedup will make it so that the blocks in the devices are not overwritten with zeroes. The goal is to overwrite the blocks so that a back-end storage

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 19 jan 2010, at 20.11, Ian Collins wrote: Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (built into the operating system or using

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 20 jan 2010, at 17.22, Julian Regel wrote: It is actually not that easy. Compare the cost of 2x X4540 with 1 TB disks to an equivalent solution on LTO. Each X4540 could be configured as: 4x 11 disks in raidz2 + 2x hot spares + 2x OS disks. The four raidz2 groups form a single pool. This
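
A scaled-down sketch of that layout (two of the four raidz2 groups shown, hypothetical device names):

    zpool create backup \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
               c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
               c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 \
        spare c3t0d0 c3t1d0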

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 21 jan 2010, at 00.20, Al Hopper wrote: I remember from about 5 years ago (before LTO-4 days) that streaming tape drives would go to great lengths to ensure that the drive kept streaming - because it took so much time to stop, back up and stream again. And one way the drive firmware

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-03 Thread Ragnar Sundblad
Eric D. Midama did a very good job answering this, and I don't have much to add. Thanks Eric! On 3 jan 2010, at 07.24, Erik Trimble wrote: I think you're confusing erasing with writing. I am now quite certain that it actually was you who were confusing those. I hope this discussion has

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 1 jan 2010, at 17.44, Richard Elling wrote: On Dec 31, 2009, at 12:59 PM, Ragnar Sundblad wrote: Flash SSDs actually always remap new writes into a only-append-to-new-pages style, pretty much as ZFS does itself. So for a SSD there is no big difference between ZFS and filesystems as UFS

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 1 jan 2010, at 17.28, David Magda wrote: On Jan 1, 2010, at 11:04, Ragnar Sundblad wrote: But that would only move the hardware specific and dependent flash chip handling code into the file system code, wouldn't it? What is won with that? As long as the flash chips have larger pages

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 2 jan 2010, at 12.43, Joerg Schilling wrote: Ragnar Sundblad ra...@csc.kth.se wrote: I certainly agree, but there still isn't much they can do about the WORM-like properties of flash chips, where reading is pretty fast, writing is not too bad, but erasing is very slow and must be done

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 2 jan 2010, at 13.10, Erik Trimble wrote: Joerg Schilling wrote: Ragnar Sundblad ra...@csc.kth.se wrote: On 1 jan 2010, at 17.28, David Magda wrote: Don't really see how things are either hardware specific or dependent. The inner workings of a SSD flash drive

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 2 jan 2010, at 22.49, Erik Trimble wrote: Ragnar Sundblad wrote: On 2 jan 2010, at 13.10, Erik Trimble wrote Joerg Schilling wrote: the TRIM command is what is intended for an OS to notify the SSD as to which blocks are deleted/erased, so the SSD's internal free list can be updated

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 3 jan 2010, at 04.19, Erik Trimble wrote: Ragnar Sundblad wrote: On 2 jan 2010, at 22.49, Erik Trimble wrote: Ragnar Sundblad wrote: On 2 jan 2010, at 13.10, Erik Trimble wrote Joerg Schilling wrote: the TRIM command is what is intended for an OS to notify the SSD

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-02 Thread Ragnar Sundblad
On 3 jan 2010, at 06.07, Ragnar Sundblad wrote: (I don't think they typically merge pages; I believe they rather just pick pages with some freed blocks, copy the active blocks to the end of the disk, and erase the page.) (And of course you implement wear leveling with the same mechanism

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-01 Thread Ragnar Sundblad
On 31 dec 2009, at 22.53, David Magda wrote: On Dec 31, 2009, at 13:44, Joerg Schilling wrote: ZFS is COW, but does the SSD know which block is in use and which is not? If the SSD did know whether a block is in use, it could erase unused blocks in advance. But what is an unused block on

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-01 Thread Ragnar Sundblad
On 1 jan 2010, at 14.14, David Magda wrote: On Jan 1, 2010, at 04:33, Ragnar Sundblad wrote: I see the possible win that you could always use all the working blocks on the disk, and when blocks goes bad your disk will shrink. I am not sure that is really what people expect, though. Apart

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-31 Thread Ragnar Sundblad
On 31 dec 2009, at 06.01, Richard Elling wrote: On Dec 30, 2009, at 2:24 PM, Ragnar Sundblad wrote: On 30 dec 2009, at 22.45, Richard Elling wrote: On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote: Richard, That's an interesting question, if it's worth it or not. I guess

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-31 Thread Ragnar Sundblad
On 31 dec 2009, at 00.31, Bob Friesenhahn wrote: On Wed, 30 Dec 2009, Mike Gerdts wrote: Should the block size be a tunable, so as to match the page size of SSDs (typically 4K, right?) and upcoming hard disks that sport a sector size larger than 512 bytes? Enterprise SSDs are still in their infancy. The

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-31 Thread Ragnar Sundblad
On 31 dec 2009, at 17.18, Bob Friesenhahn wrote: On Thu, 31 Dec 2009, Ragnar Sundblad wrote: Also, currently, when the SSDs for some very strange reason are constructed from flash chips designed for firmware and slowly changing configuration data and can only erase in very large chunks

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-31 Thread Ragnar Sundblad
On 31 dec 2009, at 19.26, Richard Elling wrote: [I TRIMmed the thread a bit ;-)] On Dec 31, 2009, at 1:43 AM, Ragnar Sundblad wrote: On 31 dec 2009, at 06.01, Richard Elling wrote: In a world with copy-on-write and without snapshots, it is obvious that there will be a lot of blocks

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Ragnar Sundblad
On 30 dec 2009, at 22.45, Richard Elling wrote: On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote: Richard, That's an interesting question, if it's worth it or not. I guess the question is always who the targets for ZFS are (I assume everyone, though in reality priorities have to be set

Re: [zfs-discuss] Expected ZFS behavior?

2009-12-09 Thread Ragnar Sundblad
On 7 dec 2009, at 18.40, Bob Friesenhahn wrote: On Mon, 7 Dec 2009, Richard Bruce wrote: I started copying over all the data from my existing workstation. When copying files (mostly multi-gigabyte DV video files), network throughput drops to zero for ~1/2 second every 8-15 seconds. This

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-05 Thread Ragnar Sundblad
is and what isn't supported of the following, which we currently use or plan to use: - UFS in a ZFS volume, mounted locally? - a ZFS volume, iSCSI exported (soon to be COMSTAR), locally imported again, and with a ZFS in it locally mounted/imported? Thanks! /ragge On 12/03/09 15:26, Ragnar
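
For illustration, the first configuration in that list looks roughly like this (names and size hypothetical):

    # UFS in a ZFS volume, mounted locally:
    zfs create -V 10g tank/vol1
    newfs /dev/zvol/rdsk/tank/vol1
    mount /dev/zvol/dsk/tank/vol1 /mnt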

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-03 Thread Ragnar Sundblad
Thank you Cindy for your reply! On 3 dec 2009, at 18.35, Cindy Swearingen wrote: A bug might exist but you are building a pool based on the ZFS volumes that are created in another pool. This configuration is not supported and possible deadlocks can occur. I had absolutely no idea that ZFS

[zfs-discuss] zpool import - device names not always updated?

2009-12-01 Thread Ragnar Sundblad
It seems that device names aren't always updated when importing pools if devices have moved. I am not sure if this is only a cosmetic issue or if it could actually be a real problem - could it lead to the device not being found at a later import? /ragge (This is on snv_127.) I ran the
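
One way to force the stored paths to be rewritten, as a sketch (pool name hypothetical):

    zpool export tank
    zpool import -d /dev/dsk tank   # rescan devices and update the recorded paths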

[zfs-discuss] zfs and raid 51

2009-02-18 Thread Ragnar Sundblad
better. It would probably also give us even a little more security, since hot spares and data disks aren't paired. But which setup would give us better performance? Are there other issues we should consider when choosing? Thank you in advance for advice and hints! Ragnar Sundblad