Re: [zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Albert Frenz Since I am really new to zfs, I have 2 important questions for starting. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question regarding future-proofing would be,

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: For zpool <19, which includes all present releases of Solaris 10 and OpenSolaris 2009.06, it is critical to mirror your ZIL log device. A failed unmirrored log device would

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bob Friesenhahn On Sun, 18 Apr 2010, Christopher George wrote: In summary, the DDRdrive X1 is designed, built and tested with immense pride and an overwhelming attention to detail.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Don I've got 80 spindles in five 16-bay drive shelves (76 15k RPM SAS drives in 19 four-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL) connected to two servers (so if one

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-17 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] Um ... All the same time. Even if I stat those directories ... Access: Modify: and Change: are all useless... which is why you need to stat the destination :-) Ahh. I see it now. By stat'ing the destination instead of the

Re: [zfs-discuss] Making ZFS better: file/directory granularity in-place rollback

2010-04-17 Thread Edward Ned Harvey
From: Erik Trimble [mailto:erik.trim...@oracle.com] So the suggestion, or question is: Is it possible or planned to implement a rollback command, that works as fast as a link or re-link operation, implemented at a file or directory level, instead of the entire filesystem? so why

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-17 Thread Edward Ned Harvey
From: Ian Collins [mailto:i...@ianshome.com] But it is a fundamental of zfs: snapshot: A read-only version of a file system or volume at a given point in time. It is specified as filesys...@name or vol...@name. Erik Trimble's assessment that it

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? IMHO, the best answer to this question is the one from the ZFS Best Practices guide. (I wrote

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Vrona 2) ZIL write cache. It appears some have disabled

Re: [zfs-discuss] Setting up ZFS on AHCI disks

2010-04-16 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tonmaus are the drives properly configured in cfgadm? I agree. You need to do these: devfsadm -Cv cfgadm -al
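
As a minimal sketch, the two device-rescan commands named above on a Solaris/OpenSolaris host:

  devfsadm -Cv    # clean up stale /dev links and create any missing ones, verbosely
  cfgadm -al      # list attachment points; the disks should show up as connected/configured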

[zfs-discuss] Making ZFS better: file/directory granularity in-place rollback

2010-04-16 Thread Edward Ned Harvey
AFAIK, if you want to restore a snapshot version of a file or directory, you need to use cp or such commands, to copy the snapshot version into the present. This is not done in-place, meaning, the cp or whatever tool must read the old version of objects and write new copies of the objects. You
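
A hedged sketch of that copy-based restore, pulling a file back out of a snapshot's read-only .zfs tree (pool, snapshot, and file names are hypothetical):

  cp -p /tank/home/.zfs/snapshot/monday/report.txt /tank/home/report.txt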

[zfs-discuss] Making ZFS better: zfshistory

2010-04-16 Thread Edward Ned Harvey
If you've got nested zfs filesystems, and you're in some subdirectory where there's a file or something you want to rollback, it's presently difficult to know how far back up the tree you need to go, to find the correct .zfs subdirectory, and then you need to figure out the name of the snapshots

[zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-16 Thread Edward Ned Harvey
The typical problem scenario is: Some user or users fill up the filesystem. They rm some files, but disk space is not freed. You need to destroy all the snapshots that contain the deleted files, before disk space is available again. It would be nice if you could rm files from snapshots, without
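
A sketch of the workaround being described, finding which snapshots pin the deleted blocks and destroying them (dataset and snapshot names hypothetical):

  zfs list -t snapshot -o name,used -s used -r tank/data   # snapshots sorted by the space they hold
  zfs destroy tank/data@2010-03-01                          # space returns once no snapshot references the blocks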

Re: [zfs-discuss] Setting up ZFS on AHCI disks

2010-04-16 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Willard Korfhage devfsadm -Cv gave a lot of removing file messages, apparently for items that were not relevant. That's good. If there were no necessary changes, devfsadm would say nothing.

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-16 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] There are some interesting design challenges here. For the general case, you can't rely on the snapshot name to be in time order, so you need to sort by the mtime of the destination. Actually ... drwxr-xr-x 16 root root 20 Mar 29

Re: [zfs-discuss] Making ZFS better: file/directory granularity in-place rollback

2010-04-16 Thread Edward Ned Harvey
From: Erik Trimble [mailto:erik.trim...@oracle.com] Not to be a contrary person, but the job you describe above is properly the duty of a BACKUP system. Snapshots *aren't* traditional backups, though some people use them as such. While I see no technical reason why snapshots couldn't

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-16 Thread Edward Ned Harvey
From: Erik Trimble [mailto:erik.trim...@oracle.com] Sent: Friday, April 16, 2010 7:35 PM Doesn't that defeat the purpose of a snapshot? Eric hits the nail right on the head: you *don't* want to support such a feature, as it breaks the fundamental assumption about what a snapshot is

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-16 Thread Edward Ned Harvey
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com] you should send your snapshots to backup and clean them out from time to time anyways. When using ZFS as a filesystem in a fileserver, the desired configuration such as auto-snapshots is something like: Every 15 mins for the
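
For context, a hedged sketch of wiring a fileserver dataset into the OpenSolaris auto-snapshot schedules via dataset properties (dataset name hypothetical):

  zfs set com.sun:auto-snapshot=true tank/home            # opt this dataset into auto-snapshots
  zfs set com.sun:auto-snapshot:frequent=true tank/home   # keep the every-15-minutes snapshots
  svcs | grep auto-snapshot                               # the SMF instances that take them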

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of John Just to add more details, the issue only occurred for the first direct access to the file. From a windows client that has never accessed the file, you can issue: dir

Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Daniel I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of research but can't find anything on what I need. I am thinking of making myself a home file server running

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Eric D. Mudama I believe the reason strings of bits leak on rotating drives you've overwritten (other than grown defects) is because of minute off-track occurrences while writing (vibration,

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Edward Ned Harvey
Carson Gaspar wrote: Does anyone who understands the internals better than I do care to take a stab at what happens if: - ZFS writes data to /dev/foo - /dev/foo loses power and the data from the above write, not yet flushed to rust (say a field tech pulls the wrong drive...) - /dev/foo

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Edward Ned Harvey
From: Tim Cook [mailto:t...@cook.ms] Awesome! Thanks for letting us know the results of your tests Ed, that's extremely helpful. I was actually interested in grabbing some of the cheaper Intel SSDs for home use, but didn't want to waste my money if it wasn't going to handle the various

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- Thanks for the testing. so FINALLY with version 19 does ZFS demonstrate production-ready status in my book. How long is it going to take Solaris to catch up? Oh, it's been production worthy for some time - Just don't use

Re: [zfs-discuss] Secure delete?

2010-04-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- r...@karlsbakk.net wrote: Hi all Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning delete (and perhaps overwrite) all copies of this file? No, until all snapshots

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote: In the event a pool is faulted, I wish you didn't have to power cycle the machine. Let all the zfs filesystems that are in that pool simply disappear, and when somebody does

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Edward Ned Harvey
From: Daniel Carosone [mailto:d...@geek.com.au] Please look at the pool property failmode. Both of the preferences you have expressed are available, as well as the default you seem so unhappy with. I ... did not know that. :-) Thank you.
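
The property in question, with its three settings (pool name hypothetical):

  zpool get failmode tank             # default is wait: block I/O until the device returns
  zpool set failmode=continue tank    # return EIO to new writes instead of hanging
  zpool set failmode=panic tank       # panic the host on catastrophic pool failure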

[zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-10 Thread Edward Ned Harvey
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was introduced in zpool 19) In any way possible, you lose an unmirrored log device, and the OS

[zfs-discuss] Sync Write - ZIL log performance - Feedback for ZFS developers?

2010-04-10 Thread Edward Ned Harvey
Neil or somebody? Actual ZFS developers? Taking feedback here? ;-) While I was putting my poor little server through cruel and unusual punishment as described in my post a moment ago, I noticed something unexpected: I expected that while I'm stressing my log device by infinite sync

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler I don't think that the BIOS and rebooting part ever has to be true, at least I don't hope so. You shouldn't have to reboot just because you replace a hot plug disk.

Re: [zfs-discuss] Replacing disk in zfs pool

2010-04-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey I don't know how to identify what card is installed in your system. Actually, this is useful: prtpicl -v | less Search for RAID. On my system, I get this snippet (out

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Eric Andersen I backup my pool to 2 external 2TB drives that are simply striped using zfs send/receive followed by a scrub. As of right now, I only have 1.58TB of actual data. ZFS send

Re: [zfs-discuss] zfs send hangs

2010-04-09 Thread Edward Ned Harvey
-Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Daniel Bakken My zfs filesystem hangs when transferring large filesystems (500GB) with a couple dozen snapshots between servers using zfs send/receive with
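
For reference, a minimal sketch of the kind of transfer under discussion, a recursive send of a filesystem and its snapshots to another server over ssh (names hypothetical):

  zfs snapshot -r tank/data@migrate
  zfs send -R tank/data@migrate | ssh otherhost zfs receive -d backup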

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jeroen Roodhart If you're running solaris proper, you better mirror your ZIL log device. ... I plan to get to test this as well, won't be until late next week though. Running

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Edward Ned Harvey
From: Ragnar Sundblad [mailto:ra...@csc.kth.se] Rather: ... >=19 would be ... if you don't mind losing data written in the ~30 seconds before the crash, you don't have to mirror your log device. If you have a system crash, *and* a failed log device at the same time, this is an important

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bob Friesenhahn It is also worth pointing out that in normal operation the slog is essentially a write-only device which is only read at boot time. The writes are assumed to work if the

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Chris Dunbar like to clarify something. If read performance is paramount, am I correct in thinking RAIDZ is not the best way to go? Would not the ZFS equivalent of RAID 10 (striped mirror

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of David Magda If you're going to go with (Open)Solaris, the OP may also want to look into the multi-platform pkgsrc for third-party open source software: http://www.pkgsrc.org/

Re: [zfs-discuss] To slice, or not to slice

2010-04-06 Thread Edward Ned Harvey
I have reason to believe that both the drive, and the OS are correct. I have suspicion that the HBA simply handled the creation of this volume somehow differently than how it handled the original. Don't know the answer for sure yet. Ok, that's confirmed now. Apparently when the drives ship

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-06 Thread Edward Ned Harvey
We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact same version of firmware and recreated the volumes on new drives arriving

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson I have a problem with my zfs system, it's getting slower and slower over time. When the OpenSolaris machine is rebooted and just started I get about 30-35MB/s in read and

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler • I would like to remove the two SSDs as log devices from the pool and instead add them as a separate pool for sole use by the database to see how this enhances performance.

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson pool: s1 state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM s1 ONLINE 0 0 0

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler Thanks for the clarification! This is very annoying. My intent was to create a log mirror. I used zpool add tank log c1t6d0 c1t7d0 and this was obviously wrong.
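
The syntax difference, for reference: the command quoted above adds two independent (striped) log devices, while a mirrored log requires the mirror keyword:

  zpool add tank log c1t6d0 c1t7d0          # two separate log devices
  zpool add tank log mirror c1t6d0 c1t7d0   # one mirrored log device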

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-05 Thread Edward Ned Harvey
From: Kyle McDonald [mailto:kmcdon...@egenera.com] So does your HBA have newer firmware now than it did when the first disk was connected? Maybe it's the HBA that is handling the new disks differently now, than it did when the first one was plugged in? Can you down rev the HBA FW? Do you

Re: [zfs-discuss] Problems with zfs and a STK RAID INT SAS HBA

2010-04-04 Thread Edward Ned Harvey
When running the card in copyback write cache mode, I got horrible performance (with zfs), much worse than with copyback disabled (which I believe should mean it does write-through), when tested with filebench. When I benchmark my disks, I also find that the system is slower with WriteBack

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
Your experience is exactly why I suggested ZFS start doing some right-sizing, if you will. Chop off a bit from the end of any disk so that we're guaranteed to be able to replace drives from different manufacturers. The excuse given was that there's no reason to, since Sun drives are always of identical size. If

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
CR 6844090, zfs should be able to mirror to a smaller disk http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090 b117, June 2009 Awesome. Now if someone would only port that to solaris, I'd be a happy man. ;-)

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Edward Ned Harvey
Hmm, when you did the write-back test was the ZIL SSD included in the write-back? What I was proposing was write-back only on the disks, and ZIL SSD with no write-back. The tests I did were: all disks write-through; all disks write-back; with/without SSD for ZIL; all the permutations of the

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Edward Ned Harvey
Actually, it's my experience that Sun (and other vendors) do exactly that for you when you buy their parts - at least for rotating drives; I have no experience with SSDs. The Sun disk label shipped on all the drives is set up to make the drive the standard size for that Sun part number.

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
There is some question about performance. Is there any additional overhead caused by using a slice instead of the whole physical device? No. If the disk is only used for ZFS, then it is ok to enable volatile disk write caching if the disk also supports write cache flush requests. If

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
I haven't taken that approach, but I guess I'll give it a try. From: Tim Cook [mailto:t...@cook.ms] Sent: Sunday, April 04, 2010 11:00 PM To: Edward Ned Harvey Cc: Richard Elling; zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] To slice, or not to slice On Sun, Apr 4, 2010

[zfs-discuss] To slice, or not to slice

2010-04-03 Thread Edward Ned Harvey
Momentarily, I will begin scouring the omniscient interweb for information, but I'd like to know a little bit of what people would say here. The question is to slice, or not to slice, disks before using them in a zpool. One reason to slice comes from recent personal experience. One disk of a

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Edward Ned Harvey
One reason to slice comes from recent personal experience. One disk of a mirror dies. Replaced under contract with an identical disk. Same model number, same firmware. Yet when it's plugged into the system, for an unknown reason, it appears 0.001 GB smaller than the old disk, and

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Edward Ned Harvey
And finally, if anyone has experience doing this, and process recommendations? That is … My next task is to go read documentation again, to refresh my memory from years ago, about the difference between “format,” “partition,” “label,” “fdisk,” because those terms don’t have the same meaning

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Edward Ned Harvey
On Apr 2, 2010, at 2:29 PM, Edward Ned Harvey wrote: I've also heard that the risk for unexpected failure of your pool is higher if/when you reach 100% capacity. I've heard that you should always create a small ZFS filesystem within a pool, and give it some reserved space, along
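
A sketch of that reserved-space trick, an otherwise empty filesystem whose reservation can be released if the pool ever fills (name and size hypothetical):

  zfs create tank/reserve
  zfs set reservation=2G tank/reserve     # hold 2 GB back from the rest of the pool
  zfs set reservation=none tank/reserve   # release it in an emergency so deletes can proceed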

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Edward Ned Harvey
I would return the drive to get a bigger one before doing something as drastic as that. There might have been a hiccup in the production line, and that's not your fault. Yeah, but I already have 2 of the replacement disks, both doing the same thing. One has a firmware newer than my old disk

Re: [zfs-discuss] is this pool recoverable?

2010-04-03 Thread Edward Ned Harvey
Your original zpool status says that this pool was last accessed on another system, which I believe is what caused the pool to fail, particularly if it was accessed simultaneously from two systems. The message last accessed on another system is the normal behavior if the pool is

Re: [zfs-discuss] how can I remove files when the fiile system is full?

2010-04-02 Thread Edward Ned Harvey
On opensolaris? Did you try deleting any old BEs? Don't forget to zfs destroy rp...@snapshot In fact, you might start with destroying snapshots ... if there are any occupying space.
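
A hedged sketch of that cleanup sequence on OpenSolaris (boot environment and snapshot names hypothetical):

  beadm list                               # old boot environments still holding space
  beadm destroy opensolaris-1              # remove an unused BE
  zfs list -t snapshot -r rpool            # remaining snapshots on the root pool
  zfs destroy rpool/ROOT/opensolaris@old   # free the space the snapshot pins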

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
Seriously, all disks configured WriteThrough (spindle and SSD disks alike) using the dedicated ZIL SSD device, very noticeably faster than enabling the WriteBack. What do you get with both SSD ZIL and WriteBack disks enabled? I mean if you have both why not use both? Then both

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
I know it is way after the fact, but I find it best to coerce each drive down to the whole GB boundary using format (create Solaris partition just up to the boundary). Then if you ever get a drive a little smaller it still should fit. It seems like it should be unnecessary. It seems like

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
http://nfs.sourceforge.net/ I think B4 is the answer to Casper's question: We were talking about ZFS, and under what circumstances data is flushed to disk, in what way sync and async writes are handled by the OS, and what happens if you disable ZIL and lose power to your system. We were

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
I am envisioning a database, which issues a small sync write, followed by a larger async write. Since the sync write is small, the OS would prefer to defer the write and aggregate into a larger block. So the possibility of the later async write being committed to disk before the older

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
Hello, I have had this problem this week. Our ZIL SSD died (APT SLC SSD 16GB). Because we had no spare drive in stock, we ignored it. Then we decided to update our Nexenta 3 alpha to beta, exported the pool and made a fresh install to have a clean system, and tried to import the pool. We

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
ZFS recovers to a crash-consistent state, even without the slog, meaning it recovers to some state through which the filesystem passed in the seconds leading up to the crash. This isn't what UFS or XFS do. The on-disk log (slog or otherwise), if I understand right, can actually make the

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
If you have zpool less than version 19 (when ability to remove log device was introduced) and you have a non-mirrored log device that failed, you had better treat the situation as an emergency. Instead, do man zpool and look for zpool remove. If it says supports removing log devices
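
The check and the removal being described (pool and device names hypothetical):

  zpool upgrade -v              # pool versions this zpool binary supports; version 19 adds log device removal
  zpool remove tank c3t0d0      # only works for log, cache, and spare devices, and only on a new enough pool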

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
Dude, don't be so arrogant. Acting like you know what I'm talking about better than I do. Face it that you have something to learn here. You may say that, but then you post this: Acknowledged. I read something arrogant, and I replied even more arrogant. That was dumb of me.

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
Only a broken application uses sync writes sometimes, and async writes at other times. Suppose there is a virtual machine, with virtual processes inside it. Some virtual process issues a sync write to the virtual OS, meanwhile another virtual process issues an async write. Then the virtual OS

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Edward Ned Harvey
The purpose of the ZIL is to act like a fast log for synchronous writes. It allows the system to quickly confirm a synchronous write request with the minimum amount of work. Bob and Casper and some others clearly know a lot here. But I'm hearing conflicting information, and don't know what

Re: [zfs-discuss] To slice, or not to slice

2010-04-02 Thread Edward Ned Harvey
with the filesystem that you actually plan to use in your pool. Anyone care to offer any comments on that? From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey Sent: Friday, April 02, 2010 5:23 PM To: zfs-discuss@opensolaris.org Subject

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Edward Ned Harvey
If you disable the ZIL, the filesystem still stays correct in RAM, and the only way you lose any data such as you've described, is to have an ungraceful power down or reboot. The advice I would give is: Do zfs autosnapshots frequently (say ... every 5 minutes, keeping the most recent 2

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Edward Ned Harvey
Can you elaborate? Just today, we got the replacement drive that has precisely the right version of firmware and everything. Still, when we plugged in that drive and created a simple volume in the StorageTek RAID utility, the new drive is 0.001 GB smaller than the old drive. I'm still

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Edward Ned Harvey
If you have an ungraceful shutdown in the middle of writing stuff, while the ZIL is disabled, then you have corrupt data. Could be files that are partially written. Could be wrong permissions or attributes on files. Could be missing files or directories. Or some other problem. Some

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Edward Ned Harvey
This approach does not solve the problem. When you do a snapshot, the txg is committed. If you wish to reduce the exposure to loss of sync data and run with ZIL disabled, then you can change the txg commit interval -- however changing the txg commit interval will not eliminate the

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Edward Ned Harvey
Is that what sync means in Linux? A sync write is one in which the application blocks until the OS acks that the write has been committed to disk. An async write is given to the OS, and the OS is permitted to buffer the write to disk at its own discretion. Meaning the async write function

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
Use something other than Open/Solaris with ZFS as an NFS server? :) I don't think you'll find the performance you paid for with ZFS and Solaris at this time. I've been trying for more than a year, and watching dozens, if not hundreds, of threads. Getting halfway decent performance from NFS

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
Nobody knows any way for me to remove my unmirrored log device. Nobody knows any way for me to add a mirror to it (until Since snv_125 you can remove log devices. See http://bugs.opensolaris.org/view_bug.do?bug_id=6574286 I've used this all the time during my testing and was able to

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
Would your users be concerned if there was a possibility that after extracting a 50 MB tarball that files are incomplete, whole subdirectories are missing, or file permissions are incorrect? Correction: Would your users be concerned if there was a possibility that after extracting a 50MB

Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-31 Thread Edward Ned Harvey
I did those tests and here are the results: r...@sl-node01:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool01 91.9G 136G 23K /mypool01 mypool01/storage01 91.9G 136G 91.7G /mypool01/storage01

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
I see the source for some confusion. On the ZFS Best Practices page: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide It says: Failure of the log device may cause the storage pool to be inaccessible if you are running the Solaris Nevada release prior to build 96 and

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
A MegaRAID card with write-back cache? It should also be cheaper than the F20. I haven't posted results yet, but I just finished a few weeks of extensive benchmarking various configurations. I can say this: WriteBack cache is much faster than naked disks, but if you can buy an SSD or two for

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Edward Ned Harvey
We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact same version of firmware and recreated the volumes on new drives arriving from

Re: [zfs-discuss] zfs diff

2010-03-30 Thread Edward Ned Harvey
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams nicolas.willi...@sun.com wrote: One really good use for zfs diff would be: as a way to index zfs send backups by contents. Or to generate the list of files for incremental backups via NetBackup or similar. This is especially important

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
But the speedup of disabling the ZIL altogether is appealing (and would probably be acceptable in this environment). Just to make sure you know ... if you disable the ZIL altogether, and you have a power interruption, failed cpu, or kernel halt, then you're likely to have a corrupt unusable

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
standard ZIL:        7m40s (ZFS default)
1x SSD ZIL:          4m07s (Flash Accelerator F20)
2x SSD ZIL:          2m42s (Flash Accelerator F20)
2x SSD mirrored ZIL: 3m59s (Flash Accelerator F20)
3x SSD ZIL:          2m47s (Flash Accelerator F20)
4x SSD

Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-30 Thread Edward Ned Harvey
The problem that I have now is that each created snapshot is always equal to zero... zfs is just not storing changes that I have made to the file system before making a snapshot. r...@sl-node01:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool01
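
A quick way to check whether changes are actually landing in the dataset before the snapshot is taken (dataset names from the quoted output, snapshot name hypothetical):

  zfs list -o name,used,refer -r mypool01   # REFER should grow as new data is written
  zfs snapshot mypool01/storage01@check
  zfs list -t snapshot -r mypool01          # a fresh snapshot shows ~0 USED until later writes diverge from it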

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
Again, we can't get a straight answer on this one.. (or at least not 1 straight answer...) Since the ZIL logs are committed atomically they are either committed in FULL, or NOT at all (by way of rollback of incomplete ZIL applies at zpool mount time / or transaction rollbacks if things

Re: [zfs-discuss] zfs recreate questions

2010-03-30 Thread Edward Ned Harvey
Anyway, my question is, [...] as expected I can't import it because the pool was created with a newer version of ZFS. What options are there to import? I'm quite sure there is no option to import or receive or downgrade a zfs filesystem from a later version. I'm pretty sure your only option

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
If the ZIL device goes away then zfs might refuse to use the pool without user affirmation (due to potential loss of uncommitted transactions), but if the dedicated ZIL device is gone, zfs will use disks in the main pool for the ZIL. This has been clarified before on the list by top zfs

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
So you think it would be ok to shutdown, physically remove the log device, and then power back on again, and force import the pool? So although there may be no live way to remove a log device from a pool, it might still be possible if you offline the pool to ensure writes are all completed

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Edward Ned Harvey
Just to make sure you know ... if you disable the ZIL altogether, and you have a power interruption, failed cpu, or kernel halt, then you're likely to have a corrupt unusable zpool, or at least data corruption. If that is indeed acceptable to you, go nuts. ;-) I believe that the

Re: [zfs-discuss] SSD As ARC

2010-03-28 Thread Edward Ned Harvey
You can't share a device (either as ZIL or L2ARC) between multiple pools. Discussion here some weeks ago suggested that an L2ARC device is used for all ARC evictions, regardless of the pool. I'd very much like an authoritative statement (and corresponding documentation

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-26 Thread Edward Ned Harvey
Using fewer than 4 disks in a raidz2 defeats the purpose of raidz2, as you will always be in a degraded mode. Freddie, are you nuts? This is false. Sure you can use raidz2 with 3 disks in it. But it does seem pointless to do that instead of a 3-way mirror.

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-26 Thread Edward Ned Harvey
Coolio. Learn something new everyday. One more way that raidz is different from RAID5/6/etc. Freddie, again, you're wrong. Yes, it's perfectly acceptable to create either raid-5 or raidz using 2 disks. It's not degraded, but it does seem pointless to do this instead of a mirror.

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-26 Thread Edward Ned Harvey
Just because most people are probably too lazy to click the link, I’ll paste a phrase from that sun.com webpage below: “Creating a single-parity RAID-Z pool is identical to creating a mirrored pool, except that the ‘raidz’ or ‘raidz1’ keyword is used instead of ‘mirror’.” And “zpool create
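
The two layouts being compared, both legal with three disks (device names hypothetical):

  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0    # single-parity raidz, roughly two disks of usable space
  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # 3-way mirror, one disk of usable space, survives losing any two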

Re: [zfs-discuss] ZFS where to go!

2010-03-26 Thread Edward Ned Harvey
OK, I have 3Ware looking into a driver for my cards (3ware 9500S-8) as I don't see an OpenSolaris driver for them. But this led me to find that they do have a FreeBSD driver, so I could still use ZFS. What does everyone think about that? I bet it is not as mature as on OpenSolaris. mature is

Re: [zfs-discuss] ZFS backup configuration

2010-03-26 Thread Edward Ned Harvey
It seems like zpool export will quiesce the drives and mark the pool as exported. This would be good if we wanted to move the pool at that time, but we are thinking of a disaster recovery scenario. It would be nice to export just the config so that if our controller dies, we can use the

Re: [zfs-discuss] zfs send and ARC

2010-03-26 Thread Edward Ned Harvey
In the Thoughts on ZFS Pool Backup Strategies thread it was stated that zfs send sends uncompressed data and uses the ARC. If zfs send sends uncompressed data which has already been compressed, this is not very efficient, and it would be *nice* to see it send the original compressed data. (or an

Re: [zfs-discuss] ZFS where to go!

2010-03-26 Thread Edward Ned Harvey
While I use zfs with FreeBSD (FreeNAS appliance with 4x SATA 1 TByte drives), it is trailing OpenSolaris by at least a year if not longer, and hence lacks many of the key features for which people pick zfs over other file systems. The performance, especially CIFS, is quite lacking. Purportedly (I have never

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-25 Thread Edward Ned Harvey
I think the point is to say: ZFS software raid is both faster and more reliable than your hardware raid. Surprising though it may be for a newcomer, I have statistics to back that up. Can you share it? Sure. Just go to http://nedharvey.com and you'll see four links on the left side,
