Re: [zfs-discuss] ZFS upgrade.

2010-01-07 Thread James Lever
Hi John, On 08/01/2010, at 7:19 AM, john_dil...@blm.gov wrote: Is there a way to upgrade my current ZFS version? I show the version could be as high as 22. The version of Solaris you are running only supports ZFS versions up to version 15, as demonstrated by your zfs upgrade -v output. You
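(For reference, the usual sequence on that release; a minimal sketch, assuming the build you are running actually ships the newer pool and filesystem versions you want.)

  zfs upgrade -v     # filesystem versions this build supports
  zpool upgrade -v   # pool versions this build supports
  zpool upgrade -a   # upgrade every pool to the newest supported version
  zfs upgrade -a     # upgrade every filesystem to the newest supported version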

Re: [zfs-discuss] How to destroy your system in funny way with ZFS

2009-12-27 Thread James Lever
Hi Tomas, On 27/12/2009, at 7:25 PM, Tomas Bodzar wrote: pfexec zpool set dedup=verify rpool pfexec zfs set compression=gzip-9 rpool pfexec zfs set devices=off rpool/export/home pfexec zfs set exec=off rpool/export/home pfexec zfs set setuid=off rpool/export/home grub doesn’t support gzip
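(A minimal sketch of backing the boot dataset off gzip so GRUB can keep reading it, assuming rpool is the boot dataset and that lzjb is the GRUB-readable setting on this build; blocks already written with gzip stay compressed until rewritten.)

  pfexec zfs set compression=lzjb rpool   # GRUB-readable compression (assumption for this build)
  pfexec zfs set compression=off rpool    # or disable compression on the boot dataset entirely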

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-09 Thread James Lever
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote: The dedup property applies to all writes so the settings for the pool of origin don't matter, just those on the destination pool. Just a quick related question I’ve not seen answered anywhere else: Is it safe to have dedup running on your
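(A short sketch of the point Adam makes: dedup is governed by the property on the receiving dataset, not the source; destpool/backup below is a hypothetical target.)

  zfs set dedup=on destpool/backup
  zfs send tank/data@today | zfs receive destpool/backup/data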

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread James Lever
On 18/11/2009, at 7:33 AM, Dushyanth wrote: Now when I run dd and create a big file on /iftraid0/fs and watch `iostat -xnz 2`, I don't see any stats for c8t4d0, nor does the write performance improve. I have not formatted either c9t9d0 or c8t4d0. What am I missing? Last I checked, iSCSI

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread James Lever
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote: But how can I update my current OpenSolaris (2009.06) or Solaris 10 (5/09) to use this? Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris? For OpenSolaris, you change your repository and switch to the development
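(A sketch of the repository switch on OpenSolaris 2009.06; the dev repository URL shown is the one in general use at the time.)

  pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pfexec pkg image-update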

Re: [zfs-discuss] Solaris 10 samba in AD mode broken when user in 32 AD groups

2009-10-13 Thread James Lever
On 14/10/2009, at 2:27 AM, casper@sun.com wrote: So why not the built-in CIFS support in OpenSolaris? Probably has a similar issue, but still. In my case, it’s at least two reasons: * Crossing mountpoints requires separate shares - Samba can share an entire hierarchy regardless of

Re: [zfs-discuss] periodic slow responsiveness

2009-09-25 Thread James Lever
On 26/09/2009, at 1:14 AM, Ross Walker wrote: By any chance do you have copies=2 set? No, only 1. So the double data going to the slog (as reported by iostat) is still confusing me and clearly potentially causing significant harm to my performance. Also, try setting

Re: [zfs-discuss] periodic slow responsiveness

2009-09-24 Thread James Lever
On 25/09/2009, at 2:58 AM, Richard Elling wrote: On Sep 23, 2009, at 10:00 PM, James Lever wrote: So it turns out that the problem is that all writes coming via NFS are going through the slog. When that happens, the transfer speed to the device drops to ~70MB/s (the write speed of his

Re: [zfs-discuss] periodic slow responsiveness

2009-09-24 Thread James Lever
On 25/09/2009, at 1:24 AM, Bob Friesenhahn wrote: On Thu, 24 Sep 2009, James Lever wrote: Is there a way to tune this on the NFS server or clients such that when I perform a large synchronous write, the data does not go via the slog device? Synchronous writes are needed by NFS
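(If the build in use carries the logbias property, which arrived in later development builds, large synchronous streams can be steered away from the slog per dataset; a sketch, with tank/nfs as a hypothetical exported dataset.)

  zfs set logbias=throughput tank/nfs   # send large sync writes to the main pool devices
  zfs set logbias=latency tank/nfs      # default: latency-optimised, uses the slog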

Re: [zfs-discuss] periodic slow responsiveness

2009-09-24 Thread James Lever
On 25/09/2009, at 11:49 AM, Bob Friesenhahn wrote: The commentary says that normally the COMMIT operations occur during close(2) or fsync(2) system call, or when encountering memory pressure. If the problem is slow copying of many small files, this COMMIT approach does not help very much

Re: [zfs-discuss] periodic slow responsiveness

2009-09-24 Thread James Lever
I thought I would try the same test using dd bs=131072 if=source of=/path/to/nfs to see what the results looked like… It is very similar to before, about 2x slog usage and the same timing and write totals. Friday, 25 September 2009 1:49:48 PM EST extended device

Re: [zfs-discuss] periodic slow responsiveness

2009-09-23 Thread James Lever
On 08/09/2009, at 2:01 AM, Ross Walker wrote: On Sep 7, 2009, at 1:32 AM, James Lever j...@jamver.id.au wrote: Well, an MD1000 holds 15 drives, so a good compromise might be two 7-drive RAIDZ2s with a hot spare... That should provide 320 IOPS instead of 160, a big difference. The issue
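(A sketch of the suggested layout, with hypothetical device names: two 7-disk raidz2 vdevs plus one hot spare from the 15-bay MD1000.)

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
    spare c1t14d0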

[zfs-discuss] periodic slow responsiveness

2009-09-06 Thread James Lever
I’m experiencing occasional slow responsiveness on an OpenSolaris b118 system typically noticed when running an ‘ls’ (no extra flags, so no directory service lookups). There is a delay of between 2 and 30 seconds but no correlation has been noticed with load on the server and the slow

Re: [zfs-discuss] periodic slow responsiveness

2009-09-06 Thread James Lever
On 07/09/2009, at 6:24 AM, Richard Elling wrote: On Sep 6, 2009, at 7:53 AM, Ross Walker wrote: On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote: I’m experiencing occasional slow responsiveness on an OpenSolaris b118 system typically noticed when running an ‘ls’ (no

Re: [zfs-discuss] periodic slow responsiveness

2009-09-06 Thread James Lever
On 07/09/2009, at 11:08 AM, Richard Elling wrote: Ok, just so I am clear, when you mean local automount you are on the server and using the loopback -- no NFS or network involved? Correct. And the behaviour has been seen locally as well as remotely. You are looking for I/O that takes

Re: [zfs-discuss] periodic slow responsiveness

2009-09-06 Thread James Lever
On 07/09/2009, at 10:46 AM, Ross Walker wrote: zpool is RAIDZ2 comprised of 10 * 15kRPM SAS drives behind an LSI 1078 w/ 512MB BBWC exposed as RAID0 LUNs (Dell MD1000 behind PERC 6/E) with 2x SSDs each partitioned as 10GB slog and 36GB remainder as l2arc behind another LSI 1078 w/ 256MB
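(A sketch of attaching the partitioned SSDs described above, with hypothetical device names: s0 as the 10GB slog slice, s1 as the L2ARC slice.)

  pfexec zpool add tank log mirror c9t0d0s0 c9t1d0s0
  pfexec zpool add tank cache c9t0d0s1 c9t1d0s1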

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread James Lever
On 02/09/2009, at 9:54 AM, Adam Leventhal wrote: After investigating this problem a bit I'd suggest avoiding deploying RAID-Z until this issue is resolved. I anticipate having it fixed in build 124. Thanks for the status update on this Adam. cheers, James

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-28 Thread James Lever
On 28/08/2009, at 3:23 AM, Adam Leventhal wrote: There appears to be a bug in the RAID-Z code that can generate spurious checksum errors. I'm looking into it now and hope to have it fixed in build 123 or 124. Apologies for the inconvenience. Are the errors being generated likely to cause

[zfs-discuss] zfs send/receive and compression

2009-08-22 Thread James Lever
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to backup to a remote system and would prefer not to have to recompress everything again at such
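(At this point the stream is always sent uncompressed, so the practical answer is to set compression on the destination and let receive recompress as blocks are written; backuphost and the dataset names below are assumptions.)

  ssh backuphost pfexec zfs set compression=gzip-9 backup
  zfs send tank/fs@snap | ssh backuphost pfexec zfs receive backup/fs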

Re: [zfs-discuss] Need tips on zfs pool setup..

2009-08-04 Thread James Lever
On 04/08/2009, at 9:42 PM, Joseph L. Casale wrote: I noticed a huge improvement when I moved a virtualized pool off a series of 7200 RPM SATA discs to even 10k SAS drives. Night and day... What I would really like to know is if it makes a big difference comparing say 7200RPM drives in

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread James Lever
On 05/08/2009, at 10:36 AM, Carson Gaspar wrote: Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support recently. Yep, it's a MegaRAID device. I have been using one with a Samsung SSD in RAID0 mode (to avail myself of the cache) recently with great success. cheers, James

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread James Lever
On 05/08/2009, at 11:36 AM, Ross Walker wrote: Which model? PERC 6/E w/512MB BBWC.

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread James Lever
On 05/08/2009, at 11:41 AM, Ross Walker wrote: What is your recipe for these? There wasn't one! ;) The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3. cheers, James

Re: [zfs-discuss] [storage-discuss] ZFS and deduplication

2009-08-03 Thread James Lever
Nathan Hudson-Crim, On 04/08/2009, at 8:02 AM, Nathan Hudson-Crim wrote: Andre, I've seen this before. What you have to do is ask James each question 3 times and on the third time he will tell the truth. ;) I know this is probably meant to be seen as a joke, but it's clearly in very poor

Re: [zfs-discuss] feature proposal

2009-07-30 Thread James Lever
Hi Darren, On 30/07/2009, at 6:33 PM, Darren J Moffat wrote: That already works if you have the snapshot delegation as that user. It even works over NFS and CIFS. Can you give us an example of how to correctly get this working? I've read through the manpage but have not managed to get the
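(A sketch of the delegation Darren describes; the user jlever and the dataset are hypothetical. The snapshot plus mount permissions are what let the user create snapshots, including via mkdir in .zfs/snapshot over NFS/CIFS.)

  pfexec zfs allow -u jlever snapshot,mount,destroy tank/home/jlever
  pfexec zfs allow tank/home/jlever     # show the delegations now in place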

Re: [zfs-discuss] [indiana-discuss] zfs issues?

2009-07-29 Thread James Lever
On 29/07/2009, at 12:00 AM, James Lever wrote: CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub rpool causes zpool hang This bug I logged has been marked as related to CR 6843235 which is fixed in snv 119. cheers, James

Re: [zfs-discuss] [indiana-discuss] zfs issues?

2009-07-28 Thread James Lever
Thanks for that Brian. I've logged a bug: CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub rpool causes zpool hang Just discovered after trying to create a further crash dump that it's failing and rebooting with the following error (just caught it prior to the reboot):

Re: [zfs-discuss] [indiana-discuss] zfs issues?

2009-07-27 Thread James Lever
On 28/07/2009, at 6:44 AM, dick hoogendijk wrote: Are there any known issues with zfs in OpenSolaris B118? I run my pools formatted like the original release 2009.06 (I want to be able to go back to it ;-). I'm a bit scared after reading about serious issues in B119 (will be skipped, I heard).

Re: [zfs-discuss] [indiana-discuss] zfs issues?

2009-07-27 Thread James Lever
On 28/07/2009, at 9:22 AM, Robert Thurlow wrote: I can't help with your ZFS issue, but to get a reasonable crash dump in circumstances like these, you should be able to do savecore -L on OpenSolaris. That would be well and good if I could get a login - due to the rpool being unresponsive,
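(For completeness, the live-dump sequence Robert refers to; it does need a dump device already configured.)

  pfexec dumpadm          # confirm the dump device and savecore directory
  pfexec savecore -L      # capture a crash dump of the live, running system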

Re: [zfs-discuss] deduplication

2009-07-14 Thread James Lever
On 15/07/2009, at 1:51 PM, Jean Dion wrote: Do we know if this web article will be discussed at the conference in Brisbane, Australia this week? http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news I do not expect details, but at least Sun's position on this

Re: [zfs-discuss] surprisingly poor performance

2009-07-07 Thread James Lever
On 07/07/2009, at 8:20 PM, James Andrewartha wrote: Have you tried putting the slog on this controller, either as an SSD or regular disk? It's supported by the mega_sas driver, x86 and amd64 only. What exactly are you suggesting here? Configure one disk on this array as a dedicated

Re: [zfs-discuss] surprisingly poor performance

2009-07-05 Thread James Lever
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote: It seems like you may have selected the wrong SSD product to use. There seems to be a huge variation in performance (and cost) with so-called enterprise SSDs. SSDs with capacitor-backed write caches seem to be fastest. Do you have any

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-05 Thread James Lever
On 05/07/2009, at 1:57 AM, Ross Walker wrote: Barriers are disabled by default on ext3 mounts... Google it and you'll see interesting threads in the LKML. Seems there was some serious performance degradation in using them. A lot of decisions in Linux are made in favor of performance over
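(For comparison when benchmarking, barriers can be requested explicitly on an ext3 mount; the device and mountpoint below are hypothetical.)

  mount -o remount,barrier=1 /dev/sda1 /data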

Re: [zfs-discuss] surprisingly poor performance

2009-07-05 Thread James Lever
On 06/07/2009, at 9:31 AM, Ross Walker wrote: There are two types of SSD drives on the market, the fast write SLC (single level cell) and the slow write MLC (multi level cell). MLC is usually used in laptops as SLC drives over 16GB usually go for $1000+ which isn't cost effective in a

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
Hej Henrik, On 03/07/2009, at 8:57 PM, Henrik Johansen wrote: Have you tried running this locally on your OpenSolaris box - just to get an idea of what it could deliver in terms of speed ? Which NFS version are you using ? Most of the tests shown in my original message are local except the

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
Hi Mertol, On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote: ZFS SSD usage behaviour heavily depends on access pattern, and for async ops ZFS will not use SSDs. I'd suggest you disable the SSDs, create a ram disk and use it as the SLOG device to compare the performance. If performance
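(A sketch of the ramdisk-slog experiment Mertol suggests, for comparison only and never for production; the name and 2GB size are assumptions.)

  pfexec ramdiskadm -a slogtest 2g
  pfexec zpool add tank log /dev/ramdisk/slogtest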

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote: Slog in ramdisk is analogous to no slog at all and disabled zil (well, it may actually be a bit worse). If your old system is 5 years old, the difference in the above numbers may be due to differences in CPU and memory speed, and so it

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
On 04/07/2009, at 10:42 AM, Ross Walker wrote: XFS on LVM or EVMS volumes can't do barrier writes due to the lack of barrier support in LVM and EVMS, so it doesn't do a hard cache sync like it would on a raw disk partition which makes the numbers higher, BUT with battery backed write

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
On 04/07/2009, at 1:49 PM, Ross Walker wrote: I ran some benchmarks back when verifying this, but didn't keep them unfortunately. You can google: XFS Barrier LVM OR EVMS and see the threads about this. Interesting reading. Testing seems to show that either it's not relevant or there is

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread James Lever
On 25/06/2009, at 5:16 AM, Miles Nordin wrote: and mpt is the 1068 driver, proprietary, works on x86 and SPARC. then there is also itmpt, the third-party-downloadable closed-source driver from LSI Logic, dunno much about it but someone here used it. I'm confused. Why do you say the mpt

Re: [zfs-discuss] cutting up a SSD for read/log use...

2009-06-21 Thread James Lever
Hi Erik, On 22/06/2009, at 1:15 PM, Erik Trimble wrote: I just looked at pricing for the higher-end MLC devices, and it looks like I'm better off getting a single drive of 2X capacity than two with X capacity. Leaving aside the issue that by using 2 drives I get 2 x 3.0Gbps SATA
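(A sketch of the single-SSD split being weighed up here, with a hypothetical device: one slice for the slog, the remainder for L2ARC.)

  pfexec zpool add tank log c8t1d0s0
  pfexec zpool add tank cache c8t1d0s1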

Re: [zfs-discuss] how to do backup

2009-06-20 Thread James Lever
On 20/06/2009, at 9:55 PM, Charles Hedrick wrote: I have a USB disk, to which I want to do a backup. I've used send | receive. It works fine until I try to reboot. At that point the system fails to come up because the backup copy is set to be mounted at the original location so the system
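(A sketch of the usual way around that: receive without mounting and keep the copy from auto-mounting at the original location on boot; the backup pool and dataset names are assumptions.)

  zfs send rpool/export/home@backup | zfs receive -u backup/export/home
  zfs set canmount=noauto backup/export/home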