Re: [zfs-discuss] ZFS Crypto Updates [PSARC/2009/443 FastTrack timeout 08/24/2009]

2009-08-17 Thread Garrett D'Amore
Darren J Moffat wrote: Dataset rename restrictions --- On rename a dataset can not be moved out of its wrapping key hierarchy, i.e. where it inherits the keysource property from. This is best explained by example: # zfs get -r keysource tank NAME PROPERTY
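A hypothetical reconstruction of the kind of output that example refers to, using the keysource "format,location" syntax from the case (names and values are illustrative, not from the original mail):

    # zfs get -r keysource tank
    NAME       PROPERTY   VALUE              SOURCE
    tank       keysource  passphrase,prompt  local
    tank/home  keysource  passphrase,prompt  inherited from tank

Renaming tank/home out from under tank would detach it from the keysource it inherits, which is why such a rename is disallowed.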

[zfs-discuss] no hot spare activation?

2010-04-05 Thread Garrett D'Amore
While testing a zpool with a different storage adapter using my blkdev device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3 disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Garrett D'Amore
On 04/ 5/10 05:28 AM, Eric Schrock wrote: On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote: Am I missing something here? Under what conditions can I expect hot spares to be recruited? Hot spares are activated by the zfs-retire agent in response to a list.suspect event containing

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Garrett D'Amore
On 5/24/2010 2:48 PM, Thomas Burgess wrote: I recently got a new SSD (OCZ Vertex LE 50GB) Not familiar with that model It seems to work really well as a ZIL performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just

Re: [zfs-discuss] get parent dataset

2010-05-25 Thread Garrett D'Amore
On 5/25/2010 2:55 AM, Vadim Comanescu wrote: Is there any way you can display the parent of a dataset with the zfs (get/list) command? I do not need to list, for example, all of a dataset's children by using -r just to get the parent of a child. There are ways of grepping and doing some preg

Re: [zfs-discuss] get parent dataset

2010-05-25 Thread Garrett D'Amore
On 5/25/2010 8:24 AM, Brandon High wrote: On Tue, May 25, 2010 at 2:55 AM, Vadim Comanescu va...@syneto.net wrote: Is there any way you can display the parent of a dataset with the zfs (get/list) command? I do not need to list, for example, all of a dataset's children by using -r just to get

Re: [zfs-discuss] USB Flashdrive as SLOG?

2010-05-25 Thread Garrett D'Amore
The USB stack in OpenSolaris is ... complex (STREAMS-based!), and probably not the most performant or reliable portion of the system. Furthermore, the mass storage layer, which encapsulates SCSI, is not tuned for a high number of IOPS or low latencies, and the stack makes different

Re: [zfs-discuss] ZFS dedup, snapshots and NFS

2010-05-26 Thread Garrett D'Amore
On 5/26/2010 11:47 AM, Dmitry Sorokin wrote: Hi All, I was just wondering if the issue that affects NFS availability when deleting large snapshots on ZFS datasets with dedup enabled has been fixed. There is a fix for this in b141 of the OpenSolaris source product. We are looking at

Re: [zfs-discuss] creating a fast ZIL device for $200

2010-05-27 Thread Garrett D'Amore
On 5/27/2010 10:33 AM, sensille wrote: (resent because of received bounce) Edward Ned Harvey wrote: From: sensille [mailto:sensi...@gmx.net] So this brings me back to the question I indirectly asked in the middle of a much longer previous email - Is there some way, in software, to detect

Re: [zfs-discuss] zfs/lofi/share panic

2010-05-27 Thread Garrett D'Amore
On 5/27/2010 12:21 PM, Carson Gaspar wrote: Jan Kryl wrote: the bug (6798273) has been closed as incomplete with following note: I cannot reproduce any issue with the given testcase on b137. So you should test this with b137 or newer build. There have been some extensive changes going to

Re: [zfs-discuss] nfs share of nested zfs directories?

2010-05-27 Thread Garrett D'Amore
I share filesystems all the time this way, and have never had this problem. My first guess would be a problem with NFS or directory permissions. You are using NFS, right? - Garrett On 5/27/2010 1:02 PM, Cassandra Pugh wrote: I was wondering if there is a special option to share out a

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
Using a stripe of mirrors (RAID 1+0) you can get the benefits of multiple-spindle performance, easy expansion (just add new mirrors to the end of the stripe), and 100% data redundancy. If you can afford to pay double for your storage (the cost of mirroring), this is IMO the best
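A minimal sketch of that layout with hypothetical device names; each top-level vdev is a two-way mirror, and ZFS stripes across them:

    # a stripe of two mirrors, each side on a different controller
    zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0
    # expand later by appending another mirror to the stripe
    zpool add tank mirror c0t2d0 c1t2d0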

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote: On Thu, June 3, 2010 10:15, Garrett D'Amore wrote: Using a stripe of mirrors (RAID 1+0) you can get the benefits of multiple-spindle performance, easy expansion (just add new mirrors to the end of the stripe), and 100

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 12:03 -0500, Bob Friesenhahn wrote: On Thu, 3 Jun 2010, David Dyer-Bennet wrote: In an 8-bay chassis, there are other concerns, too. Do I keep space open for a hot spare? There's no real point in a hot spare if you have only one vdev; that is, 8-drive RAIDZ3 is

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 08:50 -0700, Marty Scholes wrote: Maybe I have been unlucky too many times doing storage admin in the 90s, but simple mirroring still scares me. Even with a hot spare (you do have one, right?) the rebuild window leaves the entire pool exposed to a single failure.

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 12:22 -0400, Dennis Clarke wrote: If you're clever, you'll also try to make sure each side of the mirror is on a different controller, and if you have enough controllers available, you'll also try to balance the controllers across stripes. Something like this ? #

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote: hot spares in place, but I have the bays reserved for that use. In the latest upgrade, I added 4 2.5" hot-swap bays (which got the system disks out of the 3.5" hot-swap bays). I have two free, and that's the form-factor SSDs come in

Re: [zfs-discuss] ZFS ARC cache issue

2010-06-03 Thread Garrett D'Amore
On Thu, 2010-06-03 at 11:36 -0700, Ketan wrote: Thanx Rick .. but this guide does not offer any method to reduce the ARC cache size on the fly without rebooting the system. And the system's memory utilization has been running very high for 2 weeks now and just 5G of memory is free. And the arc

Re: [zfs-discuss] [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Garrett D'Amore
On Fri, 2010-06-04 at 16:03 +0100, Robert Milkowski wrote: On 04/06/2010 15:46, James Carlson wrote: Petr Benes wrote: add to /etc/system something like (value depends on your needs) * limit greedy ZFS to 4 GiB set zfs:zfs_arc_max = 4294967296 And yes, this has nothing to do
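The snippet quoted above, laid out as it would appear in /etc/system (the 4 GiB value is only an example; the setting takes effect at the next boot):

    * limit greedy ZFS to 4 GiB
    set zfs:zfs_arc_max = 4294967296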

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 07:51 -0700, Christopher George wrote: No Slogs as I haven't seen a compliant SSD drive yet. As the architect of the DDRdrive X1, I can state categorically the X1 correctly implements the SCSI Synchronize Cache (flush cache) command. Christopher George Founder/CTO

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote: Do you lose the data if you lose that 9v feed at the same time the computer loses power? Yes. Hence the need for a separate UPS. - Garrett

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 13:32 -0700, Richard Elling wrote: On Jun 7, 2010, at 11:06 AM, Miles Nordin wrote: the other difference is in the latest comstar which runs in sync-everything mode by default, AIUI. Or it does use that mode only when zvol-backed? Or something. It depends on

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Garrett D'Amore
You can hardly have too much. At least 8 GB, maybe 16 would be good. The benefit will depend on your workload, but zfs and buffer cache will use it all if you have a big enough read working set. -- Garrett Joe Auty j...@netmusician.org wrote: I'm also noticing that I'm a little short on

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Garrett D'Amore
For the record, with my driver (which is not the same as the one shipped by the vendor), I was getting over 150K IOPS with a single DDRdrive X1. It is possible to get very high IOPS with Solaris. However, it might be difficult to get such high numbers with systems based on SCSI/SCSA.

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Garrett D'Amore
On Fri, 2010-06-11 at 11:41 +0200, Joerg Schilling wrote: I am aware of (and these are many) explain, linking against an independent work creates a collective work and not a derivative work. The GPL would only apply if a derivative work was created, but even under US Copyright law, a derivative

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Garrett D'Amore
On Fri, 2010-06-11 at 13:58 +0400, Andrey Kuzmin wrote: # dd if=/dev/zero of=/dev/rdsk/cXtYdZs0 bs=512 I did a test on my workstation a moment ago and got about 21k IOPS from my SATA drive (iostat). The trick here of course is that this is sequential
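A sketch of the test being described: sequential 512-byte writes to the raw device (same placeholder device name as in the mail), with iostat in a second terminal to read off the IOPS:

    dd if=/dev/zero of=/dev/rdsk/cXtYdZs0 bs=512
    # in another terminal, per-device ops/sec at 1-second intervals:
    iostat -xn 1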

Re: [zfs-discuss] High-Performance ZFS (2000MB/s+)

2010-06-15 Thread Garrett D'Amore
On Tue, 2010-06-15 at 04:42 -0700, Arve Paalsrud wrote: Hi, We are currently building a storage box based on OpenSolaris/Nexenta using ZFS. Our hardware specifications are as follows: Quad AMD G34 12-core 2.3 GHz (~110 GHz) 10 Crucial RealSSD (6Gb/s) 42 WD RAID Ed. 4 2TB disks + 6Gb/s

Re: [zfs-discuss] High-Performance ZFS (2000MB/s+)

2010-06-15 Thread Garrett D'Amore
On Tue, 2010-06-15 at 07:36 -0700, Richard Elling wrote: What I want to achieve is 2 GB/s+ NFS traffic against our ESX clusters (also InfiniBand-based), with both dedupe and compression enabled in ZFS. In general, both dedup and compression gain space by trading off performance. You

Re: [zfs-discuss] High-Performance ZFS (2000MB/s+)

2010-06-15 Thread Garrett D'Amore
On Tue, 2010-06-15 at 18:33 +0200, Arve Paalsrud wrote: What about the ZIL bandwidth in this case? I mean, could I stripe across multiple devices to be able to handle higher throughput? Otherwise I would still be limited to the performance of the unit itself (155 MB/s). I think so.
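Listing more than one device after the log keyword gives a striped slog, which is presumably what the "I think so" refers to (pool and device names hypothetical):

    # two log devices; ZFS stripes synchronous writes across them
    zpool add tank log c3t0d0 c3t1d0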

[zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Garrett D'Amore
So I've been working on solving a problem we noticed that when using certain hot pluggable busses (think SAS/SATA hotplug here), that removing a drive did not trigger any resulting response from either FMA or ZFS *until* something tried to use that device. (This removal of a drive can be thought

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Garrett D'Amore
On Thu, 2010-06-17 at 16:16 -0400, Eric Schrock wrote: On Jun 17, 2010, at 3:52 PM, Garrett D'Amore wrote: Anyway, I'm happy to share the code, and even go through the request-sponsor process to push this upstream. I would like the opinions of the ZFS and FMA teams though

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Garrett D'Amore
On Thu, 2010-06-17 at 17:53 -0400, Eric Schrock wrote: On Jun 17, 2010, at 4:35 PM, Garrett D'Amore wrote: I actually started with DKIOCGSTATE as my first approach, modifying sd.c. But I had problems because what I found is that nothing was issuing this ioctl properly except

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Garrett D'Amore
On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: On Jun 17, 2010, at 6:13 PM, Garrett D'Amore wrote: So how do you diagnose the situation where someone trips over a cable, or where the drive was bumped and detached from the cable? I guess I'm OK with the idea

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Garrett D'Amore
On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote: On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote: On 18/06/2010 00:18, Garrett D'Amore wrote: On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: On the SS7000 series, you get an alert that the enclosure has been

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-19 Thread Garrett D'Amore
if Nexenta were to get credit for the fix.) - Garrett On Fri, 2010-06-18 at 09:26 -0700, Garrett D'Amore wrote: On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote: On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote: On 18/06/2010 00:18, Garrett D'Amore wrote: On Thu, 2010-06

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-27 Thread Garrett D'Amore
On Sun, 2010-06-27 at 21:07 -0700, Richard Elling wrote: But that won't solve the OP's problem, which was that OpenSolaris doesn't support his hardware. Nexenta has the same hardware limitations as OpenSolaris. AFAICT, the OP's problem is with a keyboard. The vagaries of keyboards are

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-27 Thread Garrett D'Amore
On Sun, 2010-06-27 at 21:54 -0700, Erik Trimble wrote: On 6/27/2010 9:07 PM, Richard Elling wrote: On Jun 27, 2010, at 8:52 PM, Erik Trimble wrote: But that won't solve the OP's problem, which was that OpenSolaris doesn't support his hardware. Nexenta has the same hardware

Re: [zfs-discuss] ZFS bug - should I be worried about this?

2010-06-28 Thread Garrett D'Amore
On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote: Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain. Yes. Patches should be available. Or adoption may be lowering a lot... I don't have access

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Garrett D'Amore
On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote: To be safe, the protocol needs to be able to discover that the device (host or disk) has been disconnected and reconnected or has been reset, and that either part's assumptions about the state of the other have to be invalidated. I

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread Garrett D'Amore
On Wed, 2010-06-30 at 16:41 -0500, Nicolas Williams wrote: On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote: Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm the only user of the fileserver, so there probably won't be more than two or three computers,

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-03 Thread Garrett D'Amore
I am sorry you feel that way. I will look at your issue as soon as I am able, but I should say that it is almost certain that whatever the problem is, it probably is inherited from OpenSolaris and the build of NCP you were testing was indeed not the final release so some issues are not

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Garrett D'Amore
Compared to b134? Yes! We have fixed many bugs that still exist in 134. Fajar A. Nugraha fa...@fajar.net wrote: On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote: I am sorry you feel that way.  I will look at your issue as soon as I am able, but I should say

Re: [zfs-discuss] ZFS crash

2010-07-07 Thread Garrett D'Amore
On Wed, 2010-07-07 at 10:09 -0700, Mark Christooph wrote: I had an interesting dilemma recently and I'm wondering if anyone here can illuminate on why this happened. I have a number of pools, including the root pool, in on-board disks on the server. I also have one pool on a SAN disk,

Re: [zfs-discuss] Remove non-redundant disk

2010-07-07 Thread Garrett D'Amore
I believe that long term folks are working on solving this problem. I believe bp_rewrite is needed for this work. Mid/short term, the solution to me at least seems to be to migrate your data to a new zpool on the newly configured array, etc. Most enterprises don't incrementally upgrade an array

Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Garrett D'Amore
On Wed, 2010-07-07 at 17:33 -0400, Jacob Ritorto wrote: Thank goodness! Where, specifically, does one obtain this firmware for SPARC? Firmware is firmware -- it should not be host-cpu specific. (At least one *hopes* not, although I *suppose* it is possible to have endian specific interfaces

Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Garrett D'Amore
or not. - Garrett thx jake On 07/07/10 17:46, Garrett D'Amore wrote: On Wed, 2010-07-07 at 17:33 -0400, Jacob Ritorto wrote: Thank goodness! Where, specifically, does one obtain this firmware for SPARC? Firmware is firmware -- it should not be host-cpu specific

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-07 Thread Garrett D'Amore
On Wed, 2010-07-07 at 18:52 -0700, Erik Trimble wrote: On 7/7/2010 6:33 PM, Peter Taps wrote: Folks, As you may have heard, NetApp filed a lawsuit against Sun in 2007 (now carried over to Oracle) for patent infringement over the zfs file system. Now, NetApp is taking a stronger

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-08 Thread Garrett D'Amore
You want the write cache enabled, for sure, with ZFS. ZFS will do the right thing about ensuring write cache is flushed when needed. For the case of a single JBOD, I don't find it surprising that UFS beats ZFS. ZFS is designed for more complex configurations, and provides much better data
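For reference, one way to inspect and enable a disk's write cache on Solaris is format(1M)'s expert mode; a sketch, assuming the disk exposes the cache menu:

    format -e
    (select the disk, then: cache -> write_cache -> display or enable)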

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-08 Thread Garrett D'Amore
On Fri, 2010-07-09 at 00:23 +0200, Ragnar Sundblad wrote: On 8 jul 2010, at 17.23, Garrett D'Amore wrote: You want the write cache enabled, for sure, with ZFS. ZFS will do the right thing about ensuring write cache is flushed when needed. That is not for sure at all, it all depends

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-08 Thread Garrett D'Amore
On Thu, 2010-07-08 at 18:46 -0400, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bertrand Augereau is there a way to compute very quickly some hash of a file in a zfs? As I understand it, everything is

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-08 Thread Garrett D'Amore
On Fri, 2010-07-09 at 10:23 +1000, Peter Jeremy wrote: On 2010-Jul-09 06:46:54 +0800, Edward Ned Harvey solar...@nedharvey.com wrote: md5 is significantly slower (but surprisingly not much slower) and it's a cryptographic hash. Probably not necessary for your needs. As someone else has
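For reference, the straightforward way to hash a file on Solaris, since ZFS offers no shortcut to its internal block checksums (the path is a placeholder):

    digest -a md5 /tank/data/somefile
    digest -a sha256 /tank/data/somefile   # stronger alternative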

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-09 Thread Garrett D'Amore
On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote: ab == Alex Blewitt alex.blew...@gmail.com writes: ab All Mac Minis have FireWire - the new ones have FW800. I tried attaching just two disks to a ZFS host using firewire, and it worked very badly for me. I found: 1. The

Re: [zfs-discuss] ZFS, IPS (IBM ServeRAID) driver, and a kernel panic...

2010-07-09 Thread Garrett D'Amore
First off, you need to test 3.0.3 if you're using dedup. Earlier versions had an unduly large number of issues when used with dedup. Hopefully with 3.0.3 we've got the bulk of the problems resolved. ;-) Secondly, from your stack backtrace, yes, it appears ips is implicated. If I had source for

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Garrett D'Amore
On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote: Linder, Doug wrote: Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how does ZFS make money for Oracle when they don't charge for it? Do you think it's such an important feature that it's a big factor in

Re: [zfs-discuss] Encryption?

2010-07-12 Thread Garrett D'Amore
On Mon, 2010-07-12 at 12:55 -0700, Brandon High wrote: On Mon, Jul 12, 2010 at 10:00 AM, Garrett D'Amore garr...@nexenta.com wrote: Btw, if you want a commercially supported and maintained product, have you looked at NexentaStor? Regardless of what happens

Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-12 Thread Garrett D'Amore
Hey Kris (glad to see someone from my QCOM days!): It should automatically clear itself when you replace the disk. Right now you're still degraded since you don't have full redundancy. - Garrett On Mon, 2010-07-12 at 16:10 -0700, Kris Kasner wrote: Hi Folks.. I have a system that

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-13 Thread Garrett D'Amore
On Tue, 2010-07-13 at 10:51 -0400, Edward Ned Harvey wrote: From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] A private license, with support and indemnification from Sun, would shield Apple from any lawsuit from Netapp. The patent holder is not compelled in any way to

Re: [zfs-discuss] Encryption?

2010-07-14 Thread Garrett D'Amore
On Wed, 2010-07-14 at 01:06 -0700, Peter Taps wrote: Btw, if you want a commercially supported and maintained product, have you looked at NexentaStor? Regardless of what happens with OpenSolaris, we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems employee. :-) --

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-14 Thread Garrett D'Amore
On Wed, 2010-07-14 at 13:59 -0700, Paul B. Henson wrote: On Wed, 14 Jul 2010, Roy Sigurd Karlsbakk wrote: Once the code is in the open, it'll remain there. To quote Cory Doctorow on this, it's easy to release the source of a project, it's like adding ink to your swimming pool, but it's a

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-14 Thread Garrett D'Amore
On Thu, 2010-07-15 at 11:48 +0900, BM wrote: But hey, why fork ZFS and mess with stale Solaris code, if the entire future of Solaris is closed, proprietary payware anyway? And unlike ZFS, we have the totally free BTRFS, which has moved to kernel.org and is *free* and is for

Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-15 Thread Garrett D'Amore
recommend doing a scrub. There are probably other experts here (Richard?) who can suggest a permanent fix. - Garrett Thanks again. --Kris Today at 16:15, Garrett D'Amore garr...@nexenta.com wrote: Hey Kris (glad to see someone from my QCOM days!): It should automatically clear
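The usual sequence after the replacement disk has resilvered, sketched with a hypothetical pool name:

    zpool scrub tank        # re-verify everything against checksums
    zpool status -v tank    # confirm the corrupt-file list is gone
    zpool clear tank        # reset the logged error counters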

Re: [zfs-discuss] How do I clean up corrupted files from zpool status -v?

2010-07-15 Thread Garrett D'Amore
:12 -0700, Kris Kasner wrote: Today at 09:44, Garrett D'Amore garr...@nexenta.com wrote: Those corrupt files are corrupt forever. Until they are removed. I recommend doing a scrub. There are probably other experts here (Richard?) who can suggest a permanent fix. Right, and we're OK

Re: [zfs-discuss] ZFS bug - CVE-2010-2392

2010-07-15 Thread Garrett D'Amore
On Thu, 2010-07-15 at 13:47 -0500, Dave Pooser wrote: Looks like the bug affects through snv_137. Patches are available from the usual location-- https://pkg.sun.com/opensolaris/support for OpenSolaris. Got a CR number for this? (Or a link to where I can find out about the CVE number?)

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Garrett D'Amore
1GB isn't enough for a real system. 2GB is a bare minimum. If you're going to use dedup, plan on a *lot* more. I think 4 or 8 GB are good for a typical desktop or home NAS setup. With FreeBSD you may be able to get away with less. (Probably, in fact.) Btw, instead of RAIDZ2, I'd recommend

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Garrett D'Amore
On Fri, 2010-07-16 at 11:57 -0700, Michael Johnson wrote: us, why do you say I'd be able to get away with less RAM in FreeBSD (as compared to NexentaStor, I'm assuming)? I don't know tons about the OSs in question; is FreeBSD just leaner in general? Compared to Solaris, in my estimation,

Re: [zfs-discuss] Debunking the dedup memory myth

2010-07-18 Thread Garrett D'Amore
On Sun, 2010-07-18 at 16:18 -0700, Richard L. Hamilton wrote: I would imagine that if it's read-mostly, it's a win, but otherwise it costs more than it saves. Even more conventional compression tends to be more resource intensive than decompression... What I'm wondering is when dedup is

Re: [zfs-discuss] Performance advantages of pool with 2x raidz2 vdevs vs. single vdev

2010-07-19 Thread Garrett D'Amore
On Mon, 2010-07-19 at 01:28 -0700, tomwaters wrote: Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old. The old config is a pool of a single vdev of 8 disks in raidz2. The new pool config is 2 vdevs of 7 disk

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-19 Thread Garrett D'Amore
On Mon, 2010-07-19 at 17:40 -0700, Chad Cantwell wrote: fyi, everyone, I have some more info here. In short, Rich Lowe's 142 works correctly (fast) on my hardware, while both my compilations (snv 143, snv 144) and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow. The idea

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Garrett D'Amore
So the next question is, let's figure out what richlowe did differently. ;-) - Garrett

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Garrett D'Amore
Your config makes me think this is an atypical ZFS configuration. As a result, I'm not as concerned. But I think the multithread/concurrency may be the biggest concern here. Perhaps the compilers are doing something different that causes significant cache issues. (Perhaps the compilers

Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Garrett D'Amore
On Wed, 2010-07-21 at 07:56 -0700, Hernan F wrote: Hi, Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want

Re: [zfs-discuss] CPU requirements for zfs performance

2010-07-21 Thread Garrett D'Amore
On Wed, 2010-07-21 at 17:12 +0200, Saso Kiselkov wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 If you plan on using it as a storage server for multimedia data (movies), don't even bother considering compression, as most media files already come heavily compressed. Dedup might still
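e.g., with hypothetical dataset names, leaving compression off where the data is already compressed and enabling the cheap default elsewhere:

    zfs set compression=off tank/media   # movies: already compressed
    zfs set compression=on tank/home     # lzjb is cheap and often wins here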

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-21 Thread Garrett D'Amore
On Wed, 2010-07-21 at 02:21 -0400, Richard Lowe wrote: I built in the normal fashion, with the CBE compilers (cc: Sun C 5.9 SunOS_i386 Patch 124868-10 2009/04/30), and 12u1 lint. I'm not subscribed to zfs-discuss, but have you established whether the problematic build is DEBUG? (the bits I

Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-21 Thread Garrett D'Amore
On Wed, 2010-07-21 at 09:42 -0700, Orvar Korvar wrote: Are there any drawbacks to partitioning an SSD in two and using L2ARC on one partition and ZIL on the other? Any thoughts? It's probably a reasonable approach. The ZIL can be fairly small... only about 8 GB is probably sufficient for
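A sketch of that split, assuming the SSD has been sliced with format(1M) into a small s0 for the slog and the remainder as s1:

    zpool add tank log c2t0d0s0     # ~8 GB slice as the separate log
    zpool add tank cache c2t0d0s1   # the rest of the SSD as L2ARC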

Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Garrett D'Amore
Fundamentally, my recommendation is to choose NFS if your clients can use it. You'll get a lot of potential advantages in the NFS/zfs integration, so better performance. Plus you can serve multiple clients, etc. The only reason to use iSCSI is when you don't have a choice, IMO. You should only

Re: [zfs-discuss] NFS performance?

2010-07-24 Thread Garrett D'Amore
On Sat, 2010-07-24 at 19:54 -0400, Edward Ned Harvey wrote: From: Garrett D'Amore [mailto:garr...@nexenta.com] Fundamentally, my recommendation is to choose NFS if your clients can use it. You'll get a lot of potential advantages in the NFS/zfs integration, so better performance. Plus

Re: [zfs-discuss] NFS performance?

2010-07-25 Thread Garrett D'Amore
On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote: I think there may be very good reason to use iSCSI, if you're limited to gigabit but need to be able to handle higher throughput for a single client. I may be wrong, but I believe iSCSI to/from a single initiator can take advantage of

Re: [zfs-discuss] NFS performance?

2010-07-26 Thread Garrett D'Amore
On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote: On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore garr...@nexenta.com wrote: On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote: I think there may be very good reason to use iSCSI, if you're limited to gigabit but need to be able

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Garrett D'Amore
On 08/13/10 09:02 PM, C. Bergström wrote: Erast wrote: On 08/13/2010 01:39 PM, Tim Cook wrote: http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/ I'm a bit surprised at this development... Oracle really just doesn't get it. The part that's most disturbing to me is the fact they

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Garrett D'Amore
On 08/14/10 09:36 AM, Paul B. Henson wrote: On Fri, 13 Aug 2010, Tim Cook wrote: http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/ Oracle will spend *more* money on OpenSolaris development than Sun did. At least, as a Sun customer, that's the line they were trying to

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Garrett D'Amore
On Sun, 2010-08-15 at 07:38 -0700, Richard Jahnel wrote: FWIW I'm making a significant bet that Nexenta plus Illumos will be the future for the space in which I operate. I had already begun the process of migrating my 134 boxes over to Nexenta before Oracle's cunning plans became known.

Re: [zfs-discuss] ZFS diaspora (was Opensolaris is apparently dead)

2010-08-15 Thread Garrett D'Amore
is available, like some are still using OpenSolaris 2009.06 instead of one of the development releases. In another thread about a month ago Garrett D'Amore (from Nexenta and working with the IllumOS project which Nexenta is a sponsor of) wrote: There is another piece I'll add: even if Oracle

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Garrett D'Amore
Any code can become abandonware; where it effectively bitrots into oblivion. For either ZFS or BTRFS (or any other filesystem) to survive, there have to be sufficiently skilled developers with an interest in developing and maintaining it (whether the interest is commercial or recreational).

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Garrett D'Amore
see, that's good, and is a realistic future scenario for ZFS, AFAICT: there can be a branch that's safe to collaborate on, which cannot go into Solaris 11 and cannot be taken proprietary by Nexenta, either. In fact, we are in the process of creating a non-profit foundation for Illumos

Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-16 Thread Garrett D'Amore
It can be as simple as impact on the cache. 64-bit programs tend to be bigger, and so they have a worse effect on the i-cache. Unless your program does something that can inherently benefit from 64-bit registers, or can take advantage of the richer instruction set that is available to amd64

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Garrett D'Amore
linking is probably going to be difficult. - Garrett On Tue, 2010-08-17 at 17:07 -0400, Miles Nordin wrote: gd == Garrett D'Amore garr...@nexenta.com writes: Joerg is correct that CDDL code can legally live right alongside the GPLv2 kernel code and run in the same program

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 00:16 -0700, Alxen4 wrote: Is there any way to run a start-up script before a non-root pool is mounted? For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm). So I need to create the ramdisk before the actual pool is mounted; otherwise it complains that the log device is
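What the poster is attempting looks roughly like this (names and size hypothetical; the follow-ups below explain why a volatile slog is a bad idea):

    ramdiskadm -a zilram 1g
    zpool add tank log /dev/ramdisk/zilram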

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 00:49 -0700, Alxen4 wrote: Any argument as to why? Because a RAMDISK defeats the purpose of a ZIL, which is to provide fast *stable storage* for data being written. If you are using a RAMDISK, you are not getting any non-volatility guarantees that the ZIL is supposed to

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 01:20 -0700, Alxen4 wrote: Thanks...Now I think I understand... Let me summarize it and let me know if I'm wrong. Disabling the ZIL converts all synchronous calls to asynchronous, which makes ZFS report data acknowledgment before it actually was written to stable
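For context, the global ZIL-disable knob of that era was an /etc/system tunable; it affects every pool and dataset on the host, hence the warnings above:

    * disable the ZIL globally (unsafe for the reasons discussed)
    set zfs:zil_disable = 1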

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Garrett D'Amore
All of this is entirely legal conjecture, by people who aren't lawyers, for issues that have not been tested by court and are clearly subject to interpretation. Since it no longer is relevant to the topic of the list, can we please either take the discussion offline, or agree to just let the

Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-19 Thread Garrett D'Amore
On Thu, 2010-08-19 at 20:14 +0100, Daniel Taylor wrote: On 19 Aug 2010, at 19:42, Garrett D'Amore wrote: Out of interest, what language do you recommend? Depends on the job -- I'm a huge fan of choosing the right tool for the job. I just think C++ tries to be a jack of all trades and winds up

Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-19 Thread Garrett D'Amore
On Fri, 2010-08-20 at 09:23 +1200, Ian Collins wrote: There is no common C++ ABI. So you get into compatibility concerns between code built with different compilers (like Studio vs. g++). Fail. Which is why we have extern "C". Just about any Solaris driver, library or

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-11 Thread Garrett D'Amore
We have ZFS version 28. Whether we ever get another open source update of ZFS from *Oracle* is at this point doubtful. However, I will point out that there are a lot of former Oracle engineers, including both inventors of ZFS and many of the people who have worked on it over the years, who

Re: [zfs-discuss] stupid ZFS question - floating point operations

2010-12-22 Thread Garrett D'Amore
Generally, ZFS does not use floating point. And further, use of floating point in the kernel is exceptionally rare. The kernel does not save floating point context automatically, which means that code that uses floating point needs to take special care to make sure any context from userland

Re: [zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-23 Thread Garrett D'Amore
We should get the reformatter(s) ported to illumos/solaris, if source is available. Something to consider. - Garrett -Original Message- From: zfs-discuss-boun...@opensolaris.org on behalf of Erik Trimble Sent: Wed 12/22/2010 10:36 PM To: Christopher George Cc:

Re: [zfs-discuss] stupid ZFS question - floating point operations

2010-12-24 Thread Garrett D'Amore
] Sent: Thu 12/23/2010 1:32 AM To: Garrett D'Amore Cc: Erik Trimble; Jerry Kemp; zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] stupid ZFS question - floating point operations On 22/12/2010 20:27, Garrett D'Amore wrote: That said, some operations -- and cryptographic ones in particular

Re: [zfs-discuss] A few questions

2011-01-04 Thread Garrett D'Amore
On 01/ 3/11 05:08 AM, Robert Milkowski wrote: On 12/26/10 05:40 AM, Tim Cook wrote: On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote: There are more people outside of Oracle developing for ZFS than inside Oracle.

Re: [zfs-discuss] A few questions

2011-01-05 Thread Garrett D'Amore
On 01/ 4/11 11:48 PM, Tim Cook wrote: On Tue, Jan 4, 2011 at 8:21 PM, Garrett D'Amore garr...@nexenta.com wrote: On 01/ 4/11 09:15 PM, Tim Cook wrote: On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore garr...@nexenta.com

Re: [zfs-discuss] A few questions

2011-01-08 Thread Garrett D'Amore
On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote: From: Khushil Dep [mailto:khushil@gmail.com] I've deployed large SANs on both SuperMicro 825/826/846 and Dell R610/R710s and I've not found any issues so far. I always make a point of installing Intel chipset NICs on the DELLs and disabling

Re: [zfs-discuss] A few questions

2011-01-08 Thread Garrett D'Amore
On 01/ 8/11 10:43 AM, Stephan Budach wrote: Am 08.01.11 18:33, schrieb Edward Ned Harvey: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Garrett D'Amore When you purchase NexentaStor from a top-tier Nexenta Hardware Partner, you get
