Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-10 Thread Fredrich Maney
On Wed, Jun 9, 2010 at 5:06 PM, Maurice Volaski maurice.vola...@einstein.yu.edu wrote: Are you sure of that? This directly contradicts what David Magda said yesterday. Yes. Just how is what he said contradictory? To quote from his message: Either the primary node OR the secondary node can

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-10 Thread David Magda
On Jun 10, 2010, at 03:50, Fredrich Maney wrote: David Magda wrote: Either the primary node OR the secondary node can have active writes to a volume, but NOT BOTH at the same time. Once the secondary becomes active, and has made changes, you have to replicate the changes back to the primary.

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2010-06-10 Thread Peter Eriksson
Just a quick followup that the same issue still seems to be there on our X4500s with the latest Solaris 10 with all the latest patches and the following SSD disks: Intel X25-M G1 firmware 8820 (80GB MLC) Intel X25-M G2 firmware 02HD (160GB MLC) However - things seem to work smoothly with:

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2010-06-10 Thread Pasi Kärkkäinen
On Thu, Jun 10, 2010 at 05:46:19AM -0700, Peter Eriksson wrote: Just a quick followup that the same issue still seems to be there on our X4500s with the latest Solaris 10 with all the latest patches and the following SSD disks: Intel X25-M G1 firmware 8820 (80GB MLC) Intel X25-M G2

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a single device interface. The normal driver stack is likely limited as to how many IOPS it can sustain for a given LUN since the driver stack is optimized for

[zfs-discuss] Sharing root and cache on same SSD?

2010-06-10 Thread Peter Eriksson
Are there any potential problems that one should be aware of if you would like to make dual use of a pair of MLC SSD units: use parts of them as mirrored (ZFS) boot disks, and then use the rest of them as ZFS L2ARC cache devices (for another zpool)? The one thing I can think of is potential
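The split described above could be sketched roughly as follows, assuming each SSD is sliced so that s0 holds the root pool and s1 is left over for cache. All device and pool names here are placeholders, not from the original message:

```shell
# Root pool mirrors slice 0 of both SSDs (normally set up at install time):
#   zpool create rpool mirror c0t0d0s0 c0t1d0s0
# Add the remaining slice of each SSD as an L2ARC cache device to the data pool:
zpool add tank cache c0t0d0s1 c0t1d0s1
# Note: ZFS never mirrors cache devices; listing two simply gives a larger,
# striped L2ARC. A failed cache device degrades performance, not data.
zpool status tank
```

Since L2ARC contents are disposable, losing one SSD costs only cache capacity; the boot pool survives on the surviving mirror half.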

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Andrey Kuzmin
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a single device interface. The normal driver stack is likely limited as to how many

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
On 10/06/2010 15:39, Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Andrey Kuzmin
Sorry, my bad. _Reading_ from /dev/null may be an issue, but not writing to it, of course. Regards, Andrey On Thu, Jun 10, 2010 at 6:46 PM, Robert Milkowski mi...@task.gda.pl wrote: On 10/06/2010 15:39, Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Dennis Clarke
Re-read the section on Swap Space and Virtual Memory for particulars on how Solaris does virtual memory mapping, and the concept of Virtual Swap Space, which is what 'swap -s' is really reporting on. The Solaris Internals book is awesome for this sort of thing. A bit over the top in detail but

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Mike Gerdts
On Thu, Jun 10, 2010 at 9:39 AM, Andrey Kuzmin andrey.v.kuz...@gmail.com wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2010-06-10 Thread Eugen Leitl
On Thu, Jun 10, 2010 at 04:04:42PM +0300, Pasi Kärkkäinen wrote: Intel X25-M G1 firmware 8820 (80GB MLC) Intel X25-M G2 firmware 02HD (160GB MLC) What problems did you have with the X25-M models? I'm not the OP, but I've had two X25-M G2's (80 and 160 GByte) suddenly die on me, out of

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-10 Thread Joe Auty
Garrett D'Amore wrote: You can hardly have too much. At least 8 GB, maybe 16 would be good. The benefit will depend on your workload, but zfs and buffer cache will use it all if you have a big enough read working set. Could lack of RAM be contributing to some of my problems, do you

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread devsk
Erik, That doesn't explain anything. More of the same that I found in the man page. What is swap allocated in physical memory? I have a hard time wrapping my arms around that. Is it something like the swap cache in Linux? If it's disk-backed, where is the actual location of the backing store? And the

Re: [zfs-discuss] Drive showing as removed

2010-06-10 Thread Joe Auty
Cindy Swearingen wrote: Hi Joe, I have no clue why this drive was removed, particularly for a one time failure. I would reconnect/reseat this disk and see if the system recognizes it. If it resilvers, then you're back in business, but I would use zpool status and fmdump to
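The reseat-and-check procedure suggested above could look something like this; the pool name `tank` and device `c0t5d0` are stand-ins for the poster's actual names:

```shell
# After reseating the disk, see whether ZFS picks it up and resilvers:
zpool status -x tank          # device states plus any resilver progress
# Check the fault manager's error telemetry for repeated complaints:
fmdump -eV | tail -40         # -e = error log, -V = full verbose records
# If the device still shows REMOVED/FAULTED after reseating, clear the
# error counters and let ZFS retry, or swap in a replacement disk:
zpool clear tank c0t5d0
# zpool replace tank c0t5d0 c0t6d0
```

If `fmdump -eV` keeps logging transport or media errors for the same device, that argues for replacing the drive rather than trusting a one-time clear.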

[zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-10 Thread Joe Auty
Hello, My understanding is that people are pretty much SOL if they want to reconfigure a RAID-Z or RAID-Z2 dataset to, say, a mirror+stripe? That is, there is no way to do this via a couple of simple commands? Just say, for the purpose of my general enlightenment and filing away for if I

Re: [zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-10 Thread Roy Sigurd Karlsbakk
Hello, My understanding is that people are pretty much SOL if they want to reconfigure a RAID-Z or RAID-Z2 dataset to, say, a mirror+stripe? That is, there is no way to do this via a couple of simple commands? Just say, for the purpose of my general enlightenment and filing away for if I
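The usual answer to the question above is that there is no in-place restripe: you build the new layout as a second pool and copy the data over with send/receive. A hedged sketch, with all pool, dataset, and disk names invented for illustration:

```shell
# Create the target pool with the desired mirror+stripe layout:
zpool create newpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# Snapshot everything recursively and replicate the full hierarchy:
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fd newpool
# After verifying the copy, retire the old pool and optionally take over
# its name:
# zpool destroy oldpool
# zpool export newpool && zpool import newpool oldpool
```

`zfs send -R` carries the descendant filesystems, snapshots, and properties; `receive -d` recreates the sent path under the new pool and `-F` allows the target to be rolled back to accept the stream.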

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-10 Thread Maurice Volaski
Maybe there is another way to read those, but it looks to me like David says you can trivially swap the roles of the nodes using the '-r' switch (and he provides a link to the documentation), and you say that you can't trivially swap the roles of the nodes. The -r switch temporarily reverses

[zfs-discuss] Please trim posts

2010-06-10 Thread pattonme
It's getting downright ridiculous. The digest people will kiss you. Sent via BlackBerry from T-Mobile ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Please trim posts

2010-06-10 Thread David Dyer-Bennet
On Thu, June 10, 2010 12:26, patto...@yahoo.com wrote: It's getting downright ridiculous. The digest people will kiss you. But those reading via individual message email quite possibly will not. Quoting at least what you're actually responding to is crucial to making sense out here. -- David

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Casper . Dik
Swap is perhaps the wrong name; it is really virtual memory; virtual memory consists of real memory and swap on disk. In Solaris, a page either exists on the physical swap device or in memory. Of course, not all memory is available as the kernel and other caches use a large part of the

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-10 Thread Bob Friesenhahn
On Wed, 9 Jun 2010, Travis Tabbal wrote: NFS writes on ZFS blows chunks performance wise. The only way to increase the write speed is by using an slog The above statement is not quite true. RAID-style adaptor cards which contain battery backed RAM or RAID arrays which include battery backed
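For reference, the slog route mentioned in the quoted text is a one-liner; the device names below are placeholders for an SSD or battery-backed RAM unit:

```shell
# Add a dedicated intent-log (slog) device to absorb the synchronous
# writes that NFS issues on every commit:
zpool add tank log c2t0d0
# Or mirror the slog, so a log-device failure cannot lose in-flight
# synchronous data:
# zpool add tank log mirror c2t0d0 c2t1d0
zpool status tank
```

Bob's point stands that a controller with battery-backed write cache achieves the same effect without a separate log device, since the sync write is acknowledged once it reaches the NVRAM.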

Re: [zfs-discuss] Please trim posts

2010-06-10 Thread Roy Sigurd Karlsbakk
It's getting downright ridiculous. The digest people will kiss you. But those reading via individual message email quite possibly will not. Quoting at least what you're actually responding to is crucial to making sense out here. The problem is all the top-posts and similar bottom-posts

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-10 Thread Bob Friesenhahn
On Wed, 9 Jun 2010, Edward Ned Harvey wrote: disks. That is, specifically: o If you do a large sequential read, with 3 mirrors (6 disks) then you get 6x performance of a single disk. Should say up to 6x. Which disk in the pair will be read from is random so you are unlikely to get the full

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Bob Friesenhahn
On Thu, 10 Jun 2010, casper@sun.com wrote: Swap is perhaps the wrong name; it is really virtual memory; virtual memory consists of real memory and swap on disk. In Solaris, a page either exists on the physical swap device or in memory. Of course, not all memory is available as the kernel

Re: [zfs-discuss] Please trim posts

2010-06-10 Thread Bob Friesenhahn
On Thu, 10 Jun 2010, Roy Sigurd Karlsbakk wrote: The problem is all the top-posts and similar bottom-posts where everything in the thread is kept. This is not good netiquette, even in 2010. I think that you may notice that most of the perpetrators are from Gmail. It seems that Gmail is

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Ross Walker
On Jun 10, 2010, at 5:54 PM, Richard Elling richard.ell...@gmail.com wrote: On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote: Andrey Kuzmin wrote: Well, I'm more accustomed to sequential vs. random, but YMMW. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Casper . Dik
On Thu, 10 Jun 2010, casper@sun.com wrote: Swap is perhaps the wrong name; it is really virtual memory; virtual memory consists of real memory and swap on disk. In Solaris, a page either exists on the physical swap device or in memory. Of course, not all memory is available as the

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Andrey Kuzmin
As to your results, it sounds almost too good to be true. As Bob has pointed out, h/w design targeted hundreds IOPS, and it was hard to believe it can scale 100x. Fantastic. Regards, Andrey On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote: On 21/10/2009 03:54, Bob

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Garrett D'Amore
For the record, with my driver (which is not the same as the one shipped by the vendor), I was getting over 150K IOPS with a single DDRdrive X1. It is possible to get very high IOPS with Solaris. However, it might be difficult to get such high numbers with systems based on SCSI/SCSA.

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Andrey Kuzmin
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net wrote: Andrey Kuzmin wrote: As to your results, it sounds almost too good to be true. As Bob has pointed out, h/w design targeted hundreds IOPS, and it was hard to believe it can scale 100x. Fantastic. Hundreds IOPS is not

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Arne Jansen
Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net wrote: Andrey Kuzmin wrote: As to your results, it sounds almost too good to be true. As Bob has pointed out, h/w design targeted hundreds IOPS, and it was

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Arne Jansen
Andrey Kuzmin wrote: Well, I'm more accustomed to sequential vs. random, but YMMW. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into cache), did you have write-back enabled? It's a sustained number, so it shouldn't matter. Regards, Andrey On Fri, Jun

[zfs-discuss] ZFS Replication hint req.

2010-06-10 Thread Jakob Tewes
Hey folks, I'm trying my luck with script-based ZFS replication and have no more ideas left, so here comes my layout. I've got a small machine with two ZFS pools, one protected via raidz2 and one consisting of just one disk. Now I wanted to use ZFS's nice snapshot/replication options to ship data from the
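A minimal script for the snapshot-and-ship scheme the poster describes could look like this. The pool and dataset names (`bigpool/data`, `smallpool/backup`) are invented for the sketch; the original message does not name them:

```shell
#!/bin/sh
SRC=bigpool/data
DST=smallpool/backup
NOW="repl-$(date +%Y%m%d%H%M)"

# Most recent existing snapshot of the source, if any (empty on first run):
LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -1)

zfs snapshot "${SRC}@${NOW}"

if [ -n "$LAST" ]; then
  # Incremental stream from the previous snapshot:
  zfs send -i "$LAST" "${SRC}@${NOW}" | zfs receive -F "$DST"
else
  # First run: full stream.
  zfs send "${SRC}@${NOW}" | zfs receive -F "$DST"
fi
```

A production script would also prune old snapshots on both sides and check the exit status of the pipeline before deleting anything; this sketch only shows the full-versus-incremental decision.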

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Arne Jansen
Andrey Kuzmin wrote: As to your results, it sounds almost too good to be true. As Bob has pointed out, h/w design targeted hundreds IOPS, and it was hard to believe it can scale 100x. Fantastic. Hundreds IOPS is not quite true, even with hard drives. I just tested a Hitachi 15k drive and it

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Andrey Kuzmin
Well, I'm more accustomed to sequential vs. random, but YMMW. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into cache), did you have write-back enabled? Regards, Andrey On Fri, Jun 11, 2010 at 12:03 AM, Arne Jansen sensi...@gmx.net wrote: Andrey Kuzmin wrote:

Re: [zfs-discuss] ZFS Replication hint req.

2010-06-10 Thread Tom Erickson
Jakob Tewes wrote: Hey folks, I'm trying my luck with script-based ZFS replication and have no more ideas left, so here comes my layout. I've got a small machine with two ZFS pools, one protected via raidz2 and one consisting of just one disk. Now I wanted to use ZFS's nice snapshot/replication options

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Richard Elling
On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote: Andrey Kuzmin wrote: Well, I'm more accustomed to sequential vs. random, but YMMW. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into cache), did you have write-back enabled? It's a sustained number, so it

Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread Rodrigo E . De León Plicet
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal anu...@kqinfotech.com wrote: We at KQInfotech, initially started on an independent port of ZFS to linux. When we posted our progress about port last year, then we came to know about the work on LLNL port. Since then we started working on to re-base

Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread Erik Trimble
On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote: On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwalanu...@kqinfotech.com wrote: We at KQInfotech, initially started on an independent port of ZFS to linux. When we posted our progress about port last year, then we came to know about the

Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread zfsnoob4
I'm very excited. Throw some code up on github as soon as you are able. I'm sure there are plenty of people (me) that would like to help test it out. I've already been playing around with ZFS using zvol on Fedora 12. I would love to have a ZPL, no matter how experimental. -- This message

Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread Jason King
On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble erik.trim...@oracle.com wrote: On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote: On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwalanu...@kqinfotech.com  wrote: We at KQInfotech, initially started on an independent port of ZFS to linux. When

[zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-10 Thread valrh...@gmail.com
Today I set up a new fileserver using EON 0.600 (based on snv_130). I'm now copying files between mirrors, and the performance is slower than I had hoped. I am trying to figure out what to do to make things a bit faster in terms of performance. Thanks in advance for reading, and sharing any
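One tip that often comes up for a stalling local send/receive is to decouple the two sides with a buffering pipe, so that neither end blocks the other during bursts. This assumes the third-party `mbuffer` tool is installed; the pool and snapshot names are examples, not the poster's:

```shell
# Buffer the stream between send and receive so the reader and writer
# can run at their own pace; -s is the block size, -m the buffer memory:
zfs snapshot tank/data@copy
zfs send tank/data@copy | mbuffer -s 128k -m 512M | zfs receive backup/data
```

Without the buffer, `zfs send` and `zfs receive` run in lock-step through the pipe, and whichever side pauses (for a transaction-group sync, say) stalls both.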

Re: [zfs-discuss] Migrating to ZFS

2010-06-10 Thread valrh...@gmail.com
Are you going to use this machine as a fileserver, at least the OpenSolaris part? You might consider trying EON storage (http://eonstorage.blogspot.com/), which just runs on a CD. If that's all you need, then you don't have to worry about partitioning around Windows, since Windows won't be able