Re: [zfs-discuss] zfs code and fishworks fork
> I can agree that the software is the one that really has the added
> value, but in my opinion allowing a stack like Fishworks to run outside
> the Sun Unified Storage would lead to a lower price per unit (Fishworks
> license) but maybe increase revenue.

I'm afraid I don't see that argument at all; I think that the economics
that you're advocating would be more than undermined by the necessarily
higher costs of validating and supporting a broader range of hardware and
firmware...

	- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.       http://blogs.sun.com/bmc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] zfs code and fishworks fork
> > > I can agree that the software is the one that really has the added
> > > value, but in my opinion allowing a stack like Fishworks to run
> > > outside the Sun Unified Storage would lead to a lower price per
> > > unit (Fishworks license) but maybe increase revenue.
> >
> > I'm afraid I don't see that argument at all; I think that the
> > economics that you're advocating would be more than undermined by the
> > necessarily higher costs of validating and supporting a broader range
> > of hardware and firmware...
>
> (Just playing Devil's Advocate here.) There could be no economics at
> all. A basic warranty would be provided, but running a standalone
> product is a wholly on-your-own proposition once one ventures outside a
> very small hardware support matrix. Perhaps Fishworks/AK would have an
> OpenSolaris edition -- leave the bulk of the actual hardware support up
> to a support infrastructure that's already geared towards making wide
> ranges of hardware supportable; OpenSolaris/Solaris, after all, does
> allow that. Perhaps this could be a version of Fishworks that's not as
> integrated with what you get on a SUS platform; if some of the
> Fishworks functionality that depends on a precise hardware combo could
> be reduced or generalized, perhaps it's worth consideration. Knowing
> the little I do about what's going on under the hood of a SUS system, I
> wouldn't expect the version of Fishworks used on the SUS systems to
> have 100% parity with an unbundled Fishworks edition -- but the core
> features, by and large, would convey.

Why would we do this? I'm all for zero-cost endeavors, but this isn't
zero-cost -- and I'm having a hard time seeing the business case here,
especially when we have so many paying customers for whom the business
case for our time and energy is crystal clear...

	- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Split responsibility for data with ZFS
> Seriously? Do you know anything about the NetApp platform? I'm hoping
> this is a genuine question... Off the top of my head, nearly all of
> them. Some of them have artificial limitations because they learned the
> hard way that if you give customers enough rope they'll hang
> themselves. For instance, unlimited snapshots. Do I even need to begin
> to tell you what a horrible, HORRIBLE idea that is? Why can't I get my
> space back? Oh, just do a snapshot list and figure out which one is
> still holding the data. What? Your console locks up for 8 hours when
> you try to list out the snapshots? Huh... that's weird. It's sort of
> like that whole unlimited filesystems thing. Just don't ever reboot
> your server, right? Or you can have 40 PB in one pool! How do you back
> it up? Oh, just mirror it to another system? And when you hit a bug
> that toasts both of them, you can just start restoring from tape for
> the next 8 years, right? Or if by some luck we get a zfsiron, you can
> walk the metadata for the next 5 years. NVRAM has been replaced by
> flash drives in a ZFS world to get any kind of performance... so you're
> trading one high-priced storage for another. Your snapshot creation and
> deletion is identical. Your incremental generations are identical.
> End-to-end checksums? Yup. Let's see... they don't have block-level
> compression; they chose dedup instead, which nets better results.
> Hybrid storage pool is achieved through PAM modules. Outside of that...
> I don't see ANYTHING in your list they didn't do first.

Wow -- I've spoken to many NetApp partisans over the years, but you might
just take the cake. Of course, most of the people I talk to are actually
_using_ NetApp's technology, a practice that tends to leave even the most
stalwart proponents realistic about the (many) limitations of NetApp's
technology...

For example, take the PAM. Do you actually have one of these, or are you
basing your thoughts on reading whitepapers? I ask because (1) they are
horrifically expensive, (2) they don't perform that well (especially
considering that they're DRAM!), (3) they're grossly undersized (a 6000
series can still only max out at a paltry 96 GB -- and that's with
virtually no slots left for I/O), and (4) they're not selling well. So if
you actually bought a PAM, that already puts you in a razor-thin minority
of NetApp customers (most of whom see through the PAM and recognize it
for the kludge that it is); if you bought a PAM and think that it's
somehow a replacement for the ZFS hybrid storage pool (which has an order
of magnitude more cache), then I'm sure NetApp loves you: you must be the
dumbest, richest customer that ever fell in their lap!

	- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.       http://blogs.sun.com/bmc
Re: [zfs-discuss] OpenStorage GUI
On Tue, Nov 11, 2008 at 09:31:26AM -0800, Adam Leventhal wrote:
> > Is this software available for people who already have thumpers?
>
> We're considering offering an upgrade path for people with existing
> thumpers. Given the feedback we've been hearing, it seems very likely
> that we will. No word yet on pricing or availability.

Just to throw some ice-cold water on this:

1. It's highly unlikely that we will ever support the x4500 -- only the
   x4540 is a real possibility.

2. If we do make something available, your data and any custom software
   won't survive the journey: you will be forced to fresh-install your
   x4540 with our stack.

3. If we do make something available, it will become an appliance: you
   will permanently lose the ability to run your own apps on the x4540.

4. If we do make something available, it won't be free.

If you are willing/prepared(/eager?) to abide by these constraints,
please let us ([EMAIL PROTECTED]) know -- that will help us build the
business case for doing this...

	- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.       http://blogs.sun.com/bmc
Re: [zfs-discuss] OpenStorage GUI
On Tue, Nov 11, 2008 at 02:21:11PM -0500, Ed Saipetch wrote:
> Can someone clarify Sun's approach to opensourcing projects and
> software? I was under the impression the strategy was to charge for
> hardware, maintenance and PS. If not, some clarification would be nice.

There is no single answer -- we use open source as a business strategy,
not as a checkbox or edict. For this product, open source is an option
going down the road, but not a priority. Will our software be open
sourced in the fullness of time? My Magic 8-Ball tells me signs point to
yes (or is that ask again later?) -- but it's certainly not something
that we have concrete plans for at the moment...

	- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.       http://blogs.sun.com/bmc
Re: [zfs-discuss] ZFS array NVRAM cache?
On Wed, Sep 26, 2007 at 02:10:39PM -0400, Torrey McMahon wrote:
> Albert Chin wrote:
> > On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
> > > I don't understand. How do you set up one LUN that has all of the
> > > NVRAM on the array dedicated to it? I'm pretty familiar with the
> > > 3510 and 3310. Forgive me for being a bit thick here, but can you
> > > be more specific for the n00b?
> >
> > If you're using CAM, disable NVRAM on all of your LUNs. Then, create
> > another LUN equivalent to the size of your NVRAM. Assign the ZIL to
> > this LUN. You'll then have an NVRAM-backed ZIL.
>
> You probably don't have to create a LUN the size of the NVRAM either.
> As long as it's dedicated to one LUN then it should be pretty quick.
> The 3510 cache, last I checked, doesn't do any per-LUN segmentation or
> sizing. It's a simple front end for any LUN that is using cache. Do we
> have any log sizing guidelines yet? Max size, for example?

That's a really good question -- and the answer essentially depends on
the bandwidth of the underlying storage and the rate of activity to the
ZIL. Both of these questions can be tricky to answer -- and the final
answer also depends on how much headroom you desire. (That is, what drop
in bandwidth and/or surge in ZIL activity does one want to be able to
absorb without sacrificing latency?) For the time being, the easiest way
to answer this question is to try some sizes (with 1-2 GB being a good
starting point), throw some workloads at it, and monitor both your
delivered performance and the utilization reported by tools like
iostat...

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
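[Editor's note: the try-it-and-measure approach described above might be
sketched as follows. The pool name (tank) and device name (c2t1d0) are
hypothetical, and the commands assume an existing pool on a system with
separate-intent-log support (build 68 or later).]

```shell
# Add a dedicated log device to an existing pool; a modest LUN in the
# 1-2 GB range is a reasonable starting point for experimentation:
zpool add tank log c2t1d0

# With a workload running, watch per-vdev activity every 5 seconds; the
# log device appears on its own line in the verbose output:
zpool iostat -v tank 5

# Cross-check device-level utilization (%b) and service times to see
# whether the log device is saturating:
iostat -xn 5
```

If the log device's utilization stays low under your peak synchronous
write load, the chosen size and NVRAM allocation are probably adequate;
if it pegs, revisit the sizing.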
Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
> PSARC 2007/171 will be available in b68. Any documentation anywhere on
> how to take advantage of it? Some of the Sun storage arrays contain
> NVRAM. It would be really nice if the array NVRAM would be available
> for ZIL storage.

It depends on your array, of course, but in most arrays you can control
the amount of write cache (i.e., NVRAM) dedicated to particular LUNs. So
to use the new separate logging most effectively, you should take your
array and dedicate all of your NVRAM to a single LUN that you then use as
your separate log device. Your pool should then use a LUN or LUNs that do
not have any NVRAM dedicated to them.

> It would also be nice for extra hardware (PCI-X, PCIe card) that added
> NVRAM storage to various Sun low/mid-range servers that are currently
> acting as ZFS/NFS servers.

You can do it yourself very easily -- check out the umem cards from Micro
Memory, available at http://www.umem.com. Reasonable prices ($1000/GB),
they have a Solaris driver, and the performance absolutely rips.

> Or maybe someone knows of cheap SSD storage that could be used for the
> ZIL? I think several HDs are available with SCSI/ATA interfaces.

As Adam mentioned, this is a bit more involved, as most SSDs are biased
very heavily towards reads and away from writes. So this will be quite a
bit more expensive than NVRAM, at least at the moment...

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
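[Editor's note: the layout described above might be sketched with the
following commands; all pool and device names are illustrative. Here
c2t0d0 stands in for the small LUN backed by all of the array's NVRAM,
and c3t0d0/c3t1d0 for data LUNs with no write cache dedicated to them.]

```shell
# Create a pool whose data vdevs are the uncached LUNs, with the
# NVRAM-backed LUN as a separate intent log (requires build 68+):
zpool create tank mirror c3t0d0 c3t1d0 log c2t0d0

# Alternatively, add a separate log device to an existing pool:
zpool add tank log c2t0d0

# Verify that the log device appears under its own "logs" section:
zpool status tank
```

Synchronous writes (NFS commits, fsync-heavy databases) then land on the
NVRAM-backed log device, while the streaming pool writes go to the
uncached data LUNs.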
Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log
On Tue, Jul 03, 2007 at 01:10:25PM -0500, Albert Chin wrote:
> On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
> > On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
> > > PSARC 2007/171 will be available in b68. Any documentation anywhere
> > > on how to take advantage of it? Some of the Sun storage arrays
> > > contain NVRAM. It would be really nice if the array NVRAM would be
> > > available for ZIL storage.
> >
> > It depends on your array, of course, but in most arrays you can
> > control the amount of write cache (i.e., NVRAM) dedicated to
> > particular LUNs. So to use the new separate logging most effectively,
> > you should take your array and dedicate all of your NVRAM to a single
> > LUN that you then use as your separate log device. Your pool should
> > then use a LUN or LUNs that do not have any NVRAM dedicated to them.
>
> Hmm, interesting. We'll try to find out if the 6140s can do this.

Yes, they can: use CAM to set the write cache to be disabled on all but
the LUN(s) that you want to use as the separate ZIL.

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)
On Thu, Apr 19, 2007 at 12:36:38AM +0200, Joerg Schilling wrote:
> [EMAIL PROTECTED] wrote:
> > Actually sitting down and doing something hard (like porting ZFS --
> > one way or another -- to Linux), well, the word procrastination comes
> > to mind, and gee, isn't it easier to come up with reasons /not/ to do
> > it? If someone really wanted ZFS on Linux, they'd just do it --
> > licence/patents be damned.
>
> It seems that those people are a minority who know that. A discussion
> on porting starting with a license talk means that there is no real
> technical interest in the port.

Boy, is that ever the truth. If there is technical interest in a port,
one should, um, do the port. Frankly, the license chatter emanating from
the lwn.net crowd smells like just another way of expressing NIH -- it's
a convenient excuse to not do something that they really don't want to do
anyway. (This certainly seems to be the case for DTrace and Linux, where
the license difference seems to have become an excuse to ignore
everything about DTrace and to do their own thing.) And I will confess
that I have found the sense of NIH coming out of certain segments of
Linux development to be at times so overwhelming that I have found myself
wondering: if we GPL'd Solaris, would that not give the lie to this
excuse, and expose the Linux NIH for what it is?

Especially ironic about the Linux NIH is that it seems to be a relatively
new phenomenon: not so long ago, the ability to absorb innovation from
elsewhere was arguably Linux's stock-in-trade. That era, however, seems
to be indisputably over, viz. the stubborn reluctance to so much as
glance at ZFS, DTrace and a host of other innovations born outside of
Linux...

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Thumper Origins Q
On Wed, Jan 24, 2007 at 12:15:21AM -0700, Jason J. W. Williams wrote:
> Wow. That's an incredibly cool story. Thank you for sharing it! Does
> the Thumper today pretty much resemble what you saw then?

Yes, amazingly so: 4-way, 48 spindles, 4U. The real beauty of the match
between ZFS and Thumper was (and is) that ZFS unlocks new economics in
storage -- smart software achieving high performance and ultra-high
reliability with dense, cheap hardware -- and that Thumper was (and is)
the physical embodiment of those economics. And without giving away too
much of our future roadmap, suffice it to say that one should expect
much, much more from Sun in this vein: innovative software and innovative
hardware working together to deliver world-beating systems with
undeniable economics.

And actually, as long as we're talking history, you might be interested
to know the story behind the name Thumper: Fowler initially suggested the
name as something of a joke, but, as often happens with Fowler, he tells
a joke with a straight face once too many to one person too many, and the
next thing you know it's the plan of record. I had suggested the name
Humper for the server that became Andromeda (the x8000 series) -- so you
could order a datacenter by asking for (say) two Humpers and five
Thumpers. (And I loved the idea of asking would you like a Humper for
your Thumper?) But Fowler said the name was too risque (!). Fortunately
the name Thumper stuck...

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Thumper Origins Q
> well, Thumper is actually a reference to Bambi

You'd have to ask Fowler, but certainly when he coined it, Bambi was the
last thing on anyone's mind. I believe Fowler's intention was one that
thumps (or, in the unique parlance of a certain Commander-in-Chief, one
that gives a thumpin').

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Re: Thumper Origins Q
On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:
> Well, he did say fairly cheap. The ST 3511 is about $18.5k. That's
> about the same price as the low-end NetApp FAS250 unit.

Note that the 3511 is being replaced with the 6140:

  http://www.sun.com/storagetek/disk_systems/midrange/6140/

Also, don't read too much into the prices you see on the website --
that's the list price, and doesn't reflect any discounting. If you're
interested in what it _actually_ costs, you should talk to a Sun rep or
one of our channel partners to get a quote. (And lest anyone attack the
messenger: I'm not defending this system of getting an accurate price,
I'm just describing it.)

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Thumper Origins Q
> You can take your pick of things that thump here:
>
>   http://en.wikipedia.org/wiki/Thumper

I think it's safe to say that Fowler was thinking more along the lines of
whoever dubbed the M79 grenade launcher -- which you can safely bet was
not named after a fictional bunny...

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Thumper Origins Q
> This is a bit off-topic... but since the Thumper is the poster child
> for ZFS, I hope it's not too off-topic. What are the actual origins of
> the Thumper? I've heard varying stories in word and print. It appears
> that the Thumper was the original server Bechtolsheim designed at
> Kealia as a massive video server.

That's correct -- it was originally called the StreamStor. Speaking
personally, I first learned about it in the meeting with Andy that I
described here:

  http://blogs.sun.com/bmc/entry/man_myth_legend

I think it might be true that this was the first that anyone in Solaris
had heard of it. Certainly, it was the first time that Andy had ever
heard of ZFS. It was a very high-bandwidth conversation, at any rate. ;)

After the meeting, I returned post-haste to Menlo Park, where I excitedly
described the box to Jeff Bonwick, Bill Moore and Bart Smaalders. Bill
said something like I gotta see this thing, and sometime later (perhaps
the next week?) Bill, Bart and I went down to visit Andy. Andy gave us a
much more detailed tour, with Bill asking all sorts of technical
questions about the hardware (many of which were something like how did
you get a supplier to build that for you?!). After the tour, Andy took
the three of us to lunch, and it was one of those moments that I won't
forget: Bart, Bill, Andy and I sitting in the late afternoon Palo Alto
sun, with us very excited about his hardware, and Andy very excited about
our software. Everyone realized that these two projects -- born
independently -- were made for each other, and that together they would
change the market. It was one of those rare moments that reminds you why
you got into this line of work -- and I feel lucky to have shared in it.

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc
Re: [zfs-discuss] Apple Time Machine
On Mon, Aug 07, 2006 at 04:57:44PM -0700, Eric Schrock wrote:
> On Mon, Aug 07, 2006 at 01:19:14PM -1000, David J. Orman wrote:
> > > (actually did they give OpenSolaris a name check at all when they
> > > mentioned DTrace?)
> >
> > Nope, not that I can see. Apple's pretty notorious for that kind of
> > oversight. I used to work for them; I know first-hand how hat-tipping
> > doesn't occur very often.
>
> Before this progresses much further, it's worth noting that all of team
> DTrace is at WWDC, has met with Apple engineers previously, and will be
> involved in one or more presentations today. So while the marketing
> department may not include OpenSolaris in the high-level overview,
> Apple is not ignoring the roots of DTrace, and will not be hiding this
> fact from their developers (not that they could).

We've had a great relationship with Apple at the engineering level -- and
indeed, Team DTrace just got back from dinner with the Apple engineers
involved with the port. More details here:

  http://blogs.sun.com/roller/page/bmc?entry=dtrace_on_mac_os_x

As for the OpenSolaris name check: no, Apple didn't mention DTrace's
roots, and yes, I wished they had -- but DTrace was mentioned practically
as an aside anyway (so much so that a developer sitting about two rows
ahead of me turned to the guy next to him and asked did they just say
they ported DTrace?!), and I think the team involved with DTrace at Apple
would have also liked to see a more prominent and complete description of
their work and its origins. So in short (and brace yourself, because I
know it will be a shock): mentions by executives in keynotes don't always
accurately represent a technology. DynFS, anyone? ;)

	- Bryan

--
Bryan Cantrill, Solaris Kernel Development.       http://blogs.sun.com/bmc