Re: [zfs-discuss] cascading metadata modifications
Matthew Ahrens [EMAIL PROTECTED] wrote:
> Joerg Schilling wrote:
>> The best documented one is the inverted metadata tree that allows wofs to write only one new generation node for one modified file, while ZFS also needs to write new nodes for all directories above the file, including the root directory of the fs.
> I believe you are thinking of indirect blocks, which are unrelated to the directory tree. In ZFS and most other filesystems, ancestor directories need not be modified when a file in a directory is modified.

Isn't this against what I've read? If you write inode data to a different location than before, you need a way to tell the ancestor directory where the new data is located. From what I've read so far, and from what I remember of a personal talk with Jeff Bonwick in September 2004, this is done by rewriting at least parts of the ancestor directory inode. Jörg -- EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin [EMAIL PROTECTED](uni) [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
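The distinction the two posters are debating can be sketched in a toy model: a directory maps a name to a stable object number, and a separate object table maps object numbers to current block locations. Rewriting a file then relocates its data block and cascades new copies of the object table (and, in real ZFS, the indirect blocks above it up to the uberblock), but the directory block itself never changes. This is an illustrative sketch, not ZFS's actual on-disk format; all names and numbers here are hypothetical.

```python
disk = {}        # address -> block contents
next_addr = 0

def alloc(block):
    """Copy-on-write 'disk': every write lands at a fresh address."""
    global next_addr
    addr = next_addr
    next_addr += 1
    disk[addr] = block
    return addr

# Directory entry: name -> object number (never rewritten on file change).
root_dir = alloc({"fileA": 7})

# Object table: object number -> current block address of the file data.
obj_table = alloc({7: alloc(b"old contents")})

def rewrite_file(table_addr, objnum, data):
    """CoW rewrite: new data block plus a new copy of the object table.
    The directory entry (name -> objnum) is untouched."""
    table = dict(disk[table_addr])
    table[objnum] = alloc(data)
    return alloc(table)  # new table address; this cascade continues upward

new_table = rewrite_file(obj_table, 7, b"new contents")

print(disk[root_dir])            # {'fileA': 7} -- directory block unchanged
print(disk[disk[new_table][7]])  # b'new contents'
```

In this model the "rewriting the ancestor directory inode" step never happens, because the directory holds an object number rather than a disk location; the relocation is absorbed by the object table instead.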
Re: [zfs-discuss] ZFS/WAFL lawsuit
More here: http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9034496 On 9/5/07, David Magda [EMAIL PROTECTED] wrote: Hello, Not sure if anyone at Sun can comment on this, but I thought it might be of interest to the list: This morning, NetApp filed an IP (intellectual property) lawsuit against Sun. It has two parts. The first is a declaratory judgment, asking the court to decide whether we infringe a set of patents that Sun claims we do. The second says that Sun infringes several of our patents with its ZFS technology. http://blogs.netapp.com/dave/2007/09/netapp-sues-sun.html He goes on to explain some of the logic behind NetApp's reaction. Regards, David -- Paul Kraus
Re: [zfs-discuss] ZFS/WAFL lawsuit
This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. -Wade
Re: [zfs-discuss] ZFS/WAFL lawsuit
This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. Quite; it seems to all be done with blogs. After Netapp's blog, we now see Sun's CEO enter into the fray: http://blogs.sun.com/jonathan/entry/on_patent_trolling And no, I won't comment. Casper
Re: [zfs-discuss] New zfs pr0n server :)))
Unfortunately it only comes with 4 adapters, bare metal adapters without any dampening/silencing and so on... ...anyway I wanted to make it as silent as I could, so I suspended all 10 disks (8 SATA 320GB and a little 2.5" PATA root disk) with a flexible wire, like I posted in this Italian forum; the page is in Italian, but the pictures show the concept well enough: http://www.pcsilenzioso.it/forum/showthread.php?t=2397
Re: [zfs-discuss] ZFS/WAFL lawsuit
At 09:33 AM 9/6/2007, [EMAIL PROTECTED] wrote: This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. The wicked flee when none pursue, but the righteous are bold as a lion. (Proverbs 28:1) Legally dangerous today, but I entirely understand the attitude. And this case will be fought as much in the court of public opinion as anywhere else; for Sun to get so lawyered up they silence their people while NetApp's CEO is playing a restrained version of the McBride game ... not a good idea, I think. E.g. what am I to think about taking the last steps to get OpenSolaris and ZFS running on my just-built home file server? NetApp's assurances they aren't going to go after non-commercial and individual users are entirely worthless (they can be withdrawn in a moment), and of course silly WRT the long term viability of ZFS. I, for one, do not welcome our new storage overlords; I don't want to add a $$$ NVRAM RAID-6 host adaptor to my system and switch to Linux (ugh), since it is unlikely to have OpenSolaris support. - Harold
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Thu, 6 Sep 2007, Harold Ancell wrote: At 09:33 AM 9/6/2007, [EMAIL PROTECTED] wrote: This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. The wicked flee when none pursue, but the righteous are bold as a lion. (Proverbs 28:1) Legally dangerous today, but I entirely understand the attitude. And this case will be fought as much in the court of public opinion as anywhere else; for Sun to get so lawyered up they silence their people while NetApp's CEO is playing a restrained version of the McBride game ... not a good idea, I think. E.g. what am I to think about taking the last steps to get OpenSolaris and ZFS running on my just built home file server? NetApp's assurances they aren't going to go after non-commercial and individual users is entirely worthless (can be withdrawn in a moment), and of course silly WRT the long term viability of ZFS. I, for one, do not welcome our new storage overlords, I don't want to add a $$$ NVRAM RAID-6 host adaptor to my system and switch to Linux (ugh) since it is unlikely to have OpenSolaris support Playing with patent portfolios is the modern equivalent of playing the mutually assured destruction game with nuclear missiles. Yes we all appreciate how dangerous this game is and how high the stakes are. But ... notice that a live/armed ballistic missile has never been fired at a target. So back to patent portfolios: yes there will be (public and private) posturing; yes there will be negotiations; and, ultimately, there will be a resolution.
All of this won't affect ZFS or anyone running ZFS. Just like nuclear ballistic missiles don't affect computer users either! What does all this mean to current ZFS users? Absolutely nothing. In the meantime, enjoy the show. :) Sun has all the resources necessary to play this (patent) game to its logical conclusion. Now ... back to our regularly scheduled ZFS technical discussions. PS: If there was a _real_ issue with WAFL/Netapp patent infringement - it would have been brought up way before ZFS was released and open sourced. Regards, Al Hopper Logical Approach Inc, Plano, TX. [EMAIL PROTECTED] Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007 http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Re: [zfs-discuss] ZFS/WAFL lawsuit
Playing with patent portfolios is the modern equivalent of playing the mutually assured destruction game with nuclear missiles. Yes we all appreciate how dangerous this game is and how high the stakes are. But ... notice that a live/armed ballistic missile has never been fired at a target. Now that you mention nuclear weapons, am I really the only one who is amused by the uproar about a B52 with nukes flying over the US? Until the Minuteman missiles came online, we had B52s in Europe's airspace 24/7 full with nukes (and some did crash, and none exploded). (That they were accidentally loaded is, of course, another matter; it reminds me of the nuclear plant which was sold for scrap metal and nearly exported) I'm somewhat surprised that people feel that Sun employees should speak out on this matter. Both Netapp and Sun seem to leave that to their CEOs (apart from the lawyers who fling documents at one another). I'm guessing that Linus is now really happy about having said this: I suspect we'd be better off talking to NetApp, and seeing if they are interested in releasing WAFL for Linux Casper
[zfs-discuss] Nuke accidents (Re: ZFS/WAFL lawsuit)
On Thu, Sep 06, 2007 at 06:20:55PM +0200, [EMAIL PROTECTED] wrote: Now that you mention Nuclear weapons, am I really the only one who is amused by the uproar about a B52 with nukes flying over the US? Europe does not have the anti-nuke opinion set market cornered, ya know? Until the Minuteman missiles came online, we had B52s in Europe's airspace 24/7 full with nukes (and some did crash and non exploded). (That they were accidently loaded is, of course, another matter; *That* is the issue. If a nuke is/is going where it wasn't supposed to be, well, that just plain sucks. it reminds me of the nuclear plant which was sold for scrapmetal and nearly exported) It's a good way to export^Wget rid of the radioactivity ;)
Re: [zfs-discuss] ZFS/WAFL lawsuit
At 11:06 AM 9/6/2007, Al Hopper wrote: On Thu, 6 Sep 2007, Harold Ancell wrote: At 09:33 AM 9/6/2007, [EMAIL PROTECTED] wrote: This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. The wicked flee when none pursue, but the righteous are bold as a lion. (Proverbs 28:1) Legally dangerous today, but I entirely understand the attitude. And this case will be fought as much in the court of public opinion as anywhere else; for Sun to get so lawyered up they silence their people while NetApp's CEO is playing a restrained version of the McBride game ... not a good idea, I think. E.g. what am I to think about taking the last steps to get OpenSolaris and ZFS running on my just built home file server? NetApp's assurances they aren't going to go after non-commercial and individual users is entirely worthless (can be withdrawn in a moment), and of course silly WRT the long term viability of ZFS. I, for one, do not welcome our new storage overlords, I don't want to add a $$$ NVRAM RAID-6 host adaptor to my system and switch to Linux (ugh) since it is unlikely to have OpenSolaris support Playing with patent portfolios is the modern equivalent to playing the mutually assured destruction game with nuclear missiles. Yes we all appreciate how dangereous this game is and how high the stakes are. But ... notice that a live/armed ballistic missile has never been fired at a target. Ummm, unless NetApp is lying, one such ballistic missile has been fired from a Texas courthouse and targeted at ZFS. 
So back to patent portfolios: yes there will be (public and private) posturing; yes there will be negotiations; You go to the courts when you feel you have nothing to gain by negotiations---by definition, a lawsuit is not a negotiation, it's an attempt by one party to get the government to coerce action out of the other party. And like a missile, you cannot call one back after it is launched. Suppose this is just posturing: If NetApp and Sun were to settle today, there is no way such an agreement would not include dismissal of the suit with prejudice so that it could not be filed again. NetApp would not want such a possibility in case they later wanted to sue for real. and, ultimately, there will be a resolution. Indeed: my vote is for NetApp by the middle of the next decade being staked out on a bleached desert plain next to SCO, another stark object lesson to those who are tempted to compete in the courtroom instead of the marketplace. All of this won't affect ZFS or anyone running ZFS. Just like nuclear ballistic missiles don't affect computer users either! While it preceded the development of the modern ICBM and computers as we know them, I suggest you ask one of the survivors of the nuclear bombings of Japan if they weren't affected by the Little Boy or Fat Man. To suggest that the existence of thousands of hundred-kiloton warheads atop delivery systems has no real meaningful effect (I hope I'm not misinterpreting your words) leaves me at a loss for further comment. What does all this mean to current ZFS users? Absolutely nothing. Today, it means nothing. Tomorrow, if Sun is enjoined from developing ZFS, rather a lot. If that extends to other commercial users of it (directly, or if they decide they need more support than the community can provide), even more. Would the former kill off ZFS---*maybe* not. The latter? Almost certainly, except as a curiosity.
PS: If there was a _real_ issue with WAFL/Netapp patent infringement - it would have been brought up way before ZFS was released and open sourced. It only takes one to make a war---how could Sun possibly constrain the future behavior of a competitor (assuming for the moment that Sun's account of the timeline is true and that the lawsuit is meritless)? While I'm not suggesting that people panic (certainly not for the next few days :-), ignoring a clear existential threat to ZFS would be silly. Perhaps discussing it is beyond what should be the scope of this list, in which case surely someone could set another one up---there is much to be said for segregating the discussions especially when they don't hinge so much on the technology per se---but to ignore this? I think not. - Harold
Re: [zfs-discuss] ZFS/WAFL lawsuit
Playing with patent portfolios is the modern equivalent of playing the mutually assured destruction game with nuclear missiles. Yes we all appreciate how dangerous this game is and how high the stakes are. But ... notice that a live/armed ballistic missile has never been fired at a target. Ummm, unless NetApp is lying, one such ballistic missile has been fired from a Texas courthouse and targeted at ZFS. So back to patent portfolios: yes there will be (public and private) posturing; yes there will be negotiations; You go to the courts when you feel you have nothing to gain by negotiations---by definition, a lawsuit is not a negotiation, it's an attempt by one party to get the government to coerce action out of the other party. And like a missile, you cannot call one back after it is launched. Suppose this is just posturing: If NetApp and Sun were to settle today, there is no way such an agreement would not include dismissal of the suit with prejudice so that it could not be filed again. NetApp would not want such a possibility in case they later wanted to sue for real. Or to force more posturing pressure for negotiations -- many of these cases end up settled out of court. In fact it seems to be a very complex game of chess -- currently patents are most useful for cross-licensing deals (to hold players out of the market, expand into markets or to collapse markets). Bringing patents into court can have the unwanted and expensive side effect of negating the patent's enforceability (prior art, failed measuring stick etc). Much of a patent's value is perception. I think this is why the first case put forward by NetApp is to rule on whether they infringe on Sun's patents (and to try to negate them). Sun's first move will most likely be to engage on the second part of the suit -- to lock NetApp into a battle where they must endanger their own patents (no safe retreat), thus weakening NetApp's position. and, ultimately, there will be a resolution.
Indeed: my vote is for NetApp by the middle of the next decade being staked out on a bleached desert plain next to SCO, another stark object lesson to those who are tempted to compete in the courtroom instead of the marketplace. All of this won't affect ZFS or anyone running ZFS. Just like nuclear ballistic missiles don't affect computer users either! While it proceeded the development of the modern ICBM or computers as we know then, I suggest you ask one of the survivors of the nuclear bombings of Japan if they weren't affected by the Little Boy or Fat Man. To suggest that the existence of thousands of ceni-kiloton warheads atop delivery systems have no real meaningful existence (I hope I'm not misinterpreting your words) leaves me at a loss for further comment While a colorful analogy, mutually assured destruction is not really a good fit at all for these issues. It is chess (be it a game with a wager). What does all this mean to current ZFS users? Absolutely nothing. Today, it means nothing. Tomorrow, if Sun is enjoined from developing ZFS, rather a lot. If that extends to other commercial users of it (directly, or if they decide they need more support than the community can provide), even more. Would the former kill off ZFS---*maybe* not. The latter? Almost certainly, except as a curiosity. PS: If there was a _real_ issue with WAFL/Netapp patent infringement - it would have been brought up way before ZFS was released and open sourced. It only takes one to make a war---how could Sun possibly constrain the future behavior of a competitor (assuming for the moment that Sun's account of the timeline is true and that the lawsuit is meritless)? While I'm not suggesting that people panic (certainly not for the next few days :-), ignoring a clear existential threat to ZFS would be silly. 
Perhaps discussing it is beyond what should be the scope of this list, in which case surely someone could set another one up---there is much to be said for segregating the discussions especially when they don't hinge so much on the technology per se---but to ignore this? I think not. - Harold It really is a shot in the dark at this point; you really never know what will happen in court (take the example of the recent court decision that all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the end of the day, if you waited for a sure bet on any technology or potential patent disputes you would not implement anything, ever.
[zfs-discuss] zfs mount snapshot
Hello, I am messing around with zfs snapshots, and was wondering if it is possible to mount a zfs snapshot. I would like to use this snapshot to back up to tape. Currently, I see the data in the following path: /testjp1/.zfs/snapshot/testsnapjp This message and its attachments may contain legally privileged or confidential information. It is intended solely for the named addressee. If you are not the addressee indicated in this message (or responsible for delivery of the message to the addressee), you may not copy or deliver this message or its attachments to anyone. Rather, you should permanently delete this message and its attachments and kindly notify the sender by reply e-mail. Any content of this message and its attachments that does not relate to the official business of News America Incorporated or its subsidiaries must be taken not to have been sent or endorsed by any of them. No warranty is made that the e-mail or attachment(s) are free from computer virus or other defect.
Re: [zfs-discuss] zfs mount snapshot
Poulos, Joe wrote: Hello, I am messing around with zfs snapshots, and was wondering if it is possible to mount a zfs snapshot. I would like to use this snapshot to back up to tape. Currently, I see the data in the following path: /testjp1/.zfs/snapshot/testsnapjp and for you to see it there, it is already mounted. Why is doing the backup from that directory not an option? -- Darren J Moffat
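Since visiting a snapshot under .zfs mounts it on demand, a tape backup can be driven straight from that path. A minimal sketch using the dataset and snapshot names from the question; the tape device /dev/rmt/0 is a typical Solaris default and is an assumption here, so adjust for your hardware:

```shell
# Create a snapshot; it appears automatically (read-only) under
# /testjp1/.zfs/snapshot/ -- no explicit mount step is required.
zfs snapshot testjp1@testsnapjp

# Back it up to tape directly from the .zfs path.
tar cf /dev/rmt/0 -C /testjp1/.zfs/snapshot/testsnapjp .
```

If your tar lacks -C, cd into the snapshot directory first and archive `.` from there; either way the snapshot stays read-only and consistent for the duration of the backup.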
Re: [zfs-discuss] ZFS/WAFL lawsuit
It really is a shot in the dark at this point, you really never know what will happen in court (take the example of the recent court decision that all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the end of the day, if you waited for a sure bet on any technology or potential patent disputes you would not implement anything, ever. Do you have a reference for "all data in RAM must be held"? I guess we need to build COW RAM as well. Casper
Re: [zfs-discuss] zfs mount snapshot
Ah, thanks! I thought it might show up as a separate mountpoint. But you're right... this is not really needed! Thanks -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat Sent: Thursday, September 06, 2007 2:14 PM To: Poulos, Joe Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] zfs mount snapshot Poulos, Joe wrote: Hello, I am messing around with zfs snapshots, and was wondering if it is possible to mount a zfs snapshot. I would like to use this snapshot to backup to tape. Currently, I see the data in the following path: /testjp1/.zfs/snapshot/testsnapjp and for you to see it there it is already mounted. Why is doing the back up from that directory not an option ? -- Darren J Moffat
Re: [zfs-discuss] ZFS/WAFL lawsuit
[EMAIL PROTECTED] wrote on 09/06/2007 01:14:56 PM: It really is a shot in the dark at this point, you really never know what will happen in court (take the example of the recent court decision that all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the end of the day, if you waited for a sure bet on any technology or potential patent disputes you would not implement anything, ever. Do you have a reference for all data in RAM most be held. I guess we need to build COW RAM as well. It is only a magistrate ruling so far -- but I think it is expected to be upheld. http://www.law.com/jsp/article.jsp?id=1181639142254
Re: [zfs-discuss] ZFS/WAFL lawsuit
It's Columbia Pictures vs. Bunnell: http://www.eff.org/legal/cases/torrentspy/columbia_v_bunnell_magistrate_order.pdf The Register syndicated a Security Focus article that summarizes the potential impact of the court decision: http://www.theregister.co.uk/2007/08/08/litigation_data_retention/ -j On Thu, Sep 06, 2007 at 08:14:56PM +0200, [EMAIL PROTECTED] wrote: It really is a shot in the dark at this point, you really never know what will happen in court (take the example of the recent court decision that all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the end of the day, if you waited for a sure bet on any technology or potential patent disputes you would not implement anything, ever. Do you have a reference for all data in RAM most be held. I guess we need to build COW RAM as well. Casper
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Thu, Sep 06, 2007 at 01:18:27PM -0500, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote on 09/06/2007 01:14:56 PM: It really is a shot in the dark at this point, you really never know what will happen in court (take the example of the recent court decision that all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the end of the day, if you waited for a sure bet on any technology or potential patent disputes you would not implement anything, ever. Do you have a reference for all data in RAM most be held. I guess we need to build COW RAM as well. It is only a magistrate ruling so far -- but I think it is expected to be upheld. http://www.law.com/jsp/article.jsp?id=1181639142254 It sounds like the issue is that discoverable data (access logs) isn't being kept on disk. Demanding that such data be kept persistently is not the same as demanding that RAM be retained for discovery. Woo. Big deal. Not. If that's the correct reading of the story then the story is very badly written. Or am I misreading the story?
Re: [zfs-discuss] ZFS/WAFL lawsuit
If that's the correct reading of the story then the story is very badly written. Or am I misreading the story? Hmmm, the order itself goes on and on about RAM. I think the judge should have been clearer that the issue is the specific data, as opposed to generic RAM contents. Exactly the article's point -- rulings have consequences outside of the original case. The intent may have been to store logs for web server access (a logical and prudent request) but the ruling states that RAM, albeit working memory, is no different than other storage and must be kept for discovery. This is generalized because (as I understand it) the defense was arguing that logs are not turned on -- they do not exist -- and that was met with of course the running program has this information in RAM and you are disposing of it ad nauseam. The only saving grace for the ruling is that it is not a higher court.
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Thu, Sep 06, 2007 at 01:38:22PM -0500, [EMAIL PROTECTED] wrote: If that's the correct reading of the story then the story is very badly written. Or am I misreading the story? Hmmm, the order itself goes on and on about RAM. I think the judge should have been clearer that the issue is the specific data, as opposed to generic RAM contents. Exactly the articles point -- rulings have consiquences outside of the original case. The intent may have been to store logs for web server access (logical and prudent request) but the ruling states that RAM albeit working memory is no different then other storage and must be kept for discovery. This is generalized because (as I understand) the defense was arguing logs are not turned on -- they do not exist and that was met with of course the running program has this information in RAM and you are disposing of it ad nauseam. The only saving grace for the ruling is that it is not a higher court. Allowing for technical illiteracy in judges, I think the obvious interpretation is that discoverable data should be retained and that but it exists only in RAM is not a defense, and rightly so. Further, the implication for computer and software engineers is that operating systems and applications must allow for persisting discoverable data. That is generally the case, by the way. I don't see the implication that every write to a location in RAM must result in persistent logging, say, nor would all the lawyering in the world make that economically feasible. At the end of the day this order cannot have any significant impact on the industry. Of course, IANAL. Nico --
[zfs-discuss] MS Exchange storage on ZFS?
Has anyone here attempted to store their MS Exchange data store on a ZFS pool? If so, could you please tell me about your setup? A friend is looking for a NAS solution, and may be interested in a ZFS box instead of a netapp or something like that. Thanks.
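One wrinkle worth noting: Exchange generally wants block storage rather than a NAS file share, so on OpenSolaris builds of this era the usual route would be a ZFS volume (zvol) exported over iSCSI rather than CIFS/NFS. A hedged sketch with hypothetical pool/volume names; the shareiscsi property and iscsitadm existed in contemporary OpenSolaris builds, but check your release before relying on them:

```shell
# Hypothetical names: carve a 200 GB zvol out of pool "tank" for the
# Exchange store, then export it as an iSCSI target.
zfs create -V 200g tank/exchange-store
zfs set shareiscsi=on tank/exchange-store

# Verify the target is advertised by the iSCSI target daemon.
iscsitadm list target
```

The Windows host would then attach via its iSCSI initiator and format the LUN as NTFS, so ZFS provides the checksummed, snapshot-capable storage underneath while Exchange still sees a local disk.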
Re: [zfs-discuss] ZFS/WAFL lawsuit
On 9/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: This is my personal opinion and all, but even knowing that Sun encourages open conversations on these mailing lists and blogs it seems to falter common sense for people from @sun.com to be commenting on this topic. It seems like something users should be aware of, but if I were working at Sun I would feel a very strong urge to clear any public conversation about the topic with management. As always, I do appreciate the frank insight given from the sun folks -- I am just worried that you may be doing yourself a disservice talking about it. i completely disagree. i work for a fortune 50 company and we have a hell of a time with the legal department or other people who refuse to think it's okay to speak frankly about things in their company. obviously trade secrets and other things aside, i think it is ultimately beneficial and helps a company feel more accountable when it allows direct public exchange with employees and not through spin-educated marketeers or public relations folk. i don't expect anyone from sun on the zfs list would tell us anything other than their personal opinion. i appreciate it too. from reading forums and mailing lists, to having sun volunteer 6? people to help memcached continue to flourish, i think sun is a role model for a company who continues to profit but has figured out that certain things can be free and ultimately they are helping make more mature products and encourage innovation. they also would get the bonus of having things like memcached run better on their platforms, too :)
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Thu, Sep 06, 2007 at 04:16:50PM -0400, Jonathan Edwards wrote: On Sep 6, 2007, at 14:48, Nicolas Williams wrote: Allowing for technical illiteracy in judges, I think the obvious interpretation is that discoverable data should be retained and that "but it exists only in RAM" is not a defense, and rightly so. hang on .. let me take it out and give it to you .. I'm thinking this seems to get into v-chip territory, or otherwise providing a means for agencies to track information that might have passed through a system .. err, for the safety of our children and such :P That "but it existed only in RAM in my servers" should not be a defense for failing to retain discoverable evidence is distinct from the issue of what constitutes discoverable evidence. Should web site access logs be retained? That seems like a political issue to me, but if some statute says that they must be retained then "but a power outage ate my RAM's contents" shouldn't cut it as a defense. Isn't that obvious? Or must one be a lawyer to understand that down is up?
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Sep 6, 2007, at 14:48, Nicolas Williams wrote: Exactly the article's point -- rulings have consequences outside of the original case. The intent may have been to store logs for web server access (logical and prudent request) but the ruling states that RAM, albeit working memory, is no different than other storage and must be kept for discovery. This is generalized because (as I understand) the defense was arguing "logs are not turned on -- they do not exist" and that was met with "of course the running program has this information in RAM and you are disposing of it" ad nauseam. The only saving grace for the ruling is that it is not a higher court. Allowing for technical illiteracy in judges, I think the obvious interpretation is that discoverable data should be retained and that "but it exists only in RAM" is not a defense, and rightly so. hang on .. let me take it out and give it to you .. I'm thinking this seems to get into v-chip territory, or otherwise providing a means for agencies to track information that might have passed through a system .. err, for the safety of our children and such :P
Re: [zfs-discuss] ZFS/WAFL lawsuit
That "but it existed only in RAM in my servers" should not be a defense for failing to retain discoverable evidence is distinct from the issue of what constitutes discoverable evidence. But only if you were told you needed to retain the data in the first place. How can you be faulted for not keeping data you did not have a use for and nobody told you should be kept? Should web site access logs be retained? If you have them (note that with a load balancer front-end, it is unclear whether such logs do actually exist). The load balancer may know who accesses the data but only the backend may know which data is accessed. Casper
Re: [zfs-discuss] ZFS/WAFL lawsuit
On Thu, Sep 06, 2007 at 10:45:01PM +0200, [EMAIL PROTECTED] wrote: That "but it existed only in RAM in my servers" should not be a defense for failing to retain discoverable evidence is distinct from the issue of what constitutes discoverable evidence. But only if you were told you needed to retain the data in the first place. How can you be faulted for not keeping data you did not have a use for and nobody told you should be kept? That's the "whether web site access logs should be retained is a political (statutory) issue" part. Not knowing the law isn't a defense either. The order quite clearly refers to the log data in question as relevant (e.g., page 11) -- presumably it is because some law said so and the defendant should have known it. Or perhaps the judge isn't merely technically illiterate. Should web site access logs be retained? If you have them (note that with a load balancer front-end, it is unclear whether such logs do actually exist). The load balancer may know who accesses the data but only the backend may know which data is accessed. If the law says you should have them and this is implementable at reasonable cost, then you should.
Re: [zfs-discuss] cascading metadata modifications
Joerg Schilling wrote: Matthew Ahrens [EMAIL PROTECTED] wrote: Joerg Schilling wrote: The best documented one is the inverted meta data tree that allows wofs to write only one new generation node for one modified file while ZFS needs to also write new nodes for all directories above the file including the root directory in the fs. I believe you are thinking of indirect blocks, which are unrelated to the directory tree. In ZFS and most other filesystems, ancestor directories need not be modified when a file in a directory is modified. Isn't this against what I've read? If you write inode data to a different location than before, you need a way to tell the ancestor directory where the new data is located. No; directories point to the files (and directories) that they contain by object (inode) number, not by block pointer (physical disk location). When new contents are written to a file, its object (inode) number does not change. From what I've read so far and what I have in mind from a personal talk with Jeff Bonwick in September 2004, this is done by rewriting at least parts of the ancestor directory inode. That is incorrect; you must have misunderstood Jeff. Could you point me to where you've read this, so that it can be corrected or clarified? --matt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
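Matt's point -- that directory entries map names to object (inode) numbers, not to block locations -- can be demonstrated on any POSIX filesystem, ZFS included. Overwriting a file's contents allocates new blocks (in ZFS, via copy-on-write), but the object number survives, so the parent directory entry never needs rewriting. A minimal sketch (the file is a throwaway temp file, not anything ZFS-specific):

```shell
# Sketch: a file's inode (object) number survives a rewrite of its
# contents, so the directory entry -- a name-to-number mapping --
# does not have to be updated when the file's data moves on disk.
f=$(mktemp)
echo "first contents" > "$f"
before=$(ls -i "$f" | awk '{print $1}')
echo "completely new contents" > "$f"
after=$(ls -i "$f" | awk '{print $1}')
[ "$before" = "$after" ] && echo "same object number: $after"
rm -f "$f"
```

The blocks that *were* rewritten on the ZFS side -- the file's indirect blocks up to its dnode -- are what Joerg was likely thinking of, and they are per-file, not per-directory-tree.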
Re: [zfs-discuss] Consequences of adding a root vdev later?
Bill Sommerfeld wrote: On Wed, 2007-09-05 at 14:26 -0700, Richard Elling wrote: AFAIK, nobody has characterized resilvering, though this is about the 4th time this week someone has brought the topic up. Has anyone done work here that we don't know about? If so, please speak up :-) I haven't been conducting controlled experiments, but I have been moving a large pool around recently via a series of zpool replace operations, and so have been keeping an eye on a bunch of resilvering. The one conclusion I have so far is that, for the pool I'm moving, the time to complete a disk-replacement resilver seems to be largely independent of the number of disks being resilvered (so far, I've done batches of up to seven replacements) and in the same ballpark as a scrub. To be conservative, I'm moving only one disk per raidz group per pass. - Bill Thanks Bill, I've put together some tests and thus far the bottleneck is on the read side. It may take another week or so to finish my characterizations and analyze the data, though. -- richard ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
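For anyone wanting to repeat Bill's one-disk-per-raidz-group-per-pass approach, a hedged sketch of the loop involved follows. Pool and device names are made up, and the exact "resilver in progress" wording in `zpool status` output varies by release, so treat this as illustrative rather than copy-paste:

```shell
# Replace one disk per raidz group, then wait for the resilver to
# finish before starting the next pass. Names are hypothetical.
zpool replace tank c2t1d0 c2t1d14        # kick off the resilver

while zpool status tank | grep -q "resilver in progress"; do
    zpool status tank | grep "scrub:"    # progress / ETA line
    sleep 300
done

zpool status -v tank                     # confirm a clean finish
```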
Re: [zfs-discuss] New zfs pr0n server :)))
On 9/6/07, Diego Righi [EMAIL PROTECTED] wrote: ...anyway I wanted to make it the most silent I could, so I suspeded all the 10 disks Warning: unfounded speculation ahead. I've heard that this can cause performance issues and undue wear on the drive. The reasoning is that since the arm assembly accelerates in one direction, and there's not much force keeping the drive from rotating, it spins in the opposite direction a little bit. This isn't a huge problem by itself, but since the place the arm was aiming for is no longer there due to the counter-rotation, it has to seek a little bit in the other direction, generating more wear and tear on the bearings, more heat from the drive, and shorter drive lifetimes. I haven't seen any data to back this up or otherwise, but it does make some sense to me. The distance between tracks on a modern disk is ludicrously small - on the order of microns - so any small influence on where the head ends up seems likely to result in getting the wrong location and having to relocate. That said, it looks like quite a nice setup. Good choice on components, even if the memory isn't ECC ;-) Will ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS/WAFL lawsuit
Casper, Do you have a reference for "all data in RAM must be held"? I guess we need to build COW RAM as well. Is that one of those genetic hybrids? Regards... Sean. BTW: I remember the days when only RAS and CAS kept your data in memory intact ;-)

Re: [zfs-discuss] Zfs with storedge 6130
On 9/4/07 4:34 PM, Richard Elling [EMAIL PROTECTED] wrote: Hi Andy, my comments below... note that I didn't see zfs-discuss@opensolaris.org in the CC for the original... Andy Lubel wrote: Hi All, I have been asked to implement a zfs based solution using storedge 6130 and im chasing my own tail trying to decide how best to architect this. The storage space is going to be used for database dumps/backups (nearline storage). What is killing me is that I must mix hardware raid and zfs.. Why should that be killing you? ZFS works fine with RAID arrays. What kills me is the fact that I have a choice and it was hard to decide on which one was going to be at the top of the totem pole. From now on I only want JBOD! Works even better when I export each disk in my array as a single raid0 x14 then create the zpool :) #zpool create -f vol0 c2t1d12 c2t1d11 c2t1d10 c2t1d9 c2t1d8 c2t1d7 c2t1d6 c2t1d5 c2t1d4 c2t1d3 c2t1d2 c2t1d1 c2t1d0 spare c2t1d13 The storedge shelf has 14 FC 72gb disks attached to a solaris snv_68. I was thinking that since I cant export all the disks un-raided out to the solaris system that I would instead: (on the 6130) Create 3 raid5 volumes of 200gb each using the Sun_ZFS pool (128k segment size, read ahead enabled 4 disk). (On the snv_68) Create a raid0 using zfs of the 3 volumes from the 6130, using the same 128k stripe size. OK It seemed to me that if I was going to go for redundancy with a mixture of zfs and hardware raid that I would put the redundancy into the hardware raid and use striping at the zfs level, is that methodology the best way to think of it? The way to think about this is that ZFS can only correct errors when it has redundancy. By default, for dynamic stripes, only metadata is redundant. You can set the copies parameter to add redundancy on a per-file system basis, so you could set a different policy for data you really care about. Makes perfect sense. Since this is a nearline backup solution, I think we will be OK with a dynamic stripe. 
Once I get approved for thumper im definitely going to go raidz2. Since we are a huge Sun partner.. It should be easier than its been :( The only requirement ive gotten so far is that it can be written to and read from at a minimum of 72mb/s locally and 1gb/35sec via nfs. I suspect I would need at least 600gb of storage. I hope you have a test case for this. It is difficult for us to predict that sort of thing because there are a large number of variables. But in general, to get high bandwidth, you need large I/Os. That implies the application is responsible for it's use of the system, since the application is the source of I/Os. Its all going to be accessed via NFS and eventually iscsi, as soon as we figure out how to backup iscsi targets from the SAN itself. Anyone have any recommendations? The last time tried to create one 13 disk raid5 with zfs filesystem the performance was terrible via nfs.. But when I shared an nfs filesystem via a raidz or mirror things were much better.. So im nervous about doing this with only one volume in the zfs pool. 13 disk RAID-5 will suck. Try to stick with fewer devices in the set. See also http://mail.opensolaris.org/pipermail/zfs-discuss/2006-December/024194.html http://blogs.digitar.com/jjww/?itemid=44 I cant find a santricity download that will work with a 6130, but that's ok.. I just created 14 volumes per shelf :) hardware raid is so yesterday. That data is somewhat dated, as we now have the ability to put the ZIL on a different log device (Nevada b70 or later). This will be more obvious if the workload creates a lot of small files, less of a performance problem for large files. -- richard Got my hands on a Ram-San SSD 64gb and I'm using that for the zil.. Its crazy fast now. -Andy -- ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
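To make the redundancy trade-off discussed above concrete, here is a hedged sketch of the recommended layout: a plain dynamic stripe across array-backed LUNs (redundancy handled by the hardware RAID below), with ZFS-level data redundancy switched on only for the filesystems that matter. Pool, device, and dataset names are illustrative:

```shell
# Dynamic stripe over hardware-RAID LUNs; the array provides the
# redundancy, ZFS provides end-to-end checksums on top.
zpool create vol0 c2t1d0 c2t1d1 c2t1d2 spare c2t1d3

# By default a dynamic stripe only keeps metadata redundant. The
# copies property opts selected filesystems into redundant data,
# letting ZFS self-heal checksum errors on a per-dataset basis.
zfs create vol0/backups
zfs set copies=2 vol0/backups   # two copies of every data block
```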
Re: [zfs-discuss] Serious ZFS problems
Tim Spriggs wrote: Hello, I think I have gained sufficient fool status for testing the fool-proof-ness of zfs. I have a cluster of T1000 servers running Solaris 10 and two x4100's running an OpenSolaris dist (Nexenta) which is at b68. Each T1000 hosts several zones each of which has its own zpool associated with it. Each zpool is a mirrored configuration between an IBM N series NAS and another OSOL box serving iscsi from zvols. To move zones around, I move the zone configuration and then move the zpool from one T1000 to another and bring the zone up. Now for the problem. For sake of brevity: T1000-1: zpool export pool1 T1000-2: zpool export pool2 T1000-3: zpool import -f pool1 T1000-4: zpool import -f pool2 and other similar operations to move zone data around. Then I 'init 6'd all the T1000s. The reason for the init 6 was so that all of the pools would completely let go of the iscsi luns so I can remove static-configurations from each T1000. Upon reboot, pool1 has the following problem: WARNING: can't process intent log for pool1 During pool startup (spa_load()) zil_claim() is called on each dataset in the pool and the first thing it tries to do is open the dataset (dmu_objset_open()). If this fails then the "can't process intent log..." message is printed. So you have a pretty serious pool consistency problem. I guess more information is needed. Running zdb on the pool would be useful, or zdb -l device to display the labels (on an exported pool). and then attempts to export the pool fail with: cannot open 'pool1': I/O error pool2 can consistently make a T1000 (Sol1) kernel panic when imported. It will also make an x4100 panic (osol) Any ideas? Thanks in advance. -Tim
Re: [zfs-discuss] New zfs pr0n server :)))
Agreed ! However, you may be able to lower the sound ever so slightly more by staggering the drives so that every other one is upside down, spinning the opposite direction and thus minimizing accumulative rotational vibration. I had to make a makeshift temporary server when our NAS gateway device had a problem that required we reinitialize the array (after moving all data off of course). I used a Coolermaster CM Stacker with 16x750GB drives in SATA 4-drive carriers all going the same direction and the system made a horrendous oscillating buzz, as well as the occasional drive timeout warning from the RAID controller when the system was under high load (all drives part of single RAID6 array). After some thought, I decided to turn 2 of the 4 drive cages upside down so that the config had 4 drives spinning normally, 4 upside down, 4 normally, and finally another 4 upside down. The oscillation was gone completely as were the rare drive timeouts under load. Your laced setup places the drives in so much dampening that it might not make much of a difference but still, might as well take care of it now rather than later when it's all buttoned up if it starts buzz. It certainly couldn't hurt. -=dave - Original Message - From: Christopher Gibbs [EMAIL PROTECTED] To: Diego Righi [EMAIL PROTECTED] Cc: zfs-discuss@opensolaris.org Sent: Thursday, September 06, 2007 8:06 AM Subject: Re: [zfs-discuss] New zfs pr0n server :))) Wow, what a creative idea. And I'll bet that allows for much more airflow than the 4-in-3 drive cages do. Very nice. On 9/6/07, Diego Righi [EMAIL PROTECTED] wrote: Unfortunately it only comes with 4 adapters, bare metal adapters without any dampering /silencing and so on... 
...anyway I wanted to make it the most silent I could, so I suspended all the 10 disks (8 sata 320gb and a little 2,5 pata root disk) with a flexible wire, like I posted in this italian forum, the page is in italian, but the pictures show the concept well enough: http://www.pcsilenzioso.it/forum/showthread.php?t=2397 -- Christopher Gibbs Email / LDAP Administrator Web Integration Programming Abilene Christian University
Re: [zfs-discuss] Serious ZFS problems
Neil Perrin wrote: Tim Spriggs wrote: Hello, I think I have gained sufficient fool status for testing the fool-proof-ness of zfs. I have a cluster of T1000 servers running Solaris 10 and two x4100's running an OpenSolaris dist (Nexenta) which is at b68. Each T1000 hosts several zones each of which has its own zpool associated with it. Each zpool is a mirrored configuration between and IBM N series Nas and another OSOL box serving iscsi from zvols. To move zones around, I move the zone configuration and then move the zpool from one T1000 to another and bring the zone up. Now for the problem. For sake of brevity: T1000-1: zpool export pool1 T1000-2: zpool export pool2 T1000-3: zpool import -f pool1 T1000-4: zpool import -f pool2 and other similar operations to move zone data around. Then I 'init 6'd all the T1000s. The reason for the init 6 was so that all of the pools would completely let go of the iscsi luns so I can remove static-configurations from each T1000. upon reboot, pool1 has the following problem: WARNING: can't process intent log for pool1 During pool startup (spa_load()) zil_claim() is called on each dataset in the pool and the first thing it tries to do is open the dataset (dmu_objset_open()). If this fails then the can't process intent log... is printed. So you have a pretty serious pool consistency problem. I guess more information is needed. Running zdb on the pool would be useful, or zdb -l device to display the labels (on a exported pool). I can't export one of the pools. Here is the zpool status -x output for reference: # zpool status -x pool: zs-scat-dmz state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://www.sun.com/msg/ZFS-8000-8A scrub: none requested config: NAME STATE READ WRITE CKSUM zs-scat-dmzONLINE 0 072 mirror ONLINE 0 072 c2t015045BB9E322A0046CBDAAEd0 ONLINE 0 0 144 c1t80d0ONLINE 0 0 144 errors: 0 data errors, use '-v' for a list # zdb -l /dev/dsk/c2t015045BB9E322A0046CBDAAEd0 LABEL 0 LABEL 1 failed to unpack label 1 LABEL 2 LABEL 3 root @ T1000-3 zdb -l /dev/dsk/c1t80d0 LABEL 0 LABEL 1 failed to unpack label 1 LABEL 2 LABEL 3 I find that appending s0 to the device gives me better information: # zdb -l /dev/dsk/c1t80d0s0 LABEL 0 version=3 name='zs-scat-dmz' state=0 txg=1188440 pool_guid=949639000150966246 top_guid=15919546701143465277 guid=4814968902145809239 vdev_tree type='mirror' id=0 guid=15919546701143465277 whole_disk=0 metaslab_array=13 metaslab_shift=28 ashift=9 asize=42977198080 children[0] type='disk' id=0 guid=5615878807187049290 path='/dev/dsk/c2t015045BB9E322A0046CBDAAEd0s0' devid='id1,[EMAIL PROTECTED]/a' whole_disk=1 DTL=55 children[1] type='disk' id=1 guid=4814968902145809239 path='/dev/dsk/c1t80d0s0' devid='id1,[EMAIL PROTECTED]/a' whole_disk=1 DTL=50 LABEL 1 version=3 name='zs-scat-dmz' state=0 txg=1188440 pool_guid=949639000150966246 top_guid=15919546701143465277 guid=4814968902145809239 vdev_tree type='mirror' id=0 guid=15919546701143465277 whole_disk=0 metaslab_array=13 metaslab_shift=28 ashift=9 asize=42977198080 children[0] type='disk' id=0
Re: [zfs-discuss] MS Exchange storage on ZFS?
On 9/6/07 2:51 PM, Joe S [EMAIL PROTECTED] wrote: Has anyone here attempted to store their MS Exchange data store on a ZFS pool? If so, could you please tell me about your setup? A friend is looking for a NAS solution, and may be interested in a ZFS box instead of a netapp or something like that. I don't see why it wouldn't work, using zvols and iscsi. We use iscsi in our rather large exchange implementation - not backed by zfs but I don't see why it couldn't be. PS: no NAS solution will work for Exchange, will it? You have to use DAS/SAN or iscsi afaik. -Andy Thanks.
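For reference, a hedged sketch of the zvol-plus-iSCSI route Andy describes, using the shareiscsi property available in OpenSolaris builds of this era. The pool name, dataset name, and volume size are made up; the Exchange host would then log in to the target and format the LUN as NTFS:

```shell
# Sketch: export a ZFS volume (block device, not a filesystem) as an
# iSCSI LUN for an Exchange host. Names and sizes are illustrative.
zfs create -V 500g tank/exchange-store
zfs set shareiscsi=on tank/exchange-store   # auto-creates an iSCSI target
iscsitadm list target                       # confirm the target exists
```

Snapshots and clones of the zvol then come along for free on the storage side, though Exchange-consistent backups still need VSS or similar on the Windows side.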
Re: [zfs-discuss] New zfs pr0n server :)))
Will Murnane wrote: On 9/6/07, Diego Righi [EMAIL PROTECTED] wrote: ...anyway I wanted to make it the most silent I could, so I suspeded all the 10 disks Warning: unfounded speculation ahead. I've heard that this can cause performance issues and undue wear on the drive. The reasoning is that since the arm assembly accelerates in one direction, and there's not much force keeping the drive from rotating, it spins in the opposite direction a little bit. This isn't a huge problem by itself, but since the place the arm was aiming for is no longer there due to the counter-rotation, it has to seek a little bit in the other direction, generating more wear and tear on the bearings, more heat from the drive, and shorter drive lifetimes. I was highly skeptical, given that drives are designed to run in both horizontal and vertical orientations (and to switch between them without reformatting, e.g. see http://www.hitachigst.com/tech/techlib.nsf/techdocs/4236D595E2C5309F862572C500813B89/$file/C10K147_IG.pdf), and that the forces on the drive arm are quite different depending on the orientation (even though the arm is counterbalanced). However, a little searching turned up http://www.sidman.com/angaccel.htm, which tends to support the reasoning above: # Accordingly, known compensation or disturbance rejection systems, while # performing satisfactorily for applications using linear or unbalanced # rotary actuators, fail to address the problems of spindle imbalance # forces, external shock or vibration and windup in systems having a # balanced rotary actuator. This failure is due to the fact that only # angular acceleration of the HDA in the direction of actuator rotation # substantially causes positioning errors in systems that utilize a balanced # rotary actuator. and suggests using an acceleration sensor specifically to compensate for angular acceleration of the HDA in the direction of actuator rotation. Suspending the drive obviously does change this acceleration. 
Another possible argument is that if the drive moves slightly, so does the connector to it, which seems like a bad idea. -- David Hopwood [EMAIL PROTECTED] ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] New zfs pr0n server :)))
Unfortunately it only comes with 4 adapters, bare metal adapters without any damping/silencing and so on... ...anyway I wanted to make it the most silent I could, so I suspended all the 10 disks (8 sata 320gb and a little 2,5 pata root disk) with a flexible wire, like I posted in this italian forum, the page is in italian, but the pictures show the concept well enough: http://www.pcsilenzioso.it/forum/showthread.php?t=2397 Good work, that is a lot of suspension to do :) You should post it on the silentpcreview gallery forum to show off the masterpiece :)