Re: [zfs-discuss] Opensolaris is apparently dead
On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek and...@podzimek.org wrote: I did not say there is something wrong about published reports. I often read them. (Who doesn't?) However, there are no trustworthy reports on this topic yet, since Btrfs is unfinished. Let's see some examples: (1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1 My little few yen in this massacre: Phoronix usually compares apples with oranges and pigs with candies. So be careful. Disclaimer: I use Reiser4 A Killer FS™. :-) -- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] 64-bit vs 32-bit applications
On 17.08.10 04:17, Will Murnane wrote: On Mon, Aug 16, 2010 at 21:58, Kishore Kumar Pusukuri kish...@cs.ucr.edu wrote: Hi, I am surprised with the performances of some 64-bit multi-threaded applications on my AMD Opteron machine. For most of the applications, the performance of 32-bit version is almost same as the performance of 64-bit version. However, for a couple of applications, 32-bit versions provide better performance (running-time is around 76 secs) than 64-bit (running time is around 96 secs). Could anyone help me to find the reason behind this, please? [...] This list discusses the ZFS filesystem. Perhaps you'd be better off posting to perf-discuss or tools-gcc? That said, you need to provide more information. What compiler and flags did you use? What does your program (broadly speaking) do? What did you measure to conclude that it's slower in 64-bit mode? add to that: what OS are you using? Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recursion, n.: see 'Recursion'
Re: [zfs-discuss] How do I Import rpool to an alternate location?
On 08/16/10 10:38 PM, George Wilson wrote: Robert Hartzell wrote: On 08/16/10 07:47 PM, George Wilson wrote: The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount dataset name'. Then run 'zfs mount -a'. - George mounting the dataset failed because the /mnt dir was not empty and zfs mount -a failed I guess because the first command failed. It's possible that as part of the initial import that one of the mount points tried to create a directory under /mnt. You should first unmount everything associated with that pool, then ensure that /mnt is empty and mount the root filesystem first. Don't mount anything else until the root is mounted. - George Awesome! That worked... just recovered 100GB of data! Thanks for the help -- Robert Hartzell b...@rwhartzell.net RwHartzell.Net
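For anyone hitting the same problem, George's sequence can be sketched like this (a minimal sketch; the pool and dataset names are illustrative, and it must run as root on a system where the pool is not the active root):

```shell
# Import the root pool under an alternate root; nothing mounts yet,
# because the pool's root dataset is canmount=noauto.
zpool import -R /mnt rpool

# Make sure the alternate root is empty, or the mount will fail.
ls -A /mnt

# Mount the root filesystem first, then everything else.
zfs mount rpool/ROOT/snv_134
zfs mount -a
```

If a mount fails because the directory is not empty, export the pool, clean out the stray directories under /mnt, and repeat.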
Re: [zfs-discuss] Opensolaris is apparently dead
BM wrote: On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek and...@podzimek.org wrote: I did not say there is something wrong about published reports. I often read them. (Who doesn't?) However, there are no trustworthy reports on this topic yet, since Btrfs is unfinished. Let's see some examples: (1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1 My little few yen in this massacre: Phoronix usually compares apples with oranges and pigs with candies. So be careful. Disclaimer: I use Reiser4 A Killer FS™. :-) ZFS is the last word in file systems. Ben Rockwood's Cuddletech says Cuddletech: Use Unix or die. http://www.cuddletech.com/ Both sound pretty final. Might even be religious OS (operating systems) or FS war propaganda... :)
Re: [zfs-discuss] Opensolaris is apparently dead
On 16 Aug 2010, at 23:11, Andrej Podzimek wrote: My only point was: There is no published report saying that stability or *performance* of Btrfs will be worse (or better) than that of ZFS. This is because nobody can guess how Btrfs will perform once it's finished. (In fact nobody even knows *when* it is going to be finished. My guess was that it might not be considered experimental in one year's time, but that's just a shot in the dark.) I know that. btrfs will never be finished. same as ZFS. there is always space for an improvement. always. Sami
Re: [zfs-discuss] 64-bit vs 32-bit applications
On 08/17/10 09:43 PM, Joerg Schilling wrote: Garrett D'Amore garr...@nexenta.com wrote: It can be as simple as impact on the cache. 64-bit programs tend to be bigger, and so they have a worse effect on the i-cache. Unless your program does something that can inherently benefit from 64-bit registers, or can take advantage of the richer instruction set that is available to amd64 programs, you probably will see a degradation when running 64-bit programs. That said, I think a great number of programs *do* benefit from the larger registers, and from the richer ISA available to 64-bit programs. If you have an orthogonal architecture like sparc, a typical 64 bit program is indeed a bit slower than the same program in 32 bit. On Amd64, you have twice as many registers in 64 bit mode and this is the reason for a typical performance gain of ~ 30% for 64 bit applications. Do you have the data to back that up? Most things I've looked at on X64 are slower in 64 bit mode. -- Ian.
Re: [zfs-discuss] 64-bit vs 32-bit applications
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Will Murnane I am surprised with the performances of some 64-bit multi-threaded applications on my AMD Opteron machine. For most of the applications, the performance of 32-bit version is almost same as the performance of 64-bit version. However, for a couple of applications, 32-bit versions This list discusses the ZFS filesystem. Perhaps you'd be better off posting to perf-discuss or tools-gcc? That said, you need to provide more information. What compiler and flags did you use? What does your program (broadly speaking) do? What did you measure to conclude that it's slower in 64-bit mode? Not only that, for most things the 32 vs 64bit architectures are expected to perform about the same. The 64bit architecture exists mostly for higher memory addressing bits, not for twice the performance. YMMV.
Re: [zfs-discuss] 64-bit vs 32-bit applications
Ian Collins i...@ianshome.com wrote: If you have an orthogonal architecture like sparc, a typical 64 bit program is indeed a bit slower than the same program in 32 bit. On Amd64, you have twice as many registers in 64 bit mode and this is the reason for a typical performance gain of ~ 30% for 64 bit applications. Do you have the data to back that up? Most things I've looked at on X64 are slower in 64 bit mode. Did you test on sparc or amd64? See above... Jörg -- EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin j...@cs.tu-berlin.de(uni) joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
Re: [zfs-discuss] Opensolaris is apparently dead
On Aug 16, 2010, at 11:17 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On 8/16/10 9:57 AM -0400 Ross Walker wrote: No, the only real issue is the license and I highly doubt Oracle will re-release ZFS under GPL to dilute its competitive advantage. You're saying Oracle wants to keep zfs out of Linux? I would if I were them, wouldn't you? Linux has already eroded the low-end of the Solaris business model; if Linux had ZFS it could possibly erode the middle tier as well. Solaris with only high-end customers wouldn't be very profitable (unless seriously marked up in price), thus unsustainable as a business. Sun didn't get this, but Oracle does. -Ross
Re: [zfs-discuss] ZFS development moving behind closed doors
On Sun, Aug 15, 2010 at 9:13 AM, David Magda dma...@ee.ryerson.ca wrote: On Aug 14, 2010, at 19:39, Kevin Walker wrote: I once watched a video interview with Larry from Oracle, this ass rambled on about how he hates cloud computing and that everyone was getting into cloud computing and in his opinion no one understood cloud computing, apart from him... :-| If this is the video you're talking about, I think you misinterpreted what he meant: Cloud computing is not only the future of computing, but it is the present, and the entire past of computing is all cloud. [...] All it is is a computer connected to a network. What do you think Google runs on? Do you think they run on water vapour? It's databases, and operating systems, and memory, and microprocessors, and the Internet. And all of a sudden it's none of that, it's the cloud. [...] All the cloud is, is computers on a network, in terms of technology. In terms of business model, you can say it's rental. All SalesForce.com was, before they were cloud computing, was software-as-a-service, and then they became cloud computing. [...] Our industry is so bizarre: they change a term and think they invented technology. http://www.youtube.com/watch?v=rmrxN3GWHpM#t=45m I don't see anything inaccurate in what he said. Indeed; even waaay before the SaaSillyness, they were known as service bureaus: http://drcoddwasright.blogspot.com/2009/07/cloud-lucy-in-sky-with-razorblades.html
Re: [zfs-discuss] Opensolaris is apparently dead
I did not say there is something wrong about published reports. I often read them. (Who doesn't?) However, there are no trustworthy reports on this topic yet, since Btrfs is unfinished. Let's see some examples: (1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1 My little few yen in this massacre: Phoronix usually compares apples with oranges and pigs with candies. So be careful. Nobody said one should blindly trust Phoronix. ;-) In fact I clearly said the contrary. I mentioned the famous example of a totally absurd benchmark that used crippled and crashing code from the ZEN patchset to benchmark Reiser4. Disclaimer: I use Reiser4 A Killer FS™. :-) I had been using Reiser4 for quite a long time before Hans Reiser was convicted for the murder of his wife. There was absolutely no (objective technical) reason to make a change afterwards. :-) As far as speed is concerned, Reiser4 really is a Killer FS (in a very positive sense). It is now maintained by Edward Shishkin, a former Namesys employee. Patches are available for each kernel version. (http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/) Admittedly, with the advent of Ext4 and Btrfs, Reiser4 is not so brilliant any more. Reiser4 could have been a much larger project with many features known from today's ZFS/Btrfs (encryption, compression and perhaps even snapshots and subvolumes), but long disputes around kernel integration and the events around Hans Reiser blocked the whole effort and Reiser4 lost its advantage. Andrej
Re: [zfs-discuss] Opensolaris is apparently dead
On 17-Aug-10, at 1:05 PM, Andrej Podzimek wrote: I did not say there is something wrong about published reports. I often read them. (Who doesn't?) However, there are no trustworthy reports on this topic yet, since Btrfs is unfinished. Let's see some examples: (1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1 My little few yen in this massacre: Phoronix usually compares apples with oranges and pigs with candies. So be careful. Nobody said one should blindly trust Phoronix. ;-) In fact I clearly said the contrary. I mentioned the famous example of a totally absurd benchmark that used crippled and crashing code from the ZEN patchset to benchmark Reiser4. Disclaimer: I use Reiser4 A Killer FS™. :-) I had been using Reiser4 for quite a long time before Hans Reiser was convicted for the murder of his wife. There was absolutely no (objective technical) reason to make a change afterwards. :-) Thank you, well said!! The 'killer' gag wasn't funny the first time and it certainly isn't any funnier now. It's in extremely poor taste, apart from being childish. As far as speed is concerned, Reiser4 really is a Killer FS (in a very positive sense). Reiser3 is fast and solid too; I, like others, have used it happily on dozens of servers for many years and continue to do so. (At least, where I can't use ZFS :-X) It is now maintained by Edward Shishkin, a former Namesys employee. Who is also sharing his expertise with the btrfs project, a very positive outcome. --Toby Patches are available for each kernel version. (http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/) Admittedly, with the advent of Ext4 and Btrfs, Reiser4 is not so brilliant any more. 
Reiser4 could have been a much larger project with many features known from today's ZFS/Btrfs (encryption, compression and perhaps even snapshots and subvolumes), but long disputes around kernel integration and the events around Hans Reiser blocked the whole effort and Reiser4 lost its advantage. Andrej
Re: [zfs-discuss] Opensolaris is apparently dead
On Tue, 17 Aug 2010, Ross Walker wrote: And there lies the problem, you need the agreement of all copyright holders in a GPL project to change its licensing terms and some just will not budge. Joerg is correct that CDDL code can legally live right alongside the GPLv2 kernel code and run in the same program. It is a mind set issue with the Linux developers rather than a legal one. If ZFS was not tied to a big greedy controlling company then the Linux kernel developers would be more likely to change their mind. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [zfs-discuss] Opensolaris is apparently dead
On 8/17/10 9:14 AM -0400 Ross Walker wrote: On Aug 16, 2010, at 11:17 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On 8/16/10 9:57 AM -0400 Ross Walker wrote: No, the only real issue is the license and I highly doubt Oracle will re-release ZFS under GPL to dilute its competitive advantage. You're saying Oracle wants to keep zfs out of Linux? I would if I were them, wouldn't you? I'm not sure either way. If Oracle really wants to keep it out of Linux, that means it wants to keep it out of FreeBSD also. Either way, to keep it out it needs to make it closed source, and as they say, the genie is already out of the bottle. I don't agree that there's a licensing problem, but that doesn't matter. Distributions, which is how nearly EVERYONE uses Linux, are free to include zfs on their own. All the major distributions already patch the kernel heavily.
Re: [zfs-discuss] Opensolaris is apparently dead
On 8/17/10 3:31 PM +0900 BM wrote: On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek and...@podzimek.org wrote: Disclaimer: I use Reiser4 A Killer FS™. :-) LOL
Re: [zfs-discuss] 64-bit vs 32-bit applications
On 08/18/10 12:05 AM, Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: If you have an orthogonal architecture like sparc, a typical 64 bit program is indeed a bit slower than the same program in 32 bit. On Amd64, you have twice as many registers in 64 bit mode and this is the reason for a typical performance gain of ~ 30% for 64 bit applications. Do you have the data to back that up? Most things I've looked at on X64 are slower in 64 bit mode. Did you test on sparc or amd64? See above... I said x64. Some applications benefit from the extended register set and function call ABI, others suffer due to increased sizes impacting the cache. -- Ian.
Re: [zfs-discuss] Opensolaris is apparently dead
Garrett D'Amore garr...@nexenta.com wrote: On Tue, 2010-08-17 at 14:04 -0500, Bob Friesenhahn wrote: On Tue, 17 Aug 2010, Ross Walker wrote: And there lies the problem, you need the agreement of all copyright holders in a GPL project to change its licensing terms and some just will not budge. Joerg is correct that CDDL code can legally live right alongside the GPLv2 kernel code and run in the same program. It is a mind set issue with the Linux developers rather than a legal one. My understanding is that no, this is not possible. IANAL, but I think the provisions of CDDL with respect to granting patent license and choice of law venue are incompatible with GPL's stipulations. Conventional wisdom and detailed analysis done by lawyers is that you can't mix and match these licenses this way. You are obviously mistaken. The text you quote was not written by lawyers but by laymen. If you did ask lawyers, you would get a confirmation for my statements. Jörg -- EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin j...@cs.tu-berlin.de(uni) joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
Re: [zfs-discuss] Opensolaris is apparently dead
On Tue, Aug 17, 2010 at 3:01 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On 8/17/10 9:14 AM -0400 Ross Walker wrote: On Aug 16, 2010, at 11:17 PM, Frank Cusack frank+lists/z...@linetwo.net wrote: On 8/16/10 9:57 AM -0400 Ross Walker wrote: No, the only real issue is the license and I highly doubt Oracle will re-release ZFS under GPL to dilute its competitive advantage. You're saying Oracle wants to keep zfs out of Linux? I would if I were them, wouldn't you? I'm not sure either way. If Oracle really wants to keep it out of Linux, that means it wants to keep it out of FreeBSD also. Either way, to keep it out it needs to make it closed source, and as they say, the genie is already out of the bottle. I don't agree that there's a licensing problem, but that doesn't matter. Distributions, which is how nearly EVERYONE uses Linux, are free to include zfs on their own. All the major distributions already patch the kernel heavily. FreeBSD has nowhere near the installed base of Linux. There is also absolutely 0 Enterprise support for FreeBSD. ZFS will not change that. It is not a threat to Oracle. --Tim
Re: [zfs-discuss] 64-bit vs 32-bit applications
Ian Collins i...@ianshome.com wrote: On 08/18/10 12:05 AM, Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: If you have an orthogonal architecture like sparc, a typical 64 bit program is indeed a bit slower than the same program in 32 bit. On Amd64, you have twice as many registers in 64 bit mode and this is the reason for a typical performance gain of ~ 30% for 64 bit applications. Do you have the data to back that up? Most things I've looked at on X64 are slower in 64 bit mode. Did you test on sparc or amd64? See above... I said x64. You unfortunately did not; this is why I asked. Some applications benefit from the extended register set and function call ABI, others suffer due to increased sizes impacting the cache. Well, please verify your claims as they do not match my experience. It may be that you are right in case you don't compile with optimization. I compile with a high level of optimization and all my applications run at least as fast as in 32 bit mode (as mentioned, this does not apply to sparc). BTW: this applies to Sun Studio. Jörg -- EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin j...@cs.tu-berlin.de(uni) joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
Re: [zfs-discuss] Narrow escape with FAULTED disks
Hi Mark, I would recheck with fmdump to see if you have any persistent errors on the second disk. The fmdump command will display faults and fmdump -eV will display errors (persistent faults that have turned into errors based on some criteria). If fmdump -eV doesn't show any activity for that second disk, then review /var/adm/messages or iostat -En for driver-level resets and so on. Thanks, Cindy On 08/16/10 18:53, Mark Bennett wrote: Nothing like a heart in mouth moment to shave tears from your life. I rebooted a snv_132 box in perfect health, and it came back up with two FAULTED disks in the same vdisk group. Everything I found in an hour on Google basically said your data is gone. All 45Tb of it. A postmortem of fmadm showed a single disk failed with smart predictive failure. No indication why the second failed. I don't give up easily, and it is now back up and scrubbing - no errors so far. I checked both the drives were readable, so it didn't seem to be a hardware fault. I moved one into a different server and ran a zpool import to see what it made of it. The disk was ONLINE, and its vdisk buddies were unavailable. Ok, so I moved the disks into different bays and booted from the snv_134 cdrom. Ran zpool import and the zpool came back with everything online. That was encouraging, so I exported it and booted from the original 132 boot drive. Well, it came back, and at 1:00AM I was able to get back to the original issue I was chasing. So, don't give up hope when all hope appears to be lost. Mark. Still an OpenSolaris fan, keen to help the community achieve a 2010 release on its own.
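Cindy's checks, collected in one place as a sketch (Solaris commands; the device name is illustrative):

```shell
# Faults the FMA diagnosis engine has acted on
fmdump

# Underlying error telemetry (persistent faults that became errors)
fmdump -eV | less

# Per-device soft/hard/transport error counters
iostat -En

# Driver-level resets, timeouts and retries for a suspect disk
grep -i c0t1d0 /var/adm/messages
```

If fmdump -eV is quiet for the second disk but the messages file shows resets, the problem is more likely cabling, the controller, or the enclosure than the drive itself.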
Re: [zfs-discuss] 64-bit vs 32-bit applications
On 08/18/10 08:40 AM, Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: On 08/18/10 12:05 AM, Joerg Schilling wrote: Ian Collins i...@ianshome.com wrote: If you have an orthogonal architecture like sparc, a typical 64 bit program is indeed a bit slower than the same program in 32 bit. On Amd64, you have twice as many registers in 64 bit mode and this is the reason for a typical performance gain of ~ 30% for 64 bit applications. Do you have the data to back that up? Most things I've looked at on X64 are slower in 64 bit mode. Did you test on sparc or amd64? See above... I said x64. You unfortunately did not; this is why I asked. About half a dozen lines up: Most things I've looked at on X64 are slower in 64 bit mode. Some applications benefit from the extended register set and function call ABI, others suffer due to increased sizes impacting the cache. Well, please verify your claims as they do not match my experience. I will. It may be that you are right in case you don't compile with optimization. I do. -- Ian.
Re: [zfs-discuss] Opensolaris is apparently dead
gd == Garrett D'Amore garr...@nexenta.com writes: Joerg is correct that CDDL code can legally live right alongside the GPLv2 kernel code and run in the same program. gd My understanding is that no, this is not possible. GPLv2 and CDDL are incompatible: http://www.fsf.org/licensing/education/licenses/index_html/#GPLIncompatibleLicenses however Linus's ``interpretation'' of the GPL considers that 'insmod' is ``mere aggregation'' and not ``linking'', but subject to rules of ``bad taste''. Although this may sound ridiculous, there are blob drivers for wireless chips, video cards, and storage controllers relying on this ``interpretation'' for over a decade. I think a ZFS porting project could do the same and end up emitting the same warning about a ``tainted'' kernel that proprietary modules do: http://lwn.net/Articles/147070/ the quickest link I found of Linus actually speaking about his ``interpretation'', his thoughts are IMHO completely muddled (which might be intentional): http://lkml.org/lkml/2003/12/3/228 thus ultimately I think the question of whether it's legal or not isn't very interesting compared to ``is it moral?'' (what some of us might care about), and ``is it likely to survive long enough and not blow back in your face fiercely enough that it's a good enough business case to get funded somehow?'' (the question all the hardware manufacturers shipping blob drivers presumably asked themselves) My own view on blob modules is: * that it's immoral, and that Linus is both taking the wrong position and doing it without authority. Even if his position is ``everyone, please let's not fight,'' in practice that is a strong position favouring GPL violation, and his squirrelyness may look like taking a soft view but in practice it throws so much sand into the debate it ends up being actually a much stronger position than saying outright, ``I think insmod is mere aggregation.'' My copyright shouldn't have to bow to your celebrity. 
* and secondly that it does make business sense and is unlikely to cause any problems, because no one is able to challenge his authority. Whatever is the view on binary blob modules, I think it's the same view on ZFS w.r.t. the law, but not necessarily the same view w.r.t. morality or business, because the copyright law itself is immoral according to the views of many and the business risk depends on how much you piss people off.
Re: [zfs-discuss] Opensolaris is apparently dead
Oh, as to insmod, I think the question is quite cloudy indeed, since you get into questions about what forms a derivative product. I was looking at the original statement of the two licenses running together in the same program far too simply; of course, when considered with dynamic linking (which insmod may be considered to be a form of), the boundaries of what is the program, and what is a derivative work, are very murky. Unfortunately, AFAIK, the boundaries have never been tested. I think asking a non-technical court to judge the differences between static, dynamic, and insmod style linking is probably going to be difficult. - Garrett On Tue, 2010-08-17 at 17:07 -0400, Miles Nordin wrote: [...]
Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?
On Aug 4, 2010, at 7:15 AM, Dmitry Sorokin wrote: I'm in the same situation as Darren - my log SSD device died completely. Victor, could you please explain how you mocked up the log device in a file so zpool status started to show the device with UNAVAIL status? I lost the latest zpool.cache file, but I was able to recover the GUID of the log device from the backup copy of zpool.cache. Well, that's not very difficult. You need to write a proper VDEV configuration with a good checksum into at least one ZFS label of some kind of new device - either disk or file. If you have a backup zpool.cache with the necessary details then it is not that difficult. Btw, in Darren's case we almost succeeded - it was possible to import the pool with a mocked up log device, but due to corruption in metaslabs it panicked almost immediately. For some reason setting aok/zfs_recover did not help either. The last option was to try readonly import, but I was not able to prepare the necessary bits quickly enough and Darren decided to stop pursuing recovery and revert to the partial backups he had. I'm almost sure that readonly import would have let him get everything back. In future it should be easier as ZFS readonly import support is now integrated into the source code thanks to George Wilson's efforts. regards victor Thanks, Dmitry -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor Latushkin Sent: Tuesday, August 03, 2010 7:09 PM To: Darren Taylor Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed? On Aug 4, 2010, at 12:23 AM, Darren Taylor wrote: Hi George, I think you are right. The log device looks to have suffered a complete loss, there is no data on the disk at all. The log device was an acard ram drive (with battery backup), but somehow it has faulted, clearing all data. 
--victor gave me this advice, and queried about the zpool.cache--
Looks like there's a hardware problem with c7d0, as it appears to contain garbage. Do you have a zpool.cache with this pool configuration available? Besides containing garbage, the former log device now appears to have different geometry and is not able to read in the higher LBA ranges. So I'd say it is broken. c7d0 was the log device. I'm unsure what the next step is, but I'm assuming there is a way to grab the drive's original config from the zpool.cache file and apply it back to the drive? I mocked up the log device in a file, and that made zpool import happier:

bash-4.0# zpool import
  pool: tank
    id: 15136317365944618902
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank          DEGRADED
          raidz1-0    ONLINE
            c6t4d0    ONLINE
            c6t5d0    ONLINE
            c6t6d0    ONLINE
            c6t7d0    ONLINE
          raidz1-1    ONLINE
            c6t0d0    ONLINE
            c6t1d0    ONLINE
            c6t2d0    ONLINE
            c6t3d0    ONLINE
        cache
          c8d1
        logs
          c13d1s0     UNAVAIL  cannot open

bash-4.0# zpool import -fR / tank
cannot import 'tank': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of July 21, 2010 03:49:50 AM NZST
        should correct the problem.  Approximately 91 seconds of data
        must be discarded, irreversibly.  After rewind, several
        persistent user-data errors will remain.  Recovery can be
        attempted by executing 'zpool import -F tank'.  A scrub of
        the pool is strongly recommended after recovery.
bash-4.0#

So if you are happy with the results, you can perform the actual import with 'zpool import -fF -R / tank'. You should then be able to remove the log device completely.
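The recovery flow in this exchange can be summarised as a command sketch. The pool name and device names are the ones from this thread; treat the sequence as an outline rather than a recipe, and do not run the rewind import until the dry-run scan shows a data-loss window you can accept:

```shell
# Dry-run scan: shows the pool's state and which devices are missing,
# without importing anything.
zpool import

# Inspect the four vdev labels on the suspect log device; garbage or
# unreadable labels here confirm the device itself is gone.
zdb -l /dev/rdsk/c13d1s0

# Rewind import: -f forces past the "last accessed by another system"
# check, -F rewinds to the last consistent transaction group
# (discarding the most recent writes), -R sets an alternate root.
zpool import -fF -R / tank

# Once imported, drop the dead log device from the pool
# (separate log device removal requires a sufficiently recent pool version).
zpool remove tank c13d1s0
```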
regards victor ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Kernel Panic on zpool clean
On Jul 9, 2010, at 4:27 AM, George wrote: I think it is quite likely to be possible to get readonly access to your data, but this requires modified ZFS binaries. What is your pool version? What build do you have installed on your system disk or available as a LiveCD? For the record - using ZFS readonly import code backported to build 134 and slightly modified to account for the specific corruptions of this case, we've been able to import the pool in readonly mode, and George is now backing up his data. As soon as that completes I hope to have a chance to take another look and see what else we can learn from this case. regards victor [Prompted by an off-list e-mail from Victor asking if I was still having problems] Thanks for your reply, and apologies for not having replied here sooner - I was going to try something myself (which I'll explain shortly) but have been hampered by a flaky cdrom drive - something I won't have a chance to sort out until the weekend. In answer to your question, the installed system is running 2009.06 (b111b) and the LiveCD I've been using is b134. The problem with the installed system crashing when I tried to run zpool clean is, I believe, caused by http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794136 which makes me think that the same command run from a later version should work fine. I haven't had any success doing this, though, and I believe the reason is that several of the ZFS commands won't work if the hostid of the machine that last accessed the pool differs from that of the current system (and the pool is exported/faulted), as happens when using a LiveCD. Where I was getting errors that 'storage2' does not exist, I found it was writing errors to the syslog saying the pool could not be loaded as it was last accessed by another system. I tried to get round this using the DTrace hostid-changing script I mentioned in one of my earlier messages, but this seemed not to be able to fool system processes.
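The errors George keeps hitting stem from ZFS recording the importing system's hostid in the pool and comparing it with the local hostid at import time; a LiveCD boots with a different hostid, so the check trips. A small sketch of checking what the local system will present (the hostid(1) utility exists on both Solaris and Linux; whether /etc/hostid is consulted depends on the OS and build, so treat that part as illustrative):

```shell
# Print the 32-bit host identifier that ZFS compares against the
# value recorded in the pool ("last accessed by another system").
hostid

# Some systems derive the hostid from /etc/hostid when present, which
# is what makes "setting the hostid under /etc" after a reinstall a
# plausible way to match the original system's identity.
cat /etc/hostid 2>/dev/null || echo "(no /etc/hostid on this system)"
```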
I also tried exporting the pool from the installed system to see if that would help, but unfortunately it didn't. After having exported the pool, zpool import run on the installed system reported The pool can be imported despite missing or damaged devices., however when trying to import it (with or without -f) it refused, saying one or more devices is currently unavailable. When booting the LiveCD after having exported the pool, it still gave errors about the pool having been last accessed by another system. I couldn't spot any method of modifying the LiveCD image to have a particular hostid, so my plan has therefore been to try installing b134 onto the system, setting the hostid under /etc, and seeing if things then behave in a more straightforward fashion - which I haven't managed yet due to the cdrom problems. I also mentioned in one of my earlier e-mails that I was confused that the installed system mentioned an unreadable intent log but the LiveCD said the problem was corrupted metadata. This seems to be caused by the functions print_import_config and print_status_config having slightly different case statements, and not by a difference in the pool itself. Hopefully I'll be able to complete the reinstall soon and see whether that fixes things or there's a deeper problem. Thanks again for your help, George -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss