Re: [zfs-discuss] Postmortem - file system recovered [SEC=UNCLASSIFIED]
On Sun, Aug 29, 2010 at 08:09:22PM -0700, Brian wrote:
> The fix:
> """the trick was to modify mode in in-kernel buffer containing znode_phys_t and then force ZFS to flush it out to disk."""

Can you give an example of how you did this?

-Alex
[zfs-discuss] Postmortem - file system recovered
I am writing to follow up on my post regarding a file system that became inaccessible despite a clean pool: http://opensolaris.org/jive/thread.jspa?messageID=494651

Several people helped, but Victor Latushkin was instrumental in diagnosing and fixing the issue.

Background: The file system became inaccessible shortly after I began using NexentaStor 3.03. I had been executing several recursive chown/chgrp/chmod commands. I've tested the computer several times with Memtest and have never had any detectable hardware issues.

The symptom: An ls command yielded a strange line for this file system, and I could not cd into the directory/filesystem:
"""?- ? ? ? ? ? myfilesystemname"""

The problem: """it has mode bits set that indicated that it is FIFO, character device and directory at the same time"""

The fix: """the trick was to modify mode in in-kernel buffer containing znode_phys_t and then force ZFS to flush it out to disk."""

Outstanding questions:

1) Is there a bug in ZFS or NexentaStor that resulted in the mode bits being set incorrectly?

2) When the mode bits were set to an invalid state, why did ZFS react ambiguously instead of reporting a clear error? Why not report this when encountered, or at least provide a tool to scrub the file system (not the pool) looking for invalid data? Without Victor's help I would never in 100 years have discovered what the issue was.

3) Could this error be recovered from automatically? This was the root of a ZFS file system, and regardless of the mode bits it was probably clear that it should be treated as a directory.

Thanks for everyone's help with diagnosing this.

-brian
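An illustrative aside on the symptom: the """?""" in the ls output follows directly from how the POSIX file-type field is encoded. Below is a minimal sketch; the zdb invocation is for inspection only (not the in-kernel fix), and the dataset name and object number in it are hypothetical.

  # The file type lives in the S_IFMT field (octal 0170000) of the mode:
  # S_IFIFO is 0010000, S_IFCHR is 0020000, S_IFDIR is 0040000.
  # With all three bits set at once the field holds 0070000, which matches
  # no single valid type, so ls falls back to printing '?'.
  printf 'combined type bits: %o\n' $(( 010000 | 020000 | 040000 ))
  # -> combined type bits: 70000

  # One way to inspect the on-disk mode of a suspect znode is zdb's
  # per-object dump (dataset name and object number are hypothetical;
  # a healthy directory shows a mode like 40755, i.e. S_IFDIR | 0755):
  zdb -dddd tank/myfilesystemname 3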
Re: [zfs-discuss] native ZFS on Linux
> "aa" == Anurag Agarwal writes:

aa> Every one being part of beta program will have access to
aa> source code

...and the right to redistribute it if they like, which I think is also guaranteed by the license.

Yes, I agree a somewhat formal beta program could be smart for this type of software, which can lose large amounts of data, and where reproducing problems isn't easy, because debugging in the way analogous to other software requires shipping around multi-terabyte, possibly-confidential images; so you'd like competent testers so you can skip this without becoming too frustrated.

But I don't see how anything fitting the definition of ``closed'' is possible with free software. Even just asking participants, ``please don't leak our software outside the beta, even though you've the legal right to do so. If you do leak it, we'll be unhappy,'' is an implicit threat to retaliate (e.g. by excluding people from further beta releases, which you'll likely be making in a continuous stream). So the word ``closed'' alone, even without any further discussion, is likely to have a chilling effect on the software freedom of the beta participants, and I think this effect is absolutely intended by you, and that it's wrong.

On one hand it's sort of a fine point, but on the other, for the facts on the ground it can matter quite a lot.

Thanks for the effort! And for clarifying that you will always release matching source along with every binary release you make!
Re: [zfs-discuss] VM's on ZFS - 7210
> "en" == Eff Norwood writes:

en> http://www.anandtech.com/show/2738/8

But a few pages later: http://www.anandtech.com/show/2738/25

So, as you say, ``with all major SSDs in the role of a ZIL you will eventually not be happy'' is true, but you seem to have accidentally left out the ``EXCEPT INTEL!'' Oops! Funnier still, the EXCEPT INTEL is right there in exactly the article YOU cited.

However, that's not the end of it. Searching this very mailing list for 'anandtech' I found this cited about ten times: http://www.anandtech.com/show/2899/8

anandtech does not think TRIM / dirty drives are a problem any longer. You might want to redo whatever tests you did (or else read newer anandtech articles). I've made the same mistake of passing around anandtech links without keeping up with their latest posts, but the thing is, that link debunking your ideas was posted on this list *so* *many* *times* and over such a long interval!

You can also use the anandtech articles as a point of reference for how you might write up your ``extensive testing'' of ``all major'' SSDs in a way that will ``assure'' people your conclusions are correct. (HINT: list the SSDs you tested. Describe the testing method. Results would be nice, too, but the first two were missing from your post. They help a lot, and do not take much time to include, though leaving them out does help FUD spread further if you are trying to promote this ``DDRDrive'' with the silly external power brick.)

en> I can't think of an easy way to measure pages that have not
en> been consumed since it's really an SSD controller function
en> which is obfuscated from the OS,

Yeah, SSDs are largely just a different way of selling proprietary software, but I guess a lot of ``hardware'' is.
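For readers who want to try the configuration under discussion: putting an SSD ``in the role of a ZIL'' means attaching it to the pool as a dedicated log device. A minimal sketch, with hypothetical pool and device names:

  # Attach the SSD as a dedicated log (slog) device for the pool.
  zpool add tank log c4t0d0
  # Verify: the device appears under a separate "logs" section.
  zpool status tank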
Re: [zfs-discuss] ufs root to zfs root liveupgrade?
hi,

My user error; it is fine now. I may have used *reboot* and not *init 6*.

So far I have tried the following:
1) with two hdds, ufsroot liveUG to zfsroot works
2) with one hdd but a different slice, ufsroot liveUG to zfsroot works

GRUB does provide the choice of ufsroot and zfsroot. *init 6* seems to be very important, to update menu.lst before reboot.

regards

On 8/28/2010 5:17 PM, Ian Collins wrote:
> On 08/28/10 11:39 PM, LaoTsao 老曹 wrote:
>> hi all
>> Trying to learn how UFS root to ZFS root liveUG works. I downloaded
>> the vbox image of s10u8; it comes up as UFS root.
>> - added a new disk (16GB)
>> - created zpool rpool
>> - ran lucreate -n zfsroot -p rpool
>> - ran luactivate zfsroot
>> - ran lustatus; it does show zfsroot will be active on next boot
>> - init 6
>> But it comes up with UFS root; lustatus shows ufsroot active, and
>> zpool rpool is mounted but not used by boot.
>
> As Casper said, you have to change the boot drive. The easiest way to
> migrate to ZFS is to use a spare slice on the original drive for the
> new pool. You can then mirror that off to another drive.
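The migration sequence from the quoted thread, consolidated into one hedged sketch (the disk name below is hypothetical; substitute your spare disk or slice):

  # Create the new root pool on the spare disk or slice.
  zpool create rpool c0t1d0s0
  # Copy the current UFS boot environment into the pool.
  lucreate -n zfsroot -p rpool
  # Mark the ZFS boot environment active for the next boot.
  luactivate zfsroot
  # Verify that zfsroot is flagged active on reboot.
  lustatus
  # Use init 6, not reboot, so that GRUB's menu.lst gets updated.
  init 6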