Re: [zfs-discuss] Kernel panic on ZFS import - how do I recover?
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
although not in that thread, but was not sure it applied to my case. A scrub
is running now. Thank you very much!

-Scott

On 9/23/10 7:07 PM, David Blasingame Oracle <david.blasing...@oracle.com> wrote:
> Have you tried setting zfs_recover and aok in /etc/system, or setting them
> with mdb?
>
> How to set via /etc/system:
> http://opensolaris.org/jive/thread.jspa?threadID=114906
>
> mdb debugger:
> http://www.listware.net/201009/opensolaris-zfs/46706-re-zfs-discuss-how-to-set-zfszfsrecover1-and-aok1-in-grub-at-startup.html
>
> After you get the variables set and the system booted, try importing, then
> run a scrub.
>
> Dave
>
> On 09/23/10 19:48, Scott Meilicke wrote:
>> I posted this on the www.nexentastor.org forums, but no answer so far, so
>> I apologize if you are seeing this twice. I am also engaged with Nexenta
>> support, but was hoping to get some additional insights here.
>>
>> I am running NexentaStor 3.0.3 community edition, based on build 134. The
>> box crashed yesterday, and it goes into a reboot loop (kernel panic) when
>> trying to import my data pool (screenshot attached). What I have tried
>> thus far:
>>
>> * Booted off of DVD, both 3.0.3 and 3.0.4 beta 8. 'zpool import -f data01'
>>   causes the panic in both cases.
>> * Booted off of 3.0.4 beta 8 and ran 'zpool import -fF data01'. That gives
>>   me a message like "Pool data01 returned to its state as of ...", and
>>   then panics. The 'import -fF' does seem to import the pool, but then the
>>   system immediately panics.
>> * After booting off of DVD, I can boot from my hard disks, but the system
>>   will not import the pool because it was last imported from another
>>   system. I have moved /etc/zfs/zpool.cache out of the way, but no luck
>>   after a reboot and import.
>>
>> 'zpool import' shows all of my disks as OK, and the pool itself as online.
>> Is it time to start working with zdb? Any suggestions?
>> This box is hosting development VMs, so I have some people twiddling their
>> thumbs at the moment.
>>
>> Thanks everyone,
>> -Scott

We value your opinion! How may we serve you better? Please click the survey
link to tell us how we are doing:
http://www.craneae.com/ContactUs/VoiceofCustomer.aspx
Your feedback is of the utmost importance to us. Thank you for your time.

Crane Aerospace Electronics Confidentiality Statement: The information
contained in this email message may be privileged and is confidential
information intended only for the use of the recipient, or any employee or
agent responsible to deliver it to the intended recipient. Any unauthorized
use, distribution or copying of this information is strictly prohibited and
may be unlawful. If you have received this communication in error, please
notify the sender immediately and destroy the original message and all
attachments from your electronic files.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
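[Editor's note: the recovery sequence David describes, spelled out for anyone
who lands on this thread with the same panic-on-import loop. This is a sketch
based only on the thread: the pool name data01 is Scott's, and aok=1 /
zfs:zfs_recover=1 are emergency settings that should be removed from
/etc/system once the pool is healthy again.]

```shell
# Lines to add to /etc/system, then reboot:
#
#   set aok=1               (failed kernel assertions warn instead of panicking)
#   set zfs:zfs_recover=1   (relax some ZFS consistency checks during import)
#
# After the reboot, force the import past the "last imported on another
# system" check, then scrub so every block's checksum gets verified:
zpool import -f data01
zpool scrub data01
zpool status -v data01    # watch scrub progress and any errors it finds
```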
Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check
Interesting. We must have different setups with our PERCs. Mine have always
auto-rebuilt.

-- Scott Meilicke

On Oct 22, 2009, at 6:14 AM, Edward Ned Harvey <sola...@nedharvey.com> wrote:
>> Replacing failed disks is easy when PERC is doing the RAID. Just remove
>> the failed drive and replace with a good one, and the PERC will rebuild
>> automatically.
>
> Sorry, not correct. When you replace a failed drive, the PERC card doesn't
> know for certain that the new drive you're adding is meant to be a
> replacement. For all it knows, you could coincidentally be adding new disks
> for a new VirtualDevice which already contains data, during the failure
> state of some other device. So it will not automatically resilver (which
> would be a permanently destructive process, applied to a disk which is not
> *certainly* meant for destruction). You have to open the PERC config
> interface and tell it this disk is a replacement for the old disk (probably
> you're just saying "this disk is the new global hotspare"), or else the new
> disk will sit there like a bump on a log, doing nothing.
Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check
Thank you Bob and Richard. I will go with A, as it also keeps things simple:
one physical device per pool.

-Scott

On 10/20/09 6:46 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Tue, 20 Oct 2009, Richard Elling wrote:
>> The ZIL device will never require more space than RAM. In other words, if
>> you only have 16 GB of RAM, you won't need more than that for the
>> separate log. Does the wasted storage space annoy you? :-)
>
> What happens if the machine is upgraded to 32 GB of RAM later?
>
> The write performance of the X25-E is likely to be the bottleneck for a
> write-mostly storage server if the storage server has excellent network
> connectivity.
>
> Bob
>
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
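[Editor's note: "option A" here, dedicating one whole physical SSD to the pool
as a separate log device, comes down to a single command. A sketch; the device
name c2t1d0 is hypothetical and would be whatever the single-disk volume on
the controller appears as under Solaris.]

```shell
# Attach the whole SSD to the pool as a dedicated ZIL (separate log) device:
zpool add data01 log c2t1d0

# Verify: the SSD now appears under its own "logs" section in the pool layout.
zpool status data01
```

Per Richard's sizing rule above, the log will never need more space than
system RAM, so with 16 GB of RAM most of a 32 GB X25-E sits idle; that unused
capacity is the "wasted storage space" Bob is teasing about.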
Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check
Thanks Ed. It sounds like you have run in this mode? No issues with the PERC?

-- Scott Meilicke

On Oct 20, 2009, at 9:59 PM, Edward Ned Harvey <sola...@nedharvey.com> wrote:
>> System: Dell 2950, 16 GB RAM, 16 1.5 TB SATA disks in a SAS chassis
>> hanging off of an LSI 3801e, no extra drive slots, a single zpool.
>> snv_124, but with my zpool still running at the 2009.06 version (14). My
>> plan is to put the SSD into an open disk slot on the 2950, but I will
>> have to configure it as a RAID 0, since the onboard PERC 5 controller
>> does not have a JBOD mode.
>
> You can JBOD with the PERC. It might technically be a raid0 or raid1 with
> a single disk in it, but that would be functionally equivalent to JBOD.