Re: [zfs-discuss] importing pool with missing/failed log device

2009-10-22 Thread Victor Latushkin
On 21.10.09 23:23, Paul B. Henson wrote: I've had a case open for a while (SR #66210171) regarding the inability to import a pool whose log device failed while the pool was offline. I was told this was CR #6343667. CR 6343667 synopsis is scrub/resilver has to start over when a snapshot is

Re: [zfs-discuss] Disk locating in OpenSolaris/Solaris 10

2009-10-22 Thread Bruno Sousa
If you use an LSI controller, you could install the LSI Logic MPT Configuration Utility. Example of its usage: lsiutil LSI Logic MPT Configuration Utility, Version 1.61, September 18, 2008 1 MPT Port found Port Name Chip Vendor/Type/Rev MPT Rev Firmware Rev IOC 1. mpt0
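A minimal sketch of the kind of lookup being described, assuming lsiutil is in the PATH and that the controller of interest is MPT port 1 (the port number is a placeholder, and menu entries vary by lsiutil version):

    # open the interactive menu for the first MPT port
    lsiutil -p 1
    # from the menu, pick the entry that displays attached devices to map
    # controller PHY/slot numbers to SAS addresses, then correlate those
    # with the OS device names listed by:
    format < /dev/null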

[zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Bruno Sousa
Hi all, Recently I upgraded from snv_118 to snv_125, and suddenly I started to see these messages in /var/adm/messages : Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING: /p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0): Oct 22 12:54:37 SAN02 mpt_handle_event: IOCStatus=0x8000,

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Edward Ned Harvey
Replacing failed disks is easy when PERC is doing the RAID. Just remove the failed drive and replace with a good one, and the PERC will rebuild automatically. Sorry, not correct. When you replace a failed drive, the perc card doesn't know for certain that the new drive you're adding is meant

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Edward Ned Harvey
The Intel specified random write IOPS are with the cache enabled and without cache flushing. They also carefully only use a limited span of the device, which fits most perfectly with how the device is built. How do you know this? This sounds much more detailed than any average person could

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Ross
Actually, I think this is a case of crossed wires. This issue was reported a while back on a news site for the X25-M G2. Somebody pointed out that these devices have 8GB of cache, which is exactly the dataset size they use for the iops figures. The X25-E datasheet however states that while

[zfs-discuss] raidz ZFS Best Practices wiki inconsistency

2009-10-22 Thread Frank Cusack
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations says that the number of disks in a RAIDZ should be (N+P) with N = {2,4,8} and P = {1,2}. But if you go down the page just a little further to the thumper configuration
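For reference, a sketch of pool layouts that follow the (N+P) recommendation quoted above, with hypothetical disk names:

    # 4+1 raidz1 (N=4, P=1)
    zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # 8+2 raidz2 (N=8, P=2)
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0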

Re: [zfs-discuss] raidz ZFS Best Practices wiki inconsistency

2009-10-22 Thread Cindy Swearingen
Thanks for your comments, Frank. I will take a look at the inconsistencies. Cindy On 10/22/09 08:29, Frank Cusack wrote: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations says that the number of disks in a RAIDZ

Re: [zfs-discuss] importing pool with missing/failed log device

2009-10-22 Thread Paul B. Henson
On Thu, 22 Oct 2009, Victor Latushkin wrote: CR 6343667 synopsis is scrub/resilver has to start over when a snapshot is taken: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667 so I do not see how it can be related to log removal. Could you please check bug number in

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Bob Friesenhahn
On Thu, 22 Oct 2009, Marc Bevand wrote: Bob Friesenhahn bfriesen at simple.dallas.tx.us writes: For random write I/O, caching improves I/O latency, not sustained I/O throughput (which is what random write IOPS usually refer to). So Intel can't cheat with caching. However, they can cheat by

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Cindy Swearingen
Hi Bruno, I see some bugs associated with these messages (6694909) that point to an LSI firmware upgrade that causes these harmless errors to be displayed. According to the 6694909 comments, this issue is documented in the release notes. As they are harmless, I wouldn't worry about them. Maybe

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Meilicke, Scott
Interesting. We must have different setups with our PERCs. Mine have always auto rebuilt. -- Scott Meilicke On Oct 22, 2009, at 6:14 AM, Edward Ned Harvey sola...@nedharvey.com wrote: Replacing failed disks is easy when PERC is doing the RAID. Just remove the failed drive and replace

Re: [zfs-discuss] strange results ...

2009-10-22 Thread Marion Hakanson
jel+...@cs.uni-magdeburg.de said: 2nd) Never had a Sun STK RAID INT before. Actually my intention was to create a zpool mirror of sd0 and sd1 for boot and logs, and a 2x2-way zpool mirror with the 4 remaining disks. However, the controller seems not to support JBODs :( - which is also bad,
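The intended layout could be expressed roughly as below, with hypothetical device names; on a controller without JBOD support each disk would first have to be exported as a single-drive volume:

    # rpool mirror on the first two disks is set up at install time;
    # the remaining four disks form a 2x2-way mirrored data pool
    zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0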

[zfs-discuss] [Fwd: snv_123: kernel memory leak?]

2009-10-22 Thread Robert Milkowski
anyone? ---BeginMessage--- Hi, ::status: debugging live kernel (64-bit) on mk-archive-1, operating system: 5.11 snv_123 (i86pc); ::system: set noexec_user_stack_log=0x1 [0t1] set noexec_user_stack=0x1 [0t1] set snooping=0x1 [0t1] set zfs:zfs_arc_max=0x28000 [0t10737418240]; ::memstat
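For anyone wanting to gather the same data from a live system, a minimal sketch of running those dcmds non-interactively against the running kernel:

    # kernel status, /etc/system tunables, and memory usage summary
    echo "::status" | mdb -k
    echo "::system" | mdb -k
    echo "::memstat" | mdb -k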

Re: [zfs-discuss] ZFS disk failure question

2009-10-22 Thread Cindy Swearingen
Hi Jason, Since spare replacement is an important process, I've rewritten this section to provide 3 main examples, here: http://docs.sun.com/app/docs/doc/817-2271/gcvcw?a=view Scroll down to the section: Activating and Deactivating Hot Spares in Your Storage Pool Example 4–7 Manually Replacing
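In the spirit of that section, a hedged sketch of manually replacing a failed disk with a configured hot spare and keeping the spare in place; pool and device names are placeholders:

    zpool replace tank c0t1d0 c0t3d0   # c0t3d0 is the configured spare
    zpool detach tank c0t1d0           # make the spare replacement permanent
    zpool status tank                  # verify the pool is healthy again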

Re: [zfs-discuss] strange results ...

2009-10-22 Thread Robert Milkowski
Jens Elkner wrote: Hmmm, wondering about IMHO strange ZFS results ... X4440: 4x6 2.8GHz cores (Opteron 8439 SE), 64 GB RAM, 6x Sun STK RAID INT V1.0 (Hitachi H103012SCSUN146G SAS), Nevada b124. Started with a simple test using zfs on c1t0d0s0: cd /var/tmp (1) time sh -c 'mkfile
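A sketch of the kind of simple streaming-write test being referred to; the file size and path are placeholders, not the poster's exact command:

    cd /var/tmp
    time sh -c 'mkfile 8g testfile; sync'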

[zfs-discuss] Only a few days left for Online Registration: Solaris Security Summit Nov 3rd

2009-10-22 Thread Jennifer Bauer Scarpino
Hello All, There is still time to register online. You will also be able to register on-site. Just to give you an idea of the presentations that will be given: * Presentation: Kerberos Authentication for Web Security * Presentation: Protecting Oracle Applications with Built-In

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
Cindy: How can I view the bug report you referenced? Standard methods show me the bug number is valid (6694909) but no content or notes. We are having similar messages appear with snv_118 with a busy LSI controller, especially during scrubbing, and I'd be interested to see what they mentioned

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread James C. McPherson
Adam Cheal wrote: Cindy: How can I view the bug report you referenced? Standard methods show me the bug number is valid (6694909) but no content or notes. We are having similar messages appear with snv_118 with a busy LSI controller, especially during scrubbing, and I'd be interested to see what

[zfs-discuss] zpool with very different sized vdevs?

2009-10-22 Thread Travis Tabbal
I have a new array of 4x1.5TB drives running fine. I also have the old array of 4x400GB drives in the box on a separate pool for testing. I was planning to have the old drives just be a backup file store, so I could keep snapshots and such over there for important files. I was wondering if it
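A sketch of the two layouts being weighed, with hypothetical device names: keep the old drives as their own pool, or add them to the existing pool as a second, smaller vdev:

    # option 1: separate backup pool
    zpool create backup raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
    # option 2: additional vdev in the existing pool (capacities may differ)
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0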

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the recently released Phase 17 but it didn't help. All firmware NVRAM settings are default. Basically, when we put the disks behind this controller under load (e.g. scrubbing, recursive ls on large ZFS filesystem) we get

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread James C. McPherson
Adam Cheal wrote: James: We are running Phase 16 on our LSISAS3801E's, and have also tried the recently released Phase 17 but it didn't help. All firmware NVRAM settings are default. Basically, when we put the disks behind this controller under load (e.g. scrubbing, recursive ls on large ZFS

[zfs-discuss] zpool getting in a stuck state?

2009-10-22 Thread Jeremy Kitchen
Hey folks! We're using zfs-based file servers for our backups and we've been having some issues as of late with certain situations causing zfs/zpool commands to hang. Currently, it appears that raid3155 is in this broken state: r...@homiebackup10:~# ps auxwww | grep zfs root 15873
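When zfs/zpool commands hang like this, one common diagnostic is to pull the kernel stacks of the stuck processes; a sketch, assuming mdb is available on the box:

    # kernel stack of every thread in the hung zpool/zfs processes
    echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
    echo "::pgrep zfs | ::walk thread | ::findstack -v" | mdb -k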

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
I've filed the bug, but was unable to include the prtconf -v output as the comments field only accepted 15000 chars total. Let me know if there is anything else I can provide/do to help figure this problem out as it is essentially preventing us from doing any kind of heavy IO to these pools,

Re: [zfs-discuss] ZFS disk failure question

2009-10-22 Thread Jason Frank
Thank you for your follow-up. The doc looks great. Having good examples goes a long way to helping others that have my problem. Ideally, the replacement would all happen magically, and I would have had everything marked as good, with one failed disk (like a certain other storage vendor that has

[zfs-discuss] cannot import 'rpool': one or more devices is currently unavailable

2009-10-22 Thread Tommy McNeely
I have a system whose rpool has gone defunct. The rpool is made of a single disk, which is a RAID 5EE volume made of all 8 146G disks on the box. The RAID card is an Adaptec-brand card. It was running nv_107, but it's currently net booted to nv_121. I have already checked in the RAID card BIOS, and it
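From the net-booted environment, the usual first checks are whether the RAID volume is visible as a disk at all and what state the pool reports; a sketch:

    format < /dev/null            # is the Adaptec volume visible as a disk?
    zpool import                  # does rpool show up, and in what state?
    zpool import -f -R /a rpool   # attempt the import under an alternate root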

Re: [zfs-discuss] ZFS disk failure question

2009-10-22 Thread Richard Elling
On Oct 22, 2009, at 12:29 PM, Jason Frank wrote: Thank you for your follow-up. The doc looks great. Having good examples goes a long way to helping others that have my problem. Ideally, the replacement would all happen magically, and I would have had everything marked as good, with one

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Carson Gaspar
On 10/22/09 4:07 PM, James C. McPherson wrote: Adam Cheal wrote: It seems to be timing out accessing a disk, retrying, giving up and then doing a bus reset? ... ugh. New bug time - bugs.opensolaris.org, please select Solaris / kernel / driver-mpt. In addition to the error messages and

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-10-22 Thread David Turnbull
On 21/10/2009, at 7:39 AM, Mike Bo wrote: Once data resides within a pool, there should be an efficient method of moving it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove. I agree with this sentiment; it's certainly a surprise when you first notice. Here's my
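Until an in-pool move exists, the usual workaround for relocating a whole tree is send/receive within the pool; note that this still copies the blocks, so it is not the efficient link/unlink-style move being asked for. Dataset names are hypothetical:

    zfs snapshot tank/src@move
    zfs send tank/src@move | zfs receive tank/dst
    zfs destroy -r tank/src        # only after verifying tank/dst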