Re: [zfs-discuss] ZFS write / read speed and traps for beginners
Further followup to this thread...

After being beaten sufficiently with a clue-bat, it was determined that the nforce 750a could do AHCI mode for its SATA stuff. I set it to AHCI, redid the devlinks etc. and cranked it up as AHCI. I'm now regularly peaking at 100MB/s, though spending most of the time around 70MB/s. *much better*

The lesson here is: when in AHCI mode in the BIOS, *don't* match that PCI ID with the nv_sata driver. It's not what you want. heh. *blush*. Once I removed the extra nv_sata entries I had added to the driver_aliases in my miniroot, all was good.

On the NGE front, it turns out that Solaris does not seem to like the ethernet address of the card. Trying to set its OWN ethernet address using ifconfig yielded this:

  # ifconfig nge0 ether 63:d0:b:7d:1d:0
  ifconfig: dlpi_set_physaddr failed nge0: DLSAP address in improper format or invalid
  ifconfig: failed setting mac address on nge0

Using

  ifconfig nge0 ether 0:e:c:5b:54:45

worked just fine, and the interface now passes traffic and sees responses just fine. So, the workaround here is adding a working ether address in hostname.nge0. I guess I'll log a bug on that on Monday...

Awesome. Now to work on audio... heh.

Nathan.

Nathan Kroenert wrote:
> Hey all -
>
> Just spent quite some time trying to work out why my 2-disk mirrored
> ZFS pool was running so slow, and found an interesting answer...
>
> System: new Gigabyte M750sli-DS4, AMD 9550, 4GB memory and 2 x Seagate
> 500GB SATA-II 32MB cache disks.
>
> The SATA ports on the nforce 750a SLI chipset don't yet seem to be
> supported by the nv_sata driver (I'm only running nv_89 at the mo,
> though I'm not aware of new support going in just yet). I *can* get the
> driver to attach, but not to see any disks. Interesting, but I
> digress... Anyhoo - I'm stuck in IDE compatibility mode for the moment.
>
> So - using plain dd to the zfs filesystem on said disk
>
>   dd if=/dev/zero of=delete.me bs=65536
>
> I could achieve only about 35-40MB/s write speed, whereas if I dd to
> the slice directly, I can get around 90-95MB/s. I tried using whole
> disks versus a slice and it made no appreciable difference.
>
> It turns out that when you are in IDE compatibility mode, having two
> disks on the same 'controller' (c# in solaris) behaves just like real
> IDE... Crap! Moving the second disk from c1 to c2 got me back to at
> least 50MB/s with higher peaks, up to 60/70MB/s.
>
> Also of note: on the Gigabyte board (and I guess other nforce 750a SLI
> based chipsets) only 4 of the 6 SATA ports work when in IDE mode.
>
> Other thoughts on the Nforce 750a:
>
> - nge plumbs up OK and can send and 'see' packets, but does not seem to
>   know itself... In promiscuous mode, you can see returning icmp echo
>   requests, but they don't make it to the top of the stack. I had to
>   use an e1000g in a PCI slot to get my networking working properly...
>
> - Onboard video works, including compiz, but you need to create an
>   xorg.conf and update the nvidia driver with the latest from the
>   nvidia website. Seems snappy enough.
>
> With 4 cores @ 2.2GHz (Phenom 9550) it's looking like it'll do what I
> wanted quite nicely.
>
> Later...
>
> Nathan.

--
//
// Nathan Kroenert              [EMAIL PROTECTED]
//
// Systems Engineer             Phone:  +61 3 9869-6255
//
// Sun Microsystems             Fax:    +61 3 9869-6288
//
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456
//
// Melbourne 3004               Victoria Australia
//
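For anyone else chasing the same two problems, here's a minimal sketch of both fixes. The PCI ID below is a placeholder, not the real nForce 750a ID (check prtconf -pv for yours), and update_drv is the supported route on an installed system; in a miniroot you'd edit /etc/driver_aliases by hand, as above.

  # Remove the stray nv_sata alias so the AHCI-mode controller can bind
  # to the ahci driver instead (pci10de,ad4 is a placeholder ID):
  update_drv -d -i '"pci10de,ad4"' nv_sata

  # Make the working MAC stick across reboots: lines in /etc/hostname.nge0
  # are handed to ifconfig when the interface is plumbed at boot.
  echo 'ether 0:e:c:5b:54:45' >> /etc/hostname.nge0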
Re: [zfs-discuss] ZFS write / read speed and traps for beginners
Nathan Kroenert wrote:
> On the NGE front, it turns out that Solaris does not seem to like the
> ethernet address of the card. Trying to set its OWN ethernet address
> using ifconfig yielded this:
>
>   # ifconfig nge0 ether 63:d0:b:7d:1d:0
>   ifconfig: dlpi_set_physaddr failed nge0: DLSAP address in improper format or invalid
>   ifconfig: failed setting mac address on nge0
>
> Using ifconfig nge0 ether 0:e:c:5b:54:45 worked just fine, and the
> interface now passes traffic and sees responses just fine. So, the
> workaround here is adding a working ether address in hostname.nge0.
> I guess I'll log a bug on that on Monday...

Nathan, I'd bet you're being bitten by:

  6658667 nge - ethernet address reversed on nForce 430 chipset on ASUS M2N motherboard

Menno
--
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno
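If that's the bug, the bogus address should be some deterministic scrambling of the real one. Purely as an illustration of what 'reversed' could mean (the two addresses above don't look like a simple byte swap, so treat this as a sketch, not a diagnosis), here's a one-liner that prints a MAC with its byte order flipped, for comparing against what the chip reports:

  # Print a MAC address with the byte order reversed:
  echo '0:e:c:5b:54:45' | \
      awk -F: '{ for (i = NF; i >= 1; i--) printf("%s%s", $i, (i > 1 ? ":" : "\n")) }'
  # -> 45:54:5b:c:e:0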
[zfs-discuss] uncorrectable I/O error ... how to address?
Hi all,

I have a situation I don't know how to get out of: I'm trying to 'zfs send' an FS off of my laptop, but in the middle of the send process it hangs, and I see a message:

  WARNING: Pool 'p' has encountered an uncorrectable I/O error.
  Manual intervention is required.

That's all very nice, apart from the fact that I don't see any indication of what the manual intervention is supposed to be... and worse, when I try to find out more using zpool status -v, it hangs (or appears to) after:

  # zpool status -v
    pool: p
   state: ONLINE
  status: One or more devices has experienced an error resulting in data
          corruption. Applications may be affected.
  action: Restore the file in question if possible. Otherwise restore the
          entire pool from backup.
     see: http://www.sun.com/msg/ZFS-8000-8A
   scrub: none requested
  config:

          NAME        STATE     READ WRITE CKSUM
          p           ONLINE       0     0     2
            c1t0d0s7  ONLINE       0     0     2

  errors: Permanent errors have been detected in the following files:
  [ hangs ]

Both the zfs send and zpool status seem uninterruptible. I saw this once before and rebooted; thereafter zpool status showed nothing.

So: how do I find out more about what's going on and what's broken, and how do I fix it without just deleting the FS?

thx
Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
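One place worth looking while zpool status is wedged is the FMA error log, which records device-level ereports independently of the zpool command and usually names the device and block that actually faulted. A hedged sketch using standard Solaris tooling (the pool name 'p' is from above):

  # Dump the FMA error telemetry (device-level ereports):
  fmdump -eV | less

  # Show any faults FMA has already diagnosed:
  fmadm faulty

  # Once the pool responds again, a scrub will enumerate the damage:
  zpool scrub p
  zpool status -v p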
Re: [zfs-discuss] uncorrectable I/O error ... how to address?
Michael Schuster wrote:
> (btw: is the current version not printed on purpose, or is it
> understood that zfs is always at the latest possible version?)

ah ... I just found the answer to that myself:

  # zpool upgrade
  This system is currently running ZFS pool version 10.

  The following pools are out of date, and can be upgraded.  After being
  upgraded, these pools will no longer be accessible by older software versions.

  VER  POOL
  ---  ------------
    8  p

  # zfs upgrade
  This system is currently running ZFS filesystem version 3.

  internal error: unable to get version property
  The following filesystems are out of date, and can be upgraded.  After being
  upgraded, these filesystems (and any 'zfs send' streams generated from
  subsequent snapshots) will no longer be accessible by older software versions.

  VER  FILESYSTEM
  ---  -----------
    2  p/csw
    2  p/export
    2  p/home
    2  p/store

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
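For completeness, the upgrades themselves are one command per layer; the caveat in the output above applies, i.e. older software won't read the pool or its send streams afterwards. A minimal sketch for the pool shown:

  # Take the pool to the version the running software supports:
  zpool upgrade p

  # Upgrade the filesystems; -r covers p and everything under it:
  zfs upgrade -r p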
[zfs-discuss] Insufficient replicas
Hello guys,

I made a zpool on a Leopard computer using the binaries found on macosforge. The zpool worked really well! However, now I want to use it in an OpenSolaris fileserver, so I exported the pool. Now when I try to import, I get the following error:

  [EMAIL PROTECTED]:/# zpool import
    pool: bigstore
      id: 15885576894043075453
   state: UNAVAIL
  status: The pool is formatted using an older on-disk version.
  action: The pool cannot be imported due to damaged devices or data.
  config:

          bigstore    UNAVAIL  insufficient replicas
            raidz1    UNAVAIL  corrupted data
              c3d0s2  ONLINE
              c3d1s2  ONLINE
              c2d1p0  ONLINE

This is really scary! I don't have any backups! I'm afraid to put it back into my Macintosh too. Is there any way for me to recover my data?

The Macintosh is running ZFS version 8, OpenSolaris is on 10, so it shouldn't be incompatible. I only need to do a one-way import, so it's no big deal if I have to upgrade the zpool to v10.

Any help is appreciated!!

Timothy.
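One low-risk thing worth trying first, sketched below: point the import scan at the device directory explicitly and see whether the raidz members show up under consistent names. Two of the members appear as slices (s2) and one as a partition (p0), and a labelling mismatch between the Mac and Solaris views of the disks could plausibly make an intact raidz report corrupted data. No guarantees this helps; the scan itself is read-only, but the import proper will write to the labels once it succeeds.

  # Re-scan for importable pools using an explicit device directory:
  zpool import -d /dev/dsk

  # If bigstore then looks importable, bring it in by name or by the
  # numeric id printed above; -f overrides the 'pool was last in use
  # on another system' safety check left over from the Mac:
  zpool import -f bigstore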