Thanks. I just got the ZFS stuff working. I was mainly confused by the device notation and by the need to run "/usr/sbin/disks" to get the device nodes created after I added a new drive.
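For anyone else who hits this, the device-node step was roughly the following (on more recent builds, devfsadm(1M) is the newer equivalent of the old disks command, as I understand it):

    # create the /dev/dsk entries for a newly attached drive
    /usr/sbin/disks          # or, on newer builds: /usr/sbin/devfsadm
    # 'format' then lists each disk under its cXtYdZ name
    format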
In my case, c0t0d0, c0t2d0, and c0t3d0 were the SATA devices I needed to pass to "zpool create ...", but I never saw those exact files under /dev/dsk, only the variants with the "p0/p1/p2/p3" or "s0/s1/s2/..." suffixes, so I didn't trust that it would work. The UFS EIDE boot disk did have a c1d1s0 entry under /dev/dsk, which further confused me about why the SATA disks weren't showing up like that. BTW, after running zpool, I do see c0t0d0, etc. So that problem was mostly a case of my own Solaris ignorance.

As far as the e1000g goes, in my case the ethernet chip is embedded on the motherboard and uses Intel's CSA port on the 865G northbridge to avoid congesting the PCI bus / ICH5 southbridge with network traffic. I would expect that variation to be fairly common (i.e. well tested) on higher-end motherboards, since CSA was supposed to be a real win for gigabit ethernet, but maybe I'm wrong. Perhaps I should try moving the SATA controller to a different PCI slot, just in case there is an IRQ issue of some sort (gawd, sounds like the old ISA bus days).

And my hostname *is* screwed up (i.e. assigned as "unknown") due to an apparent expectation that the DHCP server would tell it what the hostname should be, rather than having the hostname hardcoded. So my /etc/hosts has this entry at the end:

    192.168.1.120   unknown    # Added by DHCP

and my /etc/hostname.e1000g0 is empty. The command-line installation process never prompted me for a hostname, so maybe I just need to tweak a file or two, but I haven't researched how DHCP is managed by Solaris yet. I should probably just disable DHCP anyway, since leaving it on probably isn't the best idea for a NAS; a sketch of the plan is in the P.S. below.

Thanks again for your ideas, and any others that this might spur.

--Mark
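P.S. For the archives, the zpool command I ended up with was along these lines (the pool name "tank" and the raidz layout are just my choices, not requirements):

    zpool create tank raidz c0t0d0 c0t2d0 c0t3d0

And here is my rough, as-yet-untested plan for switching e1000g0 from DHCP to a static address, pieced together from the Solaris admin docs ("mynas" and the 192.168.1.1 gateway are made-up placeholders):

    # stop dhcpagent from configuring the interface at boot
    rm /etc/dhcp.e1000g0
    # set the system hostname and tie it to the interface
    echo mynas > /etc/nodename
    echo mynas > /etc/hostname.e1000g0
    # replace the "unknown" entry in /etc/hosts with:
    #   192.168.1.120   mynas
    # default route and netmask for the subnet
    echo 192.168.1.1 > /etc/defaultrouter
    echo "192.168.1.0 255.255.255.0" >> /etc/netmasks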
