Jean-Paul,

Our goofy disk formatting is tripping you up...
Put the disk space of c8t0d0 in c8t0d0s0 and try the zpool add syntax
again. If you need help with the format syntax, let me know.

This command syntax should have complained:

pfexec zpool add rpool cache /dev/rdsk/c8t0d0

See the zpool syntax below for pointers.

Cindy

# zpool add rpool cache c0t1d0s0
# zpool iostat -v rpool
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       11.4G  22.4G      0      0  5.86K     73
  c0t0d0s0  11.4G  22.4G      0      0  5.86K     73
cache           -      -      -      -      -      -
  c0t1d0s0  47.8M  33.9G      0     53  18.1K  6.38M
----------  -----  -----  -----  -----  -----  -----

Jean-Paul Rivet wrote:
>> Anyway, you could try simply creating standard FDISK/Solaris/vtoc
>> partitioning on the SD card, with all the free space contained in one
>> slice, and give that slice to ZFS.
>
> This is what I've done so far.
>
> fdisk -
>
>              Total disk size is 1943 cylinders
>              Cylinder size is 4096 (512 byte) blocks
>
>                                                Cylinders
>       Partition   Status    Type          Start   End   Length    %
>       =========   ======    ============  =====   ===   ======   ===
>           1                 Solaris2          1  1942    1942    100
>
> partition> p
> Volume:  cache
> Current partition table (original):
> Total disk cylinders available: 1940 + 2 (reserved cylinders)
>
> Part      Tag    Flag     Cylinders        Size            Blocks
>   0 unassigned    wm       0               0         (0/0/0)          0
>   1 unassigned    wm       0               0         (0/0/0)          0
>   2 unassigned    wu       1 - 1939        3.79GB    (1939/0/0) 7942144
>   3 unassigned    wm       0               0         (0/0/0)          0
>   4 unassigned    wm       0               0         (0/0/0)          0
>   5 unassigned    wm       0               0         (0/0/0)          0
>   6 unassigned    wm       0               0         (0/0/0)          0
>   7 unassigned    wm       0               0         (0/0/0)          0
>   8       boot    wu       0 -    0        2.00MB    (1/0/0)       4096
>   9 unassigned    wm       0               0         (0/0/0)          0
>
> partition>
>
> I then tried numerous variations of the command and kept getting errors
> like this one:
>
> $ pfexec zpool add rpool cache /dev/dsk/c8t0d0s2
> cannot add to 'rpool': invalid argument for this pool operation
> $
>
> Eventually I tried this one:
>
> $ pfexec zpool add rpool cache /dev/rdsk/c8t0d0
>
> I think it's working, but I'm not sure because it still hasn't finished
> after 1hr15. The HD seems to be getting hit fairly hard, and iostat shows:
>
> zpool iostat -v rpool
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> rpool       49.1G  14.4G     11     56   747K  3.88M
>   c4t0d0s0  49.1G  14.4G     11     56   747K  3.88M
> ----------  -----  -----  -----  -----  -----  -----
>
> I'll let it run through the night and see what happens.
>
> Cheers, JP

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
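For reference, a rough sketch of the sequence Cindy describes, assuming the SD
card is c8t0d0, that all of its space ends up in slice 0, and that the usual
Solaris format(1M) menus apply (exact prompts may differ on your build; the
"# ..." text is annotation, not literal input):

    $ pfexec format                  # select c8t0d0 from the disk list
    format> partition
    partition> modify                # pick the "All Free Hog" base and give
                                     # all free space to slice 0
    partition> label
    partition> quit
    format> quit

    $ pfexec zpool add rpool cache c8t0d0s0
    $ zpool iostat -v rpool          # the SD card should now show up under "cache"

If the cache device turns out not to help, it can be removed again with
"pfexec zpool remove rpool c8t0d0s0".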