[zfs-discuss] BugID 6961707
I'm stumbling over BugID 6961707 on build 134 (OpenSolaris Development snv_134 X86). Via the b134 Live CD, when I try "zpool import -f -F -n rpool" I get this helpful panic:

panic[cpu4]/thread=ff006cd06c60: zfs: allocating allocated segment(offset=95698377728 size=16384)

ff006cd06580 genunix:vcmn_err+2c ()
ff006cd06670 zfs:zfs_panic_recover+ae ()
ff006cd06710 zfs:space_map_add+d3 ()
ff006cd067c0 zfs:space_map_load+470 ()
ff006cd06820 zfs:metaslab_activate+95 ()
ff006cd068e0 zfs:metaslab_group_alloc+246 ()
ff006cd069a0 zfs:metaslab_alloc_dva+2aa ()
ff006cd06a40 zfs:metaslab_alloc+9c ()
ff006cd06a80 zfs:zio_dva_allocate+57 ()
ff006cd06ab0 zfs:zio_execute+8d ()
ff006cd06b50 genunix:taskq_thread+248 ()
ff006cd06b60 unix:thread_start+8 ()

While I wait for SunSolve to forward my service request to the right engineer, has anyone here hit this and gotten it resolved? Is the pool corrupted on disk?

John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
[zfs-discuss] weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is:

Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102

format(1M) shows it identifying itself as: Seagate-External-SG11-2.73TB

Under both Solaris 10 and Solaris 11x, I receive the evil message:

| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance is very low.

However, that's not my big issue, as I will use the zpool-12 hack. My big issue is that once I zpool(1M) export the pool from my W2100z running S10 or my Ultra 40 running S11x, I can't import it. I thought it was a weird USB connectivity issue, but I can run "format -> analyze -> read" merrily. Anyone seen this bug?

John
groenv...@acm.org
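The warning quoted above fires whenever an I/O request doesn't map cleanly onto the drive's 4096-byte physical sectors. A minimal sketch of that alignment test (a hypothetical helper, not the kernel's actual code): an I/O whose offset and length are both multiples of 4096 hits whole sectors; anything else forces the kernel into a read-modify-write cycle.

```shell
# Hypothetical sketch of the 4 KiB alignment check behind the
# "Read Modify Write" warning: offset and length must both be
# multiples of the 4096-byte physical sector size.
aligned_4k() {
    off=$1 len=$2
    if [ $(( off % 4096 )) -eq 0 ] && [ $(( len % 4096 )) -eq 0 ]; then
        echo aligned
    else
        echo rmw    # partial sector: kernel must read-modify-write
    fi
}

aligned_4k 8192 16384   # both multiples of 4096 -> aligned
aligned_4k 512 512      # 512-byte I/O on a 4 KiB-sector disk -> rmw
```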
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes:
>What is the error when you attempt to import this pool?

"cannot import 'foo': no such pool available"

John
groenv...@acm.org

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1t0d0 /pci@0,0/pci108e,6676@2,1/hub@7/storage@2/disk@0,0
1. c8t0d0 /pci@0,0/pci108e,6676@5/disk@0,0
2. c8t1d0 /pci@0,0/pci108e,6676@5/disk@1,0
Specify disk (enter its number): ^C
# zpool create foo c1t0d0
# zfs create foo/bar
# zfs list -r foo
NAME      USED  AVAIL  REFER  MOUNTPOINT
foo       126K  2.68T    32K  /foo
foo/bar    31K  2.68T    31K  /foo/bar
# zpool export foo
# zfs list -r foo
cannot open 'foo': dataset does not exist
# truss -t open zpool import foo
open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
open("/lib/libumem.so.1", O_RDONLY) = 3
open("/lib/libc.so.1", O_RDONLY) = 3
open("/lib/libzfs.so.1", O_RDONLY) = 3
open("/usr/lib/fm//libtopo.so", O_RDONLY) = 3
open("/lib/libxml2.so.2", O_RDONLY) = 3
open("/lib/libpthread.so.1", O_RDONLY) = 3
open("/lib/libz.so.1", O_RDONLY) = 3
open("/lib/libm.so.2", O_RDONLY) = 3
open("/lib/libsocket.so.1", O_RDONLY) = 3
open("/lib/libnsl.so.1", O_RDONLY) = 3
open("/usr/lib//libshare.so.1", O_RDONLY) = 3
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_SGS.mo", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3", O_RDONLY) = 3
open("/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3", O_RDONLY) = 3
open("/dev/zfs", O_RDWR) = 3
open("/etc/mnttab", O_RDONLY) = 4
open("/etc/dfs/sharetab", O_RDONLY) = 5
open("/lib/libavl.so.1", O_RDONLY) = 6
open("/lib/libnvpair.so.1", O_RDONLY) = 6
open("/lib/libuutil.so.1", O_RDONLY) = 6
open64("/dev/rdsk/", O_RDONLY) = 6
/3: openat64(6, "c8t0d0s0", O_RDONLY) = 9
/3: open("/lib/libadm.so.1", O_RDONLY) = 15
/9: openat64(6, "c8t0d0s2", O_RDONLY) = 13
/5: openat64(6, "c8t1d0s0", O_RDONLY) = 10
/7: openat64(6, "c8t1d0s2", O_RDONLY) = 14
/8: openat64(6, "c1t0d0s0", O_RDONLY) = 7
/4: openat64(6, "c1t0d0s2", O_RDONLY) Err#5 EIO
/8: open("/lib/libefi.so.1", O_RDONLY) = 15
/3: openat64(6, "c1t0d0", O_RDONLY) = 9
/5: openat64(6, "c1t0d0p0", O_RDONLY) = 10
/9: openat64(6, "c1t0d0p1", O_RDONLY) = 13
/7: openat64(6, "c1t0d0p2", O_RDONLY) Err#5 EIO
/4: openat64(6, "c1t0d0p3", O_RDONLY) Err#5 EIO
/7: openat64(6, "c1t0d0s8", O_RDONLY) = 14
/2: openat64(6, "c7t0d0s0", O_RDONLY) = 8
/6: openat64(6, "c7t0d0s2", O_RDONLY) = 12
/1: Received signal #20, SIGWINCH, in lwp_park() [default]
/3: openat64(6, "c7t0d0p0", O_RDONLY) = 9
/4: openat64(6, "c7t0d0p1", O_RDONLY) = 11
/5: openat64(6, "c7t0d0p2", O_RDONLY) = 10
/6: openat64(6, "c8t0d0p0", O_RDONLY) = 12
/6: openat64(6, "c8t0d0p1", O_RDONLY) = 12
/6: openat64(6, "c8t0d0p2", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t0d0p3", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t0d0p4", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p0", O_RDONLY) = 12
/8: openat64(6, "c7t0d0p3", O_RDONLY) = 7
/6: openat64(6, "c8t1d0p1", O_RDONLY) = 12
/6: openat64(6, "c8t1d0p2", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p3", O_RDONLY) Err#5 EIO
/6: openat64(6, "c8t1d0p4", O_RDONLY) Err#5 EIO
/9: openat64(6, "c7t0d0p4", O_RDONLY) = 13
/7: openat64(6, "c7t0d0s1", O_RDONLY) = 14
/1: open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.cat", O_RDONLY) Err#2 ENOENT
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.mo", O_RDONLY) Err#2 ENOENT
cannot import 'foo': no such pool available
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e95e10f.9070...@oracle.com>, Cindy Swearingen writes:
>In the steps below, you're missing a zpool import step.
>I would like to see the error message when the zpool import
>step fails.

"zpool import" returns nothing. The truss shows it poking around c1t0d0 fdisk partitions and Solaris slices, presumably hunting for pools.

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <201110131150.p9dbo8yk011...@acsinet22.oracle.com>, Casper.Dik@oracle.com writes:
>What is the partition table?

I thought about that, so I reproduced with the legacy SMI label and a Solaris fdisk partition with ZFS on slice 0. Same result as EFI; once I export the pool I cannot import it.

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e970387.3040...@oracle.com>, Cindy Swearingen writes:
>Any USB-related messages in /var/adm/messages for this device?

Negative. cfgadm(1M) shows the drive, and format -> fdisk -> analyze -> read runs merrily.

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
As a sanity check, I connected the drive to a Windows 7 installation. I was able to partition it, create an NTFS volume on it, eject it, and remount it.

I also tried creating the zpool on my Solaris 10 system, exporting it, and trying to import the pool on my Solaris 11X system; again, no love.

I'm baffled why zpool import is unable to find the pool on the drive, but the drive is definitely functional.

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <201110150202.p9f22w2n000...@elvis.arl.psu.edu>, John D Groenveld writes:
>I'm baffled why zpool import is unable to find the pool on the
>drive, but the drive is definitely functional.

Per Richard Elling, it looks like ZFS is unable to find the requisite labels for importing.

John
groenv...@acm.org

# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
*     4096 bytes/sector
*       63 sectors/track
*      255 tracks/cylinder
*    16065 sectors/cylinder
*    45599 cylinders
*    45597 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*           0      16065     16064
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector  Mount Directory
        0     2    00       16065  732483675  732499739
        2     5    01           0  732515805  732515804
        8     1    01           0      16065      16064
# zpool create -f foobar c1t0d0s0
# zpool status foobar
  pool: foobar
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foobar      ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
# zdb -l /dev/dsk/c1t0d0s0
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
LABEL 2
failed to unpack label 2
LABEL 3
failed to unpack label 3
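For context on why zdb -l probes four labels: ZFS keeps redundant copies of the vdev label, two at the front of the device and two at the tail, each 256 KiB. A sketch of where those labels live, assuming that standard on-disk layout; if none of the four can be unpacked, as in the zdb -l output above, the pool is undiscoverable at import time.

```shell
# Sketch of the standard ZFS vdev label layout: four 256 KiB labels,
# L0/L1 at the start of the device and L2/L3 at the end. zdb -l tries
# to unpack all four; a healthy vdev only needs one readable copy.
label_offsets() {
    devsize=$1                  # device size in bytes
    lbl=$(( 256 * 1024 ))       # each label is 256 KiB
    echo "L0=0 L1=$lbl L2=$(( devsize - 2 * lbl )) L3=$(( devsize - lbl ))"
}

label_offsets $(( 1024 * 1024 * 1024 ))   # offsets on a 1 GiB device
```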
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e9d98b1.8040...@oracle.com>, Cindy Swearingen writes:
>I'm going to file a CR to get this issue reviewed by the USB team
>first, but if you could humor me with another test:
>
>Can you run newfs to create a UFS file system on this device
>and mount it?

# uname -srvp
SunOS 5.11 151.0.1.12 i386
# zpool destroy foobar
# newfs /dev/rdsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
The device sector size 4096 is not supported by ufs!

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e9da8b1.7020...@oracle.com>, Cindy Swearingen writes:
>1. If you re-create the pool on the whole disk, like this:
>
># zpool create foo c1t0d0
>
>Then, resend the prtvtoc output for c1t0d0s0.

# zpool create snafu c1t0d0
# zpool status snafu
  pool: snafu
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        snafu     ONLINE       0     0     0
          c1t0d0  ONLINE       0     0     0

errors: No known data errors
# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
*      4096 bytes/sector
* 732566642 sectors
* 732566631 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*           6        250        255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector  Mount Directory
        0     4    00         256  732549997  732550252
        8    11    00   732550253      16384  732566636

>We should be able to tell if format is creating a dummy label,
>which means the ZFS data is never getting written to this disk.
>This would be a bug.

# zdb -l /dev/dsk/c1t0d0s0
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
LABEL 2
failed to unpack label 2
LABEL 3
failed to unpack label 3

>2. You are running this early S11 release:
>
>SunOS 5.11 151.0.1.12 i386
>
>You might retry this on more recent bits, like the EA release,
>which I think is b 171.

Doubtful I'll find time to install EA before S11 FCS's November launch.

>I'll still file the CR.

Thank you.

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
>This is CR 7102272.

Anyone out there have Western Digital's competing 3TB Passport drive handy to duplicate this bug?

John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
>This is CR 7102272.

What is the title of this BugID? I'm trying to attach my Oracle CSI to it, but Chuck Rozwat and company's support engineer can't seem to find it.

Once I get upgraded from S11x SRU12 to S11, I'll reproduce on a more recent kernel build.

Thanks,
John
groenv...@acm.org
Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive
In message <4ebbfb5...@oracle.com>, Cindy Swearingen writes:
>CR 7102272:
>
>ZFS storage pool created on a 3 TB USB 3.0 device has device label
>problems
>
>Let us know if this is still a problem in the OS11 FCS release.

I finally got upgraded from Solaris 11 Express SRU 12 to S11 FCS. Solaris 11 11/11 still spews the "I/O request is not aligned with 4096 disk sector size" warnings, but zpool(1M) create's label persists and I can export and import between systems.

John
groenv...@acm.org
Re: [zfs-discuss] Can I create a mirror for a root rpool?
In message , "Anonymous Remailer (austria)" writes:
>On Solaris 10 If I install using ZFS root on only one drive is there a way
>to add another drive as a mirror later? Sorry if this was discussed
>already. I searched the archives and couldn't find the answer. Thank you.

http://docs.oracle.com/cd/E23823_01/html/819-5461/ggset.html#gkdep
| How to Create a Mirrored ZFS Root Pool (Postinstallation)

John
groenv...@acm.org
Re: [zfs-discuss] 4k sector support in Solaris 11?
In message , Dave Pooser writes:
>If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11,

I'm using Seagate's 3TB external drive with S11:

Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102

>will zpool let me create a new pool with ashift=12 out of the box or will

Yes.

John
groenv...@acm.org
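For readers new to the term: ashift is simply the base-2 logarithm of the sector size ZFS adopts for a vdev, so 512-byte sectors give ashift=9 and 4096-byte sectors give ashift=12. A quick sketch of that relationship:

```shell
# ashift is log2 of the vdev's sector size:
# 512-byte sectors  -> ashift=9
# 4096-byte sectors -> ashift=12
ashift_for() {
    sz=$1 a=0
    while [ "$sz" -gt 1 ]; do
        sz=$(( sz / 2 ))
        a=$(( a + 1 ))
    done
    echo $a
}

ashift_for 512    # legacy 512n disks
ashift_for 4096   # 4 KiB "advanced format" disks
```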
Re: [zfs-discuss] 4k sector support in Solaris 11?
In message <4f435ca9.8010...@tuneunix.com>, nathank writes:
>Is there actually a fix to allow manual setting of ashift now that I

No.
http://docs.oracle.com/cd/E23824_01/html/821-1462/zpool-1m.html

John
groenv...@acm.org
Re: [zfs-discuss] Any recommendations on Perc H700 controller on Dell Rx10 ?
In message , Sriram Narayanan writes:
>At work, I have an R510, an R610 and an R710 - all with the H700 PERC
>controller.

BTW, with some effort, Dell's sales critter will sell you the H200 LSI SAS HBA as a replacement for the H700 LSI MegaRAID controller in those boxes.

>Based on experiments, it seems like there is no way to bypass the PERC
>controller - it seems like one can only access the individual disks if
>they are set up in RAID0 each.

Did you try deleting all of the RAID0 virtual disks and then enabling JBOD?

# MegaCli -AdpSetProp -EnableJBOD -1 -aALL
# MegaCli -PDMakeJBOD -PhysDrv[E0:S0,E1:S1,...] -aALL

John
groenv...@acm.org
Re: [zfs-discuss] Good tower server for around 1,250 USD?
In message , Bob Friesenhahn writes:
>Almost all of the systems listed on the HCL are defunct and no longer
>purchasable except for on the used market. Obtaining an "approved"
>system seems very difficult. In spite of this, Solaris runs very well
>on many non-approved modern systems.

http://www.oracle.com/webfolder/technetwork/hcl/data/s11ga/systems/views/nonoracle_systems_all_results.mfg.page1.html

>I don't know what that means as far as the ability to purchase Solaris
>"support".

I believe it must pass the HCTS before Oracle will support Solaris running on third-party hardware.
http://www.oracle.com/webfolder/technetwork/hcl/hcts/index.html

John
groenv...@acm.org
Re: [zfs-discuss] kernel panic during zfs import [ORACLE should notice this]
In message , Carsten John writes:
>I just spent about an hour (or two) trying to file a bug report regarding the
>issue without success.
>
>Seems to me, that I'm too stupid to use this "MyOracleSupport" portal.
>
>So, as I'm getting paid for keeping systems running and not clicking through
>flash overloaded support portals searching for CSIs, I'm giving the relevant
>information to the list now.

If the Flash interface is broken, try the non-Flash MOS site:
http://SupportHTML.Oracle.COM/

John
groenv...@acm.org
Re: [zfs-discuss] Good tower server for around 1,250 USD?
In message <4f7571de.7080...@netdemons.com>, Erik Trimble writes:
>Oracle (that is, if Oracle hasn't completely stopped selling support
>contracts for Solaris for non-Oracle hardware already).

Still available on the Oracle Store:
https://shop.oracle.com/pls/ostore/f?p=dstore:product:2091882785479247::NO:RP,6:P6_LPI:27242443094470222098916

John
groenv...@acm.org
Re: [zfs-discuss] kernel panic during zfs import [ORACLE should notice this]
In message <4f735451.2020...@oracle.com>, Deepak Honnalli writes:
>Thanks for your reply. I would love to take a look at the core
>file. If there is a way this can somehow be transferred to
>the internal cores server, I can work on the bug.
>
>I am not sure about the modalities of transferring the core
>file though. I will ask around and see if I can help you here.

How to Upload Data to Oracle Such as Explorer and Core Files [ID 1020199.1]

John
groenv...@acm.org
Re: [zfs-discuss] Spare drive inherited cksum errors?
In message <4fc509e8.8080...@jvm.de>, Stephan Budach writes:
>If now I'd only knew how to get the actual S11 release level of my box.
>Neither uname -a nor cat /etc/release does give me a clue, since they
>display all the same data when run on different hosts that are on
>different updates.

$ pkg info entire

John
groenv...@acm.org
Re: [zfs-discuss] Is there an actual newsgroup for zfs-discuss?
In message <008c01cd4812$7399c180$5acd4480$@net>, David Combs writes:
>Actual newsgroup for zfs-discuss?

Did you try Gmane's interface?
http://groups.google.com/groups?selm=jo43q0%24no50%241%40tr22n12.aset.psu.edu

John
groenv...@acm.org
Re: [zfs-discuss] (fwd) Re: ZFS NFS service hanging on Sunday morning
In message <201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk>, tpc...@mklab.ph.rhul.ac.uk writes:
>Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap

My WAG is that your "zpool history" is hanging due to lack of RAM.

John
groenv...@acm.org
[zfs-discuss] S11 zfs snapshot crash
I am toying with Phil Brown's zrep script. Does anyone have an Oracle BugID for this crashdump?

#!/bin/ksh
srcfs=rpool/testvol
destfs=rpool/destvol
snap="${srcfs}@zrep_00"
zfs destroy -r $srcfs
zfs destroy -r $destfs
zfs create -V 100M $srcfs
zfs set foo:bar=foobar $srcfs
zfs create -o readonly=on $destfs
zfs snapshot $snap
zfs send -p $snap | zfs recv -vuF $destfs

# mdb unix.1 vmcore.1
> $c
zap_leaf_lookup+0x4d(ff01dd4e1888, ff01e198d500, ff000858e130)
fzap_lookup+0x9a(ff01e198d500, 1, 4b, ff000858e430, 0, 0)
zap_lookup_norm+0x131(ff01dd48c9c0, 1, f7a99e30, 1, 4b, ff000858e430)
zap_lookup+0x2d(ff01dd48c9c0, 1, f7a99e30, 1, 4b, ff000858e430)
zfs_get_mlslabel+0x56(ff01dd48c9c0, ff000858e430, 4b)
zfs_mount_label_policy+0x62(ff01dd68cb60, ff01ceede200)
zfs_mount+0x499(ff01dd68cb60, ff01e191b500, ff000858ee20, ff01dbfb23b8)
fsop_mount+0x22(ff01dd68cb60, ff01e191b500, ff000858ee20, ff01dbfb23b8)
domount+0xd33(0, ff000858ee20, ff01e191b500, ff01dbfb23b8, ff000858ee18)
mount+0xc0(ff01ce5cf3b8, ff000858ee98)
syscall_ap+0x92()
_sys_sysenter_post_swapgs+0x149()

# pkg info entire | grep Summary
Summary: entire incorporation including Support Repository Update (Oracle Solaris 11 11/11 SRU 8.5).

John
groenv...@acm.org
Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok
# pstack core

John
groenv...@acm.org
Re: [zfs-discuss] HELP! RPool problem
In message , Karl Wagner writes:
>The SSD was the first boot drive, and every time it tried to boot it
>panicked and rebooted, ending up in a loop. I tried to change to the second
>rpool drive, but either I forgot to install grub on it or it has become
>corrupted (probably the first, I can be that stupid at times).
>
>Can anyone give me any advice on how to get this system back? Can I trick
>grub, installed on the SSD, to boot from the HDD's rpool mirror? Is
>something more sinister going on?

Remove the broken drive, boot installation media, and import the mirror drive. If it imports, you will be able to installgrub(1M).

>By the way, whatever the error message is when booting, it disappears so
>quickly I can't read it, so I am only guessing that this is the reason.

Boot with the kernel debugger so you can see the panic.

John
groenv...@acm.org
[zfs-discuss] cannot destroy, volume is busy
# zfs list -t vol
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/dump    4.00G  99.9G  4.00G  -
rpool/foo128  66.2M   100G    16K  -
rpool/swap    4.00G  99.9G  4.00G  -
# zfs destroy rpool/foo128
cannot destroy 'rpool/foo128': volume is busy

I checked that the volume is not a dump or swap device and that iSCSI is disabled. On Solaris 11.1, how would I determine what's busying it?

John
groenv...@acm.org
Re: [zfs-discuss] cannot destroy, volume is busy
In message <7f44e458-5d27-42b6-ac81-7f4ff61d6...@gmail.com>, Richard Elling writes:
>The iSCSI service is not STMF. STMF will need to be disabled, or the
>volume no longer used by STMF.
>
>iSCSI service is svc:/network/iscsi/target:default
>STMF service is svc:/system/stmf:default

Thank you for the gentle nudge with the clue stick; I forgot the process I used...
http://docs.oracle.com/cd/E26502_01/html/E29007/gaypf.html

>One would think that fuser would work, but in my experience, fuser
>rarely does what I expect.

fuser(1M) came up blank.

>If you suspect STMF, then try
>  stmfadm list-lu -v

Bingo! Deleted the LU and destroyed the volume.

John
groenv...@acm.org
Re: [zfs-discuss] Opensolaris is apparently dead
In message <4c6c4e30.7060...@ianshome.com>, Ian Collins writes:
>If you count Monday this week as lately, we have never had to wait more
>than 24 hours for replacement drives for our 45x0 or 7000 series

Same here, but two weeks ago for a failed drive in an X4150. Last week SunSolve was sending my service order requests to /dev/null, but someone manually entered them after I submitted web feedback.

John
groenv...@acm.org
Re: [zfs-discuss] BugID 6961707
In message <201008112022.o7bkmc2j028...@elvis.arl.psu.edu>, John D Groenveld writes:
>I'm stumbling over BugID 6961707 on build 134.

I see the bug has been stomped in build 150. Awesome!
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6961707

In which build did it first arrive?

Thanks,
John
groenv...@acm.org
Re: [zfs-discuss] How do you use >1 partition on x86?
In message <1026474698.131288014974214.javamail.tweb...@sf-app1>, Bill Werner writes:
>So when I built my new workstation last year, I partitioned the one and only
>disk in half, 50% for Windows, 50% for 2009.06. Now, I'm not using Windows,
>so I'd like to use the other half for another ZFS pool, but I can't figure
>out how to access it.

Use beadm(1M) to duplicate your BE to a USB disk, then boot it, then format/fdisk your workstation disk, then use beadm(1M) to duplicate your BE back to your workstation disk.

John
groenv...@acm.org
Re: [zfs-discuss] Compatibility between Sun-Oracle Fishworks appliance zfs and other zfs implementations
In message <4dddc270.6060...@u.washington.edu>, Matt Weatherford writes:
>amount of $ on. This is a great box and we love it, although the EDU
>discounts that Sun used to provide for hardware and support contracts
>seem to have dried up so the cost of supporting it moving forward is
>still unknown.

Ask Keith Block and company's sales critter about "Hardware from Oracle - Pricing for Education (HOPE)":
http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/364419.pdf

John
groenv...@acm.org
Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?
In message <444915109.61308252125289.JavaMail.Twebapp@sf-app1>, Clive Meredith writes:
>I currently run a dual boot machine with a 45Gb partition for Win7 Ultimate
>and a 25Gb partition for OpenSolaris 10 (134). I need to shrink NTFS to 20Gb
>and increase the ZFS partition to 45Gb. Is this possible please? I have
>looked at using the partition tool in OpenSolaris but both partitions are
>locked, even under admin. Win7 won't allow me to shrink the dynamic volume,
>as the Finish button is always greyed out, so no luck in that direction.

Shrink the NTFS filesystem first. I've used the Knoppix LiveCD against a defragmented NTFS.

Then use beadm(1M) to duplicate your OpenSolaris BE to a USB drive and also send snapshots of any other rpool ZFS there.

Then I would boot the USB drive, run format, fdisk, and recreate the Solaris fdisk partition on your system, recreate the rpool on slice 0 of that fdisk partition, use beadm(1M) to copy your BE back to your new rpool, and then restore any other ZFS from those snapshots.

John
groenv...@acm.org
Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!
In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>I'm sorry, I don't understand this suggestion.
>
>The pool that won't import is a mirror on two drives.

Disconnect all but the two mirrored drives that you must import, and try to import from a S11X LiveUSB.

John
groenv...@acm.org
Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!
In message <1313687977.77375.yahoomail...@web121903.mail.ne1.yahoo.com>, Stu Whitefish writes:
>Nope, not a clue how to do that and I have installed Windows on this box
>instead of Solaris since I can't get my data back from ZFS.
>I have my two drives the pool is on disconnected so if this ever gets
>resolved I can reinstall Solaris and start learning again.

I believe you can configure VirtualBox for Windows to pass the disk with your unimportable rpool through to guest OSs. Can an OpenIndiana or FreeBSD guest import the pool? Does Solaris 11X crash at the same place when run from within VirtualBox?

John
groenv...@acm.org