Re: [zfs-discuss] All drives intact but vdev UNAVAIL in raidz1

2011-09-06 Thread Mark J Musante
On Tue, 6 Sep 2011, Tyler Benster wrote: It seems quite likely that all of the data is intact, and that something different is preventing me from accessing the pool. What can I do to recover the pool? I have downloaded the Solaris 11 express livecd if that would be of any use. Try running
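
A reasonable first triage, with illustrative device names, would be something like:
  # zpool import                  (lists importable pools and why a vdev shows UNAVAIL)
  # zdb -l /dev/rdsk/c0t0d0s0     (prints the four vdev labels on one member disk)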

Re: [zfs-discuss] zpool replace

2011-08-15 Thread Mark J Musante
Hi Doug, The vms pool was created in a non-redundant way, so there is no way to get the data off of it unless you can put back the original c0t3d0 disk. If you can still plug in the disk, you can always do a zpool replace on it afterwards. If not, you'll need to restore from backup,
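
For example, once the original disk is back in place (replacement disk name illustrative):
  # zpool replace vms c0t3d0 c0t5d0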

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Mark J Musante
The fix for 6991788 would probably let the 40mb drive work, but it would depend on the asize of the pool. On Fri, 4 Mar 2011, Cindy Swearingen wrote: Hi Robert, We integrated some fixes that allowed you to replace disks of equivalent sizes, but 40 MB is probably beyond that window. Yes,
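
One way to check, assuming the asize recorded in the vdev label is what matters here (device name illustrative):
  # zdb -l /dev/rdsk/c10t0d0s0 | grep asize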

Re: [zfs-discuss] Problem with a failed replace.

2010-12-07 Thread Mark J Musante
On Mon, 6 Dec 2010, Curtis Schiewek wrote: Hi Mark, I've tried running zpool attach media ad24 ad12 (ad12 being the new disk) and I get no response. I tried leaving the command run for an extended period of time and nothing happens. What version of solaris are you running?

Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
  NAME      STATE     READ WRITE CKSUM
  media     DEGRADED     0     0     0
    raidz1  ONLINE       0     0     0
      ad8   ONLINE       0     0     0
      ad10  ONLINE       0     0     0

Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante
ad24 ad18 for you. On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante mark.musa...@oracle.com wrote: On Fri, 3 Dec 2010, Curtis Schiewek wrote:
  NAME      STATE     READ WRITE CKSUM
  media     DEGRADED     0     0     0
    raidz1  ONLINE       0     0     0

Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Mark J Musante
On Wed, 10 Nov 2010, Darren J Moffat wrote: On 10/11/2010 11:18, sridhar surampudi wrote: I was wondering how zpool split works or implemented. Or are you really asking about the implementation details ? If you want to know how it is implemented then you need to read the source code. Also
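
At the command level, a split looks something like this (pool names illustrative):
  # zpool split tank tank2     (detaches one side of each mirror and labels it as new pool tank2)
  # zpool import tank2         (the new pool is left exported by default)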

Re: [zfs-discuss] zfs unmount versus umount?

2010-09-30 Thread Mark J Musante
On Thu, 30 Sep 2010, Linder, Doug wrote: Is there any technical difference between using zfs unmount to unmount a ZFS filesystem versus the standard unix umount command? I always use zfs unmount but some of my colleagues still just use umount. Is there any reason to use one over the other?

Re: [zfs-discuss] zfs unmount versus umount?

2010-09-30 Thread Mark J Musante
On Thu, 30 Sep 2010, Darren J Moffat wrote: * It can be applied recursively down a ZFS hierarchy True. * It will unshare the filesystems first Actually, because we use the zfs command to do the unmount, we end up doing the unshare on the filesystem first. See the opensolaris code for
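
A quick illustration of the difference (dataset name illustrative):
  # zfs unmount -a       (unshares each shared filesystem first, then unmounts)
  # umount /tank/home    (plain unix unmount of one mountpoint; no unshare step)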

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante
On Mon, 20 Sep 2010, Valerio Piancastelli wrote: After a crash i cannot access one of my datasets anymore.
  ls -v cts
  brwxrwxrwx+ 2 root root 0, 0 ott 18 2009 cts
  zfs list sas/mail-cts
  NAME          USED  AVAIL  REFER  MOUNTPOINT
  sas/mail-cts  149G   250G   149G  /sas/mail-cts

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante
On Mon, 20 Sep 2010, Valerio Piancastelli wrote: Yes, it is mounted
  r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
  NAME          PROPERTY  VALUE  SOURCE
  sas/mail-cts  mounted   yes    -
OK - so the next question would be where the data is. I assume when you say you cannot access

Re: [zfs-discuss] onnv_142 - vfs_mountroot: cannot mount root

2010-09-07 Thread Mark J Musante
Did you run installgrub before rebooting? On Tue, 7 Sep 2010, Piotr Jasiukajtis wrote: Hi, After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system. Here is what I get. Any idea why it's not able to import rpool? I saw this issue also on older builds on a different

Re: [zfs-discuss] new labelfix needed

2010-09-02 Thread Mark J Musante
On Wed, 1 Sep 2010, Benjamin Brumaire wrote: your point has only a rhetorical meaning. I'm not sure what you mean by that. I was asking specifically about your situation. You want to run labelfix on /dev/rdsk/c0d1s4 - what happened to that slice that requires a labelfix? Is there

Re: [zfs-discuss] How to rebuild raidz after system reinstall

2010-09-02 Thread Mark J Musante
What does 'zpool import' show? If that's empty, what about 'zpool import -d /dev'?

Re: [zfs-discuss] How to rebuild raidz after system reinstall

2010-09-02 Thread Mark J Musante
On Thu, 2 Sep 2010, Dominik Hoffmann wrote: I think, I just destroyed the information on the old raidz members by doing zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2 It should have warned you that two of the disks were already formatted with a zfs pool. Did it not do

Re: [zfs-discuss] new labelfix needed

2010-08-31 Thread Mark J Musante
On Mon, 30 Aug 2010, Benjamin Brumaire wrote: As this feature didn't make it into zfs it would be nice to have it again. Better to spend time fixing the problem that requires a 'labelfix' as a workaround, surely. What's causing the need to fix vdev labels?

Re: [zfs-discuss] pool died during scrub

2010-08-30 Thread Mark J Musante
On Mon, 30 Aug 2010, Jeff Bacon wrote: All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE PART OF THE POOL. How can it be missing a device that didn't exist? The device(s) in question are probably the logs you refer to here: I can't obviously use b134 to import the

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Mark J Musante
On Fri, 27 Aug 2010, Rainer Orth wrote: zpool status thinks rpool is on c1t0d0s3, while format (and the kernel) correctly believe it's c11t0d0(s3) instead. Any suggestions? Try removing the symlinks or using 'devfsadm -C' as suggested here:

Re: [zfs-discuss] ZFS pool and filesystem version list, OpenSolaris builds list

2010-08-16 Thread Mark J Musante
I keep the pool version information up-to-date here: http://blogs.sun.com/mmusante/entry/a_zfs_taxonomy On Sun, 15 Aug 2010, Haudy Kazemi wrote: Hello, This is a consolidated list of ZFS pool and filesystem versions, along with the builds and systems they are found in. It is based on

Re: [zfs-discuss] Replaced pool device shows up in zpool status

2010-08-16 Thread Mark J Musante
On Mon, 16 Aug 2010, Matthias Appel wrote: Can anybody tell me how to get rid of c1t3d0 and heal my zpool? Can you do a zpool detach performance c1t3d0/o? If that works, then zpool replace performance c1t3d0 c1t0d0 should replace the bad disk with the new hot spare. Once the resilver

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote:
  # zpool status
    pool: brick
   state: UNAVAIL
  status: One or more devices could not be used because the label is missing or invalid. There are insufficient replicas for the pool to continue functioning.
  action: Destroy and re-create the pool

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, Seth Keith wrote: When I do a zdb -l /dev/rdsk/any device I get the same output for all my drives in the pool, but I don't think it looks right: # zdb -l /dev/rdsk/c4d0 What about /dev/rdsk/c4d0s0?

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, seth keith wrote:
  NAME       STATE     READ WRITE CKSUM
  brick      DEGRADED     0     0     0
    raidz1   DEGRADED     0     0     0
      c13d0  ONLINE       0     0     0
      c4d0

Re: [zfs-discuss] zfs replace problems please please help

2010-08-10 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote: first off I don't have the exact failure messages here, and I did not take good notes of the failures, so I will do the best I can. Please try and give me advice anyway. I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all
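
The usual pattern is to replace one disk at a time and let each resilver finish (new disk name illustrative):
  # zpool replace brick c4d0 c9d0
  # zpool status brick        (wait until the resilver completes before the next replace)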

Re: [zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Mark J Musante
You can use 'zpool history -l syspool' to show the username of the person who created the dataset. The history is in a ring buffer, so if too many pool operations have happened since the dataset was created, the information is lost. On Wed, 4 Aug 2010, Peter Taps wrote: Folks, In my
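
For example (output format approximate):
  # zpool history -l syspool
  2010-08-04.09:15:12 zfs create syspool/data [user root on host1:global]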

Re: [zfs-discuss] root pool expansion

2010-07-28 Thread Mark J Musante
On Wed, 28 Jul 2010, Gary Gendel wrote: Right now I have a machine with a mirrored boot setup. The SAS drives are 43Gs and the root pool is getting full. I do a backup of the pool nightly, so I feel confident that I don't need to mirror the drive and can break the mirror and expand the pool

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Mark J Musante
On Thu, 15 Jul 2010, Tim Castle wrote: j...@opensolaris:~# zpool import -d /dev ...shows nothing after 20 minutes OK, then one other thing to try is to create a new directory, e.g. /mydev, and create in it symbolic links to only those drives that are part of your pool. Based on your
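
Something like this, with illustrative device names:
  # mkdir /mydev
  # ln -s /dev/dsk/c7t0d0s0 /mydev/c7t0d0s0    (repeat for each pool member)
  # zpool import -d /mydev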

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Mark J Musante
What does 'zpool import -d /dev' show? On Wed, 14 Jul 2010, Tim Castle wrote: My raidz1 (ZFSv6) had a power failure, and a disk failure. Now:
  j...@opensolaris:~# zpool import
    pool: files
      id: 3459234681059189202
   state: UNAVAIL
  status: One or

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote: Hi all With several messages in here about troublesome zpools, would there be a good reason to be able to fsck a pool? As in, check the whole thing instead of having to boot into live CDs and whatnot? You can do this with zpool scrub. It
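
For example (pool name illustrative):
  # zpool scrub tank
  # zpool status -v tank     (shows scrub progress and any files with unrecoverable errors)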

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote: what I'm saying is that there are several posts in here where the only solution is to boot onto a live cd and then do an import, due to metadata corruption. This should be doable from the installed system. Ah, I understand now. A couple of

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread Mark J Musante
On Mon, 24 May 2010, h wrote: i had 6 disks in a raidz1 pool that i replaced from 1TB drives to 2TB drives. i have installed the older 1TB drives in another system and would like to import the old pool to access some files i accidentally deleted from the new pool. Did you use the 'zpool

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-20 Thread Mark J Musante
On Wed, 19 May 2010, John Andrunas wrote:
  ff001f45e830 unix:die+dd ()
  ff001f45e940 unix:trap+177b ()
  ff001f45e950 unix:cmntrap+e6 ()
  ff001f45ea50 zfs:ddt_phys_decref+c ()
  ff001f45ea80 zfs:zio_ddt_free+55 ()
  ff001f45eab0 zfs:zio_execute+8d ()
  ff001f45eb50

Re: [zfs-discuss] Very serious performance degradation

2010-05-20 Thread Mark J Musante
On Thu, 20 May 2010, Edward Ned Harvey wrote: Also, since you've got s0 on there, it means you've got some partitions on that drive. You could manually wipe all that out via format, but the above is pretty brainless and reliable. The s0 on the old disk is a bug in the way we're formatting

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Mark J Musante
Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010, John Andrunas wrote: Running ZFS on a Nexenta box, I had a mirror get broken and apparently the metadata is corrupt now. If I try and mount vol2 it works but if I try and mount -a or mount vol2/vm2 is instantly

Re: [zfs-discuss] zpool lists 2 controllers the same, how do I replace one?

2010-04-19 Thread Mark J Musante
On Sun, 18 Apr 2010, Michelle Bhaal wrote: zpool lists my pool as having 2 disks which have identical names. One is offline, the other is online. How do I tell zpool to replace the offline one? If you're lucky, the device will be marked as not being present, and then you can use the GUID.
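
A sketch, with made-up GUID and device names: pull the GUID out of the vdev label, then use it in place of the device name:
  # zdb -l /dev/rdsk/c2t0d0s0 | grep -w guid
  # zpool replace tank 12345678901234567890 c3t0d0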

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Mark J Musante
On Wed, 7 Apr 2010, Neil Perrin wrote: There have previously been suggestions to read slogs periodically. I don't know if there's a CR raised for this though. Roch wrote up CR 6938883 Need to exercise read from slog dynamically Regards, markm

Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Mark J Musante
On Wed, 31 Mar 2010, Damon Atkins wrote: Why do we still need /etc/zfs/zpool.cache file??? The cache file contains a list of pools to import, not a list of pools that exist. If you do a zpool export foo and then reboot, we don't want foo to be imported after boot completes.

Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Mark J Musante
It would be nice for Oracle/Sun to produce a separate script which resets system/devices back to an install-like beginning, so if you move an OS disk with the current password file and software from one system to another, it can rebuild the device tree on the new system. You mean

Re: [zfs-discuss] zpool split problem?

2010-03-30 Thread Mark J Musante
OK, I see what the problem is: the /etc/zfs/zpool.cache file. When the pool was split, the zpool.cache file was also split - and the split happens prior to the config file being updated. So, after booting off the split side of the mirror, zfs attempts to mount rpool based on the information
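
One way to sidestep this, assuming the -R altroot option to split behaves as it does for import (the new pool is not recorded in the cache file):
  # zpool split -R /mnt rpool spool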

Re: [zfs-discuss] zpool split problem?

2010-03-29 Thread Mark J Musante
On Sat, 27 Mar 2010, Frank Middleton wrote:
  Started with c0t1d0s0 running b132 (root pool is called rpool)
  Attached c0t0d0s0 and waited for it to resilver
  Rebooted from c0t0d0s0
  zpool split rpool spool
  Rebooted from c0t0d0s0, both rpool and spool were mounted
  Rebooted from c0t1d0s0, only rpool

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-29 Thread Mark J Musante
On Mon, 29 Mar 2010, Victor Latushkin wrote: On Mar 29, 2010, at 1:57 AM, Jim wrote: Yes - but it does nothing. The drive remains FAULTED. Try to detach one of the failed devices: zpool detach tank 4407623704004485413 As Victor says, the detach should work. This is a known issue and

Re: [zfs-discuss] Convert from rz2 to rz1

2010-03-11 Thread Mark J Musante
On Thu, 11 Mar 2010, Lars-Gunnar Persson wrote: Is it possible to convert an rz2 array to an rz1 array? I have a pool with two rz2 arrays. I would like to convert them to rz1. Would that be possible? No, you'll have to create a second pool with raidz1 and do a send | recv operation to copy the
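
A sketch of the copy, with illustrative pool names:
  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs recv -d newpool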

Re: [zfs-discuss] Can you manually trigger spares?

2010-03-09 Thread Mark J Musante
On Mon, 8 Mar 2010, Tim Cook wrote: Is there a way to manually trigger a hot spare to kick in? Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's all the fma service does anyway. If you ever get your drive to come back online, the fma service should recognize that

Re: [zfs-discuss] Thoughts pls. : Create 3 way rpool mirror and shelve one mirror as a backup

2010-03-08 Thread Mark J Musante
On Sat, 6 Mar 2010, Richard Elling wrote: On Mar 6, 2010, at 5:38 PM, tomwaters wrote: My thought is this, I remove the 3rd mirror disk and offsite it as a backup. To do this either: 1. upgrade to a later version where the zpool split command is available 2. zfs send/receive

Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-04 Thread Mark J Musante
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has some data on it that hasn't been copied off yet, and it's not considering the fact that it's part of a raidz group with ad4p2. There is a CR on this: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724

Re: [zfs-discuss] Moving dataset to another zpool but same mount?

2010-02-25 Thread Mark J Musante
On Wed, 24 Feb 2010, Gregory Gee wrote:
  files
  files/home
  files/mail
  files/VM
I want to move the files/VM to another zpool, but keep the same mount point. What would be the right steps to create the new zpool, move the data and mount in the same spot? Create the new pool, take a snapshot
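
Presumably something along these lines, with an illustrative pool name and disk:
  # zpool create vmpool c5t0d0
  # zfs snapshot files/VM@move
  # zfs send files/VM@move | zfs recv vmpool/VM
  # zfs destroy -r files/VM                 (only after verifying the copy)
  # zfs set mountpoint=/files/VM vmpool/VM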

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-24 Thread Mark J Musante
On Tue, 23 Feb 2010, patrik wrote: I want to import my zpools from FreeBSD 8.0 in OpenSolaris 2009.06.
  secure          UNAVAIL  insufficient replicas
    raidz1        UNAVAIL  insufficient replicas
      c8t1d0p0    ONLINE
      c8t2d0s2    ONLINE
      c8t3d0s8    UNAVAIL

Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-23 Thread Mark J Musante
On Mon, 22 Feb 2010, tomwaters wrote: I have just installed open solaris 2009.6 on my server using a 250G laptop drive (using the entire drive). So, 2009.06 was based on 111b. There was a fix that went into build 117 that allows you to mirror to smaller disks if the metaslabs in zfs are

Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-12 Thread Mark J Musante
On Fri, 12 Feb 2010, Daniel Carosone wrote: You can use zfs promote to change around which dataset owns the base snapshot, and which is the dependent clone with a parent, so you can delete the other - but if you want both datasets you will need to keep the snapshot they share. Right. The
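
For example, with illustrative dataset names:
  # zfs promote tank/clone     (tank/clone now owns the shared snapshot)
  # zfs destroy tank/origin    (deletable now, as long as the shared snapshot stays)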

Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Mark J Musante
On Thu, 11 Feb 2010, Tony MacDoodle wrote: I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the mirror and plunk it in another system? Intact? Or as a new disk in the other system? If you want to break the mirror, and create a new pool on the disk, you can just do 'zpool

Re: [zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Mark J Musante
On Thu, 11 Feb 2010, Cindy Swearingen wrote: On 02/11/10 04:01, Marc Friesacher wrote:
  fr...@vault:~# zpool import
    pool: zedpool
      id: 10232199590840258590
   state: ONLINE
  action: The pool can be imported using its name or numeric identifier.
  config:
          zedpool  ONLINE

Re: [zfs-discuss] Pool disk replacing fails

2010-02-05 Thread Mark J Musante
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote:
  NAME        STATE     READ WRITE CKSUM
  mypool      DEGRADED     0     0     0
    mirror    DEGRADED     0     0     0
      c1t4d0  DEGRADED     0     0    28  too many errors
      c1t5d0  ONLINE       0     0     0

Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-02-01 Thread Mark J Musante
On Thu, 28 Jan 2010, TheJay wrote: Attached the zpool history. Did the resilver ever complete on the first c6t1d0? I see a second replace here:
  2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
  2010-01-28.07:57:27 zpool scrub rzpool2
  2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0

Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-01-28 Thread Mark J Musante
On Wed, 27 Jan 2010, TheJay wrote: Guys, Need your help. My DEV131 OSOL build with my 21TB disk system somehow got really screwed: This is what my zpool status looks like:
  NAME     STATE     READ WRITE CKSUM
  rzpool2  DEGRADED     0     0     0

Re: [zfs-discuss] Remove ZFS Mount Points

2010-01-22 Thread Mark J Musante
On Fri, 22 Jan 2010, Tony MacDoodle wrote: Can I move the below mounts under / ?
  rpool/export       /export
  rpool/export/home  /export/home
Sure. Just copy the data out of the directory, do a zfs destroy on the two filesystems, and copy it back. For example:
  # mkdir /save
  # cp -r
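
Presumably the rest of that example runs along these lines (paths illustrative; destroy only after the copy is verified):
  # cp -r /export/home /save
  # zfs destroy rpool/export/home
  # zfs destroy rpool/export
  # mkdir -p /export/home
  # cp -r /save/home/* /export/home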

Re: [zfs-discuss] Invalid zpool argument in Solaris 10 (10/09)

2010-01-14 Thread Mark J Musante
On Thu, 14 Jan 2010, Josh Morris wrote: Hello List, I am porting a block device driver (for a PCIe NAND flash disk) from OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an issue creating a zpool with the disk. Apparently I have an 'invalid argument' somewhere: % pfexec

Re: [zfs-discuss] unable to zfs destroy

2010-01-11 Thread Mark J Musante
On Fri, 8 Jan 2010, Rob Logan wrote: this one has me a little confused. ideas?
  j...@opensolaris:~# zpool import z
  cannot mount 'z/nukeme': mountpoint or dataset is busy
  cannot share 'z/cle2003-1': smb add share failed
  j...@opensolaris:~# zfs destroy z/nukeme
  internal error: Bad exchange

Re: [zfs-discuss] Pool resize

2009-12-07 Thread Mark J Musante
Did you set autoexpand on? Conversely, did you try doing a 'zpool online bigpool disk' for each disk after the replace completed? On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote: Hi, I've read before regarding zpool size increase by replacing the vdevs. The initial pool was a raidz2 with
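
For reference, the two commands in question (pool and disk names illustrative):
  # zpool set autoexpand=on bigpool
  # zpool online -e bigpool c0t0d0    (-e expands the device to its full size)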

Re: [zfs-discuss] Adding drives to system - disk labels not consistent

2009-12-01 Thread Mark J. Musante
This may be a dup of 6881631. Regards, markm On 1 Dec 2009, at 15:14, Cindy Swearingen cindy.swearin...@sun.com wrote: I was able to reproduce this problem on the latest Nevada build: # zpool create tank raidz c1t2d0 c1t3d0 c1t4d0 # zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0 would

Re: [zfs-discuss] Zpool hosed during testing

2009-11-11 Thread Mark J Musante
On 10 Nov, 2009, at 21.02, Ron Mexico wrote: This didn't occur on a production server, but I thought I'd post this anyway because it might be interesting. This is CR 6895446 and a fix for it should be going into build 129. Regards, markm

Re: [zfs-discuss] Zpool without any redundancy

2009-10-20 Thread Mark J Musante
On Mon, 19 Oct 2009, Espen Martinsen wrote: Let's say I've chosen to live with a zpool without redundancy (SAN disks, which actually have RAID-5 in the disk cabinet). What benefit are you hoping zfs will provide in this situation? Examine your situation carefully and determine what filesystem works

Re: [zfs-discuss] Checksum property change does not change pre-existing data - right?

2009-09-24 Thread Mark J Musante
On 23 Sep, 2009, at 21.54, Ray Clark wrote: My understanding is that if I zfs set checksum=different to change the algorithm that this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks. I
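
That matches how the property behaves; for example (dataset name illustrative):
  # zfs set checksum=sha256 tank/data    (only blocks written from now on use sha256)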

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-09-24 Thread Mark J Musante
On Thu, 24 Sep 2009, Paul Archer wrote: I may have missed something in the docs, but if I have a file in one FS, and want to move it to another FS (assuming both filesystems are on the same ZFS pool), is there a way to do it outside of the standard mv/cp/rsync commands? Not yet. CR 6483179

Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Mark J Musante
On Mon, 14 Sep 2009, Marty Scholes wrote: I really want to move back to 2009.06 and keep all of my files / snapshots. Is there a way somehow to zfs send an older stream that 2009.06 will read so that I can import that into 2009.06? Can I even create an older pool/dataset using 122?
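
Pools can at least be created at an older on-disk version going forward; assuming 2009.06 tops out at pool version 14, something like:
  # zpool create -o version=14 tank c0t0d0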

Re: [zfs-discuss] raidz replace issue

2009-09-14 Thread Mark J Musante
On Sat, 12 Sep 2009, Jeremy Kister wrote:
  scrub: resilver in progress, 0.12% done, 108h42m to go
  [...]
    raidz1    DEGRADED     0     0     0
      c3t8d0  ONLINE       0     0     0
      c5t8d0  ONLINE       0     0     0
      c3t9d0  ONLINE       0     0

Re: [zfs-discuss] raidz replace issue

2009-09-13 Thread Mark J Musante
The device is listed with s0; did you try using c5t9d0s0 as the name? On 12 Sep, 2009, at 17.44, Jeremy Kister wrote: [sorry for the cross post to solarisx86] One of my disks died that i had in a raidz configuration on a Sun V40z with Solaris 10u5. I took the bad disk out, replaced the

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Mark J Musante
On Fri, 28 Aug 2009, Dave wrote: Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all solaris versions. Just to get the terminology right: CR means Change Request, and can refer to Defects (bugs) or RFE's. Defects

Re: [zfs-discuss] Problem booting with zpool

2009-08-27 Thread Mark J Musante
Hi Stephen, Have you got many zvols (or snapshots of zvols) in your pool? You could be running into CR 6761786 and/or 6693210. On Thu, 27 Aug 2009, Stephen Green wrote: I'm having trouble booting with one of my zpools. It looks like this:
    pool: tank
   state: ONLINE
   scrub: none requested

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante
On Tue, 28 Jul 2009, Glen Gunselman wrote:
  # zpool list
  NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
  zpool1  40.8T   176K  40.8T   0%  ONLINE  -
  # zfs list
  NAME    USED  AVAIL  REFER  MOUNTPOINT
  zpool1  364K  32.1T  28.8K  /zpool1
This is normal, and

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, Glen Gunselman wrote: Where would I see CR 6308817? My usual search tools aren't finding it. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817 Regards, markm

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, David Magda wrote: Which makes me wonder: is there a programmatic way to determine if a path is on ZFS? Yes, if it's local. Just use df -n $path and it'll spit out the filesystem type. If it's mounted over NFS, it'll just say something like nfs or autofs, though.
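
For example (path illustrative):
  $ df -n /tank/home
  /tank/home          : zfs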

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Mark J Musante
On Wed, 8 Jul 2009, Moore, Joe wrote: The copies code is nice because it tries to put each copy far away from the others. This does have a significant performance impact when on a single spindle, however, because each logical write will be written here and then a disk seek to write it to

Re: [zfs-discuss] Q: zfs log device

2009-07-01 Thread Mark J Musante
On Tue, 30 Jun 2009, John Hoogerdijk wrote: i've setup a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log device. to see how well it works, i ran bonnie++, but never saw any io's on the log device (using iostat -nxce). pool status is good - no issues or errors. any ideas? Try

Re: [zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Mark J Musante
On Mon, 29 Jun 2009, Carsten Aulbert wrote: Is there any way to force zpool import to re-order that? I could delete all stuff under BACKUP, however given the size I don't really want to. Do a zpool export first, and then check to see what's in /atlashome. My bet is that the BACKUP directory

Re: [zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Mark J Musante
On Mon, 29 Jun 2009, Carsten Aulbert wrote:
  s11 console login: root
  Password:
  Last login: Mon Jun 29 10:37:47 on console
  Sun Microsystems Inc.  SunOS 5.10  Generic  January 2005
  s11:~# zpool export atlashome
  s11:~# ls -l /atlashome
  /atlashome: No such file or directory
  s11:~# zpool import

Re: [zfs-discuss] Narrow escape!

2009-06-23 Thread Mark J Musante
On Mon, 22 Jun 2009, Ross wrote: All seemed well, I replaced the faulty drive, imported the pool again, and kicked off the repair with: # zpool replace zfspool c1t1d0 What build are you running? Between builds 105 and 113 inclusive there's a bug in the resilver code which causes it to miss

Re: [zfs-discuss] clones and sub-datasets

2009-06-16 Thread Mark J Musante
On Mon, 15 Jun 2009, Todd Stansell wrote: Any thoughts on how this can be done? I do have other systems I can use to test this procedure, but ideally it would not introduce any downtime, but that can be arranged if necessary. I think the only work-around is to re-promote 'data', destroy the

Re: [zfs-discuss] Recover ZFS destroyed dataset?

2009-06-05 Thread Mark J Musante
Hi Jim, See if 'zpool history' gives you what you're looking for. Regards, markm

Re: [zfs-discuss] Quick adding devices question

2009-05-29 Thread Mark J Musante
On Fri, 29 May 2009, Rich Teer wrote:
  zpool attach dpool c1t0d0 c2t0d0
  zpool attach dpool c1t1d0 c2t1d0
  zpool attach dpool c1t2d0 c2t2d0
These should all be zpool add dpool mirror {disk1} {disk2}, but yes. I recommend trying this out using files instead of disks beforehand so you get a
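
A file-based dry run might look like this (sizes and paths illustrative):
  # mkfile 100m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
  # zpool create dtest mirror /tmp/d0 /tmp/d1
  # zpool add dtest mirror /tmp/d2 /tmp/d3
  # zpool status dtest
  # zpool destroy dtest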

Re: [zfs-discuss] LU snv_93 - snv_101a (ZFS - ZFS )

2009-05-22 Thread Mark J Musante
On Thu, 21 May 2009, Nandini Mocherla wrote: Then I booted into failsafe mode of 101a and then tried to run the following command as given in luactivate output. Yeah, that's a known bug in the luactivate output. CR 6722845
  # mount -F zfs /dev/dsk/c1t2d0s0 /mnt
  cannot open

Re: [zfs-discuss] replicating a root pool

2009-05-21 Thread Mark J Musante
On Thu, 21 May 2009, Ian Collins wrote: I'm trying to use zfs send/receive to replicate the root pool of a system and I can't think of a way to stop the received copy attempting to mount the filesystem over the root of the destination pool. If you're using build 107 or later, there's a
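
If the feature meant here is the receive -u option, which leaves received filesystems unmounted (an assumption), the usage would be something like:
  # zfs send -R rpool@backup | zfs recv -u -d backup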

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Mark J Musante
On Thu, 7 May 2009, Mike Gerdts wrote: Perhaps you have change the configuration of the array since the last reconfiguration boot. If you run devfsadm then run format, does it see more disks? Another thing to check is to see if the controller has a jbod mode as opposed to passthrough.

Re: [zfs-discuss] Destroying a zfs dataset

2009-04-17 Thread Mark J Musante
On Fri, 17 Apr 2009, Mark J Musante wrote: The dependency is based on the names. I should clarify what I mean by that. There are actually two dependencies here: one is based on dataset names, and one is based on snapshots and clones. If there are two datasets, pool/foo and pool/foo/bar

Re: [zfs-discuss] vdev_disk_io_start() sending NULL pointer in ldi_ioctl()

2009-04-10 Thread Mark J Musante
On Thu, 9 Apr 2009, shyamali.chakrava...@sun.com wrote: Hi All, I have a corefile where we see a NULL pointer de-reference PANIC, as we have sent (deliberately) a NULL pointer for the return value.
  vdev_disk_io_start()
      error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,

Re: [zfs-discuss] ZIL SSD performance testing... -IOzone works great, others not so great

2009-04-10 Thread Mark J Musante
On Fri, 10 Apr 2009, Patrick Skerrett wrote: degradation) when these write bursts come in, and if I could buffer them even for 60 seconds, it would make everything much smoother. ZFS already batches up writes into a transaction group, which currently happens every 30 seconds. Have you

Re: [zfs-discuss] RFE: creating multiple clones in one zfs(1) call and one txg

2009-03-27 Thread Mark J Musante
On Fri, 27 Mar 2009, Alec Muffett wrote: The inability to create more than 1 clone at a time (ie: in separate TXGs) is something which has hampered me (and several projects on which I have worked) for some years, now. Hi Alec, Does CR 6475257 cover what you're looking for? Regards, markm

Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante
On Tue, 17 Mar 2009, Neal Pollack wrote: Can anyone share some instructions for setting up the rpool mirror of the boot disks during the Solaris Nevada (SXCE) install? You'll need to use the text-based installer, and in there you choose the two bootable disks instead of just one.

Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante
On 17 Mar, 2009, at 16.21, Bryan Allen wrote: Then mirror the VTOC from the first (zfsroot) disk to the second:
  # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
  # zpool attach -f rpool c1t0d0s0 c1t1d0s0
  # zpool status -v
And then you'll still need to run installgrub to put
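
The installgrub step, with the same illustrative device:
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0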

Re: [zfs-discuss] ZFS snapshot successfully but zfs list -r does not list the snapshot

2009-03-06 Thread Mark J Musante
Hi Steven, Try doing 'zfs list -t all'. This is a change that went in late last year to list only datasets unless snapshots were explicitly requested. On Fri, 6 Mar 2009, Steven Sim wrote: Gurus; I am using OpenSolaris 2008.11 snv_101b_rc2 X86 Prior to this I was using SXCE built 91

Re: [zfs-discuss] large file copy bug?

2009-03-06 Thread Mark J Musante
On Fri, 6 Mar 2009, Blake wrote: I have savecore enabled, but it doesn't look like the machine is dumping core as it should - that is, I don't think it's a panic - I suspect interrupt handling. Then when you say you had a machine crash, what did you mean? Did you look in /var/crash/* to see

Re: [zfs-discuss] large file copy bug?

2009-03-06 Thread Mark J Musante
On Fri, 6 Mar 2009, Blake wrote: I have savecore enabled, but nothing in /var/crash: r...@filer:~# savecore -v savecore: dump already processed r...@filer:~# ls /var/crash/filer/ r...@filer:~# OK, just to ask the dumb questions: is dumpadm configured for /var/crash/filer? Is the dump zvol
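
dumpadm with no arguments shows the current setup; output along these lines:
  # dumpadm
        Dump content: kernel pages
         Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
  Savecore directory: /var/crash/filer
    Savecore enabled: yes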

Re: [zfs-discuss] large file copy bug?

2009-03-05 Thread Mark J Musante
On Thu, 5 Mar 2009, Blake wrote: I had a 2008.11 machine crash while moving a 700gb file from one machine to another using cp. I looked for an existing bug for this, but found nothing. Has anyone else seen behavior like this? I wanted to check before filing a bug. Have you got a copy of

Re: [zfs-discuss] ZFS vdev_cache

2009-02-13 Thread Mark J Musante
On Fri, 13 Feb 2009, Tony Marshall wrote: How would i obtain the current setting for the vdev_cache from a production system? We are looking at trying to tune ZFS for better performance with respect to oracle databases, however before we start changing settings via the /etc/system file we
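
The current values can be read from the live kernel with mdb (value shown is illustrative):
  # echo "zfs_vdev_cache_size/D" | mdb -k
  zfs_vdev_cache_size:    10485760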

Re: [zfs-discuss] Cannot Mirror RPOOL, Can't Label Disk to SMI

2009-02-03 Thread Mark J Musante
Handojo wrote:
  hando...@opensolaris:~# zpool add rpool c4d0
Two problems: first, the command needed is 'zpool attach', because you're making a mirror. 'zpool add' is for extending stripes, and currently stripes are not supported as root pools. The second problem is that when the drive is
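
A sketch of the attach, assuming the new disk has first been given an SMI label (e.g. via format -e) and with illustrative device names:
  # prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c4d0s2
  # zpool attach rpool c3d0s0 c4d0s0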

Re: [zfs-discuss] how to set mountpoint to default?

2009-01-31 Thread Mark J Musante
To set the mountpoint back to default, use 'zfs inherit mountpoint dataset'

Re: [zfs-discuss] set mountpoint but don't mount?

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Frank Cusack wrote: so, is there a way to tell zfs not to perform the mounts for data2? or another way i can replicate the pool on the same host, without exporting the original pool? There is not a way to do that currently, but I know it's coming down the road.

Re: [zfs-discuss] RFE: parsable iostat and zpool layout

2009-01-30 Thread Mark J Musante
Hi Pål, CR 6420274 covers the -p part of your question. As far as kstats go, we only have them in the arc and the vdev read-ahead cache. Regards, markm

Re: [zfs-discuss] Hang on zfs import - build 107

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Ed Kaczmarek wrote: And/or step me thru the required mdb/kdb/whatever it's called stack trace dump command sequence after booting with -kd Dan Mick's got a good guide on his blog: http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with Regards, markm

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Mark J Musante
On Wed, 28 Jan 2009, Richard Elling wrote: Orvar Korvar wrote: I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar vein? Would it be easy to do? Yes. To be specific, you use the 'cache' argument to zpool, as in: zpool create pool ... cache cache-device
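
Or, to add a cache device to an existing pool (device name illustrative):
  # zpool add tank cache c5t0d0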

Re: [zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Mark J Musante
On Thu, 22 Jan 2009, Al Slater wrote: Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not supported. This line is coming from SVM, which leads me to believe that the zfs boot blocks were not properly installed by live upgrade. You can try doing this by hand, with the
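
On SPARC hardware like the V880 the boot block goes on with installboot rather than installgrub; a sketch with an illustrative device:
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0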
