Re: [zfs-discuss] ZFS filesystem online backup question

2007-03-27 Thread Mark J Musante
On Tue, 27 Mar 2007, Łukasz wrote: zfs send then would: 1. create replicate snapshot if it does not exist 2. send data 3. wait 10 seconds 4. rename snapshot to replicate_previous (destroy previous if it exists) 5. goto 1. All snapshot operations are done in kernel - it works
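A minimal userland sketch of the same loop, assuming a hypothetical source dataset tank/data, remote host backuphost, and destination dataset backup/data (none of these names come from the thread):

  while true; do
      zfs snapshot tank/data@replicate
      if zfs list -t snapshot tank/data@replicate_previous >/dev/null 2>&1; then
          # incremental send from the previous replication point
          zfs send -i @replicate_previous tank/data@replicate | \
              ssh backuphost zfs receive -F backup/data
          zfs destroy tank/data@replicate_previous
      else
          # first pass: full send
          zfs send tank/data@replicate | ssh backuphost zfs receive backup/data
      fi
      sleep 10
      zfs rename tank/data@replicate tank/data@replicate_previous
  done

The in-kernel approach described above avoids forking a separate zfs process for each of these steps, which is where the speedup in the follow-up message comes from.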

Re: [zfs-discuss] Re: ZFS filesystem online backup question

2007-03-27 Thread Mark J Musante
On Tue, 27 Mar 2007, Łukasz wrote: Out of curiosity, what is the timing difference between a userland script and performing the operations in the kernel? Operation takes 15 - 20 seconds. In kernel it takes (time in ms): [between 2.5 and 14.5 seconds]. Very nice improvement.

Re: [zfs-discuss] ZFS and UFS performance

2007-03-28 Thread Mark J Musante
On Wed, 28 Mar 2007, prasad wrote: We create iso images of our product in the following way (high-level): # mkfile 3g /isoimages/myiso # lofiadm -a /isoimages/myiso /dev/lofi/1 # newfs /dev/rlofi/1 # mount /dev/lofi/1 /mnt # cd /mnt; zcat /product/myproduct.tar.Z | tar xf - How big does

Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Martin Girard wrote: Is it possible to make my zpool redundant by adding a new disk in the pool and making it a mirror with the initial disk? Sure, by using zpool attach: # mkfile 64m /tmp/foo /tmp/bar # zpool create tank /tmp/foo # zpool status pool: tank state:
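A complete version of the file-backed example the truncated preview begins (the mkfile-backed devices are for illustration only; with real disks you attach the second disk to the first):

  # mkfile 64m /tmp/foo /tmp/bar
  # zpool create tank /tmp/foo
  # zpool attach tank /tmp/foo /tmp/bar
  # zpool status tank

After the attach, the pool resilvers and 'zpool status' shows a mirror vdev containing both devices.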

Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Constantin Gonzalez wrote: Has anybody tried it yet with a striped mirror? What if the pool is composed out of two mirrors? Can I attach devices to both mirrors, let them resilver, then detach them and import the pool from those? You'd want to export them, not detach

Re: [zfs-discuss] Renaming a pool?

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Rich Teer wrote: I have a pool called tank/home/foo and I want to rename it to tank/home/bar. What's the best way to do this (the zfs and zpool man pages don't have a rename option)? In fact, there is a rename option for zfs: # zfs create tank/home # zfs create
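Assuming tank/home/foo already exists, the rename is a single command (note this renames a dataset within the pool, not the pool itself):

  # zfs rename tank/home/foo tank/home/bar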

Re: [zfs-discuss] How to bind the oracle 9i data file to zfs volumes

2007-04-12 Thread Mark J Musante
On Thu, 12 Apr 2007, Simon wrote: I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some zfs volumes which will be used for the oracle data files, as: Have you tried using SVM volumes? I ask, because SVM does the same thing: soft-link to /devices If it works for SVM and not for ZFS,

Re: [zfs-discuss] Permanently removing vdevs from a pool

2007-04-19 Thread Mark J Musante
On Thu, 19 Apr 2007, Mario Goebbels wrote: Is it possible to gracefully and permanently remove a vdev from a pool without data loss? Is this what you're looking for? http://bugs.opensolaris.org/view_bug.do?bug_id=4852783 If so, the answer is 'not yet'. Regards, markm

Re: [zfs-discuss] ZFS Boot: Dividing up the name space

2007-04-24 Thread Mark J Musante
On Tue, 24 Apr 2007, Darren J Moffat wrote: There are obvious other places that would really benefit but I think having them as separate datasets really depends on what the machine is doing. For example /var/apache if you really are a webserver, but then why not go one better and split out

Re: [zfs-discuss] Re: ZFS disables nfs/server on a host

2007-04-26 Thread Mark J Musante
On Thu, 26 Apr 2007, Ben Miller wrote: I just rebooted this host this morning and the same thing happened again. I have the core file from zfs. [ Apr 26 07:47:01 Executing start method (/lib/svc/method/nfs-server start) ] Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c,

Re: [zfs-discuss] zfs tcsh command completion

2007-05-08 Thread Mark J Musante
On 8 May, 2007, at 22.51, Cyril Plisko wrote: So I quickly hacked together a script which defines the necessary complete clauses (yes I am a tcsh user). After playing with it for a while I decided to share it with community in a hope that it may be improved/extended and be a useful tool in

Re: [zfs-discuss] Re: Extremely long ZFS destroy operations

2007-05-10 Thread Mark J Musante
On Wed, 9 May 2007, Anantha N. Srirama wrote: However, the poor performance of the destroy is still valid. It is quite possible that we might create another clone for reasons beyond my original reason. There are a few open bugs against destroy. It sounds like you may be running into 6509628

Re: [zfs-discuss] Is this a workable ORACLE disaster recovery solution?

2007-05-10 Thread Mark J Musante
On Thu, 10 May 2007, Bruce Shaw wrote: I don't have enough disk to do clones and I haven't figured out how to mount snapshots directly. Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly the way to mount a snapshot. Creating a clone uses up a negligible amount of disk
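A short sketch with hypothetical names: a clone gives a writable, mounted view of a snapshot and initially consumes almost no extra space:

  # zfs snapshot tank/oradata@beforetest
  # zfs clone tank/oradata@beforetest tank/oradata_clone

The read-only contents of a snapshot are also visible without cloning, under the dataset's mountpoint at .zfs/snapshot/<snapname>.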

Re: [zfs-discuss] ZFS Snapshot destroy to

2007-05-11 Thread Mark J Musante
On Fri, 11 May 2007, Jason J. W. Williams wrote: Is it possible (or even technically feasible) for zfs to have a destroy to feature? Basically destroy any snapshot older than a certain date? Sorta-kinda. You can use 'zfs get' to get the creation time of a snapshot. If you give it -p, it'll
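A hedged sketch of the userland approach the reply describes, assuming a hypothetical dataset tank/data and a cutoff expressed in seconds since the epoch:

  #!/bin/sh
  cutoff=1178668800   # hypothetical cutoff date, in epoch seconds
  zfs list -H -t snapshot -o name | grep '^tank/data@' | while read snap; do
      ctime=`zfs get -H -p -o value creation "$snap"`
      if [ "$ctime" -lt "$cutoff" ]; then
          zfs destroy "$snap"
      fi
  done

The -p flag to 'zfs get' prints the creation time as a raw number, which makes the age comparison trivial.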

Re: [zfs-discuss] Re: A quick ZFS question: RAID-Z Disk Replacement + Growth ?

2007-05-14 Thread Mark J Musante
On Mon, 14 May 2007, Alec Muffett wrote: I suspect the proper thing to do would be to build the six new large disks into a new RAID-Z vdev, add it as a mirror of the older, smaller-disk RAID-Z vdev, rezilver to zynchronize them, and then break the mirror. The 'zpool replace' command is a
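For reference, growing a raidz vdev by swapping in larger disks is done one disk at a time (hypothetical device names), letting each resilver finish before starting the next:

  # zpool replace tank c1t1d0 c2t1d0
  # zpool status tank      (wait for the resilver to complete, then repeat for the next disk)

On builds of that era the extra capacity only shows up once every disk in the vdev has been replaced, typically after an export and re-import of the pool.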

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Mark J Musante
On Tue, 15 May 2007, Trevor Watson wrote: I don't suppose that it has anything to do with the flag being wm instead of wu on your second drive does it? Maybe if the driver thinks slice 2 is writeable, it treats it as a valid slice? If the slice doesn't take up the *entire* disk, then it

Re: [zfs-discuss] ZVol Panic on 62

2007-05-29 Thread Mark J Musante
On Fri, 25 May 2007, Ben Rockwood wrote: May 25 23:32:59 summer unix: [ID 836849 kern.notice] May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740: May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff00232c3a80 addr=490 occurred in

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, Krzys wrote: bash-3.00# zpool replace mypool c1t2d0 emcpower0a bash-3.00# zpool status pool: mypool state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, John Plocher wrote: This seems especially true when there is closure on actions - the pair of 'zfs snapshot foo/[EMAIL PROTECTED]' and 'zfs destroy foo/[EMAIL PROTECTED]' commands is (except for debugging zfs itself) a no-op. Note that if you use the recursive
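The recursive flag applies a single snapshot name to a dataset and all of its descendants in one operation, and the matching destroy removes them all (hypothetical names):

  # zfs snapshot -r tank@nightly
  # zfs destroy -r tank@nightly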

Re: [zfs-discuss] ZFS Send/RECV

2007-06-04 Thread Mark J Musante
On Fri, 1 Jun 2007, Ben Bressler wrote: When I do the zfs send | ssh zfs recv part, the file system (folder) is getting created, but none of the data that I have in my snapshot is sent. I can browse on the source machine to view the snapshot data pool/.zfs/snapshot/snap-name and I see the
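A minimal end-to-end sketch with hypothetical names; the received data only becomes visible on the target once the receive completes:

  # zfs snapshot pool/fs@snap-name
  # zfs send pool/fs@snap-name | ssh desthost zfs receive destpool/fs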

Re: [zfs-discuss] Re: Mac OS X Leopard to use ZFS

2007-06-12 Thread Mark J Musante
On Mon, 11 Jun 2007, Rick Mann wrote: ZFS Readonly implemntation is loaded! Is that a copy-n-paste error, or is that a typo in the actual output? Regards, markm

Re: [zfs-discuss] drive displayed multiple times

2007-06-13 Thread Mark J Musante
On Tue, 12 Jun 2007, Tim Cook wrote: This pool should have 7 drives total, which it does, but for some reason c4d0 is displayed twice. Once as online (which it is), and once as unavail (which it is not). What's the name of the 7th drive? Did you take all the drives from the old system and

Re: [zfs-discuss] ZFS version 5 to version 6 fails to import or upgrade

2007-06-25 Thread Mark J Musante
On Tue, 19 Jun 2007, John Brewer wrote: bash-3.00# zpool import pool: zones id: 4567711835620380868 state: ONLINE status: The pool is formatted using an older on-disk version. action: The pool can be imported using its name or numeric identifier, though some features will
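If the plan is to keep the pool on this system and use the newer on-disk features, the usual sequence is (pool name taken from the preview above):

  # zpool import zones
  # zpool upgrade zones

'zpool upgrade' with no arguments just reports pool versions without changing anything, which is a safe first step. Note that upgrading is one-way: an upgraded pool cannot be imported on older releases.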

Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Mark J Musante
On Wed, 27 Jun 2007, Jürgen Keil wrote: Yep, I just tried it, and it refuses to zpool import the newer pool, telling me about the incompatible version. So I guess the pool format isn't the correct explanation for Dick Davies' (number9) problem. Have you tried creating the pool on

Re: [zfs-discuss] zfs no dataset available

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: NAME STATE READ WRITE CKSUM pool UNKNOWN 0 0 0 c0d0s5 UNKNOWN 0 0 0 c0d0s6 UNKNOWN 0 0 0 c0d0s4 UNKNOWN 0 0 0

Re: [zfs-discuss] zfs no dataset available

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: zpool import pool (my pool is named 'pool') returns cannot import 'pool': no such pool available What does 'zpool import' by itself show you? It should give you a list of available pools to import. Regards, markm

Re: [zfs-discuss] zfs no dataset available

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: zpool list it shows my pool with health UNKNOWN That means it's already imported. What's the output of 'zpool status'? Regards, markm

Re: [zfs-discuss] zfs no dataset available

2007-07-16 Thread Mark J Musante
On Mon, 16 Jul 2007, Kwang-Hyun Baek wrote: Is there any way to fix this? I actually tried to destroy the pool and try to create a new one, but it doesn't let me. Whenever I try, I get the following error: [EMAIL PROTECTED]:/var/crash# zpool create -f pool c0d0s5 internal error: No such

Re: [zfs-discuss] zfs no dataset available

2007-07-17 Thread Mark J Musante
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote: # uname -a SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc === What's more interesting is that ZFS version shows that it's 8. Does it even exist? Yes, 8 was created to support

Re: [zfs-discuss] zpool create -f not applicable to hot spares

2007-09-17 Thread Mark J Musante
On Mon, 17 Sep 2007, Robert Milkowski wrote: If you do 'zpool create -f test A B C spare D E' and D or E contains UFS filesystem then despite of -f zpool command will complain that there is UFS file system on D. This was fixed recently in build 73. See CR 6573276. Regards, markm

Re: [zfs-discuss] zfs chattiness at boot time

2007-09-24 Thread Mark J Musante
On Mon, 24 Sep 2007, Michael Schuster wrote: I recently started seeing zfs chattiness at boot time: reading zfs config and something like mounting zfs filesystems (n/n). This was added recently because ZFS can take a while to mount large configs. Consoles would appear to freeze after the

Re: [zfs-discuss] zfs chattiness at boot time

2007-09-24 Thread Mark J Musante
On Mon, 24 Sep 2007, Michael Schuster wrote: I'm also quite prepared to see a running tally(?) after an initial timeout (your minute) has gone by and we haven't finished ... but I guess we'd also have to make sure that the output generated isn't messed up by other output to the console that's

Re: [zfs-discuss] zfs boot issue, changing device id

2007-10-09 Thread Mark J Musante
On Mon, 8 Oct 2007, Kugutsumen wrote: I just tried.. mount -o rw,remount / zpool import -f tank mount -F zfs tank/rootfs /a zpool status ls -l /dev/dsk/c1t0d0s0 # /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a csh setenv TERM vt100 vi /a/boot/solaris/bootenv.rc #

Re: [zfs-discuss] Limiting the power of zfs destroy

2007-10-23 Thread Mark J Musante
On Tue, 23 Oct 2007, A Darren Dunham wrote: On Tue, Oct 23, 2007 at 09:55:58AM -0700, Scott Laird wrote: I'm writing a couple scripts to automate backups and snapshots, and I'm finding myself cringing every time I call 'zfs destroy' to get rid of a snapshot, because a small typo could take
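One common safety net for such scripts (not from the thread, just a sketch) is to refuse to pass anything that is not a snapshot name to 'zfs destroy':

  target="$1"
  case "$target" in
      *@*) zfs destroy "$target" ;;
      *)   echo "refusing to destroy non-snapshot: $target" >&2; exit 1 ;;
  esac

Since snapshot names always contain an '@', a typo that drops the snapshot part can no longer take out a whole filesystem.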

Re: [zfs-discuss] zpool question

2007-10-30 Thread Mark J Musante
On Mon, 29 Oct 2007, Krzys wrote: everything is great but I've made a mistake and I would like to remove emcpower2a from my pool and I cannot do that... Well, the mistake that I made is that I did not format my device correctly, so instead of adding 125gig I added 128meg. You can't remove it

Re: [zfs-discuss] internal error: Bad file number

2007-11-15 Thread Mark J Musante
On Thu, 15 Nov 2007, Manoj Nayak wrote: I am getting the following error message when I run any zfs command. I have attached the script I use to create the ramdisk image for Thumper. # zfs volinit internal error: Bad file number Abort - core dumped This sounds as if you may have somehow lost the

Re: [zfs-discuss] zpool question

2007-11-16 Thread Mark J Musante
On Thu, 15 Nov 2007, Brian Lionberger wrote: The question is, should I create one zpool or two to hold /export/home and /export/backup? Currently I have one pool for /export/home and one pool for /export/backup. Should it be one pool for both? Would this be better, and why? One thing to

Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Mark J Musante
I can think of two things to check: First, is there a 'bootfs' line in your grub entry? I didn't see it in the original email; not sure if it was left out or it simply isn't present. If it's not present, ensure the 'bootfs' property is set on your pool. Secondly, ensure that there's a
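Checking and setting the property looks like this (the BE dataset name is hypothetical):

  # zpool get bootfs rpool
  # zpool set bootfs=rpool/ROOT/myBE rpool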

Re: [zfs-discuss] Question - does a snapshot of root include child filesystems?

2007-12-19 Thread Mark J Musante
On Wed, 19 Dec 2007, Ross wrote: The title says it all really, we'll be creating one big zpool here, with many sub filesystems for various systems. Am I right in thinking that we can use snapshots of the root filesystem to take a complete backup of everything? I believe what you're

Re: [zfs-discuss] zfs pool does remount automatically

2008-01-07 Thread Mark J Musante
On Mon, 7 Jan 2008, Andre Lue wrote: I usually have to do a zpool import -f pool to get it back. What do you mean by 'usually'? After the import, what's the output of 'zpool status'? During reboot, are there any relevant messages in the console? Regards, markm

Re: [zfs-discuss] zpool remove problem

2008-01-11 Thread Mark J Musante
On Fri, 11 Jan 2008, Wyllys Ingersoll wrote: I want to remove c0d0p4: # zpool remove bigpool c0d0p4 cannot remove c0d0p4: only inactive hot spares or cache devices can be removed Use replace, not remove. Regards, markm
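If the intent was to swap c0d0p4 for a different device rather than shrink the pool, the supported path is (replacement device name hypothetical):

  # zpool replace bigpool c0d0p4 c0d0p5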

Re: [zfs-discuss] zpool remove problem

2008-01-15 Thread Mark J Musante
On Mon, 14 Jan 2008, Wyllys Ingersoll wrote: That doesn't work either. The zpool replace command didn't work? You wouldn't happen to have a copy of the errors you received, would you? I'd like to see that. Regards, markm

Re: [zfs-discuss] zfs pool unavailable!

2008-02-29 Thread Mark J Musante
On Fri, 29 Feb 2008, Justin Vassallo wrote: # zpool status pool: external state: FAULTED status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool online'.

Re: [zfs-discuss] 'zfs create' hanging

2008-03-07 Thread Mark J Musante
On Fri, 7 Mar 2008, Paul Raines wrote: zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/itgroup_001 and this is still running now. truss on the process shows nothing. I don't know how to debug it beyond that. I thought I would ask for any info from this list before I just

Re: [zfs-discuss] Weird ZFS pool problem

2008-03-21 Thread Mark J Musante
Drew Schatt wrote: Can anyone explain how the following came about, and/or how to get rid of it? What does zdb show? Also, what do the partitions look like for c5t0d0? Did something get overlapped? Regards, markm

Re: [zfs-discuss] ZFS mountpoints

2008-03-24 Thread Mark J Musante
On Sun, 23 Mar 2008, msl wrote: I have some zfs filesystems with two // at the beginning, like //dir1/dir2/dir3, and some other filesystems that are correct with just one / (/dir1/dir2/). The question is: can I set the mountpoint correctly? You can set the mountpoint at any time with 'zfs set
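The property can be corrected in place, e.g. for a dataset whose mountpoint shows a doubled slash (names hypothetical):

  # zfs set mountpoint=/dir1/dir2/dir3 pool/dir1/dir2/dir3

Child datasets that inherit their mountpoint will follow the corrected parent automatically.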

Re: [zfs-discuss] zfs raidz2 configuration mistake

2008-05-21 Thread Mark J Musante
On Wed, 21 May 2008, Justin Vassallo wrote: zpool add -f external c12t0d0p0 zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe that's because the fs was online) No, it had nothing to do with the pool being online. It was because a single disk was being added to a

Re: [zfs-discuss] zfs raidz2 configuration mistake

2008-05-21 Thread Mark J Musante
On Wed, 21 May 2008, Claus Guttesen wrote: Isn't one supposed to be able to add more disks to an existing raidz(2) pool and have the data spread across all disks in the pool automagically? Alas, that is not yet possible. See Adam's blog for details:

Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Mark J Musante
On Tue, 3 Jun 2008, Gordon Ross wrote: I'd really like to know: What are the conditions under which the installer will offer ZFS root? Only the text-based installer will offer it - not the GUI. Regards, markm

Re: [zfs-discuss] ZFS root compressed ?

2008-06-05 Thread Mark J Musante
On Jun 5, 2008, at 4:43 PM, Bill Sommerfeld wrote: after install, I'd think you could play games with zfs send | zfs receive on an inactive BE to rewrite everything with the desired attributes (more important for copies than compression). I blogged about something similar about a year ago:
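A sketch of that rewrite trick for an inactive BE, with hypothetical dataset names. The property has to be in effect while the receive writes the blocks, so it is set on the parent that the new dataset will inherit from:

  # zfs set compression=on rpool/ROOT
  # zfs snapshot rpool/ROOT/altBE@rewrite
  # zfs send rpool/ROOT/altBE@rewrite | zfs receive rpool/ROOT/altBE_new

The received copy is written with compression (or copies) applied, which the original blocks never had.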

Re: [zfs-discuss] Live Upgrade to snv_90 on a system with existing ZFS boot

2008-06-05 Thread Mark J Musante
On Thu, 5 Jun 2008, Albert Lee wrote: It doesn't seem to change the bootfs property on zpl or GRUB's menu.lst on the zpool, so we do it manually: It *does* actually update bootfs and menu.lst, but not until after the init 6 is run. Regards, markm

Re: [zfs-discuss] zfs mirror broken?

2008-06-24 Thread Mark J Musante
On Tue, 24 Jun 2008, Justin Vassallo wrote: # zfs list NAME USED AVAIL REFER MOUNTPOINT external 449G 427G 27.4K /external external/backup 447G 427G 374G /external/backup # zoneadm -z anzan boot could not verify fs /backup: could not access

Re: [zfs-discuss] 'zfs list' output showing incorrect mountpoint after boot -Z

2008-06-26 Thread Mark J Musante
On Jun 26, 2008, at 8:11 PM, Sumit Gupta wrote: [EMAIL PROTECTED] is the snapshot of the original installation on snv_92. snv_92.backup is the clone. You can see that the / is mounted on snv_92.backup but in zfs list output it still shows that '/' is mounted on snv_92. It's showing

Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-27 Thread Mark J Musante
On Fri, 27 Jun 2008, wan_jm wrote: the procedure is as follows: 1. mkdir /tank 2. touch /tank/a 3. zpool create tank c0d0p3 this command gives the following error message: cannot mount '/tank': directory is not empty; 4. reboot. then the OS can only be logged into from the console. Is this a bug?

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mark J Musante
On Thu, 10 Jul 2008, Mark Phalan wrote: I find this annoying as well. Another way that would help (but is fairly orthogonal to your suggestion) would be to write a completion module for zsh/bash/whatever that could tab-complete options to the z* commands including zfs filesystems. You

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mark J Musante
On Thu, 10 Jul 2008, Tim Foster wrote: Mark Musante (famous for recently beating the crap out of lu) Heh. Although at this point it's hard to tell who's the beat-er and who's the beat-ee... Regards, markm

Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-22 Thread Mark J Musante
On Tue, 22 Jul 2008, Rainer Orth wrote: I just wanted to attach a second mirror to a ZFS root pool on an Ultra 1/170E running snv_93. I've followed the workarounds for CR 6680633 and 6680633 from the ZFS Admin Guide, but booting from the newly attached mirror fails like so: I think you're

Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-23 Thread Mark J Musante
On Wed, 23 Jul 2008, [EMAIL PROTECTED] wrote: Rainer, Sorry for your trouble. I'm updating the installboot example in the ZFS Admin Guide with the -F zfs syntax now. We'll fix the installboot man page as well. Mark, I don't have an x86 system to test right now, can you send me the

Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-25 Thread Mark J Musante
On Fri, 25 Jul 2008, Alan Burlison wrote: Enda O'Connor wrote: probably 6722767 lucreate did not add new BE to menu.lst ( or grub ) Yeah, I found that bug, added a note to the CR, and bumped the priority. Unfortunately there's no analysis or workaround in the bug, so I've no idea what the real problem

Re: [zfs-discuss] liveupgrade ufs root - zfs ?

2008-08-28 Thread Mark J Musante
On Thu, 28 Aug 2008, Paul Floyd wrote: Does anyone have a pointer to a howto for doing a liveupgrade such that I can convert the SXCE 94 UFS BE to ZFS (and liveupgrade to SXCE 96 while I'm at it) if this is possible? Searching with google shows a lot of blogs that describe the early

Re: [zfs-discuss] RFE: allow zfs to interpret '.' as da datatset?

2008-09-02 Thread Mark J Musante
On Mon, 1 Sep 2008, Gavin Maltby wrote: I'd like to be able to utter cmdlines such as $ zfs set readonly=on . $ zfs snapshot [EMAIL PROTECTED] with '.' interpreted to mean the dataset corresponding to the current working directory. Sounds like it would be a useful RFE. This would

Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Mark J. Musante
On 3 Sep 2008, at 05:20, F. Wessels [EMAIL PROTECTED] wrote: Hi, can anybody describe the correct procedure to replace a disk (in a working OK state) with another disk without degrading my pool? This command ought to do the trick: zpool replace pool old-disk new-disk The type of pool

Re: [zfs-discuss] How to release/destroy ZFS volume dedicated to dump ?

2008-09-08 Thread Mark J Musante
On Mon, 8 Sep 2008, jan damborsky wrote: Is there any way to release the dump ZFS volume after it was activated by the dumpadm(1M) command? Try 'dumpadm -d swap' to point the dump to the swap device. Regards, markm

Re: [zfs-discuss] Restore a ZFS Root Mirror

2008-09-13 Thread Mark J. Musante
On 13 Sep 2008, at 08:33, Guido [EMAIL PROTECTED] wrote: Hi all, after installing OpenSolaris 2008.05 in VirtualBox I've created a ZFS root mirror with: zpool attach rpool Disk B and it works like a charm. Now I tried to restore the rpool from the worst case scenario: the disk the

Re: [zfs-discuss] How to remove any references to a zpool that's gone

2008-09-18 Thread Mark J Musante
Hi Glenn, Where is it hanging? Could you provide a stack trace? It's possible that it's just a bug and not a configuration issue. On 18 Sep, 2008, at 16.12, Glenn Lagasse wrote: I had a disk that contained a zpool. For reasons that we won't go in to, that disk had zero's written all

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Mark J Musante
On Sat, 27 Sep 2008, Marcin Woźniak wrote: After a successful upgrade from snv_95 to snv_98 (ufs boot to zfs boot) and luactivate of the new zfs BE, I am not able to ludelete the old ufs BE. The problem is, I think, that zfs boot is /rpool/boot/grub. This is due to a bug in the /usr/lib/lu/lulib

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Mark J Musante
On Tue, 30 Sep 2008, Ian Collins wrote: Mark J Musante wrote: On Sat, 27 Sep 2008, Marcin Woźniak wrote: After a successful upgrade from snv_95 to snv_98 (ufs boot to zfs boot) and luactivate of the new zfs BE, I am not able to ludelete the old ufs BE. The problem is, I think, that zfs boot

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Mark J Musante
On Tue, 30 Sep 2008, Ram Sharma wrote: Hi, can anyone please tell me what is the maximum number of files that can be there in one folder in Solaris with the ZFS file system? By folder, I assume you mean directory and not, say, pool. In any case, the 'limit' is 2^48, but that's effectively no

Re: [zfs-discuss] zpool CKSUM errors since drive replace

2008-10-14 Thread Mark J Musante
So this is where I stand. I'd like to ask zfs-discuss if they've seen any ZIL/Replay style bugs associated with u3/u5 x86? Again, I'm confident in my hardware, and /var/adm/messages is showing no warnings/errors. Are you absolutely sure the hardware is OK? Is there another disk you can

Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Mark J Musante
On Thu, 6 Nov 2008, Chris Ridd wrote: I probably need to downgrade a machine from 10u5 to 10u3. The zpool on u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools. The only difference between a v4 pool and a v3 pool is that v4 added history ('zpool history pool'). I would expect a v3

Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Mark J Musante
Hi Michael, Did you try doing an export/import of tank? On Thu, 6 Nov 2008, Michael Schuster wrote: all, I've gotten myself into a fix I don't know how to resolve (and I can't reboot the machine, it's a build server we share): $ zfs list -r tank/schuster NAME

Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Mark J Musante
Just to try this out, I created a 9g zpool and a 5g volume in that zpool. Then I used dd to write to every block of the volume. Taking a snapshot of the volume at that point attempts to reserve an additional 5g, which fails. With 1g volumes we see it in action: bash-3.00# zpool create tank
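A sketch of the same experiment at 1g scale (volume and pool names hypothetical):

  # zfs create -V 1g tank/vol
  # zfs snapshot tank/vol@empty                               (succeeds: nothing is written yet)
  # dd if=/dev/urandom of=/dev/zvol/rdsk/tank/vol bs=1024k
  # zfs snapshot tank/vol@full

The second snapshot needs the pool to reserve another 1g to cover future overwrites of the volume, and fails if that space is not available.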

Re: [zfs-discuss] slice overlap error when creating pools

2008-11-17 Thread Mark J Musante
On Mon, 17 Nov 2008, Vincent Boisard wrote: #zpool create pool1 c1d1s0 invalid vdev specification use '-f' to override the following errors: /dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2 That's CR 6419310. Regards, markm

Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, Tim Haley wrote: ludelete doesn't handle this any better than beadm destroy does, it fails for the same reasons. lucreate does not promote the clone it creates when a new BE is spawned, either. Live upgrade's luactivate command is meant to promote the BE during init 6

Re: [zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, elaine ashton wrote: Thanks! That'd be great as I have an snv_79 system that doesn't exhibit this behaviour so I'll assume that this has been added in sometime between that release and 101a? According to the CR, the putback went into build 66. external link:

Re: [zfs-discuss] ZFS vdev labels and EFI disk labels

2008-12-09 Thread Mark J Musante
On Tue, 9 Dec 2008, Elaine Ashton wrote: If I fdisk 2 disks to have EFI partitions and label them with the appropriate partition beginning at sector 34 and then give them to ZFS for a pool, ZFS would appear to change the beginning sector to 256. Right. This is done deliberately so that we

Re: [zfs-discuss] raidz with 5 disks

2008-12-19 Thread Mark J Musante
The best you can do right now is mirroring. During the install, choose more than one hard drive and zfs will create a mirror configuration. Support for raidz and/or striping is a future project. On Fri, 19 Dec 2008, iman habibi wrote: Hello All, I'm new to the Solaris 10 ZFS structure. My

Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Mark J Musante
Hi Amy, This is a known problem with ZFS and live upgrade. I believe the docs for s10u6 discourage the config you show here. A patch should be ready some time next month with a fix for this. On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote: I've installed an s10u6 machine with no UFS

Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Mark J Musante
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote: mmusante This is a known problem with ZFS and live upgrade. I believe the mmusante docs for s10u6 discourage the config you show here. A patch should mmusante be ready some time next month with a fix for this. Do you happen to have a bugid

Re: [zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Mark J Musante
On Thu, 22 Jan 2009, Al Slater wrote: Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not supported This line is coming from svm, which leads me to believe that the zfs boot blocks were not properly installed by live upgrade. You can try doing this by hand, with the
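On SPARC (the v880 in this thread) the zfs boot block can be installed by hand with installboot; the device name below is hypothetical and should be the slice holding the root pool:

  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0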

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-28 Thread Mark J Musante
On Wed, 28 Jan 2009, Richard Elling wrote: Orvar Korvar wrote: I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar vein? Would it be easy to do? Yes. To be specific, you use the 'cache' argument to zpool, as in: zpool create pool ... cache cache-device
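For an existing pool the cache device is added rather than specified at create time (device name hypothetical):

  # zpool add tank cache c6t0d0

The L2ARC warms up over time, so the read benefit appears gradually after the device is added.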

Re: [zfs-discuss] set mountpoint but don't mount?

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Frank Cusack wrote: so, is there a way to tell zfs not to perform the mounts for data2? or another way i can replicate the pool on the same host, without exporting the original pool? There is not a way to do that currently, but I know it's coming down the road.

Re: [zfs-discuss] RFE: parsable iostat and zpool layout

2009-01-30 Thread Mark J Musante
Hi Pål, CR 6420274 covers the -p part of your question. As far as kstats go, we only have them in the arc and the vdev read-ahead cache. Regards, markm

Re: [zfs-discuss] Hang on zfs import - build 107

2009-01-30 Thread Mark J Musante
On Fri, 30 Jan 2009, Ed Kaczmarek wrote: And/or step me thru the required mdb/kdb/whatever it's called stack trace dump command sequence after booting with -kd Dan Mick's got a good guide on his blog: http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with Regards, markm

Re: [zfs-discuss] how to set mountpoint to default?

2009-01-31 Thread Mark J Musante
To set the mountpoint back to default, use 'zfs inherit mountpoint dataset'

Re: [zfs-discuss] Cannot Mirror RPOOL, Can't Label Disk to SMI

2009-02-03 Thread Mark J Musante
Handojo wrote: hando...@opensolaris:~# zpool add rpool c4d0 Two problems: first, the command needed is 'zpool attach', because you're making a mirror. 'zpool add' is for extending stripes, and currently stripes are not supported as root pools. The second problem is that when the drive is

Re: [zfs-discuss] ZFS vdev_cache

2009-02-13 Thread Mark J Musante
On Fri, 13 Feb 2009, Tony Marshall wrote: How would I obtain the current setting for the vdev_cache from a production system? We are looking at trying to tune ZFS for better performance with respect to oracle databases; however, before we start changing settings via the /etc/system file we
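One common way to read the live values is with mdb on the running kernel (the tunable names below are from the OpenSolaris vdev cache code of that era; verify them against your build):

  # echo 'zfs_vdev_cache_size/D' | mdb -k
  # echo 'zfs_vdev_cache_max/D' | mdb -k

The same names can then be set in /etc/system with lines such as 'set zfs:zfs_vdev_cache_size=...' once new values have been decided on.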

Re: [zfs-discuss] large file copy bug?

2009-03-05 Thread Mark J Musante
On Thu, 5 Mar 2009, Blake wrote: I had a 2008.11 machine crash while moving a 700gb file from one machine to another using cp. I looked for an existing bug for this, but found nothing. Has anyone else seen behavior like this? I wanted to check before filing a bug. Have you got a copy of

Re: [zfs-discuss] ZFS snapshot successfully but zfs list -r does not list the snapshot

2009-03-06 Thread Mark J Musante
Hi Steven, Try doing 'zfs list -t all'. This is a change that went in late last year to list only datasets unless snapshots were explicitly requested. On Fri, 6 Mar 2009, Steven Sim wrote: Gurus; I am using OpenSolaris 2008.11 snv_101b_rc2 X86 Prior to this I was using SXCE build 91

Re: [zfs-discuss] large file copy bug?

2009-03-06 Thread Mark J Musante
On Fri, 6 Mar 2009, Blake wrote: I have savecore enabled, but it doesn't look like the machine is dumping core as it should - that is, I don't think it's a panic - I suspect interrupt handling. Then when you say you had a machine crash, what did you mean? Did you look in /var/crash/* to see

Re: [zfs-discuss] large file copy bug?

2009-03-06 Thread Mark J Musante
On Fri, 6 Mar 2009, Blake wrote: I have savecore enabled, but nothing in /var/crash: r...@filer:~# savecore -v savecore: dump already processed r...@filer:~# ls /var/crash/filer/ r...@filer:~# OK, just to ask the dumb questions: is dumpadm configured for /var/crash/filer? Is the dump zvol

Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante
On Tue, 17 Mar 2009, Neal Pollack wrote: Can anyone share some instructions for setting up the rpool mirror of the boot disks during the Solaris Nevada (SXCE) install? You'll need to use the text-based installer, and in there you choose the two bootable disks instead of just one.

Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante
On 17 Mar, 2009, at 16.21, Bryan Allen wrote: Then mirror the VTOC from the first (zfsroot) disk to the second: # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2 # zpool attach -f rpool c1t0d0s0 c1t1d0s0 # zpool status -v And then you'll still need to run installgrub to put
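The installgrub step uses the stage files shipped in /boot/grub and targets the newly attached disk's slice (device name hypothetical):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0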

Re: [zfs-discuss] RFE: creating multiple clones in one zfs(1) call and one txg

2009-03-27 Thread Mark J Musante
On Fri, 27 Mar 2009, Alec Muffett wrote: The inability to create more than 1 clone at a time (ie: in separate TXGs) is something which has hampered me (and several projects on which I have worked) for some years, now. Hi Alec, Does CR 6475257 cover what you're looking for? Regards, markm

Re: [zfs-discuss] vdev_disk_io_start() sending NULL pointer in ldi_ioctl()

2009-04-10 Thread Mark J Musante
On Thu, 9 Apr 2009, shyamali.chakrava...@sun.com wrote: Hi All, I have corefile where we see NULL pointer de-reference PANIC as we have sent (deliberately) NULL pointer for return value. vdev_disk_io_start() error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,

Re: [zfs-discuss] ZIL SSD performance testing... -IOzone works great, others not so great

2009-04-10 Thread Mark J Musante
On Fri, 10 Apr 2009, Patrick Skerrett wrote: degradation) when these write bursts come in, and if I could buffer them even for 60 seconds, it would make everything much smoother. ZFS already batches up writes into a transaction group, which currently happens every 30 seconds. Have you

Re: [zfs-discuss] Destroying a zfs dataset

2009-04-17 Thread Mark J Musante
On Fri, 17 Apr 2009, Mark J Musante wrote: The dependency is based on the names. I should clarify what I mean by that. There are actually two dependencies here: one is based on dataset names, and one is based on snapshots and clones. If there are two datasets, pool/foo and pool/foo/bar

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Mark J Musante
On Thu, 7 May 2009, Mike Gerdts wrote: Perhaps you have changed the configuration of the array since the last reconfiguration boot. If you run devfsadm and then run format, does it see more disks? Another thing to check is whether the controller has a jbod mode as opposed to passthrough.

Re: [zfs-discuss] replicating a root pool

2009-05-21 Thread Mark J Musante
On Thu, 21 May 2009, Ian Collins wrote: I'm trying to use zfs send/receive to replicate the root pool of a system and I can't think of a way to stop the received copy attempting to mount the filesystem over the root of the destination pool. If you're using build 107 or later, there's a
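If the feature being referred to is the receive-side "do not mount" flag (an assumption here, since the preview is cut off), the replication would look something like:

  # zfs send -R rpool@migrate | zfs receive -u -d backuppool

The -u flag tells zfs receive to skip mounting the received filesystems, so nothing lands on top of the destination pool's root.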

Re: [zfs-discuss] LU snv_93 - snv_101a (ZFS - ZFS )

2009-05-22 Thread Mark J Musante
On Thu, 21 May 2009, Nandini Mocherla wrote: Then I booted into failsafe mode of 101a and then tried to run the following command as given in luactivate output. Yeah, that's a known bug in the luactivate output. CR 6722845 # mount -F zfs /dev/dsk/c1t2d0s0 /mnt cannot open
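The device path in that output is the bug; mounting a ZFS root dataset in failsafe mode uses the dataset name instead, e.g. (BE dataset name hypothetical):

  # mount -F zfs rpool/ROOT/snv_101a /mnt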
