[zfs-discuss] Re: zfs boot error recovery

2007-06-01 Thread Jakob Praher
Hi Will, thanks for your answer. Will Murnane wrote: On 5/31/07, Jakob Praher [EMAIL PROTECTED] wrote: c) si 3224 related question: is it possible to simply hot-swap the disk (I have the disks in special hot-swappable units, but have no experience in hot-swapping under Solaris, such that I

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yes, but my goal is to replace the existing disk, which is an internal 72GB disk, with a SAN storage disk which is 100GB in size... As long as I will be able to detach the old one then it's going to be great... otherwise I will be stuck with one internal disk and one SAN disk, which I do not like that much to

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Nevertheless I get the following error:
bash-3.00# zpool attach mypool emcpower0a
missing new_device specification
usage: attach [-f] pool device new_device
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
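The "missing new_device specification" error comes from giving zpool attach only two arguments; it needs the pool, an existing member device, and the new device. A minimal dry-run sketch of the correct form, using hypothetical device names (not taken from a real pool):

```shell
#!/bin/sh
# zpool attach syntax: zpool attach [-f] pool device new_device
# Device names below are hypothetical examples.
pool=mypool
existing=c1t2d0      # a disk already in the pool (assumed)
new=emcpower0a       # the SAN LUN to mirror onto
# Dry run: build and print the command rather than executing it.
cmd="zpool attach $pool $existing $new"
echo "$cmd"
```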

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Will Murnane
On 6/1/07, Krzys [EMAIL PROTECTED] wrote:
bash-3.00# zpool list
NAME      SIZE    USED   AVAIL   CAP  HEALTH  ALTROOT
mypool     68G   53.1G   14.9G   78%  ONLINE  -
mypool2   123M   83.5K    123M    0%  ONLINE  -
Are you sure you've

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
OK, I think I figured out what the problem is. What zpool does for that EMC PowerPath device is take partition 0 from the disk and try to attach it to my pool, so when I added emcpower0a I got the following:
bash-3.00# zpool list
NAME      SIZE    USED   AVAIL   CAP  HEALTH

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yeah, it does something funky that I did not expect: zpool seems to be taking slice 0 of that EMC LUN rather than the whole device... so when I created that LUN, I formatted the disk and it looked like this:
format> verify
Primary label contents:
Volume name =
ascii name =

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
OK, now it seems to be doing what I wanted:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:
        NAME        STATE   READ WRITE CKSUM
        mypool      ONLINE     0     0     0

[zfs-discuss] Re: current state of play with ZFS boot and install?

2007-06-01 Thread Carl Brewer
Thank you Lori, that's fantastic.

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, Krzys wrote:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for

[zfs-discuss] vxfs and zfs

2007-06-01 Thread benita ulisano
Hi, I have been given the task of researching converting our vxfs/vm file systems and volumes to zfs. The volumes are attached to an EMC Clariion running raid-5 and raid 1_0. I have no test machine, just a migration machine that currently hosts other things. Is it possible to set up a zfs file

Re: [zfs-discuss] zfs migration

2007-06-01 Thread Richard Elling
zpool replace == zpool attach + zpool detach It is not a good practice to detach and then attach as you are vulnerable after the detach and before the attach completes. It is a good practice to attach and then detach. There is no practical limit to the number of sides of a mirror in ZFS. --
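The safe ordering described above (attach first, resilver, then detach) can be sketched as a dry run; the device names are hypothetical and the commands are printed rather than executed:

```shell
#!/bin/sh
# Migration per the advice above: attach the new device first, wait for
# the resilver to finish, and only then detach the old one -- the pool
# is never down to a single unresilvered copy. Hypothetical names.
pool=mypool
old=c1t2d0
new=emcpower0a
cmd_attach="zpool attach $pool $old $new"   # step 1: form a mirror
cmd_detach="zpool detach $pool $old"        # step 3: after resilver completes
echo "$cmd_attach"
echo "zpool status $pool"                   # step 2: watch resilver progress
echo "$cmd_detach"
```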

[zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Jürgen Keil
I wrote: Has anyone else noticed a significant ZFS performance deterioration when running recent OpenSolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full OpenSolaris release build in ~4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb-compressed

Re: [zfs-discuss] vxfs and zfs

2007-06-01 Thread Michael Schuster
benita ulisano wrote: Hi, I have been given the task to research converting our vxfs/vm file systems and volumes to zfs. The volumes are attached to an EMC Clariion running raid-5, and raid 1_0. I have no test machine, just a migration machine that currently hosts other things. It is possible

Re: [zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Rob Logan
Patching zfs_prefetch_disable = 1 has helped. It's my belief this mainly aids scanning metadata. My testing with rsync and yours with find (and seen with du; zpool iostat -v 1) bears this out. Mainly tracked in bug 6437054: vdev_cache: wise up or die
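For reference, a tunable like this is usually set in /etc/system; this is a sketch of the fragment implied by the thread, and the tunable name should be verified against your build before applying:

```
* /etc/system fragment: disable ZFS prefetch (tunable name as quoted above)
set zfs:zfs_prefetch_disable = 1
```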

[zfs-discuss] Re: vxfs and zfs

2007-06-01 Thread benita ulisano
Hi, I would like to clarify one point for the forum experts about what I would like to do, after it was brought to my attention that my posting might not describe a true picture of what I am trying to accomplish. All I want to do is set up a separate zfs file system running Oracle on the machine

Re: [zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]

2007-06-01 Thread Richard Elling
Frank Cusack wrote: On May 31, 2007 1:59:04 PM -0700 Richard Elling [EMAIL PROTECTED] wrote: CF cards aren't generally very fast, so the solid state disk vendors are putting them into hard disk form factors with SAS/SATA interfaces. These If CF cards aren't fast, how will putting them into a

[zfs-discuss] [Fwd: zone mount points are busy following reboot of global zone 65505676]

2007-06-01 Thread Claire . Grandalski
Original Message
Subject: zone mount points are busy following reboot of global zone 65505676
Date: Fri, 01 Jun 2007 12:33:57 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
I need help on this customer's issue. I appreciate any help that

Re: [zfs-discuss] Re: ZFS consistency guarantee

2007-06-01 Thread Darren Dunham
If I put the database in hot-backup mode, then I will have to ensure that the filesystem is consistent as well. So, you are saying that taking a ZFS snapshot is the only method to guarantee consistency in the filesystem, since it flushes all the buffers to the filesystem, so it's consistent.

Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher
I managed to correct the problem by writing a script inspired by Chris Gerhard's blog that did a zfs send | zfs recv. Now that things are back up, I have a couple of lingering questions: 1) I noticed that the filesystem size information is not the same between the src and dst filesystem
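A send/receive copy of the kind described above follows this general shape; the dataset and snapshot names below are hypothetical, and the command is printed as a dry run rather than executed:

```shell
#!/bin/sh
# Sketch of a local zfs send | zfs recv copy. A snapshot is taken first,
# because send operates on snapshots, not live filesystems.
# Hypothetical dataset names.
src=tank/home          # source filesystem (assumed)
dst=backup/home        # destination filesystem (assumed)
snap=migrate1          # hypothetical snapshot name
cmd="zfs snapshot $src@$snap && zfs send $src@$snap | zfs recv -F $dst"
echo "$cmd"            # dry run
```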

Re[2]: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-06-01 Thread Robert Milkowski
Hello Richard, RE But I am curious as to why you believe 2x CF are necessary? RE I presume this is so that you can mirror. But the remaining memory RE in such systems is not mirrored. Comments and experiences are welcome. I was thinking about mirroring - it's not clear from the comment above

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread eric kustarz
2) Following Chris's advice to do more with snapshots, I played with his cron-triggered snapshot routine: http://blogs.sun.com/chrisg/entry/snapping_every_minute Now, after a couple of days, zpool history shows almost 100,000 lines of output (from all the snapshots and

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher
eric kustarz wrote: We specifically didn't allow the admin the ability to truncate/prune the log as then it becomes unreliable - ooops i made a mistake, i better clear the log and file the bug against zfs I understand - auditing means never getting to blame someone else :-) There are

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread Nicolas Williams
On Fri, Jun 01, 2007 at 02:09:55PM -0700, John Plocher wrote: eric kustarz wrote: We specifically didn't allow the admin the ability to truncate/prune the log as then it becomes unreliable - ooops i made a mistake, i better clear the log and file the bug against zfs I understand -

[zfs-discuss] SMART

2007-06-01 Thread J. David Beutel
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE SMART data? With the Predictive Self Healing feature, I assumed that Solaris would have at least some SMART support, but what I've googled so far has been discouraging.

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, John Plocher wrote: This seems especially true when there is closure on actions - the set of zfs snapshot foo/[EMAIL PROTECTED] zfs destroy foo/[EMAIL PROTECTED] commands is (except for debugging zfs itself) a noop Note that if you use the recursive
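The recursive form mentioned here, which records one zpool history line for a dataset and all its descendants instead of one per dataset, looks like this; dataset and snapshot names are hypothetical, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# zfs snapshot -r / zfs destroy -r operate on a dataset and all of its
# descendants in one operation. Hypothetical names; dry run via echo.
fs=tank/home
snap=nightly
cmd_snap="zfs snapshot -r $fs@$snap"
cmd_destroy="zfs destroy -r $fs@$snap"
echo "$cmd_snap"
echo "$cmd_destroy"
```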

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread eric kustarz
On Jun 1, 2007, at 2:09 PM, John Plocher wrote: eric kustarz wrote: We specifically didn't allow the admin the ability to truncate/ prune the log as then it becomes unreliable - ooops i made a mistake, i better clear the log and file the bug against zfs I understand - auditing means

Re: [zfs-discuss] SMART

2007-06-01 Thread Eric Schrock
See: http://blogs.sun.com/eschrock/entry/solaris_platform_integration_generic_disk Prior to the above work, we only monitored disks on Thumper (x4500) platforms. With these changes we monitor basic SMART data for SATA drives. Monitoring for SCSI drives will be here soon. The next step will be

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher
Mark J Musante wrote: Note that if you use the recursive snapshot and destroy, only one line is My problem (and it really is /not/ an important one) was that I had a cron job that every minute did:
min=`date +%d`
snap=$pool/[EMAIL PROTECTED]
zfs destroy $snap
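A per-minute rotation along the lines described might be sketched as below; note that `date +%M` yields the current minute (the quoted script used %d, the day of month). Pool and dataset names are hypothetical, and the zfs commands are echoed as a dry run:

```shell
#!/bin/sh
# Sketch of a per-minute snapshot rotation: destroy the hour-old snapshot
# for this minute slot, then take a fresh one. Hypothetical names.
pool=tank
min=$(date +%M)              # current minute, always two digits 00-59
snap="$pool/home@minute-$min"
echo "zfs destroy $snap"     # remove last hour's snapshot of this minute
echo "zfs snapshot $snap"
```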

Re: [zfs-discuss] SMART

2007-06-01 Thread J. David Beutel
Excellent! Thanks! I've gleaned the following from your blog. Is this correct? * A week ago you committed a change that will: ** get current SMART parameters and faults for SATA on x86 via a single function in a private library using SCSI emulation; ** decide whether they indicate any

[zfs-discuss] ZFS Send/RECV

2007-06-01 Thread Ben Bressler
I'm trying to test an install of ZFS to see if I can backup data from one machine to another. I'm using Solaris 5.10 on two VMware installs. When I do the zfs send | ssh zfs recv part, the file system (folder) is getting created, but none of the data that I have in my snapshot is sent. I can
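One thing worth checking in a setup like this is that the send side names a snapshot (not just the filesystem) and the receive side names a full dataset path. A hedged sketch with hypothetical host and dataset names, printed as a dry run:

```shell
#!/bin/sh
# Remote replication sketch: send a snapshot stream over ssh into a
# receiving dataset. Hypothetical names; dry run via echo.
src=tank/data@backup1      # snapshot to send (assumed name)
remote=backuphost          # hypothetical receiving host
dst=tank/recv/data         # hypothetical destination dataset
cmd="zfs send $src | ssh $remote zfs recv -F $dst"
echo "$cmd"
```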

Re: [zfs-discuss] SMART

2007-06-01 Thread Eric Schrock
On Fri, Jun 01, 2007 at 12:33:29PM -1000, J. David Beutel wrote: Excellent! Thanks! I've gleaned the following from your blog. Is this correct? * A week ago you committed a change that will: ** get current SMART parameters and faults for SATA on x86 via a single function in a private

Re: [zfs-discuss] SMART

2007-06-01 Thread Toby Thain
On 1-Jun-07, at 7:50 PM, Eric Schrock wrote: On Fri, Jun 01, 2007 at 12:33:29PM -1000, J. David Beutel wrote: Excellent! Thanks! I've gleaned the following from your blog. Is this correct? * A week ago you committed a change that will: ** get current SMART parameters and faults for SATA

[zfs-discuss] Re: shareiscsi is cool, but what about sharefc or sharescsi?

2007-06-01 Thread Richard L. Hamilton
I'd love to be able to serve zvols out as SCSI or FC targets. Are there any plans to add this to ZFS? That would be amazingly awesome. Can one use a spare SCSI or FC controller as if it were a target? Even if the hardware is capable, I don't see what you describe as a ZFS thing really; it

Re: [zfs-discuss] Re: shareiscsi is cool, but what about sharefc or sharescsi?

2007-06-01 Thread Jonathan Edwards
On Jun 1, 2007, at 18:37, Richard L. Hamilton wrote: Can one use a spare SCSI or FC controller as if it were a target? we'd need an FC or SCSI target mode driver in Solaris .. let's just say we used to have one, and leave it mysteriously there. smart idea though! --- .je

Re: [zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]

2007-06-01 Thread Frank Cusack
On June 1, 2007 9:44:23 AM -0700 Richard Elling [EMAIL PROTECTED] wrote: Frank Cusack wrote: On May 31, 2007 1:59:04 PM -0700 Richard Elling [EMAIL PROTECTED] wrote: CF cards aren't generally very fast, so the solid state disk vendors are putting them into hard disk form factors with SAS/SATA

Re: [zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]

2007-06-01 Thread Chris Csanady
On 6/1/07, Frank Cusack [EMAIL PROTECTED] wrote: On June 1, 2007 9:44:23 AM -0700 Richard Elling [EMAIL PROTECTED] wrote: [...] Semiconductor memories are accessed in parallel. Spinning disks are accessed serially. Let's take a look at a few examples and see what this looks like... Disk