coming within a month.
/Tomas
--
Enda O'Connor x19781 Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi
So the ship date is 19th October for Solaris 10 10/09 ( update 8 ).
Enda
Enda O'Connor wrote:
Hi
Yes Solaris 10/09 ( update 8 ) will contain
6501037 want user/group quotas on zfs
it should be out within a few weeks.
So if they have zpools already installed they can apply
141444-09/1414
Hi
This is 6884728 which is a regression from 6837400.
The workaround is as you have done: remove the lines from vfstab.
Enda
Brian wrote:
I am having a strange problem with liveupgrade of a ZFS boot environment. I found a similar discussion on zones-discuss, but this happens for me on installs wi
yes -
I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm
afraid to patch and use the new BE.
Hi
This will boot OK in my opinion; I'm not seeing any issues there.
Enda
Mark Horstman wrote:
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
#live-upgrade: updated boot environment
#device         device          mount           FS      fsck    mount   mount
Mark Horstman wrote:
Then why the warning on the lucreate? It hasn't done that in the past.
This is from the vfstab processing code in ludo.c; in your case it's not
causing any issue, but it shall be fixed.
Enda
Mark
On Oct 21, 2009, at 12:41 PM, "Enda O'Connor" wrote:
Hi T
On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor <mailto:enda.ocon...@sun.com>> wrote:
Mark Horstman wrote:
Then why the warning on the lucreate. It hasn't done that in the
past.
this is from the vfstab processing code in ludo.c, in your case not
So, will we need to move our data around (send/recv or whatever your
preferred method is) to take advantage of dedup? I was hoping the
blockpointer rewrite code would allow an admin to simply turn on dedup
and let ZFS process the pool, eliminating excess redundancy as it
went.
James Lever wrote:
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote:
But how can I "update" my current OpenSolaris (2009.06) or Solaris 10
(5/09) to use this?
Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris?
For OpenSolaris, you change your repository and switch to the
dick hoogendijk wrote:
OpenSolaris-b128a has zfs version 22 w/ deduplication.
Do I need to update older pools to take advantage of this dedup or can I
just create a new zfs filesystem with this option?
It's pool-wide, so a zpool upgrade is necessary, or else create a new pool.
cheers
Enda
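A minimal sketch of that check-and-upgrade step (pool name `tank` is a placeholder; note the upgrade is one-way, so older software can no longer import the pool afterwards):

```shell
# Show the on-disk version of each imported pool
zpool upgrade
# List what each version adds (dedup arrives with the version that
# introduces it; check the -v output on your build)
zpool upgrade -v
# Upgrade a specific pool to the latest version the running bits support
zpool upgrade tank
# Dedup itself is then enabled per dataset
zfs set dedup=on tank
```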
Hi
the live upgrade info doc
http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1
has all the relevant patches; if you are on the u6 KU or higher ( you are on
u8 ), then you can migrate straight to zfs, so there is no need to
upgrade to u8 ufs in order to move to u8 zfs, the u6 KU d
On 26/02/2010 14:03, Jesse Reynolds wrote:
Hello
I have an amd64 server running OpenSolaris 2009-06. In December I created one
container on this server named 'cpmail' with its own zfs dataset and it's been
running ever since. Until earlier this evening when the server did a kernel
panic and
On 29/09/2011 23:59, Rich Teer wrote:
Hi all,
Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?
TIA,
root@pstx2200a # zfs upgrade -v
The following filesystem versions are supported:
VER DESCRIPTION
--- -
Hi
Need more info here, what exactly is the root FS, ie zfs?
what kernel rev is current ( uname -a )
is there a specific patch that is being installed.
if so then Live Upgrade is the best bet, combined perhaps with the recommended
patch cluster.
apply latest rev of 119254 and 121430 ( SPARC ) or ( 121
On 15/02/2012 17:16, David Dyer-Bennet wrote:
While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).
I'm actually more concerned with software th
On 17/04/2012 16:40, Carsten John wrote:
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has bin fixed (according to Oracle support) sin
On 29/11/2012 14:51, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Enda o'Connor - Oracle Ireland -
Say I have an ldoms guest that is using zfs root pool that is mirrored
On 28/09/2010 10:20, Ketan wrote:
I have created a solaris9 zfs root flash archive for a sun4v environment which
I'm trying to use for upgrading a solaris10 u8 zfs root based server using live
upgrade.
one cannot use a zfs flash archive with luupgrade, that is with zfs root a
flash archive archives
Hi
so to be absolutely clear
in the same session, you ran an update, commit and select, and the
select returned an earlier value than the committed update?
Things like
ALTER SESSION set ISOLATION_LEVEL = SERIALIZABLE;
will cause a session to NOT see commits from other sessions, but in
Oracle
Hi
I'd certainly look at the sql being run, examine the explain plan and in
particular SQL_TRACE, TIMED_STATISTICS, and TKPROF, these will really
highlight issues.
see following for autotrace which can generate explain plan etc.
http://download.oracle.com/docs/cd/B10500_01/server.920/a96533/a
On 29/05/2011 19:55, BIll Palin wrote:
I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a
couple of them.
I want to migrate /, /var, /opt, /export/home and also want swap and /tmp. I
don't care about any of the others.
The first disk, and the one with the UFS fil
Boyd Adamson wrote:
> "Enda O'Connor ( Sun Micro Systems Ireland)" <[EMAIL PROTECTED]>
> writes:
> [..]
>> meant to add that on x86 the following should do the trick ( again I'm open
>> to correction )
>>
>> installgrub /boot/grub/stage1
Alan Burlison wrote:
> Lori Alt wrote:
>
>> It's hard to know what the "right" thing to do is from within
>> the installation software. Does the user want to preserve
>> as much of their current environment as possible? Or does
>> the user want to move toward the new "standard" configuration
>>
Ross wrote:
> Hey folks,
>
> I guess this is an odd question to be asking here, but I could do with some
> feedback from anybody who's actually using ZFS in anger.
>
> I'm about to go live with ZFS in our company on a new fileserver, but I have
> some real concerns about whether I can really tr
Malachi de Ælfweald wrote:
> I just tried that, but the installgrub keeps failing:
>
> [EMAIL PROTECTED]:~# zpool status
> pool: rpool
> state: ONLINE
> scrub: resilver completed after 0h1m with 0 errors on Sat Aug 2
> 01:44:55 2008
> config:
>
> NAME STATE READ WRITE CKSUM
Hi
Build 93 contains all the fixes in 138053-02, it would appear.
Just to avoid confusion, patch 138053-02 is only relevant to the Solaris
10 updates, and does not apply to the OpenSolaris variants.
To get all the fixes for OpenSolaris, upgrade to or install build 93.
If on solaris 10, then sugges
Mark J. Musante wrote:
>
> On 3 Sep 2008, at 05:20, "F. Wessels" <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> can anybody describe the correct procedure to replace a disk (in a
>> working OK state) with a another disk without degrading my pool?
>
> This command ought to do the trick:
>
> zfs rep
Steve Goldberg wrote:
> Hi Lori,
>
> is ZFS boot still planned for S10 update 6?
>
> Thanks,
>
> Steve
> --
> This message posted from opensolaris.org
> File and args:
> ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
>
> {1} ok boot disk1
> Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
> File and args:
> ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast
Hi Krzys
Also some info on the actual system
ie what was it upgraded to u6 from, and how,
and an idea of how the filesystems are laid out, ie is usr separate from
/ and so on ( maybe a df -k ). You don't appear to have any zones installed,
just to confirm.
Enda
On 11/05/08 14:07, Enda O'Co
Set a ulimit for coredumps after this, in order not to risk filling the
system with coredumps in the case of some utility coredumping in a loop, say.
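A sketch of capping cores with the standard tools (the /var/cores path is illustrative; see coreadm(1M) for the exact options on your release):

```shell
# Per-shell: stop commands started here from writing core files at all
ulimit -c 0

# System-wide on Solaris: inspect the current core-file policy...
coreadm
# ...redirect global cores to one place with a descriptive name pattern...
coreadm -g /var/cores/core.%f.%p -e global
# ...or disable global core dumps entirely
coreadm -d global
```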
Enda
On 11/05/08 13:46, Krzys wrote:
>
> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>
>> On 11/05/08 13:02, Krzys wrote:
>>> I
00 (0/0/0)0
> 6 unassignedwu 00 (0/0/0)0
> 7 unassignedwu 00 (0/0/0)0
>
> format>
>
>
> On Wed, 5 Nov 2008, Enda O'Connor wrote:
6gb as I am not using that much disk space
> on it. But let me try this and it might be why I am getting this problem...
>
>
>
> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>
>> Hi Krzys
>> Also some info on the actual system
>> ie what was it upgraded t
.vold.24950
> -rw--- 1 root root 4126301 Nov 5 19:22 core.vold.24978
> drwxr-xr-x 3 root root 81408 Nov 5 20:06 .
> -rw--- 1 root root 31351099 Nov 5 20:06 core.cpio.6208
>
>
>
> On Wed, 5 Nov 2008, Enda O'Connor wrote:
>
>>
alloc_unlocked+0x164
> %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
>%sp = 0xffbfe5b0
>%fp = 0xffbfe610
>
> %wim = 0x
> %tbr = 0x
>
>
>
>
>
>
>
> On Thu, 6 Nov 2008, Enda O'Connor wrote:
>
>> Hi
>>
[EMAIL PROTECTED] wrote:
> Suppose I have a single ZFS pool on a single disk;
> I want to upgrade the system to use two different, larger disks
> and I want to mirror.
>
> Can I do something like:
>
> - I start with disk #0
> - add mirror on disk #1
> (resilver)
> - repl
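The usual shape of that migration, sketched with placeholder names (pool `tank`, old small disk c1t0d0, new larger disks c1t1d0 and c1t2d0):

```shell
# 1. Attach the first new disk as a mirror of the existing one
zpool attach tank c1t0d0 c1t1d0
# 2. Wait for the resilver to complete
zpool status tank
# 3. Replace the original small disk with the second new disk
zpool replace tank c1t0d0 c1t2d0
# 4. After this resilver the pool is a mirror of the two large disks.
#    The extra capacity shows up once both sides are large (on newer
#    bits via the autoexpand property, or zpool online -e).
```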
Enda O'Connor wrote:
> [EMAIL PROTECTED] wrote:
>> Suppose I have a single ZFS pool on a single disk;
>> I want to upgrade the system to use two different, larger disks
>> and I want to mirror.
>>
>> Can I do something like:
>>
>> - I
Mike DeMarco wrote:
> My root drive is ufs. I have corrupted my zpool which is on a different drive
> than the root drive.
> My system paniced and now it core dumps when it boots up and hits zfs start.
> I have a alt root drive that can boot the system up with but how can I
> disable zfs from s
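One common way to keep a damaged pool from being touched at boot (a sketch, not from the thread, assuming the standard cache-file location): boot from the alternate root and move the cache file aside so no pools are auto-imported.

```shell
# From the alternate boot environment, with the normal root mounted at /a:
mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
# On the next boot of that root, no pools are imported automatically;
# once the corruption is dealt with, import the pool by hand:
#   zpool import <poolname>
```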
Vincent Fox wrote:
> Reviving this thread.
>
> We have a Solaris 10u4 system recently patched with 137137-09.
> Unfortunately the patch was applied from multi-user mode, I wonder if this
> may have been original posters problem as well? Anyhow we are now stuck
> with an unbootable system as well.
Vincent Fox wrote:
> Whether tis nobler.
>
> Just wondering if (excepting the existing zones thread) there are any
> compelling arguments to keep /var as it's own filesystem for your typical
> Solaris server. Web servers and the like.
>
> Or arguments against
with zfs it's easy to set quo
ckage
Enda
http://www.jmcp.homeunix.com/blog
Hi
for sparc
119534-15
124630-26
for x86
119535-15
124631-27
Higher revs of these will also suffice.
Note these need to be applied to the miniroot of the jumpstart image so
that it can then install a zfs flash archive.
Please read the README notes in these patches for more specific instructions,
inc
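Patching a miniroot is done with patchadd's -C option rather than against the running system (a sketch; the image path is a placeholder, and the patch READMEs remain the authoritative procedure):

```shell
# Apply a patch to the boot miniroot of a network install image
patchadd -C /export/install/s10u6/Solaris_10/Tools/Boot /var/tmp/119534-15
```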
RB wrote:
I have zfs on my base T5210 box installed with LDOMS (v.1.0.3). Every time I try to jumpstart my Guest machine, I get the following error.
ERROR: One or more disks are found, but one of the following problems exists:
- Hardware failure
- The disk(s) available on this
RB wrote:
Is it possible to create a flar image of a ZFS root filesystem to install it to
other machines?
yes but it needs solaris update 7 or later to install a zfs flar
see
http://www.opensolaris.org/os/community/zfs/boot/flash/
isn't supported on ope
Richard Elling wrote:
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48
or 96 GB of SLC, a built-in SAS controller and a super-capacitor for
cache prote
Richard Elling wrote:
On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:
Hello,
Given the following configuration:
* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as file system for Oracle data
* Oracle 10.2.0.4 with 1.7TB of data and indexes
* 1800 concurrents users with PeopleSof
Robert Milkowski wrote:
Hello Robert,
Monday, February 5, 2007, 2:26:57 PM, you wrote:
RM> Hello zfs-discuss,
RM> I've patched U2 system to 118855-36. Several zfs related bugs id
RM> should be covered between -19 and -36 like HotSpare support.
RM> However despite -36 is installed 'zpool
Robert Milkowski wrote:
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
Hello zfs-discuss,
I've patched U2 system to 118855-36. Several zfs related bugs id
should be covered between -19 and -36 like HotSpare support.
However despite -36 is installed 'zpool upgrade' still
Hi
118855-36 is marked interactive and is not installable by automation, or
at least should not be installed by smpatch.
If you look in the
patchpro.download.directory
from "smpatch get"
under the dir cache ( if I remember correctly )
you will see a current.zip ( possibly with a time stamp a
Brian Hechinger wrote:
On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
On March 11, 2007 6:05:13 PM + Tim Foster <[EMAIL PROTECTED]> wrote:
* ability to add disks to mirror the root filesystem at any time,
should they become available
Can't this be done with
Christine Tran wrote:
> Hi,
>
> I understands the upgrade issue surrounding the patching and upgrade
> tools. Can I get around this with some trickery using quota and
> reservation? I would quota and reserve for a pool/somezonepath some
> capacity, say 10GB, and in this way allocate a fixed cap
David Jackson wrote:
>> I'm looking for an authoritative list of the patches that should be
>> applied for ZFS for the commercial version of Solaris. A
>> centralized URL that is maintained would be ideal. Can someone
>> reply back to me with one as I'm not a subscriber to the news list.
>>
Bob Friesenhahn wrote:
> The Sun Update Manager on my x86 Solaris 10 box describes this new
> patch as "SunOS 5.10_x86 nfs fs patch" (note use of "nfs") but looking
> at the problem descriptions this is quite clearly a big ZFS patch that
> Solaris 10 users should pay attention to since it fixes
Paul Raines wrote:
> On Sun, 9 Mar 2008, Marc Bevand wrote:
>
>
>> Paul Raines nmr.mgh.harvard.edu> writes:
>>
>>> Mar 9 03:22:16 raidsrv03 sata: NOTICE:
>>> /pci 0,0/pci1022,7458 1/pci11ab,11ab 1:
>>> Mar 9 03:22:16 raidsrv03 port 6: device reset
>>> [...]
>>>
>>> The above repeated
Michael Schuster wrote:
> Sachin Palav wrote:
>
>> Friends,
>> I have recently built a file server on x2200 with solaris x86 having zfs
>> (version4) and running NFS version2 & samba.
>>
>> the AIX 5.2 & AIX 5.2 clients give an error while running the command "cp -R",
>> as below:
>> cp: 0653-440 dire
Jim Litchfield at Sun wrote:
> I think you'll find that any attempt to make zones (certainly whole root
> ones) will fail after this.
>
right, zoneadm install actually copies the global zone's undo.z into
the local zone, so that patchrm of an existing patch will work.
I haven't tried out what
Hi Tommaso
Have a look at the man page for zfs and the "attach" section in
particular, it will do the job nicely.
Enda
Tommaso Boccali wrote:
> Ciao,
> the root filesystem of my thumper is a ZFS with a single disk:
>
> bash-3.2# zpool status rpool
> pool: rpool
> state: ONLINE
> scrub: no
Hi
S10_u5 has version 4; the latest in opensolaris is version 10.
See
http://opensolaris.org/os/community/zfs/version/10/
In general the URL is
http://opensolaris.org/os/community/zfs/version/n/ where n is the version,
so substitute 4 for n to see the version 4 changes, and so on up to 10.
Run zpool upgrade ( doesn't actually run an upgr
Hi
Say I have an ldoms guest that is using zfs root pool that is mirrored,
and the two sides of the mirror are coming from two separate vds
servers, that is
mirror-0
c3d0s0
c4d0s0
where c3d0s0 is served by one vds server, and c4d0s0 is served by
another vds server.
Now if for some reaso
Mike Gerdts wrote:
> On Wed, Jul 23, 2008 at 11:36 AM, <[EMAIL PROTECTED]> wrote:
>> Rainer,
>>
>> Sorry for your trouble.
>>
>> I'm updating the installboot example in the ZFS Admin Guide with the
>> -F zfs syntax now. We'll fix the installboot man page as well.
>
> Perhaps it also deserves a me
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
> Mike Gerdts wrote:
>> On Wed, Jul 23, 2008 at 11:36 AM, <[EMAIL PROTECTED]> wrote:
>>> Rainer,
>>>
>>> Sorry for your trouble.
>>>
>>> I'm updating the installboot exampl
[EMAIL PROTECTED] wrote:
> Alan,
>
> Just make sure you use dumpadm to point to valid dump device and
> this setup should work fine. Please let us know if it doesn't.
>
> The ZFS strategy behind automatically creating separate swap and
> dump devices including the following:
>
> o Eliminates the
Dave wrote:
>
>
> Enda O'Connor wrote:
>>
>> As for thumpers, once 138053-02 ( marvell88sx driver patch ) releases
>> within the next two weeks ( assuming no issues found ), then the
>> thumper platform running s10 updates will be up to date in terms
dick hoogendijk wrote:
> My server runs S10u5. All slices are UFS. I run a couple of sparse
> zones on a seperate slice mounted on /zones.
>
> When S10u6 comes out booting of ZFS will become possible. That is great
> news. However, will it be possible to have those zones I run now too?
you can mig
Hi
Clive King has a nice blog entry showing this in action
http://blogs.sun.com/clive/entry/replication_using_zfs
with associated script at:
http://blogs.sun.com/clive/resource/zfs_repl.ksh
Which I think answers most of your questions.
Enda
Ross wrote:
> Hey folks,
>
> Is anybody able to help a
Robert Milkowski wrote:
Hello zfs-discuss,
In order to get IDR126199-01 I need to install 120473-05 first.
I can get 120473-07 but everything more than -05 is marked as
incompatible with IDR126199-01 so I do not want to force it.
Local Sun's support has problems with getting 120473-05 a
Robert Milkowski wrote:
Hello Enda,
Wednesday, April 11, 2007, 4:21:35 PM, you wrote:
EOCSMSI> Robert Milkowski wrote:
Hello zfs-discuss,
In order to get IDR126199-01 I need to install 120473-05 first.
I can get 120473-07 but everything more than -05 is marked as
incompatible with IDR12
Paul Armor wrote:
> Hi,
> I was wondering if anyone would know if this is just an accounting-type
> error with the recorded "version=" stored on disk, or if there
> are/could-be any deeper issues with an "upgraded" zpool?
>
> I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correc
Dick Davies wrote:
> I had some trouble installing a zone on ZFS with S10u4
> (bug in the postgres packages) that went away when I used a
> ZVOL-backed UFS filesystem
> for the zonepath.
>
Hi
Out of interest what was the bug.
Enda
> I thought I'd push on with the experiment (in the hope Live Upg
David Runyon wrote:
> Does anyone know this?
>
> David Runyon
> Disk Sales Specialist
>
> Sun Microsystems, Inc.
> 4040 Palm Drive
> Santa Clara, CA 95054 US
> Mobile 925 323-1211
> Email [EMAIL PROTECTED]
>
>
>
>
> Russ Lai wrote:
>> Dave;
>> Does ZFS support Oracle RAC?
> __
Richard Elling wrote:
> Morris Hooten wrote:
>> I looked through the solarsinternals zfs best practices and not
>> completly sure
>> of the best scenario.
>>
>
> ok, perhaps we should add some clarifications...
>
>> I have a Solaris 10 6/06 Generic_125100-10 box with attached 3510 array
>> and
Mangan wrote:
> Is this a release that can be downloaded from the website and will work on
> SPARC systems. The write up says it is for VMware. Am I missing something?
>
>
>> Use Solaris 10 9/07. It has more than a year's worth of improvements
>> and enhancements to Solaris.
>> -- richard
>
>
My mistake, I was getting confused by release numbers; 9/07 was what
Richard meant.
Enda
> When is the next release for Sparc due out?
>
> Paul
>
>
> -Original Message-
>> From: "Enda O'Connor ( Sun Micro Systems Ireland)" <[EMAIL PROTECTED]>
>&g
Sam Nicholson wrote:
Greetings,
snv_79a
AMD 64x2 in 64 bit kernel mode.
I'm in the middle of migrating a large zfs set from a pair of 1TB mirrors
to a 1.3TB RAIDz.
I decided to use zfs send | zfs receive, so the first order of business
was to snap the entire source filesystem.
# zfs sna
Kenny wrote:
> Back to the top
>
> Is there a patch upgrade for ZFS on Solaris 10? Where might I find it.
it's the kernel patch; depending on how far back you are in the updates you
might have to install multiple Kernel Patches.
The latest one is 127127-11/127128-11 ( the u5 KU )
it dep
Hi
Use zpool attach
from
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
zpool attach [-f] pool device new_device
Attaches new_device to an existing zpool device. The existing device
cannot be part
of a raidz configuration. If device is not currently part of a mirrored
configuration,
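So for the single-disk root pool above, something like the following would convert it to a mirror (device names are placeholders; for a root pool the boot blocks must also be installed on the new disk with installboot or installgrub):

```shell
# Attach a second disk to the existing root-pool device
zpool attach rpool c1t0d0s0 c1t1d0s0
# Watch the resilver complete before relying on the new side
zpool status rpool
```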
Andre wrote:
> Hi there,
>
> I'm currently setting up a new system to my lab. 4 SATA drives would be
> turned into the main file system (ZFS?) running on a soft raid (raid-z?).
>
> My main target is reliability, my experience with Linux SoftRaid was
> catastrophic and the array could no be rest
Hi
It is my understanding that zfs will be available to pre-S10 update 2
customers via patches, ie customers on FCS could install the necessary
zfs patches and thereby start using zfs.
But there seems to be confusion as to whether this is supported
or not.
Some people say only zfs on
Hi
Looks like the same stack as 6413847, although that is pointed more towards
hardware failure.
The stack below is from 5.11 snv_38, but this also seems to affect update 2 as
per the above bug.
Enda
Thomas Maier-Komor wrote:
Hi,
my colleage is just testing ZFS and created a zpool which had a backing s
Hi
I was trying to overlay a pool onto an existing mount
# cat /etc/release
Solaris 10 6/06 s10s_u2wos_09a SPARC
# df -k /export
Filesystemkbytesused avail capacity Mounted on
/dev/dsk/c1t0d0s320174761 3329
Slight typo
I had to run
# zfs umount tank
cannot unmount 'tank': not currently mounted
# zfs umount /export/home1
# zfs umount /export/home
#
in order to get zpool destroy to run
Enda
Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
Hi
I was trying to
;/export/home' was unavailable because
of the 'zfs mount -O'.
- Eric
On Tue, Jul 04, 2006 at 04:10:34PM +0100, Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
Hi
I was trying to overlay a pool onto an existing mount
# cat /etc/release
Hi
I guess the problem is that David is using smpatch (our automated patching
system )
So in theory he is up to date on his patches
( he has since removed
122660-02
122658-02
122640-05
)
So when I install the following onto a system ( SPARC S10 FCS ) with two
zones already running:
1
Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
Hi
I guess the problem is that David is using smpatch (our automated patching
system )
So in theory he is up to date on his patches
( he has since removed
122660-02
122658-02
1226
Hi
I logged CR 6457216 to track this for now.
Enda
Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
Hi
I guess the problem is that David is using sm